
5 Amazing NLP Practical Resources – Part 2

Are you interested in practical resources for NLP (natural language processing)? There are so many online NLP resources, especially those that rely on deep learning approaches, that sifting through them for quality can be quite a task. There are some well-known, top-notch, mainly theoretical deep learning course resources, notably the Stanford and Oxford NLP courses:

  • Natural Language Processing with Deep Learning (Stanford)
  • Deep Learning for Natural Language Processing (Oxford)

But what if you have already completed these, have gained a foundation in NLP, and want to move on to practical resources? Or are you simply interested in other approaches that do not necessarily depend on neural networks? Hopefully, this post will be helpful.

  1. A Primer on Neural Network Models for Natural Language Processing

Yoav Goldberg’s comprehensive overview. As a bonus, you might also want to read this short series of Yoav’s recent blog posts discussing the same topic from another angle:

  • An Adversarial Review of “Adversarial Generation of Natural Language”
  • Clarifications re “Adversarial Review of Adversarial Learning of Nat Lang” post
  • A Response to Yann LeCun’s Response

  2. Natural Language Processing with Deep Learning (Stanford)

Chris Manning and Richard Socher taught one of the most popular and well-respected NLP courses available online. The course provides an in-depth introduction to cutting-edge deep learning research applied to NLP. On the model side, it covers word vector representations, window-based neural networks, recurrent neural networks, long short-term memory models, recursive neural networks, convolutional neural networks, and some recent models with memory components.
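To give a taste of the first topic on that list, word vector representations encode words as dense numeric vectors so that related words end up geometrically close. The sketch below uses hypothetical hand-made 4-dimensional embeddings purely for illustration; real models such as word2vec or GloVe learn vectors with hundreds of dimensions from large corpora.

```python
import numpy as np

# Hypothetical toy embeddings (real ones are learned, not hand-written).
embeddings = {
    "king":  np.array([0.8, 0.6, 0.1, 0.2]),
    "queen": np.array([0.7, 0.7, 0.1, 0.3]),
    "apple": np.array([0.1, 0.0, 0.9, 0.8]),
}

def cosine_similarity(u, v):
    """Cosine of the angle between two vectors: close to 1.0 means similar direction."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

sim_royal = cosine_similarity(embeddings["king"], embeddings["queen"])
sim_fruit = cosine_similarity(embeddings["king"], embeddings["apple"])
print(f"king~queen: {sim_royal:.3f}, king~apple: {sim_fruit:.3f}")
# Related words (king, queen) score noticeably higher than unrelated ones (king, apple).
```

Cosine similarity is the standard way to compare such vectors because it ignores vector length and measures only direction.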

  3. Deep Learning for Natural Language Processing (Oxford)

Another great courseware resource from a top-notch university, this one taught at Oxford by Phil Blunsom. It is an advanced course on natural language processing. Automatically processing natural language inputs and producing language outputs is a key component of artificial general intelligence. The ambiguities and noise inherent in human communication make traditional symbolic AI techniques ineffective for representing and analysing language data. Recently, statistical techniques based on neural networks have achieved a number of remarkable successes in natural language processing, generating much commercial and academic interest in the field.
