Yesterday I released an open-source course to teach you how to build your own Neural Networks for Natural Language Processing (NLP). My focus is code rather than theory: just Jupyter notebooks explaining the basic concepts in plain English and Python, using Keras as an approachable library to get your feet wet in the Neural Network game!
My target audience is anyone with a basic command of Python. The course is structured to build an intuition for Neural Networks, show you how to design your own architectures, and teach you how to use them for text classification. It is not comprehensive: I skip most of the theory and focus instead on explaining the ideas underlying the different architectures, and on how to apply them! In the course, we cover how a single Perceptron works, how Neural Networks get deep, and how you can leverage Convolutional and Recurrent architectures. Neural Networks are so much fun to work with that this course can only offer a taste of them: we will not (yet) look into topics like Attention, Generative Adversarial Networks, or Graph Convolutions. Those will be on your checklist when we are done!
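As a taste of what this looks like in practice, here is a minimal sketch of a Keras text classifier of the kind the notebooks build toward. This is an illustration rather than code lifted from the course: the IMDB dataset, layer sizes, and training settings are all placeholder choices.

```python
import tensorflow as tf
from tensorflow import keras

vocab_size = 10000  # keep only the 10k most frequent words (illustrative)
max_len = 200       # pad/truncate every review to 200 tokens (illustrative)

# IMDB movie reviews, already encoded as integer word indices.
(x_train, y_train), (x_test, y_test) = keras.datasets.imdb.load_data(
    num_words=vocab_size
)
x_train = keras.preprocessing.sequence.pad_sequences(x_train, maxlen=max_len)
x_test = keras.preprocessing.sequence.pad_sequences(x_test, maxlen=max_len)

# A small feed-forward network: embed each word, average the embeddings
# across the review, then classify positive vs. negative sentiment.
model = keras.Sequential([
    keras.layers.Embedding(vocab_size, 16),
    keras.layers.GlobalAveragePooling1D(),
    keras.layers.Dense(16, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),
])

model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=5, batch_size=32, validation_split=0.2)
print(model.evaluate(x_test, y_test))
```

Swapping that pooling layer for Convolutional or Recurrent layers is exactly the kind of step the later notebooks explore.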
You can follow the course and contribute to it on GitHub. It is free and will remain free, forever. I welcome contributions, suggestions, and any comments you may have. Access the repository here: