In my last Innovation Papers post, I approached the subject of Deep Learning and introduced the concept of Neural Networks, which I had begun to understand through Andrew Ng’s Coursera course on the topic. We got as far as introducing these computing constructs called neural networks (so named because their “shape” resembles that of neuron cells), which are capable of identifying patterns in the information we feed them, allowing them to distinguish between different sets of data (this is, or is not, a picture of a cat). That is great, and certainly most useful. However, my objective is not to discuss the possible applications of Deep Learning using neural networks, but rather to explain, in simple terms, how they are able to do what they do, as well as how surprised I was by their simplicity.