# Neural Networks – an Intuition

Our perceptron says it is a cricket ball.

This is the most rudimentary idea behind a neural network.

We connect a lot of these perceptrons in a particular manner, and what we get is a neural network.

I’ve oversimplified the idea a little bit but it still captures the essence of a perceptron.

We take this idea of a perceptron and stack these neurons together to create layers, and what we get is a Multi-Layer Perceptron (MLP), or a neural network.

In our simplified perceptron model, we used just a step function for the output.

In practice, a lot of different transformations, or activation functions, are used.
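As an illustrative sketch (the function names here are mine, not a standard library's), the step function and two popular alternatives look like this in Python:

```python
import math

def step(z):
    # Step activation: output 1 when the weighted sum crosses the threshold
    return 1.0 if z >= 0 else 0.0

def sigmoid(z):
    # A smooth alternative to the step function; output lies in (0, 1)
    return 1.0 / (1.0 + math.exp(-z))

def relu(z):
    # Rectified linear unit, a common default in modern networks
    return max(0.0, z)
```

The smooth functions matter in practice because training relies on gradients, and a step function's gradient is zero almost everywhere.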

We chose our features, red color and spherical shape, manually and rather arbitrarily, but it is not always practical to hand-pick such features for more complex tasks.

The MLP addresses this problem to some extent.

The inputs to an MLP can simply be the raw pixel values of an image; the hidden layer then combines and transforms these pixel values to create features.

Essentially a single layer of perceptrons transforms the data and sends it to the following layer.
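This layer-by-layer transformation can be sketched in a few lines (the weights and biases below are arbitrary numbers chosen purely for illustration):

```python
import math

def layer_forward(inputs, weights, biases):
    # Each neuron computes a weighted sum of the inputs, adds its bias,
    # and passes the result through a sigmoid activation
    outputs = []
    for w_row, b in zip(weights, biases):
        z = sum(w * x for w, x in zip(w_row, inputs)) + b
        outputs.append(1.0 / (1.0 + math.exp(-z)))
    return outputs

# Toy example: 3 inputs feeding 2 hidden neurons (weights are made up)
hidden = layer_forward([0.5, -1.0, 2.0],
                       weights=[[0.1, 0.4, -0.2], [0.7, -0.3, 0.5]],
                       biases=[0.0, 0.1])
```

The output of one such call becomes the input to the next layer, which is all "stacking layers" means computationally.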

*A 28 pixel x 28 pixel image of a handwritten digit*

Suppose our MLP is trying to identify a digit by looking at images of handwritten digits similar to the one above.

With an MLP we can pass the raw pixel values of this image directly to the input layer, without extracting any features first.
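As a minimal sketch (the image below is a made-up placeholder, not real digit data), feeding an image to the input layer amounts to flattening its pixel grid into one long list of numbers:

```python
# A toy stand-in for a 28 x 28 grayscale image (values are invented):
image = [[0.0] * 28 for _ in range(28)]
image[10][5] = 0.9  # one bright pixel, just for illustration

# Flatten the rows into a single list of 784 pixel intensities,
# which is what the MLP's input layer receives
pixels = [p for row in image for p in row]
```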

The hidden layer then combines and transforms these pixels to create some features.

Each neuron in the hidden layer tries to learn some features as a part of the training process.

For instance, neuron number 1 in the hidden layer might learn to respond to horizontal edges in the image, neuron 2 might respond to vertical edges, and so on.

Now if the input image has both a horizontal edge and a vertical edge, neurons 1 and 2 in the hidden layer will respond, and the output neuron can combine these two features and might say it is a 7, since a 7 roughly consists of a horizontal edge and a vertical edge.
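A toy version of that output neuron might look like this (the weights and threshold are invented for illustration, not learned from data):

```python
def detects_seven(horizontal_edge, vertical_edge):
    # Hypothetical output neuron: it fires only when both the
    # horizontal-edge and vertical-edge hidden neurons fire
    z = 1.0 * horizontal_edge + 1.0 * vertical_edge - 1.5
    return 1 if z >= 0 else 0
```

In a real network these weights would be learned during training rather than set by hand.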

Of course, MLPs don't always learn features that necessarily make sense to us, but the hidden layer is still, in essence, a transformation of the input data.

For example, an MLP which takes the handwritten digit image as input and has 16 neurons in its hidden layer learns features something like the image below.

*Each square represents the features learned by a neuron in the hidden layer*

As you can see, the patterns are more or less random and don't make much sense to the eye.

This inference process and the transformations of an MLP are best explained in 3Blue1Brown's awesome YouTube series, What is a Neural Network.

I would highly encourage you to go to this channel and watch the videos to get an intuition about neural networks.

A nice tool for visualizing some of the things a neural network does is TensorFlow Playground, and if you have some background in linear algebra, the visualizations in this blog are also worth a look.

So what tasks can a neural network perform, apart from identifying a cricket ball or a handwritten digit? Today, neural networks are deployed for a wide range of tasks: image recognition, voice recognition, natural language processing, time-series analysis, and many other applications.

Image classification is one particular field where neural networks have become the de facto algorithm.

Neural networks have even surpassed human accuracy on some image-classification benchmarks.

Of course, there is a lot of sophisticated technique and math behind building such high-fidelity neural networks.

Source: xkcd.com

Neural networks are everywhere.

Companies deploy them to recommend which video you might like to watch on YouTube, and to identify your voice and commands when you speak to Siri, Google Now, or Alexa.

Neural networks are now generating their own paintings and music pieces.

Still, there are a lot of challenges.

Neural networks can be huge, with hundreds of layers and hundreds of neurons in each layer, which makes computation a big challenge on current hardware.

Training neural networks is another huge challenge.

Several GPUs are often employed together to train these networks, and training can still take hours to days.

But they are the present and the future in our quest for building machines with intelligence and capabilities as sophisticated as the human brain.

We are still far away from making an artificial brain which can someday “download” all of your consciousness creating a machine copy of yourself.

But for now that is just science fiction!

Source: xkcd.com

Neural networks are a vast subject, and the idea behind this article is to elicit enough curiosity to give you a small impetus to go and explore the topic.

So go and click those YouTube and TensorFlow links in the article and have fun learning!

Look out for part 2 of this series on How to train your D̶r̶a̶g̶o̶n NN.

Bio: Prateek Karkare is an electrical engineer in training with a keen interest in physics, computer science, and biology.

Original. Reposted with permission.
