# The Most Intuitive and Easiest Guide for Artificial Neural Network

It’s getting the gradients for each step.

And with those slopes, we update the weights as we discussed above.

What we’ve gone through so far covers stages 1 to 4.

We input X values and get the predicted value ŷ by forward-propagation.

There is also a bias term b, but we simply set it to 0 here.
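That forward pass can be sketched in a few lines of NumPy. The layer sizes, random toy data, and sigmoid activation below are illustrative assumptions, and the bias is fixed at 0 as in the text:

```python
import numpy as np

def sigmoid(z):
    # Squashes any input into (0, 1).
    return 1.0 / (1.0 + np.exp(-z))

# Toy network: 3 inputs -> 4 hidden units -> 1 output, bias b = 0.
rng = np.random.default_rng(0)
X = rng.random((5, 3))        # 5 samples, 3 features each
W1 = rng.random((3, 4))       # input -> hidden weights
W2 = rng.random((4, 1))       # hidden -> output weights

hidden = sigmoid(X @ W1)      # hidden-layer activations
y_hat = sigmoid(hidden @ W2)  # predicted values ŷ, one per sample

print(y_hat.shape)            # (5, 1)
```

Each layer is just a matrix multiplication followed by the activation function.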

If you are familiar with the chain rule, you’ll grasp the concept of backpropagation easily.

If not, don’t worry.

It’s like each equation biting the tail of the one before it.

I want you to look closely at the outcome of dW₂.

It is the product of the error, the slope of the activation function at the given point, and the input value.
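In code, that product might look like the sketch below. It assumes a sigmoid activation and a squared-error loss (common illustrative choices; the exact expression depends on the loss and activation you pick):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sigmoid_slope(a):
    # Derivative of the sigmoid, written in terms of its output a = sigmoid(z).
    return a * (1.0 - a)

rng = np.random.default_rng(0)
X = rng.random((5, 3))    # toy inputs
y = rng.random((5, 1))    # toy targets
W1 = rng.random((3, 4))
W2 = rng.random((4, 1))

# Forward pass
hidden = sigmoid(X @ W1)
y_hat = sigmoid(hidden @ W2)

# Backward pass: error x activation slope x input, layer by layer
error = y_hat - y                                 # output-layer error
delta2 = error * sigmoid_slope(y_hat)             # chain rule at the output
dW2 = hidden.T @ delta2                           # gradient for W2
delta1 = (delta2 @ W2.T) * sigmoid_slope(hidden)  # error pushed back a layer
dW1 = X.T @ delta1                                # gradient for W1

print(dW2.shape, dW1.shape)                       # (4, 1) (3, 4)
```

Notice how delta1 reuses delta2: each layer’s gradient is built from the one after it, which is the “biting the tail” idea in action.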

Just like forward propagation passes the input data through the hidden layer and toward the output layer, backpropagation takes the error from the output layer and propagates it backward through hidden layers.

And in each step, the weight value will be updated.

By repeating the whole process many times, the model finds the optimized weights and makes predictions with the best accuracy.

And when the metrics converge to a certain level, our model is ready to make a prediction.
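Putting it all together, a training loop repeats the forward pass, backpropagation, and weight update until the loss settles. The toy data, layer sizes, learning rate, and squared-error loss below are assumptions for illustration:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def mse_loss(W1, W2, X, y):
    # Mean squared error of the network's predictions.
    return float(np.mean((sigmoid(sigmoid(X @ W1) @ W2) - y) ** 2))

rng = np.random.default_rng(0)
X = rng.random((5, 3))   # toy inputs
y = rng.random((5, 1))   # toy targets
W1 = rng.random((3, 4))
W2 = rng.random((4, 1))
lr = 0.5                 # learning rate (an illustrative choice)

initial_loss = mse_loss(W1, W2, X, y)

for step in range(2000):
    # 1. Forward propagation
    hidden = sigmoid(X @ W1)
    y_hat = sigmoid(hidden @ W2)

    # 2. Backpropagation: error x activation slope, layer by layer
    delta2 = (y_hat - y) * y_hat * (1 - y_hat)
    delta1 = (delta2 @ W2.T) * hidden * (1 - hidden)

    # 3. Weight update: step against the gradient
    W2 -= lr * (hidden.T @ delta2)
    W1 -= lr * (X.T @ delta1)

final_loss = mse_loss(W1, W2, X, y)
print(final_loss < initial_loss)   # the loss shrinks as the weights converge
```

Watching the loss fall over iterations is exactly the convergence the text describes.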

## Summary

Let’s recap what we’ve discussed so far.

We talked about 5 steps in ANN.

While passing data through hidden layers and updating the weights of each layer, what neural networks are really doing is building an internal representation of the patterns in the data.

More advanced architectures are modified versions of this basic structure, built by adding special techniques.

Therefore, having a solid understanding of the basic structure of a neural network should be the first and foremost step for beginners.

This post was focused on getting the fundamental intuition of neural networks with a minimum amount of maths.

This is a somewhat simplified case, but one that is truly effective for starters.

If you are ready to take on the advanced level, I’d recommend reading other resources as well.

These are my picks.

- A great post by Shirin Glander, which covers how to build an ANN in R with the additional concepts of regularization and optimization: ‘How do neural nets learn?’ A step by step explanation using the H2O Deep Learning algorithm
- Maybe the most popular article on this topic, by James Loy on Towards Data Science. It’s been a while, but it’s still great for understanding deep learning from scratch: https://towardsdatascience.com/how-to-build-your-own-neural-network-from-scratch-in-python-68998a08e4f6
- An excellent course on DataCamp for getting the intuition of neural networks, even for someone who does not know linear algebra or derivatives: https://www.datacamp.com/courses/deep-learning-in-python

Thank you for reading, and I hope you found this post interesting.

I’m always open to talking, so feel free to leave comments below and share your thoughts.

I also post other valuable DS resources weekly on LinkedIn, so please follow me and reach out.

I’ll come back with another exciting story of deep learning.

Until then, happy machine learning!