Backpropagation in Neural Networks

Backpropagation is short for "backward propagation of errors". It is an algorithm for computing the derivatives (gradients) of a network's error with respect to its weights, and it forms the core of supervised training methods for feedforward neural networks such as stochastic gradient descent (SGD). In effect, backpropagation lets a neural network learn from its mistakes: the error at the output is propagated backwards through the network, and each neuron's weights are adjusted according to how much they contributed to that error.
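To make the derivative calculation concrete, here is a minimal sketch (plain Python, using a made-up single-neuron example with a sigmoid activation and squared error) that computes the gradient of the error with respect to one weight via the chain rule, and checks it against a numerical estimate:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def loss(w, x, y):
    # Squared error of a single sigmoid neuron: L = 0.5 * (a - y)^2
    return 0.5 * (sigmoid(w * x) - y) ** 2

def grad(w, x, y):
    # Chain rule: dL/dw = dL/da * da/dz * dz/dw
    #           = (a - y) * a * (1 - a) * x
    a = sigmoid(w * x)
    return (a - y) * a * (1 - a) * x

# Arbitrary example values; compare the analytic gradient
# to a central-difference numerical estimate.
w, x, y = 0.5, 1.5, 1.0
eps = 1e-6
numeric = (loss(w + eps, x, y) - loss(w - eps, x, y)) / (2 * eps)
print(abs(grad(w, x, y) - numeric) < 1e-8)
```

The numerical check is a standard sanity test for hand-derived gradients; the same chain-rule pattern is what backpropagation applies layer by layer.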

Working Overview

1. Input data is fed forward through the network to produce an output.
2. This output is compared with the desired output to compute the error.
3. The error is propagated backwards through the network to determine how much each neuron contributed to it.
4. Each neuron's weights are adjusted in proportion to its contribution to the error.
5. These steps are repeated until an acceptable level of accuracy is attained.
