Use of Calculus in Neural Networks

Calculus helps us understand the internal workings of different ML algorithms. One application of calculus in ML is the gradient descent algorithm, which, together with backpropagation, is used to train a neural network.
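To see gradient descent in isolation before adding backpropagation, here is a minimal sketch on a toy one-parameter loss. The loss function, starting point, and learning rate below are illustrative choices, not anything specific to neural networks:

```python
# Gradient descent on the toy loss L(w) = (w - 3)**2,
# whose derivative is dL/dw = 2 * (w - 3).

def grad(w):
    return 2 * (w - 3)

w = 0.0    # arbitrary starting point
lr = 0.1   # learning rate (step size)

for _ in range(100):
    w -= lr * grad(w)   # step in the direction opposite the gradient

print(w)  # approaches the minimum at w = 3
```

Each step moves `w` against the slope of the loss, so the error shrinks by a constant factor per iteration on this quadratic.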

Backpropagation takes the error (loss) produced by forward propagation and feeds it backward through the layers of the network, with the aim of fine-tuning the weights. This loop of forward pass, error measurement, and backward adjustment is the essence of neural network training.
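The whole loop can be sketched on the smallest possible network: a single sigmoid neuron trained on one example with squared error. All of the values here (input, target, initial weights, learning rate) are illustrative assumptions:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

x, y = 1.0, 0.0        # one training example: input and target
w, b = 0.5, 0.1        # randomly chosen starting weight and bias
lr = 0.5               # learning rate

for _ in range(200):
    # forward propagation
    a = sigmoid(w * x + b)
    # error of the forward pass
    loss = (a - y) ** 2
    # backward propagation: gradient of the loss w.r.t. the pre-activation
    dL_dz = 2 * (a - y) * a * (1 - a)
    # fine-tune the weight and bias against the gradient
    w -= lr * dL_dz * x
    b -= lr * dL_dz
```

After enough iterations the output is pushed toward the target and the loss becomes small; real training does exactly this, just over many weights and many examples at once.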

The method calculates the gradient of the error function with respect to each weight of the network. Starting from randomly initialized weights and biases, it adjusts them in the direction that reduces the error, until the network produces the desired output.

The gradient of the loss function with respect to a single weight is computed using the chain rule, working backward through the network one layer at a time.
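The layer-at-a-time chain rule can be made concrete with a two-layer toy network: one sigmoid hidden neuron feeding one sigmoid output neuron, with one weight per layer and biases omitted. The specific numbers and names below are illustrative assumptions:

```python
import math

# Tiny network: h = sigmoid(w1 * x), o = sigmoid(w2 * h),
# loss L = 0.5 * (o - y)**2.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

x, y = 0.5, 1.0            # one training example
w1, w2 = 0.4, -0.6         # one weight per layer (biases omitted)

# forward propagation
h = sigmoid(w1 * x)
o = sigmoid(w2 * h)

# backward propagation: output layer first, then the hidden layer
delta_o = (o - y) * o * (1 - o)       # dL/dz at the output neuron
dL_dw2 = delta_o * h                  # chain rule: dL/dw2
delta_h = delta_o * w2 * h * (1 - h)  # error fed back to the hidden layer
dL_dw1 = delta_h * x                  # chain rule: dL/dw1
```

Each `delta` is the gradient of the loss with respect to one layer's pre-activation; multiplying it by that layer's input gives the weight gradient, and multiplying it by the weight and the previous layer's activation derivative passes the error one layer further back.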
