Working of Deep Learning Networks — 2

In deep learning, the number of hidden layers can be very large, sometimes as many as 1,000.

Deep nets process data through sophisticated mathematical modelling: a model takes a set of inputs and produces an output.

Using a deep net is very simple. Building one can be as easy as copying and pasting a line of code for each layer.
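For instance, a minimal sketch in PyTorch (the layer sizes here are illustrative assumptions, not from the original text) shows that each additional hidden layer is literally one more line in the model definition:

```python
import torch.nn as nn

# Each hidden layer is one extra line in the model definition.
model = nn.Sequential(
    nn.Linear(784, 256),  # input -> first hidden layer
    nn.ReLU(),
    nn.Linear(256, 128),  # one more line adds a second hidden layer
    nn.ReLU(),
    nn.Linear(128, 10),   # output layer (e.g. 10 classes)
)
```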

Weight optimization is the process of fine-tuning the values of the individual parameters (the weights) that influence a neural network's output.

An optimizer is an algorithm that adjusts the neural network's attributes, such as the learning rate and the weights. It helps improve accuracy and reduce the total loss.

During training, the optimizer modifies the weights so as to reduce the loss function. The weights are updated using the backpropagation-of-error algorithm.
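A hedged sketch of such a training loop in PyTorch, with made-up toy data, showing the optimizer reducing the loss by updating the weights through backpropagation:

```python
import torch
import torch.nn as nn

# Made-up toy data: 32 samples with 4 features and a regression target.
x = torch.randn(32, 4)
y = torch.randn(32, 1)

model = nn.Linear(4, 1)
loss_fn = nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

for step in range(100):
    optimizer.zero_grad()         # clear gradients from the previous step
    loss = loss_fn(model(x), y)   # compare predicted output with the target
    loss.backward()               # backpropagate the error
    optimizer.step()              # adjust the weights to reduce the loss
```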

Compare the expected output with the predicted output to get the error, then propagate this error backward through the network to update the weights and biases.

Backpropagation enables us to calculate the gradient of the loss function with respect to each of the weights of the network. Every weight is thus updated individually, gradually reducing the loss function over many training iterations.
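As an illustration of what backpropagation computes, here is a hedged from-scratch NumPy sketch of a tiny two-layer network; all sizes, data, and the learning rate are arbitrary assumptions:

```python
import numpy as np

# Tiny two-layer network trained with hand-written backpropagation.
rng = np.random.default_rng(0)
x = rng.normal(size=(8, 3))          # 8 samples, 3 features
y = rng.normal(size=(8, 1))          # regression targets

W1 = 0.1 * rng.normal(size=(3, 4))   # first-layer weights
W2 = 0.1 * rng.normal(size=(4, 1))   # second-layer weights
lr = 0.05

for step in range(500):
    # Forward pass
    h = np.tanh(x @ W1)              # hidden activations
    pred = h @ W2
    loss = np.mean((pred - y) ** 2)

    # Backward pass: gradient of the loss with respect to every weight
    d_pred = 2 * (pred - y) / y.size
    dW2 = h.T @ d_pred
    d_h = (d_pred @ W2.T) * (1 - h ** 2)   # tanh derivative
    dW1 = x.T @ d_h

    # Each weight moves a small step against its own gradient
    W2 -= lr * dW2
    W1 -= lr * dW1
```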

Gradient descent is an optimization algorithm. Its aim is to adjust the parameters so that the function being optimized (typically the loss) reaches a minimum.

In linear regression, gradient descent finds the weights and biases; in deep learning, backpropagation uses the same method.
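A minimal sketch of gradient descent fitting a linear regression in NumPy, assuming made-up data generated from y = 2x + 1 with a little noise:

```python
import numpy as np

# Made-up data from y = 2x + 1 plus noise.
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=100)
y = 2 * x + 1 + 0.1 * rng.normal(size=100)

w, b = 0.0, 0.0
lr = 0.1
for epoch in range(200):
    pred = w * x + b
    err = pred - y
    grad_w = 2 * np.mean(err * x)   # d(MSE)/dw
    grad_b = 2 * np.mean(err)       # d(MSE)/db
    w -= lr * grad_w
    b -= lr * grad_b

print(w, b)  # should end up close to 2 and 1
```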

The three types of gradient descent are batch, stochastic, and mini-batch gradient descent. In batch gradient descent, every update uses the entire training set, so each update corresponds to one training epoch.
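A hedged sketch contrasting the three variants; they differ only in how many samples feed each weight update (the helper function, data, and hyperparameters are all made up):

```python
import numpy as np

def mse_gradient(w, xb, yb):
    # Gradient of the mean squared error for a 1-D linear model (illustrative helper).
    return 2 * np.mean((w * xb - yb) * xb)

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=1000)
y = 3 * x + 0.1 * rng.normal(size=1000)
lr = 0.1

# Batch gradient descent: the whole dataset per update, so one update per epoch.
w = 0.0
for epoch in range(50):
    w -= lr * mse_gradient(w, x, y)

# Stochastic gradient descent: a single random sample per update.
w = 0.0
for step in range(1000):
    i = rng.integers(len(x))
    w -= lr * mse_gradient(w, x[i:i + 1], y[i:i + 1])

# Mini-batch gradient descent: a small random batch (here 32 samples) per update.
w = 0.0
for step in range(200):
    idx = rng.choice(len(x), size=32, replace=False)
    w -= lr * mse_gradient(w, x[idx], y[idx])
```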
