Wednesday, March 14, 2018

What are the meanings of batch size, mini-batch, iterations and epoch in neural networks?

Gradient descent is an iterative algorithm that computes the gradient of a function and uses it to update the function's parameters in order to find a minimum (or maximum) of that function. In the case of neural networks, the function to be minimized is the loss function, and the parameters are the weights and biases of the network.
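For concreteness, here is a minimal sketch of a single gradient-descent update on a mean-squared-error loss. The linear-regression setup, learning rate, and variable names are illustrative assumptions added here, not part of the original answer:

```python
import numpy as np

def gradient_step(w, b, X, y, lr=0.1):
    """One update that moves w, b against the gradient of the MSE loss."""
    y_pred = X @ w + b                       # forward pass
    error = y_pred - y                       # residuals
    grad_w = 2 * X.T @ error / len(y)        # dLoss/dw
    grad_b = 2 * error.mean()                # dLoss/db
    return w - lr * grad_w, b - lr * grad_b  # step downhill

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
true_w = np.array([1.0, -2.0, 0.5])
y = X @ true_w + 0.1 * rng.normal(size=100)

w, b = np.zeros(3), 0.0
for _ in range(200):                         # repeated updates = iterations
    w, b = gradient_step(w, b, X, y)
```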

Number of iterations (n): The number of times the gradient is estimated and the parameters of the neural network are updated using a batch of training instances. The batch size B is the number of training instances used in one iteration.

When the total number of training instances (N) is large, a small subset of B training instances (B << N), called a mini-batch, can be used in one iteration to estimate the gradient of the loss function and update the parameters of the neural network.

It takes n (= N/B) iterations to use the entire training set once. This constitutes one epoch. So the total number of times the parameters get updated is (N/B)*E, where E is the number of epochs.
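As a rough illustration of how N, B, and E interact, the sketch below counts parameter updates in a typical shuffled mini-batch loop. The data shapes, shuffling strategy, and numbers are assumptions chosen only for illustration:

```python
import numpy as np

N, B, E = 1000, 50, 3                      # training instances, batch size, epochs
X = np.random.randn(N, 10)
y = np.random.randn(N)

updates = 0
for epoch in range(E):
    perm = np.random.permutation(N)        # shuffle once per epoch
    for start in range(0, N, B):           # n = N/B iterations per epoch
        idx = perm[start:start + B]
        X_batch, y_batch = X[idx], y[idx]  # mini-batch used for one gradient estimate
        # ... estimate the gradient on the mini-batch and update the parameters here ...
        updates += 1

print(updates)                             # (N/B) * E = 20 * 3 = 60
```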

Three modes of gradient descent:

Batch mode: B = N, so one epoch is the same as one iteration.

Mini-batch mode: 1<B<N, one epoch consists of N/B iterations.

Stochastic mode: B=1, one epoch takes N iterations.

Note: The answer assumes N is a multiple of B. Otherwise an epoch takes ceil(N/B), i.e. int(N/B) + 1, iterations.
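A small sketch of the iteration count per epoch for the three modes, including the non-divisible case handled with ceil. The specific values of N and B are arbitrary examples:

```python
import math

def iterations_per_epoch(N, B):
    """Gradient updates needed to see all N instances once with batch size B."""
    return math.ceil(N / B)

N = 1000
print(iterations_per_epoch(N, N))    # batch mode:      1
print(iterations_per_epoch(N, 32))   # mini-batch mode: 32 (1000/32 rounded up)
print(iterations_per_epoch(N, 1))    # stochastic mode: 1000
```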
