ANN calculation for each layer:

a = g(Wx + b)

where,
x — is the input vector with dimensions [p_l, 1], where p_l is the number of neurons in the previous layer
W — is the weight matrix with dimensions [n_l, p_l], where n_l is the number of neurons in the current layer
b — is the bias vector with dimensions [n_l, 1]
g — is the activation function, which is usually ReLU

This calculation is repeated for each layer.

After passing through the fully connected layers, the final layer uses the softmax activation function (instead of ReLU) to convert the outputs into probabilities of the input belonging to each class (classification).

And so finally, we have the probabilities of the object in the image belonging to the different classes!

And that is how the Convolutional Neural Network works! Input images get classified as labels!

Now, let's visualize how to calculate the dimensions of the output tensor from the input tensor.

Fig 7. Output Dimension Calculations from Input Dimensions

W2 = (W1 − F + 2P) / S + 1

where,
W1 — is the width/height of the input tensor
F — is the width/height of the kernel
P — is the padding
S — is the stride
W2 — is the output width/height

Padding?

Usually, the input matrices are padded around with zeros so that the kernel can move over the matrix uniformly and the resultant matrix has the desired dimensions. So, P=1 means that all sides of the input matrix are padded with 1 layer of zeros, as in Fig 4 with the dotted extras around the input (blue) matrix.

Stride?

The kernel moving over the matrix 1 pixel at a time (Fig 4) is said to have a stride of 1. We can increase the stride to 2 so that the kernel moves over the matrix 2 pixels at a time. This will, in turn, reduce the dimensions of the output tensor and helps reduce overfitting.

And what happens to the number of channels of the output tensor?

In the case of a Convolutional Layer, it is equal to the number of kernels. And in the case of a Pooling Layer, the number of channels in the input and output tensors remains the same!
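The per-layer computation a = g(Wx + b), with ReLU in the hidden layers and softmax at the output, can be sketched in NumPy. This is a minimal illustration with made-up layer sizes and random weights, not a trained network:

```python
import numpy as np

def relu(z):
    # ReLU activation: element-wise max(0, z)
    return np.maximum(0.0, z)

def softmax(z):
    # Numerically stable softmax: shift by the max before exponentiating
    e = np.exp(z - np.max(z))
    return e / e.sum()

def dense_layer(x, W, b, g):
    # x: [p_l, 1], W: [n_l, p_l], b: [n_l, 1] -> output: [n_l, 1]
    return g(W @ x + b)

# Toy network: 4 inputs -> 3 hidden neurons (ReLU) -> 2 classes (softmax)
rng = np.random.default_rng(0)
x = rng.standard_normal((4, 1))
W1, b1 = rng.standard_normal((3, 4)), rng.standard_normal((3, 1))
W2, b2 = rng.standard_normal((2, 3)), rng.standard_normal((2, 1))

h = dense_layer(x, W1, b1, relu)          # hidden layer: shape (3, 1)
probs = dense_layer(h, W2, b2, softmax)   # class probabilities: shape (2, 1)
print(probs.shape)
print(float(probs.sum()))  # softmax outputs sum to 1
```

Note how the dimensions chain together: each layer's W has as many columns as the previous layer has neurons, so the matrix product is always well-defined.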
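The output-dimension formula W2 = (W1 − F + 2P)/S + 1 is easy to check with a small helper function (the function name here is just for illustration):

```python
def conv_output_size(w1, f, p, s):
    # W2 = (W1 - F + 2P) / S + 1
    return (w1 - f + 2 * p) // s + 1

# 32x32 input, 5x5 kernel, padding 2, stride 1 -> spatial size is preserved
print(conv_output_size(32, 5, 2, 1))  # 32
# Same kernel with stride 2 -> the output shrinks: (32 - 5 + 4)//2 + 1
print(conv_output_size(32, 5, 2, 2))  # 16
```

This also shows why P=2 is the "same padding" choice for a 5x5 kernel at stride 1, and how bumping the stride to 2 roughly halves the output width and height.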
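The channel rule can be folded into the same shape arithmetic: a convolutional layer's output gets one channel per kernel, while a pooling layer passes the channel count through untouched. A small sketch (helper names are hypothetical):

```python
def conv_output_shape(in_shape, num_kernels, f, p, s):
    # in_shape: (height, width, channels)
    h, w, _ = in_shape
    size = lambda d: (d - f + 2 * p) // s + 1
    # Convolution: output channels = number of kernels
    return (size(h), size(w), num_kernels)

def pool_output_shape(in_shape, f, s):
    # Pooling shrinks height/width but keeps channels unchanged
    h, w, c = in_shape
    size = lambda d: (d - f) // s + 1
    return (size(h), size(w), c)

# 32x32 RGB image through 8 kernels of size 5, padding 2, stride 1
print(conv_output_shape((32, 32, 3), num_kernels=8, f=5, p=2, s=1))  # (32, 32, 8)
# 2x2 max pooling with stride 2 halves height/width, keeps the 8 channels
print(pool_output_shape((32, 32, 8), f=2, s=2))  # (16, 16, 8)
```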