# Forward Propagation

Forward propagation in a Multi-Layer Perceptron (MLP)

The basic architecture of an MLP.

In the above diagram, we have a total of 4 layers: one input layer, followed by two hidden layers, and the final output layer. The input for this architecture is 4-dimensional, i.e. each data sample has 4 features, and the network produces a single predicted output.

In the architecture, if we look at the first layer, we have 4 inputs, 12 weights, and 3 biases. We can count the weights as the number of connections from the inputs to the neurons: in the below diagram, you can see there are 3 weights from each input, which in total becomes 4 × 3 = 12 weights, with one bias per neuron, i.e. 3 biases.

If we go to the second layer, we can see we have 6 weights and 2 biases. Here we have 3 inputs O1, O2, and O3, and 2 biases B1 and B2. As we keep moving from one layer to the next, the output of the first layer becomes the input of the second layer.

Similarly, for the last layer we have 2 inputs, 2 weights, and one bias.

If we calculate the total number of trainable parameters (weights and biases), it is layer 1 (12 weights + 3 biases) + layer 2 (6 weights + 2 biases) + layer 3 (2 weights + 1 bias) = 26 parameters.
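This count can be verified with a short snippet. The layer sizes below are taken from the architecture described above (4 → 3 → 2 → 1); the variable names are illustrative:

```python
# Layer sizes for the architecture described above: 4 inputs -> 3 -> 2 -> 1 output.
layer_sizes = [4, 3, 2, 1]

total = 0
for n_in, n_out in zip(layer_sizes, layer_sizes[1:]):
    weights = n_in * n_out   # one weight per input-to-neuron connection
    biases = n_out           # one bias per neuron
    total += weights + biases
    print(f"{n_in} -> {n_out}: {weights} weights + {biases} biases")

print("Total trainable parameters:", total)  # 26
```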

Let's see how the first layer uses the data to produce its output. As we discussed, the first layer has a total of 15 trainable parameters. In the below diagram, consider the row X to be the input, W to be the weights, and b to be the biases.

So once we do the matrix calculation, we apply the sigmoid to the result, and this sigmoid output acts as the input for the second layer. Proceeding this way, the sigmoid of the last layer's matrix calculation gives us the final output.
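A minimal NumPy sketch of this forward pass follows. The weights and biases here are randomly initialised purely for illustration; in practice they would be learned during training:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)

# One 4-feature input sample, as a 1x4 row vector.
X = rng.normal(size=(1, 4))

# Weights and biases for the three layers: 4 -> 3 -> 2 -> 1.
W1, b1 = rng.normal(size=(4, 3)), np.zeros(3)
W2, b2 = rng.normal(size=(3, 2)), np.zeros(2)
W3, b3 = rng.normal(size=(2, 1)), np.zeros(1)

# Each layer: matrix multiply, add bias, apply sigmoid;
# the sigmoid output becomes the next layer's input.
O1 = sigmoid(X @ W1 + b1)      # shape (1, 3)
O2 = sigmoid(O1 @ W2 + b2)     # shape (1, 2)
y_hat = sigmoid(O2 @ W3 + b3)  # final prediction, shape (1, 1)

print(y_hat)
```

Note how each layer's output shape matches the next layer's expected input, mirroring the "output becomes input" observation above.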

In the coming articles, we will look into more details, and backpropagation will also be covered.