This is a fun step, because we're really starting to get into what makes a neural network a neural network: we're putting in forward propagation. But so far, only the linear parts; we're not yet adding a non-linear activation function.

To do this, we go to layer.py, to our dense layer object, and create a new method, forward_prop. It takes in a set of inputs, the inputs to the layer. The first thing we do is create a single bias input. It's just a two-dimensional array of size one by one, and it has a value of one. Then we concatenate, stacking it against the inputs, which we have also set up as a two-dimensional array, and we specify to stack along axis number one. Axis number zero is always the row; axis number one is always the column. So you can picture this as a horizontal set of inputs, with the bias neuron tacked on at the very right side, the very final column.

Then we take this set of fully augmented inputs, x, and matrix multiply it by the weights, also a two-dimensional array. Doing that lets us get the output, y, which we return as the result.

Now we can go into our ANN class in framework.py and add a forward_prop method there. We take in the inputs, x.
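The layer method described above might look something like this minimal sketch. The class name Dense, the constructor arguments, and the weights attribute are assumptions for illustration; the transcript only specifies the forward_prop logic itself:

```python
import numpy as np

class Dense:
    """Sketch of a dense layer with a linear-only forward pass (names assumed)."""

    def __init__(self, m_inputs, n_outputs):
        # One extra weight row leaves room for the bias input appended below.
        self.weights = np.random.uniform(-1, 1, size=(m_inputs + 1, n_outputs))

    def forward_prop(self, inputs):
        # The bias: a two-dimensional array of size one by one, holding the value 1.
        bias = np.ones((1, 1))
        # Tack the bias onto the right side of the row of inputs (axis 1 = columns).
        x = np.concatenate((inputs, bias), axis=1)
        # Linear part only: matrix-multiply the augmented inputs by the weights.
        y = x @ self.weights
        return y
```

Because the bias is appended as the final column, the last row of `weights` plays the role of the bias weights for each output neuron.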
In this case, these are the inputs to the entire neural network, whatever form they're in. We take them and ravel them, meaning we turn them into a one-dimensional NumPy array. Then the construction on the end, square bracket, np.newaxis, comma, colon, square bracket, takes that one-dimensional array and stretches it out across columns, putting it all into a single row.

The next line takes, from our list of layers (which has only one layer in it right now), that one and only element, and runs the forward_prop method on it. We've already taken our input and renamed it from x to y. The reason for doing this will become clear a little later: it makes things easier when you're calling one layer after the other, because the output of the previous layer is always the input to the next layer. So in this case we run forward_prop on the inputs, get a y as the output, and return that, raveling it again so it comes back as a one-dimensional array.

Now we can go back into our train and evaluate methods and add this forward prop step. We bring our data in, take our training set, pull an example, and normalize it. The next thing we do is run it through our forward propagation. We get the result, y, and print it out in our training method, just so we can see that it's working; then we do the exact same thing in our evaluate method.

When we run it, we see that sure enough, we get what we would expect: one-dimensional arrays, of the expected size of four, and these are the outputs of this neural network. So it appears to be working as intended.
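The network-level forward pass described above could be sketched like this. It assumes the ANN class keeps its layers in a list attribute called layers, and that each layer exposes a forward_prop method as described earlier (the class and attribute names are assumptions):

```python
import numpy as np

class ANN:
    """Sketch of the network-level forward pass (names assumed)."""

    def __init__(self, layers):
        self.layers = layers

    def forward_prop(self, x):
        # Flatten whatever shape the input has to 1-D, then stretch it
        # out across columns so it all sits in a single row.
        y = x.ravel()[np.newaxis, :]
        # The output of each layer is the input to the next, so reusing
        # the name y lets us chain layers one after the other.
        for layer in self.layers:
            y = layer.forward_prop(y)
        # Return the network output as a one-dimensional array again.
        return y.ravel()
```

Writing the loop this way means nothing changes later when more layers are appended to the list: each layer's output simply feeds the next.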