All right, this exercise is pretty exciting. Up to this point we've created a skeleton for our framework; now we start to put the muscle on. Here we're building actual individual layers, so we really have to start understanding what a neural network is and how the nuts and bolts fit together. If you haven't yet watched the video "What do neural networks learn?", I highly recommend taking a half an hour to watch it or read through the blog post. It will inform everything we do for the next few exercises. With this basic understanding in place, we can take it piece by piece and implement it.

The very first step is to go into the neural network framework package and create a module called layer.py, and within it a class called Dense. We initialize it with the things we'll need to flesh out our neural network, specifically our inputs and our outputs. You can see I also have an initialization argument for debug. This is an artifact from when I was developing that I never went through and removed; you don't need it at all. At this point we just need to specify the number of inputs and the number of outputs. It's helpful to turn those into internal attributes so that we can remember them over time. You can see I also include a learning rate. This is something we will need eventually; you don't need to include it yet, and we'll talk about it later, but I happened to include it here.

Here in the initialization there are a couple of other things that are more complex than they need to be. We go through and simplify them in the very final version. These are leftovers from trial and error while getting everything running during initial development. They still allow the code to run, but they can be dramatically simplified. For example, the initial weight scale here is equal to one.
I tried different values there, but it could be removed as a constant. Initializing the weights as a random sample between minus one and one is what that statement does, so every weight connecting each of the inputs to each of the outputs starts with a randomly selected value between minus one and one.

There's also this member attribute w_grad. This is something we end up not needing; I used it for an intermediate calculation at one point and then simplified things so that it wasn't necessary. These will remain as artifacts in the code until the very end, when we'll clean them out, but just know that it is not necessary for you to have them in your solution at all.

We also initialize a set of inputs and a set of outputs, just for good hygiene: a zero vector of inputs going in and a zero vector of outputs going out, so that if we happen to reference them in the code before the first iteration, there's something there and we don't get an error. The fact that we initialize the inputs with the number of inputs, m_inputs, plus one is a reference to the bias neuron, the extra constant that we add in at each layer. With each layer we take the number of inputs that we feed it plus this extra one that always has a value of one, so the total inputs x will have the number of inputs plus one.

Then we also go into run_framework.py, the script we use to run all this. We can now import the layer module, and once we have our number of input and output nodes, we can create a layer that connects them. It will be a dense layer, meaning all the inputs are connected to all the outputs, and we pass it the number of input nodes and the number of output nodes as the inputs and outputs for this layer. We put that within a list; in this case, we just have a list of one.
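Putting those initialization details together, here's a minimal sketch of what the Dense initializer might look like at this stage. The exact attribute names (m_inputs, n_outputs, x, y) and the use of NumPy's uniform sampler are my assumptions, not necessarily the course's exact code:

```python
import numpy as np


class Dense:
    """A fully connected layer: every input connects to every output."""

    def __init__(self, m_inputs, n_outputs, learning_rate=0.001):
        self.m_inputs = m_inputs
        self.n_outputs = n_outputs
        self.learning_rate = learning_rate

        # Development artifact: a constant scale of 1 on the weights.
        initial_weight_scale = 1.0
        # Weights connect each input (plus the bias node, hence the +1)
        # to each output, drawn uniformly from [-1, 1).
        self.weights = initial_weight_scale * (
            2 * np.random.sample(size=(self.m_inputs + 1, self.n_outputs)) - 1
        )

        # Zero vectors for good hygiene, so referencing inputs or outputs
        # before the first iteration doesn't raise an error. The extra
        # input slot is the bias neuron, which always carries the value 1.
        self.x = np.zeros((1, self.m_inputs + 1))
        self.y = np.zeros((1, self.n_outputs))
```

Note how the bias shows up in two places: one extra row of weights and one extra slot in the input vector.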
There's a single layer, no hidden layers, and that is our model. Now that we have an actual model, an actual list of layers, we can substitute it in when we initialize our neural network. So now we're passing it an actual network specification, even though we haven't yet fully defined what this dense layer does.
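The run-script side can be sketched like this. Since the real Dense class lives in layer.py, I define a tiny stand-in here just so the example is self-contained; the node counts and the constructor the model list gets handed to are placeholders, not the course's exact names:

```python
import numpy as np


# Stand-in for the Dense class imported from layer.py, kept to the bare
# minimum needed to illustrate building the model list.
class Dense:
    def __init__(self, m_inputs, n_outputs):
        self.m_inputs = m_inputs
        self.n_outputs = n_outputs
        # +1 row of weights accounts for the bias node.
        self.weights = 2 * np.random.sample((m_inputs + 1, n_outputs)) - 1


# Placeholder node counts; in run_framework.py these come from the data.
n_input_nodes = 64
n_output_nodes = 64

# The model is just a list of layers. Here it's a list of one:
# a single dense layer, no hidden layers.
model = [Dense(n_input_nodes, n_output_nodes)]
```

In run_framework.py this model list is then what gets passed into the neural network's constructor in place of the earlier skeleton, though the constructor's exact name and signature depend on how you built your framework in the previous exercises.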