It's really helpful, as it runs, to be able to see what it's doing, and in this case the number we are watching carefully is the loss. So we have a value logger, which is just a convenient way to go through each iteration and pull out a particular value if we want to watch how it changes over time. When we create it, we specify: every thousand iterations, show me what's going on, and make sure to average the results over each thousand iterations. So we'll get one point every thousand on this plot, tracking our progress and seeing how our loss changes. We have specified a million training iterations, so if we let it run to completion, we'll end up with a little line plot that has a thousand points on it.

Next we can step through our training loop. We've chosen our number of iterations, in this case a million. For each one of those, we take our classifier, our structure, and run the forward pass. The forward pass is the function that steps through this graph and runs every block: each block generates its output and passes it as input to whatever blocks it's connected to, stepping all the way across the graph. Then we log the loss: whatever the loss value is when it gets to the end, we record that. Then we run the backward pass, stepping backward through this directed acyclic graph, starting at the end and going all the way to the beginning, in our case running backpropagation and updating all of those parameters as it goes. Then, every time we pass a save interval (we have it set to every 20,000 iterations), we stop and make a copy of our model. We strip out the training data block, because it's kind of large: it has all 50,000 examples of digits loaded into it, 50,000 images.
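The logging and training-loop pattern described above can be sketched in plain Python. To be clear, every name here (ValueLogger, DummyModel, forward_pass, backward_pass, and the toy loss) is invented for illustration; this is not Cottonwood's actual API, just the shape of the loop: forward, log, backward, and a periodic checkpoint with the large training data stripped out before saving.

```python
import copy

class ValueLogger:
    """Average a scalar over fixed-size windows of iterations (hypothetical)."""
    def __init__(self, interval=1000):
        self.interval = interval
        self.buffer = []
        self.averages = []  # one point per window, like the plot described

    def log(self, value):
        self.buffer.append(value)
        if len(self.buffer) == self.interval:
            self.averages.append(sum(self.buffer) / self.interval)
            self.buffer = []

class DummyModel:
    """Stand-in for the block graph; not the real classifier structure."""
    def __init__(self):
        self.weight = 1.0
        self.training_data = list(range(50_000))  # stands in for the large data block

    def forward_pass(self):
        # Toy loss so the sketch runs end to end.
        return self.weight ** 2

    def backward_pass(self):
        # Toy gradient step in place of real backpropagation.
        self.weight -= 0.001 * 2 * self.weight

n_iterations = 10_000   # the real run uses a million
save_interval = 2_000   # the real run uses 20,000
model = DummyModel()
logger = ValueLogger(interval=1000)
checkpoints = []

for i in range(n_iterations):
    loss = model.forward_pass()   # step forward through the graph
    logger.log(loss)              # record the loss at the end
    model.backward_pass()         # step backward, updating parameters
    if (i + 1) % save_interval == 0:
        snapshot = copy.deepcopy(model)
        snapshot.training_data = None  # strip the big data block before saving
        checkpoints.append(snapshot)
```

With these toy numbers you get one logged point per thousand iterations and one stripped-down model copy per save interval, mirroring the million-iteration run at a smaller scale.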
We take those images out, then save the model under our model file name, and then we have it cached for use later. So in under a hundred lines of code, you can see we imported all the things that we need from cottonwood, we chose our key parameters, we created a structure, pulled all those blocks in, created them and named them, then we connected them together into our structure, into our graph, and then ran them through their training iterations. So that there is the training loop. We can then run this at the command line, or wherever you want, as train.py, and it will train this model to perform as well as it possibly can on this data.
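The assembly step in that summary, creating blocks, naming them, and connecting them into a graph, can be sketched like this. Again, the Structure class and its add/connect/forward_pass methods are assumptions made up for this sketch, not Cottonwood's real interface; the blocks here are simple callables standing in for real layers:

```python
class Structure:
    """A minimal container for named blocks wired into a simple chain (hypothetical)."""
    def __init__(self):
        self.blocks = {}       # name -> block (any callable)
        self.connections = []  # (upstream name, downstream name)

    def add(self, block, name):
        self.blocks[name] = block

    def connect(self, upstream, downstream):
        self.connections.append((upstream, downstream))

    def forward_pass(self, x):
        # Assumes a single chain: start at the first upstream block,
        # then pass each block's output to the next connected block.
        order = [self.connections[0][0]] + [d for _, d in self.connections]
        for name in order:
            x = self.blocks[name](x)
        return x

# Create blocks, name them, and connect them into the graph.
structure = Structure()
structure.add(lambda x: 2 * x, "double")
structure.add(lambda x: x + 3, "add_three")
structure.connect("double", "add_three")

result = structure.forward_pass(5)  # 5 -> 10 -> 13
```

A real structure would hold trainable blocks with forward and backward methods instead of lambdas, but the wiring idea, named blocks plus explicit connections, is the same.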