Now, to be able to see what's going on over time, we'll create a logger. What this does is keep track of a value, whatever value we send it, for as long as we want, and it automatically produces a running history of that value and a convenient plot to sum it up; there's a rough sketch of the idea below. This is especially good for tracking the loss of a network, but you can imagine plenty of other cases where there's a value and you'd like to see how it changes over time. We'll revisit this in a bit too, because it's new with the new version of Cottonwood.

Now we can go in and explicitly create our training loop. In its simplest form, it's pretty straightforward: we have our number of training iterations, we iterate through that, and for each iteration we do one forward pass through the network and one backward pass through the network. That's all we really need; everything else is for convenience, visualization, and documentation. On each pass through, we'll use our logger to pull out the loss value for that pass and log it. That lets us track it over time and see a plot of it. We also have a nice way to visualize what's going on inside an individual convolution block. This is a good way to get snapshots of where things are and to get a sense of how things are operating, so we'll take snapshots of each of our two blocks periodically, at a set visualization interval, to see what they're doing.

Then, after passing all the way through our training loop, we'll do some more record keeping and visualization. We'll summarize the structure, which generates a text file documenting all the pieces: what they do, how they're built, and what their parameters are. We'll save that out to the reports directory. We'll also do a visualization of the structure. This is the network visualization we looked at before, the one that shows each of the blocks and how they're connected together. It can be a good way to get a nice at-a-glance overview of what's going on.

Now, to set up evaluation, we modify the structure. We reach in, remove the training data block, add the evaluation data block into the structure, and then explicitly reconnect it; we wire it up. Remember, the zeroth port on the eval block carries the signal, so it connects to the first convolution block, and port number one, the second port on the evaluation block, carries the label, so we wire it back up to the one-hot block. Now we have successfully removed the training data block and inserted the evaluation data block, and we can run the evaluation phase.

So we iterate through our number of evaluation iterations and do a forward pass. What makes this an evaluation is that we don't do a backward pass: we don't update the parameters to walk down the loss gradient and minimize the loss. We keep them frozen, but keep doing passes to get the result. Again, we log it so we can see it, and we periodically visualize what's going on in the convolution blocks, but all of that is extra. The only part we really need for evaluation is the forward pass loop.
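To make the logger idea concrete, here's a minimal stand-in. This is not Cottonwood's actual implementation, just a small class showing the idea: record every value it's sent and plot the whole history on demand.

```python
# A minimal stand-in for the value logger described above: it keeps a
# running history of whatever value we send it and plots that history.
import matplotlib.pyplot as plt


class Logger:
    def __init__(self, name="loss"):
        self.name = name
        self.history = []

    def log(self, value):
        # Append the latest value to the running history.
        self.history.append(value)

    def plot(self, filename=None):
        # Plot the whole history so far as a single curve.
        plt.figure()
        plt.plot(self.history)
        plt.xlabel("iteration")
        plt.ylabel(self.name)
        if filename is None:
            plt.show()
        else:
            plt.savefig(filename)
        plt.close()
```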
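And here's a rough sketch of the training loop described above, written as a function over a hypothetical `structure` object. The method names (`forward_pass`, `backward_pass`, `loss`, `render`, `summarize`, `visualize`) are assumptions standing in for the actual Cottonwood calls; the shape of the loop is the point: one forward pass and one backward pass per iteration, with logging, periodic snapshots, and end-of-run documentation layered on top.

```python
# A sketch of the training loop under the assumptions named above.
def train(structure, conv_blocks, n_iter=10_000, viz_interval=1_000):
    logger = Logger(name="training loss")
    for i in range(n_iter):
        structure.forward_pass()   # push one training example through
        structure.backward_pass()  # walk down the loss gradient
        # Record the loss for this pass so we can plot it over time.
        logger.log(structure.loss)
        # Periodically snapshot each convolution block to get a sense
        # of how it's operating.
        if i % viz_interval == 0:
            for block in conv_blocks:
                block.render(i)
    logger.plot("reports/training_loss.png")

    # After training: documentation. A text summary of every block and
    # its parameters, plus the block-and-connection diagram, both saved
    # to the reports directory.
    structure.summarize(path="reports")
    structure.visualize(path="reports")
```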
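Swapping in the evaluation data might look something like this. The `remove`, `add`, and `connect` methods, their argument names, and the block names are all guesses at the exact API; the port numbering follows the description above, with port 0 carrying the signal and port 1 carrying the label.

```python
# A sketch of the swap from training data to evaluation data,
# using assumed method and block names.
structure.remove("training_data")
structure.add(evaluation_data_block, name="evaluation_data")

# Port 0 on the eval block carries the signal: wire it to the first
# convolution block.
structure.connect("evaluation_data", "conv_0", tail_port=0)
# Port 1 carries the label: wire it back to the one-hot block.
structure.connect("evaluation_data", "one_hot", tail_port=1)
```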
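Finally, a sketch of the evaluation loop, under the same assumed names as the training sketch. The one essential difference from training is the missing `backward_pass` call: no parameter updates, just forward passes accumulating loss measurements.

```python
# A sketch of the evaluation loop: forward passes only.
def evaluate(structure, conv_blocks, n_iter=1_000, viz_interval=100):
    logger = Logger(name="evaluation loss")
    for i in range(n_iter):
        structure.forward_pass()  # no backward pass: parameters stay frozen
        logger.log(structure.loss)
        # The periodic snapshots are extra; only the forward pass loop
        # is really needed for evaluation.
        if i % viz_interval == 0:
            for block in conv_blocks:
                block.render(i)
    logger.plot("reports/evaluation_loss.png")
```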