When we go to run our convolutional neural network example, we get a polite little message telling us where to look for all of the results. We have a reports directory and then a date-and-time-stamped subdirectory. This ensures each run is unique: every time you run, you'll generate a new set of results, and you can go back through and pull out the one that you want.

When we go to look at this set of results, let's walk through it. First, structure_summary.txt. This gives us a text-based report of everything that was involved in our network. If we've done it right, this should be enough to give to someone else so they could recreate our work from scratch. This is a convenient thing to have if you're publishing a paper and someone wants to know some detail about a run, say, what some parameter was set to that you might have forgotten to put in the paper. You can go back, check this report, and see exactly what it was.

We start with a simplified representation of the network. This shows each connection individually. From this you can reconstruct the full diagram, but that's tough to represent textually, so instead we just generate a list of connections; at least the information is there if we need to recreate it. Then for each block, we get a label and a short description of what it does. For the convolution block, you can see that all of our work generating these strings and documenting pays off: we get the number of inputs, channels, outputs, and kernels, the kernel size, which initializer we used, which optimizer we used, and what its parameters were. Then we see our hyperbolic tangent block, our next convolution block, our next hyperbolic tangent block, our flatten block, and our linear block, which shows the number of inputs, the number of outputs, and all of the parameters associated with it. Then come our logistic block, our difference block, and our mean squared error loss block.
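The reporting pattern described above, a timestamped subdirectory plus a text summary of each block, can be sketched roughly like this. This is a minimal illustration, not the example's actual code; the names `create_report_dir` and `write_structure_summary`, and the (label, description) block format, are assumptions made up for this sketch.

```python
import os
from datetime import datetime

def create_report_dir(base="reports"):
    """Make a unique, timestamped subdirectory under the reports directory."""
    stamp = datetime.now().strftime("%Y-%m-%d_%H-%M-%S")
    report_dir = os.path.join(base, stamp)
    os.makedirs(report_dir, exist_ok=True)
    return report_dir

def write_structure_summary(report_dir, blocks):
    """Write a text report: one section per block, a label plus a description.

    `blocks` is a list of (label, description) pairs; the real report would
    also include each block's parameters, initializer, and optimizer.
    """
    path = os.path.join(report_dir, "structure_summary.txt")
    with open(path, "w") as f:
        for label, description in blocks:
            f.write(f"{label}\n    {description}\n")
    return path
```

Because every run gets its own timestamped directory, old results are never overwritten and you can compare runs after the fact.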
Altogether, this tells us everything we need to know to reconstruct our network.

We also get a nice visual representation. A separate function that we called in the example created this and saved it out. It's beyond the scope of this walkthrough, but it'll be covered later. Here we can see visually that everything starts with our training block. It passes through our two convolution blocks, each with its nonlinear activation function, then the flatten block, and the linear block with its nonlinear activation function; if you put those last two together, they make a dense block. Then you see the labels going through the one-hot block, the difference block that compares the output of our convolution branch with our one-hot branch, and then the calculation of the mean squared error loss. Visually, we get a full picture of the network.

Then we can see how the loss changes over time. There are two representations. One is a CSV, so if we need to go through and do an analysis on it later, we can always load it into NumPy and do whatever kind of visualization and calculation we want. But it's also convenient to have a visual representation. This is particularly helpful when we go to run the neural network: the plot gets updated periodically, so before the run is done, you can open this report and watch the loss creep down the curve in real time. That's also really helpful if the network is misbehaving from the outset and the loss isn't decreasing at all. You can start to debug right away without waiting for the whole run, which can be quite long with bigger data sets or deeper networks.
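Loading the loss CSV back into NumPy for later analysis might look like the sketch below. This is a minimal illustration under assumptions: the function name `summarize_loss` is made up, and the loss history is assumed to be saved as a single column of values, one per reporting interval.

```python
import numpy as np

def summarize_loss(csv_path):
    """Load a one-column loss history CSV and report simple run statistics."""
    loss = np.loadtxt(csv_path, delimiter=",")
    return {
        "iterations": int(loss.size),
        "first": float(loss[0]),
        "last": float(loss[-1]),
        "min": float(loss.min()),
    }
```

From the same array you could just as easily plot the curve with matplotlib, smooth it, or compare it against another run's CSV; that flexibility is the point of keeping the raw numbers alongside the rendered image.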