So, we've already looked at the structure diagram. It's worth taking a minute to look at this text summary. It takes advantage of the fact that in every Cottonwood block and every layer of our neural network, there is a short text description of what it is and a listing of all the meaningful parameters. If you wanted or needed to make a detailed report, say, for publication, these are all of the things you would need to describe to someone else how to build an identical network. So we can look through this and see what it looks like for this particular diagram. We can see things like the number of kernels in each convolution layer. It shows all of the regularization parameters. It shows, for each layer, what optimizer is being used and what its learning rate is. Note that we don't have to use the same learning rate, or even the same optimizer, for different layers. It shows what the loss function is, it shows all the individual pieces, even some of those small parameters or tweaks that we might not think to include when describing a network. It's all here in a text file. Now, it's not really easy to read, so it's more for the sake of documentation, completeness, and communication, but when paired with the structure diagram it gives a pretty good sense of what's going on. And if someone wanted to recreate this, say, in TensorFlow, it gives them a really good running start toward something that is either identical or very close. So now let's come back to the custom visualization that's specific to this dataset, our render results. To get at this, we use the model we've loaded, we add the testing data block like we did before in our test script, we connect it up to our convolution layer and our one-hot layer using the right ports, and we initialize lists of examples that we got right and got wrong, correct and incorrect examples. And then we run through a while loop until we're done.
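To make the per-layer optimizer point concrete, here is a minimal sketch of the kind of report the text summary produces. All the names here, the layer names, the config dictionary, and the `summarize` helper, are invented for illustration; this is not the actual Cottonwood API.

```python
# Hypothetical per-layer configuration, illustrating that each layer
# can carry its own optimizer and learning rate. Names are made up.
layer_config = {
    "conv_0": {"optimizer": "momentum", "learning_rate": 1e-3},
    "conv_1": {"optimizer": "adam", "learning_rate": 1e-4},
    "dense_0": {"optimizer": "sgd", "learning_rate": 1e-2},
}

def summarize(config):
    """Render a plain-text report, one line per layer, in the spirit
    of the text summary described above."""
    lines = []
    for name, params in config.items():
        lines.append(
            f"{name}: optimizer={params['optimizer']}, "
            f"learning_rate={params['learning_rate']}")
    return "\n".join(lines)

print(summarize(layer_config))
```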
So we just keep doing forward passes through the classifier. We pull out all the bits: the image, the label, what the actual predictions were, what the label names are for each position in that prediction array, what the one-hot representation of the labels is, what the top choice was (the hard max), and then the index of the top choice and the label name that goes with it, the predicted label. To represent the predictions nicely in a bar chart, it's helpful to have them ordered so that class 0 is at the bottom and class 9 is at the top. To do that, we'll take our predictions and order them. That's what this loop does: it goes through, and for each index of the label it pulls out the name associated with it, then goes into our ordered predictions array and, at the position corresponding to that name, puts the value associated with that index. I don't pretend that made sense or was easy to follow, but it's some indexing fancy footwork to get those predictions into the order of the label names, so that the ordering of the bar chart makes sense. Then we take all of that, the image, the actual label, the predicted label, and the ordered predictions, and bundle them together into a tuple, which we call the outcome for that example. Then we check whether the label array and the top-choice array, our hard max, match. These should both be one-hot arrays, so if we subtract one from the other, take the absolute value, and then take the sum, the result should be 0 when the predicted and actual labels are the same. We'll give it a little bit of wiggle room: if it's less than one one-thousandth, so if it's close to zero, then we'll say the actual and the predicted were the same. Then, if our collection of correct examples is still short, if it falls short of n_render_examples, we'll add that outcome to the list.
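That indexing fancy footwork can be sketched in a few lines. The label set here is a stand-in, not the actual dataset's labels; the point is just the reordering trick described above.

```python
import numpy as np

# Predictions arrive indexed by class index; we want them indexed by
# sorted label name so the bar chart reads in a sensible order.
index_to_name = {0: "cat", 1: "dog", 2: "bird"}
predictions = np.array([0.7, 0.2, 0.1])  # one value per class index

ordered_names = sorted(index_to_name.values())  # ["bird", "cat", "dog"]
ordered_predictions = np.zeros(len(predictions))
for i_label, value in enumerate(predictions):
    # Find the name for this class index, then drop the value into
    # the slot that name occupies in the sorted ordering.
    name = index_to_name[i_label]
    ordered_predictions[ordered_names.index(name)] = value
```

After the loop, `ordered_predictions` holds the same values rearranged into label-name order: bird, cat, dog.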
Otherwise, we know it's a wrong example, and if our collection of wrong examples is still short of n_render_examples, we'll add it to that list. If both lists are sufficiently long, then we jump out of the while loop. Now, there are two layers of if statements there, and there's a break in an infinite while loop. The logic can be tricky here, so during development I put a couple of assert lines in, just to add a check and make sure I didn't make any mistakes. At this point, the lengths of these two lists, right and wrong, should both equal n_render_examples if I did it right. A quick check there gives me a warm feeling inside: if the code gets past this, I know those lists are the right length.
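The loop and its sanity checks can be sketched end to end. The `forward_pass` stub here just generates random one-hot labels and predictions; in the real script those values come out of the network's ports, and the outcome tuple also carries the image and ordered predictions.

```python
import numpy as np

rng = np.random.default_rng(42)
n_render_examples = 4
n_classes = 10

def forward_pass():
    """Stand-in for the classifier's forward pass: returns a one-hot
    actual label and a one-hot (hard max) prediction."""
    actual = np.zeros(n_classes)
    actual[rng.integers(n_classes)] = 1
    predicted = np.zeros(n_classes)
    predicted[rng.integers(n_classes)] = 1
    return actual, predicted

correct_examples = []
incorrect_examples = []
while True:
    actual, predicted = forward_pass()
    outcome = (actual, predicted)
    # Two one-hot arrays match when the summed absolute difference
    # is close to zero; 1e-3 gives a little wiggle room.
    if np.sum(np.abs(actual - predicted)) < 1e-3:
        if len(correct_examples) < n_render_examples:
            correct_examples.append(outcome)
    else:
        if len(incorrect_examples) < n_render_examples:
            incorrect_examples.append(outcome)
    if (len(correct_examples) == n_render_examples and
            len(incorrect_examples) == n_render_examples):
        break

# Development-time sanity checks on the tricky break logic.
assert len(correct_examples) == n_render_examples
assert len(incorrect_examples) == n_render_examples
```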