So our model is trained, or at least is in the process of training. It's natural to want to know how well it's doing. What's our accuracy rate? How does it stack up against whatever last run we're comparing it against? To look at this, we have the test script. This one's a little tighter, a little shorter; we did a lot of our hard work already.

We start by importing our testing data, our 10,000 examples of handwritten images to test on; another logger called a confusion logger, which, as we'll see, gathers up our actual and predicted labels and produces a confusion matrix; and a convenience function, load structure, which is just a quick way to load a structure from a file name.

We specify our number of testing iterations. Here we've just manually coded it to the number of data points we happen to know are in the test set: a little bit sloppy, but effective. We instantiate a testing data block from our testing data class, and then we load up the structure that we saved. Now we can tweak it, modify it, add this new testing data block to it, and connect it up. If we refer back to our structure diagram, port 0 on the testing data connects to port 0 of the convolution 2D 0 block, and port 1 of the testing data output connects to port 0 of the one-hot block. So we connect the testing data block exactly how the training data block had been connected.

We initialize our total loss, and we initialize our confusion logger. Here we did a little tweak: n_iter_report is the iteration at which a report is generated. To get it to generate a report on the last iteration, we specify n testing iterations minus one, so that when the iterator counts up to that n minus one iteration, it generates the report.

Then we have our testing loop. For each iteration in our number of testing iterations, we run the forward pass on our network. There is no backward pass this time; we're not training anything, so computationally it'll be a lot cheaper. We run the forward pass, take our total loss, and add whatever loss we have to it. Then we can log the appropriate values to our confusion matrix.
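To make that flow concrete, here's a minimal sketch of the whole test script. Every module path, class name, and accessor in it is a hypothetical stand-in, not the framework's confirmed API; only the shape follows the walkthrough above.

```python
# A minimal sketch of the test script described above. All names here
# (TestingData, ConfusionLogger, load_structure, predicted_one_hot, etc.)
# are hypothetical stand-ins, not the framework's confirmed API.
from data_loaders import TestingData          # hypothetical module
from loggers import ConfusionLogger           # hypothetical module
from structure_io import load_structure       # hypothetical module

n_testing_iterations = 10_000  # hard-coded to the known size of the test set

# Load the saved structure and wire in the testing data block exactly
# as the training data block had been connected:
# port 0 (images) -> conv_2d_0 port 0, port 1 (labels) -> one_hot port 0.
structure = load_structure("trained_structure.pkl")  # assumed file name
testing_data = TestingData()
structure.add(testing_data, "testing_data")
structure.connect("testing_data", "conv_2d_0", i_port_tail=0, i_port_head=0)
structure.connect("testing_data", "one_hot", i_port_tail=1, i_port_head=0)

total_loss = 0.0
# Reports fire when the iteration counter reaches n_iter_report, so n - 1
# puts the report on the final pass through the test set.
confusion = ConfusionLogger(n_iter_report=n_testing_iterations - 1)

for _ in range(n_testing_iterations):
    structure.forward_pass()       # evaluation only: no backward pass
    total_loss += structure.loss
    confusion.log_values(
        predicted=structure.predicted_one_hot,   # hypothetical accessor
        actual=structure.actual_one_hot,         # hypothetical accessor
        labels=structure.blocks["one_hot"].get_labels())
```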
If we go and look at the confusion matrix's log values method specifically, we can see that it wants three things. First, it wants the predicted result in the form of an array of the appropriate number of classes. That's one-hot: all zeros except for a one in the place of the prediction. It also wants the actual result, another one-hot array with the one in the place of the given label. Then there's an optional argument, the mapping from label name to position in that array, which we can get conveniently from the one-hot block using the get labels method. With these three things it can track the actual label and the predicted label, and it can attach the right label name when it generates the plots.

I did want to call out that even if you pass it predictions that are not one-hot, values between zero and one that could be fractional, it can handle that too, but the interpretation will be different than a standard confusion matrix. So if you want to match apples to apples and call it a confusion matrix, it's best to pass the predictions as a one-hot array.

After it goes through all of the testing iterations, it takes the total loss and finds the average, dividing by the number of testing iterations, and we get the average loss. Then we can use our confusion logger, this confusion matrix that we created, and call its calculate accuracy method to find the accuracy: of all of the examples, what fraction did we get right? In this case that's a reasonable way to summarize the results in one number, because we know we have the same number of examples in every class; it's such a carefully constructed data set.
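As a sanity check on what that accuracy method is computing: correct predictions land on the diagonal of the count matrix, so the accuracy is the diagonal sum divided by the total count. Here's a small, self-contained sketch of that reduction; the function name and the toy counts are illustrative, not pulled from the framework's source.

```python
import numpy as np

def calculate_accuracy(confusion_counts: np.ndarray) -> float:
    """Fraction of examples where predicted == actual:
    the diagonal sum over the total count."""
    return float(np.trace(confusion_counts) / np.sum(confusion_counts))

# Toy 3-class count matrix: rows are actual labels, columns are predicted.
# 8 + 9 + 7 = 24 correct out of 30 examples total.
counts = np.array([
    [8, 1, 1],
    [0, 9, 1],
    [2, 1, 7],
])
print(calculate_accuracy(counts))  # 0.8
```

Keep in mind that this single number is only a fair summary because the classes here are balanced; with skewed class counts, the per-class breakdown in the confusion matrix tells you more.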