So we trained your network. Can we understand what is in there and how it works? Could we make sense of all the weights, all the connections, everything they do together? Well, it's hard to know. I recently wrote a paper with Tim Lillicrap, who is at Google DeepMind, where we argue that maybe not. But there are some aspects that we can visualize, and we will do that, because it will help us understand a few things.

What could such a visualization look like? Here is one idea. We start with a gray image, we look at one neuron, and we ask: how can we change that image so that this neuron becomes as active as possible? What's the idea there? We want to optimize the image so as to maximize the neuron's output. How would we do that? We follow the gradients; a small code sketch of this procedure appears at the end of the section.

There is a beautiful literature that visualizes networks this way, and here, for example, is a set of findings from such a study. At the lowest level, shown on the left-hand side, we find neurons that like local oriented filters and neurons that like local colors or color contrasts. At the mid-level, neurons are most activated by certain curvatures and similar shapes. And as you go toward higher levels, we find neurons tuned to things like the beaks of birds. We can compare these visualizations of the early layers with real neurons in early areas like primary visual cortex, and there we indeed find very similar neurons in the real brain as in the early layers of convnets.

Now I want you to visualize some tuning curves. We will not do exactly what the studies I just talked about did; instead, we will take the network you trained before and visualize which aspects of space matter to it, which tells you where in space the important things are. And then: do the results make sense to you?
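To make the gradient-following idea concrete, here is a minimal sketch of activation maximization in PyTorch. The small convnet, the target neuron, the learning rate, and the step count are all illustrative assumptions, not the network from the lecture.

```python
import torch
import torch.nn as nn

# A small untrained convnet stands in for the trained network (an assumption).
model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=5), nn.ReLU(),
    nn.Conv2d(8, 16, kernel_size=5), nn.ReLU(),
)
model.eval()

# Start from a gray image and make the pixels themselves the parameters
# we optimize.
image = torch.full((1, 1, 28, 28), 0.5, requires_grad=True)
optimizer = torch.optim.Adam([image], lr=0.05)

for step in range(200):
    optimizer.zero_grad()
    activations = model(image)
    # Target neuron: channel 0 at the center of the feature map
    # (an arbitrary illustrative choice).
    target = activations[0, 0, activations.shape[2] // 2,
                         activations.shape[3] // 2]
    loss = -target        # maximizing the activation = minimizing its negative
    loss.backward()       # gradients of the activation w.r.t. the pixels
    optimizer.step()
    with torch.no_grad():
        image.clamp_(0.0, 1.0)  # keep pixel values in a valid range
```

After the loop, `image` shows a pattern this neuron "likes"; with a trained network and some regularization, this is the kind of procedure behind the feature-visualization studies mentioned above.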
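And here is one hypothetical way to plot a spatial tuning curve of the kind the exercise asks about: slide a bright dot across the image and record one neuron's response at each position. The dot stimulus, the chosen row, and the choice of channel are assumptions for illustration, not the exact procedure from the exercise.

```python
import torch
import torch.nn as nn
import matplotlib.pyplot as plt

# Again a small untrained convnet as a stand-in for the trained network.
model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=5), nn.ReLU(),
    nn.Conv2d(8, 16, kernel_size=5), nn.ReLU(),
)
model.eval()

positions = list(range(2, 26))
responses = []
for x in positions:
    stimulus = torch.zeros(1, 1, 28, 28)
    stimulus[0, 0, 14, x] = 1.0           # bright dot at row 14, column x
    with torch.no_grad():
        act = model(stimulus)
    responses.append(act[0, 0].max().item())  # peak response of channel 0

plt.plot(positions, responses)
plt.xlabel("stimulus position (pixels)")
plt.ylabel("activation")
plt.title("Spatial tuning curve (sketch)")
plt.show()
```

A peaked curve would tell you where in space this neuron cares about; flat responses would mean that region of the image does not matter to it.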