Well, let's talk a little bit about visual processing in the brain. This is very important because a lot of the early approaches to convnets were inspired by insights from the brain. So let's look at historical electrophysiological experiments. These experiments are arguably still the most influential ones behind the drive towards convnets and towards thinking about neural networks.

Here is a typical experiment from Hubel and Wiesel. We have a cat on the left-hand side, and they're recording from the brain of the cat. Just to be clear, this is a very thin electrode in the brain, so thin that it doesn't really disturb the cells. That electrode is attached to an amplifier, and out of that amplifier we get signals. You can see the kinds of signals in the middle, these big black traces. These signals consist of two parts: background noise that comes from all the other neurons in the brain, and spikes from usually one neuron, the one closest to the tip of the electrode.

Now, almost by chance, what they discovered is that neurons in primary visual cortex produce very high firing rates when they see an edge. What they did is show stimuli like this on a screen in front of the cat. The stimuli can be dark or bright; in this case they used bright ones. They took these edges, which make the neuron fire a lot if they appear at the right position with the right orientation, and then they rotated them, and they also moved them. But let's first talk about rotating. What you find is that if the stimulus looks like this, there are no responses at all; that's what we see in the top part. If we rotate it more towards the direction the neuron prefers, the so-called tuning of the neuron, we see some spikes. At the right orientation we see lots of spikes. Rotating further, we see only a few spikes, and even further, no more spikes at all.

They also took what they recorded and played it through an audio monitor, which lets us listen to the spikes; that's what we'll do on the next slide. But to give away the upshot already: as we vary the stimulus orientation, which we have on the x-axis of the graph on the right, the firing rate of the neuron varies between 0 and 50 Hertz. 50 Hertz is a very high firing rate; keep in mind that the average firing rate of neurons in the brain is on the order of one spike per second.

Okay, so now let's look at what this actually looks like. We'll play a video of a complex cell (we'll talk about what complex cells are in a second) so that you get a feeling for how these experiments work. What you will hear are the spikes from the neuron, and what you will see is the stimulus they show. So let's play this; I will need to do some fancy operations so that you'll hear it. I hope you can hear the spikes: every click you hear is a spike. Here they mark the preferred orientation and the size of the receptive field. Okay, I think you get the idea.
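To make the tuning-curve idea concrete, here is a minimal simulation sketch of an orientation-tuned neuron in Python. Only the 50 Hz peak rate comes from the figure just described; the Gaussian tuning shape, the preferred orientation of 0 degrees, and the 25-degree tuning width are illustrative assumptions, not Hubel and Wiesel's actual numbers.

```python
import numpy as np
import matplotlib.pyplot as plt

# Toy orientation-tuning curve for a V1-like neuron.
peak_rate = 50.0   # Hz, maximal rate at the preferred orientation (from the figure)
preferred = 0.0    # deg, preferred orientation (illustrative assumption)
width = 25.0       # deg, tuning width (illustrative assumption)

orientations = np.linspace(-90, 90, 181)  # stimulus orientations in degrees

# Gaussian tuning: high rate near the preferred orientation, near zero far away.
rates = peak_rate * np.exp(-0.5 * ((orientations - preferred) / width) ** 2)

# Spiking is roughly Poisson, so simulate spike counts in a 1-second window:
# ~50 spikes at the preferred orientation, ~0 for an orthogonal bar.
rng = np.random.default_rng(0)
counts = rng.poisson(rates)

plt.plot(orientations, rates, label="mean rate (Hz)")
plt.scatter(orientations, counts, s=8, alpha=0.5, label="simulated 1 s spike counts")
plt.xlabel("stimulus orientation (deg)")
plt.ylabel("firing rate (Hz)")
plt.legend()
plt.show()
```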
Now let me tell you how much this affected me. When I was in Zurich, Kevan Martin was doing these experiments. It was in the basement of the institute, and whenever they ran an experiment, it was basically two days of non-stop recording. I remember seeing Kevan do this mapping, and it so much feels like you understand how the brain works. Watching this kind of mapping gives you a feeling for, or at least the illusion of, understanding how it works.

Now let's recap what we have here. We have a nerve cell that is very active if you show it a bar. It doesn't care where within that area you show the bar; it only cares that the bar has the right orientation. So, within limits, it's an invariant cell: as I move the bar within that area, it produces exactly the same response. This idea of producing invariances is what gave a lot of drive to the search for neural representations, and later to the search for representation learning as well.

So let's see the idea it drove. The idea was that we first have a neuron, the so-called simple cell, and Hubel and Wiesel found those units too. Simple cells are only active if you show a bar at exactly the right place: here it would be very active, and if you move the bar even a tiny bit, the activity is gone. Contrast that with complex cells, which are active anywhere within their region.

Now, what today's tutorial is about is basically this. We have a bunch of neurons that have a receptive field; in fact, we have lots of them, and that's what we use convolutions for. They basically tell us whether a feature exists at a given place. And then we have complex cells. In the context of what we will build, these are implemented as a max pooling layer, which basically says: there's something with this orientation, I don't really care where it is; wherever it is, I want the best fit of that feature. We'll talk much more about this today, and there's a small sketch of this convolution-plus-pooling idea in code at the end of this part.

And of course, the idea that comes up immediately with the Hubel and Wiesel data is that we could be hierarchical about this. Here's a nice review paper by Rolls, who did really important work in that area throughout the 90s and later. You can say we have an input layer, and then a layer one, a layer two, a layer three, a layer four. As we go up, the receptive fields get bigger and the invariances get larger. At the first layer, we're really in the retina in our eyes: it just tells us whether a point is bright or dark, and it has arguably no meaningful invariances. As we go towards V1 and complex cells, we get some invariance to small movements, and as we go through the hierarchy of the brain, we get more and more invariance, until at very high levels we might have object recognition that doesn't care where exactly the object appears.

Now let's briefly talk about invariances, in the sense of invariant object recognition in cognition. There's a beautiful slide here by Matt Krause. What are we talking about? The first one is translation invariance: if I detect an object, maybe a coffee cup as a random object from the environment, it's the same coffee cup regardless of where exactly it's shown. Then there's rotational invariance and viewpoint invariance. There's size invariance: we might want to detect the object even if it's scaled. And there's illumination invariance. So there are all these different invariances, and within cognitive science there's a lot of research on what kinds of transformations people are invariant to in their high-level recognition.
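Here is the sketch promised above: a minimal simple-cell/complex-cell model in PyTorch, where the "simple cell" is a convolution with a hand-made oriented filter and the "complex cell" is a max pool over the convolution's output. The filter, image size, and bar stimuli are all illustrative assumptions, not the tutorial's actual model.

```python
import torch
import torch.nn.functional as F

# "Simple cell": a convolution with an oriented filter, which responds only
# when a vertical bar sits at exactly the right place. "Complex cell": a max
# pool over the simple-cell map, which responds to the bar anywhere inside
# its pooling region.

# Oriented filter: an excitatory vertical stripe flanked by inhibition.
kernel = -torch.ones(5, 5)
kernel[:, 2] = 4.0                   # excitatory center column
kernel = kernel.view(1, 1, 5, 5)     # (out_ch, in_ch, H, W) for conv2d

def responses(bar_column):
    """Responses of one simple cell and one complex cell to a vertical bar."""
    image = torch.zeros(1, 1, 16, 16)
    image[0, 0, :, bar_column] = 1.0                      # bright vertical bar
    simple_map = F.relu(F.conv2d(image, kernel, padding=2))
    one_simple = simple_map[0, 0, 8, 8].item()            # simple cell at one fixed position
    complex_cell = F.max_pool2d(simple_map, kernel_size=16).item()  # pools over all positions
    return one_simple, complex_cell

for col in (4, 8, 12):
    s, c = responses(col)
    print(f"bar at column {col:2d}: simple cell {s:5.1f}, complex cell {c:5.1f}")
# The simple cell fires only when the bar hits its own receptive field
# (column 8); the complex cell gives the same response wherever the bar is:
# position invariance from pooling.
```

This conv-then-pool pair is exactly the pattern that gets stacked into a hierarchy: each pooling stage enlarges the effective receptive field and the invariance of the next layer, as in the Rolls-style hierarchy above.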
Now, I want to say that this idea of invariance is what, as we will discuss later, ultimately gives rise to convnets. But I also want to emphasize that it's not entirely true. Here's some analysis from one of the earliest papers I was ever involved in. What we did is measure the average brightness as a function of where we are in the visual field, of a cat in that case. And look: it's much brighter towards the top than towards the bottom. Why is that the case? Because there's a horizon. Below the horizon is usually the stuff we walk on; above the horizon is the sky. That's why there's this brightness difference. (There's a small sketch of this kind of analysis at the end of this section.) And it turns out that if you look at real nervous systems, they are slightly different above the horizon than below it. For example, on the right-hand side you can see the different receptors: they're not quite the same towards the top as at the bottom. There are also differences in the neural responses to stimuli as a function of where they are in the visual field.

Now here's a question that I want you to discuss. How do you think neuroscience and cognitive science have inspired deep learning? How do you think they will continue inspiring deep learning? And how much do they inspire you yourselves?
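As an aside, here is the promised sketch of the brightness-versus-elevation analysis. The actual analysis used recordings of natural scenes from a cat's point of view; the image stack below is synthetic (bright "sky" rows on top, dark "ground" rows on the bottom), purely to show the shape of the computation.

```python
import numpy as np

rng = np.random.default_rng(0)
n_images, height, width = 100, 64, 64

# Synthetic stand-in for natural images: brightness ramps from dark ground
# (bottom rows) to bright sky (top rows), plus per-pixel noise.
elevation_profile = np.linspace(0.3, 0.7, height)[::-1]   # index 0 = top = bright
images = rng.random((n_images, height, width)) * 0.2 + elevation_profile[None, :, None]

# The analysis itself: average brightness at each elevation (image row),
# pooled over all images and all horizontal positions.
mean_brightness = images.mean(axis=(0, 2))

print(f"top row mean brightness:    {mean_brightness[0]:.2f}")
print(f"middle row mean brightness: {mean_brightness[height // 2]:.2f}")
print(f"bottom row mean brightness: {mean_brightness[-1]:.2f}")
```

On real natural scenes, the same row-wise average shows the top-bright, bottom-dark asymmetry described above, which is the point: the statistics of the visual world are not translation invariant, and neither, exactly, are real visual systems.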