So let's talk a little bit about some minor details here. The first one is padding. What do we do if the filter hits the edge of the image? At some level, the result of convolution without padding will always be smaller than what we start with, because we cannot apply the kernel, say a three by three kernel, centered on the edge of the image. We can apply it here, in the interior, but not anywhere along the border. We can solve that problem by simply adding zeros all the way around the image, a process called padding. You can tell the code how much padding you want, and in certain circumstances it's very useful, in particular if you have very small images. If you have very large images, you can say padding is roughly irrelevant.

The next one is stride. I can apply the kernel at every location, which is what we'd call stride one. Alternatively, I could apply the kernel only at every other location, which would be all these green ones. So instead of getting an output that's nine by nine, I would get an output that is five by five; that is stride two. Stride three basically applies the kernel only at these nine locations, and stride four at these locations. Okay, so we can change padding and we can change stride, both for the convolution we do here and for max pooling, which we'll be talking about later.

So now, what's the motivation, at some sense, for convolutions? One of them is that they really give us meaningful local features. Here we have four features that pick up on horizontal versus vertical lines or diagonal lines. For example, the horizontal filter that you see here will basically be active if there are positive inputs along this line and negative ones along those lines. If we apply it to something like the house, we get the horizontal edges of this house.
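As a sketch of the padding and stride arithmetic above (not the lecture's actual code), here is a minimal NumPy convolution. Assuming an eleven by eleven input and a three by three kernel, stride one with no padding gives a nine by nine output, stride two gives five by five, and stride three hits only nine locations:

```python
import numpy as np

def conv2d(image, kernel, padding=0, stride=1):
    """Naive 2D convolution (really cross-correlation) with zero padding and stride."""
    img = np.pad(image, padding)  # add `padding` zeros on all four sides
    kh, kw = kernel.shape
    out_h = (img.shape[0] - kh) // stride + 1
    out_w = (img.shape[1] - kw) // stride + 1
    out = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            patch = img[i * stride:i * stride + kh, j * stride:j * stride + kw]
            out[i, j] = np.sum(patch * kernel)
    return out

image = np.random.rand(11, 11)   # hypothetical 11x11 input
kernel = np.ones((3, 3))

print(conv2d(image, kernel).shape)                 # (9, 9)  : stride 1, no padding
print(conv2d(image, kernel, padding=1).shape)      # (11, 11): padding keeps the size
print(conv2d(image, kernel, stride=2).shape)       # (5, 5)  : every other location
print(conv2d(image, kernel, stride=3).shape)       # (3, 3)  : only nine locations
```

The general rule is that the output size is `(input + 2 * padding - kernel) // stride + 1` along each dimension.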
Now you can say that the existence of this feature here, of these many horizontal lines, could be indicative of, say, stairs. And maybe the existence of a line like this could be indicative of the roof. And the absence of very strong horizontal or vertical lines could be indicative that we're looking at this round face, or something like that. Okay, so you can see how intuitively these might be useful features. And in fact, you can maybe even imagine how, in old-fashioned AI, we would have handcrafted features that tell us what the orientations are. Now, on feature detection and edges: we can run a local feature across an image, and I want you to think about how that could be useful for image recognition. So what I want you to do is take a local feature, convolve it with a simple image, and ask yourself how this could be useful.
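To try the exercise concretely, here is one possible version (a hypothetical toy setup, not the lecture's): a horizontal-edge filter with positive weights on its top row and negative weights on its bottom row, run across a simple image whose upper half is bright and lower half is dark. The response is strong only at the rows spanning the edge:

```python
import numpy as np

# Horizontal-edge filter: positive along the top line, negative along the bottom.
horiz = np.array([[ 1.,  1.,  1.],
                  [ 0.,  0.,  0.],
                  [-1., -1., -1.]])

# Toy image: bright upper half, dark lower half, so there is exactly
# one horizontal edge in the middle.
img = np.zeros((8, 8))
img[:4, :] = 1.0

# Valid cross-correlation (no padding, stride one) done by hand.
kh, kw = horiz.shape
response = np.zeros((img.shape[0] - kh + 1, img.shape[1] - kw + 1))
for i in range(response.shape[0]):
    for j in range(response.shape[1]):
        response[i, j] = np.sum(img[i:i + kh, j:j + kw] * horiz)

print(response[:, 0])  # 3.0 only where the window straddles the edge, 0.0 elsewhere
```

The uniform regions cancel out to zero, and the filter fires exactly where brightness drops from top to bottom, which is why such features localize edges.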