So, today's topic is how we can make ConvNets scale. I just want to start with a reminder: a ConvNet needs ridiculously fewer parameters than a fully connected network would, and we can use some further tricks to need even fewer. The advantage we get from needing fewer parameters is just massive. Images are big. How big is an image? Image height, times image width, times the number of channels, and with hyperspectral imaging it can be far more. The upshot is that we have something on the order of a million input values. So if we were fully connected, every neuron would need a weight for each of those inputs, and we could afford only very few neurons at the next layer. But with a ConvNet, we can basically reuse the same parameters all over the place. We can have lots of neurons at the first layer, namely one at every spatial location, and because they all share the same weights, we still have very few parameters. What we'll learn today are ideas on how to make ConvNets scale, how to make them deep, and so forth. So let's start with an exercise on counting parameters, to get you an intuition for the domain we're working in here.
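To make that intuition concrete, here is a quick back-of-the-envelope count, assuming an illustrative 224×224 RGB image, a fully connected layer of 100 neurons, and a convolutional layer of 100 filters with 3×3 kernels (all sizes are hypothetical, just chosen for the exercise):

```python
# Parameter count: fully connected layer vs. conv layer on the same image.
H, W, C = 224, 224, 3        # image height, width, channels (illustrative)
n_inputs = H * W * C         # number of input values the layer sees

# Fully connected: each of the 100 neurons has one weight per input, plus a bias.
n_neurons = 100
fc_params = (n_inputs + 1) * n_neurons

# Convolutional: each of the 100 filters has k*k*C weights, plus a bias,
# and those same weights are reused at every spatial location.
n_filters, k = 100, 3
conv_params = (k * k * C + 1) * n_filters

print(f"input values:    {n_inputs:,}")     # 150,528
print(f"fully connected: {fc_params:,}")    # 15,052,900
print(f"convolutional:   {conv_params:,}")  # 2,800
```

The fully connected layer needs roughly fifteen million parameters; the conv layer, a few thousand, even though it places a neuron at every location in the image.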