Two-dimensional convolutional neural networks are what a lot of people think about when they think about machine learning or AI: the ability to take an image and classify it. We're going to start with a classic starter example for this, the MNIST digits dataset. It's a carefully collected and preprocessed set of handwritten digits, 0 through 9, each labeled by humans, and it gives us a good opportunity to test drive any image classification algorithm we want to get up and running. It's not the hardest image classification problem by a long shot, and in fact, it's not a good way to prove that you have a great algorithm. But it is a good way to test that your algorithm is functional, that it works. It's a good way to set a baseline. It's also a good way to take the Cottonwood framework and work out the kinks in its implementation of two-dimensional convolutions.

We're going to run this course differently than we've done with previous courses. Previously, we started at the bottom, at the lowest level: from the concepts, to the code implementation, to gluing it all together, to looking at the results, and then to visualization and interpretation. Here, we're going to start at the top. We'll look at the overall problem, at the results it produces, and at what they mean. Then we'll move down to the coding implementation at a high level, how we get that to happen, and then we'll drill down to the individual pieces of code that make it happen, the raw Python and NumPy implementations. Then we'll go even deeper into the concepts.

This will give you a chance to go as deep as you want. If you're just interested in a high-level overview, you can bow out at the appropriate time. If you're interested in going a level deeper to see how things are implemented, but want to save the detailed deep dive for later, you can do that. And if you want to go all the way to the bottom, you can do that too.
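To give a taste of the kind of raw NumPy code we'll be drilling down to, here's a minimal sketch of a two-dimensional convolution (valid padding, stride 1). The function name and details here are illustrative, written for this overview; they are not the actual Cottonwood implementation.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-padding, stride-1 two-dimensional convolution (illustrative sketch)."""
    # Flip the kernel in both dimensions to get true convolution;
    # cross-correlation, which many deep learning libraries use, skips this.
    kernel = np.flipud(np.fliplr(kernel))
    out_rows = image.shape[0] - kernel.shape[0] + 1
    out_cols = image.shape[1] - kernel.shape[1] + 1
    result = np.zeros((out_rows, out_cols))
    # Slide the kernel over every valid position and sum the elementwise products.
    for i in range(out_rows):
        for j in range(out_cols):
            patch = image[i:i + kernel.shape[0], j:j + kernel.shape[1]]
            result[i, j] = np.sum(patch * kernel)
    return result

# A small 4x4 "image" and a 2x2 averaging kernel, standing in for a
# 28x28 MNIST digit and a learned filter.
image = np.arange(16, dtype=float).reshape(4, 4)
kernel = np.full((2, 2), 0.25)
output = conv2d(image, kernel)
print(output.shape)  # (3, 3)
```

The nested loops make the mechanics visible, which is the point at this stage; an optimized implementation would vectorize or call into a library routine instead.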