Now, if you've been following along with the course sequence starting at 312, 313, 314, and then 321, you're already somewhat familiar with Cottonwood, this machine learning framework. It's worth saying a couple more words about it, not least because I changed it significantly between finishing course 321 and publishing this course. The great part about having code that is so flexible, so research oriented, is that I can make big changes without worrying about continuing to support older versions. Instead, I've opted to clearly communicate that any time you use Cottonwood, you need to specify the particular version you want to use. In this case, I've called out that we're using version 28, so we specifically check out branch v28, which we know this code will run against. (I'll show a quick sketch of this below.) That has the advantage that anyone who comes back to this later, no matter how much time has passed, knows they can pull up v28 of Cottonwood and it will still run. The disadvantage is that they won't have access to anything new and exciting that gets added to Cottonwood afterward. There may be some extra legwork involved in moving this code forward, but at least this particular case study stands as a working example.

The current version is different enough from the previous versions we've used that I want to call a few things out. The first is that I removed a lot of sub-directories. If you look at the structure now, there's the top-level Cottonwood directory, and to package this as a pip-installable library, it has to contain another cottonwood sub-directory, which feels a little awkward to me. There's the license, there's the readme, and the standard setup.py, which hasn't changed. Under the cottonwood directory is all of the meat, all of these blocks: the activation functions, our convolution, the dropout layer, initialization functions, linear layers, loggers, loss functions, normalization options, different operations, operations specific to 2D arrays, our optimizers, our pooling functions, the structure (the data structure itself), a test script, and our toolbox. This right here is the framework, and I'll show what that flat layout means for imports below. I love how this makes it feel small, tight, and accessible. Any time you need to look at the code, it's right there at your fingertips. What I hope is that by having it be so easy to navigate, you'll feel comfortable jumping right into the code and saying, "I'm not sure exactly how to call this. Let me go look at the code and see how it's implemented and what the comments say there." That minimizes the need to document it all again somewhere else.

Also of note is this experimental directory. These are things that are not canon: either things that I've tried out or things that I've implemented from someplace else and I'm not 100% confident pulling into the core set of functionality. So to put an asterisk by them, to flag that caveat, I put them in this experimental sub-directory. The structure visualization works okay, but there are other methods I've seen that give prettier results, and I'm not 100% happy with it, so it lives here. There are some initialization methods that I'm playing around with. My implementation of online normalization, which we'll talk about later in the course, is here because it deviates from the original paper in a non-trivial way. And there are a couple of other things here because they are experimental to a certain extent.
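First, the version pinning I mentioned a minute ago. Here's a minimal sketch of what checking out v28 looks like. The repository address shown is my best guess, so treat it as an assumption and check the course notes for the canonical one; the clone, checkout, install pattern is the part that matters.

```python
# A minimal sketch of pinning Cottonwood to version 28. The repository
# address is an assumption; the clone / checkout / install pattern is
# the part that matters.
#
#   git clone https://gitlab.com/brohrer/cottonwood.git
#   cd cottonwood
#   git checkout v28
#   pip install -e .

import cottonwood  # from here on, everything runs against the v28 code
```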
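And here's what the flattened layout means in practice: each file in the listing above becomes a top-level module, one import away. The module names below follow that listing, but the specific class names are assumptions for illustration, so look inside the modules themselves for the real ones.

```python
# Imports map one-to-one onto the flat file layout described above.
# Module names follow the file listing; class names are assumptions.
from cottonwood.activation import Tanh        # activation functions
from cottonwood.linear import Linear          # linear layers
from cottonwood.optimizers import Momentum    # optimizers
from cottonwood.structure import Structure    # the structure itself
```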
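The experimental pieces show up in the import path too, which I like: reaching for one is a visible, deliberate choice rather than something you stumble into. Again, the module and class names here are assumptions for illustration.

```python
# Anything experimental is conspicuous right in the import statement.
# The module and class names below are assumptions for illustration.
from cottonwood.experimental.online_normalization import OnlineNormalization
```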
I've had good success with these experimental pieces or they wouldn't be there at all, but they haven't gone through peer review or been beaten on by lots of other people. Another thing I have found really useful is this collection of data sets. Some of them are very small: the 2x2, the 3x3, the blips. They're meant as very beginning data sets, a way to just try things out and make sure everything is working nicely. They're not meant to be challenging in any way. But we also have, for instance, Nordic runes, a small set of characters from the Elder Futhark alphabet, and then MNIST, like we're using in this course, which actually goes and pulls down the MNIST package and creates these blocks. In all of these cases, the data sets are wrapped in data blocks, so that we can conveniently import them, add them to a Cottonwood structure, and begin working with them right away. I expect that this collection will expand significantly in the future.
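To show the pattern, here's a hedged sketch of dropping one of these data blocks into a structure. Every specific name here, the module path, the class, the method for adding a block, is an assumption for illustration; the takeaway is that a data set arrives as a block you connect like any other.

```python
# A sketch of the data-block pattern described above. The module path,
# class name, and method name are all assumptions for illustration.
from cottonwood.structure import Structure
from cottonwood.data.mnist import TrainingData  # hypothetical data block

structure = Structure()
structure.add(TrainingData(), "training_data")  # wired in like any other block
```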