If we go to the Study Emnist Digits repository on GitLab, we can see the files involved, and there's a README right up front that gives a good overview of what's going on. The instructions for getting this up and running are in that README. The very first thing you'll need to do is get Cottonwood installed and running on your machine: go to your command line, clone the repository, pip install it, and make sure to check out the v28 branch. That is the version that was used to develop and run this case study. Cottonwood is not backward compatible, meaning that any new version might include changes that break how it used to work, so each time we use it we have to make sure we get the right version. This is a version 28 project.

If you don't have them already, installing Cottonwood will also install NumPy, Numba, and Matplotlib, along with some other packages you'll need to run this. Another thing it will install is the EMNIST digits dataset. Luckily, there's a convenient Python library for downloading these digits pre-processed, and within Cottonwood we've wrapped them inside a convenient data layer so we can pull them out one at a time: one at a time from the training set during training, and one at a time from the test set during testing. There are 50,000 training examples and 10,000 test examples, that is, 5,000 from each of the 10 digits for training and 1,000 each for testing.

To train a new model from scratch, you can run the train script. When you're done, you can run the test script to evaluate it. It's also set up so that during training it periodically saves out the model in its current state, so you can actually run the test script in another window at any time during training to get a snapshot of how the model is performing on the test data at that point.
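The command-line setup described above might look something like this. The repository URL here is an assumption based on the project's description, and the checkout is done before the editable install so that the v28 code is what actually gets installed; the project's README has the authoritative commands.

```shell
# Clone Cottonwood (URL is an assumption; see the README for the real one).
git clone https://gitlab.com/brohrer/cottonwood.git
cd cottonwood

# Check out the v28 branch first. Cottonwood is not backward compatible,
# and this is a version 28 project.
git checkout v28

# Install it. This also pulls in NumPy, Numba, Matplotlib, and the
# EMNIST digits helper if you don't have them already.
pip install -e .
```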
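The "pull them out one at a time" idea behind the data layer can be sketched as below. This is not Cottonwood's actual API; the class and method names are invented for illustration, and the tiny zero-filled arrays stand in for the pre-processed 28x28 EMNIST digit images.

```python
import numpy as np

class OneAtATimeDataLayer:
    """Hand out one (image, label) example per call, in shuffled order,
    reshuffling and cycling once the whole set has been seen.

    A toy stand-in for the kind of data layer described above.
    """

    def __init__(self, images, labels, seed=None):
        self.images = np.asarray(images)
        self.labels = np.asarray(labels)
        self.order = np.arange(len(self.labels))
        self.rng = np.random.default_rng(seed)
        self.rng.shuffle(self.order)
        self.i = 0

    def next_example(self):
        idx = self.order[self.i]
        self.i += 1
        if self.i == len(self.order):
            # Finished a full pass; start a fresh shuffled pass.
            self.rng.shuffle(self.order)
            self.i = 0
        return self.images[idx], self.labels[idx]

# Tiny demo with fake 28x28 "digits": 6 training images labeled 0 through 5.
train_images = np.zeros((6, 28, 28))
train_labels = np.arange(6)
layer = OneAtATimeDataLayer(train_images, train_labels, seed=0)
image, label = layer.next_example()
print(image.shape)  # (28, 28)
```

During training you would call `next_example()` once per iteration on a layer wrapping the training set, and during testing on a layer wrapping the test set.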
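The save-during-training arrangement can be sketched like this: the training loop periodically writes the model's current state to disk, and a separate evaluation step can load the latest snapshot at any moment. The model class, file name, and save interval here are stand-ins, not the actual train and test scripts.

```python
import pickle
import tempfile
from pathlib import Path

SAVE_EVERY = 100  # iterations between snapshots (an arbitrary choice here)

class TinyModel:
    """Stand-in for the real network: just counts training steps."""
    def __init__(self):
        self.steps_trained = 0

    def train_one_example(self):
        self.steps_trained += 1

def save_snapshot(model, path):
    with open(path, "wb") as f:
        pickle.dump(model, f)

def load_snapshot(path):
    with open(path, "rb") as f:
        return pickle.load(f)

snapshot_path = Path(tempfile.gettempdir()) / "model_snapshot.pkl"

# Training-script side: save the model's current state every SAVE_EVERY
# examples, so a test script in another window can evaluate it mid-training.
model = TinyModel()
for iteration in range(1, 501):
    model.train_one_example()
    if iteration % SAVE_EVERY == 0:
        save_snapshot(model, snapshot_path)

# Test-script side: load whatever snapshot exists at this moment.
snapshot = load_snapshot(snapshot_path)
print(snapshot.steps_trained)  # 500
```

Because the snapshot on disk is always a recent, complete copy of the model, evaluating it at any point gives a valid picture of performance at that stage of training.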