Another thing I have found really useful is a collection of tests. When you're working in Cottonwood and make a small change, it's not always clear that that change is going to work with everything else. There are enough moving pieces here that it can break: if you use it with a different data set or in combination with a different architecture, it might not behave well. So in order to do a very rough first-pass test of everything, there is a collection of what are called smoke tests. It's as if we had a device, we plugged it in, and we watched to make sure that no smoke came out. That at least tells us we haven't done anything horrendously wrong right off the bat. It doesn't tell us that everything is working perfectly, and it doesn't guarantee future performance, but it says, hey, it's not a bad start.

These smoke tests take the major classes, put them together into structures, and run them. One, you can make sure they don't crash. Two, you can look at the results and see if they're reasonable, but none of them are meant to achieve any kind of good performance. They train for a very short period of time; they just give you a chance to look and make sure everything is running. So when you finish making changes, you can step back out to the cottonwood directory above this and run tests.py, and it'll step through and run each of these sequentially. As a very rough first pass, if none of them crashes, then you can keep going.

I expect that the single most helpful piece of this will be the cheat sheet, or at least that's what I intended. Knowing where everything is and how it's connected is the most important thing for me when I'm building a new structure from scratch, so I tried to list that out here.
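The smoke-test pattern described above can be sketched in a few lines. This is a minimal, hypothetical stand-in for what a tests.py runner might look like, not Cottonwood's actual test script: each test just exercises some code path, and the runner only cares whether anything crashes.

```python
# Minimal sketch of a sequential smoke-test runner. The test function
# here is a hypothetical stand-in; real smoke tests would build small
# structures and train them briefly.

def run_smoke_tests(tests):
    """Run each test in order. A smoke test only checks that nothing
    crashes, not that performance is any good."""
    failures = []
    for test in tests:
        try:
            test()
            print(f"ok    {test.__name__}")
        except Exception as err:
            failures.append((test.__name__, err))
            print(f"SMOKE {test.__name__}: {err}")
    return failures


def smoke_test_forward_pass():
    # Stand-in for building a tiny structure and running it once.
    outputs = [0.5 * x for x in range(8)]
    assert len(outputs) == 8  # it ran without crashing; that's the bar


if __name__ == "__main__":
    failed = run_smoke_tests([smoke_test_forward_pass])
    print(f"{len(failed)} failures")
```

Running it prints one line per test and a failure count, which is exactly the "plug it in and watch for smoke" check: a pass means nothing is horrendously wrong, not that the results are good.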
For the structure, these are the things we'll need, all of which we used in our case study: adding a new block, connecting blocks into a sequence, the forward and backward pass, how to remove something when we're done with it and need to get it off the table, and saving and loading a structure. All the basics. The rest of this is kind of like a library, a dictionary of all the different building blocks you have to work with: the different neural network layers you can play with, including normalization and sparsification, also convolution and pooling; the different activation functions; different loss functions; types of initializers; operations on flat data and in 2D; different optimizers; a listing of the different tools, most of which we've seen; and then the data blocks we were just looking at.

My hope is that this is the page you have up when you're creating a new structure and want to see what all is available. That's how I use it. Then when I need to know, oh, for this resample, what was the order of the arguments, I know I can jump right down to the Cottonwood directory, to the operations 2D module, pull up the resample code, and see it right there. That way I have it straight from the ground truth, the code itself: I can see what arguments it expects and what format they need to be in.
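If you'd rather not leave Python to check a signature against the source, the standard-library inspect module can pull the same ground truth programmatically. The resample function below is a hypothetical stand-in for the one in the 2D operations module; the same calls work on the real function once it's imported.

```python
# Reading a function's argument order and source straight from the
# code itself, using only the standard library. "resample" here is a
# hypothetical stand-in, not Cottonwood's actual function.
import inspect

def resample(image, new_rows, new_cols):
    """Stand-in for a 2D resampling operation."""
    return image

# The signature shows the arguments and their order.
print(inspect.signature(resample))  # (image, new_rows, new_cols)

# getsource() returns the full definition, docstring and all.
print(inspect.getsource(resample))
```

This is handy when the question is just "what order do the arguments go in," while opening the module itself is still the way to see what format each argument needs to be in.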