One of the things that distinguishes a neural network framework from a single-point implementation of a neural network is that it's more general. We expect it to be able to construct different types of networks: different numbers of layers, different numbers of nodes, different types of layers. We also expect it to work with different types of inputs. So one step we'll take toward handling a broader variety of inputs is to scale the input so that it falls roughly in the range of plus and minus one. This is called normalization, and there are different strategies for it; the idea is to bring all the interesting behavior of your input into a fixed range. If you're working with a linear model, like linear regression, this doesn't matter. But when you're working with a nonlinear model, the actual values can make a difference, so it's really helpful to have this normalization step.

First we'll go into run_framework.py, the script that runs everything, and for now we'll just hard-code a value range. We created all of our test data ourselves, so we know the lowest value is zero and the highest value is one, and we'll code that in. It's always possible to get fancier: we could estimate the range from some samples of our training set, which would be even more general and would require making even fewer assumptions. But for now we'll set that aside as future work and hard-code it, to keep the code nice and crisp.

Then, as we create our autoencoder, as we initialize an instance of this artificial neural network object, we pass in the input value range as the expected_range keyword argument. Of course, we then have to go to our ANN class and add that expected_range keyword argument there as well. We'll specify a default range of plus and minus one, just in case we don't explicitly define it.
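A rough sketch of that wiring might look like the following. The ANN class and the expected_range keyword come straight from the discussion above; everything else about the constructor (whatever other arguments the real class takes, how the autoencoder's layers are specified) is left out here, so treat this as an assumption-laden outline rather than the framework's actual code:

```python
# Sketch only: the real ANN class does much more than this.
class ANN:
    def __init__(self, expected_range=(-1, 1)):
        # Keep a copy of the expected input range as a member attribute,
        # with plus and minus one as the default.
        self.expected_range = expected_range

# In run_framework.py: we generated the test data ourselves, so we
# know its values fall between 0 and 1. Hard-code that range for now.
input_value_range = (0, 1)
autoencoder = ANN(expected_range=input_value_range)
```

Estimating the range from training samples instead of hard-coding it would only change how `input_value_range` is computed; the constructor call stays the same.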
That's a reasonable default. We'll also make a copy of that argument as the member attribute expected_range. Then, when we go to train and when we go to evaluate, the very first thing we want to do when we get a new example is normalize it. So instead of just peeling an example off the training set and flattening it, we also normalize it: scale it and shift it so that it gets down into our target range. For our purposes here, we chose to get it down between plus and minus 0.5. The reasons for that will become a little clearer later, but it allows the neural network to really do its job and learn the patterns well. So we introduce this normalize step into both the training and evaluation methods, each time we get a new data point.

Now we write the normalize method itself. As we set it up previously, we pass it the values we want to normalize, and we transform them so that, whatever the existing expected range of our inputs is, the result falls between plus and minus 0.5. There are two things we can do: we can add or subtract a number to shift the range higher or lower, and we can divide by a scale factor to make the range smaller or larger, depending on the size of the scale factor. To be really explicit, we'll pull out the minimum and maximum values of our expected range and give them their own names. Our scale factor is the range between them. In our case the minimum value is 0 and the maximum value is 1, so our scale factor is 1, a pretty vanilla scale factor. Our offset is the minimum value, 0. To transform our values, we first subtract the offset. That means whatever the minimum value was, it's now 0; everything is shifted down so that the lowest possible value is 0. Then we divide by the scale factor, so whatever the original range was, it's now 1.
The lowest value is now 0 and the highest value is 1. Then, if we subtract 0.5 from all of those values, the lowest value becomes minus 0.5 and the highest becomes plus 0.5. This completes the scaling and shifting of our inputs, so they fall between minus and plus 0.5. What's cool about this is that it works with inputs of any range. Inputs between 0 and 1 are pretty boring and pretty easy to transform, but the inputs could fall between any pair of numbers, positive or negative. As long as the lower number is actually lower than the higher one, the min lower than the max, this will result in inputs that end up between minus 0.5 and plus 0.5.
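Collecting those steps, the normalize logic might look something like this. It's shown here as a standalone NumPy function for clarity; in the framework it would be a method on the ANN class reading self.expected_range, and the exact names are assumptions:

```python
import numpy as np

def normalize(values, expected_range=(0, 1)):
    """Scale and shift values so that expected_range maps onto [-0.5, 0.5]."""
    # Give the ends of the expected range their own names, to be explicit.
    min_val, max_val = expected_range
    scale_factor = max_val - min_val  # width of the expected range
    offset = min_val                  # shift so the minimum lands at 0
    # Subtract the offset (minimum -> 0), divide by the scale factor
    # (range width -> 1), then subtract 0.5 (range -> [-0.5, 0.5]).
    return (values - offset) / scale_factor - 0.5

# The vanilla case: offset 0, scale factor 1, so we just subtract 0.5.
print(normalize(np.array([0.0, 0.5, 1.0])))
# Any other range works too, including negative values: here the
# offset is -40 and the scale factor is 100.
print(normalize(np.array([-40.0, 0.0, 60.0]), expected_range=(-40, 60)))
```

In both cases the minimum of the expected range maps to exactly -0.5 and the maximum to exactly +0.5, which is the property the network relies on.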