Now, in this task we just want to include a denormalize method that is the complement of our normalize method. You can imagine that if we take an image and bring it into our autoencoder, the autoencoder breaks it down into its component pieces (we'll dig into that much more later) and then reconstitutes something that is as close as it can get to the original image. We can compare our inputs and outputs to see how close we are. In practice we might want to take that output and use it the same way we might use one of the original input images, and in order to do that we need to make sure it's restored to the original scale. So whatever shifting and scaling we did in the normalize step, we want to do the opposite in the denormalize step. This exercise was just a chance to play with that and to think through what the inverse would look like.

So we go back to our ANN class, and right under our normalize method we'll add a denormalize method. Looking at the denormalize method that I wrote here, the first thing that becomes clear is that it's wrong. Not just a little bit, but really wrong. I didn't catch this until I was pretty far through the development process in the course. We don't actually use the denormalize method in this course; it's just something that will be nice to have in a complete framework, so it's included here. I left it broken in order to illustrate what happens when you have a little piece of code that sits off to the side and doesn't have a test associated with it. It wasn't part of any of the examples that I ran or any of the things that I printed out, so it never got visited, and I never got a chance to see that it was broken.

Originally, when I very first wrote it, I think it worked with the rest of the code, but other things evolved, I changed the range I was targeting and a couple of other things, and now it's completely irrelevant. Worse than irrelevant, it looks like it would run and give a wrong answer. The actual corrected denormalize code looks like this, and this is what's now committed to the master branch in this repository.

For the rest of these exercises, this broken denormalize code will be sitting there. I want it to be a cautionary tale. Every time you see it, think: this is what happens if you don't have a test, at least a functional test, for your code. It can be badly broken and you would never know. Now, this is a pretty benign example. Neural networks are complex enough that they hide a lot, and you can have much subtler bugs, much tougher things to catch, going on. This is just to triple underline the importance of having simple test examples where you know exactly what the results should be, and running those on your neural network before you step back and start trying to tune it, scale it up, make it run really fast, and go do amazing things.
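Since the on-screen code isn't reproduced in this transcript, here is a minimal sketch of what a matched normalize/denormalize pair could look like, along with the kind of round-trip check that would have caught the breakage. The class name ANN matches the course, but the parameter names and the specific input and target ranges are illustrative assumptions, not the course's actual values.

```python
import numpy as np


class ANN:
    """Toy container for a matched normalize/denormalize pair.

    The ranges below are assumptions for illustration: raw pixel
    values in [0, 1] get mapped to a network-friendly [-0.5, 0.5].
    """

    def __init__(self, input_min=0.0, input_max=1.0,
                 target_min=-0.5, target_max=0.5):
        self.input_min = input_min
        self.input_max = input_max
        self.target_min = target_min
        self.target_max = target_max

    def normalize(self, values):
        # Shift and scale raw inputs into the target range.
        scale = ((self.target_max - self.target_min)
                 / (self.input_max - self.input_min))
        return (values - self.input_min) * scale + self.target_min

    def denormalize(self, transformed_values):
        # The exact inverse: undo the shift and scale above,
        # restoring outputs to the original input range.
        scale = ((self.target_max - self.target_min)
                 / (self.input_max - self.input_min))
        return (transformed_values - self.target_min) / scale + self.input_min


# A minimal functional test: a round trip should recover the input.
# This is exactly the kind of check the broken version never had.
ann = ANN()
original = np.array([0.0, 0.25, 0.5, 1.0])
restored = ann.denormalize(ann.normalize(original))
assert np.allclose(original, restored)
```

The point of the assertion at the bottom is the lesson of this exercise: if normalize's range ever changes and denormalize doesn't change with it, a one-line round-trip test like this fails immediately, instead of the bug sitting unnoticed off to the side.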