Hi everybody, and welcome to lesson 17 of Practical Deep Learning for Coders. I'm really excited about what we're going to look at over the next lesson or two. It's actually been turning out really well, much better than I could have hoped, so I can't wait to dive in. Before I do, I'm just going to mention a couple of minor changes that I made to our miniai library this week. One was that I went back to our Callback class in the learner notebook, and I did decide in the end to add a dunder __getattr__ to it that just adds these four attributes: model, opt, batch and epoch, and passes them down to self.learn. So in a callback, you'll be able to refer to model to get self.learn.model; opt will be self.learn.opt; batch will be self.learn.batch; and epoch will be self.learn.epoch. You can change these: you could subclass the callback and add your own attributes to _fwd, or remove things from _fwd, or whatever, but I felt like these four things I access a lot, and I was sick of typing self.learn. And then I added one more property, which is that in a callback there'll be a self.training, which saves you from typing self.learn.model.training. Since we have model, you can already get rid of the learn, but still, you so often have to check whether you're training that now you can just use self.training in a callback. So that was one change I made. The second change was that I found myself getting a bit bored of adding TrainCB every time. So what I did was I took the four training methods from the MomentumLearner subclass and moved them into a TrainLearner subclass, along with zero_grad. So now MomentumLearner actually inherits from TrainLearner and just adds momentum: there's the quirky momentum method, and it changes zero_grad to do the momentum thing. So we'll be using TrainLearner quite a bit over the next lesson or two. TrainLearner is just a learner which has the usual training: it's exactly the same as what fastai has, or what you'd have in most PyTorch training loops. And obviously by using this, you lose the ability to change those steps with a callback, so it's a little bit less flexible. Okay, so those are little changes. And then I made some changes to what we looked at last week, which is the activations notebook. Specifically, I added a HooksCallback. Previously we had a Hooks class, and it didn't really require too much ceremony to use, but I thought we could make it even simpler, and a bit more fastai-ish or miniai-ish, by putting hooks into a callback. With this callback, as usual, you pass it a function that's going to be called for your hook, and you can optionally pass it a filter as to which modules you want to hook. Then in before_fit it filters the modules in the learner; and this is one of those things we can now get rid of: we don't need the .learn here at all, since model is one of the four things we have a shortcut to. Then we create the Hooks object and store it in self.hooks. One thing that's convenient here is the hook function: you don't have to worry about checking in your hook functions whether you're in training or not (and we can get rid of learn.model too). It always checks whether we're in training, and if so it calls the hook function you passed in; after fitting finishes, it removes the hooks. And you can iterate through the hooks and get the length of the hooks, because it just passes iteration and length down to self.hooks. So to show you how this works, we can create a hooks callback.
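Roughly, that callback looks like this (a sketch based on the description above; the Callback base class and Hooks class come from the earlier miniai notebooks, and the exact signatures there may differ slightly):

```python
class HooksCallback(Callback):
    # hookfunc is called as hookfunc(hook, module, inp, outp);
    # mod_filter decides which modules get hooked (default here: all of them)
    def __init__(self, hookfunc, mod_filter=lambda m: True):
        self.hookfunc, self.mod_filter = hookfunc, mod_filter

    def before_fit(self):
        # self.model is proxied to self.learn.model by Callback.__getattr__
        mods = [m for m in self.model.modules() if self.mod_filter(m)]
        self.hooks = Hooks(mods, self._hookfunc)

    def _hookfunc(self, *args, **kwargs):
        # only record stats while training
        if self.training: self.hookfunc(*args, **kwargs)

    def after_fit(self): self.hooks.remove()
    def __iter__(self): return iter(self.hooks)
    def __len__(self): return len(self.hooks)
```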
We can use the same append_stats function, and then we can run the model. And as it's training, here we go: we just added that as an extra callback to our fit function. I don't remember if we had the extra callbacks argument before; I'm not sure we did. So just to explain: I added an extra callbacks parameter here in the fit function, and we just add any extra callbacks there. So now that we've got that callback we created, because we can iterate through it and so forth, we can just iterate through the callback as if it were the hooks and plot in the usual way. So that's a convenient little thing I added, I think. Okay. And then I took our colorful dimension stuff, which Stefano and I came up with a few years ago, and decided to wrap all that up in a callback as well. So I've actually subclassed our HooksCallback here to create ActivationStats. What that's going to do is use this append_stats, which appends the means, the standard deviations and the histograms. And I changed that very slightly: the thing which creates these dead plots, I changed it to just take the ratio of the very first, very smallest histogram bin to the rest of the bins, so these now show units that are really very dead, and these graphs look a little bit different. Okay. So I subclassed the HooksCallback and added a color_dim method, a dead_chart method and a plot_stats method. To see them at work: if we want to get the activations on all of the convs, we train our model, and then we can just make the calls. We've created our ActivationStats, we've added that as an extra callback, and then we can call color_dim to get that plot, dead_chart to get that plot, and plot_stats to get that plot. So now we have absolutely no excuse for not getting all of these really fantastic, informative visualizations of what's going on inside our model, because it's literally as easy as adding one line of code and putting it in your callbacks. I really think it couldn't be easier. And even for models you thought were training really well, why don't you try using this? Because you might be surprised to discover that they're not. Okay. So those are some changes, pretty minor, but hopefully useful. And so today, and over the next lesson or two, we're going to try to get to an important milestone, which is to get Fashion-MNIST training to an accuracy of 90% or more. That's certainly not the end of the road, but it's not bad. If we look at Papers with Code, 90% would be a 10% error; there are folks that have got down to 3% or 4% error at the very best, which is very impressive, but 10% error wouldn't be way off what's on this leaderboard. I don't know how far we'll get eventually, but without using any architectural changes even, no resnets or anything, we're going to try to get into that 10% error range. All right. So the first few cells are just copied from earlier, and here's our ridiculously simple model. All I did here was say: okay, the very first convolution is looking at a three by three patch of a one-channel input, so nine values, and we should compress that at least a little bit, so I made it eight channels output for that convolution. And then I just doubled it to 16, doubled it to 32, doubled it to 64.
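In code, the model is roughly this (a sketch; the conv helper and the kernel size follow the earlier notebooks, so treat the details as approximate):

```python
import torch.nn as nn

def conv(ni, nf, ks=3, stride=2, act=True):
    # a stride-2 conv roughly halves the grid size; optionally follow with a ReLU
    layers = [nn.Conv2d(ni, nf, kernel_size=ks, stride=stride, padding=ks//2)]
    if act: layers.append(nn.ReLU())
    return nn.Sequential(*layers)

def get_model():
    return nn.Sequential(
        conv(1, 8),               # 28x28 -> 14x14
        conv(8, 16),              # 14x14 -> 7x7
        conv(16, 32),             # 7x7   -> 4x4
        conv(32, 64),             # 4x4   -> 2x2
        conv(64, 10, act=False),  # 2x2   -> 1x1, 10 classes
        nn.Flatten())
```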
And so that's going to get to, as I say, a 14 by 14 image, then seven by seven, then four by four, then two by two, and then this last one gets us to a one by one, from which of course we get the 10 classes. So there was no real thought at all behind this architecture; it's just a pure convolutional architecture. And remember, this Flatten at the end is necessary to get rid of the unit axes that we end up with, because the output is one by one. Okay, so let's do a learning rate finder on this very simple model. What I found was that this situation is so bad that when I tried to use the learning rate finder in the usual way, which would be to start at, say, 1e-5 or 1e-4 and then run it, it looked kind of ridiculous; it was impossible to see what's going on. So if you remember, we added that multiplier: we called it lr_mult, or gamma is what they call it in PyTorch, so we ended up calling it gamma. I dialed that way down to make it much more gradual, which means I had to dial up the starting learning rate, and only then did I manage to get the learning rate finder to tell us anything useful. Okay, so there we are: that's our learning rate finder. I'm going to come back to these three later. So I tried using a learning rate of 0.2, and after trying a few different values (0.4, 0.1), 0.2 seems about the highest we can get up to. Even this is actually too high, I found, but much lower and it didn't train much at all. You can see what happens: it starts training, and then we lose it, which is unfortunate. And you can see that in the colorful dimension plot: we get this classic pattern of activations growing and then crashing, growing and then crashing. And you can see the key problem here, really, is that we don't have zero-mean, standard-deviation-one layers at the start, so we certainly don't keep them throughout. And this is a problem. Now, something I've got to mention, by the way, about training stuff in Jupyter notebooks (this is just a new thing we've added): you can easily run out of GPU memory, and it turns out there are two particular reasons why, if you run a few cells in a Jupyter notebook. The first is that, for your convenience (you may or may not know this), Jupyter actually stores the results of your previous few evaluations. If you just type underscore, it gives you the very last thing you evaluated, and you can add more underscores to go backwards further in time; or you can also use numbers, so to get Out[16], for example, you'd type _16. The reason this is an issue is that if one of your outputs is a big CUDA tensor and you've shown it in a cell, that's going to keep that GPU memory basically forever. So that's a bit of a problem, and if you are running out of memory, one thing you'd want to do is clean out all of those underscore-whatever things. I found that there's actually a function that nearly does that in the IPython source code, so I copied the important bits out of it and put it in here. So if you call clean_ipython_hist (don't worry about the lines of code at all), this is just a thing that you can use to get back that GPU memory.
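For reference, the heart of that helper looks something like this (a sketch; the version in the notebook is copied more directly from IPython's own source, so details may differ):

```python
import gc

def clean_ipython_hist():
    # clear IPython's cached outputs (_, __, ___, _N) and input history
    # so that any big CUDA tensors they reference can be freed
    if 'get_ipython' not in globals(): return
    ip = get_ipython()
    user_ns = ip.user_ns
    ip.displayhook.flush()               # drops the _N output cache
    pc = ip.displayhook.prompt_count + 1
    for n in range(1, pc): user_ns.pop('_i'+repr(n), None)
    user_ns.update(dict(_i='', _ii='', _iii=''))
    hm = ip.history_manager
    hm.input_hist_parsed[:] = [''] * pc
    hm.input_hist_raw[:] = [''] * pc
    hm._i = hm._ii = hm._iii = hm._i00 = ''
    gc.collect()
```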
The second thing, which Peter figured out in the last week or so, is that if you have a CUDA error at any point, or indeed any kind of exception at any point, then the exception object is actually stored by Python, and any tensors that were allocated anywhere in that traceback will stay allocated basically forever. And again, that's a big problem. So I created this clean_tb function, based on Peter's code, which gets rid of that. This is particularly problematic because if you have a CUDA out-of-memory error and then you try to rerun your code, you'll still have a CUDA out-of-memory error, because all the memory that was allocated before is now held by that traceback. So basically, any time you get a CUDA out-of-memory error, or any kind of error involving memory, you can call clean_mem. That will clean the memory held in your traceback, clean the memory used in your Jupyter history, do a garbage collect, and empty the CUDA cache, and that should give you a totally clean GPU; you don't have to restart your notebook. Now, Sam asked a very good question in the chat, so just to remind you all: he's asking, I thought we were training an autoencoder, or are we training a classifier, or what? We started on this autoencoder back in notebook eight, and we decided we don't have the tools to make it work yet, so let's go back, create the tools, and then come back to it. In creating the tools, we're doing a classifier: we're trying to make a really good Fashion-MNIST classifier. We're trying to create tools which, we hope, will have the side effect of giving us a really good classifier, and then using those tools will allow us to create a really good autoencoder. So yes, we're gradually unwinding, and we'll come back to where we were actually trying to get to. That's why we're doing this classifier; the techniques and library pieces we're building will all be very necessary. Okay, so why do we need zero mean and standard deviation one? That is: A, why do we need that, and B, how do we get it? First, on the why. If you think about what a neural net does (a deep neural net specifically), it takes an input and puts it through a whole bunch of matrix multiplications; of course there are activation functions in between, but don't worry about the activation functions, they don't change the argument. So let's just imagine we start with some matrix. Imagine a 50-layer-deep neural net: if we ignore the activation functions, it's basically taking the previous input and doing a matrix multiply by some, initially random, weights. So these are just a bunch of random weights, and the inputs are random numbers with mean zero and variance one. And if we run this, after 50 rounds of multiplying by a matrix, by a matrix, by a matrix, we end up with NaNs. That's no good. So it might be that the numbers in our matrix are too big: each time we multiply, the numbers get bigger and bigger, so maybe we should make them a bit smaller. Okay, so in the matrix we're multiplying by, let's try scaling the values by 0.01, and multiply that lots of times. Oh, now we've got zeros. Now, mathematically speaking, the first result isn't actually NaN, it's some really big number; and mathematically speaking, this one isn't really zero, it's some really small number.
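That experiment is easy to reproduce (a sketch; the matrix sizes here are just illustrative):

```python
import torch

x = torch.randn(200, 100)   # inputs: mean ~0, std ~1
for i in range(50): x = x @ torch.randn(100, 100)
print(x.mean(), x.std())    # nan: the values blew up past float range

x = torch.randn(200, 100)
for i in range(50): x = x @ (torch.randn(100, 100) * 0.01)
print(x.mean(), x.std())    # 0., 0.: the values underflowed to zero
```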
But computers can't handle really, really small numbers or really, really big numbers. Really big numbers eventually just get called NaN, and really small numbers eventually just get called zero; they get washed out. In fact, even if you don't get a NaN, or don't quite get a zero, for numbers that are extremely big the internal representation has no ability to discriminate between even fairly similar numbers: the way floating point is stored, the further you get from zero, the less precise the numbers are. So this is a problem. We have to scale our weight matrices exactly right, in such a way that the standard deviation at every point stays at one and the mean stays at zero. There's actually a paper that describes how to do this for multiplying lots of matrices together, and it went through what is actually pretty simple math. Let's see, what did they do? They looked at gradients and the propagation of gradients, and they came up with a particular weight initialization: a uniform distribution with one over root n as the bounds. And they studied what happened with various different activation functions. As a result, we now have this way of initializing neural networks, which is called either Glorot initialization or Xavier initialization. The amount we scale our random initialization numbers by is 1 / sqrt(n_in), where n_in is the number of inputs. In our case we have 100 inputs, and root 100 is 10, so one over 10 is 0.1. And if we actually run that, starting with our random numbers and multiplying by random numbers times 0.1 (that's the Glorot initialization), you can see we do end up with numbers that are actually reasonable. So that's pretty cool. Now, just some background, in case you're not familiar with some of these details. What exactly do we mean by variance? Take a tensor, let's call it t, containing 1, 2, 4 and 18. The mean is simply the sum divided by the count, which is 6.25. Now we want to come up with a measure of how far away each data point is from the mean; that tells you how much variation there is. If all the data points are very similar to each other, so you've got a whole bunch of points that are all pretty close together, then the mean would be about in the middle, and the average distance of each point from the mean is not very far. Whereas if you had dots that were very widely spread all over the place, you might end up with the same mean, but the distance from each point to the mean is now quite a long way. So that's what we want: some measure of how far away the points are, on average, from the mean. We could try this: take our tensor, subtract the mean, and then take the mean of that. Ah, well, that doesn't work, because we've got some numbers bigger than the mean and some smaller, so if you average them all out, then by definition you get zero. So instead you could either square those differences (and then take the square root afterwards, if you want to get back onto the same scale), or you could take the absolute differences. I'm doing this in two steps here: for the first one, here it is on a different scale, and then the square root gets it back onto the same scale. So 6.87 and 5.88: they're quite similar, right? They're not mathematically the same, but they're both similar ideas. This one is the mean absolute difference; this one is called the standard deviation, and its square is called the variance. The reason the standard deviation is bigger than the mean absolute difference is that in our original data, one of the numbers is much bigger than the others, and when we square it, that number ends up having an outsized influence. That's a bit of an issue in general with standard deviation and variance: outliers have an outsized influence, so you've got to be a bit careful. Okay, so here's the formula for the standard deviation, which is normally written as sigma: sigma = sqrt(((x_1 - mu)^2 + (x_2 - mu)^2 + ... + (x_n - mu)^2) / n). That is, each data point minus the mean, squared, summed over all the data points, divided by the number of data points, and then square-rooted. One thing I'll point out here is that the mean absolute deviation isn't used as much as the standard deviation, because mathematicians find it difficult to work with; but we're not mathematicians, we have computers, so we can use it. Now, variance we can calculate as we said: the mean of the squared differences. And if you feel like doing some math, you can discover that this is exactly the same as the mean of the squared data points minus the square of the mean of the data points: E[X^2] - (E[X])^2. This is actually nice, because it means you never have to calculate the differences from the mean: with just the data points on their own, you can calculate the variance. It's a really nice shortcut, and it's how we normally calculate variance. There's the LaTeX version too, which of course I didn't write myself; I stole it from Wikipedia, because I'm lazy. Now, there's a very, very similar idea, which is covariance. This has already come up a little bit in the first lesson or two, and particularly in the extra math lesson, and covariance tells you how much two things vary, not just on their own, but together. There's a definition here in math, but I like code, so we'll see the code. Here's our tensor again. Now we're going to want two things, so let's create something called u, which is just two times our tensor, with a bit of randomness added. Here it is: you can see that u and t are very closely correlated, but not perfectly correlated. The covariance tells us how they vary together. You can see it's exactly the same thing we had before, each data point minus its mean, but now we've got two different tensors, so we also do the same for the other one (the other data points minus their mean), and we multiply them together. So it's the same idea as the variance; in a sense, variance is just the covariance of a thing with itself, right?
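In code, the whole discussion so far looks like this (a sketch recapping the above; the amount of noise added to u is an assumption):

```python
import torch

t = torch.tensor([1., 2., 4., 18.])
m = t.mean()                            # 6.25

mad = (t - m).abs().mean()              # mean absolute difference: ~5.88
var = (t - m).pow(2).mean()             # variance
std = var.sqrt()                        # standard deviation: ~6.87

# the shortcut: E[X^2] - (E[X])^2 gives the same variance
var2 = (t * t).mean() - m * m
print(torch.isclose(var, var2))         # tensor(True)

# covariance: how two tensors vary together
u = t * 2 + torch.randn_like(t)         # closely related to t, plus a bit of noise
prod = (t - t.mean()) * (u - u.mean())  # its mean will be the covariance
```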
So that's a product we can calculate, and then we take the mean of it, and that gives us the covariance between those two tensors; you can see it's quite a high number. Compare that to two things that aren't related at all: let's create a totally random tensor v, not related to t, and do exactly the same thing, taking the differences of t from its mean and v from its mean, and taking the mean of the product. That's a very small number. So you can see covariance is basically telling us how related these two tensors are. Covariance and variance are basically the same thing: you can think of variance as being covariance with itself. And you can change this mathematical version, which is the one we just created in code, into an easier-to-calculate version, just as we did for variance, which as you can see gives exactly the same answer. Okay, so if you haven't done much with covariance before, you should experiment a bit with it by creating a few different plots. And finally, the Pearson correlation coefficient, which is normally called r or rho, is just the covariance divided by the product of the standard deviations. You've probably seen that number many times; it's just a scaled version of the same thing. Okay, so with that in mind, here is how Xavier init, or Glorot init, is derived. When you do a matrix multiplication, for each of the y_i's we're adding together all of these products: a_i,0 times x_0, plus a_i,1 times x_1, and so on. We can write that in sigma notation: y_i is the sum over k of a_i,k times x_k. This is the stuff we did in our first lesson of part 2, and here it is in pure Python code, and here it is in NumPy code. Now, at the very beginning, our vector has a mean of about 0 and a standard deviation of about 1, because that's what we asked for: that's what randn gives us, a mean of 0 and a standard deviation of 1. So let's create some random numbers, and we can confirm they have a mean of about 0 and a standard deviation of about 1. If we choose weights for a that have a mean of 0, we can compute the standard deviation of the output quite easily. So let's do that 100 times: create our x, create something to multiply it by, do the matrix multiplication, and record the mean and the mean of the squares. The mean comes out very close to 0, and the mean of the squares comes out very close to 100, our number of inputs. I won't go into the full derivation (you can look at it if you like), but basically, as long as the elements in a and x are independent, which obviously they are because they're random, we end up with a mean of 0 and a standard deviation of 1 for each of these products. And we can try it: create a normally distributed random number and a second one, multiply them together, do it a bunch of times, and you can see we've got our 0, 1. So that's the reason we need this math.sqrt(100): we're adding up 100 such products, so the variance of the sum is about 100, and dividing by root 100 brings the standard deviation back to 1. We don't normally worry about exactly why things are the way they are, mathematically, but I thought I'd dive into this one because sometimes it's fun to go through it. You can check out the paper if you want to look at it in more detail, or experiment with these little simulations.
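Those simulations look roughly like this (a sketch; the sample sizes are arbitrary):

```python
import math
import torch

# products of two independent standard normals have mean ~0 and std ~1
a, x = torch.randn(10000), torch.randn(10000)
print((a * x).mean(), (a * x).std())   # ~0, ~1

# one output of a 100-input matmul sums 100 such products (variance ~100),
# so scaling the weights by 1/sqrt(100) restores std ~1: the Glorot init
w = torch.randn(100) / math.sqrt(100)
y = torch.randn(10000, 100) @ w
print(y.mean(), y.std())               # ~0, ~1
```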
Now, the problem is that this doesn't work for us, because we use rectified linear units, which is not something that Xavier Glorot looked at. Let's take a look. Let's create a couple of matrices: this one is 200 by 100, and this is just a vector (well, a matrix and a vector) of length 200. Then let's create two weight matrices and two bias vectors. So we've got some input data, x's and y's, and some weight matrices and bias vectors. Let's create a linear layer function, which we've done lots of times before, and start going through a little neural net; I should mention this is the forward pass of our neural net. We apply our linear layer to the x's with our first set of weights and our first set of biases, and check the mean and standard deviation: it's about 0 and about 1, so that's good news. The reason is that we have 100 inputs and we divided by square root of 100, just like Glorot told us to; and our second layer has 50 inputs and we divide by square root of 50. So this all ought to work, and so far it does. But now we're going to mess everything up by doing a ReLU. After the ReLU, look: we don't have a zero mean or a standard deviation of one anymore. And if we go through and create a deep neural network with Glorot initialization, but with ReLUs: oh dear, it's all gone to zero. You can see why: after each matrix multiply and ReLU, the means and variances keep going down, and of course they're going down because a ReLU squishes away everything below zero. I'm not going to worry about the math of why, but a very important paper indeed, called "Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification", by Kaiming He et al., came up with a new init which is just like Glorot initialization, except that where Glorot used 1 over root n, this one uses root of 2 over n, and again n is the number of inputs. So let's try it: we've got 100 inputs, so we multiply by root of 2 over 100, and there we go, you can see we are in fact getting some non-zero numbers, even after going through 50 layers of depth, which is very encouraging. This is called either Kaiming initialization or He initialization; and notice that although it's written "He", it's a Chinese surname, so it isn't pronounced like the English word "he". Maybe that's why a lot of people increasingly call it Kaiming initialization: then you don't have to say his surname, which is a little bit harder to pronounce. All right, so how on earth do we actually use this, now that we know what initialization function to use for a deep neural network with a ReLU activation function? The trick is to use a method called apply, which all nn.Modules have. If we grab our model, we can apply any function we like; for example, let's apply a function that prints the name of the type. You can see it goes through and prints all of the modules inside our model; and notice that our model has modules inside modules (a conv inside a sequential), but model.apply goes through all of them regardless of their depth. So we can apply an init function: a function which simply sets the weights to normally distributed random numbers times square root of 2 over the number of inputs. That's such an easy thing that it's not even worth writing ourselves, because it's already been written: it's called init.kaiming_normal_. As we've seen before, if there's an underscore at the end of a PyTorch method name, that means it changes something in place, so init.kaiming_normal_ will modify this weight matrix in place, initializing it with normally distributed random numbers scaled by root of 2 divided by the number of inputs.
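Put together, that looks something like this (a sketch; get_model is the model sketched earlier):

```python
import torch.nn as nn
from torch.nn import init

def init_weights(m):
    # only conv and linear layers have weight matrices to initialize this way
    if isinstance(m, (nn.Conv1d, nn.Conv2d, nn.Conv3d, nn.Linear)):
        init.kaiming_normal_(m.weight)

model = get_model()
model.apply(init_weights)  # recurses through every submodule and returns the model
```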
Now, you can't do that to a sequential layer, or a ReLU layer, or a flatten layer, so we should check that the module is a conv or linear layer, and then we can just say model.apply with that function. And if we do that, I can now use the learning rate finder callbacks that we created earlier; and this time we can create our own, because we don't even need the weird gamma thing anymore. So let's go back and copy that, get rid of the gamma equals 1.1 (it shouldn't be necessary anymore), and we can probably make that 4 now. Oh, I need to recreate the model first; there we go. Okay, so that's looking much more sensible: at least we've got to a point where the learning rate finder works, which is a good sign. So now when we create our learner, we're still going to use our MomentumLearner, but after we get the model we will apply init_weights; and apply also returns the model, so this is actually going to return the model with the initialization applied. While I wait for it to train, I'll answer questions. Fabrizio asks: why do we double the number of filters in successive convolutions? What's happening is that each of these is a stride-2 convolution, so each one changes the grid size, for example from 28 by 28 to 14 by 14, which reduces the size of the grid by a factor of 4 in total. So as we go from 8 channels to 16 channels, from this one to this one, same deal: we're going from 14 by 14 to 7 by 7, reducing the grid size by a factor of 4. We want it to learn something, and if you give it exactly the same number of activations, you're not really forcing it to learn very much. So ideally, as we decrease the grid size, we want to have enough channels that you end up with a few less activations than before, but not too many less. If we double the number of channels, that means we've decreased the grid size by a factor of 4 and increased the channel count by a factor of 2, so overall the number of activations has decreased by a factor of 2. And that's what we want: we want to force it to find ways of compressing the information intelligently as it goes down. We also want a roughly similar amount of compute through the neural net: as we decrease the grid size, we can add more channels, because decreasing the grid size decreases the amount of compute, and increasing the channels then gives it more things to compute. So we're getting a nice compromise between the amount of compute it's doing and giving it some compression work to do. That's the basic idea. Well, it's still not able to train well... okay, if we leave it for a while... it's not great, but it is actually starting to train, which is encouraging, and we got up to 70% accuracy. But, not surprisingly, we're getting these spikes; and in the statistics you can see that it didn't quite work: we don't have a mean of zero, we don't have a standard deviation of one, even at the start. Why is that? Well, it's because we forgot something critical. Going back to our original point (let's look at the Kaiming version): even when we had a correctly normalized matrix to multiply by, you also have to have a correctly normalized input matrix, and we never did anything to normalize our inputs. If we get the first x mini-batch again and check its mean and standard deviation, it has a mean of 0.28 and a standard deviation of 0.35. So we didn't even start with a 0, 1 input: we started with a mean above zero and a standard deviation beneath one, so it was very hard for the model. Using the init helped (at least we were able to train a little bit), but it's not quite what we want: we need to modify our inputs so they have a mean of zero and a standard deviation of one. So we can create a callback to do that. Let's create a BatchTransformCB: we pass in a function that's going to transform every batch, and in before_batch we simply set the batch to be the function applied to the batch. Note, by the way, that we don't need self.learn.batch on the right-hand side, because batch is one of the four things we proxy down to the learner automatically; but we do need it on the left-hand side, because the proxying only happens in __getattr__, so be very careful. Actually, I might just leave it the same on both sides, so that people don't get confused. Okay, so let's create a function _norm that subtracts the mean and divides by the standard deviation. Remember, a batch has an x and a y; it's the x part where we subtract the mean and divide by the standard deviation, and the new batch will be that as the x, with the y exactly as it was before. So let's create an instance of the BatchTransformCB with the normalization function, call it norm, and pass it as an additional callback to our learner. And now that's looking a lot better. You can see that all we had to do was make sure our input matrix had mean zero and standard deviation one, and that all our weight matrices were correctly scaled, and without any other tricks at all it was able to train, and got to an accuracy of 85%. And if we look at color_dim and the stats: look at this, it looks beautiful now. This is layer 1, and here are layers 2, 3 and 4. It's still not perfect: there's some randomness, and we've got, what, 7 or 8 layers, so that randomness compounds as you go through the layers, and by the last one it still gets a bit ugly; you can see it bouncing around here as a result, and you can see that also in the means and standard deviations. There are some other reasons this is happening, which we'll see in a moment, but this is the first time we've really got an even somewhat deep convolutional model to train. So this is a really exciting step: we have, from scratch, in a sequence of 11 notebooks, managed to create a real convolutional neural network that is training properly, and I think that's pretty amazing. Now, we don't have to use a callback for this. The other thing we could do to modify the input data, of course, is to use the with_transform method from the Hugging Face datasets library. So we could modify our transform function to also subtract the mean and divide by the standard deviation, and then recreate our data loaders; and if we now get a batch out of that and check it, it's got, yep, a mean of zero and a standard deviation of one. So we could also do it this way. Generally speaking, for stuff that needs to dynamically modify the batch, you can often do it either in your data processing code or in a callback. Neither is right or wrong; they both work well, and you can use whichever works best for you.
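The callback route looks roughly like this (a sketch; here the statistics are taken from each batch itself for simplicity, whereas you could equally use fixed statistics computed from the training set):

```python
class BatchTransformCB(Callback):
    # apply a function to every batch before the training step sees it
    def __init__(self, tfm): self.tfm = tfm
    def before_batch(self): self.learn.batch = self.tfm(self.learn.batch)

def _norm(b):
    xb, yb = b
    return (xb - xb.mean()) / xb.std(), yb

norm = BatchTransformCB(_norm)
# then pass it along as an extra callback, e.g. learn.fit(epochs, cbs=[norm])
```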
Okay, now I'm going to show you something amazing. It's great that this is training well, but when you look at our stats, despite what we did with the normalized inputs and the normalized weight matrices, we don't have a mean of zero and we don't have a standard deviation of one, even from the start. Why is that? Well, the problem is that we were putting our data through a ReLU, and our activation stats are looking at the output of those ReLU blocks, because that's the end of each combination of matrix multiplication and activation function. And since a ReLU removes all of the negative numbers, it's impossible for the output of a ReLU to have a mean of zero, unless literally every single number is zero: a ReLU output has no negatives. So ReLU seems to me to be fundamentally incompatible with the idea of a correctly calibrated bunch of layers in a neural net. So I came up with this idea: why don't we take our normal ReLU and give it the ability to subtract something from its output? We just take the result of our ReLU and subtract; I can write this in a more obvious way, it's exactly the same as just minus equals. Subtracting something from our ReLU lets us pull the whole thing down, so that the bottom of our ReLU is underneath the x-axis and it has negatives, and that would allow us to have a mean of zero. And while we're there, let's also do something that's existed for a while (I didn't come up with this idea), which is a leaky ReLU: rather than the negative side being totally flat, truncated at zero, we let those numbers through, decreased by some constant factor. Let me show you what that looks like. Those two together I'm going to call GeneralReLU: we do this leaky ReLU thing, where below zero the line isn't flat but just less sloped, and we also subtract something from the result. For example, I've created a little function here for plotting a function, so let's plot the GeneralReLU with a leakiness of 0.1, which means a one-tenth slope below zero, and a subtraction of 0.4. You can see that above zero it's just a normal y equals x line, but pushed down by 0.4; and below zero it's not flat anymore, it's got a slope of one tenth. So this is now something where, if you find the right amount to subtract for each amount of leakiness, you can get a mean of zero; and I actually found that this particular combination, a leak of 0.1 and a subtraction of 0.4, gives us a mean of zero or thereabouts.
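Here's what that looks like as a module (a sketch; the maxv clamp isn't used yet, but it comes up again near the end of the lesson):

```python
import torch.nn as nn
import torch.nn.functional as F

class GeneralRelu(nn.Module):
    # a ReLU with an optional leaky slope below zero, a constant subtracted
    # from the output, and an optional maximum value to clamp to
    def __init__(self, leak=None, sub=None, maxv=None):
        super().__init__()
        self.leak, self.sub, self.maxv = leak, sub, maxv

    def forward(self, x):
        x = F.leaky_relu(x, self.leak) if self.leak is not None else F.relu(x)
        if self.sub is not None: x -= self.sub
        if self.maxv is not None: x.clamp_max_(self.maxv)
        return x
```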
So let's now create a new convolution function where we can change which activation function is used; that gives us the ability to change the activation functions in our neural nets. Let's change get_model to take the activation function, which is passed down into the layers; and while we're there, let's also make it easy to change the number of filters. We'll pass in a list of the number of filters in each layer, defaulting to the numbers we've discussed, and we'll go through a list comprehension creating a convolution from each number of filters to the next, and pop it all into a Sequential, along with a Flatten at the end. And while we're there, we also need to be careful about init_weights, because this is something that people tend to forget: the Kaiming initialization, by default, only applies to layers that have a ReLU activation function. We don't have ReLU anymore; we have leaky ReLU (the fact that we're subtracting a bit from it doesn't change things, but the fact that it's leaky does). Now, luckily, and a lot of people don't know this, PyTorch's kaiming_normal_ has an adjustment for leaky ReLUs; weirdly enough, they just call it a. So if you pass your leaky ReLU's leakiness factor into the Kaiming initialization as a, you'll get the correct initialization for a leaky ReLU. So we need to change init_weights to pass in the leakiness. All right, let's put all this together. Our activation function is a GeneralRelu with a leak of 0.1 and a sub of 0.4; a partial gives us a function with those parameters built in. For ActivationStats, we need to update it to look for GeneralRelus, not nn.ReLUs. And then for our init_weights function, we make a partial with leaky equals 0.1, and call that our init weights. Great, so now we'll get our model with that activation function and that new init_weights, and we'll fit it. Oh, that's encouraging: an accuracy of 0.845, which is about as high as we got by the end previously. And wow, look at that: we're up to an accuracy of 87%. Let's take a look. Yeah, we've still got a little bit of a spike, but it's almost flat; and look at this: our mean is starting at about 0. The standard deviation is still a bit low, but it's coming up towards 1, generally around 0.8, so it's all looking pretty encouraging, I think. And oh yes, look: the percentage of dead units in each layer is very small. So we've got some very nice looking training graphs here. It's interesting that we had to literally invent our own activation function to make this work, and I think that gives you a sense of how few people actually care about this, which is crazy, because as you can see, in some ways it's the only thing that matters. It's not at all mathematically difficult to make it work, and it's not at all computationally difficult to see whether it's working; but other frameworks don't even let you plot these kinds of things, so nobody even knows that they've completely messed up their initialization. So, now you know. Now, the first thing to be aware of, which is tricky, is that a lot of models nowadays use more complicated activation functions than ReLU, or leaky ReLU, or even this general version. You need to initialize your neural network correctly, and most people don't; and sometimes nobody's even figured out, or bothered to try to figure out, what the correct initialization to use is. So here's a very cool trick which almost nobody knows about, from a paper called "All You Need Is a Good Init", which Dmytro Mishkin wrote a few years ago. What Dmytro showed is that there's actually a completely general way of initializing any neural network correctly, regardless of what activation functions are in it, and it uses a very, very simple idea: create your model, initialize it however you like, then put a single batch of data through, and look at the first layer. See what the mean and standard deviation coming out of the first layer are; if the standard deviation is too big, divide the weight matrix down a bit, and if the mean is too high, subtract a bit from it. Do that repeatedly for the first layer until you get the correct mean and standard deviation, then go to the second layer and do the same thing, then the third layer, and so forth. This is called Layer-wise Sequential Unit Variance, LSUV, and we can do it using hooks. We create a little LSUV stats function that grabs the mean of the activations of a layer and the standard deviation of the activations of a layer, and we create a hook with that function. Then, after running the model so the hook can record the mean and standard deviation of the layer, we check whether the standard deviation is not one and whether the mean is not zero; we subtract the mean from the bias, divide the weight matrix by the standard deviation, and keep doing that until we get a standard deviation of one and a mean of zero. To apply it, we grab all the GeneralRelus and all the convs. And just to show you what happens there: once I've got all the relus and all the convs, I can use zip. zip in Python takes a bunch of lists and creates a sequence of the first items together, the second items together, the third items together, and so forth. So if I go through the zip of relus and convs and print them out, you can see it prints the first relu with the first conv, the second relu with the second conv, the third relu with the third conv, and so on. We use zip all the time in Python, so it's a really important thing to be aware of. So we go through the relus and convs together and call our LSUV init, passing in each module pair (that is, the relu and the conv), running it on the batch; and of course we need to put the batch on the correct device for our model. And now that I've done that (it ran almost instantly), it's made all the biases and weights correct to give us 0, 1. If I train it, there it is: we didn't do any initialization at all of the model, other than calling LSUV init, and this time we've got an accuracy of 0.86 versus the previous 0.87, so pretty much the same thing; close enough. And if you want to actually see it happening, I guess what we could do (it's going to be pretty obvious after we run this) is print h.mean and h.std before and after. There we go: the first layer started at a mean of negative 0.13 and a standard deviation of 0.46, and it kept doing the divide and subtract, divide and subtract, until eventually it got to a mean of 0 and a standard deviation of 1; then it went to the next layer and kept going until that was 0, 1, then the third layer, then the fourth. At that point, all of the layers had a mean of 0 and a standard deviation of 1. One nice thing about LSUV is that it's very mathematically convenient: if we've invented a new activation function, or we're using some activation function where nobody seems to have figured out the correct initialization, we don't have to spend any time thinking about it; we can just use LSUV. It did require a bit more fiddling around with hooks and such to get it to work, and I haven't even put it into a callback or anything; so if you decide you want to try using this in some of your models, it might be a good idea.
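Here's roughly what that looks like (a sketch following the description above; Hook, to_cpu and def_device are assumed from the earlier miniai notebooks):

```python
import torch
import torch.nn as nn

def _lsuv_stats(hook, mod, inp, outp):
    acts = to_cpu(outp)
    hook.mean, hook.std = acts.mean().item(), acts.std().item()

def lsuv_init(model, relu, conv, xb):
    h = Hook(relu, _lsuv_stats)
    with torch.no_grad():
        # each forward pass fires the hook; repeat until the activations
        # coming out of this relu have mean ~0 and std ~1
        while model(xb) is not None and (abs(h.std - 1) > 1e-3 or abs(h.mean) > 1e-3):
            conv.bias -= h.mean
            conv.weight.data /= h.std
    h.remove()

relus = [m for m in model.modules() if isinstance(m, GeneralRelu)]
convs = [m for m in model.modules() if isinstance(m, nn.Conv2d)]
for relu, conv in zip(relus, convs):
    lsuv_init(model, relu, conv, xb.to(def_device))
```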
And it'll actually be good homework to see if you can come up with a callback that does LSUV initialization for you; that would be pretty cool, wouldn't it? It would go in before_fit, I guess. You'd have to be a bit careful, because if you ran fit multiple times it would re-initialize each time, so that would be one issue to think about. Okay, so something which is quite similar to LSUV is batch normalization. We're going to have a 7 minute break, and then we're going to come back and talk about batch normalization. I'll see you in 7 minutes. Okay, hi, let's do this: batch normalization. Batch norm was such an important paper. I remember when it came out; I was at Enlitic, my medical startup, I think that's right, and everybody was talking about it. In particular, they were talking about this graph, which basically showed what it used to be like, before batch norm, to train a model on ImageNet, in terms of how many training steps you'd have to do to get to a certain accuracy, and then what you could do with batch norm: so much faster. It was amazing; we all thought that can't be true, but it was true. So the key idea of batch norm is this: with LSUV, input normalization and Kaiming init, we are normalizing each layer's inputs before training, but the distribution of each layer's inputs changes during training, and that's a problem. You end up having to decrease your learning rates, and as we've seen, you have to be very careful about parameter initialization. The fact that the layers' inputs change during training is what they call internal covariate shift, which for some reason a lot of people tend to find a confusing name, but it's very clear to me: that's exactly what it is. The fix is to normalize layer inputs during training: you make the normalization a part of the model architecture, and you perform the normalization for each mini-batch. Now, I'm actually not going to start with batch normalization; I'm going to start with something that came out one year later, called layer normalization, because layer normalization is simpler, so let's do the simpler one first. Layer normalization comes from this group of fellows, the last of whom I'm sure you've heard of, and it's probably easiest to explain by showing you the code. If you're thinking: layer normalization, wow, it's a whole paper, a Geoffrey Hinton paper, it must be complicated... no, the whole thing is this code. So what is layer normalization? Well, we create a module, and we don't really need to pass in anything; you can totally ignore the parameters for now. What we're going to have is a single number called mult, the multiplier, and a single number called add, the thing we're going to add; and we're going to start off multiplying by one and adding zero, so we start off doing nothing at all. This layer has a forward function, and in the forward function, remember that by default we have NCHW: batch by channel by height by width. We take the mean over the channel, height and width dimensions, so we're finding the mean activation for each input in the mini-batch. And when I say input, remember that this is a layer, so we can put it anywhere we like in the model; it's the input to whatever comes next. We do the same thing to find the variance, and then we normalize our data by subtracting the mean and dividing by the square root of the variance (which of course is the standard deviation), after adding a very small number to the variance, 1e-5 by default, just in case the variance is zero or ridiculously small; that keeps the result from going giant if we happen to get something with a tiny variance.
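Here it is (a sketch matching the walkthrough; the unused first argument is just there so this can be swapped in wherever a norm layer constructor taking a filter count is expected):

```python
import torch
import torch.nn as nn

class LayerNorm(nn.Module):
    def __init__(self, dummy, eps=1e-5):
        super().__init__()
        self.eps = eps
        self.mult = nn.Parameter(torch.tensor(1.))  # learnable scale
        self.add  = nn.Parameter(torch.tensor(0.))  # learnable shift

    def forward(self, x):
        # x is NCHW: one mean/variance per item in the mini-batch
        m = x.mean((1, 2, 3), keepdim=True)
        v = x.var((1, 2, 3), keepdim=True)
        x = (x - m) / (v + self.eps).sqrt()
        return x * self.mult + self.add
```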
This idea of an epsilon added to a divisor is really, really common, and in general you should not assume that the defaults are correct; very often the default epsilon is too small for the algorithm that uses it. Okay, so here we are: we are normalizing the batch. I say batch, but remember, this applies to whichever layer we decide to put this in. Now, the thing is, maybe we don't want it to be normalized: maybe we want it to have something other than unit variance and something other than zero mean. So what we do is multiply back by self.mult and add self.add. Remember, self.mult starts at 1 and self.add at 0, so at first this does nothing at all; at first, the layer is just normalizing the data, which is good. But because these two numbers are learnable parameters, the SGD algorithm can change them. So there's a very subtle thing going on here, which is that this might not be normalizing the data at all, or normalizing the inputs to the next layer at all, because self.mult and self.add could become anything. So I tend to think that when people consider things like layer normalization and batch normalization, thinking of them as normalization is, in some ways, not the right way to think about them. It definitely normalizes things for the initial batches (and we don't really need LSUV anymore if we have this in here, because it normalizes automatically, which is handy), but after a few batches it's not really normalizing at all. What it is doing is this: previously, the notion of how big the numbers are overall, and how much variation they have overall, was baked into every single number in the weight matrix and the bias vector; now those two properties have been factored out into just two numbers. And I think this makes training a lot easier: the model has just two numbers it can focus on to change the overall positioning and variation. So something quite subtle is going on, because after the first few batches it's not just doing normalization; it can learn to create any distribution of outputs it wants. So there's our layer. Now we need to change our conv function again: previously we changed it to make the activation function modifiable; now we'll also allow adding a normalization layer. For our basic layers, we start by adding our Conv2d as usual; then, if you're doing normalization, we append the normalization layer, constructed with this many filters (in fact LayerNorm doesn't care how many inputs there are, so it ignores that, but you'll see BatchNorm will care); then, if you've got an activation function, we add it. So our convolutional layer is actually a sequential bunch of layers now. One thing that's interesting, I think, is the bias in the conv. If you're using layer norm... well, this isn't quite true, is it: I was going to say that if you're using layer norm you don't need bias, but actually you kind of do. For batch norm we won't need bias, but for this one we do, so let's put this back: bias equals bias, defaulting to true. So these initial layers all have bias, and then we've got bias equals false for the other case. So now, in our model, we add layer normalization to every layer except the last one, and off we go: oh, nice, 0.873. Okay, 0.860 and 0.872; we've just beaten our best by a little bit, so that's cool. The thing about these normalization layers, though, is that they do cause a lot of challenges in models. Ever since batch norm appeared, there's been a big change of view towards it over time. At first people were like: oh my god, batch norm is our saviour; and it kind of was, it let us train much deeper models, get great results, and train quickly. But increasingly, people realized it also added a lot of complexity: these learnable parameters turned out to create all kinds of complications, and in particular batch norm, which we'll see in a minute, created all kinds of complications. So there's been a tendency in recent years to try to get rid of, or at least reduce the use of, these kinds of layers; which means that knowing how to initialize your models correctly at the start is becoming increasingly important, as people increasingly try to move away from normalization layers. They're still very helpful, but they're not a silver bullet, as it turns out. All right, so now let's look at batch norm. BatchNorm is still not huge, but it's a little bit bigger than LayerNorm. You'll see that we've got the mult and add as before, but it's not just one number to add and one number to multiply: we've got a whole bunch of them, one for every channel. And now, when we take the mean and the variance, we take them over the batch dimension and the height and width dimensions, so we end up with one mean per channel and one variance per channel. Just like before, once we have our means and variances, we subtract them out and divide by the epsilon-adjusted standard deviation; and just like before, we then multiply by mult and add add, except that now we're multiplying by a vector of mults and adding a vector of adds. That's why we have to pass in the number of filters: we have to know how many ones and how many zeros to put in our initial mults and adds. So that's the main difference, in a sense: we have one per channel, and we're averaging across all of the items in the batch, whereas in layer norm each item in the batch had its own separate normalization. Then there's something else in batch norm which is a bit tricky: during training, we are not just subtracting the current batch's mean and dividing by its variance; instead, we keep an exponentially weighted moving average of the means and variances of the last few batches. That's what this is doing. We create buffers called vars and means; initially the variances are all one and the means are all zero, and there's one per channel, just like before, or one per filter (same idea: I guess we tend to say filters for things inside the model, and channels for the first input, but either works). So, for example, we get our mean per filter, and then we use this thing called lerp. What lerp does is take two numbers, in this case 5 and 15, or two tensors (they could be vectors or matrices), and create a weighted average of them, where the amount of weight it uses is a third number.
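A quick demonstration (torch.lerp(a, b, w) computes a + w * (b - a)):

```python
import torch

a, b = torch.tensor(5.), torch.tensor(15.)
print(torch.lerp(a, b, 0.5))   # tensor(10.): halfway, i.e. the mean
print(torch.lerp(a, b, 0.75))  # tensor(12.5): 0.25*a + 0.75*b
print(a.lerp_(b, 0.1))         # tensor(6.): in-place; mostly keeps a
```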
Let me explain. In this case, if I put 0.5, it takes half of the first number plus half of the second, so we end up with just the mean. But what if we used 0.75? Then it takes 0.75 of the second number plus 0.25 of the first. So it's basically a sliding scale: one extreme would be to take all of the second number, which would be lerp with a weight of one; the other extreme would be all of the first number, a weight of zero; and you can slide anywhere between them. So lerp(5, 15, 0.1) is exactly the same as saying 5 times 0.9 plus 15 times 0.1: the weight is how much of the second number we take, and one minus the weight is how much of the first. And, as with most PyTorch things, you can also move the first argument in front as a method call and get exactly the same result. So that's what lerp is. What we're doing here is an in-place lerp: we're replacing self.means with 1 minus momentum times self.means, plus momentum times this particular mini-batch's mean. So this is basically doing momentum again, which is why we are indeed calling the parameter mom, for momentum. With a mom of 0.1 (which, I have to say, is kind of the opposite of what I'd expect momentum to mean; I'd expect it to be 0.9), we're saying that at each mini-batch, self.means will be 0.1 of this particular mini-batch's mean and 0.9 of the previous value, which itself summarizes the previous sequence. That ends up giving us what's called an exponentially weighted moving average, and we do the same thing for the variances. Now, those statistics are only updated during training; during inference, we just use the saved means and variances. And why do we register them as buffers? What that means is that these means and variances will actually be saved as part of the model. It's important to understand that this information, about the means and variances of the data your model saw, is saved inside the model; this is the key thing which makes batch norm very tricky to deal with, and particularly tricky, as we'll see in later lessons, with transfer learning. But what this does do is give us something much smoother: a single weird mini-batch shouldn't screw things around too much, and because we're averaging across the mini-batch as well, that also makes things smoother. So this whole thing should lead to pretty nice, smooth training.
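Putting the whole layer together (a sketch matching the walkthrough above; the shapes assume NCHW inputs):

```python
import torch
import torch.nn as nn

class BatchNorm(nn.Module):
    def __init__(self, nf, mom=0.1, eps=1e-5):
        super().__init__()
        self.mom, self.eps = mom, eps
        # learnable scale/shift: one per channel (filter)
        self.mults = nn.Parameter(torch.ones(1, nf, 1, 1))
        self.adds  = nn.Parameter(torch.zeros(1, nf, 1, 1))
        # buffers: saved with the model, and used at inference time
        self.register_buffer('vars',  torch.ones(1, nf, 1, 1))
        self.register_buffer('means', torch.zeros(1, nf, 1, 1))

    def update_stats(self, x):
        # average over batch, height and width: one mean/var per channel
        m = x.mean((0, 2, 3), keepdim=True)
        v = x.var((0, 2, 3), keepdim=True)
        # exponentially weighted moving averages of the batch statistics
        self.means.lerp_(m, self.mom)
        self.vars.lerp_(v, self.mom)
        return m, v

    def forward(self, x):
        if self.training:
            with torch.no_grad(): m, v = self.update_stats(x)
        else:
            m, v = self.means, self.vars
        x = (x - m) / (v + self.eps).sqrt()
        return x * self.mults + self.adds
```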
So there's actually a number of different types of layer-based normalization we can use. In this lesson we've specifically seen batch norm and layer norm, but I wanted to mention that there's also instance norm and group norm, and this picture from the group norm paper explains what happens. What it's showing is that we've got N, C, H and W here — they've kind of concatenated and flattened H and W into a single axis, since they can't draw 4D cubes — and what they're saying is that in batch norm, all this blue stuff is what we average over: we average across the batch and across the height and width, and we end up with one normalization number per channel; you can kind of slide these blue blocks across. So batch norm is averaging over the batch, the height and the width. Layer norm, as we learned, averages over the channel, the height and the width, and it has a separate one per item in the mini-batch — I mean, kind of; it's a bit subtle, because remember our version literally had just a single number for the mult and the add, so it's not quite as simple as this picture, but that's the general idea. Instance norm, which we're not looking at today, only averages across height and width, so there's going to be a separate one for every channel and every element of the mini-batch. And then finally group norm, which I'm quite fond of, is like instance norm, but it basically groups a bunch of channels together — you can decide how many groups of channels there are — and averages over them. Group norm tends to be a bit slow, unfortunately, because the way these things are implemented is a bit tricky, but it does allow you to avoid some of the challenges of the other methods, so it's worth trying if you can. And of course batch norm has the additional thing of the momentum-based statistics. In general, choices like whether you use momentum-based statistics, whether you store things per channel or a single mean and variance in your buffers, and what you average over, are all somewhat independent, and particular combinations of them have been given particular names.
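To see the difference concretely, here's a small sketch of which axes each method reduces over, assuming an NCHW activation tensor and an arbitrary choice of 4 groups:

```python
import torch

x = torch.randn(64, 32, 28, 28)   # N, C, H, W

# batch norm: one statistic per channel, averaged over batch, height and width
bn_mean = x.mean((0, 2, 3))                 # shape (32,)

# layer norm: one statistic per item, averaged over channel, height and width
ln_mean = x.mean((1, 2, 3))                 # shape (64,)

# instance norm: one per (item, channel), averaged over height and width only
in_mean = x.mean((2, 3))                    # shape (64, 32)

# group norm: like instance norm, but contiguous channels are pooled into groups
groups = 4
gn_mean = x.view(64, groups, -1).mean(2)    # shape (64, 4)
```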
Okay, so we've got some good initialization methods here; let's try putting them all together. One other thing we can do: we've been using a batch size of 1024 for speed purposes, but if we drop it down a bit, to 256, the model gets to see more mini-batches, so that should improve performance — and we're trying to get to 90%, remember. So let's do all this. This time we'll use batch norm — we'll just use PyTorch's; there's nothing wrong with ours, but we try to switch to PyTorch's once something we've recreated exists there. We'll use our MomentumLearner and fit for three epochs, and as you can see it's going a little bit more slowly now. Then the other thing I'm going to do is decrease the learning rate, keep the existing model, and train for a little bit longer — the idea being that as it gets close to a pretty good answer, maybe it just wants to fine-tune that a little, and by decreasing the learning rate we give it a chance to do so. So let's see how we're going. We got to 87.8% accuracy after three epochs, which is an improvement, mainly thanks to using the smaller mini-batch size. Now, with a smaller mini-batch size you do have to decrease the learning rate; I found I could still get away with 0.2, which is pretty cool. And look at this: after just one more epoch, by decreasing the learning rate, we've got up to 89.7%. Oh — we didn't quite make it: 89.7%, towards 90% but not quite. So we're going to have to do some more work to get up to our magical 90% number, but we are getting pretty close. All right, so that's the end of initialization — an incredibly important topic, as hopefully you've seen.

Accelerated SGD. Let's see if we can use this to get us above 90%. So let's do our normal imports and data setup as usual. Just to summarize what we've got: we've got our metrics callback, and we've got our activation stats looking at the GeneralRelus. So our callbacks are going to be the device callback, to put it on CUDA or whatever; the metrics; the progress bar; and the activation stats. Our activation function is going to be our GeneralRelu with 0.1 leakiness and 0.4 subtraction, and we initialize the weights, telling the init how leaky they are. And if we're doing a learning rate finder, we've got a different set of callbacks — there's no real reason to have a progress bar callback with a learning rate finder; I guess it's pretty short anyway.

Which reminds me: there was one little thing I didn't mention about initialization, which is a fun trick you might want to play around with — in fact Sam Watkins asked a question about it earlier in the chat, and I didn't answer it because it's exactly here. In GeneralRelu I added a second thing you might have seen, which is the maximum value: if the maximum value is set, then I clamp the output to be no more than that maximum. So let's say you set it to 3: the line would go up like it does here, up to 3, and then it would be flat. Using that can be a nice way to avoid numbers getting too big — I'd probably go a bit higher, up to about 6. And maybe, if you really wanted to have fun, you could do a kind of leaky maximum, which I haven't tried yet, where above the clamp the line keeps going with a slope maybe 10 times smaller, exactly like the leaky part at the bottom. If you do that, you'd need to make sure you're still getting zero-mean, unit-variance layers with your initialization, but that would be something you could consider playing with.

Okay, so let's create our own little SGD class. An SGD class is going to need to know what parameters to optimize, and if you remember, the module.parameters method returns a generator, so we use list to turn that into a list — it's forced to be something concrete that isn't going to change. We're going to need to know the learning rate; we're going to need to know the weight decay, which we'll look at in a moment; and for reasons we'll discuss later, we also want to keep track of what batch number we're up to. An optimizer basically has two things: a step and a zero_grad. What step is going to do — with no_grad, obviously, because this is not part of the thing we're optimizing; this is the optimization itself — is go through each tensor of parameters, do a step of the optimizer (we'll come back to this in a moment), do a step of the regularizer, and keep track of what batch number we're up to. So what does SGD do in the step of the optimizer? It subtracts from the parameter its gradient times the learning rate — that's an SGD optimization step. And to zero the gradients, we go through each parameter and zero it; that uses .data, because if you use .data you don't need to say no_grad — it's just a little typing saver.

So let's create a TrainLearner — a learner with the training callback kind of built in — and we'll set the optimization function to be this SGD we just wrote, and use the batch norm model with the weight initialization we've used before. If we train it, this should give us basically the same results we've had before.
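Here's roughly what that class looks like — a minimal sketch rather than the exact notebook code:

```python
import torch

class SGD:
    def __init__(self, params, lr, wd=0.):
        self.params = list(params)   # parameters() is a generator, so force it into a list
        self.lr, self.wd = lr, wd
        self.i = 0                   # batch counter; we'll need it later for Adam's unbiasing

    def step(self):
        with torch.no_grad():        # the update itself isn't part of the computation graph
            for p in self.params:
                self.reg_step(p)     # weight decay first, then the gradient step
                self.opt_step(p)
        self.i += 1

    def opt_step(self, p): p -= p.grad * self.lr

    def reg_step(self, p):
        if self.wd != 0: p *= 1 - self.lr * self.wd

    def zero_grad(self):
        for p in self.params: p.grad.data.zero_()
```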
While this is training, I'm going to talk about regularization. Hopefully you remember from part one of this course, or from your other learning, what weight decay is. Just to remind you: weight decay and L2 regularization are kind of the same thing. Basically, we're saying: let's add the sum of the squares of the weights to the loss function. So whatever our loss function is — we'll just call it loss — we're adding the sum of the squared weights, and that's our new L. The only thing we actually care about is the derivative of that, and the derivative is the derivative of the loss plus the derivative of this term, which is just 2w summed over the weights. We then multiply this bit by some constant, which is the weight decay; and since the weight decay constant can directly incorporate the 2, we can just delete the 2 entirely. I'm doing this very quickly because we already covered it in part one, so hopefully it's something you've all seen before. So we can do weight decay by taking our gradients and adding on the weight decay times the weights. And as a result, since that's part of the gradient — oh man, I've got it the wrong way around; I need to do that first, I guess; well, whatever — since that's part of the gradient, the optimization step that uses the gradient subtracts gradient times learning rate. But because we'd just end up doing p.grad times self.lr, and the p.grad update is just to add in wd times weight, we can simply skip updating the gradients and instead directly update the weights: subtract out the learning rate times wd times weight. Those are mathematically identical, and that's what we've done here in the regularization step: if you've got weight decay, we just take p and multiply it in place by 1 minus the learning rate times the weight decay, which is mathematically the same, because we've got the weight on both sides. That's why the regularization step is here inside our SGD. And it's finished running — that's good: we've got our 85% accuracy, that all looks fine, and we're able to train at our high learning rate of 0.4, so that's pretty cool.
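To convince yourself the two forms really are the same update, here's a tiny check — p, g, lr and wd are just made-up values for illustration:

```python
import torch

p = torch.randn(10)       # pretend weights
g = torch.randn(10)       # pretend gradient of the unpenalized loss
lr, wd = 0.1, 0.01

# form 1: fold weight decay into the gradient, then take an SGD step
p1 = p - lr * (g + wd * p)

# form 2: shrink the weights directly, then take the plain SGD step
p2 = p * (1 - lr * wd) - lr * g

print(torch.allclose(p1, p2))   # True: the two updates are identical
```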
So now let's add momentum. We had a kind of hacky MomentumLearner before, but momentum should really be in an optimizer, so let's talk a bit about what momentum actually is. Let's just create some data: our x's are going to be 100 equally spaced numbers from minus 4 to 4, and our y's are going to be 1 minus (x divided by 3) squared, plus some randomization. So these dots here are our random data, and we're going to show you what momentum is by example — this is something Sylvain Gugger helped build, thank you Sylvain, for our book actually; if memory serves correctly, it might even be the course before that. We're going to show what momentum looks like for a range of different levels of momentum — these are the levels we're going to use. Let's take a beta of 0.5 as our first one. We do a scatter plot of our x's and y's — that's the blue dots — and then we go through each of the y's and do this, which hopefully looks familiar: it's doing a lerp. We take our previous average, which starts at 0, times beta — that's 0.5 — plus 1 minus beta — also 0.5 — times this y value; we append that to the red line, do that for all the data points, and then plot them. And you can see what happens: the red line becomes less bumpy, because each point is half this exact dot and half of whatever the red line previously was. So again, this is an exponentially weighted moving average, and we could have implemented it using lerp. As beta gets higher, it's saying: be more of wherever the red line used to be, and less of where this particular data point is, and that means when we have these kinds of outliers, the red line doesn't jump around as much, as you see. But if your momentum gets too high, then it doesn't follow what's going on at all — in fact, it's way behind. When you're using momentum, you're always partially responding to how things were many batches ago. Even at a beta of 0.9 here, the red line is offset to the right, because it takes a while to recognize that things have changed: each time, 0.9 of it is where the red line used to be and only 0.1 of it is what this data point says. So that's what momentum does. The reason momentum is useful is that when you have a loss function that's very, very bumpy like this, you want to follow the actual underlying curve. Using momentum you don't quite get that — you get a version that's offset to the right a little bit — but hopefully you spend a lot more time near it: you don't really want to head off in this direction, which you would if you followed the line exactly, and then in that direction; you really want to follow the average of those directions, and that's what momentum lets you do.

So to use momentum, we inherit from SGD and override the definition of the optimization step — remember, step called two things, the regularization step and the optimization step, and we're modifying the optimization step. We still do minus equals grad times self.lr, but when we create our Momentum object, we tell it what momentum we want, defaulting to 0.9, and store that away. Then in the optimization step, for each parameter — remember the optimization step is called for each parameter in the model, so each layer's weights, for example — we check whether we've ever stored away a moving average of its gradients before, and if we haven't, we set it to zeros initially, just like we did here. Then we do our lerp: the exponentially weighted moving average of gradients is whatever it used to be times momentum, plus this actual new batch's gradients times 1 minus momentum — that's just the lerp as we discussed. And then we do exactly the same as the SGD update step, but instead of multiplying by p.grad, we multiply by p.grad_avg. There's a cool little trick here, which is that we're basically inventing a brand-new attribute and putting it inside the parameter tensor, and that attribute is where we store the exponentially weighted moving average of gradients for that particular parameter. So as we loop through the parameters, we don't have to do any special work to get access to it, which I think is pretty handy.
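Here's a minimal sketch of that, building on the SGD sketch above; treat the defaults as assumptions:

```python
class Momentum(SGD):
    def __init__(self, params, lr, wd=0., mom=0.9):
        super().__init__(params, lr=lr, wd=wd)
        self.mom = mom

    def opt_step(self, p):
        # stash the moving average as a brand-new attribute on the parameter tensor itself
        if not hasattr(p, 'grad_avg'): p.grad_avg = torch.zeros_like(p.grad)
        # the lerp: mostly the old average, plus a bit of this batch's gradient
        p.grad_avg = p.grad_avg * self.mom + p.grad * (1 - self.mom)
        # same as the SGD step, but using the averaged gradient
        p -= p.grad_avg * self.lr
```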
All right, so one very interesting thing I found here is that I could really hike the learning rate, way up to 1.5, and the reason why is that we're not getting those huge bumps anymore; by getting rid of them, the whole thing is just a whole lot smoother. Previously we got up to 85% — we're back to our 1024 batch size and just 3 epochs at a constant learning rate — and look at that: we've got up to 87.6%, so it's really improved things, and the loss function is nice and smooth, as you can see. And in our color dim plot, you can see this is actually the smoothest we've seen. It's a bit different to the MomentumLearner, because the MomentumLearner didn't have this one-minus part — it wasn't lerping; it was basically always including all of the grad plus a bit of the momentum part. So this is a different, and I think better, approach, and we've got a really nice smooth result.

Someone's asking: don't we get a similar effect, in terms of smoothness, if we increase the batch size? We do, but if you just increase the batch size, you're giving the model fewer opportunities to update, so having a really big batch size is actually not great. Yann LeCun, who created the first really successful convnets, including LeNet-5, says he thinks the ideal batch size, if you can get away with it, is one — it's just slow; you want the model to have as many opportunities to update as possible. There's this weird thing recently where people seem to be trying to create really large batch sizes, which to me doesn't make any sense: generally speaking, we want the smallest batch size we can get away with, to give the model the most chances to update. So this has done a great job of that, and we're getting very good results despite using only three epochs with a very large batch size. Okay, so that's momentum.

Now, something that was developed — well, announced — in a Coursera course back in maybe 2012 or 2013 by Geoffrey Hinton, and has never been published, is called RMSProp. Let's have it running while we talk about it. RMSProp updates the optimization step using something very similar to momentum, but rather than lerping on p.grad, we lerp on p.grad squared — and just to keep it consistent, we won't call it mom; we'll call it sqr_mom, but it's just the multiplier. What are we doing with the grad squared? The idea is that a large grad squared indicates a large variance of gradients. So what we then do is divide by the square root of that, plus epsilon. Now, you'll see I've actually been a bit all over the place here: with my batch norm I put the epsilon inside the square root, and in this case I'm putting the epsilon outside the square root. It does make a difference, so be careful about how your epsilon is being interpreted — I can't remember if I've been exactly right, but I've tried to be consistent with the papers or the normal implementations. This is a very common cause of confusion and errors, though. So what we're doing here is dividing the gradient by the amount of variation — the square root of the moving average of the gradient squared. The idea is that if the gradient has been moving around all over the place, then we don't really know what it is, so we shouldn't do a very big update; and if the gradient is very much the same all the time, then we're very confident about it, so we do want a big update. (I have no idea why I did this in two steps — let's just pop this over here.) Now, because we're dividing our gradient by this possibly rather small number, we generally have to decrease the learning rate.
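A minimal sketch, again building on the SGD sketch above — names like sqr_mom and sqr_avg follow the style we've been using, but treat the defaults and the epsilon placement as assumptions:

```python
class RMSProp(SGD):
    def __init__(self, params, lr, wd=0., sqr_mom=0.99, eps=1e-5):
        super().__init__(params, lr=lr, wd=wd)
        self.sqr_mom, self.eps = sqr_mom, eps

    def opt_step(self, p):
        # initialize from the first batch's squared gradients rather than zeros,
        # so the first updates aren't divided by roughly zero-plus-epsilon
        if not hasattr(p, 'sqr_avg'): p.sqr_avg = p.grad ** 2
        # lerp on the squared gradients: an EWMA of grad**2
        p.sqr_avg = p.sqr_avg * self.sqr_mom + p.grad ** 2 * (1 - self.sqr_mom)
        # epsilon outside the sqrt here; placement varies between implementations
        p -= self.lr * p.grad / (p.sqr_avg.sqrt() + self.eps)
```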
So bring the learning rate back to 0.01 — and as you see, it's training. I mean, it's not amazing, but it's training okay. So RMSProp can be quite nice. It's a bit bumpy there, isn't it? I could try decreasing it a little, maybe down to 3e-3 instead — that's a little bit better and a bit smoother, so that's probably good. Let's see what the colorful dimension plot looks like too, shall we? Oh, again, it's very nice, isn't it — that's great. Now, one thing I did, which I don't think I've seen done before — I don't remember people talking about it — is that I decided not to do the normal thing of initializing to zeros. If I initialize to zeros, then my initial denominator here will basically be zero plus epsilon, which means my initial effective learning rate will be very, very high — which I certainly don't want. So instead I initialize it to the first mini-batch's gradient squared, and I think that's a really useful little trick for using RMSProp. Momentum can be a bit aggressive sometimes for really finicky learning methods and finicky architectures, so RMSProp can be a good way to get reasonably fast optimization of very finicky architectures — in particular, EfficientNet is an architecture which people have generally trained best with RMSProp. So you don't see it a whole lot — in some ways it's just of historical interest — but you do see it a bit.

The thing we really want to look at, though, is RMSProp plus momentum together. RMSProp plus momentum together exists, and it has a name you will have heard many times: Adam. Adam is literally just RMSProp plus momentum. We rather annoyingly call the two multipliers beta1 and beta2 — they should be called momentum and square momentum, or momentum of squares, I suppose: beta1 is just the momentum from the momentum optimizer, and beta2 is just the momentum for the squares from the RMSProp optimizer. So we store those away, and just like RMSProp we need the epsilon. As before, I store away the gradient average and the square average, and then we do our lerping. But there's a nice little trick here: in order to avoid doing the thing where we just put the initial batch's gradients in as our starting values, we use zeros as our starting values and then unbias them. The idea is that for the very first mini-batch, if you have a zero being lerped with the gradient, then the first average will obviously be closer to zero than it should be — but we know exactly how much closer: it's self.beta1 times closer, at least in the first mini-batch, because that's what we lerped with. For the second mini-batch it's self.beta1 squared, the third is self.beta1 cubed, and so forth. Remember back in our SGD we were keeping track of what mini-batch we're up to? We need that in order to do this unbiasing of the average. Oh dear — I'm not unbiasing the square average, am I? No, I'm not. Oops. So we need to do that here as well — I wonder if this will help things a little bit: the unbiased square average is going to be p.sqr_avg divided by the analogous factor with beta2, and we'll use those unbiased versions.
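Putting those pieces together, here's a minimal Adam sketch in the same style — the self.i batch counter comes from the SGD sketch above, and the defaults and epsilon placement are assumptions:

```python
class Adam(SGD):
    def __init__(self, params, lr, wd=0., beta1=0.9, beta2=0.99, eps=1e-5):
        super().__init__(params, lr=lr, wd=wd)
        self.beta1, self.beta2, self.eps = beta1, beta2, eps

    def opt_step(self, p):
        # start both averages at zero, then unbias them below
        if not hasattr(p, 'avg'):     p.avg     = torch.zeros_like(p.grad)
        if not hasattr(p, 'sqr_avg'): p.sqr_avg = torch.zeros_like(p.grad)
        # momentum piece: EWMA of the gradients
        p.avg = p.avg * self.beta1 + p.grad * (1 - self.beta1)
        # RMSProp piece: EWMA of the squared gradients
        p.sqr_avg = p.sqr_avg * self.beta2 + p.grad ** 2 * (1 - self.beta2)
        # unbias: after t batches a zero-initialized EWMA is (1 - beta**t) of its true scale
        t = self.i + 1
        unbias_avg     = p.avg     / (1 - self.beta1 ** t)
        unbias_sqr_avg = p.sqr_avg / (1 - self.beta2 ** t)
        p -= self.lr * unbias_avg / (unbias_sqr_avg.sqrt() + self.eps)
```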
This unbiasing only matters for the first few mini-batches, where the averages would otherwise be closer to zero than they should be. So we'll run that — and you'd expect the learning rate to be similar to what RMSProp needs, because we're doing that same division, and indeed we have the same learning rate here. And we're up to 86.5% accuracy, so that's pretty good, I think. Actually, it's a bit less good than momentum, which is fine — with momentum we had 0.9 — so you can fiddle around with different values of beta1 and beta2 and see if you can beat the momentum version; I suspect you probably can.

Okay, we're a bit out of time, aren't we? I'm excited about the next bit, but I want to spend the time to do it properly, so I won't rush through it now; we'll do it next time. I will give you a hint, though: in our next lesson we will in fact get above 90%, and we've got some very cool stuff to show you — I can't wait. But in the meantime, let's give ourselves a pat on the back, because think about all the stuff we've got happening here: we've done the whole thing from scratch, using nothing but what's in the Python standard library, we've re-implemented everything, and we understand exactly what's going on. I think that's really quite terrifically cool, personally. I hope you feel the same way, and I look forward to seeing you in the next lesson. Thanks, bye!