So, first, hooray, it probably worked! You just built your first convnet from scratch. Congratulations.

Now let's talk a little bit about overfitting. In the model that you just built, there were, in all likelihood, strong signs of overfitting. What do strong signs of overfitting mean? Training performance was much better than testing performance. We say "in all likelihood" because, mind you, we didn't tell you exactly what kind of network to build. Still, with the way we set things up so far, you should see strong signs of overfitting.

So what can we do? One strategy that people often use is dropout. As you'll learn in the regularization lectures, there are a few intuitions behind it. One is that we want no single neuron to be overly important. Dropout means that sometimes a neuron is there and sometimes it's not, so if a neuron becomes overly important relative to the others, it will add a lot of noise to the outputs. We can also think of dropout as converting the neural network, a little bit, into an ensemble method.

So what you will do now is add dropout. What does that mean? We need a special dropout layer, nn.Dropout, and of course we need to specify that it is being used in the forward pass. Where do you have to do that? The dropout layer itself goes into the initialization, and its application goes into the forward pass.

Now, why don't you check how much dropout helps to alleviate the problem of overfitting? What happens to training and test performance?
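The two steps above can be sketched in PyTorch. This is a minimal illustration, not the exact exercise network: the layer sizes, the input shape (single-channel 28x28 images), and the dropout probability of 0.5 are all assumptions, since the transcript does not specify the architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SmallConvNet(nn.Module):
    """A small convnet with dropout added in the two places the lecture names."""

    def __init__(self, num_classes=10, p_drop=0.5):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 16, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(16, 32, kernel_size=3, padding=1)
        self.fc1 = nn.Linear(32 * 7 * 7, 128)
        self.fc2 = nn.Linear(128, num_classes)
        # Step 1: the dropout layer goes into the initialization.
        self.dropout = nn.Dropout(p=p_drop)

    def forward(self, x):
        x = F.max_pool2d(F.relu(self.conv1(x)), 2)  # 28x28 -> 14x14
        x = F.max_pool2d(F.relu(self.conv2(x)), 2)  # 14x14 -> 7x7
        x = x.flatten(1)
        x = F.relu(self.fc1(x))
        # Step 2: dropout is applied in the forward pass.
        # It randomly zeroes activations in train mode and is a no-op in eval mode.
        x = self.dropout(x)
        return self.fc2(x)

model = SmallConvNet()
x = torch.randn(4, 1, 28, 28)  # a dummy batch of 4 single-channel images

model.train()          # dropout active: outputs are stochastic
out_train = model(x)

model.eval()           # dropout disabled: outputs are deterministic
out_eval = model(x)
print(out_train.shape)
```

Note the `model.train()` / `model.eval()` calls: because dropout behaves differently during training and evaluation, forgetting to switch modes is a common source of surprising test-time results.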