So, welcome to week 6. What did we do last week? We learned that regularization is a key concept if we want meaningful generalization, and we learned about different ways of implementing it. There's a set of ideas for making networks simpler, for example L1 regularization, L2 regularization, or early stopping. We talked about data augmentation and how it can reduce variance in the network that is not useful. We spoke about how stochastic gradient descent and the dynamics of learning can themselves regularize, and how mini-batches can help with regularization. Then we talked about dropout, one of the most commonly used regularization techniques. Lastly, we spoke about distillation and how distillation approaches can provide really good regularization in many cases.

So, let us talk about last week. Talk with your partner about what you learned last week and what you still hope to learn. What was most surprising about what you learned last week, and how is it going in general? Time for you to discuss.
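For reference while you discuss, here is a minimal sketch, assuming PyTorch, of how three of those ideas from last week can show up in code: L2 regularization via weight decay, dropout, and early stopping on a validation loss. The model, data, and hyperparameters here are illustrative placeholders, not anything specific from the lecture.

```python
# Minimal sketch (assuming PyTorch) of three regularizers from last week:
# L2 regularization (via weight decay), dropout, and early stopping.
# Model, data, and hyperparameters are illustrative, not from the lecture.
import torch
import torch.nn as nn

# Small MLP with dropout between the layers.
model = nn.Sequential(
    nn.Linear(20, 64),
    nn.ReLU(),
    nn.Dropout(p=0.5),   # dropout: randomly zero activations during training
    nn.Linear(64, 2),
)

# weight_decay adds an L2 penalty on the weights inside the optimizer step.
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2, weight_decay=1e-4)
loss_fn = nn.CrossEntropyLoss()

# Toy random data standing in for real train/validation splits.
x_train, y_train = torch.randn(256, 20), torch.randint(0, 2, (256,))
x_val, y_val = torch.randn(64, 20), torch.randint(0, 2, (64,))

best_val, patience, bad_epochs = float("inf"), 5, 0
for epoch in range(100):
    model.train()
    optimizer.zero_grad()
    loss = loss_fn(model(x_train), y_train)
    loss.backward()
    optimizer.step()

    # Early stopping: stop once the validation loss stops improving.
    model.eval()
    with torch.no_grad():
        val_loss = loss_fn(model(x_val), y_val).item()
    if val_loss < best_val:
        best_val, bad_epochs = val_loss, 0
    else:
        bad_epochs += 1
        if bad_epochs >= patience:
            break
```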