So it's really important to initialize well, as I think you've seen; I cannot overstate the importance of that. In fact, Xavier initialization makes certain assumptions about how the neurons behave, and it turns out that in different situations we might want to initialize very differently. Here's a fun application of that. My colleague Blake Richards and his lab, including Cornford, wanted to run deep learning simulations in networks that look much more like brains. In brains, there are two types of neurons: excitatory neurons, whose outgoing weights are all positive, and inhibitory neurons, whose outgoing weights are all negative. It was known that training such networks is extremely difficult. So what they have is a network, don't worry about the details, that has excitatory and inhibitory cells. This violates the assumptions behind Xavier initialization. What they did was derive good initialization methods for that setting, and then training the network was actually relatively easy. The initialization also affects the dynamics of learning. So what I want you to do now is to compare Xavier initialization with a simpler approach and see how well each of the two converges.
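Here is a minimal sketch of that comparison, assuming PyTorch; the toy regression task, the network architecture, and the hyperparameters are illustrative choices, not the setup used in the lecture or exercise. It trains the same deep MLP twice, once with Xavier initialization and once with a naive fixed-scale Gaussian initialization, so you can compare how the training loss evolves under each.

```python
# Sketch only: compare Xavier initialization vs. a naive fixed-scale Gaussian
# initialization on an illustrative toy task (not the lecture's actual setup).
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy regression data: inputs in R^100, targets from a random linear map plus noise.
X = torch.randn(1024, 100)
true_w = torch.randn(100, 1)
y = X @ true_w + 0.1 * torch.randn(1024, 1)

def make_mlp():
    # A deep, narrow MLP makes the effect of initialization easy to see.
    dims = [100] + [64] * 6 + [1]
    layers = []
    for i in range(len(dims) - 1):
        layers.append(nn.Linear(dims[i], dims[i + 1]))
        if i < len(dims) - 2:
            layers.append(nn.Tanh())
    return nn.Sequential(*layers)

def init_xavier(model):
    # Xavier/Glorot initialization: weight variance scaled by fan-in and fan-out.
    for m in model.modules():
        if isinstance(m, nn.Linear):
            nn.init.xavier_uniform_(m.weight)
            nn.init.zeros_(m.bias)

def init_naive(model, std=1.0):
    # Simpler baseline: the same Gaussian scale for every layer, ignoring fan-in/out.
    for m in model.modules():
        if isinstance(m, nn.Linear):
            nn.init.normal_(m.weight, mean=0.0, std=std)
            nn.init.zeros_(m.bias)

def train(model, steps=200, lr=1e-2):
    # Full-batch SGD on mean-squared error; returns the loss at every step.
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    losses = []
    for _ in range(steps):
        opt.zero_grad()
        loss = loss_fn(model(X), y)
        loss.backward()
        opt.step()
        losses.append(loss.item())
    return losses

for name, init in [("xavier", init_xavier), ("naive std=1", init_naive)]:
    model = make_mlp()
    init(model)
    losses = train(model)
    print(f"{name:12s} first loss {losses[0]:8.3f}  final loss {losses[-1]:8.3f}")
```

Plotting the two loss curves (rather than only the first and last values) makes the difference in convergence speed much easier to see.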