You've done it. Congratulations. I hope you all now have a solid understanding of PyTorch, and more importantly, of some of the big ideas behind deep learning: different forms of initialization, gradient descent, regularization, different architectures like conv nets and GANs and RNNs and transformers, and how to build them into methods like reinforcement learning.

I also hope you can see the many similarities across the different applications and architectures. We've seen over and over the idea of autoencoding, or masking. We've seen self-supervised learning by blocking out some pixels or masking some words, then taking these self-trained networks, these conv nets or transformers, and using them to embed images or text, and then fine-tuning them, adjusting the last layer or last few layers. Whole industries are now arising, as you well know, on these applications, and on lots of GPU power on top of them. I think we're going to see an amazing decade of taking these core techniques and applying them to all sorts of areas.

Finally, I hope your projects are giving you a chance to think more about specific applications of these techniques: how to incorporate the knowledge and the loss functions specific to a given problem. Then we'll move on to seeing all of your projects, and to hearing from you as you use deep learning throughout your careers, which I think many of you will. After, say, 522, that will not be the end of deep learning. Thank you so much for a great semester, and go forth and prosper.
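The masked self-supervised objective mentioned above can be sketched in a few lines of PyTorch. This is a toy illustration on random data, not any particular model from the course: a single linear layer stands in for the conv net or transformer, some input positions are blanked out, and the loss is computed only on the hidden positions that the network must reconstruct.

```python
import torch

torch.manual_seed(0)

# Toy masked-autoencoding objective: hide random entries of the input and
# train the model to reconstruct them. A single linear layer is a
# hypothetical stand-in for a conv net or transformer.
x = torch.randn(4, 16)            # a batch of "inputs" (pixels or token features)
mask = torch.rand(4, 16) < 0.25   # mask out roughly 25% of positions
x_masked = x.clone()
x_masked[mask] = 0.0              # blank out the masked positions

model = torch.nn.Linear(16, 16)
recon = model(x_masked)

# The reconstruction loss is computed only on the positions that were hidden,
# so the network is trained to fill in what it cannot see.
loss = ((recon - x)[mask] ** 2).mean()
loss.backward()
print(loss.item())
```

The same pattern scales up directly: mask image patches instead of scalar entries, or word tokens instead of feature dimensions, and the "fill in the blanks" loss stays the same.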
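Likewise, "fine-tuning by adjusting the last layer or last few layers" has a standard PyTorch shape: freeze the pretrained backbone's parameters and hand the optimizer only the new head. The tiny network below is a hypothetical stand-in for a real pretrained model, just to show the mechanics.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Hypothetical stand-in for a pretrained backbone (e.g. a conv net's features).
backbone = nn.Sequential(
    nn.Linear(32, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
)
head = nn.Linear(64, 10)  # new task-specific last layer

# Freeze the backbone: these parameters receive no gradients.
for p in backbone.parameters():
    p.requires_grad = False

model = nn.Sequential(backbone, head)
# Only pass the trainable (head) parameters to the optimizer.
optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-3
)

x = torch.randn(8, 32)
y = torch.randint(0, 10, (8,))
loss = nn.functional.cross_entropy(model(x), y)
loss.backward()
optimizer.step()

# Frozen backbone weights have no gradients; the head's weights do.
print(backbone[0].weight.grad is None)   # True
print(head.weight.grad is not None)      # True
```

To fine-tune the last few layers instead of just the head, un-freeze those layers' parameters as well; the optimizer filter above picks them up automatically.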