Well, welcome to Tutorial 2 of Week 8. We'll be talking about GANs today. Let's briefly recap Tutorial 1 from this week. We learned about the reasons why we want unsupervised learning, and we saw autoencoders as a simple way of doing such unsupervised learning. We learned about the idea of wanting generative models of the world and why that's useful, and we learned about variational autoencoders as one way of doing that. It's interesting that VAEs build a bridge between what you might learn in a statistics course, namely variational Bayesian methods, and deep learning, and in a way combine the advantages of both, which makes them really useful for certain things. And at the end we saw many ways in which autoencoders can actually be used. Now, before we talk about GANs today, I want to give you the chance to see a little bit under which circumstances VAEs fail.
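To make that "bridge" between variational Bayes and deep learning a bit more concrete, here is a minimal NumPy sketch (not from the lecture itself) of the two terms in the standard VAE training objective: a reconstruction term, and the closed-form KL divergence between a diagonal-Gaussian encoder distribution and a standard-normal prior. The function names and the squared-error reconstruction term are illustrative choices, not a prescribed implementation.

```python
import numpy as np

def gaussian_kl(mu, logvar):
    """Closed-form KL( N(mu, diag(exp(logvar))) || N(0, I) ).

    This is the regularizer in the standard VAE loss; it pulls the
    encoder's latent distribution toward the standard-normal prior.
    """
    return -0.5 * np.sum(1.0 + logvar - mu**2 - np.exp(logvar))

def vae_loss(x, x_recon, mu, logvar):
    """ELBO-style loss: reconstruction error plus KL regularizer.

    Squared error stands in here for the negative log-likelihood
    term; the exact form depends on the decoder's output model.
    """
    recon = np.sum((x - x_recon) ** 2)
    return recon + gaussian_kl(mu, logvar)

# When the encoder outputs exactly the prior (mu = 0, logvar = 0),
# the KL term vanishes; any deviation makes it positive.
mu, logvar = np.zeros(4), np.zeros(4)
print(gaussian_kl(mu, logvar))             # → 0.0
print(gaussian_kl(mu + 1.0, logvar) > 0)   # → True
```

The KL term is what gives the VAE its "variational Bayesian" flavor: it is an analytic formula, while the reconstruction term is the familiar deep-learning part trained by backpropagation.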