Great, so now let's wrap up for today. What are the advantages of GANs as generators? Backpropagation makes them fast. They don't involve integrals that are impossible to solve, you don't need Markov Chain Monte Carlo, which is often very inefficient, and you can use any differentiable function for generating images.

But there are clear problems with GANs as generators. Not that they don't work, they do in a sense, but the problems are real. It's unclear when we should stop training. There's no explicit way of getting at the probability distribution they model. They're hard to train, because the minimax game we set up is rather poorly understood (the objective is written out below). In many ways it's necessary to babysit these networks: there are so many ways they can go wrong, and success often requires understanding where, how, and why they go wrong. They're hard to evaluate: FID is a good metric (a sketch of how it's computed follows below), but it's not clear that it measures what we really care about. There are lots of local optima; for example, the network can effectively memorize aspects of the training data. And they're very hard to invert: even if we have a good model that goes from z to an image, it's very hard to go from an image back to z (see the inversion sketch below).

Now, what could possibly go wrong? GANs are clearly getting better. Every year, as Goodfellow pointed out, we get more beautiful GANs. What could possibly go wrong? Well, the first thing is a huge ethics-and-society problem. You can now make videos of anyone saying anything, wearing any clothes you want them to wear, and it's getting progressively harder to prove whether something is true or not. It goes so far that Reddit now bans deepfake videos and face swaps, which are truly problematic.

There are further caveats. GANs make nice images, but they do not faithfully model high-dimensional image distributions. A lot of people, when they first get exposed to GANs, think: wow, this is awesome, we now have infinite training data, because we can just generate it with a GAN. But that's not the case. A GAN does not properly model the real high-dimensional image distribution; it's just really good at making these things feel real to us, and it is certainly not a great model of the probability distribution.

So what did we learn today? We learned the basic idea of GANs, we got some mathematical intuition about what they do, and we saw many cool ways of producing images. And making fake Amazon reviews will never be the same again, thanks to GANs.

Now it's time for you to submit your exercises; make sure you submit them. Also, you have helped us so much in making this course better. Keep up the great work of giving us feedback, and please try to give us that feedback every week. Thank you.
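For reference, the minimax objective mentioned above, in the formulation of Goodfellow et al. (2014), where the generator $G$ and the discriminator $D$ play a two-player game:

$$\min_G \max_D \; \mathbb{E}_{x \sim p_{\text{data}}}\big[\log D(x)\big] + \mathbb{E}_{z \sim p_z}\big[\log\big(1 - D(G(z))\big)\big]$$

The saddle-point structure is exactly what makes training delicate: the two networks chase a moving target, and the usual convergence intuitions for plain gradient descent on a fixed loss do not carry over.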
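To make the FID remark concrete, here is a minimal sketch of the Fréchet distance computation at the heart of the metric. It assumes Inception-v3 activations have already been extracted for real and generated images; `feats_real` and `feats_fake` are hypothetical placeholder arrays for those features.

```python
import numpy as np
from scipy import linalg

def frechet_distance(mu1, sigma1, mu2, sigma2):
    """Frechet distance between two Gaussians N(mu1, sigma1) and N(mu2, sigma2)."""
    diff = mu1 - mu2
    # Matrix square root of the product of the covariances.
    covmean, _ = linalg.sqrtm(sigma1 @ sigma2, disp=False)
    if np.iscomplexobj(covmean):  # discard tiny imaginary parts from numerics
        covmean = covmean.real
    return diff @ diff + np.trace(sigma1 + sigma2 - 2.0 * covmean)

# Hypothetical usage: feats_real, feats_fake are (N, 2048) Inception activations.
# mu_r, s_r = feats_real.mean(0), np.cov(feats_real, rowvar=False)
# mu_f, s_f = feats_fake.mean(0), np.cov(feats_fake, rowvar=False)
# fid = frechet_distance(mu_r, s_r, mu_f, s_f)
```

Note that FID only compares two Gaussians fitted to feature statistics, which is part of why it may not capture everything we care about.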
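And to make the inversion problem concrete, here is a minimal sketch of the standard optimization-based approach: freeze the generator and run gradient descent on the latent code. `G` (a pretrained generator mapping z to an image) and `target` (a single image tensor) are assumptions for illustration, not anything from the lecture's codebase.

```python
import torch
import torch.nn.functional as F

def invert(G, target, z_dim=128, steps=1000, lr=0.05):
    # Start from a random latent code and optimize it directly.
    z = torch.randn(1, z_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        # Match the generated image to the target in pixel space.
        loss = F.mse_loss(G(z), target)
        loss.backward()
        opt.step()
    return z.detach()

# Hypothetical usage: z_hat = invert(G, target_image)
```

Because G(z) is highly non-convex in z, this objective is full of local optima and the recovered code depends on the initialization, which is exactly why inversion is hard.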