So, let's talk a little bit about the generation of novel images. As you saw, interpolation worked, in a way: lots of eights appear halfway between other digits. It's less obvious what happens halfway between CIFAR images. Still, it shows that we can generate images with a decoder. Now, how would we generate samples? Well, we could take z randomly from that latent space, maybe according to a Gaussian distribution, to generate totally new images.

So how should we think about this latent space where z lives? It arguably compresses an image, but does it really allow us to create an image? Does any combination of latent activities produce a meaningful image? Should it? What we really want is a generative model. Machine learning is, in a way, obsessed with generative models. Feynman said, "What I cannot create, I do not understand," and machine learning takes up that idea. Goodfellow, for example, writes that the promise of deep learning is to discover rich, hierarchical models that represent probability distributions. So Goodfellow doesn't talk about just compressing something; he really wants a generator. At some level, you can say: there's reality out there, and if we understand reality, we should be able to generate it. That's from Goodfellow's classic GAN paper. And you can say we want to produce fake images, audio, and so forth. There's a great webpage, This Person Does Not Exist: every time you go there, it gives you a new portrait. A wonderful service if you want to produce fake reviews for Amazon.

So, how should we sample to produce a good probability distribution of potential images? Well, as a first approximation, let us assume that the latent variables have a Gaussian distribution. Let's just try that: we sample z, generate new images from the decoder, and see if they actually look like the real thing and have the same distribution as the real thing.
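To make that last step concrete, here is a minimal sketch in PyTorch. The names `decoder`, `latent_dim`, and `n_samples` are assumptions standing in for whatever trained decoder network and latent dimensionality you have; the only idea it demonstrates is drawing z from a standard Gaussian prior and pushing it through the decoder.

```python
import torch

@torch.no_grad()  # no gradients needed when we only sample
def sample_images(decoder: torch.nn.Module, latent_dim: int, n_samples: int = 16):
    """Draw latent codes from a standard Gaussian and decode them into images.

    `decoder` is a hypothetical trained network mapping (n_samples, latent_dim)
    latent vectors to images; swap in your own model.
    """
    # Assume z ~ N(0, I): draw n_samples independent standard-normal codes.
    z = torch.randn(n_samples, latent_dim)
    # The decoder turns each random code into a candidate "novel" image.
    return decoder(z)

# Hypothetical usage, with a 32-dimensional latent space:
# fakes = sample_images(my_decoder, latent_dim=32, n_samples=16)
```

Whether these samples look like real images depends entirely on whether the latent codes produced during training actually follow a Gaussian distribution, and that is exactly what this experiment tests.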