 So did it work? Well, for many of you it will undoubtedly have worked. So what do we often find? If the original looks maybe something like this here, then the reconstruction in many cases might look like this. This is interesting, no? In a way, the error we made here is an improvement: what we have on the right-hand side feels more like what a real three looks like. Now, does that mean that the latent space z captures meaningful features? Well, I don't know. Let's think about it. If it did, what would we expect? We could take a face with a moustache and a face without a moustache, and arguably the difference between those two images is the moustache. So what would we like to have? We would like meaningful vector arithmetic, where we take face x without the moustache, just the face, add the moustache component, and get back the face with the moustache. Now let's look at that. We can start by visualizing things in two dimensions. If we look at the different characters, they will hopefully form clusters in some meaningful space, and those clusters might be far away from one another. The question then is: can we interpolate between classes? So why don't you explore this property of inter-class generalization?
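The moustache arithmetic and the interpolation between clusters can be sketched in a few lines. This is a minimal illustration, assuming the latent codes below came from some trained encoder; all vectors and names here are hypothetical stand-ins, not the actual model:

```python
import numpy as np

# Hypothetical latent codes; in practice these would come from a trained
# encoder, e.g. z = encoder(image). Values are illustrative only.
z_face_plain = np.array([0.5, -1.2, 0.3])      # face without moustache
z_face_moustache = np.array([0.5, -1.2, 1.8])  # same face with moustache

# Vector arithmetic: the difference isolates a "moustache direction" ...
z_moustache = z_face_moustache - z_face_plain

# ... which we can add to a different face's latent code; decoding the
# result should then show that face with a moustache.
z_other_face = np.array([-0.7, 0.9, 0.2])
z_other_with_moustache = z_other_face + z_moustache

def lerp(z_a, z_b, t):
    """Point a fraction t of the way from latent code z_a to z_b."""
    return (1.0 - t) * z_a + t * z_b

# Inter-class interpolation: walk t from 0 to 1 between two latent codes
# (e.g. a "3" cluster and an "8" cluster) and decode each intermediate
# point to see whether the model generalizes between the classes.
path = [lerp(z_face_plain, z_other_face, t) for t in np.linspace(0.0, 1.0, 5)]
```

Whether the decoded intermediate points look like plausible images, rather than noise, is exactly the inter-class generalization question posed above.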