So, let's talk a little about the cool things we can do with variational autoencoders. We can use them for better weight initialization: we train unsupervised on unlabeled inputs, then throw away the decoder network and continue training with labeled data. We can use them for explainable AI: if we force a network to describe things in a low-dimensional way, we have much more hope of interpreting what the weights mean, or what the nature of a mis-prediction is. In that sense, we can vary each of the features in the code vector and see how the output changes; ideally that gives us a code that is interpretable to us.

We can use VAEs for something like style disentanglement. We take an image, pass it through a VAE into a latent space, and say that we expect one part of that latent space to tell us something about the artist, one part about the time, and one part about the style; that might allow us to learn about artist, time, and style separately. In that case we will of course have several cost functions: one aimed at predicting the artist, one the time, one the style, and on top of that one measuring how well we actually reconstruct the images that go in there.

People have also used the idea of VAEs to play music. Let me see if I can play this here. So this is music played by a VAE: it is trained on some data and then generates new data. To me, given that I don't know all that much about music, it seems pretty impressive. You can then interpolate in the latent space. Similarly, here we have a fun video of Nick Cage DeepFakes, which is funny for me because I'm practically face blind, but apparently for people who see faces, it's quite impressive.
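The idea of varying each feature of the code vector can be sketched in a few lines. This is a minimal illustration, not a trained model: a hypothetical linear decoder stands in for the VAE's decoder network, and we sweep one latent dimension while holding the rest fixed.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linear decoder standing in for a trained VAE decoder:
# maps a 4-dim latent code to a 16-pixel "image". In a real VAE this
# would be a neural network.
W = rng.normal(size=(16, 4))

def decode(z):
    return W @ z

# Latent traversal: hold the code fixed and sweep one dimension,
# observing how the decoded output changes. If the latent space is
# interpretable, each dimension should control one visible factor.
z = np.zeros(4)
sweep = []
for value in np.linspace(-2.0, 2.0, 5):
    z_mod = z.copy()
    z_mod[0] = value          # vary only dimension 0
    sweep.append(decode(z_mod))

sweep = np.stack(sweep)
print(sweep.shape)            # (5, 16): one decoded output per sweep value
```

In practice you would decode such a sweep for every latent dimension and inspect the resulting images to see which factor, if any, each dimension controls.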
Now there are of course issues with variational autoencoders, and we will learn a lot about them in the next tutorial. If we take inputs, say faces, and do VAE reconstructions of them, the reconstructed faces all look kind of bland; something is missing about them. You could say, for example, that no one is wearing glasses anymore. We might be able to combine VAEs with GANs and produce much better images, and that's what we'll do next week.
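One way to see why the reconstructions come out bland: under a pixel-wise squared-error loss, when several sharp outputs are equally plausible, their blurry average achieves a lower expected error than any one of them. A toy numpy sketch (the two "images" here are made up 1-D examples, not real data):

```python
import numpy as np

# Two equally likely sharp "images" the model might have to explain:
# a left edge and a right edge (1-D, 6 pixels for simplicity).
a = np.array([1., 1., 1., 0., 0., 0.])
b = np.array([0., 0., 0., 1., 1., 1.])

def expected_mse(x):
    # Expected pixel-wise squared error when a and b are each
    # an equally likely target.
    return 0.5 * np.sum((x - a) ** 2) + 0.5 * np.sum((x - b) ** 2)

blurry_mean = 0.5 * (a + b)   # the average of both modes: all pixels 0.5

# The blurry average beats either sharp candidate under expected MSE,
# which is why pixel-wise reconstruction losses favor washed-out output.
print(expected_mse(blurry_mean))  # 1.5
print(expected_mse(a))            # 3.0
```

This is exactly the kind of failure a GAN-style loss can help with, since a discriminator penalizes outputs that do not look like any real sample.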