Now that we've seen the controllable GAN, I want to give you a few examples of alternatives. For example, people have been building GANs that take text as an extra input. If you think about it, it's exactly the same problem, and we can phrase it the same way: the text input effectively plays the role of a classifier output that tells me what kind of content is in an image, and I can use the same ideas to steer a GAN to produce the right things. How does it work? Again, we basically add a descriptor of the text to the input of the generator, and equally to the input of the discriminator.

Here's an example where they used labeled image data: say we have labels for areas of the image, maybe a label for "river" or a label for "waterfall", and that label map is also used as input. This is what you can see here. What's the structure? The same basic idea: we have this extra input, which we could otherwise get from a classifier that segments the image for us.

Now here's another idea in this space, one of those tricks that is sufficiently useful that I want you to apply it yourself. These are cycle GANs. What's the idea of a cycle GAN? We have photos and we have paintings by Monet. Wouldn't it be cool if we could convert Monet paintings into photos and vice versa? Wouldn't it be cool if we could take photos in summer and produce photos in winter, or the other way around?

Well, here's a training idea. I start with a real Monet painting. I produce a fake photo out of it, and then I take that fake photo and translate it back. If my process works the right way, so that I really have a translation between Monet paintings and photographs, then these two images should be the same. This is of course super useful, because it gives us an extra training signal. Equally, we could start with a photo, go from the photo to a painting and back to a photo, and again the two should be the same.
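To make the "extra input on both networks" idea concrete, here is a minimal sketch of a conditional GAN in PyTorch. All sizes and names (`NOISE_DIM`, `EMBED_DIM`, the MLP layers) are hypothetical choices for illustration; the one essential point is that the condition is embedded and concatenated to the inputs of both the generator and the discriminator:

```python
import torch
import torch.nn as nn

# Hypothetical sizes for a toy conditional GAN on flattened 28x28 images.
NOISE_DIM, EMBED_DIM, IMG_DIM, N_CLASSES = 64, 16, 784, 10

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(N_CLASSES, EMBED_DIM)
        self.net = nn.Sequential(
            nn.Linear(NOISE_DIM + EMBED_DIM, 256), nn.ReLU(),
            nn.Linear(256, IMG_DIM), nn.Tanh())

    def forward(self, z, labels):
        # The condition enters simply as extra input dimensions.
        return self.net(torch.cat([z, self.embed(labels)], dim=1))

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(N_CLASSES, EMBED_DIM)
        self.net = nn.Sequential(
            nn.Linear(IMG_DIM + EMBED_DIM, 256), nn.LeakyReLU(0.2),
            nn.Linear(256, 1), nn.Sigmoid())

    def forward(self, x, labels):
        # The discriminator also sees the condition, so it can reject
        # realistic images that do not match the requested label.
        return self.net(torch.cat([x, self.embed(labels)], dim=1))

G, D = Generator(), Discriminator()
z = torch.randn(8, NOISE_DIM)
labels = torch.randint(0, N_CLASSES, (8,))
fake = G(z, labels)
score = D(fake, labels)
print(fake.shape, score.shape)  # torch.Size([8, 784]) torch.Size([8, 1])
```

The same pattern covers the text-conditioned case: replace the label embedding with a text encoder, and the rest of the wiring is unchanged.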
So in the cycle GAN we can define this for all kinds of different settings. And then we just add this extra loss. What is the extra loss? As we go into the other domain and come back, these two images should be the same, and the same holds if we go in the other direction. So we have an extra loss that we want to minimize, which is basically the difference between the original image and its round-trip reconstruction, in both directions.

Now what we will do is build a very special cycle GAN. Do you remember that in week three we had this little Kaggle competition on animal faces? Arash made things difficult for you: he scrambled the images. Why was that a great idea? Well, if he hadn't scrambled them, you could have produced an extra training set, maybe out of the ImageNet dataset, and done better through that. Or you could have used a pretrained ConvNet and done better through that. He explicitly wanted to prevent you from doing this, so he scrambled the images so you couldn't bring in any more training data.

But I believe we can figure out how to unscramble them, because an unscrambled image will look more natural. Scrambling operations are cyclic: if I scramble an image and then scramble it back, I'm back to the original. Not only that: in one of the two domains, the images should look like actual, natural images. That means that, in a way, constructing a cycle GAN for this problem amounts to figuring out how Arash had been scrambling the images. If you had known how to do this in week three, you could have completely dominated the competition, used the ConvNet, and used all kinds of tricks there. So now I want you to build a simple cycle GAN: namely, a cycle GAN that de-scrambles the stimuli that we used in week three.
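The extra loss described above can be written down in a few lines. Here is a sketch (names hypothetical): `G` maps domain A to domain B (say, Monet to photo), `F` maps B back to A, and we penalize the L1 distance between each image and its round-trip reconstruction, in both directions. The weight `lam` is a common choice of hyperparameter, not something fixed by the lecture:

```python
import torch
import torch.nn as nn

def cycle_loss(G, F, real_a, real_b, lam=10.0):
    """Cycle-consistency loss: A -> B -> A and B -> A -> B round trips
    should reproduce the original images."""
    recon_a = F(G(real_a))   # A -> B -> A
    recon_b = G(F(real_b))   # B -> A -> B
    l1 = nn.L1Loss()
    return lam * (l1(recon_a, real_a) + l1(recon_b, real_b))

# Toy sanity check: identity "generators" make the round trip exact,
# so the cycle loss is zero.
ident = nn.Identity()
a = torch.rand(2, 3, 32, 32)
b = torch.rand(2, 3, 32, 32)
loss = cycle_loss(ident, ident, a, b)
print(loss.item())  # 0.0
```

In a full CycleGAN this term is added to the usual adversarial losses for the two discriminators; the cycle term is what ties the two translation directions together.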
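The claim that scrambling is cyclic is easy to verify. Here is a sketch under an assumed setup (I don't know the exact scrambling Arash used; this assumes a fixed random permutation of 8x8 patches): applying the inverse permutation recovers the original image exactly, which is exactly the property that makes descrambling fit the cycle-GAN framework.

```python
import numpy as np

def make_perm(n_patches, seed=0):
    # A fixed random permutation of patch indices (assumed scrambling).
    rng = np.random.default_rng(seed)
    return rng.permutation(n_patches)

def scramble(img, perm, patch=8):
    # Cut the image into patch x patch tiles (row-major), reorder them
    # by `perm`, and reassemble.
    h, w = img.shape[:2]
    patches = [img[r:r+patch, c:c+patch]
               for r in range(0, h, patch) for c in range(0, w, patch)]
    shuffled = [patches[i] for i in perm]
    per_row = w // patch
    rows = [np.concatenate(shuffled[i:i+per_row], axis=1)
            for i in range(0, len(shuffled), per_row)]
    return np.concatenate(rows, axis=0)

def unscramble(img, perm, patch=8):
    # Scrambling with the inverse permutation undoes the scramble.
    inv = np.argsort(perm)
    return scramble(img, inv, patch)

img = np.arange(32 * 32, dtype=np.float32).reshape(32, 32)
perm = make_perm((32 // 8) ** 2)
restored = unscramble(scramble(img, perm), perm)
print(np.array_equal(restored, img))  # True
```

A cycle GAN for this task would, in effect, have to learn something equivalent to `unscramble` from data alone, using only the fact that descrambled images look natural.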