Hi, my name is Bolin. I'm relatively new to data science and machine learning, so hopefully you won't have too many questions, or any questions, for me. I'm here to talk about Generative Adversarial Networks, or GANs for short. I'll be saying "GANs" from here on; it's hard to say "Generative Adversarial Network" too many times without losing your mind.

There were three reasons why I decided to do a small project on GANs. First, as machine learning enthusiasts, we're always faced with the problem of having too little data, right? So what if you could find a way to generate more data for your machine learning models? Could it help improve accuracy even though the extra examples were generated rather than actual training data? Second, I wanted to understand what neural networks in general are doing under the hood, and I thought GANs might be a good way to build that understanding. And lastly, I've always wanted to be a painter, but that career route was cut short, and I decided maybe GANs could be one way for me to be a great painter after all.

So GANs are actually a system of two neural networks: one called the discriminator and one called the generator. I took this diagram off the web; you can just Google search it, it's one of the images up there. Basically, the two networks fight against each other in a zero-sum game framework, so it's adversarial. The discriminator network aims to distinguish generated images coming from the generator network from actual training data. Its goal is quite simple: is this a generated image or not? If it is, output one, or a probability close to one; if not, output a probability close to zero. The generator aims to fool the discriminator into thinking that the images it generates are actual training data and not generated images.
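The talk skips the actual code, so here is only a minimal NumPy sketch of the two players, assuming a single linear layer each. The layer sizes, weights, and function names are my own placeholder choices, not the speaker's architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy "discriminator": one linear layer plus a sigmoid. Following the
# talk's convention, it outputs a probability near 1 for generated
# images and near 0 for real training images.
def discriminator(x, w, b):
    return sigmoid(x @ w + b)

# Toy "generator": maps random noise to a fake sample. tanh keeps the
# output in [-1, 1], matching the scaled training images.
def generator(z, w, b):
    return np.tanh(z @ w + b)

# Wiring them together: noise -> generator -> discriminator.
z = rng.uniform(-1.0, 1.0, size=(4, 8))      # a batch of 4 noise vectors
w_g = rng.normal(size=(8, 3))                # placeholder generator weights
fake = generator(z, w_g, 0.0)                # 4 fake "images" in [-1, 1]
w_d = rng.normal(size=(3, 1))                # placeholder discriminator weights
p_generated = discriminator(fake, w_d, 0.0)  # probability each is generated
```

In a real GAN both players would be deep convolutional networks, but the adversarial wiring, noise into the generator, generator output into the discriminator, is the same.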
As an analogy, you can think about it as a money counterfeiter versus the police. The police are trying to catch the counterfeiters, and the counterfeiters are getting better at counterfeiting money over time, but the police are also getting better at catching them. After a while the two sides improve at the same rate, and therefore there's no relative improvement in performance.

I'll skip a lot of the code since I have five minutes and it's, I won't say boring, but pretty dry for a five-minute talk. These are the actual images I took off a Kaggle dataset. It's by, I forgot the name. It's a series of images of paintings, so you can see the actual images here, and this is what happens after I transform them to a range between minus one and one. That looks terrible, I don't know what it's doing.

First of all, you have to code up a generator and also a discriminator. Remember, they're engaged in a zero-sum game. The loss function of the generator comes from not being able to fool the discriminator. The discriminator, on the other hand, has two different kinds of losses: one from thinking that generated images are real, and two from thinking that the real training images are fake. Using those ideas, you can come up with the loss functions. All right, I won't go through all of it.

This was trained in TensorFlow. I only had 3,000 images to work with because, you know, on a MacBook I couldn't handle too many images, which is quite sad. But I just wanted to show you how, over time, the generator learns to generate images that resemble, I won't say exactly, art images. It starts to learn on its own using the loss function that is fed back to the network.
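To make the loss description concrete, here is a hedged NumPy sketch of the pixel scaling and the two losses as I understand them from the talk: binary cross-entropy, under the talk's convention that the discriminator outputs a probability near 1 for generated images. The function names are mine, not from the speaker's code:

```python
import numpy as np

def scale_to_unit_range(images):
    # Map 8-bit pixel values in [0, 255] to [-1, 1], as described in the talk.
    return images.astype(np.float32) / 127.5 - 1.0

def discriminator_loss(d_on_real, d_on_generated, eps=1e-7):
    # Term 1: penalize the discriminator for thinking generated images
    # are real, i.e. for outputting values near 0 on generated images.
    loss_on_generated = -np.mean(np.log(d_on_generated + eps))
    # Term 2: penalize it for thinking real training images are generated.
    loss_on_real = -np.mean(np.log(1.0 - d_on_real + eps))
    return loss_on_generated + loss_on_real

def generator_loss(d_on_generated, eps=1e-7):
    # The generator's loss comes from failing to fool the discriminator:
    # it is large when the discriminator confidently flags its images
    # as generated (outputs near 1).
    return -np.mean(np.log(1.0 - d_on_generated + eps))
```

With these definitions, a discriminator that classifies well (outputs near 1 on generated images, near 0 on real ones) has a small loss, and a generator that fools it (drives those outputs toward 0) has a small loss, which is the zero-sum tension driving training.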
All right, so as a start, I sampled from uniform random noise between minus one and one, and this is the output. Over time, you can see it improving. I trained this at 3 a.m., actually. It's quite scary, to be honest; you should never train it at night. But you can see some resemblance of, quote unquote, art images, in terms of faces, the sky, and things like that.

So there are two conclusions I drew from this. First, even without a lot of images, the generator network can generate images that are close to the training data. I won't say I could sell them to the Louvre, the Louvre definitely wouldn't accept these, but you can see that it's possible to use them as training data. The second conclusion is that painting is really quite difficult. I can't even paint like this. So those are my two conclusions, and that ends my presentation. Thank you. Any questions? Hopefully not. Oh, there's one question.

Did I have fun? I had lots of fun, though the images after every ten epochs or so scared the hell out of me. You're always waiting to see what kind of images it can generate. So yes, I had fun.

Did I use Google Colab? I didn't, because I didn't know about the resource, but from now on, if I have anything computationally intensive to do, I'll do it on Colab. You can just prepend the Colab bit to the front of a GitHub URL and run anything on GitHub on a GPU immediately.

Are they any good? Yeah, the GANs are perfect, if you like them. Thank you. Thank you. Bye.