I'm conscious of the time, so I'll be reasonably quick. So originally I was going to talk about TPUs and Colabs and some cool new things with TPUs and Colabs. Unfortunately, there are a lot of bugs at the moment in TPUs and Colabs. I was up until 4 AM talking to one of the Google people on the West Coast in the States, and we couldn't get it fixed, so that will be next week. So this morning I was thinking, wow, what can I talk about tonight? And someone pulled out their phone and showed me a really cool picture from a GAN and thought that it was done one way. And I was explaining to them, no, it's actually not done like that at all. So I thought, OK, why don't I talk about that? Because this is one of the things that is in TF Hub. So quickly about myself: I'm a GDE for ML, and I work on deep learning related to dialogue and generative models. In the past I've done a number of startups. My current startup is Red Dragon, which Martin mentioned. We're Google partners; we do consulting for people on prototyping deep learning and AI applications, and we're also doing a lot of research and work around conversational AI. We actually gave a talk last week in San Francisco all about conversational AI, and we may do that here early next year sometime as well. OK, so how many of you know TF Hub? Wow, almost no one. Very interesting. In some ways I side with you; I wasn't a big fan of TF Hub. But what is it? TF Hub is basically a collection of reusable pieces of ML models, and it's going to become a lot more important with TensorFlow 2. It's been a bit neglected because it's quite tricky to use with Keras at the moment; you have to use a Lambda-layer trick, and there are a few other tricks to get it to work properly. That should get easier going forward.
One of the cool things about it, though, and the thing I'm going to show you tonight, is that there are a lot of really big models that you just wouldn't be able to train yourself unless you've got quite a big budget, and they're being put up on TF Hub for you to download and use. Google actually refers to the things on TF Hub as modules; they're really just parts of different models. So how many of you know transfer learning? Most people know transfer learning, right? TF Hub allows you to take part of a model and then reconstruct, say, the head of that model if you want to train it. For example, Martin talked earlier about ELMo, and the ELMo model is actually one of the models you can get on TF Hub. It's quite easy to download it and then just retrain the top few layers for doing classification or something like that. There's a bunch of different models up there: models for image classification; a whole set you can use for extracting feature vectors from images, which is very commonly used for things like image search; text and embedding models; and a bunch of models from DeepMind for video and action recognition, which have been trained on a lot of different material. The ones I'm going to talk about tonight are the GAN models. So how many of you know what a GAN is? Whoa. I spoke about GANs here about a year and a half ago, and I think no one put up their hand, so I'm pleased to see that. OK, if you don't know, a GAN is what we call a generative adversarial network. The paper came out in about June 2014 from Ian Goodfellow, who now works at Google again after a stint at OpenAI. They really took off in 2016.
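Going back to the transfer-learning point for a second, here's a rough sketch of the workflow: treat the downloaded module as a frozen feature extractor and train only a small new head on top. This is a toy numpy stand-in, not real TF Hub code; the "module" here is just a fixed random projection, and the dataset is synthetic, purely to show the shape of the idea.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a TF Hub module: a frozen, 'pretrained' feature extractor.
# A real one would be e.g. an image feature-vector module downloaded from TF Hub.
W_frozen = rng.normal(size=(64, 16)) / 8.0   # scaled so tanh stays out of saturation

def module(x):
    """The frozen 'module': maps raw 64-d inputs to 16-d feature vectors."""
    return np.tanh(x @ W_frozen)

# A tiny synthetic binary task standing in for the downstream dataset.
X = rng.normal(size=(200, 64))
y = (X[:, 0] > 0).astype(float)

# The only trainable part: a new logistic-regression 'head' on the features.
feats = module(X)        # computed once; the module's weights never change
w = np.zeros(16)
b = 0.0
lr = 0.5
for _ in range(300):
    p = 1.0 / (1.0 + np.exp(-(feats @ w + b)))   # sigmoid
    grad = p - y                                  # cross-entropy gradient w.r.t. logits
    w -= lr * feats.T @ grad / len(y)
    b -= lr * grad.mean()

acc = float(((feats @ w + b > 0) == (y > 0.5)).mean())
print(f"retrained-head accuracy: {acc:.2f}")
```

The point is that only `w` and `b` are updated; everything inside `module` stays fixed, which is why retraining "the top few layers" is cheap even when the underlying model was very expensive to train.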
From 2016 on, we started to see some of the really interesting GANs coming out, and there have been lots of variations of them. For a while there wasn't a lot that was new and interesting, but in the recent past there have been a couple of really interesting new ones, one of which I'm going to show you tonight. So how does a GAN work? In the example we're going to look at tonight, we're using them for image generation: we're creating images that didn't exist before. If you look at this picture here, this is a whole bunch of different chickens created with this GAN. It's important to understand that none of these examples were actually in the training set. It's not just remembering something and reproducing it. It's learned, say, the probability distribution of what a chicken should look like, and it works out not only what a chicken should look like but what the background behind a chicken should look like. You can see here that it can get some pretty accurate-looking chickens, and certainly accurate-looking backgrounds, but it can also mess them up, like the two-headed chicken in the top left. I want to make this fun so you can play around with them. OK, the classic analogy for a GAN, if you haven't come across this before, is a counterfeiter and the police. You end up having two networks that are fighting against each other. One of them, which we call the generator, has the job of generating new samples that will fool the police, which we call the discriminator. The way we do that is we randomly show the discriminator real examples and generated examples, and its loss function rewards telling them apart. I'm not going to go a lot into this tonight; if you're interested, there's a video I did online last year that explains this more in depth.
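The counterfeiter-versus-police game can be sketched end to end on a toy problem. This is my own minimal numpy sketch, not anything from the talk's Colabs: the "real data" is a 1-D Gaussian, the generator and discriminator are each a single affine/logistic unit, and they take alternating gradient steps against each other.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

REAL_MEAN = 3.0  # 'genuine notes' are samples from N(3, 1)

# Counterfeiter (generator): G(z) = a*z + b, starts off producing N(0, 1).
a, b = 1.0, 0.0
# Police (discriminator): D(x) = sigmoid(w*x + c).
w, c = 0.0, 0.0

lr = 0.05
for _ in range(3000):
    real = rng.normal(REAL_MEAN, 1.0, size=64)
    z = rng.normal(size=64)
    fake = a * z + b

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    d_real = sigmoid(w * real + c)
    d_fake = sigmoid(w * fake + c)
    w -= lr * (np.mean((d_real - 1.0) * real) + np.mean(d_fake * fake))
    c -= lr * (np.mean(d_real - 1.0) + np.mean(d_fake))

    # Generator step: push D(fake) toward 1 (non-saturating GAN loss).
    d_fake = sigmoid(w * (a * z + b) + c)
    g_grad_logit = (d_fake - 1.0) * w
    a -= lr * np.mean(g_grad_logit * z)
    b -= lr * np.mean(g_grad_logit)

fake_mean = float(np.mean(a * rng.normal(size=1000) + b))
print(f"generator mean after training: {fake_mean:.2f} (real mean {REAL_MEAN})")
```

After training, the generator's output distribution has drifted toward the real data's mean: the counterfeiter has learned to pass, at least on this one statistic.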
But basically the loss function is all about this game, and we want to get to an equilibrium where the discriminator is quite strong, but the generator is also strong enough to fool the discriminator a certain percentage of the time. That's what we're looking at here. The particular one I'm going to show you is what we call a conditional GAN. In a conditional GAN, we've got our real data that we're passing in, and our generator's output that's being discriminated against, but the interesting thing is we're also passing in a condition. In this case, the condition is a class from ImageNet. So just before, the class we were passing in was chicken: we're saying, OK, here's some random noise, and I want this generator to take this random noise and, with the condition of chicken, produce an image that looks like a chicken. Now, that in itself is not that hard when you're doing it on something like MNIST, where you've only got 10 classes. To do it with ImageNet, though, you've got 1,000 different classes, so it's quite a feat to produce a conditional GAN that can actually go up this high. For those of you who were in our deep learning developer class last year, I was doing GANs with the drawing dataset, what was it called? Quick, Draw!, yes. We were building GANs to do stuff with Quick, Draw!, and that was also a conditional GAN. These are much bigger, much more detailed. So here's one concept I want you to understand, and I'll be talking about it more. This is an example with MNIST: if we take the latent codes and put them in three dimensions, the different classes live in different parts of the universe, if you want to think of it like that.
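The "noise plus a condition" input can be sketched concretely. This is an illustrative numpy sketch of how a conditional generator's input is typically assembled, noise concatenated with a class embedding; the sizes and the embedding scheme here are my own placeholders, not BigGAN's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

NUM_CLASSES = 1000   # ImageNet-1k classes
Z_DIM = 128          # noise size (illustrative; the real module's sizes differ)
EMBED_DIM = 128      # class-embedding size (illustrative)

# A class-embedding table; random here, learned in a real model.
class_embeddings = rng.normal(size=(NUM_CLASSES, EMBED_DIM))

def generator_input(class_id, batch_size=4):
    """Build the conditional generator input: random noise concatenated
    with the embedding of the requested ImageNet class."""
    z = rng.normal(size=(batch_size, Z_DIM))
    cond = np.tile(class_embeddings[class_id], (batch_size, 1))
    return np.concatenate([z, cond], axis=1)

# Every sample in the batch shares the condition (class 8, 'hen' in the
# standard ImageNet ordering); only the noise differs, which is why you
# get many different chickens that are all the same class.
batch = generator_input(8, batch_size=4)
print(batch.shape)
```

The condition half is identical across the batch while the noise half varies, which is exactly the "same class, different individuals" behaviour in the chicken grid earlier.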
To picture it: the ones live over here, other digits live in other regions, and you can imagine that each of these has a three-dimensional coordinate. Does that make sense to everyone? Now, often what we want to do is interpolate from one three-dimensional coordinate to another, and with MNIST the digit should change as we go. With BigGAN we're going to do the same thing, but in a much higher-dimensional space, and we're going to interpolate from one animal to another animal and see what it comes up with. It's very important to understand that it's not just doing some sort of visual morphing; it's actually trying to work out, OK, what would half a chicken plus a lion look like? Or something like that. All right, here's another example of interpolating through the space. These are some I made earlier: an owl turning into a Japanese Spaniel. The two rows are just two runs of the algorithm. We're interpolating across five images: the one on the far left is obviously the owl, the one on the far right is the Japanese Spaniel, and in between is the space between them. As it goes through that space, it creates all these images, all these creatures in this case, that just don't exist in the real world, and they're very interesting to play with. For me, the one right in the centre would be a really cute dog; I think if someone could actually breed something like this, you could make a lot of money selling it. The next one is kind of weird. This is a bald eagle going into, I've forgotten which dog breed this one is, but to me it looks like something from Gremlins. You can see that this one didn't turn out quite as well.
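The interpolation itself is simple: a straight line between two latent vectors, rendering an image at each step. Here is a minimal numpy sketch of that (a single combined latent per endpoint; in practice the demo interpolates the noise and the class condition, and the names here are mine).

```python
import numpy as np

def interpolate(a, b, steps=5):
    """Linearly interpolate from latent a to latent b, endpoints included."""
    ts = np.linspace(0.0, 1.0, steps)
    return np.stack([(1.0 - t) * a + t * b for t in ts])

rng = np.random.default_rng(1)
Z_DIM = 128  # illustrative latent size

# Endpoint latents (noise plus class condition) for, say, owl and spaniel.
latent_owl = rng.normal(size=Z_DIM)
latent_spaniel = rng.normal(size=Z_DIM)

frames = interpolate(latent_owl, latent_spaniel, steps=5)
# frames[0] is the owl's latent and frames[-1] the spaniel's; each row in
# between would be fed through the generator to render one in-between creature.
print(frames.shape)
```

Because the generator only ever sees a point in latent space, the in-between frames are its best guess at "what lives here", not a pixel crossfade between the two endpoint images.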
But you can certainly see where it's going from the bird through to the dog. Another one: this is actually just two different dog breeds. For those of you who don't know ImageNet that well, there are lots and lots of dog breeds inside ImageNet, so one of the things you can do is just go through and play with these as you want. And when you look at this top middle one, that actually looks like a real dog. If I showed you that picture and said, this is my dog at home, you'd probably believe that, OK, yeah, that's a real dog, right? OK, so let's look at how you can do it. One of the things I wanted to show you is that this is really simple to do, and there are already some Colabs pre-made for doing this. What we're going to do is pick the ImageNet class that we want, and then it will generate it. So let's try a beagle, let's see how the beagles come out. It takes a while because obviously this is a really big model, and I haven't even loaded the biggest version from TF Hub; TF Hub has multiple versions, going right up to 512 by 512. So OK, here's a good example, where we see some of them come out pretty accurate, like this one; some not so much; and some really not so much. But it's important to understand that what it's trying to do is reproduce some probability distribution of what a beagle will look like. Let's try one more, how about a Border Terrier. Now, we can also add in some noise, so I'm adding a little bit of noise so that it creates a bit more variety. OK, we can see these ones didn't come out as well. I'm going to give you the links for the code, and I suggest you go home and play with it, right?
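If you do go and play with it, the noise controls are worth understanding. Here is a hedged numpy sketch of the kind of sampling involved: BigGAN's "truncation trick" draws the noise from a truncated normal, and the demo's variety comes from nudging the endpoint latents with a bit of extra noise. The function names and the jitter scheme here are my own illustration, not the Colab's exact code.

```python
import numpy as np

rng = np.random.default_rng(2)

def truncated_noise(shape, truncation=0.5):
    """Sample standard normals, resampling anything outside [-2, 2], then
    scale by `truncation`: smaller values give less varied but usually
    cleaner-looking samples (the 'truncation trick' from the BigGAN paper)."""
    out = rng.normal(size=shape)
    mask = np.abs(out) > 2.0
    while mask.any():
        out[mask] = rng.normal(size=int(mask.sum()))
        mask = np.abs(out) > 2.0
    return truncation * out

def jitter(latent, amount=0.1):
    """Nudge a latent with a little extra noise for variety between runs."""
    return latent + amount * truncated_noise(latent.shape)

z = truncated_noise((128,))
z_varied = jitter(z)   # a slightly different beagle each run
print(float(np.abs(z).max()))
```

The tip about adding a little noise to both the A and the B endpoints is just `jitter` applied to each end of the interpolation before rendering.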
You'll start to get a sense that, you know, in the press they show all the perfect images and you think, wow, this thing is perfect, right? When you start playing with it yourself, you start realising, yeah, not so much. A classic one, and I'll come back to some others, is tigers, or pretty much any cat. Any cat, I've noticed, it just doesn't seem to understand cat eyes. There's something weird about cat eyes, and whether it's a big cat or a little cat, it still doesn't seem to get the eyes. This is like a cross-eyed snow leopard or something. Let's look at a little cat. A Siamese is already cross-eyed, so let's go for a tiger cat. So it is interesting to play around with this and get a sense of what these models are actually learning. You look at these ones, they do look kind of like they're blind, right? There's something missing, and for some of them it's very much uncanny-valley territory. All right, so what if we want to make something that's unnatural? Let's start with a panda and go into a cauliflower. And you can see that if you were merging a panda into a cauliflower, it would probably look something like this, right? Another one we can try, what should we try? A mushroom, let's try that. So I'll give you some tips: when you're adding noise, try adding a little bit of noise to both the A and the B. And this setting is the number of interpolation steps between them. I've just gone for five, so let's go right up to 10, and we can see the full scale from panda to mushroom. This will take a bit longer to generate, so while that one's generating, I'll show you another one. This is one that was actually done by NVIDIA last year, called Progressive Growing of GANs.
The way this one is done is a little bit different from other GANs. I'm not going to go into that so much, but this one allows you to do faces. It's trained on celebrity faces (the CelebA-HQ dataset) and lets you generate high-resolution faces, and we can interpolate between a whole bunch of them like this. Here's one I found interesting: if we start with one face, what do we have to do in our loss function to get to this other face? So this is the one we want to get to, and, let me just do this, we're starting up here. You can see that the loss function is trying to find the path from one to the other, and it comes up with some very realistic faces along the way. This face here, I think, is neither of the other two, but it's very believable that it's a real human. So going forward, I think you're going to start seeing these things used in games, maybe in movies, in a whole bunch of different things. I'll give you the code for this one to go and play with as well. How are we going with our pandas? All right, so we've got the panda starting out, gradually getting a little bit mushroomy, and then just dissolving into a mushroom. Obviously in ImageNet there's a whole bunch of different things, so I'll pick one more. What about a wine bottle? Should we do a panda into a wine bottle? All right, OK, panda into wall clock, let's try that. You can pick any of the 1,000 classes to start with and any of the 1,000 to end with. All right, this is going to take a while; any questions while I'm doing this last one? OK, so the one I'm showing you here is 256 by 256. You can actually download up to 512 by 512 and use that, so it's pretty high resolution for this kind of thing.
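The faces trick, using the loss function to find a path from one face to another, boils down to gradient descent on a latent code. Here is a toy numpy sketch of that idea: the "generator" is just a fixed linear map standing in for the face GAN, and we descend on the reconstruction error to recover a latent that produces the target.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy stand-in generator: in the real demo this is the face GAN,
# here it is just a fixed linear map from latent space to 'image' space.
A = rng.normal(size=(64, 16))
def G(z):
    return A @ z

target_face = G(rng.normal(size=16))   # the face we want to reach

# Start from a random latent and descend on the reconstruction loss
# || G(z) - target ||^2 -- the 'path from one face to the other'.
z = rng.normal(size=16)
lr = 0.001
for _ in range(500):
    residual = G(z) - target_face
    grad = 2.0 * A.T @ residual        # gradient of the squared error w.r.t. z
    z -= lr * grad

final_loss = float(np.sum((G(z) - target_face) ** 2))
print(f"final reconstruction loss: {final_loss:.4f}")
```

Each intermediate `z` along the descent renders one of the in-between faces; with a real GAN as `G`, those intermediates are the believable not-quite-either faces shown on the slide.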
And if anyone wants to have a look at this panda into wall clock: somewhere around there is a very strange panda. All right. So one of the things that might be fun for some of you to try is to go through and generate a whole bunch of different examples and see how they would go if you used them for training an ImageNet classifier, because you can actually think of this as an image augmentation system as well. Now, I'm not saying it's going to work very well, but it's interesting. For example, with the cats, I suspect that the ImageNet model they're using doesn't rely on cats' eyes that much, and that's one of the reasons it's not able to reproduce them very well. Anyway, that's it. Here are the links: the papers and the code. The cool thing is this is all on Colab, so you literally just have to open your computer, press this, and you can run it straight away, right? These models are quite big, and like I said, if you were trying to train one of these yourself, it would take forever. Last year, I think at a different meetup, I showed CycleGAN, and just training that took forever to get one that was doing anything reasonable. So being able to take advantage of TF Hub for this kind of thing, I think, is very cool. Any questions? Any other questions? OK, so I've been told, where are the t-shirts? You have to swap a business card for a t-shirt, or your name on a piece of paper if you're under 18. Where are the t-shirts? Just at the edge? Do you want to bring them up the front? We're going to stay around and answer questions. I know we've run a bit late. Thank you everyone for coming.
Next month, we will probably do some stuff that's much more suited to beginners as well. Thank you.