Hey, everybody. So Starry Night is an iconic painting from the 19th century. The Golden Gate, an iconic bridge from the 20th century. And this is an iconic achievement from the 21st century: a work of art created by an artificial intelligence. So again, I'm Dr. Eric Risser, and I'm here to give you a brief history of creative AI.

But first, what is creative AI? Technologically speaking, it's where computer graphics, machine learning, and computer vision meet in the middle. But to think about it another way, let's look at it in terms of zombies. These three heads were created by a human as examples. They were fed into a computer, which looked at them and learned, and then it created the rest. Another example: this small patch of rocks, leaves, and dirt was scanned from the real world, fed into a computer, which then extrapolated out to create an entire forest.

Uncertain how all this works? Well, that's actually good, because you're halfway to the answer: it runs on uncertainty. Or think about it another way. Imagine you're a newborn baby seeing your first ever human face. There's a lot of new information here: two eyes, a nose, a mouth, the shape of the head. Now let's say you see your second ever face, and it's a zombie. There's new information here, blood, scars, concepts you've never seen before. But there's also repeated information: still two eyes, a nose, a mouth, the shape of the head. Now you see your third ever head, and again, it's a zombie. There are new blood splatters, there are different scars, the lips are a little chewed up. But at the end of the day, you're becoming less uncertain about what a zombie is.

So at the Predict conference you'll probably see a few graphs, but I doubt you'll see the zombie graph. Here we're plotting unique information over total information, and we start to see a curve form. I call this the uncertainty curve, and it's true of any data that follows a category, be it zombies, cars, dogs, hipsters, anything that's a well-defined thing that follows rules. The goal of a creative AI algorithm is essentially to create new data that follows this curve and extrapolates it out. If you don't follow this curve just right, you fail in one of two ways: go too low and you end up just copying and pasting your inputs; go too high and you break the category and make stuff that doesn't make sense. A zombie with three eyes.

So creative AI is cool, and it's making a lot of new things possible. We've shown how you can take a photograph and turn it into a painting, but you can also take paintings and turn them back into photographs. You can take small examples of something and imagine more of it, and you can even swap the textures from one object onto another. You can create people who have never existed, and you can take rough sketches and turn them into imaginings of the real world. In fact, our friends at NVIDIA have shown how you can hook that sort of concept up to a simple painting interface and demonstrate what the painting tools of the future will look like. You can take old video games and make them look new again, and you can even create whole new 3D objects from just a single image. So the future is pretty cool. But I think it's important to take a step back and, before moving forward, look at where all this came from. It all started with an ancient field called texture synthesis.
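[Editor's aside: the uncertainty curve is easy to see in code. Here's a toy sketch, mine rather than the speaker's, that treats each new example as a set of features and tracks how much of each example is genuinely new; the feature names are invented purely for illustration.]

    # Toy sketch of the "uncertainty curve": unique information vs. total.
    # Each example (a zombie head) is modeled as a set of features; the
    # feature names below are made up for illustration.
    zombies = [
        {"two eyes", "nose", "mouth", "head shape"},
        {"two eyes", "nose", "mouth", "head shape", "blood", "scars"},
        {"two eyes", "nose", "mouth", "head shape", "blood splatter",
         "different scars", "chewed lips"},
    ]

    seen, total = set(), 0
    for i, features in enumerate(zombies, start=1):
        new = features - seen          # information we haven't seen before
        seen |= features
        total += len(features)
        print(f"example {i}: total={total}, unique={len(seen)}, new={len(new)}")

    # As examples accumulate, unique/total flattens out: less of each new
    # example is genuinely new, so uncertainty about the category drops.

A generator that stays on this curve keeps adding plausible variation (new scars) without ever violating the category (no third eye).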
And this academic field of study is actually older than I am. The idea is: given a small amount of something, you want to make more of that thing, similar but different. Again, it's a really old field, dating back to the 80s. But the first paper that really hit the mark and gained some notoriety was Efros and Leung in 1999. It wasn't necessarily the most sophisticated or best texture synthesis paper, but it was really simple. They figured out a way to tie the problem to the concept of Markov random fields and made it very approachable: anyone with a computer science degree could read it, go play with it, and apply it to their own field.

That sparked a whole lot of interest in the space, so much so that it kicked off a series of papers year after year. Next up was Wei and Levoy's tree-structured vector quantization. This was the first fast, good texture synthesis method, in my opinion quite seminal. Then Aaron Hertzmann, the next year, published Image Analogies, which added the concept of coherence to the process and demonstrated a lot of great applications. It was actually the first style transfer approach. It required image analogy pairs: you needed, say, a photograph of a bowl of fruit and a painting of that same bowl of fruit to give correspondence. Then, given another photograph, you could style transfer it.

Wei and Levoy returned in 2003 with a way to take this inherently sequential algorithm and make it parallel, so it could be ported onto the GPU. That was really the start of modern texture synthesis, which was locked down by Kwatra in '05, who added an expectation-maximization algorithm to the process. Now it's fast, it's stable, and it's principled in its statistics. As of today, most texture synthesis algorithms are actually based on this Kwatra paper.

Then Kopf et al., and this is a cool one: they added histograms and tri-planar projection to let you build volumetric textures. This was our first real, meaningful foray into 3D shape synthesis. Han et al. then flipped the scale question: instead of going from big images to slightly bigger images, start with very small images, 128 by 128 pixels, and grow out to enormous ones, on the order of 32,000 pixels across. So, massive multi-scale dependencies.

Then Barnes et al. I don't actually have an image for this one, because it's more backend work, but I think it's one of the most important papers in our space. Connelly figured out how to take these inherently slow algorithms that would take hours or even days to run and get them running in a second or two. That's actually what powers Content-Aware Fill in Photoshop, and it was the first algorithm that made these things practical for real-world application.

Next, a bit of self-promotion: myself et al. My contribution in 2010 was taking these algorithms that go from small images to large images, synthesizing out on the infinite image plane, and shifting the way we think about them toward hybrids: you start with a few members of a population, and then you grow out infinite populations. So structured, higher-level, higher-order things. Mike, then in '13, hooked it up to a paintbrush for the first time and showed how we can draw sketches, relate those to textures, and start painting with texture. Again, art tools of the future. And then Gatys et al. happened in 2015.
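[Editor's aside: to make the neighborhood-matching idea concrete, here is a heavily simplified sketch in the spirit of Efros-Leung and Wei-Levoy: raster-order synthesis with a causal neighborhood and a brute-force nearest-neighbor search. It is a toy reconstruction, not either paper's exact algorithm, and it assumes a small grayscale exemplar.]

    import numpy as np

    def synthesize(exemplar, out_size=64, k=2, seed=0):
        # Toy raster-order texture synthesis in the spirit of Efros-Leung /
        # Wei-Levoy (not the papers' exact algorithms).
        # exemplar: 2D grayscale array; k: causal neighborhood half-width.
        rng = np.random.default_rng(seed)
        eh, ew = exemplar.shape
        # Seed the output with random exemplar pixels so every neighborhood
        # is defined from the start (akin to Wei-Levoy's noise initialization).
        out = rng.choice(exemplar.ravel(), size=(out_size, out_size))

        def causal(img, y, x):
            # L-shaped neighborhood: k rows above plus k pixels to the left.
            above = img[y - k:y, x - k:x + k + 1].ravel()
            left = img[y, x - k:x].ravel()
            return np.concatenate([above, left])

        # Precompute every exemplar neighborhood once (brute-force search).
        centers = [(y, x) for y in range(k, eh) for x in range(k, ew - k)]
        bank = np.stack([causal(exemplar, y, x) for y, x in centers])

        for y in range(k, out_size):
            for x in range(k, out_size - k):
                d = np.sum((bank - causal(out, y, x)) ** 2, axis=1)  # SSD
                ey, ex = centers[int(np.argmin(d))]
                out[y, x] = exemplar[ey, ex]  # copy the best match's pixel
        return out

    # Usage (replace the random array with a real grayscale texture):
    # tex = synthesize(np.random.rand(32, 32), out_size=64)

Efros-Leung actually grows outward from a seed patch and samples among near-best matches, and Wei-Levoy accelerates the search with tree-structured vector quantization; this sketch keeps only the core idea that each output pixel is chosen so its neighborhood looks like somewhere in the exemplar.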
So at this point, the space was slowing down a little; a lot of the low-hanging fruit had been picked. But in parallel, neural networks were sweeping through the whole ML and vision world, and things that had existed previously were being turned upside down thanks to neural networks. Gatys was basically that paper for our field, the first one to apply a neural network to this same topic. I remember Barnes wrote me the day after this paper came out: "Hey, somebody did really bad, slow texture synthesis, but they did it with a neural network, which is extremely cool." And it was the first parametric method that actually worked; all the previous ones were non-parametric in nature.

But what really made this cool wasn't the texture synthesis, it was when they applied it to style transfer. And it worked super, super well. This let you essentially turn a picture of your cat into an oil painting, and it took the internet by storm. I really think this is a super important, seminal paper for the field, not just because of what it brought technically, but because of what it did for the field. It brought creative AI into the limelight. It stopped being an esoteric graphics-geek thing; I started to see people's profile pictures on Facebook run through these algorithms. It hit the mainstream: I think Prisma got app of the year in 2016. And it brought a lot of fresh blood, fresh ideas, and fresh learnings into the space.

But it wasn't perfect. This would be a texture synthesis example you'd expect from Gatys, but half the time, this is what you actually get. So Pierre Wilmot and I went in and wrote a follow-up paper that stabilized the optimization. Gatys would be on the left; our results on the right there. Another example: Gatys' results, our results.

In any case, texture synthesis was just the beginning of creative AI; other approaches have emerged. While Gatys was working on neural style transfer, Goodfellow was working on the concept of adversarial training. This is similar to the hybrids approach, where you throw a whole bunch of data into a neural network and then start imagining new members of that population. You bake the concept of a category into a neural network, and it's generative. Isola then did a follow-up where he controlled it with image-to-image pairings. So you can actually steer the generative process: image-to-image translation networks that could turn a black-and-white photo into color, day into night, night into day. They could turn sketches into lifelike renderings. And again, they demoed it with cats on the internet, and it took the internet by storm.

Then Zhu et al., and this is a pretty cool paper: CycleGAN. Building those image-to-image pairings was a huge training-set problem, so he detangled the need for exact image-to-image matches on the training-set side; you could just learn the inherent qualities you care about. And then Karras et al. took these GANs, which are inherently unstable, slow to train, and difficult to push to high resolution, and figured out how to do exactly that through stabilization and image pyramids. So again, all of these people never existed.
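[Editor's aside: for flavor, here is a minimal sketch of the Gram-matrix statistic at the heart of Gatys' approach, in plain NumPy with random arrays standing in for real activations; in the actual paper the features come from layers of a pretrained VGG network.]

    import numpy as np

    def gram_matrix(features):
        # features: (C, H, W) feature maps from one layer of a network
        # (VGG in Gatys et al.); here just an arbitrary array.
        C, H, W = features.shape
        F = features.reshape(C, H * W)
        return F @ F.T / (H * W)  # (C, C) channel co-activation statistics

    def style_loss(feats_a, feats_b):
        # Squared Frobenius distance between Gram matrices: how different
        # two images are *as textures*, ignoring spatial layout.
        Ga, Gb = gram_matrix(feats_a), gram_matrix(feats_b)
        return float(np.sum((Ga - Gb) ** 2))

    # Toy usage with random "feature maps" standing in for activations:
    rng = np.random.default_rng(0)
    a, b = rng.random((16, 8, 8)), rng.random((16, 8, 8))
    print(style_loss(a, b))

Gatys et al. optimize the pixels of the synthesized image so its Gram matrices match the style image's across several layers, and, for style transfer, so its raw activations at a deeper layer also match the content image's.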
So this work really took adversarial networks, in my opinion, from the lab into something that could be used in real applications. And speaking of real applications, I think it's important to point out Ledig et al., who applied this to a real-world use case: super-resolution. Original image on the right, the down-resed version on the left, and right there, SRGAN, which took the down-resed image and tried to reproduce the original. Going back to the last talk about putting these networks on the edge: now, all of a sudden, we can take bad video on the internet and turn it into good video, and all of your old photos from the digital cameras of the 90s and early 2000s can be remastered basically for free.

So that's what the past looks like for creative AI, and I think it's important that we recognize where all this came from. What does the future look like? Well, I think it's time to go into industry, and that's what my company does. There's a problem: 3D art and digital media have gotten more sophisticated over the last 20 years, and as the quality has gone up, so has the cost. This is what a graph of the number of artists required to make a Grand Theft Auto game looks like, as well as the budget. Creative AI can totally help with that. This was a recent photogrammetry demo done by Unity Labs, released last year at GDC, and they used a lot of that texture synthesis technology to automatically go through and fix problems with the scan data so humans didn't have to do it in Photoshop. A local artist at Havok, Pete McNally, went out and scanned a bit of a rock wall here in Ireland, modeled a really quick cylinder, and used creative AI to essentially fill it in with detail. Another artist at a AAA video game studio scanned these two pebbles and some beach sand and built this entire parametric, controllable, generative beach generator, really. This is how video games and 3D worlds will be built in the future: you start with examples, you curate them, and then you have some really high-level controls to just let it do whatever you want, really. In any case, that's the end of this talk, but just the beginning for creative AI. Thank you.

That was fascinating, Eric, and it's amazing to see how these textures can be modeled into real-world scenarios. It's like the virtual world is becoming bigger and more real every day. So is this how Minecraft worlds get generated? You know the way they spawn a new world, and it just looks brilliant straight away?

I wish. Minecraft actually uses a procedural algorithm, so that would have been a human who wrote the code and the rule system for how those blocks get laid out. This would be a learned system: you could show it examples of Minecraft worlds and it could make more, similar ones, which is absolutely a future that Minecraft could have.

There's no limit to what you could do with that, really, is there? So what's next in the industry for creative AI, maybe outside of gaming or computer games?

Sure. A lot of these algorithms are inherently image-based, and a lot of the big problems in the industry aren't just limited to images. You have shapes, you have animations, you have audio, you have how it all fits together, and I think that's really the future: seeing this expand to other problem spaces.

Very nice. Thank you, Eric.

Cool, thank you.

Very cool talk. Super, I'll take that. Check it out.