Hi everyone. My name is Purnima Kamat and I recently joined Oliver Wyman as a lead engineer. I also run the Women Who Code group in Singapore and Yau Conferences. If you've heard of these, look us up on Meetup. If you haven't, do come and speak to me later. We'd love to share more. My topic today is generative art using neural networks and genetic algorithms. The title was too long for this slide, so I cut the genetic algorithms, but they're coming. This topic is something I'm really passionate about. I've been painting for over a decade now. A couple of years back, I thought I should work on the intersection of art and code, and that's where this talk comes in. A couple of things I wanted to lay out before we start. I was at a conference a few months back and somebody asked me, why do you even do generative art? What's the use of it? I think it's fun. It's a great way to learn something new. I taught myself JavaScript using generative art, so I think it's a great way to get started. Practice the 10,000-hours rule by Malcolm Gladwell. I'm not sure you can get to 10,000 hours doing anything unless you're coding professionally, I guess, but some percentage of it works. The best part is that it's therapeutic. You hand-code SVGs or write CSS linear gradients and radial gradients with some random numbers, they line up perfectly, and it creates an aha moment. It's therapeutic. So what is generative art? At a very high level, in a nutshell, generative art is a number-crunching program. You give it a few inputs. In our case, those are going to be x and y coordinates on the canvas. This number-crunching program could be a set of rules, say a cellular automaton, which color-codes your pixels. Or it could be something like a random number generator, like Perlin noise, which could color-code your pixels based on the intensity of the noise at a particular point.
The output would be a series of RGB values, which you could then plot on the canvas to create the artwork. RGB or HSL, or if you want a grayscale artwork, just ones and zeros, I guess. What if we replace this number-crunching program with a neural network? That's largely what my talk is going to be about. If you're interested in generative art, there are a lot of technologies you could get started with. Most generative artists use the Java-based Processing. There is a JavaScript version of it as well, called p5.js. I typically tend to use a lot of D3, because that's what I wanted to learn at the time. If you are into machine learning, you can use TensorFlow.js. Or if you want to do something more like facial recognition, ml5.js is a great bet as well. Something you should know about me: as a kid, I was really introverted. I hardly mingled with friends. My favorite way to pass the time was to lie out on the grass, watch clouds, and try to discern shapes in them. I think that habit carried into my adulthood as well, because I still see patterns in things. I don't know about you, but do you see a mask in this pattern? If you do, you're like me, my friend. A couple of months back, I was looking for inspiration to create abstract art with machine learning. I came across a blog post by a research scientist at Google called David Ha. He used Python, a little bit of JavaScript, and TensorFlow to create these abstract patterns in his Jupyter notebook. I thought, why not try that and learn machine learning along the way? I feel that in order to create art, in order to create anything meaningful with any kind of technology, it's really important that we understand the basics of how it works. So what is a neural network? A neural network is essentially a set of neurons. A neuron is a number-crunching program, like the one we saw previously in the generative art diagram.
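The pipeline described above — a number-crunching function mapping each (x, y) pixel to an RGB triple — can be sketched in a few lines of plain JavaScript. The specific rule here (distance from the origin plus a sine ripple) and the function names are illustrative choices, not the talk's demo code.

```javascript
// The "number crunching" step: map one (x, y) pixel to an RGB triple.
function pixelColor(x, y, width, height) {
  const nx = x / width;                                 // normalize to 0..1
  const ny = y / height;
  const r = Math.sqrt(nx * nx + ny * ny) / Math.SQRT2;  // distance from top-left, 0..1
  const ripple = (Math.sin(20 * r) + 1) / 2;            // a periodic pattern, 0..1
  return [
    Math.round(255 * ripple),  // R
    Math.round(255 * nx),      // G
    Math.round(255 * ny),      // B
  ];
}

// Fill a flat RGB array — the same shape an HTML canvas ImageData expects
// (minus the alpha channel), so it can be plotted on a canvas.
function renderToArray(width, height) {
  const pixels = new Uint8ClampedArray(width * height * 3);
  for (let y = 0; y < height; y++) {
    for (let x = 0; x < width; x++) {
      const [r, g, b] = pixelColor(x, y, width, height);
      const i = (y * width + x) * 3;
      pixels[i] = r;
      pixels[i + 1] = g;
      pixels[i + 2] = b;
    }
  }
  return pixels;
}
```

Swapping `pixelColor` for a cellular automaton rule, a Perlin-noise lookup, or — as in the rest of this talk — a neural network is the only change the rest of the loop needs.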
This number-crunching program, which is f(x) on this slide, can be a linear function, like most machine learning algorithms use. It could be a square function, a cube function, or a periodic function — whatever you want to visualize in your abstract art. Let's feed it a few inputs. We talked about x and y previously; x and y are essentially the coordinates of each pixel on the canvas you want to create your art on. Let's create another variable, call it r, and measure the distance of your pixel from the origin, the origin being the top-left corner of your canvas. So you have three inputs now. You feed them into your neuron, wait and see what comes out on the other side, and then plot it on your canvas to get your abstract art. But if you think about it, x, y, and r are still very deterministic, aren't they? You know what those values are. And as generative artists, we thrive on randomness. We want something different every time: every time you refresh the page or run the program, you want something new to come out. So how do you add that bit of randomness? You add it by introducing another input, z (or zed, however you prefer to pronounce it). This z is what data scientists call a latent vector. It's nothing but a series of random numbers that conforms to a particular distribution. You feed these inputs into your number-crunching function. You also truncate your output to a particular range, because after all, RGB values are limited to 0 to 255, and anything beyond that doesn't make sense. You use an activation function for that — in our case, it's tanh. Then you have an output, which is RGB values. You lay all of these neurons into a multi-layer network, and you have a neural network that creates abstract patterns for you. This whole concept of... okay, this should have advanced to the next slide. Yeah.
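The setup just described — inputs x, y, r, and a latent z, pushed through tanh layers into RGB values — can be sketched as a tiny hand-rolled CPPN in plain JavaScript. This is a simplified sketch (random fully connected weights, illustrative layer sizes); the actual demo in this talk uses TensorFlow.js.

```javascript
// Random weight matrix with entries in -1..1.
function randomMatrix(rows, cols) {
  return Array.from({ length: rows }, () =>
    Array.from({ length: cols }, () => Math.random() * 2 - 1));
}

// Each layer: activations = tanh(W · previousActivations).
function forward(weights, input) {
  let a = input;
  for (const W of weights) {
    a = W.map(row => Math.tanh(row.reduce((s, w, i) => s + w * a[i], 0)));
  }
  return a;
}

// Build a network: 4 inputs (x, y, r, z) -> hidden layers -> 3 outputs (RGB).
function makeCppn(layerSizes = [4, 8, 16, 3]) {
  const weights = [];
  for (let l = 1; l < layerSizes.length; l++) {
    weights.push(randomMatrix(layerSizes[l], layerSizes[l - 1]));
  }
  const z = Math.random() * 2 - 1;  // latent value, fixed for one artwork
  return (x, y) => {                // x, y normalized to roughly -1..1
    const r = Math.sqrt(x * x + y * y);   // distance from the origin
    const rgb = forward(weights, [x, y, r, z]);
    // tanh outputs are -1..1; squash into the 0..255 RGB range
    return rgb.map(v => Math.round(((v + 1) / 2) * 255));
  };
}
```

Because the weights and z are fixed when the network is created, the same network always paints the same picture; regenerating the network (or resampling z) is what gives you a new image on every refresh.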
This whole concept of using neurons and neural networks to create abstract patterns is called a compositional pattern-producing network, or CPPN. The term was coined by another researcher, Kenneth Stanley, then at the University of Central Florida. He wrote extensively about CPPNs and how they can be used for machine learning in general. So I built a wrapper over this neural network that lets you configure the network to the size that you want, so you can just keep hitting generate and it spits out more and more abstract work. Oh, you're not seeing the artwork, are you? There you go. Right, so this is my version of Picbreeder. I created a form which accepts the number of neurons that you want in each layer of your network. This particular network has three layers. You can't really see it very well, but the first bit says 8, then 16, and 32. Let me see if I can... oh, crap, no. So, yeah, you can configure the number of neurons you want in each layer. You can generate in color or generate black-and-white images. One of my biggest ways of passing time over the weekends these days is to keep hitting generate and see what comes out. So, yeah, that's my favorite thing. The best part of using this for generative art is that you can use any kind of functions you want. If you use periodic inputs — periodic meaning trigonometric functions like sine or cosine — you get a pattern that is periodic, or repetitive, in nature, and it actually looks better in color. So you can use any kind of functions you want to create any abstract patterns you like. Of course, this UI limits you to certain functions and certain configurations, but TensorFlow.js is pretty extensible. The best part of using a neural network to create abstract patterns is this concept of biomorphs.
For some reason, when you create a neuron and you update your f(x) to a square function, you are able to create biomorphs — biology-resembling shapes. So, yeah, it's pretty cool. Of course, this is a version for the demo. If you're at home and you want to create larger images, and you have the patience to wait for the processing to complete, you can generate larger images. They're pretty cool to look at. So some of the things that I tried... oh, come on. Can I switch to slides again? Oh, okay. Unfortunately, the demo works now. So these were my backup images. I was able to generate something that resembles a wasp, at least to me. I don't know what you can make out of the first image. But this was using a square function and a relatively simple network: three hidden layers with tanh activations. And this particular one is a slightly spooky abstract pattern. I don't know if you can see the face of a man in there, but this one was created using a slightly more complicated neural network. Who's into Star Wars? Okay, so do you recognize the guy on the right? So, yeah. One of the best things I do on Sundays these days is generate these things. So these are great: you create neural networks, you pass them random numbers, you update the functions, and you create abstract patterns. But wouldn't it be cool to take a couple of patterns that you like, mix them, and create newer patterns? So why don't we have a look at that? Kenneth Stanley, the person who authored the original CPPN paper, also writes about another algorithm, a technique called NeuroEvolution of Augmenting Topologies. It's quite a mouthful, so let's just call it NEAT — CPPN-NEAT. He talks about using CPPN-NEAT for a lot of different purposes, one of them being as an alternative to the usual backpropagation that people use for deep learning.
So let's use CPPN-NEAT for our abstract-art generation purposes. The concept revolves around creating a few neural networks, like the ones we used in the previous demo, and randomizing them. What if I disconnect a few neurons on this particular randomized CPPN? What if I just shut off one particular input from the output? What if I remove one layer completely from the neural network? What happens then? I take multiple such randomized CPPNs, I crossbreed them, I mutate each neural network, and I create multiple offspring CPPNs. Why do I do that? Because that's what generative artists do. So let's look at what mutation and crossbreeding mean in terms of neural nets. Think about high-school biology. Remember, we learned about mutations, where an organism acquires a new trait — that's a mutation. And crossbreeding, wherein you have — I know, a very silly example — a white rose and a red rose, and you crossbreed those two plants, and you get, probably, a pink rose, or a rose with alternating white and red petals. I don't know; that would be cool to see. Applying those principles, let's think about the randomized CPPN on your left, wherein you have a neural network with three inputs: one, two, and three. Right now there is no connection between node three and node five, and the mutation would be adding a connection from node three to node five. From a crossbreeding perspective, you have two parents, and you crossbreed them such that you randomly choose neurons from each parent network for your offspring network. So that's mutation, and that's crossbreeding. When Kenneth Stanley wrote about CPPN-NEAT, he was also referring to a particular application called Picbreeder. Picbreeder is a website used for interactive evolutionary computation. Essentially, it lays out 25 abstract images, each generated by a randomized CPPN.
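The operations just described — adding a missing connection, shutting off a connection, and randomly mixing genes from two parents — can be sketched on a simplified genome: a flat list of connection genes. This is a rough sketch of the idea, not a full NEAT implementation (real NEAT also tracks innovation numbers and aligns genomes before crossover).

```javascript
// A genome here is an array of connection genes:
//   { from, to, weight, enabled }

// Mutation 1: add a new connection, e.g. from node 3 to node 5 as on the slide.
function mutateAddConnection(genome, from, to) {
  const copy = genome.map(g => ({ ...g }));
  copy.push({ from, to, weight: Math.random() * 2 - 1, enabled: true });
  return copy;
}

// Mutation 2: toggle one random connection — "shut off one input from the output".
function mutateToggle(genome) {
  const copy = genome.map(g => ({ ...g }));
  const i = Math.floor(Math.random() * copy.length);
  copy[i].enabled = !copy[i].enabled;
  return copy;
}

// Crossbreeding: the child takes each gene randomly from one of the two parents.
// (Assumes the genomes are already aligned gene-for-gene.)
function crossbreed(parentA, parentB) {
  const n = Math.min(parentA.length, parentB.length);
  return Array.from({ length: n }, (_, i) =>
    ({ ...(Math.random() < 0.5 ? parentA[i] : parentB[i]) }));
}
```

Each operation returns a new genome rather than mutating the parent in place, so the parents survive to breed again in the next generation.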
As a user, you can choose any two patterns and mutate and crossbreed them to create 25 more patterns. Honestly, Picbreeder is a bit more flexible — you can choose multiple parents — but in my version you can only choose two because, yeah, development time. You can breed CPPNs in color or in grayscale. I seem to be lost today with my browsers. Yeah. So I created a wrapper around the whole concept of mutation and crossbreeding as well. What you see over here is 25 individually randomized CPPNs, each creating its own abstract pattern. You can start with simple randomized patterns, or you can start in black and white. Let's use color, because it's cool. On the right, you can see the topology of the neural network. If this were a little clearer, you would see the activation functions: this one, I think, is softplus, and this is also softplus. This is another simple network with tanh activation, and I think both of these are tanh. So if I use these two — let's use something more interesting for this one — these crossbreed and mutate to create... right. And it doesn't work. Finally. So you crossbreed and mutate to create 25 more patterns, and you keep mutating and crossbreeding until you create a pattern that you really like. Download it, use it as a wallpaper, or, in my case, print it and frame it. So that's the idea behind my Picbreeder. Okay. Here we are. The whole idea of Picbreeder actually stems from this book by Richard Dawkins called The Blind Watchmaker. He used his computer to build a model of biomorphs, essentially to show how evolution — random mutation plus selection — could produce organisms as complex as human beings. So he uses this concept of mutation and crossbreeding to create biomorphs, or stick animals. Do read the book if you get a chance.
Technology-wise, I used TensorFlow.js for this, specifically tfjs-node. I did not have a lot of success using TensorFlow.js in the browser: even a single image of around 25 by 25 pixels took a long time to render, and there's a lot of number crunching happening behind the scenes, so Node.js it was. Besides that: the HTML canvas, where I lay out the artwork, PNG streams, and D3 for the topology diagrams. I have the Picbreeder app running on AWS if you all want to play around with it, but it's on a measly t2.micro instance, so be gentle with it. A few books: Dawkins's The Blind Watchmaker — I couldn't recommend it enough — and some other books if you are interested in generative art in general. I got started with Matt Pearson's book, and I think it's really great. It uses Processing as the language for generative art, but you can apply those recipes anywhere, using any language, any technology. And Clifford Pickover's book — this is one of my favorites for mathematical art, so if you're into the Julia set or the Mandelbrot set, there are great recipes in there as well. References: I've written a little about this on my blog, and the blog posts by David Ha and the papers by Kenneth Stanley are just gems. I was at JSConf Asia this year, and one thing that I really took away was to build more silly things. This slide is from Monica Dinculescu — I hope I'm pronouncing her name right. She's part of the Google Magenta team, and she talks about how important it is to build silly things alongside serious things. So yeah, build more silly things. Thank you. You can reach me on Twitter. Thank you. Hi. I will next year, if you all will have me. Awesome, awesome. Yeah. [Audience question, partly inaudible, about how the generated images in the first demo came about.] Generated images? Yes. They do, right?
I mean, the reason I really got into it: I started off because, you know, as an artist you just want to try new things, and the wasp was my first find. Of course, there is a graveyard of images on my machine; those are the successful ones. There are a lot of results that are not as shareable. But I knew that if I used a square function, I could create biomorphs, so I started experimenting with other parameters. Of course, you may not be able to recreate the wasp or the man using the Picbreeder wrapper that I created — you'll have to play with the parameters of the neural network yourself — but yeah, like I said, every Sunday, just keep refreshing. Oh, no — the wasp and the ones that I shared at the beginning — oh, you mean the slides, right? Yeah, those were not mutated. They were generated in a single generation: I configured the network with some parameters and random numbers, which helped me generate biomorph-like images, and these were the best among the ones I generated. But in theory — if you read Kenneth Stanley's research, the reason they push this whole concept of evolution is that you're creating complex organisms from something as simple as a small neural network. So you're kind of mathematically modeling evolution, if you may. So not exactly, but yeah, in a way. Kind of. Yes. Any other questions? Do I train the neural networks? Am I going to train them? No. In a way, if you look at it, CPPN-NEAT is an alternative approach to backpropagation, which is training. The way backpropagation works is you choose an initial set of weights, and then you train the network by adjusting the weights based on how wrong your output is. Yeah.
The way training works in the CPPN-NEAT world is that you don't tune a single set of weights; you generate multiple networks and find the fittest of them all to eventually choose what works for you. So in our case, when we look at the Picbreeder app, we are the fitness function: we decide what we think is great or good-looking. But mathematically, there are ways to actually implement a fitness function. I haven't gotten there yet, but yeah, probably for the JSConf talk. Yeah. The same one. Okay. Hi. Hi. [Audience:] I'm just wondering — so you've done that first technique, and then you've done the evolutionary technique on top of that. What's next for you? What do you think is the most exciting? So I want to try a couple of things. I want to get into the GAN space. I have been meaning to, because this requires a lot of computing power: any image bigger than the ones I showed takes a really long time to generate, and your computer doesn't do anything else other than crunch numbers. So the first step is to build a computer with a proper GPU to crunch that, and also probably gather enough data to train a GAN, and then see what hallucinatory images would look like. Because a CPPN, in a way, is also a generator. GANs look at images and try to generate something similar — so what would a GAN with a CPPN as the generator, and a discriminator from a normal GAN, look like? I know it's an unexplored space, and it's interesting. And this uses JavaScript, and I think that would be the most exciting space, I guess. [Audience:] Do you need to buy hardware for that? So the cheapest way would be to build your own. There is hardware that you can buy off the internet, but that's really expensive — it costs around 7K SGD, which is... yeah. Or you could... yeah, definitely. Yeah.
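The selection loop described in this answer — no backpropagation, just scoring a population and keeping the fittest — can be sketched generically. The `fitness` and `mutate` arguments here are placeholders: in the Picbreeder-style app, the human clicking on patterns plays the role of `fitness`.

```javascript
// Evolve a population without gradients: score everyone, keep the top
// quarter as parents, and refill the population with mutated parents.
function evolve(population, fitness, mutate, generations = 10) {
  let pop = population;
  for (let g = 0; g < generations; g++) {
    const scored = pop
      .map(ind => ({ ind, score: fitness(ind) }))
      .sort((a, b) => b.score - a.score);           // fittest first
    const parents = scored
      .slice(0, Math.ceil(pop.length / 4))
      .map(s => s.ind);
    pop = Array.from({ length: pop.length }, () =>
      mutate(parents[Math.floor(Math.random() * parents.length)]));
  }
  return pop;
}
```

Replacing `fitness` with a callback that waits for a user's click is exactly the interactive-evolution setup; replacing it with a computed score (sharpness, symmetry, whatever you value) is the "mathematical fitness function" mentioned above.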
Okay, so when I did an analysis, it turned out that you pay more for the cloud than to build your own — and then you have the computer at home, rather than a monthly payment for compute. So, yeah. I think... cool. Thank you so much for listening to me.