I'm here because I'm actually a JavaScript guy. Well, not really. My name is Yang, and I run a small boutique generative design studio called Swarm; generally, what I do at Swarm is stuff like this. As for my day job: I was telling a friend about this, and I either draw pictures of numbers or I draw pictures with numbers. It mostly involves building dashboards and such, or sometimes building bespoke software. This one is a piece of software I made for Uniqlo in Singapore, for their ticker interface; it was literally one of the pieces I sent them as a suggestion for what they should put on the ticker, so if you drop by 313 on Orchard, this is running on their interface. So that's a little background on my intersection of generative design and data visualization. Today, though, the title of the talk is "How to Draw Glorified Screensavers the Hard Way".

I'll start with Bret Victor. Bret Victor gave a talk a while back called "Stop Drawing Dead Fish", in which he argued that simulation is the suitable modus operandi for the medium that is the computer. If you draw a fish in an illustration, it's a dead fish; if you animate a fish, it's still a dead fish; if you simulate a fish on a computer, well, it's still somewhat a dead fish, but it's much closer to life, or seemingly like life, than any of the media that came before.

Then there's this guy over at Monash University called Jon McCormack (not to be confused with John Carmack of Quake) who talks about generative art, generative design, and the relationship between the human and the machine.
On one end of the spectrum you have the artist and the tool. Somewhere in the middle you give up a degree of control: you give the machine a little agency, and it becomes an assistant or a collaborator. All the way at the far end, the machine is the artist and you become something like a mentor. He also talks about this thing called the Klondike space, if I've got that right (he referenced it in an earlier paper), which is about hunting through a creative space with generative algorithms for the regions that produce interesting work. After all, anything can be a generative algorithm: a 32-by-32-pixel image where you randomly fill in the RGB values is a generative algorithm, it's just not very interesting. So how do you narrow down the space of outputs so that the ratio of interesting output to the total space is high?

That's what I try to do, so let's start with the little things I build on an occasional basis: small toys like Embryo, metaballs, and from metaballs you inevitably get flocking metaballs. There's a progression there. At the start it's just random movement, and then you try to get something interesting by introducing some form of algorithm; you tighten the space towards something interesting by tightening the algorithm.

This next one was for a pitch ages ago, for the President's Design Award. Every year they come out with a motif, and my suggestion (I collaborated with the main studio on this) was to have a generative system that plans all ten years' motifs together, to see whether you could create all the forms of the motifs in between. There are certain dimensions involved, and you let the motif evolve along them. Not to go too much into detail, but at the back end there's an untrained neural network that I use as a multi-dimensional noise machine, just to roll dice for the different dimensions and let it interpolate between them.

Something else is this piece, which was done for a design festival back in 2015. Like all my other works, it's essentially a glorified screensaver. At the back of it is a modified version of the Sugarscape simulation, where you have food, evolution and reproduction; what you're seeing is only the output onto the environment. You never see the organisms themselves moving, just the effect they have on the environment. Hopefully that answers the "glorified screensavers" part, because this is exactly what I do: I hunt for ways to generate content, and a huge part of my objective is to get to interestingness, to narrow the space down to interestingness.

The main piece for this talk rolls on from here. There's an event going on in Singapore right now, and a friend of mine asked whether it was possible to generate logos for it, to actually implement a generative system. Just as a disclaimer: it never really got implemented. It got way above our heads, so we pulled back and saved it for another day. Anyway, this is the logo. It's quite straightforward and completely geometric, and since the event is about design and art, with these little snowflake shapes, I thought it would make sense to implement it on top of a cellular automaton system, in this case Wolfram-style cellular automata.
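The talk doesn't show code, but a Wolfram-style one-dimensional cellular automaton fits in a few lines. This is my own minimal sketch of the elementary, two-state version (the project used a multi-state totalistic variant, but the mechanics are the same: each row is one timestep, and a cell's next state depends on itself and its two neighbours):

```python
import numpy as np

def elementary_ca(rule, width=64, steps=32):
    """Wolfram-style 1-D cellular automaton. Returns a (steps, width)
    grid of 0/1 cells; each row is one timestep."""
    # unpack the rule number into its 8 next-state bits
    table = [(rule >> i) & 1 for i in range(8)]
    row = np.zeros(width, dtype=int)
    row[width // 2] = 1            # single seed cell, the classic start
    grid = [row]
    for _ in range(steps - 1):
        left, right = np.roll(row, 1), np.roll(row, -1)
        idx = (left << 2) | (row << 1) | right   # neighbourhood as 3-bit index
        row = np.array([table[i] for i in idx])
        grid.append(row)
    return np.array(grid)

# rule 30 is the one that famously resembles Conus textile seashells
pattern = elementary_ca(30, width=64, steps=32)
```

Rendering each row as a strip of pixels gives the triangular, shell-like textures the talk refers to.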
A totalistic cellular automaton, more precisely, because it can look like seashells; you can actually get seashell-like patterns out of cellular automata like that. So I got to that, and we rendered a couple of logos based on it, but it wasn't particularly interesting. It gets a bit repetitive, a bit mechanical, after a while. Which brings us to doing it the long and hard way.

Let me skip back a little. There's this other guy called Kyle McDonald, who wrote a fairly long article on Medium, and gave a fairly long talk, called "A Return to Machine Learning". He's a media artist: he works with code, but mainly makes works of art, and one of his most recent works, with Google's AI Experiments, is the Infinite Drum Machine, where he used t-SNE to create what is essentially a visual drum machine. I think he had hundreds of thousands of samples, and he applied t-SNE to get some form of grouping and clustering over the sounds, so that sounds that sound alike are placed together.

Based on that, I tried applying the same thing on my side. What you see here is after a couple of rounds of iteration. For this project I started by using t-SNE directly on the pixel data, and it wasn't very good: the clustering didn't come out well, the results didn't show good patterns, and you couldn't really draw out lines like that. So instead I went back to the drawing board.
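One way to see why t-SNE straight on pixels disappoints: by default t-SNE measures similarity as Euclidean distance in pixel space, and two renders of the same texture, shifted by one pixel, can be about as far apart as a render and a blank frame. A toy illustration (a random binary image standing in for a CA render; everything here is made up for the demonstration):

```python
import numpy as np

rng = np.random.default_rng(1)
img = rng.integers(0, 2, (16, 16))        # stand-in for one rendered pattern
shifted = np.roll(img, 1, axis=1)          # visually the same texture, shifted

d_shift = np.sum((img - shifted) ** 2)     # distance to its own shifted copy
d_blank = np.sum((img - 0) ** 2)           # distance to an all-black image
# d_shift is of the same order as d_blank, so pixel distance treats a
# near-identical texture as being as different as an empty frame
```

That is roughly what the raw-pixel clustering ran into, and why feature extraction comes next.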
I read up on this thing called autoencoders, which is an unsupervised learning mechanism: you go from high dimensionality down to (most of the time) lower dimensionality and back up to the same number of dimensions as your input, and you train the neural network on unlabeled data. What I did here was train a convolutional autoencoder so I could pull the features out automatically, because after generating all the cellular automata I had about a million-odd combinations and I was too lazy to label them, so I sent them all through the autoencoder to get the features extracted for free. What happens is that after you train the autoencoder, you chop off the second half of the funnel and pipe your data through the remaining half, the encoder, just to get the features.

With those features I applied t-SNE again, and this time I got interesting groupings: these guys are clustered together based on the detected features, all these honeycomb shapes and so on. From there I could go forward and use this to train a recurrent neural network. That part is pretty fresh, and it's a part I'm not doing particularly well at. I've come to feel like a monkey turning knobs; many a time I'm really just tuning the hyperparameters to see what happens, like a kid holding adult tools. Kyle McDonald's essay talks about getting an intuition for the neural network, and that's about where I am: I'm getting an intuition for it, but I don't really know the true implementation details, and I fumble around a lot.
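The train-then-chop idea can be sketched without a deep learning framework. This is not the project's actual convolutional model, just a single dense bottleneck on synthetic data, with all names and numbers mine, to show the mechanics: train the funnel end to end on unlabeled data, then keep only the encoder half as a feature extractor:

```python
import numpy as np

rng = np.random.default_rng(0)

# toy stand-in for the CA renders: 200 flat "images" of 16 pixels each,
# secretly generated from 2 latent factors
latent = rng.normal(size=(200, 2))
mix = rng.normal(size=(2, 16))
X = np.tanh(latent @ mix)

# one-hidden-layer autoencoder: 16 -> 4 -> 16
n_in, n_code = X.shape[1], 4
W_enc = rng.normal(0, 0.1, (n_in, n_code)); b_enc = np.zeros(n_code)
W_dec = rng.normal(0, 0.1, (n_code, n_in)); b_dec = np.zeros(n_in)

lr = 0.1
for _ in range(2000):
    H = np.tanh(X @ W_enc + b_enc)        # encoder
    Y = H @ W_dec + b_dec                 # decoder (linear output)
    err = Y - X                           # reconstruction error
    # backprop through the two layers
    gW_dec = H.T @ err / len(X); gb_dec = err.mean(0)
    dH = (err @ W_dec.T) * (1 - H ** 2)
    gW_enc = X.T @ dH / len(X); gb_enc = dH.mean(0)
    W_dec -= lr * gW_dec; b_dec -= lr * gb_dec
    W_enc -= lr * gW_enc; b_enc -= lr * gb_enc

# "chop off the second half of the funnel": keep only the encoder
features = np.tanh(X @ W_enc + b_enc)     # 4-D features, no labels needed
```

The `features` array is what would then be fed to t-SNE (for instance scikit-learn's `TSNE`) in place of raw pixels.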
So that part didn't go very well. As a segue, though: prior to this I was running everything on the CPU, and it was terrible, about eight hours to train a model. Moving across to the GPU is awesome, partly because the feedback loop is much tighter: you can see, adjust and twist the knobs a little faster, and it's a little less annoying.

This is where the cellular automaton kicks in: it becomes the animation signature for the logo. Each timestep of the automaton produces a row like this, and it animates the logo accordingly, so you get some variations. This one isn't particularly interesting, but they do exist. So that's that; I do have a lot more content. The only bit in Python, really, is this part, the recurrent neural networks, and before that, piping things through the autoencoder; both are done through Keras wrapped on top of TensorFlow. Apart from that, everything else in my stack is JavaScript. So that's it. Thanks.

[Q&A] Ah yes, the fake bedrooms. So the question is whether I've looked into generative adversarial networks. Actually no, because I don't know what that means. [The audience member explains: there's a way to generate pictures and move between pictures.] Okay, yes, I've looked at that as well. In Kyle McDonald's "A Return to Machine Learning" he spoke about why he returned to machine learning, and I kind of agree with him: machine learning is essentially just a bag of generic algorithms.
Classically, machine learning (regressions, random forests and all that stuff) tends to lean towards predictive models, whereas with deep learning, with the bunch of neural networks we have now, it becomes very accessible to get to a generative model. I guess the GAN stuff is closer to procedural content in the sense of being an assistive tool to create content, whereas I'm leaning towards things like simulation, generating something from nothing, closer to Dwarf Fortress and how they do procedural content there, or to No Man's Sky. I agree that they're both procedural content; it's just how you apply the procedural content. Okay, thank you.
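As a footnote, the predictive-versus-generative distinction can be made concrete on the same toy dataset. Everything below is an invented illustration, not code from the talk: a predictive model answers a question about a given input, while a generative model captures the data's distribution well enough to produce new samples from it:

```python
import numpy as np

rng = np.random.default_rng(0)
heights = rng.normal(170.0, 8.0, 1000)   # toy dataset of heights in cm

# predictive use: answer a question about a given input
def is_tall(h, threshold=180.0):
    return h > threshold

# generative use: model the data's distribution, then sample new data
mu, sigma = heights.mean(), heights.std()
new_heights = rng.normal(mu, sigma, 5)   # five brand-new plausible samples
```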