Hello everybody, my name is Felix and I'm a little famous for my WebGL presentations. I gave this presentation at the FMX conference in Stuttgart two years ago, and when I heard from K.S.Kolom earlier this year I thought: that's my place, I should do this again. I actually don't have very much new material since then, but I'll show you some of my procedurally generated content. I actually have no idea what I was trying to say when I wrote some of these notes down two or three years ago. Back then I got nervous and skipped a bit; it went a little terribly, which made me sad. I learned programming early, as a teenager. My dad was also a computer programmer, and I was introduced to turtle graphics when I was maybe 12 or 13. Then I learned JavaScript, or rather HTML at first, in computer science clubs and at school. I soaked it all up in two weeks, and then I discovered that at the time it was Internet Explorer 4, which didn't even have the Document Object Model, and I was celebrating the day Internet Explorer 5 was released. Around 2000 I switched to the server side as well, programmed my first web pages, and had a guest book on my site that I wrote as a server-side script. So then there was Milkdrop. Milkdrop is a music visualization plugin for Winamp, and I will come back to it shortly. I think I discovered it in 2003, and in 2005 I released my first preset pack. Then in 2008, customizable shader programs were introduced. You may or may not know that there are several shader toys on the web now; Milkdrop had much of that many years earlier. I studied computer science at university, in the software engineering branch with object-oriented systems, and learned to program Java and C# and the whole modeling approach that goes with it. So Milkdrop is very, very basic: you have simple shapes that you can animate with a little scripting language, and you can transform the backdrop with a warp feedback.
So you can place some colors and then let them flow. That's what I was recreating with my first takes on WebGL as well. Milkdrop is an IDE; you can just start it up. (Oh, wait a second, I'm not sure if I have sound. Can I plug in sound somewhere here?) More interesting is the built-in editor: you can open a preset and directly jump in and edit the programs. You can save them, load them, give them names. When you find the name Flexi in one of them, that's me. I plugged the sound into the stream to show you that it really reacts to live music; it's not prerecorded or anything, it's all generated on the fly. One of my presets has been included with the official download since 2008, with Winamp 5.5. I'm actually not very active anymore; I'm occasionally looking into the forum, but I haven't done anything new in the last one or two years. But that's how I came to shader programming. I started to program my first Milkdrop clone in Java, in the Processing language. But around that time WebGL appeared, so I left Processing aside and went back to JavaScript, where I had really learned programming as a teenager. So the most simple thing you can do with a texture feedback loop is a tunnel. You see it's running in the browser and it's really fast; this one is one megapixel. This next one is quite similar, but imagine you don't use just one camera to feed back into the loop, but two. You can move, scale, and rotate both cameras, and the feedback makes it a really responsive effect. That's one of the lessons I learned with Milkdrop very early, so it was easy for me to adapt it to JavaScript and WebGL. Actually, this one doesn't even use WebGL, just the standard HTML5 canvas, and I won a prize with it last year; it's written in less than one kilobyte of JavaScript. With WebGL you're not limited to these linear transforms: you can also use more complex functions and complicated formulas. Really simple.
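To make the feedback idea concrete, here is a minimal CPU sketch of one warp-feedback step, the operation the fragment shader performs per pixel each frame. Everything here (the function name, the simple zoom warp, nearest-neighbor sampling) is my own illustration, not code from the demos:

```javascript
// CPU sketch of one warp-feedback step. Each output pixel samples the
// previous frame at a coordinate pulled slightly toward the image
// center; iterated over many frames this produces the classic tunnel
// zoom. A shader would do the same lookup via texture2D.
function warpFeedbackStep(prev, width, height, zoom = 0.95) {
  const next = new Float32Array(width * height);
  for (let y = 0; y < height; y++) {
    for (let x = 0; x < width; x++) {
      // normalized coordinates, warped around the center (0.5, 0.5)
      const u = (x / width - 0.5) * zoom + 0.5;
      const v = (y / height - 0.5) * zoom + 0.5;
      // clamped nearest-neighbor lookup into the previous frame
      const sx = Math.min(width - 1, Math.max(0, Math.round(u * width)));
      const sy = Math.min(height - 1, Math.max(0, v * height | 0));
      next[y * width + x] = prev[sy * width + sx];
    }
  }
  return next;
}
```

With two "cameras" you would simply blend two such lookups with different transforms before writing the result back.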
For this one I've placed four roots in the complex number plane; I can grab one with a click and move it around. Last year the JS1k compo also had a new category for WebGL, and I got it wrong: I did 2K in WebGL, which was not allowed, so this funny little guy was dismissed by the jury. But you see I'm heavily using edge enhancement filters, something you can do very cheaply on the GPU, and this one is a feedback system on two megapixels. There are obviously three color channels in the texture: red, green, blue. If you don't see red, green, and blue here, it's because I've recolored them in the final composite step, where I've also applied the edge enhancement. The basic setup is also very simple. There are mainly two shader programs: one that uses the previous frame as input for the next frame (that is the advance shader), and then you render this warp feedback texture to the screen with the composite shader. The composite output does not get back into the feedback loop, but it lets you layer different effects on top of each other before the image finally reaches the screen. What Milkdrop also had was a grid-based texture warp, but I did not include that in my simple shader pipeline in WebGL. So what can you do? I'm actually not using any geometry in my experiments. This is a WebGL scene in which the whole image is spanned by two triangles; that's the idea behind every shader toy. This one shows a photograph of the cathedral that I took when I arrived yesterday. I hoped I could show you a feedback loop on the actual live stream, but I don't know if it's really streamed live. Is this the correct link? Some points on this slide I've already mentioned: two triangles, yes, and I didn't use any framework; I programmed this all by hand from top to bottom. For every page you see in this slide deck you can simply grab the HTML file, take a copy, and break things. Just do it. Very much recommended.
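The two-program pipeline described above (an advance shader whose output feeds back, and a composite shader whose output stays outside the loop) can be sketched abstractly; the names and structure here are my own illustration:

```javascript
// Ping-pong structure of a two-pass feedback pipeline: the advance
// step reads frame N and produces frame N+1; the composite step
// renders frame N+1 for display, but its output never re-enters the
// feedback loop.
function makeFeedbackLoop(advance, composite, initialState) {
  let prev = initialState;
  return function frame() {
    const next = advance(prev);     // feedback path: prev -> next
    const screen = composite(next); // display path, outside the loop
    prev = next;                    // swap buffers for the next frame
    return screen;
  };
}
```

In WebGL the two states would be two textures attached to framebuffers, swapped every animation frame.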
Milkdrop had blur textures pre-calculated, and they can be used for very surprising, very natural-looking effects. In Milkdrop itself you have three different resolutions of the blur textures; I've extended this a little bit. Also, I've dropped the 8-bit textures in favor of float32 textures, which opens up a lot more opportunities, for example to use the pipeline for simulations. I'm not the only one who does this. Everyone who knows a little WebGL knows Three.js, the most famous library and framework for productive WebGL work. Steven Wittens wrote a render-to-texture library for Three.js, and it does pretty much the same thing that I'm doing, just with Three.js. Then there are different clones of the original Milkdrop. For example, there's Milkshake: it has a SoundCloud integration, it's all open source, you can find it on GitHub, and there's a Python-based preset converter, so you can bring your original Milkdrop content over. Butterchurn is even cooler: it has a converter from the HLSL DirectX shader programs to GLSL in JavaScript. I was quite amused when I found the site, because the first scene shown was programmed by me. I didn't know about Butterchurn, but I really like it when things like that happen. Every once in a while I discover YouTube videos of Milkdrop running my content, and I'm just happy. Still, I can't find many of the original Milkdrop presets ported to WebGL. So why am I here? Why am I talking to you? Because I think you are missing a point when you look at fractal images. Most of them are rendered over many minutes, and I think that's a waste. You can have instantaneous feedback if you know how to do it right. It doesn't work equally well for every fractal, but you can get very far. This is a progressive, image-based approach to fractal rendering.
In every animation frame I blend the previous frame in, effectively adding one bit to the color depth, and afterwards I apply the color mapping. It's very simple: the whole shader program is maybe two lines for the feedback and another five lines for the compositing. The Julia fractal is a similar complex-number operation applied to the frame. The fractal tree is not using complex-number calculus; it is composited from three smaller copies of itself. You see the branches on the left and the right? Those are smaller copies of the whole thing, and the main branch above the first fork is also a slightly smaller copy of the original frame. The same thing with four copies can lead to an almost 3D impression. It's really not 3D in this case, just playing with perspective and four copies of the same fractal. As I said before, I can't support every type of fractal. These are very special types of fractals that have the property to converge. The Mandelbrot definition is another matter: a point belongs to the fractal if its orbit remains bounded, and if not, it escapes, and I can't support that with this approach. The definition says the Mandelbrot set is the set of values of c in the complex plane for which the orbit of zero under iteration of the complex quadratic polynomial remains bounded. What I'm not doing is iterating over and over and then checking whether it stays within the limit or not; I just don't care. I keep calculating on and on, and even past the point where it has converged I still do it, because whatever. So there will be no 3D fractals like the Mandelbulb here. You're supposed to laugh now. I make jokes myself: I call this one a flappy bird, though I don't know if it really is one. It is also really old. I did this on the last day of the JS1k compo last year, but actually I had done it before; I think it was 2008 or 2009 when I found the Buddhabrot. I'm giving this talk the title Abstract Pixels because I don't see pixels only as RGB on the screen. I see pixels as arrays of vectors.
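The escape-time iteration behind Mandelbrot and Julia images, which the talk contrasts with its converging feedback approach, looks like this on the CPU; the function name and parameter choices are mine:

```javascript
// Minimal escape-time iteration for a Julia set: z -> z^2 + c.
// Returns the number of iterations before |z| exceeds the bailout
// radius, or maxIter if the orbit stays bounded (point is "inside").
// For the Mandelbrot set you would instead start z at 0 and vary c.
function juliaIterations(zx, zy, cx, cy, maxIter = 64, bailout = 2) {
  for (let i = 0; i < maxIter; i++) {
    const x2 = zx * zx - zy * zy + cx; // real part of z^2 + c
    const y2 = 2 * zx * zy + cy;      // imaginary part of z^2 + c
    zx = x2; zy = y2;
    if (zx * zx + zy * zy > bailout * bailout) return i;
  }
  return maxIter;
}
```

The talk's point is that it skips exactly this bounded/unbounded bookkeeping and just keeps iterating the image itself.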
You can store anything in them if you want. You can use the RGB channels for transformations, but you are not bound to displaying them directly in that form. You can color each channel separately and stack them on top of each other; you can use many color channels. In the end you can drop the color interpretation entirely and use the value as a property of an agent, in the sense that it can walk around and explore the world: it can have a position, it can have a velocity, a pixel that moves around. Or you can think of simulations. Let me read my bullet points: cellular automata; forest-fire and epidemic simulations. Let's say you have a forest on a regular grid, and you say there are this many trees, it's this hot, the wind comes from that direction, and the neighbor is burning. Oh my god, it will catch fire. You can also simulate how information travels across a flat area. And, this is where my talk continues, concentrations of chemical substances: Alan Turing proposed an explanation for the patterns on leopard and zebra skins, and I will come back to this later. So first, cellular automata. Cellular automata are what I described before with the fire simulation: very simple programs that are executed on a single pixel, for every pixel on the screen, and the new state of a pixel is defined by the state the pixel had before plus the states of its neighbors. The most famous, of course, is the Game of Life by John Conway. This one is executed here on a two-megapixel texture; if you did this on the CPU side you could wait up to several seconds for every frame, and this one runs just fluently. I've also programmed a magnifying lens to explore it. Another one I want to show first is a rock-paper-scissors simulation. So you have red, green, and blue, and they don't want to die: blue beats green, red beats blue, and green beats red again. That is the only rule, and strange things happen then.
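The Game of Life rule is exactly the kind of per-pixel program a fragment shader runs over the whole texture. Here is a CPU sketch of one generation; the toroidal wrap-around is my choice (a shader would typically get the same effect with fract() on the texture coordinate):

```javascript
// One Game of Life generation on a toroidal grid: a cell is born with
// exactly 3 live neighbors and survives with 2 or 3. This is the same
// rule a fragment shader would evaluate once per pixel.
function lifeStep(grid, w, h) {
  const next = new Uint8Array(w * h);
  for (let y = 0; y < h; y++) {
    for (let x = 0; x < w; x++) {
      let n = 0; // count the 8 neighbors, wrapping at the edges
      for (let dy = -1; dy <= 1; dy++) {
        for (let dx = -1; dx <= 1; dx++) {
          if (dx === 0 && dy === 0) continue;
          n += grid[((y + dy + h) % h) * w + ((x + dx + w) % w)];
        }
      }
      const alive = grid[y * w + x] === 1;
      next[y * w + x] = (n === 3 || (alive && n === 2)) ? 1 : 0;
    }
  }
  return next;
}
```

On the GPU the grid lives in a texture and `next` is the render target of the feedback pass.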
Well, it's not the only rule; I'm also doing another thing. Each color patch flows in the direction its gradient points, so when there's a green color patch, it will continue to grow into the area where it was not present before. As I said before, blur is very important, both for this gradient definition and for some other effects, and blur is surprisingly cheap on the GPU. You can have a texture whose foreground is the one from the feedback loop, while different levels of blur are kept in the background and calculated for every fragment. The trick is that you don't evaluate the whole convolution, in this case a 7 by 7 matrix, for every pixel; you can separate it into two passes, one horizontal and one vertical, and each one needs only 7 lookups. And if you go further down, you can reduce the resolution and save even more. I've also tried my hand at integral images, or summed-area tables. These are images where, for every pixel, you add the pixel's value to its predecessors, so that you can do arbitrary lookups for different rectangles and get the average over the box. But the build-up process of summed-area tables was not fast enough for me to use it here. So, I introduced the work of Alan Turing before, and the reaction-diffusion stuff for natural-looking patterns like zebra stripes or leopard dots and whatnot. The algorithm behind it is simply a differential model over two different blur levels. I don't want to go into detail; these are external links and you can look it up afterwards when the presentation is online. You have a noise signal, then you take the blurred version of it and look at the difference. You could say the rich get richer and the poor get poorer. And then you blur this again, and again. This one was pretty neat; it was made by Frederik Vanhoutte, or at least he wrote a blog article about it.
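The separable blur trick mentioned above, sketched on the CPU with a plain box filter: a 7x7 average becomes one horizontal and one vertical 7-tap pass, so 14 lookups per pixel instead of 49. The names and the clamp-to-edge policy are my own choices; a Gaussian works the same way with weighted taps:

```javascript
// One blur pass along a single axis, 2*radius+1 taps with clamped edges.
function blurPass(src, w, h, horizontal, radius = 3) {
  const dst = new Float32Array(w * h);
  for (let y = 0; y < h; y++) {
    for (let x = 0; x < w; x++) {
      let sum = 0;
      for (let k = -radius; k <= radius; k++) {
        const sx = horizontal ? Math.min(w - 1, Math.max(0, x + k)) : x;
        const sy = horizontal ? y : Math.min(h - 1, Math.max(0, y + k));
        sum += src[sy * w + sx];
      }
      dst[y * w + x] = sum / (2 * radius + 1);
    }
  }
  return dst;
}

// 7x7 box blur as two separable 7-tap passes.
function separableBoxBlur(src, w, h) {
  return blurPass(blurPass(src, w, h, true), w, h, false);
}
```

Running the passes at reduced resolution, as mentioned in the talk, cuts the cost further still.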
Actually, it was Jonathan McCabe who introduced this simulation in a paper, and when I tried my hand at it, this is how far I came. So I have to hurry; I have so many nice things to show. Here is the whole warp shader program of this simulation. Let's just let it run for a while and go directly to the source code, so I can show you what makes up the scene. There are different shader programs, as I said: mainly the advance shader program and the composite shader program. In this case the composite shader is a one-liner; it simply takes the red channel and maps it to white. So the whole thing happens in the advance shader, and you can see it in these three lines. That's all, but it really gives a nice zebra pattern. The warp on this one, the flow and the motion, is defined by simple sine and cosine values that vary over time. If you don't want to just believe me, I can show you: it's really in real time, and it's all live. I don't know who of you has ever zoomed into a zebra's fur before. Some people have strange hobbies. I think it's very meditative; it's fun to me. And don't show this to a zebra. Of course, when there are partial differential equations at work, there are also more sinister uses for them. This rotation in the background is actually from a predator-prey simulation model, the Lotka-Volterra model. You could say the x-axis is the prey population and the y-axis is the predator population. Or is it exactly the opposite? I always confuse this. Let's say when there are no predators but a lot of prey, the prey will continue to grow, and then the ratio shifts. I don't have that much time, and yes, I'm drifting away; I knew that would happen. Just have a look later on. I was invited to talk about my programming at the FMX conference in Stuttgart in 2013, and I think it was due to this example in particular.
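The Lotka-Volterra predator-prey model mentioned above, as a minimal forward-Euler integration step; the coefficient values here are illustrative, not taken from the demo:

```javascript
// One forward-Euler step of the Lotka-Volterra predator-prey model:
//   dPrey/dt     = alpha*prey - beta*prey*pred
//   dPredator/dt = delta*prey*pred - gamma*pred
// Plotting (prey, pred) over time traces the closed orbits seen in
// the background rotation.
function lotkaVolterraStep(prey, pred, dt,
                           alpha = 1.0, beta = 0.5,
                           delta = 0.2, gamma = 0.6) {
  const dPrey = alpha * prey - beta * prey * pred;
  const dPred = delta * prey * pred - gamma * pred;
  return [prey + dt * dPrey, pred + dt * dPred];
}
```

With no predators the prey population grows exponentially; with no prey the predators decay, which is the behavior the speaker starts to describe before drifting off.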
This one is where I originally gained a lot of fame. It's really simple again: there's only one color channel used, the red one. That's not so obvious because it's recolored. I could go through the code with you and explain every line, but I think it's self-explanatory; I've commented the code, so download it and check it out. Noise is not so important here. You can generate your own noise procedurally, but I have pre-calculated some noise textures to use instead. Simplex noise is out there, but I think it's still not as real-time as I would like it to be. You can do many funny things with simplex noise. I haven't even arrived at the fluid simulations in this part, so I have to go quickly through this; I think I must skip some of it and answer it in the question-and-answer part. It's not so much fun to explain because it's really simple. The code is there, it's open, go for it. All I do is very minimal effects, but with many complex numbers and things like that. So I showed you this initially interesting approach with panorama photos, but I hadn't started there. I started with ray tracing of spheres, and this one is very simple. A typical shader guy looks at it and asks: where's the lighting, and where are all the effects? But that's not my point. All I was trying to show you is that, again, no geometry is used; the scene is ray traced as fast as it can go. There are many operations, like the polar coordinate transformation, that are astonishingly cheap to use, especially when you have a texture that actually tiles, so simulations can flow over the edge. You've seen this earlier as a fractal feedback loop. In this case the same math, the application of the four roots in the complex number plane, deforms the original still image. When you combine polar coordinates and complex numbers you can have two poles instead of just one.
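The polar coordinate transformation called out above as astonishingly cheap is just this pair of mappings, evaluated once per fragment. Placing the pole at the texture center is my choice for the sketch:

```javascript
// Cartesian <-> polar mapping for a texture coordinate, with the pole
// at the texture center (0.5, 0.5). In a fragment shader this is one
// length() and one atan() per pixel.
function toPolar(u, v) {
  const x = u - 0.5, y = v - 0.5;
  return [Math.sqrt(x * x + y * y), Math.atan2(y, x)]; // [radius, angle]
}

function fromPolar(r, a) {
  return [0.5 + r * Math.cos(a), 0.5 + r * Math.sin(a)];
}
```

Sampling a tiling texture through toPolar is what turns a flat simulation into the wrap-around panorama effects shown in the demos.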
Actually, these last two were on two external pages with shader editors on the web, ready for your experiments: Shadertoy and GLSL Sandbox. Many people contribute to them, and you can directly jump in and edit the shader code. Now, complex numbers can produce black holes. Have you ever pulled a black hole through the edge? It's not physically correct. When I'm talking about projection, I also mean you can have a scene with many particles, let's say half a million particles, and you paint them into a texture; then you have a density map of the particles, and you can use the density and its gradients to update the particles' positions. This is why I'm hurrying so much: I'm worried that my time is short, because this is only the start of the particles, and I also did fluid simulations, so I'm showing it all. Do you want fluid simulation? This one was shown on many blogs, including Chrome Experiments and the official Chrome blog, which brought me many followers. So, 10 minutes to go; that must be enough for the particle soup and the other particle experiments. Yes, I'm sorry. I said you can use the pixels in a texture to represent other things than RGB values. This flock of two million particles is stored in a texture and updated every frame with a pixel shader, a fragment shader program. Actually, this particular one is not updated, it's static, but there are really nice algorithms that are simple enough to fit on my slides. I did some strange attractors in 3D with particles as well. These strange attractors are defined by one line each for the x, y, and z components of the pixel-particle positions, and they are updated every frame by a shader program; this one is half a million pixels at a time. The formula can be edited live, and the integration parameters can be adjusted live as well. I just want to leave this running and take a moment to go to full screen. I think we can start the question-and-answer time.
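A strange attractor defined by one line per component, as described above: here is one forward-Euler step of the Lorenz system, with the classic textbook parameters (not necessarily the attractors used in the demo). On the GPU this update runs once per pixel-particle, half a million at a time:

```javascript
// One Euler step of the Lorenz attractor; each component is a single
// line, exactly the shape of the per-particle shader update described
// in the talk. Classic parameters sigma=10, rho=28, beta=8/3.
function lorenzStep([x, y, z], dt = 0.01, sigma = 10, rho = 28, beta = 8 / 3) {
  return [
    x + dt * sigma * (y - x),
    y + dt * (x * (rho - z) - y),
    z + dt * (x * y - beta * z),
  ];
}
```

Storing [x, y, z] in the RGB channels of a float texture makes the particle state exactly the kind of "abstract pixel" the talk is named after.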
I think it was very fascinating to see all those different pictures in that sort of light. A big round of applause for Felix.