Ah, wonderful to be back. Bon dia! How's it going? Good? Recovering from the after party, after, you know, beers and stuff? Wonderful. It's good to be back. I was here in 2014, on this very stage, talking about web components, and look, we've got them now! Kind of. So that's the pace. Hopefully the things that I talk about today will be even more useful, as they are kind of already there. We are talking about how the browser renders things. So yeah, for those who haven't seen me before, I'm Martin, I'm from Zurich, Switzerland, and in Zurich we are very boring and very serious, so here's my LinkedIn picture. And that's actually really helpful, because this LinkedIn picture keeps recruiters away. Kind of the bad ones, anyway; the good ones still like it. I'm head of engineering at Archilogic, which is a tool, or has been a tool, and still is a tool, that does 3D in the browser, which is pretty performance critical, as you might imagine. And we do WebVR things as well. And recently we have started to build a new thing. We're going to see that. No, actually, we're not going to see that today. I'm also a Google Developer Expert, and I'm active in the Mozilla community and in the W3C. I definitely recommend that everyone join these communities and community efforts. As Steve has said, browser vendors have a lot of things to consider. And when we are vocal and loud about a feature that we want, and go to these places, especially the W3C, and say, hey, we really want this, it would be awesome to have this, then browser vendors have more incentive to actually build the thing we are asking for. So I highly recommend going there. It's very easy: the specifications are on GitHub. You can file pull requests. If you want a change, you can open an issue and say, hey, I don't understand this, or, hey, this is not how I want this to happen. And then a conversation can start really easily.
It's not like having to physically go somewhere, because, ugh, physically going somewhere, right? That's what we invented the Internet for, to not have to do that. We also built this thing called 3d.io. It's a set of APIs and components to make 3D easier, because in these groups that I just mentioned, I'm loud and vocal about 3D and VR as first-class citizens on the web, so hopefully that's going to go well. Try it out. Let me know if it's not good, because then we have to fix something. Anyway, today I will talk to you about rendering performance. And I think to understand rendering performance, we have to take a few steps. And I think the easiest first step is looking at how we transform the text that we've got, which is HTML, CSS, and JavaScript as well. But primarily, when I talk about rendering, it's the visual thing, so I'm mostly concerned about HTML and JavaScript in the first step. How do we take this text and turn it into something that becomes visual? And then in the second part, how do we actually paint in all the pixels, and what are pixels, and what is rendering really? And how do many different images come together on the screen as one big website or web app? And I'm going to do some interactive parts. There are going to be questions. And you can win eternal fame. No, actually, I have Swiss chocolate, I think. So if you want Swiss chocolate, then come find me later. I have only a very limited amount of it. But you can also take a 360-degree selfie with me if all the chocolate is gone or you don't like chocolate. It's not lactose-free chocolate, I'm sorry for that. And then we're going to talk about WebGL and Canvas 2D, which is the thing you might think of first when you hear rendering, because it's like 3D and 2D rendering. Ah, that's awesome. Who here has used the canvas before? Be it for 3D or 2D? That's a few people. Good. All the others, try it. Goddamn. Right, part one: the DOM and other trees.
Isn't that a beautiful picture? So we start with a website like this. This is probably something that could have been around in the 90s, but it still more or less looks like this, unless we do something really weird, which we shouldn't be doing. If you start off with semantic HTML, you might start off with something like this: a headline and an image. Now what happens as the data comes in from the network? Well, the browser receives things and goes, ooh, this is an HTML document. So I'll start with building a tree with a document node. And then there's a body in there. All right. And then we have this headline with text. I'll put these in as two separate nodes, because the headline element could stay the same while the text could change, so I want to have them represented as individual bits in memory in the tree. And then we see an image, and then we load the image from the network and all that kind of stuff. Depending on the image format, we might get the information of how large the image is very early on in the download, so we can fill information in like that. There's another tree being built in the background, which is the CSSOM that Steve just mentioned. It's basically holding the visual properties. So for instance, for the h1 element, it might figure out how large this is going to be, what font we're going to use, what color we're going to use; all that kind of stuff is in a separate tree as well. So basically we are parsing in a streaming fashion, which is amazing, because if you've ever used fetch or something to download a larger file or get a larger API response, you know that the promise only resolves once you have everything in. And I'm not sure if XMLHttpRequest gave you actual access to the thing; I think it gave you a progress event that tells you how far in you are, but I don't think you can get the data. The browser can do that.
Basically, the moment the browser sees HTML, it can start parsing it into a tree. We don't have that power in JavaScript, so the browser has a bit of an advantage here. There's the Streams API coming up; Jake Archibald gave great talks on that. So if you want streaming access to things as they come in, definitely check out the Streams API and go to GitHub and say, I want this. This is awesome. I think Chrome and Firefox are already starting to ship it in an experimental form. So we start parsing things as they come in. And as they come in, we build these trees. We have two trees. We have the DOM tree, which is the elements and their content. And then we have the CSSOM, which is the visual properties. OK, great. But we don't know how these actually come together on the page. So there's an algorithm that, depending on which browser you're looking into, is called reflow or layout. I'll call it layout for short, because it kind of makes sense, as it lays things out on the page, in my opinion. And what it does is basically figure out: OK, so we have this DOM tree, and we have the CSSOM. And we know that we use Comic Sans in 14 point. And it says, hello world. So it's this many pixels wide. And it uses native APIs for that, depending on which platform you're on. Actually, it usually uses a wrapper. For instance, in Chrome, that is Skia. So it's a library that basically gives you this information and figures out the colors and all that kind of stuff. And it also figures out how much space we have. How large is the window? Are we on a mobile phone? Is it in portrait or landscape mode? Has the window been resized, or is it full screen? And basically, based on that and the size information that we already gathered, it figures out how to fit everything in according to how HTML wants it to be fit in.
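The streaming access the browser enjoys can be sketched with the Streams API just mentioned. This is a hedged sketch: `readProgressively` is a made-up helper, not a platform API, and it assumes a runtime where `ReadableStream` and `TextDecoder` are available (modern browsers, or Node 18+).

```javascript
// Sketch: consuming a body chunk by chunk instead of waiting for the
// whole thing, roughly what the browser's streaming parser gets to do.
// `readProgressively` is a hypothetical helper, not a platform API.
async function readProgressively(stream, onChunk) {
  const reader = stream.getReader();
  const decoder = new TextDecoder();
  let received = 0;
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    received += value.byteLength;
    // A parser could start building its tree from this chunk right
    // away, long before the download finishes.
    onChunk(decoder.decode(value, { stream: true }));
  }
  return received;
}
```

In a browser you would pass `(await fetch(url)).body` as the stream and start working on the markup as it arrives.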
And there are block-level elements, and inline elements, and inline-block, and table, and flexbox, and grid. All this kind of stuff happens in this layout process. To do that, it looks at the trees. And then what happens is more or less like this. So here, as the information comes in, as the trees are being created by the streaming parser, these boxes are being laid out. OK, great. So we start with the page and lay out all the elements as they come in. That is wonderful, but that's not how my normal website looks. I do not look at colored boxes. Maybe for Minesweeper, but normally I look at something more intricate than boxes. So we have to paint things as well. Painting is one of two steps of getting something visual on screen. There's another step called compositing that we're going to look at in a bit. But basically, when we paint, we have to think about what that means. I mean, painting; I'm not sitting in front of my computer coloring in the little dots myself. If you think about computer screens today, until we have holographic displays and all that kind of stuff, they are basically just a collection of lamps: very tiny lamps, LEDs, or something like that, or a backlight in front of a thin film of whatever, or a little electron ray going, ooh, fun, if you have one of these old monitors. Terrible graphic quality, anyway. But basically, they're tiny little lamps. And each of these dots, each of these pixels, is actually three of them. There's a red one, there's a green one, and there's a blue one. If we turn them all off, surprise: no light, it's black. If we turn them all on, it is white. And then basically, we have intensities of on, right? You can dim them up or dim them down. So you get 256 variations of brightness for each of them, and depending on how bright they are in contrast to each other, you get a different color.
So for instance, if the red one is full on, like maximum brightness, and all the others are off, you get red. Surprise, right? If you then turn on the blue one, you get hot pink, probably, looking beautiful. And what it really is, is these intensities are just numbers. I laid them out in this grid to kind of reflect how the colors map to numbers. But there's nothing stopping me from just putting them in a list, right? The grid is just for us, so that we have a better understanding of which numbers represent which pixel, but really, it is just a list of numbers. So that means rendering is actually not very complicated if you think about it. The screen is just an array of numbers: three numbers for one pixel. Three numbers enter, one color leaves, to paraphrase Thunderdome. And each of these array items is technically like four values, or we have four values per pixel in the array; it depends a bit on how we want to structure this array. And they represent red, green, blue, and maybe also transparency. Now, the screen can't turn transparent, but we might have multiple colors that we want to overlay onto each other. So we have these four values, or three values if you don't want transparency. And when we want to render, all we do is write these numbers into the list. Now, who here has ever written numbers into a JavaScript array? I'm actually very surprised that not many people are raising their hands. What are you doing? Anyway, so yeah, we figured out rendering is basically just writing numbers into a list. Writing numbers into a list, that's a very basic concept. And it's a bit more intricate than that. When we paint, when we want to write numbers into lists, painting works on images. And as you might know, images happen to be rectangular. That's also why we need this fourth number. What if we have a rectangular image, but we want to render something that is a circle? Huh.
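That "writing numbers into a list" idea can be made concrete with a toy sketch. `setPixel` is a made-up helper, but the layout it writes into is exactly the flat RGBA layout that `ImageData.data` uses in real canvases:

```javascript
// A "screen" is just a flat list of numbers: four per pixel (R, G, B, A).
// setPixel is a toy helper showing that rendering boils down to writing
// numbers into that list at the right offset.
function setPixel(buf, width, x, y, r, g, b, a = 255) {
  const i = (y * width + x) * 4; // four numbers per pixel
  buf[i] = r;
  buf[i + 1] = g;
  buf[i + 2] = b;
  buf[i + 3] = a;
}

// A 2x2 screen: 2 * 2 * 4 numbers, all zero (black, fully transparent).
const screen = new Uint8ClampedArray(2 * 2 * 4);
setPixel(screen, 2, 1, 0, 255, 0, 0); // top-right pixel: red lamp full on
```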
Well, the thing is, the fourth number, the transparency, or the alpha channel, is where the magic happens. If we set that to zero, then we just ignore that pixel there, and we're just not going to draw it. But in memory, it is still a rectangular piece of memory, more or less. These rectangles are called layers, or images, or textures. I use those terms interchangeably. There is a bit of a technicality behind that, but if you hear texture, or layer, or image, it's kind of the same thing: a bunch of numbers that form a rectangular array of pixels. And now, as I said, we want to put multiple of these over each other. For instance, we might want to put down a background, and then a Nyan cat on top of it. So we have to combine them somehow. And this combination of the two is called compositing. And it works like this. Here we have two pixels. Each of them has an alpha value. So here we have 64, which is like a quarter intensity. And here we have 128. Actually, that should be 127. Doesn't matter. That's like half intensity. And we want to combine the two. Here we have red, and here we have green. And when we combine them, we basically mix them. We take, for instance, one part red and two parts green, and then mix them. So this color here is twice as important; it gets multiplied by two beforehand. And this one is not as important. And so we mix them up into this wonderful color here. And as I said, the screen can't turn transparent. That doesn't really work, at least not with current technology. So we only use three numbers for the result, because we have three lamps; we don't have a magical transparency component. So this is how you can combine these four values with another four values, using compositing, into three values that you can actually express with the three little lamps that you have on your screen for each of the pixels. And you can use it in more practical cases like this.
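The one-part-red, two-parts-green mixing described above can be sketched as a small function. To be clear, this is a simplification: real compositors use the Porter-Duff "over" operator, while this version just weights each colour by its alpha, as in the on-stage example.

```javascript
// Simplified compositing sketch: weight each RGBA pixel's colour by its
// alpha and emit an opaque RGB pixel (three lamps, no transparency left).
// Real browsers use the Porter-Duff "over" operator; this just mirrors
// the one-part-red, two-parts-green mixing from the talk.
function blend([r1, g1, b1, a1], [r2, g2, b2, a2]) {
  const total = a1 + a2;
  const mix = (c1, c2) => Math.round((c1 * a1 + c2 * a2) / total);
  return [mix(r1, r2), mix(g1, g2), mix(b1, b2)];
}

// Quarter-intensity red over half-intensity green: green counts twice.
const mixed = blend([255, 0, 0, 64], [0, 255, 0, 128]);
```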
So you have your background, and you have a Nyan cat. And then what you do is basically render it all over again for each frame. Hopefully often enough; hopefully at least 30 times per second, ideally 60 times per second, because that gives a very smooth motion. You start by putting the Nyan cat image on the other image (I put a little border around it so you can see it) and then putting that onto the screen. And you do that again, but you move the Nyan cat a little. And notice: the things in the rectangle here and the background are the same. So we don't have to write all our numbers again. We just copy our numbers into different positions on the screen. So we don't have to draw everything. We don't have to paint. We only composite. We only put the pieces together in different spots. And then we do that again, and then we do that again. And then it goes Nyan, Nyan, Nyan, Nyan, and so on and so forth. You know how the drill goes. But how does that actually work? How do I take one value of an array and put it onto another value of another array and then put that into a third array? Sure, you could use zip and all that kind of stuff. But actually, the graphics pipeline does something slightly different. It takes multiple images. One of them could be the Nyan cat; the other one could be the background, for instance. It runs a shader. And the shader creates what is on the screen later on. Now, what is a shader? We have heard that term, haven't we? One of yesterday's speakers actually used them quite extensively, and I think she showed us a few examples of what you can do. But shaders are not only for WebGL. Shaders are what the browser uses to get things on screen really fast. Let's look at how that works by looking at the different kinds of execution hardware that we have. So we have CPUs. And the CPU has more cores these days than one, but it's a small number. If it's a phone, it's probably like eight cores.
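The "just copy the numbers to a different spot" idea can be sketched in plain JS. A caveat up front: `stampSprite` is a toy, made-up helper, and real browsers do this on the GPU, not pixel by pixel in JavaScript; but it shows why moving the cat is cheap, since neither source image has to be repainted.

```javascript
// Toy sketch of compositing as copying numbers: stamp a small RGBA
// sprite into a bigger RGBA buffer at (dx, dy), skipping fully
// transparent pixels. Moving the cat each frame only changes dx and dy;
// neither image gets repainted. (Real compositing runs on the GPU.)
function stampSprite(bg, bgW, sprite, spW, spH, dx, dy) {
  for (let y = 0; y < spH; y++) {
    for (let x = 0; x < spW; x++) {
      const s = (y * spW + x) * 4;
      if (sprite[s + 3] === 0) continue; // transparent: leave background
      const d = ((dy + y) * bgW + (dx + x)) * 4;
      bg[d] = sprite[s];
      bg[d + 1] = sprite[s + 1];
      bg[d + 2] = sprite[s + 2];
      bg[d + 3] = 255;
    }
  }
}
```

Each frame you would call `stampSprite` again with a slightly larger `dx`, and that is the whole animation: Nyan, Nyan, Nyan.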
Like, you have eight of them working together. If it's a modern computer, it's probably anything between four and 16. And if you have too much money, then it's like 32 or something like that. But then the GPU has cores as well. Thousands of them. What? That must be expensive. No. Because these GPU cores are very, very specialized. The CPU core, unfortunately, because it has to do all sorts of calculations, has to be a little more generic. But it's highly optimized to make these generic things as fast as possible, which makes it a lot more complex. So each of these individual GPU cores isn't as clever as a core from the CPU. But the CPU can't take certain shortcuts, because it has to be generic. And when I say generic, I mean we might have programs that read from some memory somewhere, then check what's in the memory there, and if it's some value that we expect, write to some memory somewhere else. We can't do that in parallel. That's the classic shared-memory problem. If you have ever worked with multithreading, you know how it goes: oh, I have a problem. I'm going to solve that problem with multithreading. Oh, now I have five problems. Multithreading is a tricky one because of this. You can't just access the same memory at the same time. So if you have multiple cores, they have to do some very clever things to coordinate amongst themselves. But the CPU is really good at this kind of work. It can do all sorts of things. It caches memory values. It does prediction of what's going to happen in the next cycle. It prefetches things, because it goes, OK, I'm going to run this code next, so I'm going to get that already. It can do all sorts of these optimizations. And the GPU does not really help there, because we have like 1,000 cores, but they can't do anything, because this memory thing has to be sorted out first. So basically, they can't really benefit in these cases. But we have different problems.
And there is a problem that is particularly nasty for CPUs to deal with. And that's processing a large amount of data with the same operation. So here, for instance, this is matrix multiplication. We take a value from here, and then we multiply it with this one here, and then we put it here, and so on and so forth. The only thing that changes is the positions; the operation, the multiplication, is the same. And these matrices could be gigantic. So this is kind of the case for machine learning and graphics. Because if you think about it, a matrix is nothing other than a very fancy list, a list that has two dimensions. But we could theoretically say it is a list. And then we have a large list. And what is a list? Oh, the screen is a list, right? Our pixel array is a list. So we have to process the list somehow with code. And that's where the GPU really shines, because it is optimized for exactly that kind of stuff. It goes, ooh, yes, give me an operation, and then give me these numbers, and hey, 1,000 cores. Here we go. Into battle we go. Here are your numbers, and here's the equation that you run on them. Done, awesome. Here we go again. So they're really, really good at that kind of stuff. That's why everyone wants to do GPU computing, for machine learning, for Bitcoin mining, and for 3D graphics, or graphics in general. So that's what they are good at. Let's have a look. Now, obviously, well, not obviously, but we do not have as many GPU cores as we have pixels, normally. We might have like 1,000 cores, but we have a million pixels, or 4K probably has a bazillion, gazillion pixels; I don't even know off the top of my head. But for convenience's sake, we have one core for one pixel here. So we have the white pixels on the right, and our GPU is ready. It's like, yeah, give me work. I'm like, all right, OK, here we go.
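The "same operation, different positions" point can be seen in a naive matrix multiply, sketched here on flat row-major lists (the same flat-list shape as the pixel arrays above). Every output cell runs an identical multiply-and-add, which is exactly why a GPU can hand each cell to a different core:

```javascript
// Sketch: naive n x n matrix multiplication on flat row-major lists.
// Every output cell runs the same multiply-and-add; only the positions
// change, so each cell could be computed by a different GPU core.
function matMul(a, b, n) {
  const out = new Float64Array(n * n);
  for (let row = 0; row < n; row++) {
    for (let col = 0; col < n; col++) {
      let sum = 0;
      for (let k = 0; k < n; k++) sum += a[row * n + k] * b[k * n + col];
      out[row * n + col] = sum; // same operation, different position
    }
  }
  return out;
}
```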
Take the x and y coordinate of your pixel and create this color value for your pixel, which is the x coordinate times 85 and the y coordinate times 85, which means we start with 0, 0 and go down to 3, 3 here. So we should get a gradient. And then they all at once go, all right, off we go. And we create this graphic. And this is entirely dependent on what kind of code we are running here. And this is a shader. That's nothing fancy, not like super complicated weird shit. That's shit as a technical term; I'm not swearing on stage, just to make that clear. So instead of being something very weird and a new concept, what it really is, is a function. It is a function that takes a bunch of inputs and produces an output that has three values: red, green, and blue. Actually, it has four values, but bear with me here. It has the alpha value as well, and then the graphics card does the magic of combining it. So when we do compositing, what we do is we have a bunch of rectangular lists of numbers, and we combine them together to form the number that is going to be on screen at the end. And this number is a color. All right, OK. And it does so by using a program that is called a shader. And a shader is just a function, well, "just", is a function that runs on the GPU, which is pretty freaking cool. Oh my god, I didn't know we could run stuff on the GPU in the browser. That's what WebGL really is about. But the browser does that internally as well. So basically, we get to write this code if we are doing WebGL, for instance. Or we use code that is already shipping in our browsers. Browsers have this kind of stuff built in for things like compositing, composite-only animations, filters, and blend modes. Here are blend modes, by the way. CSS blend modes allow you to define how you want to combine different images together. And all it is, is a different equation. That's the entire difference.
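The per-pixel idea can be mimicked in plain JS. To be clear about the assumptions: real fragment shaders are written in GLSL and run in parallel on GPU cores, while this sequential loop only simulates the model, each pixel's colour being a pure function of its own coordinates.

```javascript
// Simulating the 4x4 gradient "shader" from the slide in plain JS: each
// pixel's colour is a pure function of its own (x, y) coordinate, so all
// pixels could be computed in parallel. Real shaders are GLSL on the
// GPU; this loop only mimics the idea.
function runFragmentShader(width, height, shade) {
  const buf = new Uint8ClampedArray(width * height * 4);
  for (let y = 0; y < height; y++) {
    for (let x = 0; x < width; x++) {
      buf.set(shade(x, y), (y * width + x) * 4);
    }
  }
  return buf;
}

// The gradient from the talk: x * 85 and y * 85, so (3, 3) reaches 255.
const gradient = runFragmentShader(4, 4, (x, y) => [x * 85, y * 85, 0, 255]);
```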
The difference between blend modes is the difference in the equation you're using, the difference in the shader that you're using to produce colors on the screen. Compositing can do certain things; it is limited in what it can do. As I said, the GPU cores are not as fancy as the CPU cores, so they are limited, but they can do a few things. They can move things around. They can figure out where to put something, because we can just add to the numbers that say where we are putting them. That's not a problem. This is called translate in computer graphics terms. You might have seen that as a CSS property; this is exactly what it is. It just moves things around. It can rotate things, because, as it turns out, rotation is just multiplying the values, the x and y, and maybe z values, with a matrix. And we have established that the GPU is really good at matrix multiplications, because it's the same operation on a bunch of numbers. It can also do that for scale, because it's the same thing again: a matrix multiplication. Actually, it's a multiplication with a single value. And it can do blending and filter-ish things. The different equations are built into the browser, but not all browsers support this, unfortunately. And a few things and a few filters actually cannot be composited. There is no shader for them yet, or they can't be a shader because of the nature of the filter. And then they actually have to paint the pixels. And that's a shame, because painting pixels takes longer; we have to write all these values into memory, and that takes ages until it's done. Whereas compositing is like: hey GPU cores, here's a shader. Awesome, we're done, whoop. So yeah, there's that. And this is a real shader, written in GLSL, the shading language from OpenGL. Let's break it down real quick. So first, we have two variables that come in. And the semicolon is not optional, just saying.
And we have data types here. The vec2 is basically an array with two items in it: an x and y coordinate. And then we have an image, which is a list of things. It's called a sampler2D, but it's basically a list of colors from an image, the different pixels. And then here, we take the pixel value from the image at a certain position, which gives us a vec4. That's R, G, B, and A. We ignore A. And then we calculate: basically, we add them together and divide by three. So if we, for instance, have 255, 0, 255, these together make 510, and divided by three that gives 170, the grayscale value of that color. And then we use this value, this grayscale, this intensity between black and white, for all three, for red, green, and blue, in the output. And then we put 1.0, which is like full on, as the alpha value. So this is a grayscale shader. This is what it does. This is how you do grayscale on the GPU. You don't have to write loops and stuff. I mean, you can do that in Canvas 2D with a for loop that goes over all the values in the image and then puts the image back, but it's much, much slower. This is how you can use the GPU to do exactly that. Quick interlude. I said something about filters and blend modes. So have a look at this. This is a CSS filter, a CSS blur filter. I have enabled paint rect flashing. So if you see a green flash, that means that the pixels have been redrawn, which is slower. If you do not see a flash, that means it is just compositing; it's using a shader for this. Awesome. The CSS blur filter works great. No flash of sadness. Amazing. Now, the SVG filter has the same code, but you see that? The green flash. That's sad. For some reason, with the same filter on SVG, using an SVG filter rather than a CSS filter, we unfortunately get this repaint, which means everything that has flashed green has to be redrawn on screen, rather than just composited.
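The grayscale shader just described can be mimicked in JS for a single pixel. This is only a sketch of the arithmetic; the real thing is a few lines of GLSL running once per pixel on GPU cores, and `grayscalePixel` is a made-up name for illustration.

```javascript
// The grayscale shader's arithmetic, mimicked in JS for one RGB pixel:
// average the three channels and use that intensity for all three lamps.
// (The real version is GLSL running per pixel on the GPU.)
function grayscalePixel([r, g, b]) {
  const v = Math.round((r + g + b) / 3);
  return [v, v, v, 255]; // alpha 1.0, i.e. full on
}

// 255 + 0 + 255 = 510, divided by three gives 170.
const gray = grayscalePixel([255, 0, 255]);
```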
So if we look at that, here you see it very clearly. This one's not very good. Sarah Drasner invites all of us to say, hey, how about hardware acceleration for SVG, to fix exactly that? I'm actually not even sure if this is a bug in the DevTools or if it's actually a bug in SVG filters. I don't know. Very good question, good that you asked. I still highly recommend going to your browser vendor of choice and going, hey, I noticed this thing here. Can you maybe accelerate it? Make it fast and nice and composite-only. This particular filter might actually not be compositable; that's why I'm not sure if this is actually a bug and the tooling is lying to me. But other filters expose the same problem. So all the filters in SVG have this problem. Anyway, if someone from a browser vendor sees this: if you're fixing it, you get beer from me. So there we go. Awesome music video from Teddybears, by the way. The song, yeah, depends on your taste. Right, and then there's JavaScript. We have talked about HTML and CSS, but we haven't really talked about JavaScript yet. So what JavaScript can do is, if I have a perfectly fine website here, a headline, Hello World, and an image below it, it can come in and go: actually, how about we make that headline inline instead of block-level? And then the browser goes, fine, OK. I repaint the rectangle, filling the pixels back in how they should be, I remove the pixels where the image was below, and I move the image up. And that's a bit of work. Things have moved around, and we had to repaint things as well. So if we look at what the browser has to do: the JavaScript kicks in. It changes the style object; it changes the CSSOM under the hood. And then it has to do the layout again, because it has to figure out, oh, wait a minute, we're taking less space here, so there's space available over here. So we move the image over there. Hurrah, we have that settled. And then it has to paint in all these pixels that have changed.
And then it has to composite things together. So it's a lot of work, actually. And especially the layout and paint stages are expensive. But what if our JavaScript does something slightly different? What if our JavaScript says, hey, can we make that text a different color, rebeccapurple in this case? And the browser goes, sure, in that case, I just repaint the pixels that I have to change, and we're good. That's a bit better, because we have actually skipped a step in the process. We didn't have to move things around. So we've skipped this entire expensive layout process and just painted and composited. But there are other examples, like this. This menu just slides in without moving anything on the page. It can be drawn once somewhere and then just moved around. And we have established: wait a minute, moving things around, translating things, that's something that we can composite. And the filter, in this case, is composited as well. So we do not have to draw, and we do not have to lay out. We just composite things: we move this around in the shader and we're good. And if you think about it, what we want is to hit this last one as often as possible. Because if we hit this one, that means that we are doing less work. And as Paul Lewis has said, performance is the art of avoiding work. Do not try to say that in a performance review at your job, but in browsers, that is true. I'm not sure bosses like that, really. I certainly do, because I'm like, yeah, if you do less work, then you have capacity for more awesome things. But hey, not everyone understands that, unfortunately. All right. Now that you have seen that JavaScript can interact with our rendering process, and with the graphics card and shaders and stuff, let's do a little quiz. So here we have a CSS element that is translated nothing, nowhere, on the x-axis. And after two seconds, we change it by translating it 200 pixels to the right. Do you think this paints? No?
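The three paths just described can be sketched as a lookup. A strong caveat: which stages actually run depends on the browser, the engine version, and the element (the CSS Triggers project collected a per-engine table), so the property lists here are illustrative guesses, not an authoritative mapping.

```javascript
// Rough, illustrative sketch of the three pipeline paths above. The
// real answer varies per browser and element; these property sets are
// examples only, not an exhaustive or authoritative table.
const LAYOUT_PROPS = new Set(["display", "width", "height", "fontSize"]);
const PAINT_PROPS = new Set(["color", "backgroundColor", "boxShadow"]);

function stagesFor(prop) {
  if (LAYOUT_PROPS.has(prop)) return ["layout", "paint", "composite"];
  if (PAINT_PROPS.has(prop)) return ["paint", "composite"];
  return ["composite"]; // e.g. a transform on an element with its own layer
}
```

So in this sketch, `el.style.display = "inline"` takes the expensive full path, `el.style.color = "rebeccapurple"` skips layout, and a transform on its own layer takes the cheap composite-only path.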
Who here thinks it paints? Oh, wow, OK, that's, wow, all right. Do you know what the green flash means? It painted. It does. And when I saw that, I was like: I go on stage and I tell people that compositing is for moving things around, that then you just composite. And yet I get this. But there's a good reason for it. I didn't lie to you; I just left out an important piece of information. The important piece of information is that to do this compositing, remember, we have a bunch of images that we combine, right? So that would require multiple images for the page. To do that with any element, it would require having a separate image for each of these elements. And translateX is a little older, actually. So the browsers are like, yeah, you know, you might use it to move something around once. That's a small operation, really; you're not doing it very often. So we kind of take it as a hint to not make an image, because images, or layers, have a cost, right? We have to keep this additional image somewhere in memory, so the memory consumption rises. And not only that: we also have another array that keeps track of all the images, so that one grows as well, and then we have to deal with it. So it's a lot of cycles lost in rendering all the things. And what if you have like 5,000 DOM nodes? You have 5,000 images. Holy crap. I hope you have a few gigabytes of RAM left. So basically, the rendering engine, the browser, takes hints for when it makes sense to create a layer. And one of these hints is if it's a canvas or video element: we are redrawing those all the time, so if we want to do anything with them, like moving them around, it makes sense to put them in a separate layer. So this creates a layer. Also 3D transforms, because they are a little newer and you pop things out of the layout concepts. And if you go 3D, you probably go bananas anyway.
So they're like, yeah, you know what? This might be more complicated. And actually, 3D transformations run best in the shaders on the graphics card, because you have to do perspective transformation and all that kind of stuff, and the best way to do that is to put it in a shader. So we pop it into a separate layer and run that on the GPU. We accelerate this one and composite only. And then if you have an animation, no matter what kind of transformation it is, if it's an animation that can be mapped to one of the things that compositing can do, like moving things around, then that is also popped into a separate layer and composited. But only if it's that. If you, I don't know, change color or something, no, that can't be composited, so it's still painting. But if it's something that can be composite-only, rotate, scale, move, then these things are also popped into a layer. And then there's the will-change property, which will not automatically force a new layer, but it will definitely signal to the browser: hey, you should double-check, because I will change this thing. So if it makes sense to pop this out into a separate layer, then please do. Support for that one is improving. It is not great just yet, but I think Firefox and Chrome do it; Safari and Edge, I'm not exactly sure right now. So let's play again. What about this one? Translate3d, changing after two seconds. Will it paint? Who here thinks it will not paint? Okay, few players left in the game. All right, let's see. The 3D transform does not paint. Very good, very good. I have another round. Last one, I promise. I'll stop asking questions in a second. I know it's early in the morning. So here we have a keyframe animation that has a translateX, so it's a 2D transform in an animation that is playing. Will it paint? Who thinks it will paint? Who thinks it doesn't? All right, you're right.
It doesn't, because it is a keyframe animation that is composite-only. Wonderful. All right, last but not least, and I'm running over time, awesome: let's have a look at WebGL and Canvas. I know the image doesn't really fit, but I found it cute, so go away. So I thought, what about a little challenge comparing Canvas 2D and WebGL to each other? The challenge is: draw the same image as often as you can, at a different random location and a different size each time. So resize the image and draw it somewhere on the screen, and let's see how often we can do that without serious performance problems. Here's more or less the main code; there's a bit of setup code, about 10 lines, missing from this one. Here's the main code for Canvas 2D: we pick a random position, x and y, and a random size, and then we draw the image using the Canvas 2D API at this position, centered on it, with the size given by the parameters. Okay, let's see what WebGL looks like. And for this I have to warn you: you're not going to be able to read that code, but it illustrates a characteristic of WebGL. Yeah, so this is the same thing in WebGL, leaving out the entire shader code and the setup code. Oh, and I have the setup code as well. You know, it's a bit more, I would say. If you compare this to this, it's probably like five lines more, right? So yeah, moving swiftly on, Canvas was a bit easier there in terms of the amount of code we had to write. But how does it look in terms of performance? Here we have the frames per second on the y axis and the number of times we draw the image on the x axis. Keep in mind: the green line is what we want to hit. The orange and red lines are not very good; the red line in particular is the minimum we can do to make it look more or less like motion. So basically, from 500 elements on, we are a slideshow. Great. Now, how does WebGL turn up?
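The Canvas 2D side of the challenge really does boil down to a few lines. Here's a sketch of it (the function names and parameters are mine, not the slide's actual code):

```javascript
// One benchmark step: draw the image at a random position and random size,
// centered on that position, using the Canvas 2D drawImage API.
function drawRandomly(ctx, img, canvasWidth, canvasHeight, maxSize) {
  const x = Math.random() * canvasWidth;
  const y = Math.random() * canvasHeight;
  const size = Math.random() * maxSize;
  // drawImage(image, dx, dy, dWidth, dHeight); offsetting by half the size
  // centers the image on (x, y).
  ctx.drawImage(img, x - size / 2, y - size / 2, size, size);
}

// The benchmark then simply repeats this n times per frame.
function drawFrame(ctx, img, n, width, height, maxSize) {
  ctx.clearRect(0, 0, width, height);
  for (let i = 0; i < n; i++) {
    drawRandomly(ctx, img, width, height, maxSize);
  }
}
```

With n around 500, this loop is exactly where the Canvas 2D chart falls off a cliff.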
Oh, all right. Slightly different performance characteristics, I would say. And you can turn on hardware acceleration for Canvas 2D as well, which means running everything in shaders that can run in shaders, and it doesn't change much. So the numbers are higher: we start with 10,000 objects, and fair enough, I had to cap the graph somewhere. 10,000 objects, fine. At 25,000 objects even WebGL gets a little slower, and that was on a MacBook, holy moly. But I mean, we are a slideshow from basically here on with Canvas 2D. So even hardware acceleration on Canvas 2D does not help us much, because we're not getting full access to graphics memory and the graphics pipeline; we're getting very limited use of what the shader can do. Okay. So that evens it out, which goes to say that neither one is intrinsically better, except that the one real downside of WebGL in this comparison is the amount of code you have to write. But let's look at that. Canvas had much less code to write, and it was relatively easy to understand. WebGL comes with a bunch of lingo and a bunch of things we have to set up. But WebGL has a lot of libraries to take that away from you. If you do games, for instance, Phaser is using WebGL in the background without exposing it to you, so it's a very high-level abstraction. Three.js is a high-level abstraction, as we've seen yesterday. A-Frame is an even higher-level abstraction on top of that. StackGL, Babylon.js, all of these exist and take away the pain of writing all this code. We write much nicer, denser code with better conceptual abstractions. For instance, with Three.js you make a movie: you have a scene, you have a camera, you have objects in the scene. There we go, none of the weird stuff that happened over there in the raw WebGL code. However, Canvas does not give us access to write our own shaders.
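For reference, charts like these come from simply counting frames. A minimal way to derive FPS from the timestamps that requestAnimationFrame hands you (the helper name is mine, not the talk's benchmark code):

```javascript
// Average frames per second from a list of frame timestamps in
// milliseconds, as delivered by requestAnimationFrame callbacks.
function averageFps(timestamps) {
  if (timestamps.length < 2) return 0;
  const elapsedMs = timestamps[timestamps.length - 1] - timestamps[0];
  const frames = timestamps.length - 1;
  return (frames / elapsedMs) * 1000;
}

// At 60 fps, frames arrive roughly every 16.7 ms:
averageFps([0, 16.7, 33.4, 50.1]); // just under 60
```

The green, orange, and red lines on the charts correspond to holding roughly 60, 30, and the bare-minimum frame rates with this kind of measurement.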
And for a lot of things that makes a lot of sense. For instance, the grayscale thing: if you have video coming in and you want to change the colors, invert the colors, all that kind of stuff, that's much faster in WebGL, because it gives you the ability to write shaders that do this really, really fast. Also, if you do edge detection or something like that, all of that works much, much faster and nicer in WebGL thanks to access to the GPU pipeline and shader code. Canvas usually has very limited performance. However, it works on pretty much every device, including a potato, probably, if the potato has a browser at least. But WebGL, as it has to deal with driver bugs and all that kind of stuff, might be deactivated on a really old machine or on Internet Explorer 10, it feels bad to say that on stage, and then it's not available. So should we do WebGL for all the things anyway? There were a few people who attempted that, throw all the DOM stuff away and use WebGL for everything. And I say no. It might be very fast if available, but it has so many downsides when you do everything in WebGL: you have no UI primitives, you have no event handling, you have to implement everything yourself, and it is not supported everywhere. And the beauty of the web that we sometimes forget is that it works pretty much everywhere, and to a certain degree it is a best-effort kind of thing: it does what it can on the device you give it. So use the right tool for the job. That might mean that if you want UI primitives, basic interactions, and accessible content, which is very important, then you should go for HTML and CSS, with a bit of JavaScript sprinkled in to make it more interactive where available. And then there's SVG, as we heard yesterday from Sarah. SVG is great if you have graphics that should respond to the size and environment they are in. SVG files are also very tiny, as they are basically just drawing instructions.
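The grayscale example shows nicely why shaders win: the per-pixel work is trivial, but it has to run for every pixel of every frame. Here is that per-pixel operation sketched in plain JavaScript, assuming the standard Rec. 709 luminance weights (the weights and function names are my illustration, not from the talk); a fragment shader applies the same arithmetic, but to all pixels in parallel on the GPU:

```javascript
// Grayscale value of one RGB pixel using Rec. 709 luminance weights,
// the same math a grayscale fragment shader would apply per pixel.
function toGray(r, g, b) {
  return 0.2126 * r + 0.7152 * g + 0.0722 * b;
}

// On the CPU you loop over a flat RGBA array (ImageData-style)...
function grayscale(rgba) {
  const out = new Uint8ClampedArray(rgba.length);
  for (let i = 0; i < rgba.length; i += 4) {
    const y = toGray(rgba[i], rgba[i + 1], rgba[i + 2]);
    out[i] = out[i + 1] = out[i + 2] = y;
    out[i + 3] = rgba[i + 3]; // keep alpha
  }
  return out;
}
// ...while WebGL evaluates toGray for every pixel at once on the GPU.
```

For a 1080p video frame, that loop runs about two million times per frame on the CPU, which is exactly the kind of workload the GPU pipeline exists for.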
And they can be hardware accelerated if the browsers make it happen. In their defense, that is very hard to do; I'm not saying they are lazy bums, they are working hard on it, but it is very hard, unfortunately. And then you can use Canvas 2D, though I wouldn't use it for super complex stuff. If you do very simple drawing operations, or just need to stream a video into a canvas to resize it or something like that, then it's actually pretty good. But for everything else, and that includes more complex 2D graphics that are interactive and do magical things, you probably want to go for WebGL, as that gives you full access to the rendering pipeline, at the cost of having to deal with the full rendering pipeline, or a library that abstracts it away. And with that, I'm done. Slides are online. I'm around. Ask me for things or take a selfie with me. Thank you very much. Enjoy the rest of the day.