OK, so we're going to be talking about rendering performance or, as I like to think of it, the war on purple and green. I'm sure you've engaged in that yourselves from time to time. On our panel today, we have Josh Peek from GitHub and Ariya Hidayat from Sencha. We have our opening speaker, Jonathan Klein from Etsy. We have Paul Lewis from Google. And we have Eli Fidler from BlackBerry. So let's just get going with the presentation. Are you ready to go? Can we get the presentation up, Mike? Hey, that's a lot better. Thanks. Most monitors and devices we have today are rendering at about 60 frames per second. So when we talk about rendering performance, we're really talking about trying to get your application to render at 60 frames per second, with all animations, with scrolling, et cetera. If you do the math on this, it means that we have approximately 16.6 milliseconds to get a paint event done. So anytime you have a paint event that's taking longer than that to render on your page, you're going to get jank on the page, and it's going to be slow. Luckily, we have tools today that make it pretty easy to see what's going on. This is an example of Chrome DevTools. All you do is turn on the continuous page repainting mode, and in the upper right section of the page you'll get a nice meter that tells you exactly how long the paints are taking. This is on the Edge Conf website. You can see here 2.7 milliseconds for this paint event. And then there's a meter on the right there that's varying between 2.6 and 4.6. These are very, very fast paints. If our budget is 16.6 milliseconds, then 2.7 is well within that budget. And as you scroll the page, this meter will adjust depending on how long the paints are taking. So anytime you see this meter spike above 16.6 milliseconds, you're going to get that jank that I talked about. And jank is something that I'm sure we've all seen. It's basically when you're scrolling a page and it hangs, or an animation is happening and you can't scroll.
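To make the budget arithmetic concrete, here's a minimal sketch (the `watchFrames` helper and its names are my own, not from the talk) of the 60 fps budget and a rough long-frame detector built on requestAnimationFrame, with a setTimeout fallback so it also runs outside a browser:

```javascript
// At 60 frames per second, the per-frame budget is 1000 / 60 ms.
const FPS_TARGET = 60;
const frameBudgetMs = 1000 / FPS_TARGET; // ≈ 16.67 ms

// Fallback so the sketch runs outside a browser; real pages have rAF.
const raf = typeof requestAnimationFrame === 'function'
  ? requestAnimationFrame
  : (cb) => setTimeout(() => cb(Date.now()), Math.round(frameBudgetMs));

// Call onLongFrame(delta) whenever two rAF ticks are further apart
// than the budget comfortably allows -- a frame that will feel like jank.
function watchFrames(onLongFrame, frames = 60) {
  let last = null;
  function tick(now) {
    if (last !== null && now - last > frameBudgetMs * 1.5) {
      onLongFrame(now - last);
    }
    last = now;
    if (--frames > 0) raf(tick);
  }
  raf(tick);
}
```

In a page you'd call something like `watchFrames((ms) => console.warn('long frame:', ms))` while scrolling, as a crude stand-in for the DevTools paint meter.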
Anytime the page performance drops below 60 frames per second. And again, this happens pretty widely on the web today, and it's a pretty big problem. And that's the term that's been coined for it. What causes jank, now that we know what it is? Well, this slide I pulled directly from the Jank Busters talk at Google I/O earlier this year, and I think it gives a good overview of the main things that are important to think about. These things really fall into two categories. First is unnecessary paints. This is when, maybe, your paints are fast, but you have too many of them. You try to jam hundreds of paints into a single scroll event, and that's just not going to perform well, no matter what you're doing. The other category is long paints. These are cases where you might have a single paint event, but it takes more than your budget of 16 milliseconds to render. And there are some examples here of different event handlers that can cause that problem, CSS issues, and just visually complex pages. Why do we care about jank? Why do we care about rendering performance in general? People have talked a lot historically about page load performance, but this idea of rendering performance is fairly new. During Edge London earlier this year, we heard Shane O'Sullivan say that Facebook artificially lowered the frame rate on their iOS and Android apps from 60 frames per second down to 30 frames per second, and engagement collapsed, according to him. On a native app, it's pretty easy to get 60 frames per second. If you get down to 30, that's pretty bad. So they artificially did this to, he said, a small percentage of users, maybe a few dozen million users. And it caused major problems. At Etsy, we did a similar test in the sense that we started prefetching JavaScript on our search results page. When the JavaScript came down the pipe and was actually executed by the browser, we got janky behavior.
So essentially, you'd see in the Chrome console that the JavaScript came in, and then the page performance would degrade. You couldn't scroll the page, and it was really nasty. We were monitoring all the business metrics on this page while we were testing this change out. And as soon as we rolled it out, the business metrics got a lot worse across the board. So we rolled it back, and we were unable to make that change with the current implementation. This is all to say that right now, in many cases, rendering performance is more important than full page load performance. If your page is fully loaded, you might think, OK, I'm done, it happened quickly, that's fine. But if the user can't scroll and interact with the page, and your animations are slow, it's going to really torpedo engagement. How do we fix it? Well, that previous slide I put up has some good advice. Basically, we just use the tools that exist today to hunt down these painful repaints or unnecessary repaints and then fix them. Again, these things come from visually complex elements. But luckily, the designers are telling us today that flat is cool. So it's very easy to have a nice site and a fast site. All you do is go flat: no gradients, no background images, nothing, just flat colors, and you're good to go on all fronts. But seriously, what are some real techniques we can talk about today that make an impact on rendering performance? The first is using requestAnimationFrame instead of setTimeout. Historically, if you wanted to execute something frequently, but not as quickly as possible, you used setTimeout in JavaScript to delay that event a given number of milliseconds. And people would do things like divide 1,000 milliseconds by 60 to try to get 16.6-millisecond execution intervals. But going forward, we'd rather have people use requestAnimationFrame. This is a native browser API that can optimize these rendering events.
It can do things like stop firing animation callbacks when the tab is hidden. It can optimize battery usage on mobile devices. And since it's a native browser API, it's just going to be more efficient at doing animations. Support across browsers is pretty good, certainly in modern browsers. You can shim it for older versions of IE, but it's well supported across all modern versions of browsers. Another thing to point out is that a consistent frame rate is better than a higher but variable frame rate. So if you can get your page to render consistently at 40 frames per second the entire time, that's better than having it render at 60 sometimes, but drop down to 30 as soon as the user scrolls or an animation fires, and bounce back and forth. That's a terrible experience for end users. So you want to make sure that the page is rendering in a consistent fashion. Another technique is to paint less. We talked about how having too many paints can be an issue. A lot of the time this comes down to just batching your paints together and trying to do these large updates to the screen fewer times. So one slightly larger paint that's still within your budget of 16 milliseconds, rather than tons of paints that execute on every scroll event or every hover event, et cetera. You can also consider the translateZ hack; we have some experts on the panel here today who can talk more about that. But essentially, when you add translateZ(0) to an element in CSS, it doesn't actually animate that element, but it does move it to a different compositing layer, which can be very useful for something that's being painted very often. So it'll get put onto the GPU, and the GPU can optimize that. Speaking of the GPU, you want to make sure you're optimizing how often you're uploading textures to the GPU. The GPU is extremely good at manipulating textures that have been shipped to it.
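As a sketch of "paint less" by batching, here's one way (hypothetical helper names, not from the talk) to coalesce a burst of DOM writes into a single requestAnimationFrame callback, so many scroll or hover events produce one update per frame instead of many:

```javascript
// Queue style/DOM writes and flush them all in one animation frame.
// `raf` is injectable so the batcher can be driven in tests; in a
// browser you would pass in requestAnimationFrame.
function createWriteBatcher(raf) {
  const pending = [];
  let scheduled = false;
  function flush() {
    scheduled = false;
    // All queued mutations land together, in one frame -> one paint.
    pending.splice(0).forEach((write) => write());
  }
  return function scheduleWrite(write) {
    pending.push(write);
    if (!scheduled) {
      scheduled = true; // only one frame callback per burst of writes
      raf(flush);
    }
  };
}

// Browser usage (sketch):
// const scheduleWrite = createWriteBatcher((cb) => requestAnimationFrame(cb));
// onScroll -> scheduleWrite(() => { el.style.transform = `translateY(${y}px)`; });
```

The design choice here is simply that no matter how many events fire between frames, only one flush runs per frame, which is the batching the panel is describing.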
But if you saturate the bus between the CPU and the GPU by trying to push textures back and forth, you're going to have problems. So this really comes down to understanding when you're forcing stuff to go to the GPU, and trying to make sure you're not doing that too much, but doing it enough to leverage it when it makes sense. And then finally, this comes from a post that was written by Paul Lewis, one of our panelists over here: don't guess it, test it. It's really hard to give you hard and fast rules for rendering performance on the web today, because the problems are so specific to a site's characteristics. It can be a single animation that's causing your problems. It can be a single CSS rule in some cases. So it really comes down to using the tools to find out where your page is slow. If you want more resources, there's a great site put together by some folks at Google, jankfree.org. It aggregates slides, videos, and articles about this topic, and there's a lot of excellent information on there. And with that, I'm going to turn it over to the panel. Are you going to switch? There you are. We're going to try and get up the contribution screen here. Beautiful. OK, so let's go to our first question, from Pete Miller. This one's an anonymous question: if there were a YSlow for rendering performance, what would its tests be? What changes would be easiest to implement, and which would have the biggest impact? Who wants to take that one? Paul, do you have an idea for that one? I think we covered this in the introduction talk. A YSlow for rendering performance is, I won't go so far as to say useless, but it's going to be very difficult. Because what are we looking for? We're looking for layout problems. What triggers the layout problem? Is it that you clicked this button up here in the top corner? Is it that you were scrolling, and we did something in parallel, and that caused the layout problem?
You start to get into areas very specific to your application where your problems are going to show up. So having an automated testing framework is going to be tricky. That's not to say we shouldn't try it, but it's not always going to be the easiest thing to do. So something that you run as a kind of YSlow, where it goes, hey, you did this, and you did this. We should look at it, but I'm not at this point sure how broadly effective it would be. Yeah, I think it's really hard to automate that. But because YSlow is partly used as an on-demand tool, that's kind of what the DevTools provide for you today. If you're using YSlow in the sense that you just pull up a page and run it manually through the Chrome extension, that's what you can do today with the existing tooling: see the rendering performance for your given site. So we sort of have something similar. It's just very, very difficult to automate, as Paul was saying. So one thing that you can do is apply specific restrictions in your project. For example, we mentioned using requestAnimationFrame instead of setTimeout. So you can put a check in your CI system to prevent somebody from accidentally introducing a use of setTimeout. If a new use of setTimeout appears in some of the JavaScript files, that should trigger a warning, because, well, the project uses requestAnimationFrame by default, for example. Yeah, I guess the YSlow stuff normally runs on page load. But obviously, all these rendering performance problems are something that's happening as your app is running. So page load isn't really the place to profile that. So it's basically what we already have now, which is the timeline view showing all these problems. And Chrome does a pretty good job. I like the new little warning icon you get when you trigger a layout in the timeline. So that's a pretty good situation right now. Yeah, and there are also other things.
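A CI guard like the one Ariya describes could be as simple as this sketch (the `raf-exempt` escape-hatch comment is my invention, not an established convention):

```javascript
// Scan a JavaScript source string for setTimeout calls so the build
// can warn when new ones appear in a codebase that has standardized
// on requestAnimationFrame for animation work.
function findSetTimeoutUses(source) {
  const offenders = [];
  source.split('\n').forEach((line, index) => {
    const isCall = /\bsetTimeout\s*\(/.test(line);
    const isExempt = /\/\/\s*raf-exempt/.test(line); // deliberate uses opt out
    if (isCall && !isExempt) {
      offenders.push({ line: index + 1, text: line.trim() });
    }
  });
  return offenders;
}
```

A real check would run over every file changed in the commit and fail the build, or just warn, whenever `offenders` is non-empty.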
There are things like Telemetry, which Adobe have been using brilliantly for Topcoat. So when they check in code, Telemetry, a Python-based framework that the Chrome engineers use, runs a bunch of tests against pages, things like smoothness of scrolling. And that's definitely something I'm interested in looking at a little bit more, because there may be things that we should be looking at that give us at least an insight into how the change we made in our code affected rendering performance. And it may be that there's a certain amount of scripting required from developers to set this up in a meaningful way for their particular project. But that might be something that we wanna take a look at. And if you're interested, give me a shout after this, because I'm actually really interested to find people who want to try it. So bust out Telemetry and give it a go. I think also, I mean, the question is really, what are the magic bullets? I do this on my site and it gets fast. And there's enough going on in rendering performance that in most cases there isn't a hard and fast rule. Things are very different on different platforms. So in addition to being specific to your site, it's specific to the device that you're running on, and it's specific to the version of the browser that you're running on that device. And so testing is really the best way, and testing on as many devices as possible is the best way, which is not, I'm sure, what everybody who builds content wants to hear. But of course, there are some things that are good and that I think people have started to do a lot more. Being declarative is usually better than being imperative. So if you're going to do an animation, use CSS animations or CSS transitions; browsers are getting much better at optimizing things that are left up to the browser. And historically, you haven't been able to do that because performance hasn't been good, but now it's a lot better.
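To illustrate "declarative over imperative": instead of a setTimeout loop nudging `style.left`, describe the motion in CSS and toggle a class from JavaScript. This is a hedged sketch; the class names and the 240px slide are made up for the example:

```javascript
// The motion lives in CSS, where the browser is free to optimize it
// (and, on many platforms, run it off the main thread).
const css = `
  .panel      { transform: translateX(0); transition: transform 300ms ease-out; }
  .panel.open { transform: translateX(240px); }
`;

// The imperative side shrinks to a single class toggle. In a browser
// you would also inject `css` via a <style> element or a stylesheet.
function openPanel(el) {
  el.classList.add('open');
}
```

Note the transition animates `transform` rather than `left`, which also keeps it on the panel's short list of cheap-to-animate properties.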
Well, think of YSlow as not just the specific implementation details. When I think about other people that I work with, at multiple levels: are there any big, obvious things? Part of YSlow's power is just the marketing message that it delivered. We have these things, oh, OK, we should definitely all be gzipping everything. It consolidated things that had been known. So I think CSS transitions, as opposed to JavaScript animations, is an obvious one that most people are getting now. My personal one that I've noticed a lot is people really like one-pixel-offset, one-pixel-blur drop shadows on text. That's a really expensive operation; don't do that. All right, I think we'll move on to our next question, from Jake Archibald. All right, so tricks like translateZ, putting that on an element, is quite often a silver bullet in terms of performance. But could that be an anti-pattern tomorrow? Like when we used to tell people to concatenate strings using array.join: we now know that just doing string plus string is faster. Is translateZ going to be the same thing? Possibly. There are three steps, I think, to most performance problems. First of all, there's the don't-do-it stage, where we just blindly do not do this thing, it will be bad. Then there's the next step where we go, ah, now we can make it less bad if you were to do this thing, which is where I kind of categorize the translateZ hack. It's something we'd rather you didn't have to do, but sometimes you've got to step in and do it just to kind of hint things up a little bit. Then there's the final step, which is, hey, this is not a problem anymore. And then there's the other thing, where you go, oh, actually, all of a sudden it's an anti-pattern. So the array.concat versus array.join versus string concatenation thing is the one that's often mentioned. I think the only thing you can do there is profile.
Again, we've said it before, but if you're profiling and it's part of your build process, then when something changes in the implementation, you're going to see it. You're gonna go, whoa, something went really slow here. I wasn't expecting that. What was it? And you can figure out what it is that's actually triggering your problem. So yes, it may be a problem in the future. There's no crystal ball gazing as far as I can tell. But profile is the answer for me. Yeah, on that I would say that anytime you're doing something that's known to be a hack, and that's definitely a hack, right, that's probably not going to be something that we want to be doing forever. So anytime you feel like you're adding some code that's just not standards-compliant, like translateZ(0), which doesn't make a lot of sense from a future-proofing point of view, I think it makes sense to abstract that stuff away, because chances are it's going to change going forward. It's not part of a spec; it's just a workaround. One thing I really don't like, if I'm just being me for a second, is the fact that we don't give you a way to say that you want something on its own layer. I don't have a problem with the developer saying that. You know, there's a kind of a balancing act of should the developer have to worry about layers versus should the browser just take care of everything? And in a perfect world, yeah, the browser would maybe get everything right for you. It'd be able to guess your intent perfectly. But if it can't, then surely the best next step is not to provide a hack, but to provide a decent API that says, yeah, you know what, if you need this, call this, and this is what it'll do. And, you know, if you want to abuse it, OK, that's your call. You're the developer, and we trust you, and you're not an idiot. That's just generally how I feel about this kind of stuff. Yeah, it's definitely a leaky abstraction, of course.
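Jonathan's point about abstracting the hack away might look like this sketch: route every layer promotion through one helper, so that when the technique changes (or a real API arrives), there's a single place to update. The helper names are mine, not from the talk:

```javascript
// Centralize the compositing hint so the hack lives in exactly one place.
function promoteToLayer(el) {
  // Today's workaround: translateZ(0) nudges the element onto its own
  // compositing layer in engines that take the hint.
  el.style.transform = 'translateZ(0)';
}

function demoteFromLayer(el) {
  el.style.transform = '';
}
```

If the hack becomes an anti-pattern later, only these two functions need to change, not every call site scattered through the codebase.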
So I think if the layer compositing system is something that you really need to be concerned about as a developer, it should be an explicit opt-in, not magic, like using this magic voodoo property that you read about on Stack Overflow and don't really understand how it works. But having an explicit API for declaring layers is, I think, a good idea, if that's something we need to be concerned about. So translateZ(0) is obviously a hack to force compositing. And I think it's actually a really good declarative API if, right after you put that on, or at some point later in your page, you're about to animate the value of translateZ, or you're about to animate the value of translate3d. Then you are exactly saying, this is something I'm setting now because I'm gonna animate it later. And that's why it does what it does: so that you have your content promoted into a layer, so that when you animate it, the next frame is ready. And a number of properties are like that. Those are the properties right now that force compositing, and they're there specifically so that if you are going to animate them, you're primed and ready to go. And maybe there are more things that should be like that, but I'm very hesitant about saying that we should introduce a new property that says promote-to-layer, because promote-to-layer isn't something that web developers should care about. If you don't have that, then you have this thing where developers can't get good performing code. They can't get a good performing app. They don't have anything other than a hack. And so either the browser has to infer perfectly the intent of the developer and get it right, or we have the alternative, which is give them a way of saying, no, this is what I really mean.
And they're doing that through this sort of backdoor at the moment, which kind of makes me feel uncomfortable, because it does give rise to, hey, the implementation's changed, now your expectation needs to change, which is, I think, a slightly unfair position to put a developer in, but maybe that's just... It's kind of interesting to ask what the distinction is between translateZ and promote-to-layer in terms of actual developer experience. Can I jump in for a sec? I feel like I've seen developers animating top or left, setting something to position: fixed a lot, and then not realizing that that's gonna have different performance implications than animating something with translate or even margin. So I feel like if we wanted to let someone animate top, but give it the performance of something in its own compositing layer, then using some sort of layer-promoting API would make sense in that case, potentially. I mean, your specific example right now also does force a layer on most platforms: if you set something to fixed positioning and then animate top, it will. Just adding one more twist to the topic: we also have to remember that translateZ, or even a compositing layer, is just half of the equation. For example, if you do that, but you try to animate, say, blur radius or border radius, it doesn't really help, because there are only a couple of properties that you can apply to the GPU texture that can be executed by the GPU efficiently. So again, explicit or implicit, or half explicit, half implicit, web developers still need to know that there are only certain properties you can run on the layer itself. And generally, the way I always pitch it is: you're gonna promote to isolate a layer that's probably gonna get repainted, or you want it because you're gonna move it, scale it, fade it, or rotate it, I think. And filter. Yeah, or a filter. Paul actually helped me figure that out; he came up with those four, which is really helpful.
So if you're gonna do one of those things, or if you're gonna paint something that's gonna affect other elements, then it's a good time to isolate it. Other than that, like you say, it's probably not gonna offer you anything, and you're potentially creating more of a problem, because you're gonna have to upload a texture for that layer to the GPU. And if you're on a constrained device, like a mobile one, well, you're gonna pay that tax every time you change that layer. One of the interesting things, I think, about the web that's happening right now is we have sort of two, not even two, but whatever, a gradient of use cases, because the browser is now becoming this platform where you can do complex animations, you can do full gaming experiences, et cetera, but you can also build a pretty static website. So when we talk about developers shouldn't have to worry about this: well, if you're building a high-performance game in the browser, you probably should have to worry about that, right? But if you're building your blog, you shouldn't. So I think there needs to be a facility for both camps of people to be able to use these tools effectively, without having to worry too much if you're on the low end, but you should be able to access those APIs if you're working on something really complex. I think we'll move on to our next question, from Ben Holland. OK, so we have reasonably good tools and insight into rendering performance in a dev environment, but not data from the field. What events or metrics would we ideally like to be able to measure and report on for real users? The only thing we can do live in production is measure the window.performance APIs. Those are pretty great, but they only cover the initial load. You don't get anything after the fact. So being able to tap into the timeline from JavaScript and get those actual values would be great.
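The panel's short list from above — move, scale, fade, rotate, and filter — can be captured as a toy lookup. This is a deliberate simplification; which properties actually stay on the compositor varies by engine, version, and platform:

```javascript
// Properties that can typically be animated on an already-composited
// layer without repainting it every frame.
const COMPOSITOR_FRIENDLY = new Set([
  'transform', // covers translate, scale, and rotate
  'opacity',   // fade
  'filter',    // on many platforms, with caveats
]);

function cheapToAnimate(property) {
  return COMPOSITOR_FRIENDLY.has(property);
}
```

So `cheapToAnimate('opacity')` is true, while `cheapToAnimate('border-radius')` or `cheapToAnimate('top')` is false, matching Ariya's point that a composited layer doesn't help for properties the GPU can't apply to a texture.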
I mean, we do some stuff where we profile method times and then report those back over AJAX in our production app. But you can't get access to the timeline data live from JavaScript, so having that would be great. Yeah, I'd love to have real user data in production as well, but I'd also love to have even better synthetic tooling around it. By that I mean, right now there's no JavaScript API into the frame rate, for example, the number the Chrome DevTools gives you with that rendering meter. So it'd be great if you could just hook into that with JavaScript, because then you could write tools that would toggle different events, hide nodes while scrolling the page, and then read off those frame numbers and the paint numbers. And then you could very quickly, in an automated fashion, narrow down on the area that's causing a problem. So I think that's kind of the next step: how can we get code around that paint meter so we can easily find out where the problem areas are? I think that's interesting, but is that in a lab environment or is that in the wild? The reason I ask is because if you're talking about live, say frames per second, it's very difficult to separate signal from noise, because you don't know if it's contention at the OS level, a badly configured monitor, terrible GPU drivers, contention at the Chrome or browser level, or contention actually within the page and a bottleneck there. So a simple frames-per-second number probably wouldn't give you that much. But in a lab environment, where you can be pretty sure that there's nothing else running and so on, that seems to me to be a reasonable first step. Yeah, I was thinking in a lab, because right now, if you want to take a page and figure out why its rendering performance is bad, it takes a lot of work, right? So if you could build better tooling around that in a lab environment, it could be very fast to narrow down on where the problem is.
But yeah, you're right. It's not going to work very well in the real world today. I think we have a question from the room, from James Ide. Hi. So for some context, I feel like a lot of the performance characteristics of how browsers work, like a lot of the things in Jonathan's slides, are understood by a lot of web developers, and the dev tools have gotten great; even IE11's are really great, and I should check them out. But those techniques, or understanding how the browser works, are great if you're a one-man show and you're building your own web page. As soon as you start to have a large team working on your product, it's very easy for someone to introduce a regression. So I'm wondering, for example, at Facebook, we have a library called React, which helps avoid layout thrash when we render our web pages. And I'm wondering if there are, I'd say, organizationally scalable APIs that the browser vendors can provide to us. So concretely speaking, maybe asynchronous APIs to decode images, or asynchronous APIs to compute the dimensions of a DOM element, things that are synchronous and slow down the page right now. Are there these types of things that the DOM could provide to us that you think would be helpful? So I think that's actually one of the most interesting areas of active development in at least most of the popular web engines right now. Definitely WebKit and Blink are both working on, they call it different things, but some sort of incremental layout, where certain properties, when you request them from JavaScript, don't trigger a full relayout of the page. And I can't give you a list of what works right now or anything, because a lot of this is research code right now. But this is a very good example of where the browser can make things fast without you changing anything. So right now, there are a couple of properties that you ask for, like offsetWidth, that will trigger a full relayout of the entire page and don't technically have to.
But right now, the way that the engines work, that's what they do. So stay tuned; there are a lot of things there that are getting faster without web developers having to change things. In terms of adding new DOM APIs, we have standards bodies, and they accept public comments. And definitely, we've seen a lot of people providing that sort of feedback. We have another question, from Mike Petrovich. So, kind of to Jonathan's point about synthetic testing: I know Etsy does synthetic testing for initial page speed performance. But what if we did something like that using Selenium tests in a synthetic environment, exercising your app in a context-sensitive way, kind of how it's meant to work: going in, clicking around, 200 milliseconds later clicking this other element. And that's all affected by different rendering speeds and also JavaScript evaluation. So it's more of a real-world example than just straight-up how long it takes to render this. It doesn't work at a generalized level, there seems to be a consensus on that, but on a very contextual basis, what about using automated behavioral tools like that? Yeah. So we sort of have that today with WebPagetest, right? So Pat's here, and he built an awesome tool that allows you to do multi-page flows, see screenshots for every step of the way, look at waterfalls that show JavaScript execution and how long those things are taking, and even monitor CPU during that process. So that's, I think, the best proxy we have for it today. But again, one of the challenging things is that we can't currently get data on frame rate or paint events out of the browser through an API. So that's where you sort of run into that wall. You can get part of the way there, but you still can't calculate how long a paint takes, as far as I'm aware, unless you patch Chrome, maybe. I don't know.
Telemetry actually runs Chrome with a very specific flag, --enable-gpu-benchmarking, which enables a GPU benchmarking API. And that will give you some stuff. And in fact, yeah, that's the reason Telemetry can get the data out. The other thing you can do is connect to DevTools over the WebSocket, get it to run the thing, pull off the timeline data, and then check for paint records and so forth. So there are things there. They are not, I would say, that easily Grunt-taskable and easily inserted into your workflow today. And I think that's something I'm very interested in: whether we can actually make that a thing that's easy for developers to just include in their build steps. A good example of the use of Telemetry is the Adobe Topcoat project. I think they have a fantastic dashboard that shows how long it takes to render a button for every single revision of the code, the CSS. So if you accidentally change the CSS and it slows down the rendering of a button, you'll know it immediately. I think this is all using Chrome's Telemetry. All right. Next question, from Cheney Tsai. So the question is, how can we track the performance detriment that third-party scripts, such as ads, social buttons, or anything you want to add, have on rendering performance, and, I guess, steps to kind of mitigate that without being too hacky? Come back in two hours or whatever. I think: profile. I mean, you should know how your app is running. And then you should know what happens when you add that thing in, and see what the change is. If it's hideous, then you need to make a decision. I think operating blindly with your code is a very dangerous approach. For me, it is fairly simple: get used to the tools that are available. They are great tools. And just make sure you understand how the page runs before and after the third-party script, to know whether it was worth adding. Because, I mean, anything you add is going to have a tax, right?
It's just whether you're happy to pay the tax, right? That's really the question. I mean, we've talked a lot about painting. We find that a lot of the issues with badly written third-party code are not so much about painting, but about adding extra event handlers or doing busy work when other things could be happening. And the remote Web Inspector, or Chrome DevTools, or whatever's available on all the browsers today, is really good at showing: you had a scroll event, and somebody had an onscroll handler, and it ran for five seconds. And it came from Facebook. Facebook doesn't actually have a scroll handler that takes five seconds. So let's go on to the next one, from Ed Sodin. This was an anonymous question. What is the easiest thing browser vendors could do to make it easy for developers to optimize the performance of their sites? No, no, no. Isn't that just the YSlow question again? It's like, what are these magic things you can do? Yeah, I think more to that point, we talked about this in the last session, actually. Because browsers are now being put on watches and glasses and really low-end phones, it's hard to say what the browsers can do to just make our jobs easier, because devices keep getting lower powered. So I think we're going to keep seeing new environments for browsers to operate in. And it's hard to say, oh, well, the browser will just fix it all; we don't have to do any work. More tools. That's the only thing I can think of. More tools. But that's going to be a balance, because if you've ever used tracing: tracing is brilliant, but it's overwhelming at first when you hit that thing. You're like, what have you just done to my eyes? So it has to be balanced. But, for example, you can't actually drill down in Chrome DevTools today inside paint records. You know that you spent some time painting. You don't know what was painted.
We recently added details on image decode and resize so that you know which image was decoded and resized, which is invaluable. So we can continue to iterate and improve the tools so that you're in no doubt as to where your bottlenecks came from. But to make your life easier? You're the ones writing the code. I mean, I make browsers, so I need you to answer that question for me. One thing that I always do when I go to conferences like this is talk to web developers and ask: what are the tools that you need that would tell you things you can actually act on? Because when I ask what people need and they tell me, I want to know what's going on in the paint record, and then you say, oh, but that will be different tomorrow, or that will be different on a different device by an order of magnitude, which it often is, that becomes information that's very difficult for them to act on. There are some developers who deliberately target every single device out there with individualized code, but those developers are very rare; you need that kind of money and time to spend on it. So tell us what you need to know, and we will try to make that happen. Also, we try to just make everything fast. So I have this gut feeling that in the near future, the rendering performance tools, especially around painting, will fall into two categories. One is how to improve from 10 frames per second to 30. And the other is how to go from 40 to 60, or 58 to 60, frames per second. Because what you're looking for is that quick fix so that instead of 10, you run at 30 frames per second. I think we have a question. Just to say, one thing that you can do is tell us there is a problem. So, granted, we don't have best practices for exactly how to solve it. And I get the point about doing synthetic testing. But most people on the panel agreed that the profile of how many frames per second you get varies per device.
And of course, you can't simulate all these different situations. If we had some sort of real user monitoring, at least telling us, hey, there is a problem on, whatever, Android device X on this page frequently enough, then that would motivate me to go buy that device and run some more tests, just to sort of balance the different effects, even ahead of having the best practices for exactly how to fix my code to make it go away. Yeah, I think that's a great point. And something that brings to mind is that with the YSlow tool release and the web performance movement that's been going on for the past 10 years, we've had a lot of studies come out saying, look at the top 1,000 Alexa sites and how badly they do on these best practices. That hasn't happened yet, I think, with rendering performance. So I think it'd be useful to have somebody go out and look at popular sites and say, look, there are problems here on these devices, on these browsers, and just expose it on a wider basis so people are aware of the problem. Because in this community, it's obviously well known. But in the wider community, people don't think as much about rendering performance. Does anyone know if Google reduces the ranking of sites that are slow? I'm just saying. No idea. I'm just asking. I have no idea. All right, our next question comes from Dunstan Casten. In my experience, paint performance usually drops frames around garbage collection events and memory management. What responsibility should web developers take on for those two things, memory management and garbage collection? OK, so I guess the immediate thing is that GC causes jank because GC is almost guaranteed to push you over that frame budget of 16 milliseconds. Your only option is to try and avoid GC.
I think it depends on how big a problem this is for you: how much memory churn you actually have, how much time garbage collection actually takes, and how often it runs, which is going to be very specific to your application. There are things we have on HTML5 Rocks. We have an article about using static memory pools to recycle objects and so forth. You're avoiding object creation, which tends to be good for GC. But personally speaking, there have been people who have requested, can we have a garbage collection API? And I would be concerned about a browser including that, because that one could get really hairy very quickly. What would be the problems? Your page is not the only thing that's running in the browser, so the browser is uniquely positioned to know when garbage should be collected. And if you were to force the issue, you may basically make the wrong call. This is one of those times where, as I say, the browser is the one that has all the information and knows when it's the right time to pick up garbage. Ideally, you want it to do it at the best point for every application that's running. But that's the nature of it. But what if the GC API is just a strong hint for the browser to do garbage collection when it's possible? So if you do that, then what happens if it ignores you? Or we get different browsers ignoring you at different times. Is it any better than the browser just making the call itself? But that's the same situation as the translateZ layer hacks, right? Yeah, exactly. And so some of this stuff is just real world. Yeah, we're just having to do this right now. We wish we didn't, because we just want to get stuff that runs well. I mean, we've gotten complaints mostly from sites that do permanent animation all the time that GCs are a problem. And that's things like games. And if you're writing something that is truly interrupted by GCs on a regular basis, then you just have to create less garbage.
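The static-memory-pool idea mentioned above can be sketched in a few lines. This is an illustrative pool, not the HTML5 Rocks article's exact code: objects are recycled instead of allocated per frame, so the steady state creates no garbage for the GC to collect mid-animation.

```javascript
// Minimal object pool: pre-allocate, recycle, never drop.
function createPool(factory, size) {
  const free = [];
  for (let i = 0; i < size; i++) free.push(factory());
  return {
    acquire() {
      // Grow only if exhausted; steady-state allocates nothing.
      return free.length ? free.pop() : factory();
    },
    release(obj) {
      free.push(obj); // hand objects back instead of letting them become garbage
    },
  };
}

// Usage sketch: recycle particle objects in a game loop.
const particles = createPool(() => ({ x: 0, y: 0, vx: 0, vy: 0 }), 100);
const p = particles.acquire();
p.x = 10; p.vx = 1;
// ...when the particle dies, return it to the pool:
particles.release(p);
```

The trade-off is that you must reset recycled objects yourself; stale fields from the previous use are a classic pooling bug.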
But building a game that's going to run at 60 FPS while doing complicated things is a hard problem no matter what environment it's running in and no matter what language you're writing it in. So you can't expect that the browser is just going to do it for you. And I don't think that you could make an easy decision even if you had a guaranteed API that said GC now. You couldn't make an easy decision about when to call that API. I'm curious: again and again, when we talk about rendering performance, we have to test, we have to look at a specific thing. What is the brightest sign of hope for the next thing that's going to be easier to handle? When you say easier to handle? I don't know; certain things, in the sense that the timeline view at least made getting a handle on it easier than it was, like you could actually see it. Like the inability to see image decodes: now anyone can see an image decode and say, oh, that was an image decode, that's what that thing was. Having that show up in the tools makes it an easier thing to deal with. Is there anything that either is going to go away or is going to be much easier to test for? In terms of going away, from a Chrome perspective, image decodes and resizes are at the moment extremely expensive. And as we all probably know, 60% of the average page is images. So if you can fix image decoding and resizing as a bottleneck, that helps enormously with that problem. So our engineers are definitely looking at ways to make that a thing. The other problem that you often see is contention on the main thread being so high that work can't continue and the app becomes unresponsive. And we're looking at ways of basically reducing the workload. That doesn't always necessarily mean firing off more threads, which I think is the sometimes mooted idea, because in Chrome's case, we have a fairly thread-heavy app.
So just creating more threads wouldn't necessarily solve it. What you actually want to do is do the work in a smarter way. So for example, moving more work that is suitable for the GPU over to the GPU, that kind of stuff. From a developer's point of view, none of that actually requires action other than trying to reduce the work that you're doing, which has always been the case. If you're layout-bound, try and reduce layout. Even if layout gets faster in the browser, you still want to do less of it. Performance is the art of doing the least amount of work. I think we have a question from the crowd, Saroj. Yes, Michael? Hi, I have a very simple question. A few weeks ago, I was developing this web application. I don't have any complicated CSS going on, no timeouts, a very simple layout, but it just uses a lot of images. It picks your images from Picasa and shows them to you. And it's just slow; especially the first time, the rendering performance is really bad. So what can I as a developer do to improve that performance? That is a tough one right now, if I'm honest. From my point of view, when the page is rasterizing, it does the image decodes and resizes inline, as a sort of dependency. It gets to the drawBitmap call, where it goes: OK, I now need to draw an image, so I now need to decode that image and resize it if necessary, and then I can actually rasterize it into the page. That all happens inline. If it was decoupled from the rasterization and there was a sort of gap, where the image appeared later, that might be one thing. But as a developer, there's not a lot you can do, apart from perhaps batching your images. It's something you could potentially look at, because it's all done on demand. Again, this is from a Chrome perspective: it looks at what it needs to rasterize, and if that's 30 images, 30 images are gonna get queued up for decoding and resizing in one go.
It might be, it depends on the app, but it might be better to just hold off and go, can I do these one by one, or just over a few frames, so that I'm not hitting everything at once? That sometimes may be a thing, but I offer that advice without knowing the application in detail, so use it wisely. Does it need to show all those 30 images at once? Yeah, that was one of the things I tried. Not really, and that was one of the things I tried to do, to just show the images, whatever is visible in one page size, but even then, the first time, when you are getting, let's say, 10 images, it's visibly slow. And I've seen a similar pattern on other websites too, like the well-known websites, so I wasn't sure if there is anything that can be done about it. And we are trying to reduce the tax that is levied for decodes and resizes, but... We just had a whole panel talking about images. Images are expensive. Having to touch every pixel in an image and doing decodes and things like that is an expensive operation. There's fundamentally no way around the fact that we have to do a lot of work when we show images. But the techniques that we talked about in the last panel, especially if you control your images, and in your case you may not: putting low-quality images in as placeholders and then asynchronously bringing in higher-quality versions, or carefully choosing what types of images you're using, what compression formats, that sort of thing, whatever's fast for your specific operation, is something that you can do to optimize image-heavy sites. But fundamentally, images are expensive. There's no way around that. It's interesting as a side note that when the last panel was talking, there was obviously a lot of discussion about bandwidth, and that's only part of the picture when you're talking about images, right? The other side of this is, how long does it then take me to decode it and resize it? So you actually need a full understanding.
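A rough sketch of the "one by one or just over a few frames" idea above, assuming you would drive it from requestAnimationFrame in a real page. The batch size and the decode trigger (setting src from a data attribute) are assumptions to tune against the timeline on a real device, not a prescribed recipe.

```javascript
// Process a queue of items in small per-frame batches instead of
// queueing all 30 image decodes in one go.
function makeBatcher(items, perFrame, process) {
  let i = 0;
  return function tick() {
    const end = Math.min(i + perFrame, items.length);
    for (; i < end; i++) process(items[i]);
    return i < items.length; // true while work remains
  };
}

// In a page you would drive this from requestAnimationFrame, e.g.:
//   const tick = makeBatcher(imgs, 3, (img) => { img.src = img.dataset.src; });
//   (function loop() { if (tick()) requestAnimationFrame(loop); })();
// so each frame triggers at most 3 decodes instead of all at once.
```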
You could save time on the transfer of an image, yay, and then have something that's actually really horrible to decode or resize on the other side. So your overall picture is either worse or the same or better, but you need to understand the whole lifecycle of an image to actually know whether or not you're doing the right thing. And also the low-quality placeholder, switched out with the high-res one, could be really good, but then you're also trading off the fact that you probably have a second request to get your high-res image. So it's not just straightforward, often anyway, and it's very specific to your app whether it is worthwhile doing. It will be interesting to see if in the near future, instead of the CPU alone being responsible for images, you could have dedicated co-processor cores for them. I think we have a question from Jake Archibald. Well, he's getting the mic. Image decoding is actually hardware accelerated in many cases. So on the scrolling-to-load-images thing, like using a scroll event to bring images in when they're out of view: I just want to point out that that's kind of a massive anti-pattern on mobile, because if the radio falls asleep on any device with a cellular connection, waking that radio up comes with a huge latency and will ruin the battery. Now in the Resource Priorities spec, we've got a new attribute called postpone which hands that power back to the browser and lets the browser defer the image downloading if it's out of view. I would love, as well as deferring the image downloading, the ability to defer decoding as well. I don't see why you'd have one and not the other. Don't load this. Okay, do load it, fine. Don't decode it. Okay, do decode it, fine. That seems to go hand in hand to me, unless I'm crazy, which is possible. That's exactly the magic of Web Audio for audio, because decoding the audio is expensive, so you want to do it first so it's ready.
Maybe we can do Web Audio for images. Somebody write that spec. That's a great idea. Next question. Matt Todd. Hey, this is an anonymous question. What have been the biggest wins and wastes of time at real websites? And I think balancing that between front-end developers and web developers, but also from browser implementation development: what have been really rewarding endeavors, and what have been huge wastes of time that have taken more effort than any benefit they produced? We still see a lot of sites that, in busy loops, add one node to the DOM and then ask for a property that requires you to do a layout, and then add a node to the DOM and then ask for a property that requires a layout, in a really tight loop. And this is a huge anti-pattern. We've been telling people this forever. Most people are pretty good about batching nodes going into the DOM and that sort of thing, but still a ton of people do this all the time. I feel like it doesn't happen so much where you see the tight loop; it's more that you have all these other components on the page, where one will add something to the DOM and something unrelated will trigger a computed style, and it just happens as you render the page. So a lot of it's uncoordinated. It's not these little tight loops, I mean, there are the tight-loop things for painting and rendering, but a lot of people just have all these components, and some affect the DOM and some require reading from it. So I think you see that when you just have a ton of stuff on your page. From my point of view, the biggest waste of time I've seen would be CSS selector matching, which is remarkably fast. If you're optimizing your selectors for matching, that's probably not something you're gonna see a return on. The thing that I've seen the most use come from would be promoting for isolation: putting elements onto their own layer when they're frequently painted. That's the one I've seen work the most.
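The interleaved read/write pattern described above forces a synchronous layout on every iteration. One way to avoid it is to batch all layout-forcing reads before all writes; here is a minimal fastdom-style scheduler as an illustration (the DOM calls in the usage comment are hypothetical, and in a real page you would flush from requestAnimationFrame).

```javascript
// Collect reads and writes, then flush reads first so layout is
// forced at most once per flush instead of once per iteration.
function createScheduler() {
  const reads = [];
  const writes = [];
  return {
    measure(fn) { reads.push(fn); },  // queue a layout-forcing read
    mutate(fn) { writes.push(fn); },  // queue a DOM write
    flush() {
      reads.splice(0).forEach((fn) => fn());   // all reads first
      writes.splice(0).forEach((fn) => fn());  // then all writes
    },
  };
}

// Usage sketch (illustrative DOM calls):
//   const dom = createScheduler();
//   nodes.forEach((n) => {
//     dom.measure(() => { n._h = n.offsetHeight; });    // reads batched
//     dom.mutate(() => { container.appendChild(n); });  // writes batched
//   });
//   requestAnimationFrame(() => dom.flush());
```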
I agree mostly on the selector matching stuff, but if you look at some really huge DOMs that have maybe 30,000 nodes, and you have a buildup of all the CSS, typically people have so many descendant selectors that just target an anchor element. And if you add a ton of anchor elements to this DOM, that's gonna add up. So it's not that every rule matters, but there are ones that you can call out. Right, no, for sure. But I would argue that was a fairly extreme case, unless I'm misunderstanding your description. I think for most people and most applications, the DOM is not 30,000 nodes. Wow. That's GitHub commit diffs. Boom. We see a lot of sites that are over 100,000 nodes. Really? Wow. Just gonna let that sink in for a bit. I was wondering what your experience was. Yeah, at Etsy, unfortunately, the big wins have been removing content or removing design elements. So, like, removing drop shadows off of every element on the page, and things like getting rid of animated carousels. Sometimes those things are hard to push through because the designers or the UX people really want them, but at the end of the day you have to test it, right? And if people want a jank-free experience more than they want drop shadows, then that's what you should give them. We have a question from John Mung. Hi, we've talked a lot about image decoding and resizing being very expensive. Do we know of any example libraries or example implementations that use web workers and transferable objects to have a significant impact on this? Oh dear. Yes and no. I did actually do an experiment with this one, where I created canvas elements for all the images and used workers to decode and resize the images. So, you can do it, is the first answer. Whether you should do it becomes the next question, and no would be the answer, because you have to manage all the images yourself. Is it in view? Is it decoded already? Have I decoded this thing before?
All these things that the browser does fantastically well for you, you are now having to take care of yourself, and you can shoot yourself in the foot so hard and so fast that I feel duty bound to say: you can do it, but don't. But you can do it, and it's cool. Especially on mobile, I think it will definitely, definitely not be a win. So even if you can get the image decoded a couple of milliseconds faster, the amount of time that you're gonna spend moving pixels around, saturating your memory bus, plus the battery impact of running these web workers at the same time, is gonna be way, way worse than just decoding the images. Because you're gonna spawn a new worker, and you probably haven't got enough cores on mobile to support that decision. And it's gonna be slower because it's JavaScript. But you are in control. I mean, you are in total control. So that's the trade-off: you're trading all of that for control. Okay, we have another question from Sergei. We had a lot of challenges explaining network performance, basically the other side of performance, to designers and business people. A lot of rendering performance is actually much closer to the product, so they might actually understand it. Do you have any suggestions on how to organize the process of making them understand this side of the issues? Yeah, I think, similar to network performance, you have to kind of show it to them, right? And in a lot of cases it's a lot easier to see rendering performance being problematic, especially if you have low-end devices that you can show people with. So I think it's a question of getting it in front of them. And like you said, it is closer to the product, closer to the design side of things. So I think it should be more intuitive for those people. But I think demonstration is the best option. Yeah. I think also demoing on cellular instead of Wi-Fi helps a lot. Or just demoing on a constrained device, pick one. Oh look, slow; look, fast.
Which would you rather ship? It tends to be pretty easy then, I think. Or easier. And then, one thing that was really successful for WPO was just business metrics, right? And we have some of that data now. It's starting to come out from Facebook and Etsy. So I think showing that information is really important, to say this really does impact engagement. We have a question from Paul Irish. I think I was gonna mostly ask about the same sort of thing. But I'm interested in ways to communicate the impact, because for network performance, page load time is a really easy metric. We can track that across browsers, across different connectivity. For rendering perf, there's not much in the way of numbers that we can use to demonstrate, say, this is the impact that I had, or this is where our problem is right now and we need to bring it down to here. So do you have any ideas on making things a bit more quantitative, so that management and clients can really see the impact that we have, and understand that there is a problem that we need to solve before we get into it? Yeah, that's a great question. Because everybody quotes these stats, like 100 milliseconds on Amazon was 1% of revenue, but you can't say three frames per second was half a percent of revenue or something like that. So I think we just have to have real studies that have been done by real companies, and ideally companies that are in your market. So again, it's a tough problem, but I think we just need to have more people doing experiments and being willing to slow down half their traffic to 30 frames per second and see what happens. I mean, we are now at the point where the browser is capable of 60 frames a second, which hasn't always been true.
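One sketch of the kind of quantitative number asked about above: a requestAnimationFrame-based frames-per-second sampler that could feed real user monitoring. The one-second window and the reporting mechanism (a beacon endpoint) are assumptions; this only shows the counting logic.

```javascript
// Count rAF callbacks per wall-clock second and report each window.
// Feed it the timestamps that requestAnimationFrame passes in (ms).
function createFpsMeter(report) {
  let frames = 0;
  let windowStart = null;
  return function onFrame(timestamp) {
    if (windowStart === null) windowStart = timestamp;
    frames++;
    if (timestamp - windowStart >= 1000) {
      report(frames); // e.g. navigator.sendBeacon('/rum', String(frames))
      frames = 0;
      windowStart = timestamp;
    }
  };
}

// In a page you would wire it up like:
//   const onFrame = createFpsMeter((fps) => console.log('fps:', fps));
//   (function loop(t) { onFrame(t || 0); requestAnimationFrame(loop); })();
```

Aggregated by device and page, numbers like these are what would let a team say "Android device X frequently drops below 30 fps on this page", as suggested earlier in the discussion.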
And now that it is, we're sort of playing catch-up with the page-load-time part of performance, and we need to ask that question. As an industry, as a body of developers, we are interested, I hope, in seeing how the actual runtime experience affects our users, and how much it affects them. Because we don't know today: is it the same, is it less, is it more than page load time? Because, you know, they load once, hopefully, and run it for a long time. So a 100-millisecond difference on page load is a very different deal to repeatedly hitting them with 100 milliseconds, but we don't know how big that is. So, yeah, we don't have a good answer, I don't think, on how much it affects the bottom line, but I think we need to find out pretty quickly. All right, I think that about wraps it up for us. Are we ready for lunch? Someone, oh, someone has a microphone ready to go? Yeah, that's fine. One question, actually. My biggest problem with rendering performance is that I cannot describe my intent to the browser. I cannot describe my intent that I want this element to have higher priority when rendered than another. And I think that's the biggest problem for me when we're talking rendering performance: the browser is making false assumptions about what my intent is in my layout. And I think this goes together with images. This image should be rendered before that one. This box should be rendered before this box. What are we doing to look into that, so that I, as a developer, can declare my intent to the browser, so the browser, or the rendering engine, can make a more intelligent guess about how things should be composed when we're doing layout? Because now we're talking about translateZ, which is really a hack. First of all, it doesn't force hardware acceleration in IE. And anyway, I shouldn't have to know about layers.
That's an implementation detail in the rendering engine. I just want to declare my intent that this element is either higher or lower priority, and then you guys should figure out the rest. Are you looking into these things? Are there any proposals coming up? So, there's one thing, yeah, okay. First of all, "the browser should do everything for me" is the one thing I'd want to pick up on. I don't know how much I agree with that. I don't think I agree with that. The way I normally phrase this is: if you write spec-compliant code, you should expect spec-compliant responses from the browser. But there is no promise of performance. Now, the promise comes through the fact that everybody wants fast code. Browser vendors want to give you fast implementations. And so that's something you should seek, and ask for, and push for, definitely. But I think it's very difficult to say, hand on heart, that a developer, any programmer, should be completely divorced from the system they're programming on. I can't quite bring myself to say that. I think it would be a nice thing, but I don't think it's realistic. Is that something that we are looking to improve? Yeah, always. I think there's always a definite balancing act with APIs. If they're overly prescriptive and you don't get enough control, then you end up with what I think is the AppCache situation, where you can't polyfill it, you can't bridge that gap. Whereas even if an API is too difficult or horrible, and I would argue that things like WebGL actually are very, very horrible from a developer's point of view, at least you can polyfill on top of it. You can at least add on something like three.js. So I'd rather it's that way around. But it comes through developer feedback saying: we don't think this works, we don't like this. Okay, and I think that is really it.