OK, so we've talked about the network. Hopefully that gives you some background for things to look out for, and things to look forward to in the future, like HTTP 2.0. Now let's talk about this concept of a critical rendering path. The idea behind the critical rendering path is that there is a certain sequence of events that needs to happen in the browser before we can paint something to the screen, right? Put some pixels on the page. Effectively, this is the sequence that happens, and we're going to go through it and try to understand the bottlenecks along the way. So first of all, what's so critical about the critical rendering path? Let's start with a very simple example. This is a five-line HTML file and a CSS style sheet that I want to render on the page. All I have here is a title called Performance, I'm embedding a reference to an external style sheet, and I'm just printing Hello World. And my style sheet just contains two rules. If you pay attention here, my span is actually set to display: none. So in theory, when the browser renders this, it should just say "Hello", right? Nice and simple. This should render in nanoseconds; everything's nice and optimized here. Well, not so fast. Let's start from the beginning. Recall the earlier slide where we were transferring a tiny little 20-kilobyte file; we were sending the data in chunks. So just for the sake of an example, let's say we get the first part of the HTML data. We send the request, we get the first packet, and it just contains the stuff at the top, which is the doctype, meta, and the title, Performance. It comes in off the network, we have partial HTML for the file, and we can start constructing the DOM, or document object model, of the page. And this part is important, right?
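The page and style sheet being described might look something like this; the exact markup isn't shown in the transcript, so this is a plausible reconstruction (the p rule is a guess):

```html
<!doctype html>
<html>
  <head>
    <meta charset="utf-8">
    <title>Performance</title>
    <link rel="stylesheet" href="style.css">
  </head>
  <body>
    <p>Hello <span>world</span></p>
  </body>
</html>
```

```css
/* style.css: two rules, one of which hides the span entirely */
p    { font-weight: bold; }
span { display: none; }
```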
So at this point, the screen is blank because we haven't received the full HTML. And the important part about the parsing and the construction of the DOM is that it's done incrementally, which is to say we start with bytes on the wire, the bytes come in, we interpret them as characters, and then the browser starts creating tokens out of the string. It looks at the string and says, OK, there's a p tag, there's a "Hello" string, a start tag for a span, and all the rest. It creates nodes out of the tokens, and then it connects these nodes into this tree here. And basically what the HTML5 specification provides is a specific algorithm for how you go from the top, which is a text string, to a tree that looks like this. So the HTML5 specification gives us a very concrete sequence of steps that says, this is how the tree should look at the end. This is important because prior to HTML5 there was no such specification, which is why there were so many differences between browsers: they would take the same HTML and give you different trees at the end, and it's like, what happened there? So this is one of the great things about HTML5. We finally have a spec for this. Great. So we're constructing this stuff. Now let's say we got the second packet. Now we've discovered the reference to the link tag, the style sheet, and immediately the browser is going to send that HTTP request. It says, well, OK, this page is asking me for a style sheet, let me dispatch that request on the network. In the meantime, I can proceed and continue to parse the HTML and construct the DOM. So at this point, we've actually finished; this is the end of the HTML file. We've parsed the HTML, we have the object model, but nothing is on the screen yet because we don't have the CSS. What would happen if we rendered the DOM without the CSS?
Well, you'd get an ugly, unstyled page, which is not a nice experience. This is known as a flash of unstyled content. So at this point we just block and say, look, until we have all of the style information, we can't put anything on the screen. In the meantime, we're waiting on the network for the style sheet request. Next, our two-line style sheet arrives in two packets. I'm using this as a silly example, but you can imagine a larger style sheet with many different rules. We got the first line here, which is the p rule. Notice that unlike the construction of the DOM, we can't actually parse the CSS incrementally. Because of how CSS is specified, with its cascading rules, we need to have the entire file before we can evaluate it; we can't incrementally build it up. So if this file takes 200 milliseconds to fetch, we have to wait 200 milliseconds for the entire CSS file to arrive. Which points to an important takeaway: if you have one giant style sheet, splitting it into multiple style sheets may be beneficial, because we can evaluate each of those as it arrives. So concatenating everything into one style.css may not be a good rule, especially on mobile. So, great. Finally, we got our last line, and we can construct the CSS object model. So now we have two trees: the document object model and the CSS object model. Except that we still can't paint anything, so the screen is still blank. Next, what happens is we take these two trees, the CSS object model and the DOM, and we build a new hybrid tree out of them. So for example, here we have our body tag, we have the p, the paragraph tag with "Hello", and then there's our span. And the CSS rule that says display: none gets applied to the span right here. What we get at the end is a thing called the render tree.
And notice that there's something missing from the render tree: the span tag. What we're saying here is, look, if it's not going to be displayed on the screen, there's no point in spending the CPU cycles to render this thing only to then hide it. So elements that are display: none are just not part of the render tree. At this point, we've taken these two trees and created the render tree. Notice that the screen is still blank; this is all work that's happening within the browser. And in fact, there's not just one render tree inside the browser; there's a whole bunch of them. So why do we need this? Well, certain elements may have special implementation details. For example, a video tag has its own layer and its own priority, where it may be hardware accelerated or GPU backed. So we have these different trees being maintained, and they all need to be synchronized with each other. This is just to illustrate that there's a relationship between all of these. So the idea of the critical rendering path is exactly this. What we have gone through here is: we got bits off the wire, we constructed the DOM, we asked for the CSS, we constructed the CSS object model, and we built the render tree. In theory, we now have all the information we need to start putting something on the screen, painting pixels. Except there are also the layout and paint phases. So what are layout and paint? Before we even get there, a couple of takeaways. HTML is parsed incrementally. What does this tell us? It means that if you can, you should be delivering the HTML incrementally to the browser. Here are two different strategies that your application server could use. You could generate the entire response, waiting, say, 100 milliseconds to generate the entire index.html file, and then ship it all at once at the end. That's one strategy.
And the other strategy is to say, well, OK, great, I've got everything up to about half of the page now, so I'm going to flush that to the user, and I'm going to feed you the next part of the page after that. The second strategy is actually much better, because we're parsing the HTML incrementally, which means we can discover the CSS style sheet much quicker. A fun example is our Google search pages. We want to make the search pages really fast, so we have this nifty little trick: you send us a packet with a search query, and before we even understand what the search query is, we immediately send you the header of the response page, because all of the search pages have the same header. We don't even know what you're asking for yet, but it's just like, here's the header. Start parsing the HTML, start fetching the CSS, or any other resources. And once we've sent that data, we actually look at the query. OK, what are you asking for? Search for "fluent". OK, let us query the search index, build the actual result page, and then deliver all the rest. So we can progressively fill this in. This is a low-level example, but it's something that you can leverage. The other important takeaway here is that rendering is blocked on CSS. You can split your CSS between different files and use different strategies, but we have to have all of the CSS before we put anything on the screen. So CSS is truly critical: if you want to have a fast-loading page, to have visual output, you want to get the CSS down to the user as quickly as possible. Sometimes that may mean inlining some of the styles into your page. Sometimes it may mean, hey, I have a giant style sheet and I only need 50 styles out of 500 on this specific page. Can I deliver just those 50 and then load the others later, after the fact? Because otherwise, you're just blocking the entire rendering waiting for those other 450 unused styles.
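The two server strategies can be sketched like this; the mock response object stands in for something like Node's `http.ServerResponse`, and the function names are ours:

```javascript
// Strategy 1: buffer the whole page, then ship it all at once at the end.
function renderBuffered(res, head, renderBody) {
  const body = renderBody();      // e.g. 100 ms of template/query work
  res.write(head + body);         // the browser sees nothing until now
  res.end();
}

// Strategy 2: flush the known-ahead-of-time head immediately, so the browser
// can start parsing HTML and fetching the CSS while we build the rest.
function renderStreamed(res, head, renderBody) {
  res.write(head);                // first flush: parser starts, CSS discovered
  res.write(renderBody());        // second flush: the actual content
  res.end();
}
```

With strategy 2 the style sheet request goes out one body-render earlier, which is exactly the head-first trick described above.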
And we'll see an example of that in a second. This was a cute example, a five-line HTML file, but didn't we just forget something which is critical to most web apps? There's this thing called JavaScript, our friend and foe. Our friend, because JavaScript is truly what enables the web to be what it is today: it's dynamic, it responds to your input, it does everything that we need and allows us to script all of these applications. But it actually makes the story much, much more complicated. For example, JavaScript can query the DOM, and it can also query the CSS object model. It can ask, what's the style of this element? Or, let me grab this element and change its style. You know what, I'm going to add a new DOM element with a new style on it. This is how we build these apps, but it makes the performance story much more complicated. So here's an example. With JavaScript, we can be clever and say, look, I'm going to execute a snippet that writes into the actual HTML markup: document.write. I'm going to pass it a string, and it can be an HTML element, an arbitrary piece of something, and just write this into the document. So why is this a problem? Well, recall that we're parsing the HTML incrementally, and then let's say there's a script tag right in the middle that says, I don't know, awesome-script.js. When we execute that script, you can actually say inside of it, well, I'm going to write a bunch of new HTML right below my current script tag. So whenever we discover a script tag, we say, look, we don't know what you're going to do next, because you could be changing everything that comes after. So we're going to stop the world right now.
We're going to wait until we fetch the JavaScript code and execute it, and only then can we proceed to parse the HTML and the rest. So it literally blocks the DOM construction and everything else. We have to fetch the JavaScript and execute it before we can proceed. So putting a JavaScript tag at the top of your file will basically block you on the network until we fetch that file. Which is why, if you've ever come across the advice of "put your CSS at the top and JavaScript at the bottom", now you know why: putting your CSS at the top means that we discover it early, and if we discover it early, we can fetch it quicker, and that unblocks rendering. Putting your JavaScript at the bottom means that you will still block, but hopefully by that point we've already constructed most of the page, so you're not blocking much. It's still not a great outcome, we still have to block the page, but that's the reason that rule exists. Also, no performance talk is complete without talking about asynchronous scripts. Here's an example with the +1 widget. Oftentimes, and this is not just for social widgets, this is for any JavaScript, Google Analytics and all the rest, you have two variants of this. You can say, look, I'm going to include the script tag, it's going to be the +1 script, and it'll add the functionality that I need on my page. Or I'm going to use this crazy-looking asynchronous function thing at the bottom. It looks scary, right? I'm not sure if that's even going to work. Should I use that, or should I use the simple one? I understand the top one. The problem with the top one is that it will block the rendering of the page. Wherever you put it, we have to stop the parser, fetch that file, execute it, and only then can we proceed.
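The two variants under discussion look roughly like this; the widget URL is illustrative, and the injection snippet is the standard async-loader pattern that widgets like +1 and Google Analytics documented at the time:

```html
<!-- Variant 1: parser-blocking. The HTML parser stops here until the
     script is fetched and executed. -->
<script src="https://apis.google.com/js/plusone.js"></script>

<!-- Variant 2: the "crazy-looking" async loader. Injecting the script
     element from JavaScript means the parser never blocks on it. -->
<script>
  (function() {
    var po = document.createElement('script');
    po.type = 'text/javascript';
    po.async = true;
    po.src = 'https://apis.google.com/js/plusone.js';
    var s = document.getElementsByTagName('script')[0];
    s.parentNode.insertBefore(po, s);
  })();
</script>
```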
With the asynchronous version, which is this pattern here that can be applied to any JavaScript code, the script is fetched asynchronously, so we don't have to block the parsing and the construction of the page. Another interesting feature that we have, and it's actually well supported across all the different browsers, is the async keyword. The idea here is, here's the case that we've just described: we're parsing the document, everything's good, and then we encounter this tag here, which is the file a.js. At this point, we need to go out and fetch the file. So this is the blue line here: we add some extra network latency to fetch the file, we parse the JavaScript, we execute the JavaScript, and only then can we continue building the actual page. The async keyword is basically a promise, a handshake that you give us that says, look, I promise I'm not going to document.write. And that keyword allows us to say, OK, fine, we won't block the construction of the DOM; we will start downloading the JavaScript and just execute it whenever it's done. Because you've promised us that you're not going to arbitrarily modify the HTML markup on the page. So if you can, you should use this attribute on your scripts, and that will certainly make performance much better, because we don't have to block on the network here. So we've covered this already, but here's a very simple example. I grab a reference to some element on the page, and I can query its style, which basically goes to the CSS object model, and extract the width. I can also update the width. I can do this from JavaScript, so I'm basically modifying the CSSOM and the DOM. And here I'm just writing a simple string into the document directly.
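The slide's snippet is along these lines (the element id and values are illustrative):

```html
<script>
  var el = document.getElementById('header');    // query the DOM
  var width = getComputedStyle(el).width;        // read from the CSSOM,
                                                 // which requires styles ready
  el.style.width = '200px';                      // write back to the CSSOM
  document.write('<p>injected markup</p>');      // blocks the parser: avoid
</script>
```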
So first of all, you don't want to be doing document.write to start with, but the important takeaway here is that JavaScript can block both the DOM construction and the CSS. Think of it this way. Recall that we couldn't paint anything until we had the CSS. Now, what if your JavaScript also asks for a CSS property? So JavaScript can block on CSS, and now you have this kind of funny relationship graph where you say, OK, I started constructing the document, I'm fetching the CSS, and I'm going to try to execute some JavaScript at this point, except I can't, because you may be querying for CSS. So once again, you want to get the CSS down to the user as soon as possible, and you want to avoid synchronous JavaScript files as well. So, putting it all together: stream the HTML, which allows us to incrementally construct the page; get the CSS down to the users as soon as possible; and avoid document.write, or use async scripts. That will help speed up the initial render of the page, or at least remove some of the speed bumps along the way that are inherent in a lot of different pages. So let's look at an actual example. Actually, before we get to the example, let's say we wanted to deliver a great experience. We want to break the one-second mobile barrier: we want to deliver pages within one second. As we saw previously, just the latency overhead of a 3G network is about 800 milliseconds or more; for 4G, we were down into 400-to-500-millisecond territory. So we don't have a lot of time. What does that tell us? Well, the server processing time must be really fast, ideally below 100 milliseconds. If you're taking hundreds of milliseconds, or seconds, that's definitely an area you should be optimizing, specifically for mobile, if you want to hit this goal. Then, we've fetched all the resources, and your server response time is fast.
We also need to allocate some time for the browser to do all of this work. We just described a lot of it: constructing the DOM, the CSSOM, the render tree and all the rest. There's a lot of overhead associated with that, to get to a paint. So we need to allocate some time within our one-second budget for this rendering time. Some very simple implications come out of this, specifically for mobile: if you want to break the one-second barrier, we have to inline resources. And just earlier, I was talking about HTTP 2.0 and how inlining is an anti-pattern there. I think there's a lot of interesting room for head scratching about how we solve this problem; maybe we can use server push to deliver resources instead of inlining. But basically what this tells us is, if you want to deliver the page within one second, we can't afford to make extra HTTP requests, because just the latency overhead of that first request is basically a second. So we need to deliver as much useful data as we can within that first second, and that's how we unblock the rendering. If we want fast mobile sites, that's what we need to do. And I'm not saying that you need to render everything within one second; I'm saying you need to render something useful to the user within one second. Maybe it's the head of the page, maybe it's the basic text. After that, you can continue and progressively load the rest: defer all the other JavaScript, you know? You don't need your social widgets and other things to fire immediately within the first 100 milliseconds. Your analytics beacons and all those things can fire afterwards. So here's an example, a slightly more complicated but nonetheless very simple one. We have a simple HTML file with a style sheet reference and a script reference.
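The page in question looks something like this (file names are illustrative): one external style sheet and one external, parser-blocking script.

```html
<html>
  <head>
    <title>Performance</title>
    <link rel="stylesheet" href="style.css"> <!-- blocks rendering -->
    <script src="app.js"></script>           <!-- blocks DOM construction -->
  </head>
  <body>
    <p>Hello <span>world</span></p>
  </body>
</html>
```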
And I can tell you right now that, as simple as this page is, on a 3G network it will likely not render within one second. The reason is that we need to make extra HTTP requests. There's a round trip to get the HTML file, and hopefully we're streaming that HTML back, but then we need to make the style sheet request, and we need to fetch the JavaScript, and the JavaScript will block the construction of the DOM later. So all of these things are working against us. How do we work around it? Well, we have to start inlining things. You say, OK, I am going to inline my critical CSS styles. And what are critical CSS styles? Well, maybe once again it's the header of your page, or something meaningful, even a loading bar. That's going to come directly in the page, and yes, there's a cost trade-off here: you're inflating the size of the page by inlining this, potentially across all of your different pages, but maybe that's not a problem for your site. You want to isolate the critical CSS and have that as part of the initial page. The same thing happens for scripts. Ideally, you don't even have scripts at the top. It's true that a lot of applications, or pages, I should say, require JavaScript today, but it's also true that a lot of the JavaScript is enhancing the experience: you're loading jQuery to add on-click handlers, you're adding social widgets, you're adding interactivity. But that doesn't block us from first getting the content rendered, and then loading and adding those things incrementally afterwards. So most of the time, at least in my experience, you can safely defer most of your code until the first render happens and then add that functionality.
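Putting those ideas together, a sketch of the optimized skeleton: critical styles inlined in the head, and application script loaded only after first render. The file names and the exact deferral hook are illustrative; waiting for the load event is one simple way to get scripts out of the critical path.

```html
<html>
  <head>
    <title>Performance</title>
    <style>
      /* Only the critical, above-the-fold rules are inlined here;
         the remaining styles can be fetched after first render. */
      p { font-weight: bold; }
    </style>
  </head>
  <body>
    <p>Hello <span>world</span></p>
    <script>
      // Defer non-critical JavaScript until after the first paint:
      // wait for onload, then inject the script asynchronously.
      window.addEventListener('load', function() {
        var s = document.createElement('script');
        s.src = 'app.js';
        s.async = true;
        document.body.appendChild(s);
      });
    </script>
  </body>
</html>
```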
This may not be true for your single-page app, because the single-page app is the one rendering the entire application. And in that case, to anticipate the question of how we make this work for a one-page app, the answer is, tough luck. Chances are you're going to be fetching a JavaScript file with a lot of templates and CSS and other things, and in short, you probably won't be able to hit this goal. Although maybe there are some tricks you can use with similar techniques, like inlining and all the rest. And just at the bottom here, I'm showing that we can create a simple function to say, load this after the first paint, so we get it out of the critical path. You don't need all of the JavaScript immediately. So how do you even identify what the heck critical CSS is? One nice tool you can use: if you open up your Chrome DevTools, you can go into the Audits panel and run a performance audit, and it will look at a number of different things on your page. One of the rules it looks at is the CSS that is actually being used on the page. Here you can see that when I ran this example, there were about 55 kilobytes of CSS on this page, but 60% of it was not being used. These are shared styles across many different pages, and perhaps we can somehow split the critical and non-critical CSS. Some of these files probably didn't need to be loaded initially; they could be loaded after the fact, and that would accelerate the rendering performance. Another example, and I'm not sure if the Wi-Fi is acting any better now; if not, we have some screenshots. Yay, OK. So another interesting tool that we have is our PageSpeed Insights tool. Of course, you can also look at your waterfall within your Chrome DevTools or Firefox tools and all the rest.
This is just the same view, but on the web. Yes, it's tiny, but we're looking at guardian.co.uk, and this is all of the files that we need to load this page. It's big: there are over 100 different files. There's CSS, there's JavaScript and all the rest, but not all of these files are required for us to first render the page. This is why we have this "highlight critical path" button, and look what happens when I click on it: that entire waterfall collapses down to basically this sequence here. And what the sequence tells us is, yes, you have 80-plus files on your page, images and other things, but these are the files that are critical for us to actually get something visible. Here's the sequence of JavaScript, CSS files and other things that we need. So this is a great way to look at your own site, or any site for that matter, and figure out: what is that sequence? What's currently blocking my performance? So let me go back and show you some examples here. When I grabbed this snapshot earlier, first of all, when I went to the Guardian, it issued a redirect, which on mobile, of course, is extremely expensive. We just said that just the first request is going to take us about a second. Now, if you redirect me from www.guardian.co.uk to m.guardian.co.uk, that's a new host name, which means a new DNS lookup. It means a new TCP connection. It means basically we're starting all over, right from the beginning. And in this case, it's 300 milliseconds, and that's on a very fast network. So on mobile, ideally, the number of redirects is zero; you should avoid them whenever you can. After that, we can start looking at how this page is being constructed. You can see that the Guardian is actually doing a good job here. The green bars are the CSS files.
So they are, in fact, putting the CSS at the top, which is good, exactly what you want. But then later, they start loading JavaScript files. Here they're loading jquery.min plus some other plugins; so far so good. Except then they have, I guess, their own show-ads script, and the show-ads script issues a document.write to write the Google ad code. If you recall our earlier discussion, that's definitely an anti-pattern, because it will block all of the rendering. And not only that, but because this code gets written as part of the script, we can't discover it early. The document parser is smart in the sense that it can look ahead a little bit and say, hey, are you going to be asking for this image file? I'll look ahead and just start that request. With document.write, we can't do that; we have to wait until you write it into the document, and only then can we dispatch the request. So this is why there's this really high amount of latency here. And then later, there's actually some really long-running JavaScript on this page. I didn't really dig into why, but it's a good opportunity to profile your JavaScript right there. Let's skip that. So what that showed you is the idea of a critical path, and certain things you can do to optimize the rendering part of it: CSS, DOM, and JavaScript. Once we've loaded the page, once we've gotten to the first paint, hopefully within one second, we enter a new phase in the lifecycle of the page, which is the in-app performance. Now we need to render the pixels and react to user input, and hopefully we're doing this at 60 frames per second. It's the same pipeline, except now we're running in a loop. Your JavaScript is likely what's triggering events, or maybe it's the user triggering events.
A scroll event, clicking a button, clicking in a form, one of these things. And each one of these events is going to trigger a couple of different things within the browser. It can trigger a style recalculation, it can trigger a layout or a paint update, and we'll talk about each and every one of these. So as I said, performance in this phase is 60 frames per second. And this may sound a little funny. It's like, look, I'm not building a game here, I'm just building a web page; what gives with 60 frames per second? Well, it turns out that it actually matters. Just recently, Facebook shared an interesting story where they intentionally slowed down their timeline. When you scroll your timeline, they slowed it down from 60 frames per second to 30 frames per second, and they found that the engagement of users with the timeline dropped, meaning they viewed fewer stories, they didn't scroll as far, et cetera. So that extra jank, if you will, or the skipped frames, literally translated to less engagement from the user. And if you do the math, what's 60 frames per second? We have 1 second, 1,000 milliseconds, divided by 60 frames. That means we have a budget per frame of 16 milliseconds, which frankly is not a lot of time. So what's in a frame? We need to render 60 frames per second; that's this line at the top here, and each frame has 16 milliseconds. And there are a lot of things that we may need to do within each one of these frames. First of all, your application code needs to run. So let's say I click on a button and there's a handler associated with that button; you need to process that input and do whatever you need to do. Then after that, based on whatever your code did, I, as the browser, may need to trigger a style recalculation. So let's say I click on the button and you're trying to open a new window.
That may reflow the DOM. That may create a lot of JavaScript garbage. And I may need to update the actual pixels on the screen. Now, just because I have the steps here in this order doesn't mean they necessarily happen in this order, and GC doesn't always happen in the middle, between your code and layout and paint. In fact, that's the ideal case: your code runs, then we do a little bit of cleanup at the end, and we paint the pixels. That is the best-case scenario. We'll see why this is often not the case and why it's a performance problem. So what happens if we can't finish the frame in 16 milliseconds? Let's say I have a long-running function: the user clicks on a button and, I don't know, I'm trying to compute pi, some expensive function. If we can't finish it in 16 milliseconds, we cross over that budget and we just skip a frame. Let's say it took 17 milliseconds. The browser does some cleanup after that, but then we wait until the next paint, which is 16 milliseconds after that. Basically, we have these fixed intervals within the browser. We say, we're going to paint updates to the screen at these fixed intervals; that's the v-sync interval, if you want. And if you happen to cross one of these thresholds, we'll just wait for the next one. So if your code consistently takes 20 milliseconds to run, we will paint at 30 frames per second. If your code takes more than 32 milliseconds to run, you can do the math. So it's not like you finish your code and we immediately paint; there are these specific v-syncs that you have to hit in order to deliver a consistent experience. And a dropped frame is what we lovingly call jank in the browser. So the takeaway here is that whatever code you need to run in order to deliver 60 frames per second, it needs to run within 16 milliseconds. In fact, it needs to run much quicker than 16 milliseconds, because the browser also needs to do a lot of extra work.
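The v-sync arithmetic above can be sketched as a little helper (the function name is ours; it assumes a 60 Hz display):

```javascript
// Paints land on fixed v-sync boundaries, so a frame whose work overruns
// one interval has to wait for the next one.
function effectiveFps(taskMs, hz) {
  hz = hz || 60;
  var vsync = 1000 / hz;                              // ~16.7 ms at 60 Hz
  var intervalsUsed = Math.max(1, Math.ceil(taskMs / vsync));
  return hz / intervalsUsed;                          // frames delivered
}

// effectiveFps(10) → 60: fits in one interval
// effectiveFps(20) → 30: spills into a second interval, every frame
// effectiveFps(35) → 20: spills into a third interval
```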
Like, you've done your stuff; now we need to do our stuff, which is that we may need to update the styles, the DOM, the painting, everything else. And this cost is, of course, variable. 16 milliseconds sounds like an absolute term; it's not, because 16 milliseconds of CPU time on a mobile device is completely different from 16 milliseconds of CPU time on a beefy desktop. So as a rule of thumb, assuming you're developing on your laptop, you probably want to aim to complete all of your work within 10 milliseconds. That gives us enough of a buffer, and hopefully you're doing even better than that. But 10 milliseconds is kind of the barrier where, if you're not hitting it, then you're definitely dropping frames on mobile. And the important thing here is that if your code takes, let's say, 100 milliseconds to run, we can't interrupt it. We can't interrupt it and say, OK, let's paint an intermediate frame, whatever that means. If your code runs for 100 milliseconds, it just blocks for 100 milliseconds. So if you've ever had the experience of coming to a site, starting to scroll, and it's janky, if that's the right word, it just skips: that's what happens. The site is probably firing some JavaScript code that runs in the background, and we can't update the screen, we can't scroll the screen, because the JavaScript code is taking so much time; we're dropping frames, and that's why you get that slow experience. Also, frequently, before I was aware of this, I would come to a site with a lot of JavaScript handlers, usually something like an on-scroll handler, attached to, say, drag an element as you scroll down the page, and it would be very expensive.
And then you're scrolling and it's very slow, and you're like, oh my god, my computer is slow, I need to close Photoshop, I need a new laptop, right? And then you profile the code and realize, actually, no, that's not the problem; the laptop won't help. The site is just really slow, or rather the JavaScript is taking that much time. So here's an example. This is a screenshot I took a few months back on a very popular web destination, so lots and lots of users. What you see here is directly from Chrome: if you go into Timeline and start recording, each one of these bars is effectively a frame, and its height shows how much time the frame took to render. The yellow bars show the JavaScript execution. So we can see here that a scroll event has fired, and there's a long-running JavaScript function after it. In fact, in this case, the function took 46 milliseconds. Because it took 46 milliseconds, we dropped three frames right there, so we're certainly not delivering 60 frames per second. Even worse, in this implementation, as I'm scrolling the page, the scroll event fires every single time, and this JavaScript function runs on every single event. So within this one frame we actually did the same work twice, which is useless if you think about it. What you want to do instead is register the fact that an event has happened within that frame, handle it once, and defer any further work until after. It's a very common problem on a lot of sites on the web today and something we need to fix. A simple way to fix it, if you're not familiar with it, is to use the requestAnimationFrame callback, which basically says, call me every time a new frame starts, and the browser just calls your function.
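A sketch of that pattern, handle the event once per frame and defer the rest, might look like this. The names `onScroll` and `updatePage` are made up, and a `setTimeout` fallback stands in for `requestAnimationFrame` so the sketch also runs outside a browser:

```javascript
// Sketch: coalesce many scroll events into one update per frame.
const raf = (typeof requestAnimationFrame === 'function')
  ? requestAnimationFrame
  : (cb) => setTimeout(cb, 16); // fallback outside the browser

let latestScrollY = 0;
let ticking = false;            // is a frame callback already queued?
let framesScheduled = 0;        // for illustration only

function onScroll(scrollY) {
  latestScrollY = scrollY;      // just record the event...
  if (!ticking) {               // ...and schedule at most one update per frame
    ticking = true;
    framesScheduled++;
    raf(() => {
      ticking = false;
      updatePage(latestScrollY); // the expensive work runs once per frame
    });
  }
}

function updatePage(scrollY) {
  // placeholder for the real (expensive) per-frame work
}

// Ten scroll events within one frame schedule only one update:
for (let y = 0; y < 100; y += 10) onScroll(y);
```

The handler itself stays cheap (it only records state), and the expensive work runs at most once per v-sync instead of once per event.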
So what you want to do is register the scroll events, put them in a queue, and then, when the requestAnimationFrame (or RAF) callback fires, you say, OK, I have some outstanding things to handle; let me handle them, and I'll defer the rest of the work until the next frame. Similarly, if you do have a long-running function, maybe you can split it. Maybe you can say, I'm going to do this part of the work now, then yield control so the browser can update the screen, and then continue running. It definitely requires a bit more work on your part, but it's worth looking at, because rendering performance definitely matters. And at this point, of course, it helps to talk about profiling your JavaScript code. Chrome provides a couple of good tools for this. In fact, we provide not one but two different profilers: a structural profiler and a sampling profiler. What the heck does that even mean? John and myself actually did a full one-hour episode on the difference between structural and sampling profilers and why you would need either one. I'll give you the very brief version, but if you think JavaScript performance is your bottleneck, I definitely encourage you to check out the full video. So here's the difference. If you open DevTools, go into the Profiles panel, and run the JavaScript CPU profiler, the way it works is that your code runs, and the profiler pauses it at a fixed sampling interval, every millisecond or so, and just looks at what functions are executing and records that. It lets your code run, then pauses and examines the stack, over and over. Basically, it's a sampling profiler, which may or may not be the right tool to find the problem in your code. Sometimes it captures the exact problem: if you have a really expensive function and you're always blocked in it, great.
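Splitting a long job so the browser can paint between slices might look something like this sketch. `processInChunks` is a hypothetical helper, and the 5 ms slice budget is an arbitrary choice:

```javascript
// Sketch: process a big array in small time slices, yielding between
// slices with setTimeout(..., 0) so the browser gets a chance to paint.
function processInChunks(items, handleItem, sliceMs, done) {
  let i = 0;
  function runSlice() {
    const start = Date.now();
    // do work only until this slice's time budget is used up
    while (i < items.length && Date.now() - start < sliceMs) {
      handleItem(items[i++]);
    }
    if (i < items.length) {
      setTimeout(runSlice, 0); // yield to the browser, then continue
    } else if (done) {
      done();                  // all items processed
    }
  }
  runSlice();
}

// Usage: sum a small array in 5 ms slices (here it fits in one slice).
let total = 0;
processInChunks([1, 2, 3, 4, 5], (n) => { total += n; }, 5, () => {});
```

In a real page you might yield with requestAnimationFrame instead of `setTimeout`, so each slice lines up with a frame.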
The sampling profiler will identify it very quickly, and that's probably the tool you want to start with. But structural profiling actually allows you to go deeper. Instead of sampling, you're saying, here are the specific points I want to checkpoint; measure the time taken between these two points and how many times we've executed that span. To use that, you use the Chrome Tracing functionality, and the way it works is that in your JavaScript code you annotate the specific work that you're doing, for example with console.time and then console.timeEnd. You're basically marking specific intervals in your code to say, I'm going to execute some work here, and I want you to track it. Then, once you visualize it, you get a graph where you can say, well, I was processing data item 1, data item 2, and then data item 4 for some reason took a bit longer than all the other ones. And because this is using Chrome Tracing, it's a pretty low-level look at what's happening in the browser. You also get insight into things like which V8 function was being executed, when the GC happened, or what the rendering thread was doing at that time. There's a lot of information hidden in there, and it can be a little daunting the first time you open Chrome Tracing. So if you're not familiar with it, I recommend you check out the video, where we talk about some specific tips for navigating Chrome Tracing. It's not exactly the most user-friendly UI, I'll give you that, but we're working on it. Of course, garbage also happens, right? We're running JavaScript code, we're creating DOM nodes, we're creating other things, we're allocating objects. Sometimes we basically need to pause and just collect the garbage. And garbage is OK, that's fine, but sometimes it is a bottleneck.
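Those console.time / console.timeEnd annotations look like this; the label `'process-items'` and the loop are just placeholders for whatever work you want tracked:

```javascript
// Sketch: mark a named interval so it shows up when you record a trace.
// The label passed to timeEnd must match the one passed to time.
console.time('process-items');        // interval starts here

let checksum = 0;
for (let i = 0; i < 100000; i++) {    // placeholder for real work
  checksum += i;
}

console.timeEnd('process-items');     // prints something like "process-items: 2ms"
```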
If you're building a game, for example, even a 20-millisecond pause to collect garbage means you've dropped a frame of animation, and that will result in not the best experience for the user. So Chrome also provides good tools to track this down. If you go into the Timeline, same thing: you record a trace and start interacting with your application. You hit record, start scrolling your page, playing your game, what have you. What we will show you is the growth of the allocated memory and also how many event listeners or DOM nodes you have allocated in your page. And then, of course, you can tie that back to, say, I was doing this specific action, the number of DOM nodes increased, and it did not go down after that. So you can look at this graph and say, OK, clearly I'm leaking memory. We also provide some useful tools to identify differences between different stages of your application. For example, you want to start playing a game, you've done certain actions; how do you know what the difference is? So let's actually take a look here. This is just an example from our documentation, a very simple, cute example that just creates a whole bunch of objects. So let me open DevTools. I'm going to go into Timeline, or Profiles rather. Oops, let's try that again. So, Profiles. I'm going to take a heap snapshot. Notice that I haven't done anything on the page yet, so this is kind of my baseline: I just loaded the page, and I'm going to take a snapshot of everything that's in memory right now. Then, it's taking a little while, it's a big heap. At this point, I'm going to click this Action button. And I guess I have a few things enabled here that I'll just disable for now. So I'm going to click the Action button. So this is just an example.
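The kind of leak that shows up as an ever-growing line in that memory graph can be as simple as this sketch; `retained` and `onAction` are invented names:

```javascript
// Sketch: every action allocates objects that a long-lived array keeps
// reachable, so the GC can never reclaim them and memory only grows.
const retained = [];              // lives for the life of the page

function onAction() {
  const objects = [];
  for (let i = 0; i < 1000; i++) {
    objects.push({ id: i, created: Date.now() });
  }
  retained.push(objects);         // the leak: nothing ever removes this
}

// Three "actions" leave three batches of objects permanently reachable:
onAction();
onAction();
onAction();
```

Comparing a heap snapshot taken before the actions with one taken after would show those retained batches as the difference.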
Nothing visibly happens, but what's happening in the background is that I'm running this JavaScript code, which generates a whole bunch of objects and then leaks a whole bunch of objects. So now I've done something in my application. I'm going to go back and take another snapshot, and once again it's going to run through and gather all the data.