So we've been hearing a bit about RAIL today. And we're going to dig in and see how we can apply what's inside the Chrome DevTools to really measure, diagnose what's going on, and fix it. So first up, you've seen the slide before: response, animation, idle, load. I'm going to be focusing mostly on two of these pieces, response and load — having that instant 100-millisecond response coming right back after tapping an action, and then the page loading up quickly. And what I want to do is basically just do a performance audit live with some websites in production. It's a lot of live demo. What could go wrong? Don't know.

But the first thing we're going to look at is the actual Chrome Dev Summit website. I want to take a look and see how it loads, and if we can identify ways it could load even faster. All right, I'm going to come over here. What we've got is a fresh build of Chrome, and we're looking at the Chrome Dev Summit site while emulating mobile. It looks pretty good. And I just want to capture the reload action, just to see what's going on. So I'm going to just hit Command-R. And that's cool. But one of the hard parts about what we see here with the waterfall is, yes, we see the waterfall and there are a lot of network requests, but it's really hard to correlate what is happening at the network layer with what is actually happening on screen.

So now, up here at the top, we can turn on screenshot capturing. This will capture a screenshot every single time the page visually changes. We capture all of them and keep them, so you can correlate what's going on. The other thing I'm going to do while I'm here is turn on throttling. Regular 3G is usually what I go with. I'll flip that on and — I don't know — I'll just hit Command-R again and we'll capture that. OK, good. You can let it finish. I'll let it finish. Oh, I'm impatient.
You can hit the red guy to stop. All right, cool. So here's our network waterfall. Up at the top here, we have what actually happened inside the viewport, and you just double-click on these to view it. OK, this was our first paint: white screen. Happens a lot. You can hit the right arrow on these if you want to step through. Oh, look at this — you can see the page kind of fading in. I guess there's a fade effect there. And there we go. Yeah, cool.

So this one over here — we call this the first meaningful paint. It's not the first paint, but it's the first paint a user would care about. When you're diagnosing page load, you really want to start from that first meaningful paint and work backwards from there, because usually the reason it was delayed was some network request it really needed in order to finish. All right, so we've got that. And as we hover over these down in the waterfall, we can see where each of these screenshots was taken. So this one is over in this-ish area, which is interesting.

But one thing we can do, because there are so many network requests, is simplify things. One thing we really want to know is: of all these network requests, which ones really matter when it comes to page load? Which are going to be render blocking, right? Now, inside the Network panel, there is this Priority column. You can just right-click in here — there are a lot of columns available, and Priority is one of them. This is basically the network priority that Chrome and Blink use to go fetch the resource. And these priorities are more or less mapped to: is it render blocking or is it not? So if it's high priority, then it's very necessary. And if it's low priority, then it's less critical. A nice thing, too, is you can see this JavaScript file, right? It is low priority. Scripts usually are high priority, but this one's low.
And you're like, well, maybe it just had the async attribute on the tag. We can just click through on the initiator, which is whatever triggered that request, and view it. Let me just move that out of the way. And where did it click? It just highlighted it. I'm going to click it once more. Yeah, this line, OK, cool. And here's the insertion. And yep, async is on. So it's nice to verify that setting a script to be asynchronous does, in fact, make it be fetched at a lower priority.

And so we know that. But the other thing we can do here to simplify the view and let us focus a little better is to show only the asset types that really matter when it comes to being render blocking. So this is just a hack: you can hold down Command and click on these filters — JS, CSS, web fonts, HTML, and I'll include XHR just for now. A bit simpler. And so now, if we hover over and get a little closer view — OK, cool. It looks like our yellow line dropping down hits right after these two finish, I think. The nice thing is we can also just zoom in. Is that right? Yeah, it might be overlapping. Not sure.

What are these requests? It's schedule.json and widget-apis.js. This one's low priority, so I'm going to bet it's async, and I think it's not going to matter. But schedule.json — if this is a schedule page, we might be using that. So that might make sense. This is my number one suspect, right? So then here, it's like, OK, well, this request happened way out here. Let me just verify — if I click in, OK, yeah, this is the JSON of everything that's happening. So that's necessary. It was requested way out here, not at the beginning. Why? We can look in the Initiator column and see what it says. But another way to do this is to hold Shift in this view, and it will basically highlight the asset dependency graph, more or less.
So if we highlight this, then schedule.json was requested from elements.html, which is highlighted in green up here, which was in turn requested by the original HTML. So you can walk your way down and see the relationships. And this is really valuable, because on your home page there's a lot of third-party content, a lot of everything. This CSS requires these web fonts, and scripts require other scripts. Knowing these asset dependencies is usually critical in figuring out why your load is late.

OK, so schedule.json is requested from elements.html. And you'll see there are two right here, and it looks kind of weird because they look like the exact same thing. And it is weird — turns out it's a bug. It was in Chrome for about a day. It's fixed now, but this is the build I have, so that's just what it is. Sorry. All right, but let's just say it's really either one of these. This was what ended up making this request.

Now, I can look into this and see what's inside this payload. It looks like it's a vulcanized Polymer bundle — all the elements. And it's big, which is why there's so much blue: a lot of download time, two seconds' worth. It's 133 kilobytes gzipped, and if I expand that, 550 kilobytes unzipped. So it's a pretty big file, and that is why the download time was so long. Again, I'm throttling, so there's that, but it's a bit more realistic when it comes to mobile.

So one thing we can already say: if schedule.json is the last thing we need to get this paint out, and elements.html has to finish for us to request it, then if we can shrink this big blue chunk, we're going to make that first paint much faster. We can basically say that if we dropped the size of this initial vulcanized elements.html in half, we'd be ending the request somewhere around here, at about two seconds, and then the schedule.json would be able to shift over.
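That back-of-the-envelope transfer math is simple enough to sketch. The ~750 kbit/s downlink figure for the Regular 3G preset is an assumption — the exact throttling numbers have varied between Chrome versions — and this ignores latency and connection ramp-up, so real numbers run higher:

```javascript
// Rough transfer-time estimate for the vulcanized elements.html bundle.
// Assumes ~750 kbit/s downlink for "Regular 3G" throttling (an assumption);
// ignores latency and TCP ramp-up, so the real waterfall takes longer.
function downloadSeconds(transferKB, kbitPerSec = 750) {
  return (transferKB * 8) / kbitPerSec; // KB -> kbit, divided by throughput
}

console.log(downloadSeconds(133).toFixed(2));     // full 133 KB gzipped bundle, ~1.4 s
console.log(downloadSeconds(133 / 2).toFixed(2)); // the same bundle halved, ~0.7 s
```

Halving the bundle halves the pure transfer time, which is roughly the second of load time the waterfall suggests we could claw back.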
And so instead of a load time of about 4.2 seconds, we'd probably be looking at about 2.75 to 3. We'd be able to knock off a second's worth of load time with that one change. So this is the way you can associate what's happening visually with the dependency relationships between assets, in order to diagnose what is really happening in page load and why it's taking a while.

All right, cool. But it's interesting: schedule.json was actually requested through JavaScript. And while many page load concerns are oriented around the network, there are plenty of cases — especially as people put more sites online that are built as web apps — where you're dependent on JavaScript to really finish everything. So I want to take a closer look at a heavier app. We're now going to look at Hotel Tonight. If you haven't used Hotel Tonight, it's just a really nice app, and mobile web, that finds you a hotel to stay at, even the same day. It's cool. The team did a really nice job building this. In fact, Hotel Tonight was native-app only, and then this summer they released a mobile web app for it, and it's really, really nice.

So I want to take a look at it and see what we can find out. This time, instead of looking at what's happening in the network, we're going to dive in fully — everything that's happening in the Timeline, profiling JavaScript and all that. The first thing I'm going to capture is just the initial load of the page. I'll turn on screenshots over here, because I like the screenshots, and we're just going to hit Command-R and see how that goes. All right, cool. That was fast. Good. All right, what's going on? The first thing: at the top of the timeline is the overview, and the red marks indicate where we think there might be responsiveness concerns. Underneath that are these little green blips — in this recording, not terribly useful.
The green represents the frame rate. So if you're really trying to hit 60 FPS, you want that green to be tall; if it's falling beneath that, you'll be janking. The next section here is usually the most interesting: this is the main thread activity. Yellow is script; purple is recalculating styles and layout. This gives you a sense of what your activity is over time. After that, a little more faint — you can see it here — is a condensed version of the network waterfall. Following that are the screenshots, so we can track the load of this. And you can see it loaded fast, right?

And we're looking here, and here is where that first paint came up. So I'm just going to zoom in here. And it's actually interesting, right? There's this whole section of things going on, and you'd think that might be where it loads — I mean, that is where the JavaScript does all the work. But it looks like there's not a lot happening up front, which is pretty peculiar. But we can see, in this tiny little view, this network request. And I'm like, OK, yeah, I guess that's the HTML.

We can now introduce the network requests into this view — we turn on this checkbox, and here we have what's actually happening at the network layer. So here's the HTML, and then at some point these JavaScript requests kick in, and so on. It's nice to get this association between what is happening in the network and then on the main thread. So one thing I want to do — right, it looks like it paints very quickly. If we want to really associate what the page looked like with what is happening down here, we can just click on these frames right here and see what the state was. And I think — yeah, this one. OK, cool. So this is the frame that basically gave us our first meaningful paint. Now, thinking about RAIL, we want a first meaningful paint in a good amount of time.
You can hold Shift and drag to do this kind of measurement. 600 milliseconds, which is good — it's definitely hitting the 1,000-millisecond budget. And that is the first meaningful paint. That is a solid paint. But it's cool, right? The paint comes pretty much immediately, and there's not a lot of JavaScript sitting in front of it. And I just want to take a closer look at what is in this HTML payload, because it seems almost magical. Pretty Print. Pretty-printing the HTML will pretty-print the CSS and the JavaScript, but there's still minified HTML in here. So I'm going to cheat a little, because Pretty Print doesn't yet do HTML, and bring it over to this little web app that you've probably used before. Get that. OK, CSS. Here's the HTML. OK, they're using React, yes.

And what's interesting, too, as you look down: inline styles. They're using inline styles on every element, which is the kind of thing that's happening now. And it seems actually OK, right? There was no extra CSS request. It was all inline. It was fast. So as far as I'm concerned, it looks good to me. Basically, they did a fantastic job here of optimizing for their critical path — inlining the CSS. The first HTML payload is enough to get that view up on the screen.

All right, cool. Now, let's come back. They got the load within 600 milliseconds, and that seems good. But we should go see how they handle response. So now we're going to start a new recording, and I'm just going to tap on this Search Hotels and see how it works. OK, cool, cool, cool, cool. All right — when you capture screenshots, this part takes a little longer, but it should come back, right? OK, overview, what are you telling me? Some things. I don't know. It's nice to have these screenshots, right? Because you can zoom in and you can even see the little loading animation animating.
So all that was captured. It's like: click, and then animate, animate, animate, and then we got stuff. So I'll just zoom in on that chunk. All right. So in our flame chart, we have this area over here and this big guy. And one cool thing — you can see here's the XHR coming back. It's nice, because now that we have the network here, we can validate that, yep, here is the actual Ajax request. It's the Hotel Tonight inventory API, right? Cool, makes sense. So now we've got a nice correlation: we had the click, the network request went out, it came back, we did this work.

So in total, what do we have? We started with this click event, but I suspect there's something in front of that. If we zoom in enough — yeah, we had a mouse up. There might even be a touch start in front of that sometimes. So this is where the input started. And then we zoom out, and this is where the paint finished. Was it this one? Yes — this is the frame right here where everything finished. So if we measure this area, it's about 1,700 milliseconds. But specifically, this part was our response, right? We clicked and we started getting a loading spinner. That is a response — you're giving immediate feedback, saying, hey, we're going to wait. In total, that was about 200 milliseconds, a little longer than you want. But OK. Then it did the Ajax, and then it looks like it did a bunch of JavaScript after that. And so the rest of it was — hmm, yeah — a second and a half. Good, but it probably could be better.

Now you're at the point where, OK, this seems fairly wide; we could probably see what we can do about that. So in the flame chart, we're looking at all of the JavaScript call stacks. You can just drag it, and you can also hold down Shift while you scroll on the trackpad. And these are very tall call stacks. Tall isn't really a problem — tall is just the height of the call stack. No issue there. It just makes scrolling take a little longer.
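As an aside, the spans we just measured by dragging can also be instrumented from the app's own code with the User Timing API, so they show up by name in a recording. A minimal sketch — the mark names are made up for illustration, not Hotel Tonight's actual code:

```javascript
// Sketch: instrumenting the tap -> spinner -> results flow with the
// User Timing API. These measures appear in DevTools timeline recordings.
performance.mark('search-tap');        // in the tap/click handler
// ... show the loading spinner, kick off the XHR ...
performance.mark('spinner-shown');     // RAIL: this gap should be <= 100 ms
// ... XHR returns, render the results ...
performance.mark('results-rendered');

performance.measure('response', 'search-tap', 'spinner-shown');
performance.measure('full-interaction', 'search-tap', 'results-rendered');

for (const m of performance.getEntriesByType('measure')) {
  console.log(m.name, m.duration.toFixed(1) + ' ms');
}
```

That way you're not eyeballing the flame chart for where input started and paint finished — the named spans are right there.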
But you can see receiveComponent, updateRenderedComponent — this is React doing some stuff. But it's so much data, I don't really know where to start. So here's one way to go about it. We can view things kind of like you would over in the Profiles panel. In the Profiles panel, you start a recording and then kill it, and there's a chart, and you might be looking at something like this. And this is great, but a lot of the time you'll see this "(program)" entry, and you're like, what is that? I know — I wondered for the longest time. Program, for the most part, is pretty much all the things happening that are not JavaScript. The recalculation of styles, the layout, all this stuff — this is program. It's time that the pure JavaScript profiler can't account for, but you can see it over here in the timeline.

But the challenge then is, this is a lot of information. So now we have the same sort of breakdown, but over here in the timeline, with both everything that's happening in the JavaScript and what's happening in the browser. I could work my way down from, say, this finish-loading event, look at the total time in the Call Tree, and follow the path down to see where the time is spent. But these call stacks are really tall — that's going to take some time. So I'm just going to work from the bottom up and see which functions have time in their own costs. In the Bottom-Up view, I'm mostly interested in the self-time column. And here, for this big chunk, we're looking at 900 milliseconds or so. And our number one guy is this: captureStackTrace. And I just looked at this and thought, capturing stack traces seems like something you wouldn't want to be slowing the user down for a quarter of a second on. It doesn't seem terribly important to the user. So let's see what's up.
captureStackTrace is actually showing up because we now have the ability to see what's happening at a native level. captureStackTrace is not in the website's code — it's in V8. I'll just navigate over here. This is the definition of captureStackTrace, and it sits inside the V8 source. There's a lot of JavaScript that V8 itself uses to do JavaScript stuff, and captureStackTrace is one of them. So this is ours, and this is what's taking time. But why is this happening? We'll just walk upwards. Why did captureStackTrace get called? Because of the Error constructor. The Error constructor because of t, and t is in React. OK, cool. So let's click over there and take a look.

Check this out — this is nice. I didn't have to pretty-print anything. These came in from source maps. You can see over here they use webpack, and they created source maps. So this app in production actually ships with its source maps, which makes my job of demoing this really easy. It's quite nice. All right. So we were looking at this function t or whatever, and it dropped me here, and I can see the new Error. I don't really know exactly what this is saying, but I do know we're creating a new Error and throwing it upwards to signal something. And it turns out that this error creation — basically messaging upwards to be caught by React and handled — has a significant enough cost, in this case, that it accounts for a quarter of the time of this big block. That's probably not what they expected, but we're able to see it here. And there's probably a good way they could address that, do something different, drop the cost of this block by at least 25%, and end up with something that fits a bit more into our one-second budget for the load. Let's see.
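The cost we just found is easy to reproduce on its own: constructing an Error makes V8 capture a stack trace eagerly, which is exactly why captureStackTrace dominated the self-time. A small sketch — the loop counts are arbitrary and not from the site's code:

```javascript
// Sketch: constructing an Error makes V8 capture a stack trace up front,
// which is why Error.captureStackTrace shows up under error constructors
// in the Bottom-Up view. Compare against building a plain object.
function makeErrors(n) {
  for (let i = 0; i < n; i++) new Error('invariant violated'); // stack captured each time
}
function makeObjects(n) {
  for (let i = 0; i < n; i++) ({ message: 'invariant violated' }); // no stack capture
}

makeErrors(10000);  // shows up heavy in a CPU profile
makeObjects(10000); // comparatively cheap

// Error.captureStackTrace is the V8 entry point itself — it attaches a
// .stack property to whatever object you hand it:
const holder = {};
Error.captureStackTrace(holder);
console.log(typeof holder.stack); // "string"
```

So signaling through thrown Errors in a hot path carries a hidden stack-capture tax; a plain sentinel object or flag avoids it.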
The other thing I wanted to point out — and I'm going to come back to the network to show it — is that in addition to the Priority column I showed you before, there's also a Connection ID column you can turn on. This is really helpful, especially if you're on HTTP/1.1 and haven't yet moved to HTTP/2, because it's basically giving you an ID for the actual TCP connection each request is using. And here, you essentially get to see when new TCP connections are created and which ones are shared. You can sort by this and see, for instance, that these requests here all shared a single TCP connection — which is why you see this gray gap here: this request was just waiting for that one to finish, and same thing here, because they're all sharing the exact same TCP connection. So it's getting into the internals of networking and how it works, but it gives you a bit more insight into what is actually happening under the hood and why you might be seeing this sort of delay.

All right, let's bring that back. Bring it back. Bring it back. All right, cool. So in the timeline, some new stuff that I showed: capturing screenshots, bringing the network waterfall in there, and summarizing all of the work — aggregating it up to tell you what is happening across all the operations in the tree views, both Call Tree and Bottom-Up. You can also — and this is pretty fun — group all that work by different things, like domain. So, for instance, with Hotel Tonight it looks like most of this is their own code, but I can also take this work and separate it out by URL — which files are all my costs coming from — or, sometimes even better, by subdomain. Here we can see all the third parties — say, for instance, analytics or whatever — and their actual cost on the page for that recording. Even things like Chrome extensions will be included here.
So it's really nice for getting a sense of: my page is complicated, there are lots of third parties going on, and I'm not sure who to blame. This gives you a really clear idea of which players are consuming time on your main thread. All right. Over on the network side, I showed the dependency visualization, understanding the priority of different requests, the Connection ID, and the screenshots. It's good stuff. As you can imagine, a lot of these things are fairly new — not all of them, but many. Canary has them. In only a few cases do you have to turn on experiments, and the end of Paul Bakaus' talk gave you a good idea of how to take care of that.

And I also just want to point out, we're not finished. There are a lot more things we're working on, especially in this performance space. We want to make sure you always get good insight into how you can make a big impact when you're addressing performance, and that you get the information you need. And also, if you're really into this stuff, we're hiring. So holler at me. We'd love to get some people who are really excited about building fantastic tools that make developers really productive.

All right, so I just did kind of a live perf audit. There have been a number of these done for all sorts of different sites. One in particular was the mobile Reddit site, which is a fun site that had a lot of interesting things going on. It got written up, there was a lot of interaction from the community figuring things out, and they ended up making their site a lot faster. There are a lot of these, and I encourage you to read them — you'll learn a lot. But at the same time, I also encourage you to do it yourself. This is the Wikipedia team. They literally printed out the timeline on paper, and then just took Sharpies and Post-it notes and marked it up and said: for this 10 milliseconds, we're doing this, and then this — just accounting for the time and justifying it, too.
Should this thing really take this much time? And it was really effective for them. In fact, they shared this a little while ago: they were able to drop the page load time for Wikipedia in half. And, I don't know, that seems pretty good. We all use Wikipedia. We want it to load fast, especially when we're in a bar trying to win an argument. So it seems good. These results come directly from their work, diagnosing in a performance audit in this manner. So I encourage you to do it. I think that's it. Thank you guys very much.