Hi, everybody. I'm Paul Lewis. And I'm Philip Walton. OK, so we thought today what we'd do is we would talk about the Core Web Vitals inside of DevTools. Now, I know about the DevTools side. In fact, I implemented some of the Core Web Vitals inside of DevTools. But Phil, you're more of the person that knows about the actual metrics, where they came from, and that kind of stuff, right? That's right. I know a lot about the metrics. I work on the Chrome team, working with some of the people that were helping to define the metrics and standardize them in browsers. But I don't really know much about how they work in DevTools. So Paul, you're a great person for me to talk to here. Let's dive in and see what we can find out. OK, so I guess our plan is to have a bit of a conversation, to go back and forth. We'll be diving in and out of DevTools, having a bit of a discussion about these metrics, and just trying to explore, understand, and share what's going on there. So I guess the first one that I was thinking about when we were discussing this was LCP and FCP. So the first thing to talk about is: what are they? Where do they come from? Yeah, well, these are both paint metrics. So FCP is first contentful paint. It represents the first point in time that the browser is able to paint any content on the screen. And LCP is largest contentful paint, and that represents the largest single text node or image element on the page. And the idea behind these two is that FCP represents the first time the user sees something, and LCP represents when the main content of the page has painted. I mean, in general, whatever the largest image or text node on the screen is, that's generally the thing that the user is going to notice. And so that kind of represents when the page has really loaded.
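Both metrics can also be observed in the field with the browser's PerformanceObserver API. Here's a hedged sketch: the helper name `latestLcp` is ours (not a platform API), and the observer half is guarded so it only runs in a browser.

```javascript
// The browser may report several LCP candidates as progressively larger
// elements paint; the candidate that counts is the most recent one.
// latestLcp is a hypothetical helper name, not a platform API.
function latestLcp(entries) {
  return entries.length > 0 ? entries[entries.length - 1].startTime : undefined;
}

// Browser-only: observe paint and LCP entries (skipped outside a browser).
if (typeof window !== 'undefined' && 'PerformanceObserver' in window) {
  new PerformanceObserver((list) => {
    for (const entry of list.getEntriesByName('first-contentful-paint')) {
      console.log('FCP:', entry.startTime);
    }
  }).observe({ type: 'paint', buffered: true });

  new PerformanceObserver((list) => {
    console.log('LCP candidate:', latestLcp(list.getEntries()));
  }).observe({ type: 'largest-contentful-paint', buffered: true });
}
```

Note that LCP candidates stop being reported once the user interacts with the page, which is why the most recent entry is the one to keep.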
So I guess for a lot of people then, the first thing they're going to think of, certainly for the largest contentful paint, would be something like a hero element or something like that. They're going to have a big image at the top of their page, for example. Absolutely. OK, right. But it's not always that, I'm guessing, because you could be deep linking into some content further down the page and everything else. Yep, that's absolutely right. OK, I'll tell you what we'll do then. I've got a page here. Actually, I've got this page on web.dev, with the Performance panel open inside of DevTools. And I guess the goal here is going to be to show FCP and LCP in context. And I have web.dev open here on a page in the performance section around using image CDNs to optimize images. So if you've not seen this content, definitely worth a look. It's a great article. OK, and we have, yeah, let me see, I can deep link into this section, right? With this. And so this, I guess, would become our hero image, right? And an interesting point to make here is that the hero image is not necessarily going to be above the fold. Like in this case, you're loading the page halfway scrolled down. And so LCP is only going to consider elements that are actually visible to the user on the screen. Right, great point. So this is what's going to make this a bit interesting. So what I'm going to do is I'm actually going to go to fast 3G. So in the Performance panel, you can open the capture settings here. I'm going to change from just online over to fast 3G. So we're just going to switch to a slowdown on the network. You can see this little exclamation mark shows up saying network throttling is enabled. And I'm actually going to slow down the CPU just a little bit. And are you doing this so that we can see things? You're doing this to simulate maybe a lower-power device or something like that, correct? Yeah, I am.
But right now as well, what I wanted to do is, if I take a recording with things just slowed down a little bit, it might be easier to see what's going on, because I happen to be somewhere in my house where there's actually a really good internet connection. So I don't particularly see network latency quite as much as you would in other cases, say if you're on a mobile device out and about. So I just thought, let's just try this and see what happens. So I'm going to hit record. I'm going to hit Command-Shift-R to do a reload. OK, and I'm going to stop, and we can discuss what we see. OK, let me just wrap this up here. Now, the first thing to notice, I suppose, would be the Timings row here. To remind ourselves what these are: DOMContentLoaded — this has been around forever, hasn't it? Yeah. But there is first paint, first contentful paint, first meaningful paint, which we could talk about in a little bit, I suppose, largest contentful paint — and you can see that it's actually highlighted our screenshot here — and then the load event. Now, I can use the keys on the keyboard to come in a little bit closer, zoom in a little bit on this particular area of interest. And you see here, I suppose, the first contentful paint is presumably happening, and then the largest contentful paint is happening slightly later. That's right. Now, I think we can get a little bit more info about this, because first contentful paint is happening and then the largest contentful paint, which implies to me that the image is coming in after the initial page content. So we're drawing something, we're painting something, and then we're painting the image after the fact. So let's see if we can see that with screenshots on. And we will record again and see what we get. OK, I'll stop there. And hopefully, if I just zoom this a little bit, we might see — OK, so round about — in fact, I wonder if I can just bring this in a little bit further.
Let me just see if I can drag that down, drag this a little bit. OK, that might be as clear as this is going to get, I wonder. Yeah, it is. OK, I'll tell you what we're going to do. We're going to make this a little bit clearer. Because what's happening is we're actually seeing the page content before I did the refresh and then slightly after. So if I take this and I go to about:blank — this is actually a really interesting way to do this testing, if you're ever curious about it: record from about:blank so that you start without anything on the page, and that can make it easier to find your screenshot. So I'm going to paste in the URL here, but not hit Enter, not go to that yet, hit record, and now go there. OK, hopefully that will make it a little easier to see what's going on. OK, so you see we've gone from here into the screenshots. We see the original page content, the top of the page, and then we're going down to our deep link just below that. So my assumption is, if we bring our zoom in here, that around about here — in fact, we can just do this — you see we're just right on this line here where we go from nothing to something, and that is exactly the point where we actually start to see the first contentful paint coming in. Yeah, it's the first thing that the user sees, but it's not the main thing that they wanted to see when they were loading the page. Yeah, in fact, it's saying that the largest contentful paint at this point is actually this piece of text. Now, let's try it one more time, just to really dial it in. I'm going to go for slow 3G. I'm going to go to about:blank again, and I'm going to hit record, and I'm going to see what happens. I feel like we're going to see something reasonable here. Let's process that profile. OK, there we go. This, I think, is starting to make more sense to me over here. There we go. OK, wow. There we are. First contentful paint is here.
OK, and then much later, boop, there comes our image, which is slightly over to the right here. So I can select that area and, based on the screenshots, roughly there, I see that's the first contentful paint. And then if I select later on in the screenshots, I can see that that's the largest contentful paint, which is our image. Yeah, it's nice that DevTools shows you exactly what element on the page is the largest contentful paint. Absolutely. I can't resist — I know we're going to talk about layout shifts next, but why not jump the gun a little bit? We actually have a layout shift showing up between first contentful paint and largest contentful paint. And I think, based on this, the reason is because we're going from no image to image, and it's pushing the content down. That's right. So I think we're seeing the page content move. So my guess is, if we were to go and find this image here in the Elements panel, we're going to see that it doesn't actually have — yeah, it doesn't have width and height attributes set. Yep. And I think that's basically causing this to happen. So we'll talk about layout shifts more in a second, but the reason this page is shifting is because we have an image here that, when it loads — and it loads asynchronously, essentially — it pushes the rest of the page content down. If we added width and height attributes to this image, we wouldn't see that layout shift. As I said, we'll come back to that in a moment. Yeah, that's a good general best practice, though, just to let everybody know: always put width and height attributes on your images. That way, the browser can allocate the space that it needs to render them before it has actually finished loading the image, so then you don't get that layout shift. Exactly. The other thing I think we should talk about before we move on is how to optimize for this particular situation.
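As a quick aside, the width-and-height fix just mentioned is a tiny change; a hypothetical example (the file name is made up, and the dimensions are the rendered size seen later in this trace):

```html
<!-- With explicit dimensions, the browser can reserve the space before the
     image bytes arrive, so later content doesn't get pushed down. -->
<img src="image-cdn-hero.jpg" width="801" height="414" alt="Hero image">
```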
So what would you suggest if somebody said, I need to get first contentful paint and largest contentful paint nearer the start? That it's taking too long to get to these numbers, these numbers are too high. Do you have a kind of go-to list of things you would say to them? Yeah, well, ideally, you don't want to ever block painting on more than one network request — that initial network request that you make to get the page content. You want to be able to paint at that point. If you have additional requests, like requests for fonts or style sheets or other things that are preventing the browser from painting, that will just delay the time when that paint can happen. And so sometimes, depending upon the design you're working with, you don't have a choice, but in an ideal world, you would want to be able to paint right away. And so it looks like in this case, on web.dev, we are able to paint pretty quickly, and that's why first paint is happening at the beginning. And then the browser is loading this image, and then largest contentful paint happens as soon as that image gets loaded in. Exactly. Yeah, I think what we're actually also seeing here is that app.css, which is the main style sheet, and the fonts as well — my guess is that they are going to be blocking. You can see that when I roll over them, the Network panel here is saying highest, which is the priority that's been assigned to the CSS. And the reason, I guess, is because the CSS is going to be blocking the render, which is what you were saying. So that's why I think some people would inline that. But I guess if we go ahead and take a quick look in our head element — we could search for it, but I'm going to look for link rel — there's the style sheet. Yeah, you see, there's a style sheet for the fonts and right below it app.css.
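One common way around a render-blocking stylesheet like this, sketched hypothetically here (this is not web.dev's actual markup): inline just the rules needed for the initial layout, and let the full stylesheet load without blocking the first paint.

```html
<head>
  <style>
    /* Critical CSS: only the rules needed to lay out the initial viewport */
    body { margin: 0; font-family: sans-serif; }
  </style>
  <!-- Load the full stylesheet without blocking render: fetch it as a
       preload, then switch it to a stylesheet once it has arrived. -->
  <link rel="preload" href="app.css" as="style" onload="this.rel='stylesheet'">
  <noscript><link rel="stylesheet" href="app.css"></noscript>
</head>
```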
And so this would be a classic case of: here's a style sheet, it's going to block render, because the browser — Chrome — is going to take a look at that and go, well, I need to wait and see what the styles are before I render anything. Absolutely, so that can be something that we can sometimes take a look at. Same with blocking JavaScript, right? We see that one sometimes gets in the way. And you sometimes hear this referred to as critical CSS, where you identify just the CSS that is needed to lay out the page, not necessarily style all the components on your entire site. And so you can inline just that CSS content in the head of your document, and so then you're not blocking on an additional network request in order to paint something on the page. Exactly, yeah, right. So that was FCP and LCP. As I say, you will find those on the Timings track here in DevTools. OK, so next up, layout shifts. Now, we talked about this very briefly just now with these two down here, but where does it come from? What's the history of the layout shift — and cumulative layout shift, I think I've also heard it called? Yes, so the metric named cumulative layout shift, or CLS for short, is a metric that tries to capture the experience of visual stability on a page. And everyone's probably had this experience where you go to a website and you go to tap on a button or something, and right before you tap on it, it shifts out from underneath you. It's a very frustrating experience. Even if you're not interacting with the page — you're just reading it — if some late-loading images pop in, some ads pop in, the content changes, any number of things can happen, and you lose your place as you're reading. And it's just not the greatest experience from the user's point of view. So cumulative layout shift is a metric that attempts to quantify that experience.
And so there's a couple of pieces there, but a layout shift is any time an element on the page, between one frame and the next frame, has its start position change. And so this will happen like in this case that we just saw: an image loads in and it pushes the text below it down. And so the layout shift was not on the image; the layout shift was on the text below the image, which on the previous frame had an X and Y position of something, and then on the next frame was pushed lower, and so its position changed. So it's a bit tough to explain, but CLS is a measure of both how much of the page content moved and also how far it moved. And so if the entire page content shifts from being fully visible on the page to not visible at all, that would be a CLS of one. If that happened 20 times throughout the page lifecycle, that would be a CLS of 20. And then if an element fills up half the screen and moves half of the screen distance, that would be roughly 0.25 CLS. You can go read more about how to calculate CLS over on web.dev — it's a little bit too complicated to explain now, but that gives you a sense. It's a measure of how much visual instability there is on the page. OK, so as we talked about before, we have this one layout shift here and so on. In fact, this is probably the better one of the two to actually demonstrate this. And when you click on this — it's in this Experience track, and if you don't get this Experience track in DevTools, it means that we didn't detect any layout shifts in that particular recording. If you do find that it's there, then you'll see that it's populated with these kinds of records. Now, you can click on this and it will take you off to the detailed information about CLS. But what we try and do is we try and give you a sense of the score and the cumulative score of what's going on. But we also try and highlight it for you.
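The scoring just described can be sketched with a couple of hypothetical helpers (the function names and signatures are ours, not a platform API): the per-shift score is the impact fraction times the distance fraction, and CLS sums the shifts that weren't triggered by recent user input.

```javascript
// Per-shift score: (fraction of the viewport touched by the moved content,
// the union of its before and after positions) times (distance moved, as a
// fraction of the viewport's larger dimension).
function layoutShiftScore(impactAreaPx, moveDistancePx, viewport) {
  const impactFraction = impactAreaPx / (viewport.width * viewport.height);
  const distanceFraction =
    moveDistancePx / Math.max(viewport.width, viewport.height);
  return impactFraction * distanceFraction;
}

// CLS (as defined at the time of this talk): the running sum of every
// layout-shift entry not triggered by recent user input.
function cumulativeLayoutShift(shiftEntries) {
  return shiftEntries
    .filter((entry) => !entry.hadRecentInput)
    .reduce((sum, entry) => sum + entry.value, 0);
}
```

For the example in the conversation: content covering half of a square viewport that moves half the viewport's dimension scores 0.5 × 0.5 = 0.25.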
So you're going from an image here that's 11 by 11 — we show it as this very small overlay on the left-hand side there — to a much bigger 801 by 414. So one of the items that I actually have to do in this area — and you can see actually we have a few going on here, which are probably other images that are being shifted as we make our way through. And let me just — I wanted to step back for a second and just talk about why somebody would do this. I mean, typically, you'll run Lighthouse on a page, or you'll go to Search Console's new Core Web Vitals report or the Chrome User Experience Report, and you'll see that you have layout shifting happening on your page. And you might be wondering to yourself, OK, but I don't see it when I visit my page. So where is this layout shifting happening? And so then DevTools is a great place to debug that: figure out which page on your site has layout shifting, then load it up in DevTools under the throttling conditions that Paul showed earlier, and then look and see what DevTools is telling you is shifting, because that's how you can figure out what's causing the layout shift, and then you know what you need to do to fix it. Yeah, and there's more I have to do here, to be clear. I think one of the things that is missing from this, which is actually available in the data — I just need to plumb it through — is which element we are talking about. I can show you that we've got these areas, but it does feel like we're missing a bit of information about exactly which element it is. Like we do with the LCP, where we highlight the image that we're actually referring to, we should be able to do the same here. So by the time this goes out and you're watching this, give it a try in Chrome Canary, because I might have been able to land a feature by then. I'm not making any promises, but that would be good, wouldn't it? And just as a quick point, there's often two pieces to a layout shift.
There's the element that shifted, and then there's the element that caused it to shift. And so sometimes figuring out one or the other can be helpful in fixing it, because it looks like here that it's showing the image that came in. But adding elements to the DOM doesn't in itself cause the layout shift; if adding an element to the DOM moves the elements below it, then that would cause a layout shift. Right, because the default size of this image looks to be 11 by 11 pixels to begin with, and then when it gets populated with the actual pixel data, it pushes down the rest of the page content, which I guess explains the layout shift there. Yeah. OK. So that's that, you know, and, like we said earlier, if you put width and height on these things, that will help. But you can also have — I mean, let me show you this other one. Even on the Google homepage, there's this privacy reminder down here. If I take a recording here and I just refresh this page, we're going to see a layout shift here, and similarly we've got this here, which is going from down here — and I presume there's some JavaScript or something like that that's looking to see whether the privacy reminder has been seen, and if not, it pushes that content up. And so again, this is probably JavaScript-based, and you're going to know in your own apps what's going on: is it third-party content, is it your own JavaScript, is it your own styles? And it's a case of digging into the specifics of your application to try and figure out exactly what's triggering that, what could be happening there, in order to figure it out. So that's just a couple of examples of the layout shifting that you could see. Right. Well, one thing to keep in mind is that in an ideal world, you would have no layout shifts on your page, but sometimes it's unavoidable. And so the threshold that we recommend folks stay below is 0.1. And so it looks here that this layout shift is quite a bit below that.
And so even though you still want to be at zero if you can, as long as you're below 0.1 for 75% of your users, you're usually in good shape. So you say 0.1 — I guess that's for page load, because that's where a lot of these metrics are aimed right now, right? Yeah, so that's actually a really good point. I'm glad you brought it up. CLS measures layout shifts that happen during the entire life cycle of the page, from when you load the page until when you unload the page. Even if you leave the page open for days or weeks, it does measure that entire time. Whereas here in DevTools, you ran a trace and you saw the layout shift that happened during that trace. And so in this particular case, CLS was only measuring layout shifts for a small period of time. It's important that developers keep that in mind, because the actual metric definition is for the entire lifespan of the page. So if you run a Lighthouse trace or a WebPageTest trace, or even in DevTools, and you see a certain value and it's below 0.1 — the threshold I just mentioned — just keep in mind that you have to actually be measuring it the entire time. The measurement that counts covers the entire life cycle of the page. Also, I think in this area, we should talk about perhaps the metrics themselves as a bit of an evolving area. I mean, we have, for example, first meaningful paint up here, but this isn't one of the metrics that we would mention in something like Core Web Vitals. And there's also no metric, as far as I'm aware, for something like animation performance. So I guess my question to you is: what's going on there? Why have we got a metric here that we wouldn't refer to? And why do we not yet have a metric for something that we might be interested in tracking? What's the history and story there? Yeah, that's a good question.
So FMP, or first meaningful paint — if you remember from a previous trace that you did, Paul, FMP was right next to FCP, and then LCP was later in the page load. And it looks like that's the case here too. FMP is essentially a different metric; it has a different meaning than LCP. And after a bunch of research, we found out that FMP actually wasn't as accurate at predicting when what most people would consider to be the most important content of the page — the most meaningful part of the page; the metric itself has the word meaningful in the name — had painted. It turns out that LCP is actually a better predictor. And so as we come up with metrics that are better at capturing the user experience, we'll deprecate older metrics and replace them with newer metrics. But we do recognize that that's happened a bunch over the years, and I'm sure developers are getting tired of hearing new metrics announced all the time. And so one of the things that we did with the Web Vitals initiative, and specifically with Core Web Vitals, is we're committing to only introducing metrics at most once a year for the core set of Web Vitals. And so if developers are following along, that gives them a little bit of stability if they're building a business on these metrics, or predictability if they just don't want to have to always be following along with the latest. And so recently we announced that LCP is one of the Core Web Vitals, and FMP is not one of the Core Web Vitals, and over time it will probably be deprecated. So you also asked about animation performance. This is definitely a metric that we're looking at for the future, maybe in 2021 or 2022. So we know that the set of Core Web Vitals doesn't capture the entire story of user experience.
And we're hoping that over time we can improve it, and animation performance is definitely an area of performance that we're exploring. I think the last one that we should talk about — if I get that right; I think I did — was first input delay, which is not directly shown in DevTools. So what is that? It's sometimes called FID, right? What is it, and why isn't it shown? Yeah, so first input delay, or FID for short, represents the time from when the user interacts with the page — taps on the screen or presses a keyboard key — to the point when the browser is able to respond to that input event. You might think that it's always going to be instantaneous: you click on the screen and then something will happen. But as users, we know that that's not the case. We've all had the experience of clicking on something or tapping on something and not having an instant response. And this can happen if there's a bunch of JavaScript running on the page. Maybe you have a large JavaScript file that the browser is currently parsing and executing. And so if at that exact time a user taps on the screen, then the browser has to wait a little bit of time before it can respond to that input event. And so FID quantifies that duration of time. And you mentioned that it's not exposed directly in DevTools, and the reason — I'm assuming, since you're the one who helped implement this — is because first input delay requires an input. It requires a user. And so in many lab scenarios, there is no user, and so you can't always measure first input delay that way. But we have another metric called total blocking time that quantifies just how — We do have, yeah. That's great. And that quantifies how much time the main thread is blocked.
And a blocked main thread, as I just mentioned, contributes to the likelihood that a user will interact with the page but the browser won't be able to respond right away. So you said that total blocking time is in DevTools. Can you show me where that is? Yes. Maybe there at the bottom of the screen. I have long tasks over here. And yeah, it is down there, and it currently says it's unavailable. I'll talk about that more in a little bit — I've been working on that feature, in fact, today, so I can tell you a little bit more about what's going on there too. So what I'll do is: I've come to web.dev and I've cleared it, and I'm just going to hit record and I'm going to hit refresh. And I don't expect here that I'm going to see any particular blocking time, because I've got a fast machine and I'm on a good connection. And yeah, you can see right down at the bottom here, we have total blocking time, and it's currently set to zero milliseconds. So what that roughly translates to over here is, when we zoom in on these top-level tasks, which are on the main thread, we have no task that goes over 50 milliseconds. So 50 milliseconds is our threshold for: hey, this task is long and it's going to contribute to the blocking time. Because what we want to do is keep track of tasks that go over 50 milliseconds, because they're the ones that are most likely, were the user to interact, to prevent the browser from being able to respond in an adequate amount of time. So we currently have no such tasks. So blocking time is defined as any time greater than 50 milliseconds in a task. So if a task is 49 milliseconds, there's zero blocking time, and if a task is 51 milliseconds, there's one millisecond of blocking time. And just out of curiosity, some people ask: why 50 milliseconds? What's the thinking behind that?
And so the answer is — you might have heard of RAIL, the RAIL performance model, and you've oftentimes heard people say you should always respond within 100 milliseconds of user input. So the question is: why is 50 milliseconds the blocking-time threshold? And the idea there is that if you keep all of your tasks below 50 milliseconds, then there's never a situation where the task that's already running and your response to the user's input can't both fit within that 100 millisecond budget. So that's, if people are wondering, why that 50 millisecond time exists and why we chose that for the magic number with total blocking time. Exactly. And of course, if you were doing an animation, then your task time really should be under 10 or 12 milliseconds. So you've got to be context-aware. The 50 millisecond number is a great number to have in mind, especially for load performance, but it does change depending on the context and whether you're, say, animating or not. Now, as I said, we have no tasks here that are running long. And, I mean, if I got a trace like this from somebody, I would be very happy. I wouldn't complain about this at all. But what I can do is at least simulate a slower device, like I did before, in my capture settings. I'm going to go to a six-times slowdown, and I'm expecting that this 25 milliseconds here is going to run long. So this is some JavaScript that's being evaluated. So I'm going six times slowdown, I'm going to hit record, and I'm going to refresh again. OK, I'm going to do two things. I'm going to stop the recording a little bit earlier than I did last time. But the first thing to notice here is our tasks are now longer because of the slowdown. And if I zoom in on this task, it's 176.55 milliseconds, and it's qualified as a long task, with 126.55 milliseconds of blocking time.
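The blocking-time arithmetic here is simple enough to sketch directly (`totalBlockingTime` is a hypothetical helper of ours, not a DevTools API): each task contributes whatever portion of its duration exceeds 50 milliseconds.

```javascript
// Sum the time-over-threshold of each main-thread task: a 49 ms task
// contributes 0, a 51 ms task contributes 1 ms, and a 176.55 ms task
// contributes 126.55 ms.
function totalBlockingTime(taskDurationsMs, thresholdMs = 50) {
  return taskDurationsMs.reduce(
    (sum, duration) => sum + Math.max(0, duration - thresholdMs),
    0
  );
}
```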
OK, so what we do is, after the 50 millisecond point on this task, we do this candy striping here, and we also pop a red triangle up into the top right-hand corner, so that when you're looking at a glance, zoomed out, you get a sense of just how many of your tasks are running a bit long. And I think almost universally here, the ones that are running long are JavaScript-based. So if you, again, are looking at the Chrome User Experience Report or Search Console's Core Web Vitals report and you see that you have a first input delay that's higher than you would have expected for a certain page, I think this is a great example of how you would go about debugging that. So you might be on your fast MacBook Pro laptop or something and not see any long tasks. But if you go into DevTools and you throttle the CPU, and then you start seeing a bunch of long tasks like we see here, then that would help explain why — because if a user tried to interact with the page during one of these long tasks, the browser would not be able to respond. It would have to wait until the task completed before it could run those event handlers. Yes. So, Paul, I'm seeing it say unavailable there at the bottom in DevTools. What does that mean? Yeah, so sometimes we do say unavailable. The reason is we wait for Blink to tell us when it's happy for us to declare the page interactive, and at that point it tells us how much blocking time it measured. And so sometimes, if the trace isn't long enough, we don't actually get that information. So what I've been working on recently is adding in an estimate, which is essentially counting up the amount of candy striping that we're getting in those top-level records, so that we can at least give you an estimate even if Blink hasn't given us the official answer. So hopefully you should see that in Chrome Canary soon. Yeah, that makes sense.
Well, because the definition of total blocking time is technically the amount of blocking time between first contentful paint and time to interactive. And so it makes sense that DevTools would wait until the browser is interactive. But yeah, that does seem like a good feature, to give an unofficial total when it's not interactive yet. Yeah, exactly. So now we've talked about FCP, LCP, layout shifts, long tasks, and FID. If I was a developer who wanted to know more about these things, as well as playing with them in DevTools, where would I go to get more information? That's a great question. You can go to web.dev/vitals, and that will have all the information about the definitions of the metrics, links to guides on how to optimize for them, links to more information about all the tools that support them, and everything like that. So definitely the best place to go is web.dev/vitals.
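For anyone who wants to check these numbers in the field, first input delay can be observed directly as well. This is a hedged sketch: the helper name `firstInputDelay` is ours, and the observer half only runs in a browser.

```javascript
// FID is the gap between when the input happened and when the browser
// could start running its event handlers for it.
function firstInputDelay(entry) {
  return entry.processingStart - entry.startTime;
}

// Browser-only: report the delay of the page's first input (skipped
// outside a browser).
if (typeof window !== 'undefined' && 'PerformanceObserver' in window) {
  new PerformanceObserver((list) => {
    for (const entry of list.getEntries()) {
      console.log('FID:', firstInputDelay(entry), 'ms');
    }
  }).observe({ type: 'first-input', buffered: true });
}
```

In practice, Google's open-source web-vitals JavaScript library wraps observers like these for all of the Core Web Vitals, so you don't have to hand-roll them.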