Hello everybody and welcome to the state of speed tooling at Chrome Dev Summit 2020. My name is Elizabeth Sweeney and I'm a product manager on Chrome's web platform team. And I'm Paul Irish. Today we're going to talk to you about some of the latest in Lighthouse scoring, third-party audits, the Chrome User Experience Report, and Core Web Vitals actionability. Elizabeth is going to get us started.

So today we'd like to start off by sharing a few things about Lighthouse's performance score, as well as some potential updates coming next year. The goal of the Lighthouse performance score is to make sure that you have the ability to gauge how well your page is likely to deliver a good experience to your users in real-world conditions. To understand the goals of the Lighthouse score, let's take a brief moment to remind ourselves why it exists in the first place.

Here we have real-world data for a page's first contentful paint, or FCP. Because this is field data, it is recorded from real users on their real devices. Every time one of your users loads your page, it adds a single data point to this set, so a single field metric represents all of your users: thousands of data points across variable cache conditions, networks, and device environments. Real-world data presents you with all sorts of variables and unknowns, and when you're trying to optimize based on data that represents so many different conditions, it's difficult to know where to start.

This is why synthetic, or lab, testing is so useful. When you run Lighthouse on your page and get an FCP value, it is a single data point, collected in real time for you, calibrated to represent a user in your upper percentiles. This allows you to use a single set of values as representative of your users' experience on your page, so that you can dive deep and debug against it.
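To make the idea of summarizing field data concrete, here is a minimal sketch (with entirely made-up sample values and an illustrative `percentile` helper of my own) of how a single summary value, like a 75th-percentile FCP, is derived from many real-user data points:

```javascript
// Illustrative only: compute a percentile over field samples of a metric.
// In practice, tools like the CrUX dashboard do this aggregation for you.
function percentile(samples, p) {
  const sorted = [...samples].sort((a, b) => a - b);
  // Nearest-rank method: index of the p-th percentile in the sorted list.
  const index = Math.ceil((p / 100) * sorted.length) - 1;
  return sorted[Math.max(0, index)];
}

// Hypothetical FCP samples in milliseconds from many real-user page loads.
const fcpSamples = [900, 1200, 1400, 1700, 2100, 2600, 3400, 5200];

// One number standing in for the whole distribution: 75% of loads
// painted at least this fast.
console.log(percentile(fcpSamples, 75)); // 2600
```

A lab tool like Lighthouse effectively hands you a single data point calibrated toward those upper percentiles, so you can debug against one representative number instead of thousands.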
In other words, because the Lighthouse performance score is calibrated to be representative of your upper percentiles, optimizing against it means optimizing for the majority of visitors to your page. The Lighthouse performance score is a tool to prepare yourself to succeed with users in the real world, in the dimensions of quality they care most about. That's basically why we have it: the closer you are to that 100 score, the less you're leaving up to chance for what can go wrong in the field.

Okay, so we quickly reminded ourselves of why we need it, but what is in it? The Lighthouse score is a weighted, blended combination of the user-centric metrics that you see in the report. It can be viewed as a recipe with all of the important ingredients for a good user experience. Those ingredients include loading performance, which is measured by metrics like first contentful paint, speed index, and largest contentful paint; one of the keys to a good experience on the web is to be able to see content, and to see it quickly. Interactivity is another key ingredient: metrics like time to interactive and total blocking time allow you to measure how quickly your page is going to be able to respond to user input. Another primary ingredient to make your users happy is the stability of your content, measured by metrics like cumulative layout shift. It's never any fun to have things jumping around on you.

So we have all of these ingredients and metrics to measure them, but we often get asked a very good question: how do Core Web Vitals fit into the Lighthouse score? Well, they're right there. Core Web Vitals represent the table stakes of any good experience, which is why we have them included in our scoring recipe. Not only that, but there's been a lot of work done to make Core Web Vitals more actionable in the Lighthouse report, and Paul will be speaking about that a little bit later.
Just a brief reminder that first input delay, which requires a real-world user to measure, can be optimized by using the lab proxy metric, total blocking time. TBT will help give you a sense of how responsive your page is going to be when you have a real user engaging with it.

This is the current Lighthouse performance score, and as you can see, the various metrics are weighted differently based on what we have found to be the most important for a good user experience. Core Web Vitals, with one exception, are the most heavily weighted metrics in the Lighthouse performance score. So when you're optimizing against the Lighthouse score, you're setting yourself up for success with Core Web Vitals in the field. Now, that one exception is the weighting of CLS, which is weighted less than the other metrics. When Lighthouse 6.0 came out, CLS was still a new metric, and we wanted to make sure that we had time to receive feedback from the ecosystem before we weighted it more heavily. Now that it has had time to mature, we want to adjust the weighting to make sure that we're aligned with Core Web Vitals. We are still calibrating our scoring curves and analyzing thresholds, so we don't have specific figures for you today, but an increased weighting of CLS is one of the primary changes you can expect in our next scoring update in Q2 of 2021. We'll add a link at the end of our slides where you can stay up to date with the latest changes, but we also encourage you to check out the Lighthouse scoring calculator, where you can explore the details of your current scoring composition.

Okay, switching gears a little bit to third-party audits. We know that a big part of web experiences is delivered using third-party code, and developers don't have as much transparency or control over the performance impacts as is ideal. Third-party services can deliver a lot of value, but they can also come with performance costs.
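To illustrate the weighted-blend idea behind the performance score, here is a sketch. The weights shown are Lighthouse 6's published weights at the time; they will change with the scoring update (notably, CLS will be weighted more heavily). Each metric's individual 0–1 score is assumed to have already been derived from its scoring curve:

```javascript
// Sketch of the weighted blend behind the Lighthouse performance score.
// Weights are Lighthouse 6's; future versions will adjust them.
const WEIGHTS = {
  firstContentfulPaint: 0.15,
  speedIndex: 0.15,
  largestContentfulPaint: 0.25,
  timeToInteractive: 0.15,
  totalBlockingTime: 0.25,
  cumulativeLayoutShift: 0.05,
};

// metricScores maps each metric name to a 0-1 score already computed
// from that metric's scoring curve.
function performanceScore(metricScores) {
  let total = 0;
  for (const [metric, weight] of Object.entries(WEIGHTS)) {
    total += weight * metricScores[metric];
  }
  return Math.round(total * 100); // reported as 0-100
}

// A page that does well everywhere except total blocking time:
console.log(performanceScore({
  firstContentfulPaint: 0.95,
  speedIndex: 0.9,
  largestContentfulPaint: 0.85,
  timeToInteractive: 0.7,
  totalBlockingTime: 0.4,
  cumulativeLayoutShift: 1.0,
}));
```

The Lighthouse scoring calculator mentioned above lets you explore exactly this composition interactively, with the real scoring curves included.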
Our goal is to make those costs as transparent and attributable as possible, so that developers can make informed decisions and reason about trade-offs when choosing what to include in their sites and how to incorporate it. An example of the work we're doing to make performance impacts transparent is the Minimize Third-Party Usage audit. This audit is designed to help you break out what third-party code is impacting your performance, and by how much. As I mentioned a moment ago, the intention is to minimize the costs of third-party code on your users' experience. Another new audit we're shipping surfaces opportunities to lazy load third-party code, and for that I'm going to pass it to Paul to share more.

Thanks, Elizabeth. I want to take a moment and consider a YouTube embed. When a YouTube embed on your page loads, it's an iframe that loads in along with its scripts and other resources, and to be honest, the amount of resources loaded in is a little heavier than you'd expect. An alternative to loading in that full iframe and everything with it all at once at the beginning of the page load is to load in something that looks just like it, but is far more lightweight. It can look exactly the same, with the play button. Once the user engages with that play button, then we can load in the full-fat embed behind it. We call this pattern a facade, and we've been seeing it become a little more popular. It's a nice web-friendly technique. We've added a brand new audit to Lighthouse that captures opportunities where you can employ this pattern. Right now, the audit finds opportunities like video embeds and chat widgets, but if there's any facade that you'd like to see recommended, please go to the web.dev documentation to see how to submit it.

And now we have a few updates on the Chrome User Experience Report. At each of our events, we're happy to update you on the growth of the CrUX corpus.
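The facade pattern Paul describes can be sketched like this: show a cheap placeholder up front, and only create the heavy iframe after the user interacts. The helper names and markup here are illustrative; libraries such as lite-youtube-embed implement the pattern more completely:

```javascript
// Sketch of the facade pattern for a YouTube embed.
// buildEmbedIframeHTML is a hypothetical helper name, for illustration.
function buildEmbedIframeHTML(videoId) {
  // autoplay=1 so playback starts immediately once the real player swaps in,
  // since the user has already clicked play on the facade.
  return `<iframe
    src="https://www.youtube.com/embed/${videoId}?autoplay=1"
    allow="autoplay; encrypted-media"
    allowfullscreen></iframe>`;
}

function attachFacade(placeholderEl, videoId) {
  // The placeholder is just a thumbnail plus a play button, far lighter
  // than the full embed and its scripts. Only on click do we pay the cost.
  placeholderEl.addEventListener('click', () => {
    placeholderEl.innerHTML = buildEmbedIframeHTML(videoId);
  }, { once: true });
}
```

The new Lighthouse audit flags places where a pattern like this could replace an eagerly loaded embed or chat widget.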
And today, we are announcing that we have field data for 8 million origins. The CrUX data is available via BigQuery, the CrUX dashboard, and the new CrUX API that was launched in June of this year, 2020. You can also check out some things that have been added to the API; in fact, an effective connection type property was added to the payload. And if you're interested in this API, there's great documentation on how to make use of it.

Moving on to Core Web Vitals actionability. It's important to us that you're equipped with the tools to not only measure how you're doing with Core Web Vitals, but to actually improve them. So I want to take a look at this trio of metrics, and we're going to put together a kind of cheat sheet for how we can improve these things with the tooling that we have. For each of these metrics, I'm going to go through two key steps: we're going to diagnose what's going on, and then we're going to ameliorate it, make it better.

All right, so LCP. Let's diagnose it. The key question here is: okay, we had a largest contentful paint, but what was that paint? In Lighthouse, you can look at a specific audit that will tell you the DOM element associated with the largest contentful paint. You can see the same thing in DevTools: if you capture a trace and select the LCP item, you get the metadata about that DOM element and when it happened. Now let's make it better. Because this is a paint, we want it to happen earlier in the page load. This is mostly a matter of optimizing your network waterfall and your loading strategy for critical bytes. These are all the Lighthouse audits that help you out with that.

Moving on to TBT and first input delay. The key question here is: where are the long tasks? Each of these long tasks is contributing to our total blocking time and input delay.
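Going back to the CrUX API mentioned above: it is queried with a simple POST to its queryRecord method. Here is a sketch; the API key is a placeholder you would obtain from the Google Cloud console, and the `buildCruxRequest`/`fetchFieldData` helper names are mine:

```javascript
// Sketch of querying the CrUX API for field data on an origin.
const CRUX_API_KEY = 'YOUR_API_KEY'; // placeholder: bring your own key

function buildCruxRequest(origin, formFactor) {
  return {
    url: `https://chromeuserexperience.googleapis.com/v1/records:queryRecord?key=${CRUX_API_KEY}`,
    body: { origin, formFactor },
  };
}

// Usage sketch (needs a fetch implementation and a real key to run):
async function fetchFieldData(origin) {
  const { url, body } = buildCruxRequest(origin, 'PHONE');
  const response = await fetch(url, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(body),
  });
  // The response includes per-metric histograms and percentiles
  // drawn from real Chrome users.
  return response.json();
}
```

The official documentation covers the full request and response shapes, including the newer payload properties.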
In DevTools, if you record a trace and look at the top of the main thread, you'll see tasks, and if a task runs longer than 50 milliseconds, it's a long task and DevTools will flag it for you. I also want to point out something a little newer in DevTools: at the bottom of the pane you'll see the total blocking time metric, computed for you on the fly based on the trace that you're looking at. In Lighthouse, you can see the same kind of information about your longest tasks; in fact, we summarize the longest long tasks in descending order, so you can see how long they are and what URL they're associated with. If you have a lot of third parties on your page, you can look at the audit that Elizabeth mentioned before; for each third party, we also include its blocking time contribution.

Now, how do we make this better? Well, this is mostly a matter of optimizing our main thread. We have to take inventory of all the work that's happening, and then we want to break that work into chunks, spend less time doing it, defer some of it, and just straight up delete some of it, not do it at all. The seven audits here help with all of those things.

Next, cumulative layout shift. The key question is: okay, we had shifts, but what was it that shifted? In Lighthouse, you can see the DOM elements that shifted around, and for each of them, you can see its numeric CLS contribution. You can see a similar thing in DevTools: record a trace, look at a layout shift event, select it, and you can see what the shifts were and what rects moved around. Now, the next question is: why did it shift? We found the shifted elements, but those shifts were actually the side effects of the real culprits. This is a little bit trickier, but we've added some new things to Lighthouse to help out with that.
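Going back to the long-task accounting above: each main-thread task over 50 milliseconds contributes its excess beyond 50 ms to total blocking time. A minimal sketch of that arithmetic (the function names are mine, and this ignores the detail that Lighthouse only counts tasks between first contentful paint and time to interactive):

```javascript
// Each main-thread task over 50 ms is a "long task"; its blocking
// contribution is the portion beyond the 50 ms threshold.
const BLOCKING_THRESHOLD_MS = 50;

function blockingContribution(taskDurationMs) {
  return Math.max(0, taskDurationMs - BLOCKING_THRESHOLD_MS);
}

function totalBlockingTime(taskDurationsMs) {
  return taskDurationsMs.reduce((sum, d) => sum + blockingContribution(d), 0);
}

// In the browser, task durations could be fed in from a PerformanceObserver
// watching 'longtask' entries (the Long Tasks API):
// new PerformanceObserver((list) => {
//   for (const entry of list.getEntries()) {
//     console.log('long task:', entry.duration,
//                 'blocking:', blockingContribution(entry.duration));
//   }
// }).observe({ type: 'longtask', buffered: true });

console.log(totalBlockingTime([30, 70, 250])); // 0 + 20 + 200 = 220
```

This is why breaking one 250 ms task into chunks under 50 ms helps so much: the same total work can contribute nothing to TBT.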
In fact, we have four brand new audits that attack this problem from different directions. We have audits that look at whether you have animations that are not running on the compositor (which could be running a lot smoother and not affecting layout), whether you have image elements that do not have fixed dimensions, whether you have iframes that are being added and perhaps shifting things beneath them down, and whether you have any web fonts that are not loading in an optimal pattern.

Thanks, Paul, for that cheat sheet on Core Web Vitals and for making sure it's actually actionable. There is some other advice you can check out that will be shared in upcoming talks, so stay tuned for talks like Fixing Common Web Vitals Issues with Katie and Exploring the Future of Core Web Vitals with Annie and Michael. Yep. The links here capture some of the resources in this talk. And that's it for us. Thank you all very much. Thank you.