Hi, I'm Annie. I'm a software engineer on the Chrome web platform team, working on the Core Web Vitals metrics. And I'm Elizabeth Sweeney. I'm a product manager on the Chrome web platform team, working on tooling for web developers. We're here to talk to you about what's new in Web Vitals. We'll start with a quick intro around the what and the why of Web Vitals. Web Vitals are metrics for web pages with a focus on user experience. They measure problems that frustrate users, like poor performance or content shifting around. Core Web Vitals are the subset of Web Vitals we consider most important to focus on. They apply to all web pages, so they're surfaced in all our tooling. They're measurable in the field, so you can get a ground truth of what your users are seeing as they interact with your page. And they each focus on a critical user-centric outcome. We try to keep the Core Web Vitals to a small set of metrics to make it easiest to focus on the most important things. Why? We want to make it as easy as possible for developers to improve the user experience of their sites, because we care a lot about the users of the web and we want to make the web the best place possible for them. Now that we have a little background on the Core Web Vitals, let's talk about updates we're making to our tools to help you improve them. We'll go through the metrics one at a time, but first I'll hand it over to Elizabeth for some details on how Core Web Vitals are measured. Before we dive in depth, I'd love to take a few moments to review how we can get the most useful insights about user experience metrics. We have two amazing resources to pull from to learn about how users are experiencing your sites and how to improve them. Those two resources are lab and field data. Lab data, which is synthetically collected in a testing environment, is critical for tracking down bugs and diagnosing issues because it is reproducible and has an immediate feedback loop.
Field data allows you to understand what real-world users are experiencing, under conditions that are impossible to simulate in the lab. Either set of metrics taken in isolation isn't nearly as powerful as when they're combined. Let's take an example to see the different uses that lab and field data have. So here we have real-world data for a page's Largest Contentful Paint, LCP. Because this is field data, it's recorded from real users on their real devices. Every time one of your users loads your page, it adds a single data point to this set. Because of this, a single field metric represents all of your users: thousands of data points, variable cache conditions, network and device environments. Real-world data presents you with all sorts of variables and unknowns. When you're trying to optimize based on data that represents so many different conditions, it's difficult to know where to start. This is why synthetic, or lab, testing is so useful. When you run Lighthouse on your page and get an LCP value, it is a single data point collected in real time for you, calibrated to represent a user in your upper percentiles. What this allows you to do is use a single set of values as representative of your users' experience on your page, so that you can dive deep and debug against that. In other words, if you're optimizing against your Lighthouse performance score, because it is calibrated to be representative of your upper percentiles, you are optimizing for the majority of visitors to your page. The Lighthouse performance score is a tool to prepare yourself to succeed with users in the real world, in dimensions of quality they care about. The closer you are to that 100 score, the less you're leaving up to chance for what can go wrong in the field. Let's look at how you might use PageSpeed Insights to get a sense of your performance in the lab and in the field. We start at the top with your Lighthouse score in the big score gauge.
This is a weighted, blended combination of user-centric metrics collected in a lab setting. It includes all six of the metrics that you see detailed in the lab section further below in the report, including Web Vitals. The goal of this high-level performance score is to make sure that you have the ability to quickly assess, at a glance, how well your page is likely to deliver a good experience in real-world conditions with your users. Just below the score gauge, we now have field data. Whereas the score above is sourced from Lighthouse and is collected in the lab, this data is sourced from the Chrome User Experience Report, or CrUX. For the URL-level data that you see here, you're able to see how many page loads over the previous month offered users who visited the URL a good experience, or one that needed improvement. Also worth noting is that you can see Core Web Vitals marked by the blue ribbon. Below the URL-level data, origin data is also often available to show you the same insights, but across the entire origin as opposed to just the URL. Okay, now we're back to Lighthouse, with data sourced from the lab. Here you see your lab metrics in detail, really good stuff, plus the opportunities and diagnostics that help give you actionable suggestions on where you can improve. We're working right now to reorganize the report to make everything as clear as possible, but until then, remember that it's okay if the numbers between lab and field don't match. They are giving you different information: one is useful to debug, the other is useful to validate how your users are experiencing your site. Okay, refresher complete. Let's get back to learning more about the metrics themselves. Annie, I'll pass things back to you. Let's go through the Core Web Vitals metrics one by one. The first metric we want to highlight is Largest Contentful Paint, or LCP for short. It measures when the largest item on a page is painted to the screen.
In this example, it will measure when the blue square is painted. We really like LCP as a metric for page load time because it's a good proxy for how long it took until the user could see the main content of the page. It's important to remember to look at LCP in the field. A user with a really great device and network conditions may have the main content loaded in a few hundred milliseconds, while a user with less great conditions could be waiting several seconds. So be sure to understand what the field data of your own site looks like when working to optimize LCP. As you work on improving LCP, if you have feedback for us, we're always happy to hear it. We've already made improvements to the metric to better handle background images and carousels based on developer feedback. Our email address for feedback is webvitalsfeedback at googlegroups.com. And optimizing Largest Contentful Paint can have a big impact for your users: when NDTV reduced their LCP by 55%, they cut their bounce rate in half. Make sure to watch the Business Impact of Core Web Vitals talk for more great success stories. I'll hand it over to Elizabeth to talk about improvements we're making to our tools to help you reduce LCP on your own sites. One thing that we know can be very frustrating when trying to improve LCP is isolating which element to optimize. With element screenshots in Lighthouse, a full-page screenshot is taken, so viewing DOM elements and their details has never been easier. Once you've found the right element, what do you do with it? Well, we hope to have you covered there too. With Lighthouse's audits for Largest Contentful Paint, you can understand which opportunities apply to your page to reduce server response times, limit render-blocking JavaScript and CSS, and improve resource loading times. Now that you know when your main content is loaded, how do you know that your page elements aren't moving around?
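To make that point about field LCP concrete: because field data is a whole distribution of page loads, it's usually summarized at a high percentile (CrUX, for instance, reports the 75th). Here's a minimal sketch of that aggregation, using made-up sample values in milliseconds and the published LCP thresholds (2.5 s for "good", 4 s for "poor"); the function and data are illustrative, not part of any tool:

```javascript
// Sketch: summarize field LCP samples at the 75th percentile.
// Sample values below are made up for illustration.
function percentile(samples, p) {
  const sorted = [...samples].sort((a, b) => a - b);
  // Nearest-rank method: index of the p-th percentile in the sorted list.
  const idx = Math.ceil((p / 100) * sorted.length) - 1;
  return sorted[Math.max(0, idx)];
}

// Hypothetical field LCP samples (ms) from many page loads.
const lcpSamples = [900, 1200, 1500, 1800, 2100, 2600, 3200, 4800];
const p75 = percentile(lcpSamples, 75);

// Rate the 75th percentile against the published LCP thresholds.
const rating =
  p75 <= 2500 ? 'good' : p75 <= 4000 ? 'needs improvement' : 'poor';
console.log(p75, rating);
```

The takeaway is that a handful of fast loads doesn't make a page "good" in the field; the metric only passes when most of the distribution does.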
You know your page elements aren't moving around with Cumulative Layout Shift, or CLS for short. CLS measures how much and how often content unexpectedly shifts around on the page. This can be a really frustrating experience for users. CLS is measured throughout the whole lifetime of the page. As you can see, the user on the left loaded the page and navigated away quickly, and they had a good score. But the user on the right scrolled through the whole article, and images popped in as they were scrolling, making it hard to keep their place. This is a very common issue we see with CLS, and one that's very frustrating for users. So make sure to check on your field CLS to ensure your users aren't having problems after the page finishes loading. As we've listened to developer feedback on CLS, one thing we found is that there are some cases where the page can be open for a very long time and the score can increase too much. So we're working on capping it by using a window which captures the most frustrating burst of layout shifts on the page. The great news on CLS is that improving your score can have a big impact for your users: when Yahoo! Japan reduced their CLS, users started spending more time per session and loading more pages per session. Elizabeth, can you tell us about the new tooling for CLS? To help get you started with diagnosing and optimizing CLS issues, Lighthouse has a set of audits to help. Within a Lighthouse report, you can find advice to help avoid shift-triggering events on your page. Simple things like setting explicit widths and heights on your image elements can go a long way. Okay, Annie, so now we have our content loaded and we know it isn't moving around too much. What's next? Making sure the page is responsive. We measure that with First Input Delay, or FID for short. FID measures the time between the user tapping or pressing a key and the browser being able to process that input.
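Before going further on responsiveness, the windowed CLS capping Annie described can be sketched roughly like this. It's an illustrative sketch only, assuming example session-window parameters (shifts are grouped into a session when they land less than one second apart, and a window is capped at five seconds); the input records mimic `layout-shift` entries with a time and a unitless shift value:

```javascript
// Sketch of windowed CLS: group layout shifts into "session" windows
// (gap under 1 s between shifts, window capped at 5 s) and report the
// worst window's total. Parameters and inputs are illustrative.
function windowedCLS(shifts) {
  // shifts: [{ startTime: ms, value: shift score }], sorted by time.
  let best = 0;
  let sum = 0;
  let windowStart = 0;
  let prevTime = -Infinity;
  for (const s of shifts) {
    const gapTooBig = s.startTime - prevTime >= 1000;
    const windowTooLong = s.startTime - windowStart >= 5000;
    if (gapTooBig || windowTooLong) {
      // Start a new session window.
      sum = 0;
      windowStart = s.startTime;
    }
    sum += s.value;
    prevTime = s.startTime;
    best = Math.max(best, sum);
  }
  return best;
}

// Two bursts: a small one during load, a worse one while scrolling later.
const shifts = [
  { startTime: 500, value: 0.02 },
  { startTime: 900, value: 0.03 },
  { startTime: 30000, value: 0.1 },
  { startTime: 30400, value: 0.15 },
];
console.log(windowedCLS(shifts)); // worst burst is roughly 0.25
```

The point of the windowing is exactly the long-lived-page case from the example: shifts that happen minutes apart no longer accumulate into one ever-growing score, but a dense burst of shifts still gets fully counted.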
Long-running JavaScript on the main thread is usually the biggest problem that makes this number go up and slows down the page for your users. Let's look at an example. In this example, the user is trying to tap on a menu, but the main thread is blocked by JavaScript running on the page. So the browser has to wait until all the JavaScript is finished, and that makes the menu take a really long time to open. Going back to lab versus field, it's important to remember that for real users, the first input can occur at many different times during page load. Since we want the page to be responsive no matter when or where the user clicks, we recommend reducing long-running JavaScript on the main thread so that it's never busy when the user tries to click. That's the reason we've added a Total Blocking Time, or TBT, metric in the lab to help you understand and reduce this main-thread blocking time. Back to you, Elizabeth, to hear about how the tools can help. To set your users up to have quick and seamless interactivity on your pages, check out the Lighthouse report to find opportunities to optimize your main-thread execution times and potentially defer or remove portions of code entirely. One of the exciting success stories we've seen is how our partners Alondo used Lighthouse CI to prevent regressions to their Core Web Vitals. They were facing a few challenges. Among them were deciding which metrics were priorities to benchmark against, and devising a system by which they could lock in their optimizations and not regress. After deciding to prioritize user-centric metrics like LCP, CLS, and FID, they used Lighthouse to dramatically reduce their load and interactivity timings. Then they integrated Lighthouse CI, which served as a strict non-regression mechanism to ensure the investments they'd made to improve quality for their users didn't regress. Something I'm particularly excited about sharing is a new feature that we've been working on.
Now, with the Lighthouse treemap, you're able to easily understand bundle composition and explore opportunities for improvement based on both resource sizes and coverage. This is an important part of our efforts to make attribution of impact, whether it be first- or third-party, easy and accurate. Core Web Vitals are now in all of your favorite developer tools, and there are more than what's listed here, including a new web-vitals library and a bunch of ecosystem tools that have already adopted them. You're able to measure your Core Web Vitals for a specific page, for your origin, locally in the lab, and from real users in the field. Remember that First Input Delay is only measurable in the field, but you can use Total Blocking Time, TBT, as a proxy lab metric that allows you to debug and improve your interactivity in the lab before your users ever have to experience a bad FID. The next obvious question is: this is all great, but where do I start? What tool should I use? As shared before, a great place to start is PageSpeed Insights, because it gives you both your lab and field data perspectives. I wanted to be sure to mention something that I'm excited about that's being developed in Lighthouse right now. We are working on an opportunities filter for the report. This will allow you to filter opportunities to improve specific metrics. So say you want to optimize your CLS: with this feature, you will be able to zero in on just the opportunities that will impact that metric. There is also a dedicated lane in the Performance panel in Chrome DevTools that allows you to isolate exactly when critical Web Vitals timings are being measured. Seeing these timings in the context of your overall waterfall can be critical to diagnosing a wide variety of factors that can be affecting your performance.
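Since TBT keeps coming up as the lab proxy for FID, it helps to know that its definition boils down to simple arithmetic: for each long main-thread task, only the time beyond a 50 ms budget counts as blocking. Here's a minimal sketch with made-up task durations; a full implementation would additionally restrict itself to tasks between First Contentful Paint and Time to Interactive:

```javascript
// Sketch: Total Blocking Time sums the portion of each main-thread task
// that exceeds the 50 ms budget. Task durations below are made up.
function totalBlockingTime(taskDurationsMs) {
  return taskDurationsMs.reduce(
    (tbt, duration) => tbt + Math.max(0, duration - 50),
    0
  );
}

// Three long tasks and one short one; only time past 50 ms counts.
console.log(totalBlockingTime([250, 90, 35, 120])); // 200 + 40 + 0 + 70 = 310
```

This is why breaking one 250 ms task into five 50 ms chunks helps so much: the total work is the same, but the blocking time drops to zero, and an input arriving mid-load waits far less.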
Total Blocking Time is also clearly visible in the footer, so that you have a signal as to what your interactivity will be for users, and layout shift events can be teased apart and analyzed in the Experience section. As always, it is critical to be leveraging field data to understand how your real users are experiencing your sites. The Chrome User Experience Report is powered by real-user measurement of key user experience metrics across the public web, aggregated from users who have opted in. Using CrUX allows you to further isolate what needs your attention most with regard to how you can improve your site experience for users. Okay, we've gone over a lot in a short amount of time. What do you think, Annie? Are we missing anything? I could talk about Core Web Vitals forever, but those are the basics. We're really excited for you to try out the tools and let us know what you think. To review, here's a list of the Core Web Vitals metrics. Largest Contentful Paint measures how quickly the main content loaded, and you can optimize it by speeding up response times and addressing render-blocking resources. First Input Delay, and its lab proxy Total Blocking Time, measure how quickly a page responds to user interactions, and you can optimize them by speeding up main-thread JavaScript. Cumulative Layout Shift measures content shifting around on the page, and our tools help you identify antipatterns to avoid. On the right, you'll see the email address we have for feedback. We'd love to hear your thoughts on the current set of Core Web Vitals, or the user experience metrics you'd like to see us work on in the future. Elizabeth, how can people learn more? To learn more about Core Web Vitals and the tooling to help you optimize them, please visit the resources we've included here, and stay tuned for a bunch more amazing talks happening today and tomorrow. Thank you so much for being with us today. Thanks, everyone. We're really looking forward to your feedback as we continue making Web Vitals the best they can be.