Hello again, everybody. For those of you who don't know me yet, my name is Elizabeth Sweeney, and I'm a product manager on the web platform team in Chrome. I'm excited to talk with you all today about the latest and greatest in our speed tooling. I'll be sharing some updates on how we think about measuring user experience, including metrics updates and our new Core Web Vitals initiative, as well as making sure that you're up to speed on all of the newest features, products, and updates to our developer tooling where speed measurement is concerned. So let's dive in.

I know we've heard it before, but it's worth reiterating why metrics change. Ultimately, it's because our understanding of how to best measure user experience evolves over time as we learn more and work through technical hurdles. We need to make sure that our metrics and tooling are updated to reflect the latest in our learnings. Fundamentally, we view it as mission critical to give you the most accurate and effective mechanisms by which to optimize your site's experience and help you achieve your goals. And that doesn't just mean for one of your users, or a few. We want to make sure that as many users as possible, regardless of what network they are on or what hardware they're using, are in the bucket of users that want to come back to your site again and again.

And that brings us to the impetus behind Core Web Vitals. We have long been espousing performance and user experience quality, because we believe that good site performance leads to better outcomes for users, businesses, developers, and for the web in general. The Core Web Vitals initiative aims to bring together a more cohesive picture of web performance so that there is a better shared understanding of what should be prioritized first.

Let's take a moment to review the metrics themselves. Largest Contentful Paint, LCP, is a measurement of perceived loading experience.
It marks the point during page load when the primary or largest content element has loaded and is visible to the user within the viewport. It's an important complement to First Contentful Paint, FCP, which only captures the very beginning of the loading experience. LCP provides a signal about how quickly a user is actually able to see the content of the page. To provide a good user experience, sites should strive to have Largest Contentful Paint occur within the first 2.5 seconds of the page starting to load. To ensure you're hitting this target for most of your users, a good threshold to measure is the 75th percentile of page loads, segmented across mobile and desktop devices.

First Input Delay, FID, measures the time from when a user first interacts with a page, so they're clicking on something, tapping a button, that kind of thing, to the time when the browser is actually able to respond to that interaction. To provide a good user experience for FID, sites should strive to have a First Input Delay of less than 100 milliseconds. To ensure you're hitting this target for most of your users, a good threshold to measure, again, is the 75th percentile of page loads.

Given that FID can only be measured in the field with real users, we want to make sure that you have a way to locally debug and optimize FID in the lab. That's where Total Blocking Time, TBT, comes in. TBT quantifies load responsiveness, measuring the total amount of time when the main thread is blocked long enough to prevent input responsiveness. More precisely, TBT sums the blocking portion of every long task that occurs between First Contentful Paint and Time to Interactive. So, in short, you should definitely make sure that you're leveraging the signals that you're getting from TBT in the lab to optimize for FID in the field.

Cumulative Layout Shift, CLS, is a measurement of visual stability. It quantifies how much a page's content visually shifts around.
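Quick aside before we dig deeper into CLS: the TBT definition above can be sketched in code. This is an illustrative sketch, not a real browser API; the task shape and function names are made up for clarity.

```typescript
// Illustrative sketch: TBT as the sum of the "blocking" portions
// (anything beyond 50 ms) of long tasks between FCP and TTI.
interface LongTask {
  startTime: number; // ms since navigation start (hypothetical shape)
  duration: number;  // ms
}

const BLOCKING_THRESHOLD_MS = 50;

function totalBlockingTime(
  tasks: LongTask[],
  fcp: number, // First Contentful Paint timestamp, ms
  tti: number  // Time to Interactive timestamp, ms
): number {
  return tasks
    // Only tasks that fall inside the FCP-to-TTI window count.
    .filter((t) => t.startTime >= fcp && t.startTime + t.duration <= tti)
    // Each task contributes only its time beyond the 50 ms threshold.
    .reduce(
      (sum, t) => sum + Math.max(0, t.duration - BLOCKING_THRESHOLD_MS),
      0
    );
}
```

So a 120 ms task contributes 70 ms of blocking time, while a 40 ms task contributes nothing, which is exactly why shortening long tasks moves TBT (and, in the field, FID) so effectively.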
A low CLS score is a signal to developers that their users aren't experiencing undue content shifts. A CLS score below 0.1 is considered good. CLS in a lab environment is measured through the end of page load, whereas in the field, you can measure CLS up to the first user interaction, or including all user input.

So that was a quick overview, but it's important to remember that our goal is to have the vast majority of our users served with fast, interactive, stable experiences. To that end, Core Web Vitals uses the 75th percentile value of all page views in the field to evaluate against these thresholds. In other words, if at least 75% of page views to a site meet the good threshold, then the site is classified as having good performance for that metric. And this applies to all three of the Core Web Vitals, LCP, FID, and CLS. The 75th percentile is used to evaluate all of them.

As I mentioned before, our ability to measure user experience quality is always improving. We expect to update Core Web Vitals on an annual basis and provide regular updates on future candidates, motivation, and implementation status. Looking ahead towards 2021, Core Web Vitals will be refreshed to ensure that they reflect the latest in our learnings, and this includes adjustments to the set of metrics as well as the thresholds.

Let's do a quick refresher on the value of combining both lab and field signals together to diagnose, optimize, and monitor your site's performance. Lab data, which is synthetically collected in a testing environment, is critical for tracking down bugs and diagnosing issues because it is reproducible and has an immediate feedback loop. Field data allows you to understand what real-world users are experiencing, conditions that are impossible to simulate in the lab. The real world's messy. I mean, there's permutations of devices, network configurations, cache conditions, the list is long.
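To make the 75th-percentile evaluation from a moment ago concrete, here's a small sketch. The thresholds match the "good" values covered earlier; the function names and sample data are illustrative, not from any shipped tool.

```typescript
// Illustrative sketch: classify a Core Web Vitals metric by checking
// whether its 75th-percentile value meets the "good" threshold.
function percentile(values: number[], p: number): number {
  const sorted = [...values].sort((a, b) => a - b);
  const idx = Math.ceil((p / 100) * sorted.length) - 1;
  return sorted[Math.max(0, idx)];
}

// "Good" thresholds from the talk: LCP 2.5 s, FID 100 ms, CLS 0.1.
const GOOD_THRESHOLDS = { lcp: 2500, fid: 100, cls: 0.1 };

function isGood(
  metric: keyof typeof GOOD_THRESHOLDS,
  pageViews: number[]
): boolean {
  // If at least 75% of page views meet the threshold, the p75 value
  // itself is within the threshold, and the site passes for that metric.
  return percentile(pageViews, 75) <= GOOD_THRESHOLDS[metric];
}
```

Note how one terrible outlier doesn't sink the classification, but a quarter of slow page views does: the p75 framing is exactly what keeps the focus on "most of your users."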
Neither set of metrics taken in isolation is nearly as powerful as the two combined. And that's why we try to provide you with ample coverage for both lab and field tools. We have the tools that focus on providing you with information about what real users are experiencing, field tools, such as the Chrome User Experience Report, Search Console, and the new Web Vitals extension. And then we have our lab tools as well, which provide you with mechanisms to see what needs improvement before a user ever even sees your page, along with a reproducible environment to debug and optimize. Those are tools like Chrome DevTools and Lighthouse.

PageSpeed Insights is a great place to start to get a pulse on your Core Web Vitals performance in both the field and in the lab, because it leverages CrUX and Lighthouse under the hood. Given that the Core Web Vitals initiative aims to help folks know what should be prioritized first, we wanted to make sure you had full support and tooling coverage for LCP, FID, and CLS. Core Web Vitals are now in all of your favorite developer tools, and there are more than what is even listed here. And that includes a new Web Vitals library and a bunch of ecosystem tools that have already adopted them. You're able to measure your Core Web Vitals for a specific page, for your origin, locally in the lab, and from real users in the field. And as I mentioned before, Total Blocking Time, TBT, is a proxy metric for FID that allows you to debug and improve your interactivity in the lab, which is why it's listed here in the FID column.

Before we go over all of the latest updates in each tool, I wanted to make sure that you had all of our tools mapped in a workflow for Core Web Vitals. Which tools do what? Where do I go first? As I said before, a good place to start to get a general pulse is PageSpeed Insights. But all of our tools have a really critical role to play.
Using Search Console allows you to see across your entire site and identify which types of pages need improvement. Then you can diagnose and optimize locally with Lighthouse and Chrome DevTools, which have some great new capabilities, by the way, that I'm excited to share with you in a moment. And then you can prevent regressions with Lighthouse CI and create a custom dashboard to monitor your site with CrUX. Along the entire journey, you can turn to web.dev for guidance.

All right, let's get into the tool updates themselves. Lighthouse just announced v6 last month, which has new metrics, including Core Web Vitals, new audits, and a new performance score. Let's start with the updates to the perf score. At a high level, we want to make sure that you can get a sense of your loading performance, interactivity, and layout predictability. The metrics, and the weights of those metrics that formulate the top-level score, are intended to give you a balanced view of your user experience against critical dimensions of quality. While three new metrics have been added, the Core Web Vitals metrics, three old ones have been removed: First Meaningful Paint, First CPU Idle, and Max Potential FID. These removals are due to considerations like metric variability, as well as simply having newer metrics that offer better reflections of the part of the user experience that we're trying to measure. There are also improvements to the weights based on user feedback. For instance, the reduction of Time to Interactive's weight in the final scoring calculation is a direct response to user feedback about variability, and about inconsistencies between metric optimizations and actual improvements to the user experience. However, it is still a valuable signal for understanding when a page is fully interactive, which is why we've kept it. TBT serves as a nice complement to TTI, so that together you're able to more effectively optimize for user interactivity.
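To illustrate how a weighted top-level score comes together, here's a sketch. The weights below are my recollection of the published Lighthouse v6 weights; treat them as illustrative and check the scoring calculator for the authoritative values.

```typescript
// Illustrative sketch: a Lighthouse-style top-level performance score
// as a weighted average of per-metric scores (each 0–1).
// Weights are assumed from the v6 announcement, not read from Lighthouse.
const V6_WEIGHTS: Record<string, number> = {
  firstContentfulPaint: 0.15,
  speedIndex: 0.15,
  largestContentfulPaint: 0.25,
  timeToInteractive: 0.15,
  totalBlockingTime: 0.25,
  cumulativeLayoutShift: 0.05,
};

function performanceScore(metricScores: Record<string, number>): number {
  let total = 0;
  for (const [metric, weight] of Object.entries(V6_WEIGHTS)) {
    // Missing metrics contribute zero, dragging the score down.
    total += weight * (metricScores[metric] ?? 0);
  }
  return Math.round(total * 100); // reported on a 0–100 scale
}
```

The takeaway from the weighting is the same one from the talk: LCP and TBT carry the most weight, so the Core Web Vitals (and TBT as FID's lab proxy) are where optimization effort pays off first.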
There's also a super nifty scoring calculator to help explore the performance score. The calculator gives you a comparison between v5 and v6 scores as well. It's not shown here, but it's in the tool. And when you run an audit with Lighthouse 6.0, the report comes with a link to the calculator with your results pre-populated. So I highly recommend you check it out.

Lighthouse v6 also offers quite a few new audits, with a focus on JavaScript analysis and accessibility. You can now easily trace how much unused code is being shipped with your application, and there are audits to check that screen readers and other assistive technologies have all of the information they need about the behavior and purpose of controls on your web page to serve users well. All of the products that Lighthouse powers are updated to reflect the latest version, including Lighthouse CI, which now enables you to easily measure your Core Web Vitals on pull requests before they're merged and deployed.

PageSpeed Insights (PSI) reports on the lab and field performance of a page on both mobile and desktop devices. The tool provides an overview of how real-world users are experiencing the page, powered by CrUX, and a set of actionable recommendations on how a site owner can improve the page experience, provided by Lighthouse. PageSpeed Insights and the PSI API have also been upgraded to use Lighthouse 6.0 under the hood, and now support measuring Core Web Vitals in both the lab and field sections of the report. The Core Web Vitals are annotated with the blue ribbon that you see here. From the CrUX data set, you'll be able to see whether or not 75% of your loads are hitting the Core Web Vitals thresholds for each metric in the field, for both your page and your origin. Then you can take a look at your lab data from Lighthouse to see whether or not you are hitting the Core Web Vitals thresholds for each metric in a synthetic testing environment.
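Circling back to the Lighthouse CI capability mentioned a moment ago: a minimal configuration for asserting Core Web Vitals budgets on pull requests might look like the sketch below. The budget numbers are examples I've chosen to match the "good" thresholds from the talk (plus an arbitrary TBT budget), and you should double-check the assertion IDs and schema against the Lighthouse CI docs for your version.

```json
{
  "ci": {
    "collect": { "numberOfRuns": 3 },
    "assert": {
      "assertions": {
        "largest-contentful-paint": ["error", { "maxNumericValue": 2500 }],
        "cumulative-layout-shift": ["error", { "maxNumericValue": 0.1 }],
        "total-blocking-time": ["warn", { "maxNumericValue": 300 }]
      }
    }
  }
}
```

With a file like this checked in as `lighthouserc.json`, the CI run fails the pull request when a change pushes LCP or CLS past the budget, which is exactly the regression-prevention step in the workflow described earlier.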
This helps to guide you towards actionable opportunities to improve your page's performance. Now, the new Core Web Vitals report in Search Console helps you to identify groups of pages across your site that require attention, and this is also based on real-world field data from CrUX. URL performance is grouped by status, metric type, and URL group, which is basically a group of similar web pages. The report is based on the three Core Web Vitals metrics, and it's a great way to identify pages that need attention on your site.

There are many, many cool new things in DevTools, but I'm going to focus on just two of them right now that are related to Core Web Vitals support. First is the ability to debug interaction readiness with Total Blocking Time in the footer. The Total Blocking Time (TBT) metric, again the proxy for First Input Delay, is now shown in the footer of the Chrome DevTools Performance panel when you measure page performance. The Performance panel also has a new Experience section that can help you detect unexpected layout shifts. This is helpful for finding and fixing visual instability issues on your page that contribute to Cumulative Layout Shift. You select a layout shift to view its details in the Summary tab, and to visualize where the shift itself occurred, hover over the "moved from" and "moved to" fields. And for more information on everything that's new in DevTools, see the What's New in DevTools (Chrome 84) link that's here.

The Chrome UX Report (CrUX) is a public data set of real user experience data on millions of websites. We just hit over seven million, so that's awesome. It measures field versions of all of the Core Web Vitals. Even if you don't have RUM on your site, CrUX can provide a quick and easy way to assess your Core Web Vitals. The newly redesigned CrUX Dashboard allows you to easily track an origin's performance over time, and now you can use it to monitor the distributions of all of your Core Web Vitals metrics.
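Circling back to layout shifts for a second: the CLS value that DevTools helps you debug is just an accumulation of individual shift scores. Here's a sketch with a simplified entry shape that mirrors, but does not literally use, the browser's layout-shift performance entries.

```typescript
// Illustrative sketch: CLS as the sum of layout-shift scores,
// excluding shifts that happen right after user input.
interface LayoutShiftEntry {
  value: number;           // shift score (impact fraction × distance fraction)
  hadRecentInput: boolean; // true if the shift followed recent user input
}

function cumulativeLayoutShift(entries: LayoutShiftEntry[]): number {
  return entries
    // Shifts triggered by recent user input are expected, so they
    // don't count against the page.
    .filter((e) => !e.hadRecentInput)
    .reduce((sum, e) => sum + e.value, 0);
}
```

This is also why a single late-loading ad banner can blow the 0.1 "good" budget on its own: every unexpected shift keeps adding to the running total.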
To get started with the dashboard, you can check out the tutorial on web.dev. We've also introduced a new Core Web Vitals landing page to make it even easier to see how your site is performing at a glance. There is also a new CrUX API for you to use, built from the ground up to provide developers with simple, fast, and comprehensive access to field-based experience data. Developers can query for an origin or a URL and segment results based on different form factors. The API updates daily and summarizes the previous 28 days' worth of data, including your Core Web Vitals performance. We're excited to integrate more features over time to enable new ways to explore the data and discover insights about the state of user experiences.

web.dev is your go-to place for guidance on web development. It also now sports the canonical page for information about Web Vitals. The web.dev Measure tool also allows you to measure the performance of your page over time, and it provides a prioritized list of guides and codelabs on how to improve. Its measurement is powered by PageSpeed Insights, which has Lighthouse 6.0 under the hood and fully supports the Core Web Vitals metrics, as you can see here.

There are also a slew of other amazing tools to help you with measuring, optimizing, and monitoring your Core Web Vitals. The Web Vitals extension measures the three Core Web Vitals metrics in real time for desktop in Google Chrome. This is helpful for catching issues early on during your development workflow, and as a diagnostic tool to assess the performance of Core Web Vitals as you browse the web. The extension is now available to install from the Chrome Web Store. The Web Vitals library is a tiny, modular library for measuring Web Vitals metrics on real users in a way that accurately matches how they're measured by Chrome and reported to other Google tools. The library supports all of the Core Web Vitals as well as other field vitals.
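As a small example of consuming the CrUX API mentioned above: you POST a query to the `records:queryRecord` endpoint and read 75th-percentile values out of the response. The sketch below only models the response parsing; the type is simplified from the real response shape, and the sample data used here is made up for illustration.

```typescript
// Illustrative sketch: extracting a p75 value from a CrUX API response.
// The real request is a POST to
// https://chromeuxreport.googleapis.com/v1/records:queryRecord?key=API_KEY
// with a body like {"origin": "https://example.com",
//                   "metrics": ["largest_contentful_paint"]}.
interface CruxResponse {
  record: {
    metrics: {
      [metricName: string]: {
        percentiles: { p75: number | string };
      };
    };
  };
}

function p75(response: CruxResponse, metric: string): number {
  // Some metrics (like CLS) report p75 as a string, so normalize to a number.
  return Number(response.record.metrics[metric].percentiles.p75);
}
```

Pairing this with the thresholds covered earlier gives you a simple field-data check: is `p75(response, "largest_contentful_paint")` at or under 2500 ms for your origin?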
Site Kit, Google's official WordPress plugin, allows you to get insights about how people find and use your site, and how to improve and monetize your content, directly in your WordPress dashboard. They've also just updated to make sure you know how you're performing against Core Web Vitals. As I mentioned earlier, too, we're so excited to have so many amazing ecosystem players and production monitoring solutions already implementing support for Core Web Vitals. Honestly, we're delighted, and thank you so much for your amazing work. It's really cool. This is a long list of links, but I'll make sure to tweet them as well so that you can click through them more easily. There are a bunch of goodies in here. And with that, I'm just going to give you a huge thank you. Really appreciate your time.