Well, we are excited because nobody likes to wait. And we want to talk to you about all of the optimizations and measurement tools that we have been working on and can provide to you. We're going to start with an overview of user metrics, what we care about and why, then look at the latest developments in Lighthouse and the Chrome UX Report, or CrUX, and talk about how we're unifying tooling across the board.

All right. So when it comes to web performance, one thing you could say is you can't improve what you don't measure. This was Peter Drucker, right? He was a management guy? This is actually true. Peter Drucker, a really, really well-known business management guru. To be honest, I think he might have been a front-end developer as well. But this is just absolutely true. If you want to make something better, step one is to measure it.

So let's get into that with web performance. To measure, we need to look at some metrics. And it's really important to make sure that your metrics are user-centric, really focused on the user. We heard a little bit ago about Pinner wait time and some of those custom metrics. But we'd like to have metrics that really capture what the user experience is like. So let's take a page load, break it down, and look at a few key metrics.

Here's a little film strip. We're loading the search result page, and we have a little progression of a few images getting to our final result, where the page is done loading. Now, underneath the things that we see visually, a few things are happening: there's a main thread, and there are network requests. These are really important too, because they weigh heavily on the actual user experience. So I want to point a few things out here. First of which is this point right here. This is the first time that text shows up on the page. It's the first time that content is there.
So we call this duration, from navigation to the point at which text shows up, the first contentful paint. Easy enough. We know this one. A little bit later, though, you can see the main thread kind of quiets down a bit. This right here is an important moment. All the long tasks on the main thread have finished. So the main thread has quieted down, which allows the page to be responsive to users once they choose to interact with it. Also, the network is quiet, so we know there's no one big massive script hanging out, ready to run and take up time on the main thread. The duration from navigation to this point, we call this time to interactive.

And there's one more key metric I want to cover real quick. Now, in this page load, a user could really touch the screen at any point. They could interact with it here, once there's a paint on the screen, or a little bit later. But let's just say, for the purposes of this, they tap the screen at this point. Now, the thing is, if they tap the screen at this point, take a look at what's happening on the main thread. A lot of things. We're in the middle of a big, long task. And that means the page is not going to be able to respond to the user. So we have to wait a little bit until the page can actually respond. This duration, the time from the input until the end of the long task we're dealing with, we call this the first input delay.

This is an important metric, and I want to spend a little bit more time on it. If this is the main thread, well, it's a very open and available main thread. Really nothing happening. So let's say the user has some input. Well, piece of cake, we can just reply to it immediately. Your event handlers are going to run (touchstart, or click), we're going to do style and layout and paint, and ship a frame, so they're going to see something. So let's say they're tapping on the menu icon, and then the menu slides out. So we're good.
But if there is a long task sitting on the main thread, well, we're just going to have to wait. Yes, we'll still do the event handling, but this time between when the input is first received and when the events get dispatched, that is the input delay. First input delay is just this delay for the first input on the page, the first time a user touches it.

And the important thing here is that first input delay is a field metric. It really only makes sense to gather in the field. Time to Interactive is a great and really powerful metric, but it makes sense mostly in the lab, in a lab scenario. And we recognize that Time to Interactive out in the field, where real users are tapping on the screen as the page is loading, really kind of messes with this metric. So there are a few metrics on the screen here, basically just outlining which make sense to gather in the lab, and some that are exclusive to the field. I just want to point out that TTI and first input delay, or FID, are our interactivity metrics. Really key for understanding how available the main thread is to the user.

So all of these metrics are awesome, obviously. But where can we actually find them? All three of these metrics are readily available in their respective lab and field environments. So as Paul was saying, because FCP can be measured in both the lab and in the field with real users, it's available across the board. That's in Lighthouse, in the Chrome User Experience Report, or CrUX, and as a web perf API. TTI is only available in the lab, and so it can only be accessed via Lighthouse and PageSpeed Insights. Now, FID requires real user input to measure, so it's available in CrUX. And FID is exciting, because it's actually going to be coming to Chrome in Q4, or early Q1, as a web perf API. So you should be able to view it in a PerformanceObserver, just as you get FCP today, which is kind of cool. Super cool.
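As a rough sketch of what that looks like in page code today: paint-timing entries surface FCP through a PerformanceObserver, and an input delay is just the gap between when an input event fired and when its handler actually got to run. The observer wiring below assumes a browser with the Paint Timing API; the helper names are ours, not a standard API.

```javascript
// Hypothetical helper; the PerformanceObserver / 'paint' entry API is real.
function watchFirstContentfulPaint(report) {
  const po = new PerformanceObserver((list) => {
    for (const entry of list.getEntries()) {
      if (entry.name === 'first-contentful-paint') {
        report(entry.startTime); // ms since navigation start
      }
    }
  });
  po.observe({ entryTypes: ['paint'] });
}

// First input delay: time from the input event firing to the moment the
// main thread was free to run its handler.
function computeInputDelay(eventTimeStamp, handlerStartTime) {
  return Math.max(0, handlerStartTime - eventTimeStamp);
}

// In a handler you would call it roughly like:
//   addEventListener('pointerdown', (e) =>
//     console.log(computeInputDelay(e.timeStamp, performance.now())),
//     { once: true, capture: true });
```

With a quiet main thread the two timestamps nearly coincide and the delay is close to zero; with a long task in flight, the handler start is pushed out and the difference grows.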
Yeah, excited about standardization of this stuff. It's really exciting. So for those of you who aren't familiar with Lighthouse, and we know a lot of people are, Lighthouse is an open-source, automated tool for improving the quality of web pages. You can run it against any web page, whether it's public or requires authentication, and it has audits for performance, accessibility, PWAs, and more.

I'm excited to tell you about some things that we've been doing with Lighthouse. One of those things is a PWA refactor. Currently, there is a broad spectrum of PWA definitions in the wild, which can make it difficult to say definitively whether or not you are a PWA. And while our PWA checklist is absolutely wonderful and gives helpful guidance towards what a PWA is, we want a machine-verifiable way to say yes or no. So today, we're launching the new Lighthouse UI with a more binary badging system for the PWA category. And the badge groupings reflect that we want everybody to be able to achieve the fast and reliable badge. All experiences should be that, whether or not you're installable. To become a full PWA and get that badge, you have to successfully pass all of the audits in those categories.

Yeah, there's a few more things that we've been doing. In the new Lighthouse 4.0 alpha that's coming out, there are a few nice changes that we've made. One of the things that we've been working on is reducing the amount of time it takes to run Lighthouse. Well, nobody wants to sit around waiting for a long time. So we're happy to report that the median runtime of all Lighthouse runs that we're aware of has dropped about 50%. And at the 90th percentile, we've dropped it down about 66%. So we're really jazzed about this. We want to make sure it's not a long wait for you to get the insights that are available. A few more changes: we've changed how scores are represented.
So if you've seen these score gauges at the top of a Lighthouse report, right beneath them is this little scale. This is just how the numeric score is mapped to a color. We made a change here, and I just want to point out that none of the numerical scores and those calculations have changed in this new update. It's just deciding which color is applied. So this is basically the change: we've adjusted how the various numerical scores map to these colors. Yeah. So basically, we're raising the bar on what our expectations are for a performant site. But if you're in the green, you should feel really good about it. Yeah. It's good. I know a lot of you like to go for the 100, and I love the 100. I'm excited about it. But yeah, if you're in the green, you're good. And we just want to make that clear. All right. Sweet.

Now, a few more changes, and this one is about throttling. When it comes to throttling, a good mobile throttling preset shouldn't necessarily map to the particular conditions of a telecommunication system and its specification. A good preset maps to what real users feel. So really, what we want to do is capture the latency and throughput at the 80th percentile, the frustrating experiences that users oftentimes hit. And we want to keep pace with this measurement as our global telecommunications infrastructure gets upgraded. A lot of people are moving from 3G to 4G, and we want to make sure that we capture that. So we're making a change, but actually not in the latency and throughput numbers. This is actually just a labeling change. Wherever you see "fast 3G" today, you'll be seeing "slow 4G". And that's because the preset that we use actually captures a 4G experience more than a 3G experience. So just FYI: same stuff, different name. It's all good. What's next? Oh yeah. So there's a few other things going on with Lighthouse. Some really nice projects making use of it.
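The recolored scale can be thought of as a simple threshold mapping. A minimal sketch follows; the cutoffs below (50 and 90) are our assumption about the new bands, not figures quoted on the slide.

```javascript
// Assumed cutoffs for the new color bands: below 50 is red,
// 50-89 is orange, 90 and up is green. The numeric score itself
// is computed exactly as before; only the coloring changes.
function scoreColor(score) {
  if (score < 0 || score > 100) {
    throw new RangeError('score must be between 0 and 100');
  }
  if (score >= 90) return 'green';
  if (score >= 50) return 'orange';
  return 'red';
}
```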
First up, check out some of the projects on GitHub taking advantage of Lighthouse, some of the dependent projects. Really cool stuff in here, a lot happening in recent months. Many projects are looking into, or building systems around, using Lighthouse in a continuous integration setup, so that on every commit you run Lighthouse, store all that data, and get graphs. Some really cool stuff happening in here, so take a look.

Lighthouse is also available in a number of different commercial products as well, such as Calibre. Fantastic stuff here. Treo is another one. I think this is my site, which is doing OK. Accessibility actually needs some work. But there's some nice stuff. And the last is SpeedCurve, which just added support for Lighthouse a few months ago. So we're excited to see Lighthouse becoming part of the production monitoring ecosystem.

And even internally, we're excited to see where Lighthouse is being integrated. One of those examples, as was announced in the keynote, is the new site, web.dev. And it's exciting to be integrating it with really prescriptive, actionable guidance. You can run Lighthouse against any URL, and it will provide you with a prioritized to-do list with that guidance, plus interactive codelabs for the specific things that you need to work on. What's so exciting about this is that, for the first time, tooling is directly integrated with the documentation. Yeah, it's pretty hot.

And we wanted to also call out another wonderful partner who has done a good job of using Lighthouse. Squarespace was able to use Lighthouse as an out-of-the-box auditing and reporting system to build on top of. And it allowed them to improve their 50th percentile and 95th percentile TTI by over three times. So we were super excited by that. They used it to generate traces and dig deep into specific problems as they happened, as opposed to after a regression. So now, we are going to talk a little bit about the Chrome User Experience Report.
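To make the commit-time idea concrete, here is a sketch of what a CI step could do with a Lighthouse JSON report: compare numeric audit values against a budget and fail the build if any are over. The `audits`/`rawValue` shape follows Lighthouse's JSON output of this era; the budget format and the `checkBudget` helper are our own invention, not part of Lighthouse.

```javascript
// Compare a Lighthouse JSON report's numeric audit values against a
// simple budget map of { auditId: maxAllowedValue }.
function checkBudget(lhReport, budgets) {
  const failures = [];
  for (const [auditId, max] of Object.entries(budgets)) {
    const audit = lhReport.audits && lhReport.audits[auditId];
    if (!audit) continue; // unknown audit id: skip rather than fail
    if (audit.rawValue > max) {
      failures.push(`${auditId}: ${audit.rawValue} > ${max}`);
    }
  }
  return failures; // an empty array means the commit is within budget
}

// e.g. fail CI when time to interactive exceeds 5000 ms:
//   const failures = checkBudget(report, { interactive: 5000 });
//   if (failures.length) process.exit(1);
```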
Or, as I've already said I think three times, CrUX. CrUX provides user experience metrics for how real-world Chrome users experience popular destinations on the web. So it's a data set powered by real user measurement of key user metrics across the public web, aggregated anonymously from users who have opted in.

We're excited to talk about some of the updates that we've done here. One of the things, and it was actually featured in Anshal's talk earlier, is regional analysis. We heard loud and clear from developers that we needed to be able to break down this data set in a country-specific way. And now you can do that. Via BigQuery, which is where you can interact and play with this data set, you can now get separate country-specific data sets to pull it apart. And yeah, so this is just how you've been interacting with the Chrome UX Report in the past, just working with BigQuery.

But I heard that there's a nice new shiny thing. Yeah. You can get it way easier now. The brand new CrUX dashboard, which was announced just a bit ago, allows you to understand how an origin's performance evolves over time. It's built on Data Studio, so it's much more easily accessible, and it can be easily customized and shared with everyone on your team. It doesn't require you to write your own query on BigQuery to access it, and it's automatically synced with all the latest data sets, so you're good to go.

Also, to ensure consistency across all of our tooling, which as we've mentioned is a huge goal for us, FID is now launched as an experimental metric in CrUX. When we announced last year, the data set only had 10,000 origins. And now we are at over 4 million. And if you are excited to see your website in this data set, well, I am excited to see PaulIrish.com in there. Yeah. And it would be great. Yeah, we're working hard to improve it and expand quickly.
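To give a feel for the country-specific breakdown, a query against the public CrUX tables looks roughly like this. Sketch only: the `chrome-ux-report.country_xx.YYYYMM` table naming and the `first_contentful_paint` histogram columns follow the public data set's conventions, but the helper function and its name are ours.

```javascript
// Build a standard-SQL query for one country's CrUX table, pulling the
// FCP histogram for a single origin. Paste the result into the BigQuery
// console (or pass it to a BigQuery client).
function cruxFcpQuery(countryCode, yyyymm, origin) {
  const table = `chrome-ux-report.country_${countryCode}.${yyyymm}`;
  return `
    SELECT bin.start, SUM(bin.density) AS density
    FROM \`${table}\`,
      UNNEST(first_contentful_paint.histogram.bin) AS bin
    WHERE origin = '${origin}'
    GROUP BY bin.start
    ORDER BY bin.start`;
}

// e.g. cruxFcpQuery('us', '201809', 'https://example.com') targets the
// US-only data set for September 2018.
```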
And so if you're excited, check in soon, because we are working hard to move fast. All right. So one of the things that's really important to us is to have a unified story between our performance tools. So, OK, hand-raising time. Raise your hand if you've used Lighthouse. Yes. Raise your hand if you've used PageSpeed Insights. Yes, of course. Raise your hand if you've noticed that what you're seeing in Lighthouse and PageSpeed Insights isn't necessarily telling the same story. Yeah. I'm there with you too.

Now, we saw this was a bit of an issue, and we wanted to improve it, because we don't want two different tools that Google provides giving conflicting advice. So we've been working hard and collaborating with the search team on this. And today we're excited to announce that there's a brand new, next generation of PageSpeed Insights, now powered by Lighthouse. And this is really exciting stuff. Now if you use PageSpeed Insights, all of the data that you've been seeing in Lighthouse when it comes to performance is in the report. All of the metrics and opportunities and diagnostics are right there. You also still see the top score that you have been seeing in PageSpeed Insights; that score is now the Lighthouse performance category score. So we're kind of speaking the same language. And if you've really enjoyed the Chrome UX Report data that has been available inside of PageSpeed Insights, that's still there too.

Let's play a quick little screencast of how this looks. So let's take a look at Chrome.com in PageSpeed Insights. We're going. Come on. Yeah. Great. Good. Awesome. This is in real time. I did not speed anything up, so we've got to wait for the latency. So yeah, this should look fairly familiar if you've used Lighthouse. But up at the top, we have field data. And by default, PageSpeed Insights runs its analysis on both mobile and desktop at the same time and delivers the results simultaneously. So you can check that out. So this is live today.
So go check it out. Take a look. Give us feedback. We're excited to have this out there. All right, thanks. Oh yeah, I mean, you can clap if you want. I mean, that's cool.

All right. Now, I have a tendency of opening up the DevTools on basically every site that I visit, as is the habit. So I opened up the DevTools on PageSpeed Insights. And lo and behold, it's just a thin web app that makes a call to a RESTful API. It's actually the PageSpeed Insights API. So we were like, well, this kind of means that in order to do this, we're going to have to have all the Lighthouse data available over the API. And so that's what we have: the new PageSpeed Insights API v5. Consider it the Lighthouse API v1. All the same Lighthouse data, including all categories, not just performance, but all of them. And all the work is done for you, no waiting for your own Chrome to reload and do the analysis. We'll do the work for you. And the Chrome UX Report data, that summary is still added into the response.

Basic usage: I don't know if you'd use it from fetch client-side, but if you did, it would look something like this. You just pass it a URL; there are a few other parameters to customize things. You get back the results, which look a little like this. There's a lighthouseResult full of the exact same Lighthouse data that you'd be getting by running Lighthouse anywhere else. And inside that loadingExperience property, that is the Chrome UX Report stuff. So really cool. Check it out. Details, documentation, and reference guides are available for the PageSpeed Insights API v5.

All right. And so this is really cool. Totally cool. Yeah, it means all unified analysis, and it's the same when you're measuring, when you're optimizing, when you're monitoring. If you want to start making changes and testing things out, there's a place for you to go for that. So that's great. And we have all of these things aligned, but where do you go for what?
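Since the slide itself isn't reproduced here, a rough sketch of that client-side usage might look like the following. The `runPagespeed` endpoint and the `lighthouseResult` / `loadingExperience` response fields match the v5 API; the helper names and the parameter defaults are our own.

```javascript
const PSI_ENDPOINT =
  'https://www.googleapis.com/pagespeedonline/v5/runPagespeed';

// Build the request URL; strategy can be 'mobile' or 'desktop'.
function psiRequestUrl(pageUrl, strategy = 'mobile') {
  const params = new URLSearchParams({ url: pageUrl, strategy });
  return `${PSI_ENDPOINT}?${params}`;
}

// Fetch a report: lab data lives under lighthouseResult, and the
// Chrome UX Report field summary under loadingExperience.
async function runPsi(pageUrl, strategy) {
  const res = await fetch(psiRequestUrl(pageUrl, strategy));
  if (!res.ok) throw new Error(`PSI request failed: ${res.status}`);
  const body = await res.json();
  return { lab: body.lighthouseResult, field: body.loadingExperience };
}
```

Calling `runPsi('https://example.com')` would resolve with both the full Lighthouse report and the CrUX summary from a single response.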
And when should you go there? So if you need a snapshot of a page's performance, as Paul said earlier, PageSpeed Insights is a good default to go to, because it provides you with both the field and the lab data and gives you a good benchmark. If you want to make changes, test and iterate, and really have that fast feedback, then the Chrome extension, the audits panel in DevTools, or the command line interface is going to be a good place to go. And finally, if you want to set up production monitoring or set budgets, then the API is going to be fantastic. But across the entire development lifecycle, you are now completely powered by Lighthouse, which we're super excited about.

Yeah. So to wrap up, if there's one thing, or four things, that you take away from this: first up, measure well, measure often; you can't improve what you don't measure. Yep. You can now use PageSpeed Insights for quick Lighthouse analysis. The CrUX real-world data really helps round out your view of what's happening with your users, and helps you understand the different percentiles where users are feeling pain and frustration. And finally, to evaluate performance at every stage, which is really important to us, you can now check out the API, so go use it. All right, I think that's it. Thank you guys very much. Thank you. Thank you.