There are a handful of challenges and points of confusion about optimizing for Web Vitals that I see really frequently. So the goal of today's presentation is to cover as many of these issues as possible in the next 10 minutes. In particular, the three themes I'm going to be talking about are Cumulative Layout Shift, third-party scripts, and RUM. I'm not going to be giving you an overview of these topics. Instead, I really want to stick to discussing some of the edge cases and details that people tend to find confusing, as well as highlighting some performance techniques that you might not be aware of.

I'm going to start today by walking you through how CLS measurement is implemented in code. I want to do this because I think seeing the implementation in code clears up a lot of the questions and confusion around when CLS is reported and finalized, particularly for single-page applications. CLS is measured by using a PerformanceObserver to observe layout-shift entries. When a layout shift occurs, the PerformanceObserver invokes a callback function, and this callback function adds to the running layout shift score, in other words, CLS. If you wanted to, you could also add code inside that callback function to report this intermediate value of CLS. Although you can do that, it's not necessary. Ultimately, the only value of CLS that matters is the final one. Final CLS is not determined by taking a bunch of CLS entries and seeing which one occurred last. Instead, it's determined by listening for the visibilitychange event. User actions like navigating to a new page, switching tabs, minimizing the browser, and closing the browser are all examples of events that cause a document's visibility state to change from visible to hidden. When a document's visibility state changes from visible to hidden, you know that the value of CLS at that moment should be reported.
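The mechanism I just described can be sketched in a few lines of JavaScript. To be clear, this is a simplified illustration of the idea, not the actual web-vitals implementation, which handles additional details such as session windowing:

```javascript
// Simplified sketch of the CLS measurement described above. The real
// web-vitals library handles more edge cases (e.g. session windowing),
// so treat this as an illustration, not a production implementation.
function createClsTracker(report) {
  let cls = 0;

  // Add a batch of layout-shift entries to the running score.
  function addEntries(entries) {
    for (const entry of entries) {
      // Shifts within 500 ms of (most) user input are flagged with
      // hadRecentInput and excluded from CLS.
      if (!entry.hadRecentInput) cls += entry.value;
    }
    return cls;
  }

  // Browser wiring, guarded so the sketch also runs outside a browser.
  if (typeof document !== 'undefined' && typeof PerformanceObserver !== 'undefined') {
    new PerformanceObserver((list) => addEntries(list.getEntries()))
      .observe({type: 'layout-shift', buffered: true});

    // Final CLS is reported when the document becomes hidden: switching
    // tabs, navigating away, minimizing, or closing the browser.
    document.addEventListener('visibilitychange', () => {
      if (document.visibilityState === 'hidden') report(cls);
    });
  }

  return {addEntries, get value() { return cls; }};
}
```

Notice that the callback just accumulates a running total; nothing is sent anywhere until visibility changes to hidden, which is exactly why intermediate CLS values are optional.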
One tool that I really like for debugging layout shift is the Layout Shift Regions option in DevTools. You can access it from the Command Palette or the Rendering tab. This feature highlights page elements that have shifted as they are shifting. In other words, it's not highlighting the root cause of the layout shift, but rather the affected elements. I personally find this tool most helpful when combined with screen recording. Layout shifts can happen very, very quickly, and this can make it difficult to debug them in real time. However, with screen recordings, you can step through the page load process afterwards as many times as you want, at your own pace, until you figure out what's causing the layout shift.

Another thing that I want to mention is that you can augment your Web Vitals reporting to provide you with more information about the circumstances under which a particular performance measurement was observed. This includes things like reporting on connection type or scroll position, as well as pieces of data that might be unique to your app, for example, a debugging token.

Layout shifts that occur within 500 milliseconds of user input do not count towards CLS. One asterisk here, though, is that scrolling is not an excluded user input. In other words, if a user scrolls on a page and a layout shift occurs immediately after, it's still going to count towards CLS. The reason why scrolling is treated a little bit differently than these other user input events is that, if you think about it, if a user is scrolling on the page, there's really no good reason why a layout shift should be occurring. On the other hand, when a user is clicking on the page, it's much more likely that they're trying to navigate to a new page or open a nav bar. These are things that can trigger layout shifts, but they're probably layout shifts that the user doesn't mind, or that they even want, because the user's trying to accomplish something.
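Going back to augmented reporting for a moment, here is one way you might attach extra context to each measurement before sending it. The field names, the `debugToken` idea, and the shape of the payload are all hypothetical, just to show the pattern:

```javascript
// Attach extra context to a web-vitals metric before reporting it.
// The context argument is separated out so the function is easy to test;
// in a browser you would populate it from navigator/window as shown below.
// The field names here (including debugToken) are illustrative, not a spec.
function withDebugInfo(metric, context = {}) {
  return {
    name: metric.name,
    value: metric.value,
    // Circumstances under which the measurement was observed:
    connectionType: context.connectionType ?? 'unknown',
    scrollY: context.scrollY ?? 0,
    // App-specific data; 'debugToken' is a hypothetical example field.
    debugToken: context.debugToken ?? null,
  };
}

// Example of how you might build the context in a browser:
function browserContext() {
  return {
    connectionType: navigator.connection ? navigator.connection.effectiveType : 'unknown',
    scrollY: window.scrollY,
  };
}
```

You would then call something like `withDebugInfo(metric, browserContext())` inside your reporting callback before sending the payload to your analytics endpoint.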
Keeping with the scrolling theme, another thing that I want to mention is that most lab tools, such as Lighthouse or WebPageTest, do not scroll down your page. This can be a blind spot when it comes to measuring and identifying CLS in lab environments, and I'd say it probably affects mobile a little bit more than it does desktop. At the same time, though, this blind spot might not be as big as you think it is. Keep in mind that layout shifts only count towards CLS if they are visible to the user. In other words, if a layout shift occurs below the fold and the user hasn't scrolled down the page, it's not going to count towards CLS. Lastly, keep in mind that sometimes there can be no correlation between the CLS of a mobile and desktop site. The desktop and mobile versions of the same site often use different layouts and different UX patterns, and as a result they can exhibit very different layout shifts.

That brings me to my next point, which is that code is only part of the solution to layout shift. Some layout shifts can be fixed strictly through code. This usually consists of adding the width and height attributes to images, videos, and iframes. However, many layout shifts are largely the result of bad UX patterns; in other words, the product was designed that way. An example that I've seen really frequently is sites popping in banners at the top of a page to make an announcement. When this banner pops in, it pushes everything else on the page down. Optimizing UX patterns for Core Web Vitals is a whole topic in and of itself. Luckily for you, Grema will be discussing this in the talk immediately following this one, and I highly recommend that you stick around and watch it.

That's all I have to say about CLS. I'm now going to talk about third-party scripts. In the past year, we've heard a lot about lazy loading images and lazy loading iframes. Lazy loading can also be used to load third-party scripts.
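Here's a rough sketch of what lazy loading a script can look like, using three different triggers: a timer, visibility of a placeholder element, and a paint event. The script URLs and the `#widget-placeholder` selector are placeholders I've made up for illustration:

```javascript
// Inject a third-party script tag after a delay. 'inject' is passed in
// as a parameter so the scheduling logic can be tested outside a browser.
function deferScript(src, delayMs, inject) {
  // setTimeout only guarantees a MINIMUM delay: if the main thread is
  // busy when the timer fires, the callback (and the load) must wait.
  setTimeout(() => inject(src), delayMs);
}

// Browser wiring, guarded so the sketch also runs outside a browser.
// The URLs and the selector below are placeholders.
if (typeof document !== 'undefined') {
  const inject = (src) => {
    const s = document.createElement('script');
    s.src = src;
    s.async = true;
    document.head.appendChild(s);
  };

  // 1. Timer-based: load roughly 3 seconds after the load event.
  window.addEventListener('load', () => {
    deferScript('https://example.com/widget.js', 3000, inject);
  });

  // 2. Scroll-based: IntersectionObserver is much cheaper than a scroll
  //    listener. Load when a placeholder element enters the viewport.
  const target = document.querySelector('#widget-placeholder');
  if (target) {
    const io = new IntersectionObserver((entries, observer) => {
      if (entries.some((e) => e.isIntersecting)) {
        inject('https://example.com/widget.js');
        observer.disconnect();
      }
    });
    io.observe(target);
  }

  // 3. Performance-event-based: wait for first-contentful-paint before
  //    loading a lower-priority script.
  new PerformanceObserver((list, observer) => {
    if (list.getEntriesByName('first-contentful-paint').length > 0) {
      inject('https://example.com/analytics.js');
      observer.disconnect();
    }
  }).observe({type: 'paint', buffered: true});
}
```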
However, it's a bit more of a delicate art form, and the APIs are also different. On the screen, I've listed some APIs that are available for lazy loading scripts. There are a couple of things I want to note here. One is that the delay set by setTimeout does not represent a guarantee as to when the callback function will execute. Instead, it's the minimum amount of time until that callback function will execute, because the callback cannot run while the main thread is busy. This behavior sometimes makes setTimeout frustrating to use, because it makes it a little bit unpredictable. But in the context of loading third-party scripts, it's actually kind of interesting, because maybe you don't want to, or shouldn't, be loading third-party scripts while the main thread is busy. In addition, I want to note that if you want to trigger lazy loading based on user scrolling, you should really use IntersectionObserver rather than listening for the scroll event; IntersectionObserver is going to be much more performant. Lastly, I don't see many people using PerformanceObserver for lazy loading, but it does open up some really interesting possibilities, like waiting for a particular performance event to occur, for example first paint or first contentful paint, and then triggering script loading.

Another issue with third-party scripts that I hear a lot is that engineering teams are really frustrated because they feel like marketing teams just keep adding more scripts to their page and there's nothing they can do about it. Ideally, you would get those teams on board with performance, but if that's not an option, you might be able to improve the situation by taking advantage of some of the features available in tag managers. These are features that give you the ability to restrict tag usage, as well as get greater visibility into how tags are being used.
This slide lists some useful features of Google Tag Manager, but I would expect you to be able to find similar features in other tag managers as well.

In this last section, I'll be talking about techniques that you can use for improving your RUM setup. A question that I commonly get is: how can you get page-level performance data? Page-level performance data is technically available in both CrUX and PageSpeed Insights. However, in practice, you might find that it's not available, and that's because page-level data will not be exposed if there's not enough performance data for that page. If you're running into this situation, you might find it helpful to look at Search Console. Search Console is a little bit different because it exposes performance data based on URL groups. Search Console URL groups are groupings of URLs with similar HTML structure, the idea being that structurally similar pages are going to exhibit similar performance characteristics. As a result, pages that might not have enough performance data to be displayed in PSI or CrUX could potentially be displayed in Search Console. In addition, this feature provides you with a way of forecasting the performance of newly added pages on your site. For example, the screenshot on the screen shows the URL groupings that Search Console detected for all the author pages on web.dev. If I were to add a new author page to web.dev, I technically don't know what that page's performance is going to be like, but I can get a pretty good idea by looking at the aggregate performance of the existing author pages on web.dev.

Search Console is a great tool, but if you want more detailed or more frequent performance data than what Search Console provides, you will need to collect your own. There are two paths you can go down when it comes to collecting your own performance data: you can set up the tooling yourself, or you can sign up for a third-party service.
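If you go the do-it-yourself route, wiring up Google's web-vitals library is only a few lines. This sketch assumes the current API names (`onCLS`, `onLCP`; older versions of the library used `getCLS`, `getLCP`) and a hypothetical `/analytics` endpoint:

```javascript
// Build the payload reported for each metric. Kept pure so it is easy to
// test; name, value, and id are fields the web-vitals library provides.
function toPayload(metric) {
  return JSON.stringify({name: metric.name, value: metric.value, id: metric.id});
}

// Browser wiring, guarded so the sketch also runs outside a browser.
// navigator.sendBeacon is used because it can still deliver data while
// the page is being unloaded, which matters since final CLS is only
// reported once visibility changes to hidden. '/analytics' is a placeholder.
if (typeof window !== 'undefined') {
  import('https://unpkg.com/web-vitals?module').then(({onCLS, onLCP}) => {
    const send = (metric) => navigator.sendBeacon('/analytics', toPayload(metric));
    onCLS(send);
    onLCP(send);
  });
}
```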
If you want to set up the performance tooling yourself, we recommend using the web-vitals.js library that is available on GitHub. web-vitals.js is a small, lightweight script that you include on your page, and it provides an API for measuring the Web Vitals metrics. The advantage of using this script, rather than something you implement yourself, is that it gives you the assurance that your implementation is correct, and therefore that the measurements you collect are going to match those found in Google tooling. Alternatively, there are a wide variety of third-party services that support Web Vitals. Most of these are paid services; however, Cloudflare Browser Insights is available for free and supports Web Vitals.

That brings me to the end of this presentation. Thanks for watching, see you soon.