Hi, I'm Barry Pollard, and today I want to share with you our top Core Web Vitals optimizations for 2023. There is a lot of web performance advice out there, and it can be overwhelming to work out which advice will have the most impact. So the Chrome team has spent the last year identifying a smaller set of three recommendations for each of the Core Web Vitals that we believe will have the biggest impact, are relevant to most websites, and are realistic for most developers to implement.

First up, let's look at our Largest Contentful Paint recommendations. LCP is the time it takes to show the user the largest piece of content. That's typically a hero image or a headline. LCP is the metric that most sites tend to struggle with, with fewer sites passing this Core Web Vital compared to CLS or FID. In most cases, for 70 to 80% of websites, the LCP element is an image resource that needs to be discovered and downloaded. At last year's Google I/O, we showed that the actual download time often wasn't the biggest delay for images, and our analysis this year has further confirmed this. Therefore, one of the most effective things you can do today to improve LCP is to make the image source discoverable from the static HTML. This helps the browser's preload scanner find and load it as early as possible. Background images, client-side rendering, and lazy loading are all red flags that your LCP images may have discoverability problems. The fix is usually to use a good old-fashioned img element. Or, if that is not possible, the next best thing is to add a preload link so the resource is referenced in the HTML. Either of these allows the image resource to be discovered by the preload scanner and queued to be fetched by the browser. You can use the loading waterfall in Chrome DevTools to identify resources that start loading late. If there is a large gap between the HTML being processed and the LCP image being requested, it is likely that you have a discoverability issue.
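As a sketch, those two fixes, an img element or a preload link, might look like this (the hero.jpg URL is illustrative):

```html
<!-- Best: a plain img element in the initial HTML,
     so the preload scanner can discover it early -->
<img src="hero.jpg" alt="Hero image">

<!-- If the image has to be a CSS background or is rendered
     client-side, reference it with a preload link instead -->
<link rel="preload" as="image" href="hero.jpg">
```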
By including the image in the HTML, either with an img element or a preload, it is discovered right away and finishes downloading sooner. Much better. The white line shows it still takes a while to start downloading, but we'll get to that next.

Which brings us to our next tip. Because while making your LCP images discoverable is necessary, that doesn't mean they will be fetched as quickly as they could be. Browsers tend to prioritize render-blocking content, like CSS and synchronous JavaScript, over images. The new Fetch Priority API allows you to flag a resource as being of higher importance. Just adding the fetchpriority="high" attribute to your LCP img element or preload link allows the browser to start downloading it earlier and at a higher priority, which can have a large impact on the LCP time. This API is already available in Chromium-based browsers, the implementation is being worked on for Safari, and Firefox is also showing interest. The attribute is a progressive enhancement, though, and will simply be ignored by non-supporting browsers.

Going back to the earlier example: we solved the discoverability problem, but there was still a big delay until the image was requested and started to download. With fetchpriority, that delay is minimized and the image is downloaded much sooner, almost as soon as it is discovered. This is the optimal result for LCP images. You can also de-prioritize non-critical resources in a number of ways, such as using fetchpriority="low" or lazy loading them so they're not fetched until needed. De-prioritizing or delaying non-critical resources allows the browser to concentrate on the more important resources, like your LCP element. Just make sure you don't use these de-prioritizing techniques on the LCP images themselves. And if you're using a JavaScript framework, you can use image components that the Chrome Aurora team has helped create to add images with these best practices built in.
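In markup, the raw attribute usage looks something like this (URLs are illustrative):

```html
<!-- Raise the priority of the LCP image -->
<img src="hero.jpg" fetchpriority="high" alt="Hero image">

<!-- The attribute also works on a preload link -->
<link rel="preload" as="image" href="hero.jpg" fetchpriority="high">

<!-- De-prioritize or lazy load non-critical images,
     but never the LCP image itself -->
<img src="footer-graphic.png" fetchpriority="low" loading="lazy" alt="">
```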
The Angular and Next.js image components have fetchpriority support already added, and the team is working to update the Nuxt image component to also support this new API. Our team also works with other platforms. For example, if you're using WordPress, you may want to try the new Fetch Priority module of the official WordPress Performance Lab plugin, which our Chrome team developed in collaboration with the WordPress core performance team.

The first two LCP recommendations are about how you can structure your HTML to make LCP resources discoverable and prioritize their download. However, this depends on having the HTML in the first place. So the last recommendation is about getting that HTML as quickly as possible, by using a CDN to optimize Time to First Byte, or TTFB. The browser cannot start loading any sub-resources until it receives the first byte of the initial HTML document response. The sooner you can get those first bytes to the browser, the sooner the browser can start processing them, and the sooner everything else can start happening as well. The best ways to reduce TTFB are to, one, reduce the distance between users and servers by serving your content as geographically close to your users as possible, and two, cache that content so that recently requested content can be served again quickly. A content delivery network, or CDN, is the best way to do both of these. A CDN is a globally distributed set of servers that acts as the point of connection for your users, shortening the "last mile" between the user and a server. This last mile is often the slowest part of the journey, so by making it as short as possible, we reduce its impact. CDNs also allow content to be cached at these edge nodes, further reducing load times. Even in cases where the request has to make the journey all the way back to your origin server, CDNs are generally optimized to do that much more quickly. So it's a win either way.
Developers have often used a CDN for hosting static assets like CSS, JavaScript, or media like images and videos, but serving your HTML over a CDN can bring even more benefits. According to the Web Almanac, only 29% of HTML document requests were served via a CDN. If you're not among them, there is a significant opportunity for you to claim additional savings.

Moving on to the next Core Web Vital: CLS, or Cumulative Layout Shift. This is a measure of the visual stability of a web page: content jumping about as new content is loaded. While CLS has improved a lot on the web since 2020, about a quarter of websites still do not meet the recommended threshold, so there remains a big opportunity for many sites to improve their user experience.

The first recommendation for CLS is to ensure that content is explicitly sized, so when it is initially rendered by the browser, it is rendered at the correct dimensions. Traditionally, we have concentrated on recommending that images have width and height attributes, or the CSS equivalents. This is still a leading cause of CLS and is often easily fixed by providing these dimensions. But it's also important not to forget other content too. CSS is your friend here, and with the relatively new aspect-ratio property, you can ensure content other than images, like videos for example, can also be responsive and still get the appropriate height based on the width they will be rendered at. Alternatively, min-height can be used to reserve a minimum amount of space for dynamic content such as advertisements. The default height of an empty element is 0 pixels, so even if you can't be sure of the exact height of some dynamic content, you can almost certainly reduce its CLS impact by reserving some space using min-height.

One of the biggest improvements we saw to CLS last year came with the launch of the back/forward cache, or BF cache, in Chrome. Note that Safari and Firefox have had this feature for some time.
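Going back to the explicit sizing tip for a moment, here's a sketch of those techniques (the class name and dimensions are illustrative):

```html
<!-- Width and height attributes let the browser reserve
     space before the image downloads -->
<img src="hero.jpg" width="1200" height="600" alt="Hero image">

<style>
  /* aspect-ratio keeps responsive media at a stable shape,
     so the height is known from the rendered width */
  video {
    width: 100%;
    aspect-ratio: 16 / 9;
  }

  /* min-height reserves a minimum slot for dynamic content
     such as advertisements */
  .ad-slot {
    min-height: 250px;
  }
</style>
```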
A page may have a lot of CLS on initial load as additional content such as images and ads load in. Of course, it's best to try to avoid these shifts even on the initial page load, but where that's trickier, you can at least avoid this loading CLS for users who go back to the page. The BF cache stores a complete snapshot of the fully rendered page in memory for a short period after a user navigates away. If they go back to the page, the snapshot is restored; similarly, if the user goes forward again, we can restore that snapshot. This completely eliminates any of the loading CLS that would be incurred if the page were re-rendered from scratch.

The BF cache is enabled by default; you do not have to do anything to turn it on. However, you can stop the browser from using it by using certain APIs that may not react well to pages being stored after the user navigates away. So you should test your pages' BF cache eligibility to ensure you're not giving up this free performance optimization. Chrome DevTools has a tool where you can test whether a page is eligible for the BF cache. Running the test navigates away and back, and checks whether the BF cache was used. If the BF cache could not be used, the tool will tell you the reasons why. The most common reasons are setting a Cache-Control: no-store HTTP header or using unload handlers on desktop, both of which currently block BF cache usage. In Lighthouse 10, we added a new audit to run the same test for you and, again, explain the reasons why a page is ineligible if it is. We also have a JavaScript API, the NotRestoredReasons API, that allows you to identify blocking reasons in the field and report them back to your analytics.

The BF cache is one of a series of instant navigation optimizations that the Chrome team is working on to make browsing the web faster. Keep an eye out for other improvements in this area, like prefetching and prerendering, which can also improve your Core Web Vitals.
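If you want to see BF cache restores in your own field data, the pageshow event exposes them; a minimal sketch (the analytics endpoint is illustrative):

```html
<script>
  // event.persisted is true when the page was restored from the
  // back/forward cache rather than loaded fresh.
  window.addEventListener('pageshow', (event) => {
    if (event.persisted) {
      navigator.sendBeacon('/analytics', 'bfcache-restore');
    }
  });
</script>
```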
The final CLS recommendation is about handling animations and transitions. Animations are often used to move content; think cookie banners or other notification banners that slide in from the top or the bottom. Depending on how these animations or transitions are coded, they can be more or less performant, and they may or may not count towards CLS. Layout-inducing animations require the browser to lay out the page again and so require more work. This is true even for absolutely positioned elements that are taken out of the normal document flow. For example, using top or left to move content counts as a layout shift, even if it does not shift any content around it. The content itself is shifting, and it has the potential to affect other content, therefore this counts towards CLS. Doing the same animation using translate does not shift the content in the browser's layout processing and instead happens in the compositor. As well as being less work for the browser, it also means it cannot impact other content, which also means it does not count towards CLS. translate is, in many cases, a drop-in replacement for animating with top or left, and it is supported in all browsers. Always prefer composited animations, like transform, over layout-inducing, non-composited animations, like changing top, right, bottom, and left. And there's a Lighthouse audit to help identify these.

Finally, we have the responsiveness recommendations. These recommendations will help with both the current First Input Delay, or FID, Core Web Vital and the newer, more comprehensive Interaction to Next Paint, or INP, metric. Responsiveness is all about ensuring that we don't block the main thread, as that leaves the browser unable to respond to user input. The first recommendation is to identify and break up long tasks. This will give the browser breathing room, allowing it to respond to user input. Chrome DevTools and Lighthouse identify a long task as a piece of work that occupies the main thread for 50 milliseconds or longer.
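One common way to break up a long task is to yield back to the main thread between units of work; here's a minimal sketch (the yieldToMain helper name is my own, not a platform API):

```javascript
// Yield to the main thread so the browser can handle pending
// user input between units of work.
function yieldToMain() {
  // Prefer the newer scheduler.yield() where the browser supports it.
  if (globalThis.scheduler?.yield) {
    return globalThis.scheduler.yield();
  }
  // Fallback: a 0 ms setTimeout queues a new task, creating a break.
  return new Promise((resolve) => setTimeout(resolve, 0));
}

// Process a list of items as many small tasks instead of one long one.
async function processItems(items, processItem) {
  for (const item of items) {
    processItem(item);
    await yieldToMain();
  }
}
```

Each iteration now runs as its own task, so input arriving mid-way only has to wait for the current item rather than the whole loop.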
Fifty milliseconds may not sound like a lot, but in browser terms it can be the difference between a site feeling responsive or not. JavaScript is single-threaded and greedy by nature: once it gets hold of the CPU, it will keep hold of it for as long as it can, until there's a break in what it can process. In this example, even though there are five sub-processes, all five will execute straight after each other. So the key here is to put in those breaks where you can in your code. You can use setTimeout with a 0 millisecond delay to put non-critical work in a new task that will be executed after any already-queued tasks. There are also some newer and upcoming browser APIs, like isInputPending, scheduler.postTask, and scheduler.yield, that can help you decide when and how to yield the main thread. For more details, check out our Optimize Long Tasks guide on web.dev. At this Google I/O we also have a separate talk on INP and long tasks by Annie Sullivan and Michal Mocny.

While optimizing the JavaScript we have on our pages is good, an even better approach is to not send as much JavaScript in the first place. Our thirst for putting more and more JavaScript on our web pages seems unquenchable, but we need to check that all that JavaScript is actually necessary. You can use the coverage feature of Chrome DevTools to see how much of your JavaScript was executed. If large portions of the JavaScript are not used during page load, you can consider code splitting to load that code later, when it's needed or when the browser is less busy. The Aurora team has also worked on a Next.js Script component, which allows less critical, often third-party, code to be loaded with various strategies to reduce the impact of these scripts. Tag managers are another place that tends to accumulate old JavaScript code that may not be needed anymore.
Audit your tags regularly to ensure that old tags are removed, because even if they don't fire anymore, they still need to be downloaded, parsed, and compiled, all of which takes time and other resources the browser could better use elsewhere.

The final recommendation for improving responsiveness is to avoid large rendering updates. JavaScript isn't the only thing that can affect your website's responsiveness; the browser itself can be slow if it needs to do a lot of work to render a page to the screen. Large rendering updates can happen when there are lots of DOM changes, either intentionally or through a cascade effect of one change resulting in lots of other elements needing to be recalculated. The best way to avoid large rendering updates is to keep your DOM size small, so that even if there are cascade effects, they can be handled quickly. And yes, we have a Lighthouse audit for this too. CSS containment is another way to separate out areas of your web page, telling the browser that the elements in certain areas are unaffected by changes in other areas, which can reduce layout work. content-visibility is an extension of CSS containment that allows you to skip layout and rendering completely for off-screen content. And finally, avoid abusing the requestAnimationFrame API, which should only be used for critical rendering work; if too much work is scheduled via this API, it will slow rendering itself.

And those are our nine top recommendations for what we think you should look at first to improve your Core Web Vitals. This isn't meant as a definitive list, but is instead a selection of the more impactful options that our research has shown can really move the needle on your website's performance. A lot of these recommendations are already covered by our various tooling, including Chrome DevTools, Lighthouse, and the components we've worked to add to JavaScript frameworks and platforms.
But we're not resting on our laurels, and we have plenty more plans to update our tooling and documentation to surface these key recommendations. We've created a blog post with more details and links for each recommendation. And once you've considered these recommendations, you can move on to the other recommendations our tooling highlights, but we think these are the ones you should consider first. Thank you for listening. I'm looking forward to seeing the continued improvements in web performance, and your users will appreciate them even more.