Hey, folks. My name is Addy Osmani, and welcome to Optimizing for Core Web Vitals. So today, we're going to talk about optimizing user experiences on the web with a case study on French luxury fashion house, Chloé. Chloé has recently been taking a fresh look at web performance, and I'm really excited to share their learnings with you. Now, you may have seen Google Search announce an upcoming search ranking change recently that incorporates page experience metrics. These metrics include the Core Web Vitals, which, together with a few other signals, paint a pretty holistic picture of the quality of user experiences on a page. But what are the Core Web Vitals, and how do you go about optimizing for them? Well, Core Web Vitals are a set of metrics related to speed, responsiveness, and visual stability. These three aspects of user experience are measured using three metrics. First of all, we have Largest Contentful Paint, which measures loading performance. Next up, we have First Input Delay, which measures interactivity. And last, we've got Cumulative Layout Shift, which measures layout stability. Let's kick things off by talking about Cumulative Layout Shift, or CLS. Now, CLS is a pretty important metric for measuring visual stability, because it helps quantify all those times when we see really surprising shifts in the content on a page, and it helps make sure that the page is as delightful as possible. Have you ever been reading, like, an article online when, all of a sudden, something changes on the page? And without warning, the text moves, and you've lost your place. That's literally what happens: a giant chicken kicks your content away. And he has no regrets. Look at him. He's basically CLS. So what causes poor CLS? Well, first of all, we've got images without dimensions; ads, embeds, or iframes without dimensions; dynamically injected content; and web fonts that might cause a flash of unstyled content.
Now, as I mentioned, Chloé is a French luxury fashion house, and it's become a bit of a go-to brand, not just for luxury apparel, but also handbags, fragrances, and things like that. They have recently been focused on improving Cumulative Layout Shift on all their main pages: their homepage, their product listings page, and their product details page. Through a bunch of work, they've been able to reduce their CLS all the way down to zero, which is about as perfect as you can get. So how did they get here? This is the before view of the Chloé homepage, where we can observe a number of surprising layout shifts due to elements on the page not following CLS best practices. So let's dive into a few tips that worked well here. First off, always include width and height attributes on your image and video elements. Alternatively, you can reserve the required space with CSS aspect ratio boxes, but in general, this approach just makes sure that the browser can allocate the correct amount of space in the document while the image is loading. So here's a demo of this in action. These are some images that don't have width and height specified, and what you see happening is that they're pushing content in the page all the way down. This is something that's reflected in tools like Lighthouse, and I've got a little bit of a clip here. You can see the Lighthouse report where CLS is in the red and not quite where we want it to be. So how do we address this? Well, in the early days of the web, developers would add width and height attributes all over the place. They'd add them to their image tags. They'd make sure that they kept enough space allocated on the page before browsers would start fetching images. That was great because it would minimize reflow and relayout. Then, when responsive web design was introduced, developers began to omit these width and height attributes, and they started to use CSS to resize their images instead.
One of the downsides to this approach is that space could only be allocated for an image once it began to download, at which point the browser could determine its dimensions. As images loaded in, in that old world, the page would reflow as each image appeared on the screen, and a lot of us got used to our text suddenly popping down the screen, which wasn't a great user experience. And this is where aspect ratio comes in. So the aspect ratio of an image is the ratio of its width to its height. It's pretty common to see this expressed as two numbers separated by a colon, for example, 16:9 or 4:3. For an x:y aspect ratio, the image is x units wide and y units high. What that means is that if we know one of the dimensions, the other one can be determined. So for a 16:9 aspect ratio, if dress.jpg has a 360px height, the width is 360 multiplied by 16 over 9, which gives us 640px. I'm not very good at math, so hopefully that was helpful. Now, modern browsers set the default aspect ratio of images based on an image's width and height attributes, so it's really valuable to set them if you want to avoid those layout shifts. This is a change in modern browsers, and it's all thanks to the CSS Working Group. They've done some work that basically allows us to just set width and height as normal, and this calculates an aspect ratio based on the width and height attributes before the image is loaded. What we're seeing on screen here is something that's added to the default style sheet of all browsers, and it calculates aspect ratio based on the element's width and height attributes. So as long as you're providing width and height, the aspect ratio can be calculated, and everything will hopefully avoid layout shifts. So this is a great best practice to be following. This is also something that works well with responsive images. With srcset, you're generally defining images that you want to allow the browser to select between.
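The arithmetic above can be sketched in a few lines of JavaScript (the dress.jpg dimensions are just the example from the talk):

```javascript
// Given one dimension and an x:y aspect ratio, derive the other:
// width / height = x / y, so width = height * (x / y)
function widthFromHeight(height, ratioX, ratioY) {
  return height * (ratioX / ratioY);
}

// dress.jpg at 360px tall with a 16:9 ratio:
console.log(widthFromHeight(360, 16, 9)); // 640
```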
You can define sizes for those images. To make sure that your image width and height attributes can be set, just make sure that each image is using the same aspect ratio. And here's that demo once again with width and height attributes added. Notice that in a modern browser, you won't see any layout shifts there, and the user will get a much more pleasant experience. So another reminder: set those width and height attributes as much as you can. Here's the impact that this change has in Lighthouse. As we can see, before, we had a CLS of 0.36, so we were in the red, and we've come all the way back to something that's a little bit better. There were one or two other things in this page that could have been improved, but on the whole, we've had a relatively significant impact on reducing layout shift. You might be wondering, how can I figure out what elements on my page are contributing to CLS? We've got you covered. In Lighthouse, we have an avoid large layout shifts audit that highlights the top DOM elements contributing the most CLS to the page. So check out that audit. In DevTools, we also have a good story here. If you're using the DevTools Performance panel, it has an Experience section that can help you detect unexpected layout shifts. Super helpful for finding and fixing visual instability issues. They get highlighted in this Experience section with some kind of reddish, pinkish layout shift records. And if you click on one of those records, you'll be able to get more details: what was the score? Where did this element move to and from? Really great diagnostics to help you nail down how to fix your CLS. So Chloé's approach to image loading is that they use a skeleton pattern with a Sass SCSS mixin called bruschetta-loading. Bruschetta is one of those things that's a little bit of a luxury to me during quarantine. It's right up there with toilet paper and antibacterial soap. But let's stick with bruschetta loading.
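Pulling those tips together, a responsive image where every srcset candidate shares the same aspect ratio might look like this (file names and widths here are illustrative):

```html
<!-- All candidates are 16:9, so one width/height pair is valid for
     whichever source the browser selects -->
<img
  src="dress-640.jpg"
  srcset="dress-640.jpg 640w, dress-1280.jpg 1280w"
  sizes="(max-width: 640px) 100vw, 640px"
  width="640" height="360"
  alt="Dress product photo">
```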
So this is Chloé's approach to image loading. They have a parent container with a color similar to the final image that's being loaded. Now, lazy loading strategies like this, where you have a little bit of a preview of what's finally going to be shown, are sometimes referred to as low-quality image placeholders. You can use a predominant color from the final image, or you can use a low-resolution image. Sometimes people will use a one-pixel-by-one-pixel image, or something like ten pixels by ten pixels, something very low resolution that just gives you a preview of what's finally going to be displayed. Now, lazy loading strategies like this, which use either a color or that kind of placeholder, don't strictly improve Largest Contentful Paint, but they do improve perceived performance, so they can still be pretty good for the user experience. What Chloé did here, in addition to using this skeleton loading approach, was that they do use responsive images, and they do make sure that they're setting dimensions on their images as well to avoid CLS. Let's shift things up. Let's go on to the next tip. Reserve enough space for any of your dynamic content, things like ads or promos. Ideally, you want to make sure that you're giving any of that content a container that it is not going to just bounce out of and suddenly cause shifts in the page. A related tip to this one: avoid inserting new content above existing content. Unless it's in reaction to a user interaction, you want to make sure that any layout shifts in your page are ones that you are making a conscious decision around and that occur as expected. So let's try to visualize this. Here's an example of a promo that's dynamically injected into the page; we haven't reserved space, and it's just pushed everything all the way down. We can see this reflected in our Lighthouse callout at the bottom of the screen.
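As a rough sketch of the skeleton pattern described earlier (class names and colors here are made up, not Chloé's actual mixin), the parent container reserves the slot's shape and carries a color close to the final image:

```css
/* Placeholder slot: sized and colored before the image arrives */
.product-image-wrapper {
  aspect-ratio: 4 / 5;        /* reserve the slot's shape up front */
  background-color: #e8e0d8;  /* predominant color of the final image */
}

.product-image-wrapper img {
  width: 100%;
  height: 100%;
  object-fit: cover;          /* fill the reserved box without distortion */
}
```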
Now, this is something that very typically happens with ads, iframes, and promos, and these types of assets can sometimes be the largest contributors to layout shifts on the web. Many ad networks and publishers will often support dynamic ad sizes, and ad sizes that are dynamic are something that can sometimes increase revenue, because you're giving people a lot of flexibility around what can go inside your ad slots, but it can also potentially negatively impact the user experience by pushing things down. So that's something that you want to avoid. So how do we approach this? Well, one solution to the problem is statically reserving space for the slot. You can make sure that you're defining a container for these ads or embed frames so that, regardless of what goes inside, you're not shifting the content of the page around. So here I've got a container where I've set my width and my height. I've set a background color, but I've also set it to overflow hidden, just in case anything dynamic is a little bit taller than the container; I still don't want it to be able to break out of it. Ideally, the content fits inside our container, like our iframes or whatever else we might inject in there. And if you're somebody that has lots of dynamic content that gets injected into your page, you can take a look at your data, look at the median or the 95th-percentile widths and heights for this dynamic content, and size your container accordingly. That'll just mean that you have the best chance at still being able to present that content to users without negatively impacting the rest of the user experience. So here's what it looks like with my pattern in place. I've reserved enough space, and that content pops in, but there are no layout shifts in the page. So I'm really happy about that. Slightly better is my baseline for everything in life at the moment.
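The container just described might look something like this (the 300×250 size is illustrative; you'd pick dimensions from your own data):

```css
/* Statically reserved slot for ads, embeds, or iframes */
.ad-slot {
  width: 300px;
  height: 250px;
  background: #f4f4f4;  /* placeholder color while the slot is empty */
  overflow: hidden;     /* oversized creatives can't push the page down */
}
```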
So yeah, this is the Lighthouse 6.0 impact. We can see that we reduced our layout shifts from 0.24 all the way down to about zero. I'm going to give myself about zero; it's in the green. So that's great. So let's talk about a production example of something like this on Chloé. Chloé had a promotion banner for shipping at the top of their product listings page, and you'll see this free standard shipping promotion listed at the very top, but this wasn't always there. There was a time when this product listings page had a CLS of 0.4, which is really not great, because of two things. The first was the way they approached their dynamic promo banner, and the second was the way that they approached filters. Let's talk about the banner first. Now, this banner used to be positioned inline underneath the main page header. And as you can see here, it looks kind of harmless, but what's the impact of having a dynamically sized banner on the user experience? Well, we have a video here; let's take a look. As we can see, once the content is fetched and rendered for this banner, it pushes the content for the rest of the page all the way down, and that's not ideal. So how did Chloé go about fixing this? Well, they reserved space for this banner. The content for this banner was also coming from a client-side request, so these messages were causing a pretty visible layout shift occurring a few seconds into page load. They moved this API call to the server, and they made sure to reserve enough space for the banner with a simple height setting. As a part of this work, they moved the position of the banner up a little bit, but altogether, moving more work to the server is always a good idea, and just making sure that they were reserving space, these things made a bit of a difference. So here's the after view. Here we can see the impact on their product listings pages after these changes were made. It's a lot less shifty, so I'm happy about that.
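As a minimal sketch of that banner fix (the height and copy here are assumptions, not Chloé's actual markup), the message is rendered server-side into a box with a fixed height:

```html
<!-- Space is reserved even before any message is rendered, so nothing
     below the banner shifts -->
<div class="promo-banner" style="height: 40px; overflow: hidden;">
  Free standard shipping
</div>
```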
So we talked about their promo banner. The other big CLS issue for product listing pages was that Chloé had a filters widget for filtering products. Now, this would rehydrate to become dynamic once it booted up, and so on the client, it was pending XHR calls for data. It was waiting on session state based on filter choices in order to be able to finally render this thing on the screen. So this is what this basically looked like. We'd wait for content to be sent down for the filter widget, we'd wait for hydration, and it would still push content on the screen all the way down. What they ended up doing here was adapting this widget to contain more of the information needed to render the filter widget server-side. So they rendered it with better defaults. This helped avoid those layout shifts. And I just wanted to give a callout here: to the right of the screen, we can see the Web Vitals Chrome extension. This gives you a real-time view of all of your vitals metrics, and it can be helpful as you're building your sites locally, or you're just browsing the web and want to get a sense of the performance of different sites that you check out on the regular. And here's what things look like after their rehydration fix for filters. As you can see, CLS reduced by a decent amount, looking at the before and after. It was just another case of: pay attention to the little things in your pages that might, in aggregate, be causing lots of things to be pushed down. Every little CLS fix helps. And here's the overall impact of these changes on desktop. We can see that the above-the-fold content is relatively stable and offers a much better user experience on the whole. This is also reflected in Lighthouse. We work on Lighthouse, so gotta give Lighthouse a shout-out. As we can see here, Cumulative Layout Shift is in the green; we've hit zero, so it's in a really solid place. So to improve CLS, Chloé acted on a number of different elements.
It wasn't just one thing. They reserved space for the promo content in terms of its ratio. They made sure to set width and height dimensions on their images, and they adopted a skeleton pattern to improve perceived performance. They reserved space for their promo banner before its messages were received. And they also reserved space for the filters dynamic component, as well as making a few other optimizations to just help with rendering. So on the whole, it was definitely worth it. All right, so I have a big surprise for you. We've got more metrics to talk about. Put a lot of work into this slide. Historically, it's been a bit of a challenge for web developers to measure just how quickly the main content of the webpage loads and is visible to users. Thankfully, we now have metrics like Largest Contentful Paint that are able to report the render time of the largest content element that's visible within the viewport. Now, you might be wondering, what causes a poor LCP? Well, there are lots of things. Slow server response times are a big one. This could be your backend infrastructure; it could be unoptimized database queries, or API responses that are just taking a while to resolve. It could be render-blocking JavaScript and CSS. Slow resource load times are another big one. You could have unoptimized images slowing down your LCP. And then there's client-side rendering. There's a whole class of problems where those of us who love working in JavaScript and using modern libraries and frameworks and bundlers can sometimes get into a place where we have our requests for assets like images, in particular hero images, behind JavaScript fetches. So the browser, first of all, has to fetch your JavaScript. Then it has to parse and process that JavaScript to fetch your image. And that whole process can take so long that you delay showing meaningful content to your user. So it's things like that you should keep an eye on. There are plenty of tools that can help diagnose these issues.
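To make that client-side-rendering pitfall concrete, here's a hedged sketch (file names are made up):

```html
<!-- Anti-pattern: the hero image URL only exists inside the bundle,
     so the browser can't start fetching it until the script has been
     downloaded, parsed, and executed -->
<div id="app"></div>
<script src="bundle.js"></script>

<!-- Better: the hero is in the markup (or preloaded), so the preload
     scanner discovers it before any JavaScript runs -->
<link rel="preload" as="image" href="hero.jpg">
<img src="hero.jpg" width="1280" height="720" alt="Hero">
```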
So let's take a look at some real-world production challenges around LCP and how to work around them. Chloé started off with an LCP of about 10 or 11 seconds. In this view here, you can see that their primary hero image content wasn't getting fetched and rendered until about 11 seconds into our trace. Their homepage, in this case, suffered from a few different things. It had heavy full-screen image downloads, poorly optimized images, and some images that were requested late in the network chain. And these are very common issues. There's nothing here that they're doing crazy wrong; it's just very common issues. And it's useful to be aware of some of the things that impact LCP. So the things that impact LCP are image elements, image elements inside of an SVG element, video elements, and block-level elements containing text nodes. Let's talk about images first, because they're pretty often a cause of poor LCP. For many sites, images are the largest element in view when the page has finished loading, especially as UX patterns have shifted towards us using more hero images in our pages. So it's very, very important to optimize our images, especially anything that's visible within the initial viewport. Now, there are a few techniques that you can use here. You can consider not having an image in the first place; if it's not that relevant, maybe remove it. Compress those images; there are plenty of image optimization tools out there to compress your images. Maybe consider converting them to more efficient modern formats, and use responsive images. And you can also consider using an image CDN. I'm seeing an increasing number of sites leveraging image CDNs, just to help them gain the ability to tweak parameters in a URL for an image and change what format gets served down or what quality you get. Using an image CDN can be a really good way of staying on top of modern best practices.
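An image CDN URL typically looks something like this (the host and parameter names here are hypothetical; every CDN has its own): format, quality, and width are all tweakable directly in the URL.

```
https://images.example-cdn.com/dress.jpg?format=webp&quality=75&width=640
```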
Because even those of us that are, you know, web enthusiasts sometimes have a hard time staying on top of everything happening in the image optimization world. Now, you might be wondering, how can I identify the element that is my LCP? Thankfully, we've got some solutions here. In DevTools, in the Performance panel, if you record a trace and you go to Timings, you should find a record for LCP. Click on that record and you'll get the Summary pane showing up, which includes things like the size of the image and, more importantly, the related node. If you hover over that related node, it'll highlight what in your page was considered the LCP. I personally find this really valuable as kind of a stepping stone to, where should I be spending my time optimizing? So check that out if you use the Performance panel. This is also something that we try to capture in Lighthouse. Lighthouse has got a largest contentful paint element audit, and we try to highlight what element was responsible here too. So if you use Lighthouse, check that out. So back to Chloé. Chloé discovered that they were delivering very high-resolution images, even very high resolution for retina screens, because there is a bit of a cutoff point where, if you're serving kind of two-by, three-by images, the human eye is not going to be able to perceive large amounts of difference there. You get diminishing returns out of serving very, very high-resolution images. Now, in this case, we're in DevTools, in the Elements panel. We're looking at a specific image, and what we see is that the maximum width of images being served down is 1,920 pixels. That's pretty large. So one of the things that Chloé decided to do was change things up here. They resized their images to be no more than two times the image viewport size. So they removed srcset sizes over 828w to keep images at a maximum size that they were comfortable with.
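That resizing strategy can be expressed in markup roughly like this (file names and dimensions are illustrative; the point is that no candidate exceeds 828w, about 2x a typical display width):

```html
<!-- Candidates are capped at roughly 2x the largest display size -->
<img
  src="product-414.jpg"
  srcset="product-414.jpg 414w, product-828.jpg 828w"
  sizes="(max-width: 414px) 100vw, 414px"
  width="414" height="518"
  alt="Product photo">
```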
And that actually ended up being pretty fine on retina devices as well. So it was this nice trade-off of, how do we deliver rich imagery without negatively impacting the user experience? Now, by doing this work, on an iPhone X or a Pixel 2 XL that was previously seeing anywhere up to 245 kilobytes of image bytes being downloaded, they were able to reduce it down to 125. That's huge. That's nearly a 50% decrease in image bytes being served down with no noticeable difference. So optimize your images, people. The next thing we're going to talk about is some of the other image optimizations that they performed. On the product listings page, Chloé used image lazy loading, which is a relatively popular pattern. What they discovered was that there were four primary images being loaded above the fold. However, there was one off-screen image that seemed to be tripping up their lazy loading heuristics and was still being fetched. Now, this particular image happened to be 248 kilobytes in size, and this was negatively impacting the user experience. So they wanted to try improving this. On the whole, there were a number of things Chloé did. They were able to bring down their above-the-fold image download size all the way to 14 and a half kilobytes. They were able to tune their lazy loading heuristics so that off-screen images, like the one I was just talking about, were no longer a problem. They adopted an image CDN, they adopted WebP by default, and they improved their image resizing strategy. And the result of this, outside of just having a nice Lighthouse report with lots of greens, is that each product page now weighs 57% less than it did before, which is a really nice outcome to have as a result of optimizing your images. Taking a step back, here's what the homepage LCP looked like after these changes. We can see that, again, previously those hero images were not rendering in until about 11 seconds in.
Now LCP happens at about four seconds into the process, and it's complete just a few seconds later. The request time for our LCP-related node, for kind of our hero images, is about 1.3 seconds in. So on the whole, this is really great. There's still work they could do here, but on the whole, this is fantastic to see. So let's switch things up to our next tip. Defer any non-critical JavaScript and CSS to speed up loading the main content of your page. Now, this guidance is not new; it's been around for a few years. But for anyone that's not familiar with it, I'll give you a very quick recap. Before a browser can render any content, it needs to parse HTML markup into a DOM tree. The parser needs to pause if it encounters any external style sheets or synchronous scripts. Scripts and style sheets can both be render-blocking resources, which can delay your First Contentful Paint and, consequently, your Largest Contentful Paint as well. And so what we tell people to do is defer any of your non-critical scripts and style sheets to speed up load. So let's take a look once again at the product listings page for Chloé. This is a trace independent of their image optimizations. And as we can see here, Lighthouse highlights that there are a few render-blocking style sheets that are delaying early paints on the product listings page. This manifests in terms of just how much white we're seeing in our filmstrip. So one approach to addressing this problem is by inlining your critical CSS and deferring the load of non-critical styles. We often call this technique critical CSS.
So critical CSS is all about extracting the CSS for above-the-fold content, ideally across a number of different breakpoints, making sure that you can render the above-the-fold content as quickly as possible in the first few round trips, and deferring the load of the rest of your style sheets for the page, for things below the fold, until later. So how did Chloé do this? Well, they built some tooling. They implemented critical CSS in their Sass build process, and they constructed a syntax allowing their developers to specify, for each widget, what part of the CSS code goes into their critical CSS. This is highlighted using the critical keyword you see on the screen right now. At build time, they're able to build both the critical CSS and the non-critical CSS, so that every single build is consistent with both. There are many ways that you can approach critical CSS. I've contributed to some tooling on this topic in the past, and you can automate it, or you can go very custom. I see some teams that will just have a critical.css file that they manually curate. Regardless of the approach that you take, what's key is just making sure that you're delivering important content to the user as quickly as possible. So we talked about the need for loading in the other style sheets for the page. What Chloé does is store references to their non-critical CSS style sheets in an array, and those are injected with a deferred script so that they're hopefully not render-blocking but are still loaded with a relatively high priority that isn't going to interfere with the HTML parser. So what was the impact of optimizing their critical CSS? Well, the answer is: pretty large. They were able to bring down their First Contentful Paint from 2.1 seconds to about 1.1, and their LCP from 2.9 seconds to about 1.5. This is really great work.
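A common shape for this pattern looks roughly like the following (a sketch of the general technique, not Chloé's exact build output; the file name and rules are made up):

```html
<head>
  <!-- Critical, above-the-fold rules inlined at build time -->
  <style>
    header { background: #fff; }
    .hero { min-height: 60vh; }
  </style>

  <!-- Non-critical styles loaded without blocking render -->
  <link rel="preload" href="/css/non-critical.css" as="style"
        onload="this.onload=null;this.rel='stylesheet'">
  <noscript><link rel="stylesheet" href="/css/non-critical.css"></noscript>
</head>
```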
Optimizing your critical CSS can be a bit of a time investment, but it's something that can make sure that your page is getting styled as soon as possible. So let's talk about another tip. I mentioned slow server response times when we were discussing what impacts LCP. Now, the longer it takes a browser to receive content from the server, the longer it takes to render anything on the screen, so the faster a server can respond, the more that's going to improve every single page load metric, including LCP. You might be wondering, how can I tell if I have a slow server response time? Lighthouse has you covered. In Lighthouse, we have an audit called reduce initial server response time, and if you see this, it's a good hint to spend more time diagnosing the causes of the problem. As I mentioned earlier, it can be plenty of things on your back end. When we're trying to optimize our server response times, there's plenty that we can do in terms of optimizing our DNS lookups, our preconnects, all of those types of things. But there are also things that we can do to optimize loading priority. This is where techniques like link rel preload and server push can come into play. Now, if you're new to server push, I'll give you a quick summary of it. To improve latency, HTTP/2 introduced this idea of server push, which basically allows a server to push resources to the browser before they're explicitly requested. Now, you and I as developers, as well as anyone else watching (you're all awesome too), often know what the most important resources are on a page, and so we can start pushing those as soon as the server responds to the initial request. This allows the server to fully utilize what's otherwise an idle network to improve page load times. Now, server push is not without its nuance. This is one of those optimizations where you need to be careful; it's possible to over-push. So server push is not HTTP cache aware.
So I could push something for a particular page, the user could come back to another related page, and the server would push those exact same resources again. The way to avoid that is by using either cookies or a service worker to avoid those refetches and track what's in the cache, but it does involve a little bit more work. In general, server push is an optimization that can have a big impact, but just be aware of some of that nuance; it's not always as simple as just turning it on. Now, Chloé uses automatic server push, which is an implementation provided by Akamai. It uses data to decide when to push critical CSS, fonts, and scripts. If you're manually using server push yourself, you might end up looking at syntax that looks a little bit like this. What we see here is the Link HTTP header. This is actually the preload resource hint in action, and it's a related but distinct optimization from server push; in reality, most HTTP/2 implementations will push an asset that's specified in a Link header containing a preload resource hint. So you can use this syntax in order to enable server pushes for a page. So what was the impact of this optimization? Without server push, Chloé found in their lab tests that LCP was closer to four seconds, but with it, it was closer to 2.5 seconds, which is a huge amount of impact. On screen at the moment, we've been verifying that using Lighthouse, but you can also tell if individual requests were server pushed using things like DevTools and WebPageTest's network waterfall view. Both are very, very handy. Now we're on to our very last metric. Hooray! Chloé didn't optimize for First Input Delay, but I did want to very quickly cover it. Now, First Input Delay measures the time from when a user first interacts with a page.
So that moment when they start to click on a button or tap some UI, some JavaScript-powered control, to the time that the browser is actually able to respond to that interaction. Now, there are many things that cause a poor First Input Delay. There can be long tasks on the main thread, or heavy JavaScript execution; large JavaScript bundles can delay how soon script can be processed by the browser and can have an impact here. And then you have things like render-blocking script. In general, I would strongly recommend using Lighthouse and DevTools, because they do try to point out areas where you might have long tasks or heavy script execution. Very often the solution is to just break up this work, serve what the user needs when they need it, and try to look at opportunities for minimizing main thread work as much as possible. Sometimes people will contextualize this in terms of maybe shifting some of that work, some of the logic, to a web worker. But regardless of the path you want to take there, the end goal is essentially just making sure that the main thread isn't busy and that user interactions are not delayed. So we're almost at the end of our journey with Chloé. Here we can take a look at Chloé's overall web vitals in the lab. Thanks to their investments in performance and user experience, they were able to reduce their Cumulative Layout Shift down to zero and their LCP by almost half. So this is mind-blowingly awesome. This is really, really cool. As you've seen, all of this work is kind of the culmination of a number of smaller optimizations that, when added up, actually make a pretty significant impact on your end user experience. And we don't have to just look at data in the lab; we can look at the field as well. Here is Chrome User Experience Report data for Chloé, and as we can see, the Core Web Vitals metrics for LCP and CLS are trending in the right direction. CLS went from 0.85 down to zero in the latest dataset.
And this is all, on the whole, tremendous work. It's really great to see, and I know that Chloé are happy to continue building on this work in the future as well. Now, if you're interested in building dashboards like this for your own team measuring the Core Web Vitals, you might be interested in checking out the Chrome User Experience Report dashboard. This is a great solution that just allows you to drop in a URL and very quickly get access to field data and distributions for the different Core Web Vitals. It also summarizes the metrics, so if you're trying to share this report around with other people on your team, they'll hopefully be able to get some familiarity with the Core Web Vitals too. We also recently shipped a new Chrome User Experience Report API, the CrUX API. This is great for programmatically building out your own dashboards, very similar to what we were just taking a look at. So check that out too. And that's it. I hope that you found this talk useful. Go and optimize your web vitals. There are plenty of docs over on web.dev that cover the methodology, the tools, as well as the best practices that you can use to get fast and stay fast. My name is Addy Osmani. I hope this has been useful. Thank you.