My name's Katie Hempenius. And I'm Addy Osmani. We work on the Chrome team, trying to keep the web fast. Today, we're going to talk about a few web performance tips and tricks from real production sites. But first, let's talk about buttons. Now, you've probably had to cross the street at some point and press a pedestrian beg button. There are three types of people in the world: people who press the button once, people who don't press the button, and people who press it 100 times because, of course, that makes it go faster. The frequency of pushing these buttons increases proportionally to the user's level of frustration. Want to know a secret? Sure. At least in New York, most of these buttons aren't even hooked up. So your new goal is to have a better time to interactive than these buttons. Now, this experience of feeling frustrated with buttons that don't work is something that applies to the web as well. According to a UX study done by Akamai in 2018, users expect experiences to be interactive at about 1.3 times the point when they're visually ready. And if they're not, people end up rage-clicking. Right. It's important for sites to be both visually ready and interactive, and it's an area where we still have a lot of work to do. Here, we can see page weight percentiles on the web, both overall and by resource type. If one of these categories is particularly high for a site, it typically indicates that there's room for optimization. And in case you're wondering what this looks like visually, it looks a little bit like this: you're sending just way too many resources down to the browser. Delightful user experiences can be found across the world, so today we're going to deep dive into performance learnings from some of the world's largest brands. Let's start by talking about how sites approach performance. This probably looks familiar.
For many sites, maintaining performance is just as difficult, if not more difficult, than getting fast in the first place. In fact, an internal study done by Google found that 40% of large brands regress on performance after six months. One of the best ways to prevent this from happening is through performance budgets. Performance budgets set standards for the performance of your site. Just like how you might commit to delivering a certain level of uptime to your users, you can commit to delivering a certain level of performance. There are a couple of different ways that performance budgets can be defined. They can be based on time: for example, a budget of less than a two-second time to interactive on 4G. They can be based on page resources: for example, less than 150 kilobytes of JavaScript on a page. Or they can be based on computed metrics, such as Lighthouse scores: for example, a budget of a 90-or-greater Lighthouse performance score. While there are many ways to set a performance budget, the motivation and benefits of doing so remain the same. When we talk to companies who use performance budgets, we hear the same thing over and over: they use performance budgets because it makes it easy to identify and fix performance issues before they ship. Just as tests catch code issues, performance budgets can catch performance issues. Walmart Grocery does this by running a custom job that checks the size of the builds corresponding to all PRs. If a PR changes the size of a key bundle by more than 1%, the PR automatically fails and the issue is escalated to a performance engineer. Twitter does this by running a custom build tracker that they built against all PRs. This build tracker comments on the PR with a detailed breakdown of how that PR will affect the various parts of the app. Engineers then use this information to determine whether a PR should be approved.
In addition, they're working on incorporating this information into automatic checks that could potentially fail a PR. Both Walmart and Twitter use custom infrastructure that they built themselves to implement performance budgets. We realize that not everybody has the resources and time to devote to doing that, so today we're really excited to announce LightWallet. LightWallet adds support for performance budgets to Lighthouse, and it is available today in the command-line version of Lighthouse. The first and only step required to set up LightWallet is to add a budget.json file. In this file, you'll define the budgets for your site. Once that's set up, run the newest version of Lighthouse from the command line, and make sure to use the --budget-path flag to indicate the path to your budget file. If you've done this correctly, you'll now see a budgets section within the Lighthouse report. This section will give you a breakdown of the resources on your page and, if applicable, the amount your budgets were exceeded by. LightWallet was officially released yesterday, but some companies have already been using it in production. Jabong is an online retailer based in India that recently went through a refactor that dropped the size of their app by 80%. They didn't want to lose these performance wins, so they decided to put performance budgets into place. Up on the screen, you can see the exact budget.json file that Jabong is using. Jabong's budgeting is based on resource sizes, but in addition to that, LightWallet also supports resource-count-based budgets. Jabong used the current size of their app as the basis for determining what their budget should be. This worked well for them because their app was already in a good place. But what if your app isn't in a good place? How should you set your budgets? Well, one way to approach this problem would be to look at HTTP Archive data to see what breakdown of resources corresponds with your performance goals.
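To make that concrete, a minimal budget.json might look like the following sketch. The categories and numbers here are placeholders, not Jabong's actual budget; sizes are in kilobytes:

```json
[
  {
    "resourceSizes": [
      { "resourceType": "script", "budget": 150 },
      { "resourceType": "image", "budget": 300 },
      { "resourceType": "total", "budget": 600 }
    ],
    "resourceCounts": [
      { "resourceType": "third-party", "budget": 10 }
    ]
  }
]
```

You would then run something like `lighthouse https://example.com --budget-path=budget.json` and check the budgets section of the report.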
But speaking from personal experience, that's a lot of SQL to write. So to save you the effort, we're making that information directly available today in what we're calling the performance budget calculator. Simply put, the performance budget calculator allows you to forecast time to interactive based on the breakdown of resources on your page. In addition, it can also generate a budget.json file for you. For example, a site with 100 kilobytes of JavaScript and 300 kilobytes of other resources typically has a four-second time to interactive, and for every additional 100 kilobytes of JavaScript, that time to interactive increases by one second. No two sites are alike, so in addition to providing an estimate, the calculator also provides a time-to-interactive range. This range represents the 25th-to-75th-percentile TTI for similar sites. Now, one of the things that can end up impacting your budgets is images, so let's talk about images, starting with lazy loading. We currently send down a lot of images with our pages, and that isn't the best for limited data plans or particularly slow network connections. At the 90th percentile, HTTP Archive says that we're shipping almost five megabytes' worth of images on mobile and desktop, and that's perhaps not the best. Now, lazy loading is a strategy of loading resources as they're needed, and this applies really well to things like off-screen images. There's a really big opportunity here. Once again looking at HTTP Archive, we can see that at the 90th percentile, folks are currently shipping down anywhere up to three megabytes of images that could be lazy loaded, and at the median, 416 kilobytes. Luckily, there are plenty of JavaScript libraries available for adding lazy loading to your pages today, things like lazysizes or react-lazyload. The way these usually work is that you specify a data-src attribute instead of src, as well as a class.
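With lazysizes, for example, that markup looks roughly like this (the file name is a placeholder):

```html
<!-- The real URL lives in data-src; the class tells the library which
     images to watch. Width and height reserve space to avoid reflow. -->
<img data-src="photo.jpg" class="lazyload" width="400" height="300" alt="Photo">
```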
And then the library will upgrade your data-src to a src as soon as the image comes into view. You can build on this with patterns like optimizing perceived performance and minimizing reflow, just to let your users know that something's happening as these images are being fetched. Now, we're going to walk through some case studies of people who've been able to use lazy loading effectively. Chrome.com is our browser consumer site, and recently we've been very focused on optimizing its performance. We'll cover some of those techniques in more depth soon, but they resulted in a 20% improvement in page load times on mobile and a 26% improvement on desktop. Lazy loading was one of the techniques the team used to get to this place. They use an SVG placeholder with image dimensions specified to avoid reflow, Intersection Observer to tell when images are in or near the viewport, and a small custom JavaScript lazy loading implementation. The win here was 46% fewer image bytes on initial page load, which was a nice win. We can also look at more advanced uses of image lazy loading. Here's Shopee. Shopee is a large e-commerce player in Southeast Asia. Recently, they adopted image lazy loading and were able to serve one megabyte fewer of images on initial load. The way Shopee works is that they display a placeholder by default, and when the image is inside the viewport, once again using Intersection Observer, they trigger a network call to download the image in the background. Once the image is either decoded, if the browser supports the image decode() API, or downloaded, if it doesn't, the image tag is rendered. And they're able to do things like have a nice fade-in animation when the image appears, which overall looks quite pleasant. We can also take a look at Netflix.
So as Netflix's catalog of films grows, it can become challenging for them to present their members with enough information to decide what to watch. So they had this goal of creating a rich, enjoyable video preview experience so members could have a deeper idea of what was on offer. As part of this, Netflix wanted to optimize their home page to reduce CPU load and network traffic while keeping the UX intuitive. The technical goal was to enable fast vertical scrolling through 30-plus rows of titles. The old version of their home page would render all of the tiles at the highest priority: data fetching from the server, creating all of the DOM, fetching all of the images. And they wanted the new version to load much faster, minimize memory overhead, and enable smoother playback. So here's where they ended up. When the page now loads, they first render the billboard image and the top three rows of titles on the server. Once they're on the client, they make a call for the rest of the page, render the remaining rows, and then load the images in. So they're effectively rendering just the first three rows of DOM and lazy loading the rest as needed. The impact of this was decreased load time for members who don't scroll quite as far. And this is effectively a summary of where they ended up: overall, faster startup times for video previews and full-screen playback. Before, there was CPU load required to generate all of their DOM nodes and get images to load. Now, they don't saturate quite as much member bandwidth, and they pull in four times fewer images on initial load. So their video previews now have faster load times, less bandwidth consumption, and lower memory overall. From our tests, image lazy loading has helped many brands shave an average of 70% off their image bytes on initial load as a result of using this optimization. These include the likes of Spotify and Target.
So it looks like there could be something here we could bring into the platform. So today, we're happy to announce that native image lazy loading is coming to Chrome this summer. The idea here is that with just one line of code, using the brand-new loading attribute, you'll be able to add lazy loading to your pages. So this is a big deal. Very excited about it. This will work with three values: lazy; eager, if an image is not going to be lazy loaded; and auto, if you want to defer the decision to the browser. Thank you. We're also happy to announce that this capability is coming to iframes. The exact same loading attribute is going to be possible to use on iframes, and I think this introduces a huge opportunity for us to optimize how we address loading third-party content. Now, here is an example of the brand-new loading attribute working in practice. The way this is going to work is that on initial load, we're actually just going to fetch the images that are in or near the viewport. We're also going to fetch the first two kilobytes of all of our images, as that gives us dimension information, helps us avoid reflow, and gives us the placeholders that we need. And then we start loading the remaining images on demand. What this leads to is quite nice savings: we're only loading 548 kilobytes of images rather than those 2.2 megabytes. Now, Chrome's implementation of lazy loading does a few other things under the hood. We actually factor in the user's effective connection type when we decide what distance-from-viewport thresholds we're going to use, and those can differ between 4G and 2G. Now, the loading attribute can either be treated as a progressive enhancement, so only using it in browsers that support it, or you can load a JavaScript lazy loading library as a fallback. So here, we're checking for the presence of the loading attribute on HTMLImageElement.
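That check and the markup might look roughly like the following sketch (the URLs and the choice of fallback library are illustrative):

```html
<!-- Below-the-fold image: deferred via the loading attribute where supported -->
<img data-src="gallery.jpg" loading="lazy" alt="Gallery image">

<!-- The attribute also works on iframes, e.g. third-party embeds -->
<iframe src="https://example.com/embed" loading="lazy" title="Embed"></iframe>

<script>
  if ('loading' in HTMLImageElement.prototype) {
    // Native lazy loading supported: let the browser take over by
    // upgrading data-src to src.
    document.querySelectorAll('img[data-src]').forEach((img) => {
      img.src = img.dataset.src;
    });
  } else {
    // No native support: fetch a library such as lazysizes as a fallback.
    const script = document.createElement('script');
    script.src = '/js/lazysizes.min.js'; // illustrative path
    document.body.appendChild(script);
  }
</script>
```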
If it's present, we'll just use the native attribute and upgrade our image data-src attributes. And if it's not, we can fetch in something like lazysizes and apply it to get the same behavior. So here it is working in Firefox, where we've applied this exact same pattern. And so we're able to get to a place where we have cross-browser image lazy loading with a relatively simple hybrid technique that works quite well. Users expect images to look good and be performant across a wide variety of devices. This is why responsive images are an important technique. Responsive images are the practice of serving multiple versions of an image so that the browser can choose the version that works best for the user's device. Responsive images can be based either on serving different widths of an image or on different densities of an image. Density refers to the device pixel ratio, or pixel density, of the device that the image is intended for. For example, traditional CRT monitors have a pixel density of 1, whereas Retina displays have a pixel density of 2. However, these are only two of the many pixel densities in use on devices today. And what Twitter realized was that it was unnecessary to serve images beyond Retina density. This is because the human eye cannot distinguish between images beyond that density. This is an important realization because it decreased image size by 33%. The one exception to this is that they do continue to serve higher-density images in situations where the image is displayed full screen and the user can pinch-zoom on the image. Responsive images are just one of the many techniques that go into a fully optimized image. When we're talking with large brands, those optimizations not only include the usual suspects like compression or resizing, but also more advanced techniques like using machine learning for automated art direction or using A/B testing to evaluate the effectiveness of an image. And this is where image CDNs come in.
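For reference, the two flavors of responsive images described above look roughly like this in markup (file names are placeholders):

```html
<!-- Width-based: the browser picks the smallest file that fills the slot -->
<img src="photo-800.jpg"
     srcset="photo-400.jpg 400w, photo-800.jpg 800w, photo-1600.jpg 1600w"
     sizes="(max-width: 600px) 100vw, 600px"
     alt="Photo">

<!-- Density-based, capped at 2x as in Twitter's approach -->
<img src="photo.jpg"
     srcset="photo.jpg 1x, photo@2x.jpg 2x"
     alt="Photo">
```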
You can think of image CDNs as image optimization as a service. They provide a level of sophistication and functionality that can often be difficult to replicate on your own with local script-based image optimization. At a high level, image CDNs work by providing you with an API for accessing and, more importantly, manipulating your images. An image CDN can be something that you manage yourself or leave to a third party. Many companies do decide to go with a third party because they find that it is a better use of resources to have their engineers focus on their core business rather than the building and maintenance of another piece of software. Trivago is a travel site based in Europe that switched to Cloudinary, and this was exactly their experience. When Trivago switched to an image CDN, they found that overall image size decreased by 80%. Those results are very good, but they're not necessarily unusual. When talking with brands who've switched to image CDNs, we found that they experience a drop in image size of anywhere from 40% to 80%. I personally think part of the reason for this is that image CDNs can often provide a level of optimization and specialization that can be difficult to replicate on your own, if only due to lack of time and resources. Images are the single largest component of most websites, so this translates into a significant saving in overall page size. So next, let's talk about JavaScript, starting with deferring third-party scripts and embeds: things like ads, analytics, and widgets. Now, third-party code is responsible for 57% of JavaScript execution time on the web. That's a huge number. This is based on HTTP Archive data, and it represents a majority chunk of script execution time across the top 4 million websites. This includes everything across ads, analytics, and embeds, and a lot of these CPU-intensive scripts can cause long-running script execution that delays user interaction.
So we need to exercise a lot of care when we're including third parties in our pages. Now, when I ask folks how their JavaScript diet is going, it usually isn't very great. Tag managers, ads, libraries: maybe there's an opportunity for us to defer some of this work to a smarter point in time. Let's talk about a site that actually did this for real, the Telegraph. The Telegraph knew that improving the performance of third-party scripts would take time, and that it benefits from instilling a performance culture in your organization. They say that everybody wants that tag on their page that's going to make the organization money, and it's very important to get those individuals in a room to educate, challenge, and work together on this problem. So what they did was set up a web perf working group across their ads, marketing, and technology teams to review tags, so that non-technical stakeholders could actually understand what the opportunity was. What they discovered led to a change: the single biggest improvement at the Telegraph was deferring all JavaScript, including their own, using the defer attribute. Based on their tests, this hasn't skewed analytics or advertising. This is a really huge deal, especially for a publisher, because usually you see a lot of hesitation from marketing and advertising analytics folks, because there's this fear that you're going to end up losing revenue or not quite tracking as many users as you want to be able to track. But through collaboration, through building that performance culture, they were able to get to a place where the org kept building on top of this, including changes such as a six-second improvement in their time to interactive. So they still have work to do, but this is a really solid start. We can also talk about TUI, a travel operator in Europe.
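The Telegraph's fix, the defer attribute, looks like this in markup (script names are placeholders). Deferred scripts are fetched in parallel but only execute, in order, after the document has been parsed:

```html
<script defer src="/js/analytics.js"></script>
<script defer src="/js/ads.js"></script>
```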
They were looking at how to be more customer-centric, and realized that just adjusting prices wasn't going to cut it if visitors were leaving their site because of slow speed. Now, for speed projects at their organization to get off the ground, they had to get organizational buy-in from management all the way up to their CEO. And through a test-and-learn mindset, they were able to discover that when load times decreased by 71%, bounce rates decreased by 31%. Part of what allowed them to get to a place where they could improve performance were these two optimizations. TUI was using Google Tag Manager in the document head, in their case to inject tracking scripts and things like that. So they moved the execution of Google Tag Manager to after the load event. They didn't see any meaningful drop in tracking as a result, and from their perspective the result was great: a 50% reduction in DOM Complete. TUI also had a third-party A/B testing library that weighed 100 kilobytes of gzipped and minified script. They realized that even if they were to push this to after the onload event, it could potentially have some issues: they noticed some flickering as it would switch from one A/B test to the other. So they completely threw that dependency out, and they rewrote their A/B testing as something custom, as part of their CMS, in under 100 lines of JavaScript. The impact was being able to throw away that dependency, and a 15% reduction in home page JavaScript. Let's also talk about embeds. Now, we noticed that Lighthouse flags Chrome.com as having a high JavaScript execution time, despite it looking like it's mostly a static site. This would delay how soon users could interact with the experience. Now, what we saw was that Chrome.com actually had this Watch Video button on it, where they'd show you a promo if you clicked on the button. Unfortunately, they had dropped YouTube's default embed into their HTML.
And this was pulling in all of the YouTube video player, all of its scripts and resources, on initial page load, bumping their time to interactive up to 13.6 seconds. Now, the solution here was, instead of loading those YouTube embeds and their scripts eagerly on page load, switching to doing it on interaction. So now, when a user clicks to watch that video, that's the point when we load in all those resources on demand, because the user has signaled an intent that they're interested in watching. This led to a 69-point improvement in their Lighthouse performance score, as well as a 10-second-faster time to interactive. So a really big change. Now, no performance talk is complete without a discussion of the cost of libraries and how you should just remove all of them. But since that topic has been done so many times, I wanted to take a little bit different angle and instead talk about some alternatives to removing expensive libraries. In other words, if that's not an option for you, what are some other things you can look into? First is deferring or deprecating expensive libraries, taking steps to eventually remove the library. Then there's replacing the library with something less expensive, deferring the use of an expensive library until after the initial page load, and updating a library to a newer version. When replacing libraries, there are generally two things you want to look for: one, that the library is smaller, but also, maybe more importantly, that it's tree-shakeable. By only using tree-shakeable dependencies, you're ensuring that you only pay the cost of the parts of the library that you actually use. You can also defer the loading and use of expensive dependencies until after the initial page load. Tokopedia is an online retailer based in Indonesia, and they're using this technique on their landing page. They really wanted their initial landing page experience to be as fast as possible, so they rewrote it in Svelte.
The new version only takes 37 kilobytes of JavaScript to render above-the-fold content. By comparison, their existing React app is 320 kilobytes. I think this is a really interesting technique because they did not rewrite their entire app. Instead, they're still using the React app; they just lazy load it in the background using service workers. This can be a really nice alternative to rewriting an entire application. As I mentioned, Tokopedia used Svelte for their landing page. In addition to Svelte, Preact and lit-html are two other very lightweight frameworks to look into. And last, consider updating your dependencies. As a result of using newer technologies, newer versions of libraries are often much more performant than their predecessors. For example, Zalando is a European fashion retailer, and they noticed that their particular version of React was impacting page load performance. They A/B tested this and found that by updating from React 15.6.1 to 16.2, they were able to improve load time by 100 milliseconds and drive a 0.7% uplift in revenue per session. Now, another useful optimization to consider is code splitting. When we're thinking about loading routes and components, we ideally want to do three things: let the user know that something is happening, load the minimal code and data really fast, and render as quickly as possible. Code splitting enables us to do this more easily by breaking our larger bundles into smaller ones that we can load on demand. This enables all sorts of interesting loading patterns, including progressive bootstrapping. Now, JavaScript does have a real cost, and that cost comes in two parts: download and execution. Download times are critical for really slow networks, things like 2G and 3G, and JavaScript execution time ends up being critical for devices with slow CPUs, because JavaScript is CPU-bound.
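A minimal sketch of the on-demand loading that code splitting enables, using a dynamic import() so the bundler emits a separate chunk (the element ID, module path, and function name are illustrative):

```html
<button id="show-chart">Show chart</button>

<script>
  // The chart code ships as its own chunk and is only fetched
  // when the user actually asks for it.
  document.getElementById('show-chart').addEventListener('click', async () => {
    const { renderChart } = await import('./chart.js'); // illustrative module
    renderChart();
  });
</script>
```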
This is one of those places where small JavaScript bundles can be useful for improving your download speeds, lowering memory usage, and reducing your overall CPU costs. Now, when it comes to JavaScript, our team has a motto: if JavaScript doesn't bring users joy, thank it and throw it away. I believe that this was in an extended special of Marie Kondo's show. Now, one site that breaks up JavaScript pretty well is Google Shopping. They were interactive in under five seconds over 3G, and they have this goal of loading very, very quickly, including on their product details page. Shopping has at least three JavaScript chunks: one for above-the-fold rendering, one for code to respond to user interactions, and one for other features that are supported by Search. Their work to get to this place involved writing a new template compiler that produces smaller code, and also looking at things like a lighter experience for folks who are on the slowest of connections: they actually ship a version that's under 15 kilobytes of code for users in those types of markets. Another good example is Walmart Grocery. Walmart Grocery is a single-page application that loads as much as possible up front. They've been focused on cleaning up their code, removing old duplicate dependencies and anything that's unnecessary, and they've split up their core JavaScript bundles using code splitting. They've also been doing the things Katie suggested earlier, like moving to smaller builds of libraries like Moment.js. And the impact of this iterative work has been great: a 69% smaller JavaScript bundle and a 28% faster time to interactive. Now, they continue to work on shaving JavaScript off their experience to improve it as much as possible. We can also talk about Twitter. Twitter is a popular social networking site, and 80% of their customers use mobile every day.
And they've been focused on unlocking a user experience for the web that lets users access content pretty quickly, regardless of their device type or their connection. Now, when Twitter Lite first launched a few years ago, the team invested in many optimizations to how they load JavaScript. They used route-based code splitting and 40 on-demand chunks for breaking up those large JavaScript bundles, so that users could get interactive in just a few seconds over 4G. Between this and smart usage of resource hints, they were able to prioritize loading their bundles pretty early. So what did the team focus on next after that? Well, Twitter is a global site that supports 34 languages. Supporting this required a tool chain of libraries and plugins for handling things like locale strings. Now, after choosing a set of open-source tools, they discovered that on every build, they were including internationalization strings in a way that invalidated file hashes across the entire app. Each deploy would end up invalidating the cache for their users, and this meant that their service worker had to go and re-download everything. This is a really hard problem to solve, and they ended up rewriting and revamping their internationalization pipeline. This enabled code to be dropped from all of their bundles and translation strings to be lazy loaded. The impact was a 30-kilobyte reduction in overall bundle size, and it also unlocked other optimizations, such as the emoji picker in Twitter being loaded on demand, which saves their core bundles another 50 kilobytes. The changes to their internationalization pipeline also led to an 80% improvement in JavaScript execution time, so some nice wins all around. We can also take a look at JavaScript for your first-time users, those people who are coming to your experience for the first time, looking at Spotify.
So Spotify started serving their web player to users without an account, and they would show an option to sign up as soon as users clicked on a song. For first-time users who don't need the playback library or the core logic, they keep first-time page loads very, very low, with just 60 kilobytes of JavaScript, to get interactive really quickly. Once users actually authenticate and log in, they then lazy load the web player and the vendor chunk, meaning that you as a first-time user get a really quick experience and then an OK experience for the rest of your navigations. Now, Spotify recently also rewrote their web player in React and Redux, and one decision that they made was to improve the performance of navigations in the player. Previously, they would load an iframe for every view, which was bad for performance. They discovered that Redux was pretty good for storing data from REST APIs in a normalized shape and making use of it to start rendering as soon as a user clicks on a link. This enabled them to have quick navigations between pages, even on really slow connections, because they reduced overall API calls. And finally, we can take a look at Jabong. Jabong, as Katie mentioned earlier, is a popular fashion destination in India. They decided to rewrite one of their experiences as a PWA, and to keep that experience fast, they used the PRPL pattern: push, render, pre-cache, and lazy load. This allowed them to get interactive with just 18 kilobytes of JavaScript. They were using HTTP/2 server push, they trimmed their vendor bundles to 8 kilobytes, and they pre-cache scripts for future routes using service workers, which overall led to a TTI improvement of 82%, with some good business wins off the back of it. Performant sites display text as soon as possible. In other words, they don't hide text while waiting for a web font to load.
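The one-line CSS fix for this, covered next, looks like the following (the font name and URL are placeholders):

```css
@font-face {
  font-family: 'ExampleFont';
  src: url('/fonts/example.woff2') format('woff2');
  /* Show fallback text immediately; swap in the web font when it arrives */
  font-display: swap;
}
```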
By default, browsers hide text if the corresponding font is not loaded, and the length of time that they will do this for depends on the browser. It's simple to see why this is not ideal. The good news is that the fix is also simple. Wherever you declare an @font-face, simply add the line font-display: swap. This tells the browser to use the default system font initially, and then swap it out for the custom web font once it arrives. Although you do currently have to self-host web fonts to add font-display to your pages, right? Yes, but we have a special announcement today. So developers have been asking us to do something with Google Fonts for about a year and a half, and today we're happy to announce that Google Fonts is going to support font-display. So you'll be able to set things like font-display: swap, optional, and the full set of values. We're very excited about this change, and it actually just came in last night, so we've got some docs to update. Let's also talk about resource hints. Browsers do their best to prioritize fetching the resources they think are important, but you as an author know more about your experience than anybody else. Thankfully, you can use things like resource hints to get ahead of that. Here are some examples. Barefoot is an award-winning wine business. They recently used a library called Quicklink, which is under a kilobyte in size and prefetches in-viewport links using Intersection Observer, and what they saw off the back of this was a 2.7-second faster time to interactive for future pages. Jabong is a site that is very heavily dependent on JavaScript for their experience, so they used link rel=preload to preload their critical bundles and saw a 1.5-second faster time to interactive off the back of that. And Chrome.com was originally connecting to nine different origins for its resources.
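In markup, these hints take the form of link elements; a sketch with placeholder URLs:

```html
<!-- Warm up the connection to an origin you'll fetch from soon -->
<link rel="preconnect" href="https://cdn.example.com">

<!-- Fetch a critical, late-discovered resource at high priority -->
<link rel="preload" as="script" href="/js/critical-bundle.js">

<!-- Fetch a likely next navigation at idle priority -->
<link rel="prefetch" href="/next-page.html">
```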
They used link rel=preconnect and saw a 0.7 second decrease in latency off the back. What are other folks doing with prefetching? So eBay is one of the world's most popular e-commerce sites, and to help speed up how soon users can view content, they've started to prefetch search results. So eBay now prefetch the top five items on a search results page for faster subsequent loads. This led to an improvement of 759 milliseconds for a custom metric called above-the-fold time. It's a lot like first meaningful paint. eBay shared that they're already seeing a positive impact on conversions through prefetching. The way this works is they effectively do their prefetching inside a requestIdleCallback, so once the page kind of settles. And this is rolling out to a few different regions right now. It's shipped to eBay Australia, and it's coming soon to the US and UK. Now, as part of eBay's site speed initiative, they're also doing predictive prefetching of static assets. So if you're on the home page, it'll fetch the assets for the search page. If you're on the search page, it'll do it for the item page, and so on. Right now the way they're doing predictive prefetching is a little bit static, but eBay are excited to experiment with using machine learning and analytics in order to do this a little more smartly. Now, another site using a very similar technique is Virgilio Sports. They're a sports news website by Italiaonline, and they've been improving the performance of their core journeys. They track impressions and clicks from users navigating around the experience, and they're able to use link rel=prefetch and service workers to prefetch the most-clicked article URLs. Every seven minutes, their service worker will go and fetch the top articles picked by their algorithms, except if you're on a slow 2G or 2G connection.
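The "prefetch the top results once the page settles" pattern can be sketched like this. It's an illustrative sketch, not eBay's implementation; the top-five cutoff mirrors the talk, everything else is assumed:

```javascript
// Hypothetical sketch of idle-time search result prefetching.

// Pure helper: pick which result URLs are worth prefetching (top 5 here,
// matching the eBay example above).
function topResults(resultUrls, n = 5) {
  return resultUrls.slice(0, n);
}

// Browser-only part: inject <link rel="prefetch"> tags at idle time, so
// prefetching never competes with work needed for the current page.
function prefetchWhenIdle(resultUrls) {
  const idle = window.requestIdleCallback || ((cb) => setTimeout(cb, 1));
  idle(() => {
    for (const href of topResults(resultUrls)) {
      const link = document.createElement('link');
      link.rel = 'prefetch';
      link.href = href;
      document.head.appendChild(link);
    }
  });
}
```

A production version would also check the Network Information API and skip prefetching on slow or data-saver connections, as Virgilio Sports does.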
The impact of this was a 78% faster article fetch time, and they've also seen that article impressions have been on the rise too. After just three weeks of using this optimization, they saw a 45% increase in article impressions. Critical CSS is the CSS necessary to render above-the-fold content. It should be inlined, and the initial document it is inlined into should be delivered in under 14 kilobytes. This allows the browser to render content to the user as soon as the first packet arrives. In particular, critical CSS tends to have a large impact on first contentful paint. For example, TUI is a European travel site, and they were able to improve their first contentful paint by 1.4 seconds, down to 1 second, by inlining their CSS. Nikkei is another site using critical CSS. They're a large Japanese newspaper publisher, and one of the issues they ran into when implementing this was that they had a lot of critical CSS, 300 kilobytes to be specific. Part of the reason for that was that there were a lot of differences in styles between pages, but also due to factors like whether a user was logged in, whether a paywall was on, whether a user had a paid or free subscription, and so on. Once they realized this, they decided to create a critical CSS server that took in all these variables as inputs and returned the correct critical CSS for a given situation. The application server then inlines this information, and it's returned to the user. They're now taking this optimization a step further and trying out a technique known as edge-side includes. Edge-side includes is a markup language that allows you to dynamically assemble documents at the CDN level. Why this is exciting is that it allows Nikkei to get the benefits of critical CSS while also being able to cache the CSS, granted, caching it at the CDN level and not the browser level.
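The critical CSS server described above essentially maps a tuple of page variables to a precomputed stylesheet. Here's a hedged sketch of that lookup; the variant keys, the CSS strings, and the `criticalCssFor` function are all invented for illustration, not Nikkei's actual system:

```javascript
// Hypothetical sketch of a critical-CSS-by-variant lookup.

// Precomputed critical CSS per variant, keyed by the variables the
// above-the-fold styles actually depend on. CSS values are placeholders.
const criticalCssByVariant = new Map([
  ['top|anonymous|paywall-off', 'body{font:16px serif}nav{display:flex}'],
  ['top|logged-in|paywall-off', 'body{font:16px serif}.account{display:block}'],
  ['article|logged-in|paywall-on', 'body{font:16px serif}.paywall{display:block}'],
]);

// Build the cache key from the inputs, and fall back to a safe default
// when a variant has no precomputed critical CSS yet.
function criticalCssFor({ page, loggedIn, paywall }) {
  const key = [
    page,
    loggedIn ? 'logged-in' : 'anonymous',
    paywall ? 'paywall-on' : 'paywall-off',
  ].join('|');
  return criticalCssByVariant.get(key) ?? '/* fall back to full stylesheet link */';
}
```

Because the key is deterministic, the same lookup works whether it runs on the application server (inlining the result) or at the CDN edge via an ESI include.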
In the event that the necessary CSS isn't already cached on the CDN, it simply falls back to serving the default CSS, and the requested CSS is cached for future use. Nikkei is still testing out the use of edge-side includes, but just through dynamic critical CSS alone, they were able to decrease the amount of inline CSS in their application by 80% and improve their first contentful paint by a full second. Brotli is a newer compression algorithm that can provide better text compression than gzip. OYO is an Indian hospitality company, and they use Brotli to compress CSS and JavaScript. This has decreased the transfer size of their JavaScript by 15%, which has translated into a 37% improvement in latency. Most companies are only using Brotli on static assets at the moment. The reason for this is that, particularly at high compression ratios, Brotli can take longer, and sometimes much, much longer, than gzip to compress. But that isn't to say that Brotli can't be used on dynamic content and used effectively. Twitter is currently using Brotli to compress their API responses, and on P75 payloads, so this would be some of their larger payloads, they found that using Brotli decreased the size by 90%. This is really large, but it makes sense when viewed in the context of the fact that compression algorithms are more effective on larger payloads. And our last topic is adaptive serving. So loading pages can be a different experience depending on whether you're on a slow network or a slow device, or you're on a high-end device. Now, the Network Information API is one of those web platform features that gives you a number of signals, such as the effective type of the user's connection and Save-Data, so you can adapt. But really, loading is a spectrum, and we can take a look at how some sites handle this challenge. So for users on low-end mobile devices, Facebook actually offers a very basic version of their site that loads very fast.
It has no JavaScript, it has very limited images, and it uses minimal CSS, with tables mostly used for layout. What's great about this experience is that users can view and interact with it in under two seconds over 3G. What about Twitter? So cross-platform, Twitter is designed to minimize the amount of data that you use, but you can further reduce data usage by enabling data saver mode. This allows you to control what media you want to download, and in this mode, images are only downloaded when users tap on them. On iOS and Android, this leads to a 50% reduction in data usage from images, and on web, anywhere up to 80%. These savings add up, and users still get an experience that's pretty fast with Twitter on limited data plans. As part of looking into how Twitter are handling their usage of effective type, we discovered they're doing something really fascinating: they're handling image uploads in an interesting way. So on the server, Twitter compresses images to 85% quality JPEG and a max edge of 4,096 pixels. But what about when you've got your phone out and you're taking a picture, but you're on a slow connection and may not be able to upload it? Well, on the client, what they now do is check if images appear to be above a certain size threshold, and if so, they draw the image to a canvas, output it at 85% quality JPEG, and see if there's an improved size. Often this can decrease the size of phone-captured images from four megabytes down to 500 kilobytes. The impact of this was a 9.5% reduction in canceled photo uploads. And they're also doing all sorts of other interesting things depending on the effective type of the user's connection. And finally, we've got eBay. So eBay are experimenting with adaptive serving using effective type. If a user is on a fast connection, they'll lazy load features like product image zooming. If you're on a slow connection, it isn't loaded at all.
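Adaptive serving decisions like eBay's usually boil down to mapping the Network Information API's signals to an experience tier. Here's a minimal sketch; the tier names and thresholds are assumptions for illustration, not any site's actual policy:

```javascript
// Hypothetical sketch of adaptive serving driven by effectiveType and
// the Save-Data hint. Tier names and cutoffs are illustrative.

// Pure decision logic, testable without a browser.
function experienceTier({ effectiveType, saveData }) {
  if (saveData) return 'lite';                              // user opted into less data
  if (effectiveType === 'slow-2g' || effectiveType === '2g') return 'lite';
  if (effectiveType === '3g') return 'core';                // skip heavy extras like image zoom
  return 'full';                                            // 4g and better: everything
}

// In the browser, feed it navigator.connection, guarded because the
// Network Information API is not available in every browser.
function currentTier() {
  const conn = typeof navigator !== 'undefined' && navigator.connection;
  return conn ? experienceTier(conn) : 'full';
}
```

Defaulting to the full experience when the API is missing matches the progressive-enhancement spirit of these examples: the signal enables savings where it exists, and its absence costs nothing.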
eBay are also looking at limiting the number of search results presented to users on really slow connections. These strategies allow them to focus on small payloads and really give users the best experience for their situation. So those are just a few of the things people are doing with adaptive serving. It's almost time for us to go. It is. We hope you found our talk helpful. Remember that you can get fast with many of the optimizations we talked about today, and you can stay fast using performance budgets and LightWallet. That's it from us. Thank you. Thank you.