Today, Hussein and I are going to talk with you about how you can make your site fast. We're going to focus on three things: images, web fonts, and JavaScript. We've chosen to focus on these three because they are the three largest components of most websites. In addition, they're likely to be the three largest components of your performance budget. We hope that after this presentation you'll go home and make changes to your website. Know that during this process, you can lean on both Lighthouse and web.dev for additional resources. Almost everything we cover today can be audited by Lighthouse. In addition, at web.dev, you can find guides, code samples, and demos of everything we cover today.

So let's start by talking about images. Images are taking over the web. On many sites, images alone would consume the entire performance budget, and on some sites they would far exceed it. I think the reason these numbers are so bad lies in the fact that performant images are the result of many steps and optimizations. As a result, they're not going to happen accidentally. A performant image is in the appropriate format. It is appropriately compressed. It is appropriately sized for the display. And it is loaded only when necessary. To be successful with images, it's imperative that you automate and systematize these things. Not only is this going to save you time, it's going to ensure that these things actually get done.

There's much more to images than meets the eye. At a bits-and-bytes level, an image is as much a product of its image format and its compression as of its visual subject matter. You can think of choosing an image format as choosing the right tool for the job. The image format that you choose determines what features an image has, for instance whether it supports transparency or animation, as well as how it can be compressed.

The first image format that I want to talk about today is the animated GIF. Don't be fooled by their crappy image quality: they're actually huge in file size. This one-and-a-half-second clip is 6.8 MB as a GIF. As a video, however, it is 16 times smaller, at around 420 KB. This is not uncommon. Animated GIFs can be anywhere from 5 to 20 times larger than the same content served as a video. This is why, if you've ever inspected your Twitter feed, you may have noticed that the content labeled as GIF is not actually a GIF. Twitter does not serve animated GIFs; if you upload one, they will automatically convert it to video. The reason for the drastic difference in file size between videos and animated GIFs lies in the differences between their compression algorithms. Video compression algorithms are far more sophisticated. Not only do they compress the contents of each frame, they also do what is known as inter-frame compression, which you can think of as compression that looks at the differences between frames.

The first step in switching from animated GIFs to video is to convert your content. You can use the FFmpeg command-line tool for this. Next, you'll need to update your HTML and replace image tags with video tags. The code I have up on the screen is technically correct, but it's probably not what you want to use. Instead, you want to make sure to add the four attributes I've highlighted up on the screen. That's going to give your video that GIF look and feel, even though it's not a GIF.
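As a rough sketch of what that swap might look like in practice (the file names here are placeholders, and the conversion itself can start with something as simple as ffmpeg -i animation.gif animation.mp4), the before-and-after markup could be:

    <!-- Before: the animated GIF -->
    <img src="animation.gif" alt="A short clip">

    <!-- After: the same clip as a video; these four attributes
         recreate the GIF's autoplay-and-loop behavior -->
    <video autoplay loop muted playsinline>
      <source src="animation.mp4" type="video/mp4">
    </video>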
Now we'll switch gears and talk about a much more modern image format, and that, of course, is WebP. WebP is no longer a Chrome-only technology. Last month, Microsoft Edge shipped support for WebP. In addition, Mozilla Firefox announced their intent to ship WebP. Currently, 72% of global web users have support for WebP, and given these recent developments, you can expect this number to only increase. This is a big deal because WebP images are 25% to 35% smaller than the equivalent JPEG or PNG, and this translates into some really awesome improvements in page speed. One site that added support for WebP found a 30% improvement in page load times in WebP-supported browsers.

By far the biggest hesitation I see around adopting WebP is a fear that you can't serve WebP and still support non-WebP browsers. This is not true. The picture and source tags make it possible to do precisely this. You can think of the picture tag as a container for the source and img tags that it contains. The source tag is used to specify multiple formats of the same image. The browser will download the first, and only the first, image that is in a format it supports. So in the example I have up on the screen, a Chrome browser would download the WebP version, while a Safari browser would download the JPEG version. The great thing about this is that all major browsers have supported the picture and source tags since 2015, and even if, say, a 2014 browser were to encounter this markup, it would still work, because that browser would simply download the image specified by the img tag.

You may have noticed I've been talking about image formats, but I want to go on a quick tangent and squeeze in a mention of the AV1 video format. The reason I wanted to squeeze it in is that it is the future of video on the web, and the reason it's the future of video on the web is that it compresses video 45% to 50% better than what is currently typically used on the web. It's still fairly new, so it's not really practical for you to be implementing it on your site yet. However, I encourage you to attend Francois and Angie's talk at 3:30 today, where they'll be diving into AV1 in more detail.

Image compression is a topic that's tightly coupled to image formats. Image compression algorithms are specific to the image formats they compress. However, all image compression can be broken down into lossless and lossy compression. Lossless compression results in no loss of data. Lossy compression does result in loss of data; however, it can achieve greater file size savings. At a minimum, all sites should be using lossless compression, no questions asked. However, for most people it's going to make sense to be slightly more aggressive and use lossy compression instead. The trick with lossy compression is finding the sweet spot between file size savings and image quality for your particular use case. Many lossy compression tools use a scale of 0 to 100 to represent the quality of the compressed image, with 0 being the worst and 100 being the best. If you're looking for a place to start with lossy compression, we recommend trying a quality level of 80 to 85. This typically reduces file size by 30% to 40% while having a minimal effect on image quality.

By far the most popular tool for image compression is imagemin, and it can be used with just about everything. imagemin is used in conjunction with various imagemin plugins, and you can think of these plugins as implementations of different image compression algorithms.
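As a rough sketch (not a drop-in recipe), running imagemin from a Node script with the mozjpeg plugin might look something like this; the paths and the quality value of 80 are just illustrative, and the exact import style and options vary a little between imagemin versions:

    const imagemin = require('imagemin');
    const imageminMozjpeg = require('imagemin-mozjpeg');

    (async () => {
      // Compress every JPEG in images/ and write the results to build/images/.
      const files = await imagemin(['images/*.jpg'], {
        destination: 'build/images',
        plugins: [imageminMozjpeg({ quality: 80 })]
      });
      console.log(`${files.length} images compressed`);
    })();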
Up on the screen, I've put the most popular imagemin plugins for various use cases; however, these are by no means the only imagemin plugins available.

Image sizing is something I think many sites could be doing a much better job at. We have so many types of devices, and specifically so many sizes of devices, accessing the web these days; however, we insist on serving them all the exact same size of image. Not only does this have transmission costs, it also creates additional work for the CPU. The solution, of course, is to serve multiple sizes of an image. Most sites find success serving anywhere from three to five sizes of an image, and in fact, this is exactly what Instagram does. Instagram uses this technique throughout their site; however, one use case where they were able to measure its impact was their Instagram embeds. For context, Instagram embeds allow third-party sites to display Instagram content on their own pages. As a result of serving multiple image sizes, Instagram was able to reduce image transfer size by 20% for their embeds.

Two popular tools for image resizing are sharp and Jimp. The biggest difference between the two is that sharp is faster, and when I say faster, I mean faster at image processing; however, it requires native C and C++ dependencies to install. In addition to creating multiple sizes of your images, you'll need to update your HTML. You'll want to add the srcset and sizes attributes. The srcset attribute allows you to list multiple versions of the same image. In addition to including the file path, you'll also want to include the width of each image; this saves the browser from having to download the image to figure out how large it is. The sizes attribute tells the browser the width that the image will be displayed at. Using the information contained in the srcset and sizes attributes, the browser can then figure out which image it should download.

Lazy loading is the last image technique that I'll be talking about today. Lazy loading is the strategy of waiting to download a resource until it is needed. In addition to images, it can be applied to resource types like JavaScript. Image lazy loading helps performance by easing the bottleneck that occurs on initial page load. In addition, it saves user data by not downloading images that may never be viewed. Spotify is an example of a website that uses this technique very effectively. On the particular page that I pulled up, image lazy loading was the difference between loading 1 MB of images on initial page load and 18 MB of images on initial page load. That's a huge difference. Two tools to look into for image lazy loading are lazysizes and lozad.js, and you implement them both in more or less the same way: add the script to your site and then indicate which images should be lazy loaded (I'll show a small sketch of that markup in a moment). However, just because this is a fairly simple technique to use does not mean it's not important. In fact, it is so important that native lazy loading is coming to Chrome. Native lazy loading means that you'll be able to take advantage of lazy loading without having to add third-party scripts to your site. It will be available for both images and cross-origin iframes. And you can truly be lazy when it comes to implementing it: if you make no changes to your HTML, the browser will simply decide which resources should be lazy loaded. If you do care, however, you can use the lazyload attribute to specify which resources should or should not be lazy loaded.
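For the script-based approach, a minimal lazysizes setup might look something like this (the file paths are placeholders); the real image URL goes in data-src, and the lazyload class tells the script which images to watch:

    <!-- Images to be lazy loaded use data-src plus the lazyload class -->
    <img data-src="photos/kitten.jpg" class="lazyload" alt="A kitten">

    <!-- lazysizes swaps data-src into src as images approach the viewport -->
    <script src="lazysizes.min.js" async></script>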
Fonts can cause performance problems because they are typically large files downloaded from third-party sites. As a result, they can take a while to load. This leads to the phenomenon known as the flash of invisible text, and shockingly, this affects two out of every five mobile sites. The flash of invisible text looks like this: instead of being greeted with text on your site, the user is greeted with nothing. Not only is this frustrating, it also looks bad. What you want to occur instead is the flash of unstyled text. This is when the browser initially displays text using a system font and then swaps it out for the custom font once it has arrived. The good news here is that this fix is literally a one-liner: everywhere in your CSS where you declare a font face, add the line font-display: swap. This tells the browser to use the swapping behavior that I just described on the previous slide.

Now, I'm going to hand the mic over to Hussein. He's going to talk with you about techniques you can use with your JavaScript.

So Katie showed a number of techniques that can be quite useful for the images and web fonts on your site, as well as a few exciting things coming to the Chrome platform in the near future, like native lazy loading. For the rest of this talk, we'll go over some other important things you should be doing, but for the JavaScript that makes up your application. Earlier in the session, we saw how images can make up the majority of a site with regard to the number of bytes sent. However, we also send a significant amount of JavaScript to browsers. If we take a look at HTTP Archive data once again, as of last month, the median amount of JavaScript shipped to mobile web pages was about 370 kilobytes; for desktop, the number was about 420. Now, JavaScript code still needs to be decompressed, parsed, and executed by the browser, so in reality we're looking at close to a megabyte of uncompressed code that the browser needs to process for an application of that size. Users who try to access this on low-end mobile devices will notice much poorer performance.

But why are we as developers shipping way more JavaScript code than we've ever done before? There are a number of reasons, one of them being the number of dependencies that we pull into our applications and how easy that process has become. Front-end tooling has come a long way in the past decade, but that has come at some cost. So what can we do to continue to build robust, fully fledged applications, but not at the expense of user experience?

The very first thing we can and should consider doing is splitting our bundle. The idea behind code splitting is that instead of sending all the JavaScript code to your users as soon as they load the very first page of your application, you only send them what they need for their initial state and then allow them to fetch future chunks on demand. The easiest way to get started with code splitting is by using dynamic imports. Dynamic imports have been supported in webpack for quite some time; they allow you to import a module asynchronously, where a promise gets returned. Once that promise resolves, you can do what you need to do with that piece of code. The idea is to make sure the import fires on certain user interactions, so that you only fetch code when it's actually needed.
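As a simple sketch of that idea, you might trigger a dynamic import from a click handler like this; the module path, export name, and element ID are hypothetical:

    const button = document.querySelector('#launch-chart');

    button.addEventListener('click', () => {
      // The chart code is only fetched the first time the user asks for it.
      import('./chart.js')
        .then((module) => {
          module.renderChart();
        })
        .catch((err) => {
          console.error('Could not load the chart module', err);
        });
    });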
If you happen to be using another module bundler like Parcel or Rollup, you can still use dynamic imports to code split where you see fit. Now, a number of JavaScript libraries and frameworks have provided abstractions on top of dynamic imports to make the process of code splitting easier with your current tooling. With Vue, for example, you can define async components, which are just functions that return a promise that resolves to the component itself. By combining that with dynamic imports, you can attach async components to your routing configuration so that the code that lives in a component is only fetched when its route is reached. Angular has a very similar pattern. In its router, you can use the loadChildren attribute to connect a feature module to a specific route. With Ivy, the new rendering engine the Angular team is working on, you'll be able to define loadChildren with a dynamic import. With this approach, all the code, all the components, and all the services that live in the feature module will only get loaded when that route is reached. In the meantime, you can still use loadChildren; you just need to use a relative file path to the feature module. With React, libraries like react-loadable and loadable-components have allowed us to code split at the component level while taking care of other things, like showing a loading indicator or an error state where applicable. However, with React 16.6, the lazy method was introduced, and this allows you to code split while using Suspense. Suspense is a feature that the React team has been working on for quite some time, and it allows you to suspend how certain component trees update their state or update the DOM depending on whether their child components have finished fetching their data.

Another very useful technique that ties in well with code splitting your bundle is preload. Preload allows us to tell the browser that a late-discovered resource, one that's fetched late in the request chain, should be downloaded sooner because it's important. By doing this, we're telling the browser to prioritize it. To use preload, you only need to add a link element to the head of your HTML document with a rel attribute whose value is preload. The as attribute is used to define what type of file you'd like to load.

Now, as developers, it's also important to make sure that the code we write works well in all the browsers people use to access our site. So if we happen to include ES2015, ES2016, or later syntax, we also want to include backwards-compatible versions so older browsers can still understand it. This usually involves adding transforms for any newer syntax that we use and polyfills for any newer features. Because transpiling means we're adding code on top of our bundle, our application ends up being larger than it was as originally written. One way to make sure that we only transpile the code that's actually needed is by using Babel's preset-env. This preset takes the hassle out of trying to micromanage which plugins and polyfills we need to add, by allowing us to specify a target list of browsers and letting Babel handle the rest. You can add this preset to the list of presets in your Babel configuration, and you can use the targets attribute to define the set of browsers you'd like to reach. Now, this is a browserslist query.
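A minimal Babel configuration along those lines might look something like this; the 1% market-share threshold and the useBuiltIns setting are just example values, and on newer Babel versions you may also need to specify a corejs option alongside useBuiltIns:

    {
      "presets": [
        ["@babel/preset-env", {
          "targets": "> 1%",
          "useBuiltIns": "usage"
        }]
      ]
    }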
So if you've used tools like Autoprefixer before, you may already be familiar with it. Using a percentage like this is one type of query you can use, and it allows you to target browsers above a certain global market share. The useBuiltIns attribute lets us tell Babel how to handle adding polyfills. The usage value means that Babel will automatically include polyfills in a file only when they're actually needed for the features used there. Now, this is the behavior we all want: only transpile and polyfill code where it's required.

So although preset-env means we can limit the amount of transpiled code and make sure we only include what's necessary for the browsers we plan to target, what if there were a way to differentially serve two different types of bundles? One that's largely untranspiled, for newer browsers that don't need nearly as many polyfills, and another legacy bundle that contains more polyfills and is a bit larger, but is needed for older browsers. We can do this by using JavaScript modules. JavaScript modules, or ES modules, allow us to write blocks of code that import and export from other modules. But the amazing thing about using modules with preset-env is that we can use module support itself as the target instead of a specific browser query.

One site that's actually using this module approach today is the New York Times, and they're using it for one of the flagship articles of the year: polling in real time for the 2018 midterm elections. They're using Sapper as their client-side framework, which has a number of progressive enhancements baked in, like automatic code splitting, but they're also using Rollup to emit module chunks as well. They're using a fairly simple heuristic to make sure that users on older browsers download a larger, more polyfilled bundle, while users on newer browsers download only the smaller, slimmer modules.

A very simple way to make sure that users who access your app only download one or the other is the module/nomodule technique. When you define a script element with type module, browsers that understand modules will download it normally, but they'll know to ignore any script element that has the nomodule attribute. Similarly, browsers that don't understand modules will ignore any script elements with type module, but since they don't recognize the nomodule attribute, they'll download that bundle instead. So here, we get the best of both worlds, shipping the right bundle to our users depending on what browser they use. If you happen to have critical modules that you'd like to download sooner, you can do that by preloading them as well; you just need to specify a modulepreload value for the rel attribute.

So we've talked about a few things you can do to improve the code that you ship to your users. But if you're thinking of adding any of these optimizations, it can be useful to keep an eye on things, and there are tools out there that can actually make this easier. The Code Coverage tab within Chrome DevTools allows you to see the size of all your bundles as well as how much of them is actually being used. You can access it by opening the command menu and typing in coverage. If you're using webpack, Webpack Bundle Analyzer can be a very handy tool. It gives you a nice zoomable treemap visualization of your entire bundle, so you can see which parts of your bundle are larger and which parts are smaller.
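If you want to try it, a minimal setup might look something like this, assuming the webpack-bundle-analyzer package is installed from npm:

    // webpack.config.js
    const { BundleAnalyzerPlugin } = require('webpack-bundle-analyzer');

    module.exports = {
      // ...the rest of your existing configuration
      plugins: [
        // Opens an interactive treemap of the bundle when the build finishes.
        new BundleAnalyzerPlugin()
      ]
    };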
And if you've ever wanted to find the cost of a specific library, you can use Bundlephobia. You can type in the name of any package and see how large it is, as well as how much of an impact it can have on your application in terms of download time. You can also scan your package.json file to see how much of an impact all your packages make.

Now, as useful as it is to use tools to manually keep an eye on how things are doing with your bundle size, it can be especially useful to also include checks in your build workflow. One tool that can help here, and allows you to set performance budgets, is Lighthouse CI. So instead of only running Lighthouse in the Chrome Audits panel or as a Chrome extension, you can also run Lighthouse in CI and have it included as a status check in your workflow. You can specify certain Lighthouse categories and set scores for them so that pull requests only get merged if those scores are met.

Now, a site that's actually taking steps to add a number of these optimizations is Uniqlo. They're a clothing retailer based out of Japan, and they're taking steps to improve their entire web architecture, beginning with their Canadian site. They've identified a number of critical resources and decided to try and download them sooner, and they're doing this by preloading them. They've done this with some images, some core fonts, as well as a number of cross-origin fetches. They then also identified that they could code split and get some wins that way as well. They took the correct first step of code splitting at their route level, and just by doing that alone, they cut their bundle size almost in half. They then moved on to code split their localization packages and got their bundle size down to about 200 kilobytes. After this, they added even more optimizations, such as using a Preact compatibility layer for their React bindings, to get their bundle size to about 170 kilobytes. While doing all of this, they made sure to also set budgets so their whole team could stay in sync, and they're using another open-source tool to help here called bundlesize. They've set 80-kilobyte budgets for each one of their chunks, which allows them to stay under a 200-kilobyte total for all of their scripts.

While adding these optimizations, they noticed a two-second time-to-interactive reduction for users on low-end mobile devices and weaker connections. Now, you might think two seconds is not that much, but it can make an impact for your customers. After these optimizations were added, they noticed a 14% reduction in bounce rate, a 31% increase in average session duration, and a 25% increase in pages viewed per session. Now, there were other things also being added to the site at the same time, but they know that performance played a very big factor here.

So we've talked about quite a few things that you can do today to improve how your site performs. But what can Chrome do as a browser as well? For users that opt in to data-saving mode, Chrome will try to show a lightweight version of the page where possible, and it does this by minimizing the data used as well as showing cached content whenever it can. Now, as developers, you can tap into this as well, and you can do this by using the Network Information API. If you look at the navigator.connection.saveData attribute, you can identify whether your users actually have Data Saver enabled.
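A small sketch of that check might look like this; the two rendering functions are hypothetical and stand in for whatever lighter and full experiences make sense for your site:

    // navigator.connection is not available in every browser, so feature-detect first.
    if (navigator.connection && navigator.connection.saveData) {
      // The user has opted in to Data Saver.
      renderLowDataExperience();
    } else {
      renderFullExperience();
    }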
And you can try to serve a slightly different experience to make sure things are fast for them as well. You can also use the effectiveType attribute to serve different assets conditionally, depending on the connection your users are on.

The very last thing that I want to mention is that although Katie and I have talked about a lot of things you can do to improve your site, every application is built differently, every team is different, and every toolchain is different. So this isn't something you need to do wholesale, including everything at once. By setting budgets and keeping an eye on your bundle size from the very beginning, you can include performance enhancements step by step and make sure your site never regresses. Performance doesn't need to be an afterthought. Almost everything we've talked about is on web.dev, so I highly suggest you take a look if you haven't already. We hope you enjoyed this talk as much as we enjoyed giving it. Thank you.