and welcome to another episode of Baking the Web. Today, we're going to be baking Chrome, the web browser from Google. It's going to take about 10 years, so clear your schedule. We start with this, Safari, but we don't need all of it. We just need this. This is WebKit. So I'm going to add that into the mix. To make sure the mixture is buttery smooth, we add butter. You can never have enough butter. Now we need to mix all of this together. You can use any mix blend mode you like, as long as it's powered by a V8 engine. We're going to perform the acid test. And... it's fine. This has been in preparation for a few years now, so naturally, it's picked up a few bugs. Try and get as many of those out as you can. And at this point, we're going to add a Dart. Ooh! No, no, no. Let's save this for another dish. Now it's time to get the WebKit out of this mixture. Fork it. It's important to tidy up as we go along. Of course, we can do other things at the same time because it's async. And now we're going to take this bit and flatten it. Hold on, hold on. You can't flatten it. We're going to smoosh it. No. Flatten? Flatten. So now we take this and we add a little threading, some streaming, some video, some HD video. And if you're feeling brave, a few experimental flags. Pop it in the blender and give it one last mix. Put this on the tray, set the timer for 10 years, and wait. And that's all there is to it.

Paul nudged me. He nudged me. I didn't. I did nothing of the sort. Careful, cake carry. It's going to be fine. It's... Oh, no. Oh, no. I just need to fix this bit. It's going to be okay. We should probably get on with the conference.

Welcome to Chrome Dev Summit! I'm Paul. And I'm... This is Jake. Yeah. So welcome to two days of talks, forums, demos, and, if you're up for it, some social interaction. Theme for day one, mate. Theme for day one. Theme for day one is the web today and how far we've come so far. And the theme for tomorrow, day two, well, it's the future. I mean, it's quite literally the future. But it's also the theme of tomorrow, where we're going to be introducing some experimental and new stuff, and we'd love to get your feedback to know how best we can help you. Yeah. So hopefully everybody saw the guideline videos before we came on stage. It's really important for us to have a safe and inclusive space. This is for everybody to learn and everybody to share. So if anybody or anything is not making that possible, please do let us know: tell one of the staff members wearing a red t-shirt, or send an email to the address that's posted on the website and throughout the venue on the signage. Now we have live captioning here and here, thanks to Norma for providing those today. Especially useful for fast talkers like you. Yeah, like me. Yeah. Can we leave? I need to go. Can we do the first talk now? Yep. Do that. So to kick off the conference, please welcome Ben and Dion.

I'm kind of excited about having some cake. I know. Maybe we could get some cupcakes later. I think there are cupcakes later, but I'm pretty sure they're not from the leftover cake. I hope not. I hope not.

Welcome to Chrome Dev Summit. I'm Ben Galbraith. I lead product for the Chrome platform teams. And I'm Dion Almaer. And I lead product for our developer ecosystem efforts. Thanks for joining us at the Yerba Buena Center for the Arts here in San Francisco. And a special shout out to those of you watching on the live stream.
We look forward all year to this opportunity to gather with developers from around the world to celebrate the web platform. And of course, it's an opportunity for the Chrome team and others here at Google to provide updates on our latest work and get feedback from the community. 2018 marks 10 years since the launch of Chrome, but it also marks Android's 10-year anniversary and 20 years since Google was founded back in 1998. So this is a really big year for us, and it's got us reflecting on these past 10 years.

Back in 2008, Dion and I were at Mozilla working on developer tools when we received probably the most unique product announcement we'd ever seen: the Chrome comic, which walked through Chrome's new features like the omnibox, and architectural innovations like the multi-process architecture and the high-performance V8 JavaScript runtime. And years later, it's really fun to know that V8 was reused by Node.js and plenty of other open source projects out there. Chrome launched in the midst of a web renaissance, the Ajax movement, which saw developers pushing the boundaries of what was possible on the web, doing things that we just hadn't seen before. And it's amazing to see just how far the web has come in these 10 years, with publishers like Huffington Post transforming the typical documents of the web into rich interactive experiences. And Google Maps, an early pioneer of the Ajax era, has gone from maps of tiled images to a street-to-space continuum combining 360-degree photos, vector maps, 3D models, and satellite imagery, all just in the browser. And productivity apps have come into their own on the platform, with pro tools like the incomparable AutoCAD and the collaborative design platform Figma. And because they're on the web, you can get right into these things. And when you're done being productive, you can pop into high-quality games like Crossy Road for, you know, just a minute. Just a quick minute.

And Chrome's changed quite a bit over this time, and more than just these subtle changes to the logo. Earlier this year, we highlighted Chrome's refreshed design and updated usability enhancements, such as better tab treatment for people like Dion that have like 500 tabs open at a time. That's true. And also a smarter omnibox and autofill features. But the most important update was clearly the birthday mode in the dino game, which ran for the entire month of September. Do you know this game is played over 270 million times a month? That's incredible. It kind of is.

But here at CDS, it's really the platform-related features that we want to talk about. So let's highlight a few. Back in 2008, Chrome launched with process isolation, an architecture that put each tab in its own operating system process, making Chrome exceptionally stable, but also exceptionally secure. The process boundary acts as a line of defense in the event malicious code is able to exploit a bug and attempt to read data from another tab. Over the past 10 years, this model has become the industry standard. This year, desktop Chrome browsers took this model further with the launch of Site Isolation. Now Chrome ensures that even within a single tab, content from different domains is isolated. So in this example, the news site is displaying an ad in an iframe, and that iframe is running in a different process than the rest of the page. We've wanted to extend process isolation this way for some time, and we think it's an important protection for the web.
Now we've also been continuing our efforts to encourage developers to use HTTPS for all web traffic. This ensures that the content of a site isn't maliciously modified while in transit, and also that user data is transferred securely. In the latest Chrome, we mark pages as not secure when they're served from HTTP, and when the user begins to interact with form fields, we further emphasize the point with a stronger warning. The HTTPS movement is a collaborative effort, with organizations like Let's Encrypt playing a big role in lowering the barriers to upgrading sites, and it's working. In Chrome itself, we've seen big shifts to HTTPS, and over 80% of the top 100 sites are now all HTTPS. We think this is actually a major accomplishment, because the web is such a massive ecosystem.

All right, in addition to security and stability, speed was a big theme for Chrome at launch, and V8 is a big part of that story. The team recently posted a retrospective highlighting its improvements over this period, including significant gains in JavaScript execution speed and a 100x decrease in garbage collection jank. And the team is looking forward to the next 10 years of improvements: optimizing V8 for the latest JavaScript language features like promises and async/await, supporting the Node.js server-side community, and being responsive to the needs of the framework ecosystem. There's a fun recent example of this: how the team rapidly responded to JavaScript inefficiencies related to the React framework's recently announced Hooks feature, finding ways to make this feature work much faster on V8.

Now, V8 has also expanded to support WebAssembly, also called Wasm, a cross-browser standard for executing low-level code like C and C++ at very high speed. Earlier this year, the team launched a new Wasm compiler called Liftoff, and the popular Unity game engine found that it resulted in a 10x reduction in the load time for their Wasm code. And WebAssembly threads are now available in a Chrome origin trial, which opens the door for more complex code to come to the web. Now, it's been really fun to see the community explore the power of WebAssembly, both through new creations and by bringing old codebases... actually, I think I prefer the term vintage codebases. Vintage, high-quality codebases... to the web. This tweet was one of my favorites, kind of an Inception moment, because now I think you can run IE on Windows 95, on Chrome, on Android. It's crazy. What a world we live in.

Hard to believe, but back when Chrome launched, video on the web wasn't a thing. We had to use plugins. And now, of course, HTML video is ubiquitous. This year, Chrome landed support for AV1, the next-generation media codec, completely royalty-free, that provides a 30% bitrate reduction even over HEVC, which makes streaming video on the web faster as it becomes adopted. And because AV1 has major industry backing, we can expect to see it receive significant adoption over time.

Now, one constant over the past 10 years is the importance of optimizing images. This is sort of like flossing for the Internet: vital for your site's health, but plenty of us don't really seem to do it enough. And as a Brit, trust me on that one. Now, a few years ago, Chrome added support for WebP, a new image format that results in an average 30% savings over other formats like PNG, JPEG, and GIF. And this year, Edge and Firefox shipped support for WebP, so make sure your images are fully squished.
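The keynote doesn't show any markup, but the usual way to adopt WebP without breaking older browsers is a picture element with a typed source. A minimal sketch, built here via the DOM; the file names are made up:

```js
// A hedged sketch (not from the talk): serve WebP where it's supported,
// with a JPEG fallback. Browsers skip <source> elements whose MIME type
// they can't decode and fall through to the <img>.
const picture = document.createElement('picture');

const webpSource = document.createElement('source');
webpSource.type = 'image/webp';   // ignored by browsers without WebP support
webpSource.srcset = 'hero.webp';  // ~30% smaller on average, per the talk

const fallback = document.createElement('img');
fallback.src = 'hero.jpg';        // what non-WebP browsers will load
fallback.alt = 'Hero image';

picture.append(webpSource, fallback);
document.body.append(picture);
```

The same structure can of course be written directly in HTML; the point is that the fallback is declarative, so no feature detection code is needed.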
All right, so the web and Chrome have come a long way these past 10 years. So where do we go from here? We've just spent the last few minutes talking about the importance of speed and the enhancements to the web platform that make it more possible. And we think this will continue to be a major focus for all of us into the future, because speed is one of the defining features of the web platform. It's what makes it so magical to go from a link right into a rich experience, seamlessly, without any delay.

Now, delivering speed is a partnership between browsers and developers, because no matter how fast and efficient browsers become, a poorly optimized site will always ruin the fast-loading experiences that our users want. And increasingly, JavaScript can be a major factor in slowing down sites: since 2011, sites use on average eight times more JavaScript than they did before. And that's a problem because, in addition to having to be downloaded, all this code needs to be compiled by the browser and executed. And with mobile, many of the devices that our users are using are slower than we might expect, so increasingly performance can have the CPU as a bottleneck.

Now, we've created tools like Lighthouse, PageSpeed Insights, and the Chrome User Experience Report that can provide deep insights into your site's performance and how it can be improved. However, as helpful as these tools can be, a pattern we've seen is that a site's performance will degrade over time, because we're constantly adding new features to enhance our sites. And these new features create pressure for the site to get bigger and bigger, and that includes more script and more resources that have to be downloaded. Of course, we're not saying that we shouldn't be constantly enhancing our sites and adding features, but as we do so, we have to take care that our pages continue to load fast.

So an approach that we've seen lead to great results here is the practice of establishing a performance budget. And popular tools and frameworks, like Angular, Preact, and Webpack, have built-in support to define budgets and enforce them. These budgets can be based on the byte size of your resources, or they can ensure you hit target metrics in scenarios. For example, you can specify that your site should load in under five seconds on a mid-range smartphone on a high-latency 3G network. The team at Pinterest has set a great example of using a budget to guide their optimization efforts. We'd love to share a video that explains this process now. Let's roll the video.

Pinterest's mission is to help people discover things, collect them, organize them, and then find ways to apply them in your life. With more and more people on the go, the mobile web has become central to providing our discovery experience. But our mobile web experience in the past was basically an upsell for the native app, which brought us to the realization that we needed to fix it. The technology was already in place for us to be able to do so. And so we brought a team together to rebuild the mobile web from scratch. Having a fast mobile web was crucial to the success of the project. We made sure that we split out what was sent down to the user, to be only the crucial things to start, and then everything else that wasn't immediately important sent down later on. We made sure to test on average devices on 3G, just like our users would be using.
And we could see the dramatic difference that there was on that initial time to interactive. By using modern caching best practices to make it work even if you're on a bad connection or have no connection, we were able to preload the user interface for follow-up visits. Like the native app, our site was optimized for touch interactions. This immersive experience resulted in mobile web becoming our top platform for new sign-ups. We wanted to make sure that the users of our mobile web continue to use the product as time goes by. And one of the most important technologies that's been added to browsers was the ability to add the site to your home screen. Pinterest is not just about the content, but about what you do with it. The browser is a discovery platform, so making it easier for people to use our own discovery service is really a perfect fit. Thank you.

Pinterest is a great example of doing performance right, and Zach from their team is here to share more with us later today. Now let's talk about another example: Wayfair. After experiencing some performance regressions, they created an internal dashboard that provides a simple stoplight system for their developers and keeps performance top of mind. Since they implemented this system, they've seen consistent speed gains across their sites, and their year-over-year conversion rate has increased by over 10%.

Okay. While we're talking about speed, we'd like to tell you about two features that we're implementing in Chrome that we think may lead to more instant and seamless web experiences. Web packaging is a new feature that allows developers to sign a web page with a special cryptographic key that proves the page's original domain, creating a sort of package that can be served from anywhere and that the browser can securely present as coming from the page's original domain. Now, portals. Portals adds a new iframe-like element to HTML that allows users to seamlessly transition into the portal's content, making it a new top-level web page in the process. Taken together, these new standards enable the browser to securely preload pages and deliver experiences like this, where the transition to the new site is instant. Maybe I'll show it again. Sure. Tomorrow, we'll talk through more of the details and explain how you can start experimenting with web packaging right now. We're pretty excited about this and the impact that we think it can have on the web.

Loading fast is important, but so is delivering a buttery smooth UI once the page is displayed. Buttery means responding to user input, like taps and typing, instantly, and ensuring that the page doesn't jank, or sort of skip around, as the user scrolls and interacts with it. Now, traditionally, this hasn't maybe been the strong suit for the web, but we think that things are changing. So today, we're releasing Squoosh, an example of how responsive complex apps can be on today's web. It demonstrates a really fast-loading experience, using WebAssembly to compress images with C and C++ libraries, and employing web workers to handle long-running tasks in the background so that the page stays responsive. Now, this is a full PWA. It works across mobile, desktop, tablet, and offline too, thanks to service worker. Let's use Chrome's DevTools to go under the covers of Squoosh for a moment and take a look. Here on the screen, we can see the different browser processes at work in the app. And if we zoom in on three of them, the main process and the workers, we can see that they all show vertical lines.
And each vertical line indicates when the main thread is busy and the app can't respond to user input. And you can see here they're all generally pretty narrow. And then you can see in the workers, we have these really long bars representing these long-running tasks that would jank the snot out of the UI if they weren't in the background.

Now, sometimes there are tasks that you can't actually move off of the main thread, such as code that updates the UI. The recently released OffscreenCanvas feature, now in Chrome and behind a flag in Firefox and Opera, helps here by letting you paint to image surfaces like canvas and WebGL in a worker, and then copy the image buffer back over to the UI in a single fast operation. Here you can see the UI stays responsive even after we artificially add code to make the painting process expensive, using that jank button in the middle. It's pretty smooth. Tomorrow we're going to tell you about more features that are coming to the web to help with smooth, buttery UIs, such as new APIs for background code called worklets that handle tasks like animation, audio, custom painting, and so on. And then we're going to talk about a virtual scroller that makes it easy to develop really smooth scrolling views over large datasets, and a task scheduler API that gives developers more control over how the browser spends its time. That's a lot, but as Mariko said, you can never have enough butter. How true.

So we've covered loading fast and staying responsive; web apps can also integrate with the operating system. Let's take the Spotify PWA as an example. With it, users can go straight from a link right into Spotify without having to install anything. And with the web's full support for background audio, DRM, and offline, Spotify can provide a full-featured web experience. And now Chrome supports desktop PWAs, letting apps like Spotify run in their own windows and be easily installed on Chrome OS, Windows, Linux, and soon on the Mac. I think desktop PWAs are really handy when you've got lots of tabs open, like Dion as I mentioned earlier, and you want to keep your favorite stuff really easy to get to. Like with the Twitter PWA that we're showing here, running as a desktop PWA in Chrome Canary on the Mac. And what's really cool about Twitter is that they've got experimental support that enables their mobile PWA to run at desktop screen sizes, as we're showing here; responsive design has been a part of web development for a long time.

While we're on the topic of desktop web apps, we also recently implemented new web APIs in Chrome that enable unfettered access to input devices. So web apps can receive full keyboard input events, sample gamepads at high resolution, and enter immersive full screen modes. All of this is perfect for apps like the recently announced Google Project Stream, shown here. Speaking of media, we've also taken the recently released picture-in-picture feature that we have for mobile video over to the desktop, which makes web video on desktop more of an integrated experience. And then another important new web API is WebAuthn, which enables websites to integrate with biometric sensors and other forms of multi-factor authentication. It's currently implemented in Firefox, Opera, and Chrome, coming soon to Edge, and it can make web checkout a seamless, secure experience. So here we're checking out with PayPal, and it displays the authentication challenge integrated with the device.
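The talk doesn't include code, but for a sense of scale, the WebAuthn sign-in step boils down to a single navigator.credentials.get() call. A minimal sketch, assuming the server supplies the real challenge and the credential IDs registered earlier:

```js
// Hedged sketch of a WebAuthn sign-in step. In production the challenge
// and the allowed credential IDs come from your server; the placeholder
// bytes here are purely illustrative.
async function webAuthnSignIn() {
  const assertion = await navigator.credentials.get({
    publicKey: {
      challenge: crypto.getRandomValues(new Uint8Array(32)), // server-issued in practice
      timeout: 60000,
      userVerification: 'preferred', // e.g. fingerprint, face, or PIN
      allowCredentials: [
        { type: 'public-key', id: new Uint8Array(16) }, // ID saved at registration
      ],
    },
  });
  // The server verifies the signed assertion against the public key it
  // stored at registration time before logging the user in.
  return assertion;
}
```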
So as the web's capabilities increase, it's also important to augment the security and the privacy of the platform and ensure that everyone has an opportunity to participate in the open digital economy. This is why we've been leading the development of advanced new privacy techniques, like federated learning and secure aggregation, that enable personalized digital experiences which also have the strongest privacy guarantees. And we've been hard at work reviewing the fundamentals of how the web works with respect to privacy and personalization, and in the next year we'll be shipping new features into Chrome, as well as new server-side technologies, that help us uphold the highest standards of user privacy with regards to data collection and usage.

Now, as the web evolves and we get new capabilities, we're constantly focused on interoperability, on how well web apps work across browsers, and we know this topic is top of mind with developers too. To this end, we've been collaborating with other browsers on the open-source, comprehensive web-platform-tests project, and we're pleased to share that browsers are becoming more interoperable as measured by these tests. We're also continuing to collaborate with the other browsers on reference docs for the web on the Mozilla Developer Network. These efforts are resulting in a less fragmented developer experience for all of us, which frees us to spend our time not chasing down compatibility bugs, but on adding more features to the platform and staying within our performance budgets.

We also think it's more important than ever for us to make it easier to create high-quality web experiences. So to this end, we've been working on a new learning website called web.dev. On web.dev we start out by explaining the principles that we've found lead to top-quality sites, such as speed, discoverability, and reliability, and we follow these up with guides that explain why you should care about a given concept, provide clear tutorials, and offer plenty of sample code. We've also partnered with Glitch to embed their popular coding tool right into the experience, which provides an interactive learning environment for getting hands-on and letting you remix and test your code. And we've integrated Lighthouse into the web.dev experience, giving you a platform that lets you track the scores of your websites over time, and if your site isn't quite at a perfect score yet, it also gives you instructions on how to resolve the specific issues that it's found with your site. So that's web.dev. We'd love to get your feedback, actually, while we're here together at CDS. By the way, if you like that .dev domain, check out get.dev for details on how you can register one for yourself in the future.

We also have another fun learning project that we want to share with you. We've collaborated with Dave Geddes, who's created interactive tutorials like Flexbox Zombies and Grid Critters (these are real things, and they're really cool), to develop Service Workies. It helps you really deeply understand service workers, soup to nuts. It's a fun, free, engaging way to learn. So please head over to serviceworkies.com to sign up and start the preview levels that launched just now.

Now, we've also got plenty of updates to our developer tools and libraries that we're going to share over the next two days. But there's one new tool that we'd like to tell you about. We call it VisBug, and it puts a new spin on how developers and designers work together.
We'd like to give you a demo, and to do that, let's invite Adam Argyle up to the stage. Please welcome Adam.

Thanks for this opportunity. So, developers, you've got a lot of tools, and your keynote is amazing so far. You're spitting fire. I'm back there laughing. I wanted to give some tools to people that don't generally feel like they're going to be able to use this stuff. So, I'm going to hit play on this side, because I wanted more people to be able to contribute to good web experiences and be able to inspect and dissect things. Consider what Zeplin is, as a designer-to-developer handoff tool; I've created this thing called VisBug that essentially gives those tools in reverse.

This first tool here is the guides tool. Have you ever held a piece of paper up to your screen to check if things are lined up? There you go. There's even, I think, a Chrome extension that just does this one thing. So, you can come down here and you're like, that footer, that looks generally aligned. Let me just hover a little. This is kind of interesting: I can go into a nested hover state and still check what's going on there. That's just the guides tool. There's a whole bunch of tools. I'm going to do my best to show you what's going on here. You just hover now. You just hover for color contrast. You hover for surfacing ARIA roles and tab indexes. So, basically, the things that you're generally looking for are now a hover away instead of maybe five or six clicks. And the same sort of thing goes for styles. So, you can hover over SVG. You can hover over buttons. Nothing is off limits to VisBug. It's kind of punk rock. It's like, hey, is there DOM there? So, let me go through a couple more tools.

I really like the Hue Shift tool. If you go download this Chrome extension, you just have to hover over a tool and it'll show you how it's used. I'll do my best to walk through the keyboard shortcuts that I'm doing, but it's kind of like Design Vim at the moment. Anyway, it's super fun. So, what I'm going to do: I'm going to select this path. I'm going to make sure I've got my Hue Shift tool. I'm going to hit Command E to expand my selection, I'm just going to hold it, and it expanded based on the node name and the class name. It found similar elements that were siblings. And I'm just going to add some black here. Is that magical? It feels magical. If I hold Control, I can make my selections go away. So, I can be like, ooh, do I like it? And in this case, I think black's wrong. I'm going to add some white. I'm going to hit Shift and right to add some hue. And then I'm going to rotate the hue with Command Shift and down. So, I'm rotating the hue by values of 10, and now I can explore color palettes. It's HSL based. It's awesome.

So, another thing I want to show you real quick is the Move tool. You can just move things in the DOM around. You click stuff and you move it with the keyboard. Why haven't you had this before? It seems to make so much sense. So, I'm just going to rearrange it here a little bit. I'm going to hit M for margin and I'm going to hit Shift Up. So, Shift Up, Shift Up, Shift Up. I'm just adding 10 pixels of margin to the top. So, consider that these little quick tasks were things that you used to have to go digging through nested panels to find, and now you can do them in the chaos of the browser, in production. I wanted the DOM to be the source of truth; I wanted a design tool that treated the DOM as the source of truth. Anyway, I like this site because it's like, ooh, super flashy.
And this tool doesn't care. Let's get a little meta and take a quick look at Squoosh, which was just introduced. Super rad. The first thing I want to do is talk about some of the additional tools in here, but I want to troll Jake a little bit while I'm up here. So, what I'm going to do first: I'm going to grab this image and I'm going to turn it into myself. So, have you ever wanted to just swap an image on a web page? That's just drag and drop. Right? Why hasn't that been there? A lot of this stuff, you're like, why hasn't that been there? Here's another one: double click for text editing. Let's just make this "drag and plop". Because, again, I'm trolling him. We'll go back here to my little inspect tool. We'll look at what styles he's got in here. Ooh, nice. I'm going to go to Flexbox. I'm going to select this image. I'm going to hit Command E again. Here's a fun idea I borrowed from XD. So, I'm very inspired by XD and Figma and Zeplin, and even developer tooling, too. So, watch this one. I love this one. I'm going to go in here. I'm going to grab two images. I'm going to grab a picture of me. I wish I had other pictures. Who wants to see my face multiple times? Anyway, I'm going to drag in two images. And if you remember, we had... oh, hey, come on. Take focus. Take focus. Take focus. Oh, good. I was just impatient. Here you go. And I'm going to hover there. Boom, they ping pong. It just sort of was like, hey, how many? I don't care. It'll just fill it up. Super fun.

Okay, so we're done kind of trolling his homepage. Let's actually go do something meaningful. He asked me to do a design review. Okay, so I hid my tools, right? The site doesn't even know I changed it. Let's drag in something fun. I like this Christmas eggnog picture. Yeah, this is good. Me and my wife. Hello. And so, right, we have this very complex website. It's backed by Wasm. And, you know, maybe conceptually you're just like, there's no way I could help design that. Well, of course you can. It's VisBug. It doesn't care. So I'm going to expand it. I'm going to get some things into like a nested state, right? Something that's theoretically harder to design with. And I'm going to darken this site up. I'm a fan of retrowave right now, and I love those purples and pinks and stuff on dark. Let's do that to this site really quick.

So I'm going to launch VisBug, and we're just going to use a couple of features here. So I'm going to hit this header, because I think these need to be a little bit more pronounced, with some accent color. I'm going to expand my selection by clicking a class name. So what's cool is BEM, and just dynamically generated class names: what they do is create a consistency between related nodes, and so I leveraged those. You just make a selection, and you can hover on the classes too and they'll pseudo-highlight. Anyway, this tool is so full of features, I'm having a hard time talking slow. Okay, so anyway, I've got these headers, and I just want to spiff these up. So I'm going to get rid of their opacity by doing Command Shift right and left: you can manage opacity. I'm going to add some white by hitting Shift Up. I'm going to add some hue by hitting Shift Right. And I'm going to change the hue to hot pink. Right, super rad. Okay, so then I'm going to go in here and I'm going to darken this up. I like how transparent it is, but I've got this darkness vibe, so I'll just darken it up, right? Okay, and we'll come down here.
And you know, this should look like magic right now. So I'm just doing Shift Down: I'm just adding black. Now this particular node, he's a little tricky. Look, he's a number input. So, you can navigate the entire DOM with your keyboard. I'm going to hit Tab, which just tabbed me to the next child. Hit Shift Down. I'm going to black that out. That's fun. I'm going to grab this. I'm just going to black that out. And we'll get rid of the tool. And I will send a screenshot to Jake and be like, this is how it should look, dude. It should be pink and blue. Come see me in the forum. I've got more stuff to show you. It's weird. It does a lot of weird stuff. And it's really fun. Nothing is off limits to this tool. And I hope content writers, web designers, just people, finally have a tool that helps empower them in the DOM. It should feel like a document. It should feel like an art board. That's the illusion I'm trying to go for. Awesome. Thanks, guys.

Give it up for Adam. That was awesome. It's pretty cool, right? Thanks so much, Adam. All right. So listen, it's been a pleasure to reflect with you on the past 10 years and what's new for the web. Before we go, we want to give a brief mention to Chrome OS. It's a great way to engage with the web, and with support for Linux and Android apps, it's also ready for web development. We're offering a 75% discount on the Google Pixelbook to all CDS attendees so you can check it out. The vouchers will be available tomorrow morning before the first session begins. So don't party too hard tonight. And with that, thank you so much for joining us here this morning, and let's all work together. Thank you.

Wow. My t-shirt tastes delicious. You need to clean that up. So last year, in 2017, me and Monica were MCs. You two were rudely not here. So we did a mini presentation in between talks, and we shared favorite bits of the web, like deprecated HTML tags and Intersection Observer. But what are we doing this year? Yes, we thought that was far too much work. So we thought we'd bring back the Big Web Quiz. But not just bring it back: it is new, it is improved. Before you get into that: I had fun making the intro for this two years ago in After Effects. You said you wanted to do it this year, and you've not let the two of us see it yet. So I'm desperately keen to see what you've done. Are you ready? Yes. To make sure I've got this right: you took Paul's intro from two years ago and added MS Paint at the end. Yeah, that's about exactly what I did. Unbelievable. It's outrageous.

So if you want to get in on that game, please go visit bigwebquiz.com from your mobile phone, or your laptop if you have it open, and please make sure you log in. Yeah, there's a leaderboard as well. If you want to be part of the leaderboard, then click your avatar in the top right and there's a little button there to opt in. That means if you are in the top three, your face will appear on the big screen. Yeah, because, you know... And if you're on the live stream, you can play along, but it will be slightly behind the live. Also, there's no way to give them a prize. That is also very true. So speaking of prizes, no quiz is complete without prizes. So I had the privilege of ordering the prize. Oh, it's been stuck down. It's very exciting. I can't wait to... What do you think? OK, you hold... You walk backwards. OK. Huh? Oh, yeah. OK. The first thing I notice about this is it's quite big. Yeah, there's a story to that. Yeah, why don't you tell everybody what happened? So in Japan, we use metric.
Yes, yes. Well, America uses Imperial. And how long have you been in America? Eight years. Are those Imperial years or metric years? Shut up, Jake.

All right. So, assuming that you would like to play along and you've been able to log in, let's do a practice question. Yes. OK, here we go. No, well, you will not have time; this is our first question. You are about to see a series of characters on your phone or laptop, and you have to decide whether they are part of the ASCII character set or not. You've got a few seconds per answer. Only three seconds per answer. So you have to decide quick. Go quick. Here we go.

Right. And what we're seeing up here is the answers as they come in. This is a confidence rating. So if it's at 100%, you as a room are all voting in the same direction. If it's 0%, then it means you're kind of undecided: it's a 50-50 split. Take a look at the next screen. Oh, O umlaut. Not very confident on the O umlaut there. Unsure. Then we've got A, F, tilde there. OK, pretty confident there. It doesn't tell you which way you're voting; it just tells you you're voting the same way. What else have we got? You've got total confidence. OK. It's closed. So this is the way you voted. So you're saying pretty confident ASCII for that. Not so sure about the emoji. Don't think that's it. Should we reveal the first round of answers here? Here we go. Well, so... oh. What? Why is the y not ASCII? Because it's not a y. That would be the small Greek letter gamma. Oh, obviously. Yeah, of course. What about these ones? What's next? Again, high confidence on the ASCII side. People saying these are definitely ASCII. But... well, let's find out. No. So what happened with these two then? Well, that thing there that looks like an A is another Greek letter; we've gone for alpha this time. A for alpha. And this F here, this is a Canadian Aboriginal syllabics character. So, you know, whatever. So, yeah, reasonable. And the last one. Here we go. It is. Perfect 50-50 split on the... what is that letter called? Here we are. Oh. Yeah, what have we got here? We've got ASCII on the H. That's fine. But it's not ASCII, because that is a Lisu letter. The Lisu people are a Tibeto-Burman ethnic group. So, yeah, it's one of their letters. And the pound sign at the bottom there. Not ASCII. No. And I find that personally offensive. Fair enough.

Now, we thought that was a particularly cruel question, as you've noted, so that wasn't scored as a round. So, I think we should do a for-reals question. Yes. What you're going to see is a word coming up on your phone screen or this big screen, and you are going to say if it's a real CSS color or not. And by real, we mean colors that are in the CSS, the CSS level three colors spec. Not things that have to work in browsers; it just has to be in the spec. And the fake ones, they're just ones we've made up, right? Yep. And believe me, it's going to be hard to figure out the difference. Two seconds per question. Here we go. Ooh. Okay. So the confidence was up. It went down very fast. Wow, it's changing a lot. Ghost White is my skin color, for anyone wondering. I like Old Lace as a name. I don't know if it's... maybe it is a real one. Going to the next set here. Gray and grey. So incorrect followed by correct. In terms of spelling. No, it's spelling. Only in terms of spelling. Tartan, we're less sure about tartan. I'd love tartan in there. We're going to be closing in five seconds, so get your answers in as quickly as possible. Remember, this one's counting for points. Surprise. Let's see how you did.
Ghost White was the 50-50 split. Perfect 50-50 split. Wow. But let's find out. Old Lace is a real name. Yeah. Wow. That's a real one. Red Orange was the only fake one there. Brilliant. Gray and grey are both real. Yeah, we'll show them. As an audience, you are absolutely correct. Dark White, Ultra Teal? Nope. What have we got here? Tartan. We made that one up. That's many colors. Dark Black, Pinky White, Blue Orange... we made those up too. I think we've been on the stage long enough. Yes, more than long enough. So, we have our next speaker here: Anujal. Big round of applause for Anujal.

Hi, everyone. I'm Anujal, and I work on product partnerships for Chrome. My job is to evangelize the web to all of you and be an advocate for your feedback, as Chrome continues to build exciting new things that you just heard about from Ben and Dion. Sure, I play that role, but if you look harder, I really am the unicorn that all tech companies are looking to serve. I'm a millennial. Happy being friends with my phone, my laptop, my noise-cancellation headphones, Fitbit, Kindle, you name it. So in this particular instance, I went on a vacation, unlike millennials, with my mom. I grew up in India, and I go back three times a year to visit my dog, Rafiki, and my family is usually there, too. And he usually needs to be held that way 24 hours a day. And so someone needs to stay back; it had to be my father. And so my mother and I drove up to the hills of Landour. I know it sounds like a location from The Lord of the Rings, but it actually is a town in northern India, and it's famous for great weather, good desserts, and, after this talk, poor network connections.

So quick show of hands: how many of you have experienced coffee shop Wi-Fi? Expected. I, for one, am always looking for that perfect spot on a couch in a coffee shop, solving the world's problems on my laptop. How about airplane Wi-Fi? Wow, quite a few of you, but yeah, definitely more frustrating than coffee shop or conference Wi-Fi. What about instances when you've paid for Wi-Fi and you've got no Wi-Fi? Seems like the entire population of Landour was either on 2G or largely offline, or, like me, pointing my phone at the network bars, demanding the internet like it's my human right. Even in an offline situation, the web came to my rescue: I'd offlined a few articles on the best bakeries in Landour, and on how to find more food in Landour, and that saved the day.

So this map from the Chrome User Experience Report shows you 4G densities across the world. And it looks like the US and some parts where it's dark green are really good. But think about the last time you were on your commute, on BART or anywhere on the train: reliable, fast 4G is still a privilege. And this isn't really a US or an emerging-market story. The web needs to work for everyone, at any time, everywhere. These instant loading experiences for everyone make the web like that crucial friend who's actually there for you when you need them the most. But historically, despite its reach, the web has not met user expectations when it comes to mobile. A lot of you are changing that narrative, are shaping it in real time. Adoption of new web technologies like service worker is playing a role in changing this. Investment in web performance literally pays off. These companies across the world have been investing in web performance and have actually seen business impact.
LinkedIn Lite saw a four-times increase in job applications just by building a progressive web app using caching strategies. Wayfair, as you just heard from Ben and Dion, has also seen a 10% conversion increase. And ClickBus in Brazil is also seeing sales go up. So it literally pays off. So I looked for that exact point of intersection that lights up a millennial's brain and makes them really happy. And I realized that it pretty much comes down to music, coffee, and photos. Music on Spotify, coffee from Starbucks, and photos of the latest fashion trends, or whatever, dogs if you're me, on Pinterest. I still remember my first few conversations with these companies, and the people that I actually spoke to are here today to tell you the valuable lessons they learned building on the web. It hasn't been easy. It's taken months and months of testing, some failures, some learnings. But ultimately, they've all seen business impact, and we're glad they're here to share it with you. So without further ado, please join me in welcoming Rizzo from Spotify.

Last year, Spotify had a burning question: would having a mobile web music player allow us to grow faster? Today, I have the answer, and I'm here to share it with you. I'm Matt Rizzo, and I'm a product manager on the growth team at Spotify. As a growth team, our goal is to drive sustained growth of users around the world. Within this greater growth team, my team's specific focus is on using the web to capture and validate growth opportunities. Over the next few minutes, I'll share with you how we decided if we should invest in the mobile web, and where we are today.

So let me take you back to 2017. At this time, there was no way to listen to Spotify on the mobile web. We already had a robust web experience on desktop that was working on Windows, Chrome OS, and Mac OS, but on mobile, we had no such option. And frankly, we were unsure if we should. Things seemed pretty good for us. We had been growing fast, and our native apps had impressive retention figures. But every day, millions of people visited our artist, album, and playlist links on the web, and were hit with a wall. Each one of those users had to download the Spotify app to listen to music. Our hunch was that this hard download wall was hurting our ability to grow. In fact, it was turning users away from Spotify.

So to explore this question in more depth, we headed to Brazil. Why Brazil? Well, we saw two things that were important to us. One, we had a ton of web traffic from Brazil; it was a growth market for us. And two, a significant amount of our users in Brazil had less than a gigabyte of storage space on their phone. This left these users in a constant state of prioritization: should I keep this app on my phone? Or should I delete it to make way for this app I really need right now? So after talking to more users in Brazil, we realized this was a real challenge, and it was affecting us. This challenge caused people to either delete Spotify or to not download it in the first place, effectively making it impossible for them to use Spotify on their phone. We also learned that there was another type of user that could benefit from Spotify: those who were unfamiliar with Spotify. Imagine you've never heard of Spotify and you get a link sent to you from one of your friends. You click that link to listen to a song. You end up on a page. This page prompts you to download an app. Downloading an app is a tall order for you if you've never even heard of Spotify. Why should I download this?
I just want to hear this one song. So because of these insights, we had a strong perspective that we should have a mobile web experience. And there were two kinds of users that we thought we needed it for. The first is people with low storage space who can't download our app. And the second is people who don't know Spotify and are not familiar enough to actually decide to download it.

So we started experimenting, to test our belief that the mobile web would unlock growth. We started really small. Our first tests were fast and simple: what if we gave our existing traffic a limited player from which they could play just one song before downloading the app? Guess what happened when we launched this test? We saw first-day plays increase by 25%. This was huge. That's the number of users who play within 24 hours of visiting our site. After seeing this, we knew there was enormous demand for a mobile web player. So we took the next step. Now that we had strong data to back up our intuition, we went a step further. We wanted to test two universes against each other: one universe in which we had a mobile web player, and one in which we didn't. We wanted to do this because we wanted to understand if there would be an incremental benefit to us going and investing in the web. So we built a full experience, one that would allow for more immersive sessions on the web, and we exposed it to 50% of our visitors. This experience included three key Chrome features. First, the Media Session API, which allowed users to control playback from the notification tray. Second, PWA install, for quick access from the home screen. And finally, we had protected content support, to ensure artists were guarded from piracy.

Guess what we saw from this test? We saw a 54% increase in day-one plays, and even a sustained 14% increase in active users all the way through to day 60. So real retention. We also learned about the power of the web as a reactivation driver: 30% of the logins we were seeing were coming from users who had not been active in the past 28 days. Our mobile web player was bringing churned users back onto the platform. We had one concern, though, and that was that launching a mobile web experience would actually cannibalize our app downloads and hurt those impressive retention figures I referenced. But we've seen the opposite. When people have access to a robust web experience, they're actually more likely to download the Spotify app.

So to summarize: we had a hunch that the app download barrier was turning users away from Spotify and hurting our growth potential. By building a web experience, we made Spotify accessible to more users and have seen sustained growth on both the web and in our native apps. If you are on the fence about investing in the web, I would start by understanding what barriers are keeping you from reaching your customers. After you've done that, start understanding how the web can remove those barriers. And finally, start with small tests to validate your hypothesis that the web can unlock growth. Just like Spotify, you might find that the web is a powerful way to reach your goals. Now I'm going to turn it over to David to talk about how Starbucks meets their customers where they are on the web. Good luck, man.
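For reference, the Media Session API Rizzo mentioned is compact. A minimal sketch, assuming a plain HTMLAudioElement player; the track metadata and URLs are made up, not Spotify's:

```js
// Hedged sketch of the Media Session API: expose track metadata and
// playback controls to the notification tray / lock screen.
const audio = new Audio('/audio/track.mp3');

if ('mediaSession' in navigator) {
  navigator.mediaSession.metadata = new MediaMetadata({
    title: 'Track Title',
    artist: 'Artist Name',
    album: 'Album Name',
    artwork: [{ src: '/art/cover-512.png', sizes: '512x512', type: 'image/png' }],
  });

  // Wire the tray buttons to the player.
  navigator.mediaSession.setActionHandler('play', () => audio.play());
  navigator.mediaSession.setActionHandler('pause', () => audio.pause());
  navigator.mediaSession.setActionHandler('nexttrack', () => {
    // advance the play queue here
  });
}
```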
Good morning. My name is David Brunelle, and my team builds the digital products used by Starbucks customers in the US and a number of international markets. Our job is to extend the Starbucks experience beyond the four walls of our stores to the devices that our customers have in their hands. In early 2017, we asked ourselves: how can we use the web to give our customers the best experience possible? That question led us to launch a PWA later that year, and I'm excited to share some more of that story with you: the why, the how, and some of what we've learned so far.

Before we started, we knew a few things. First, millions of customers were using our native apps every day to order ahead and pay in our stores. Second, the Starbucks native apps had a higher number of daily active users than our website. But third, the Starbucks website had significantly more monthly active users than our native apps. In fact, our website was reaching six million more people every month than our iOS app. The opportunity for us was huge. If we could provide a better experience to these six million users, we believed they'd have a more meaningful interaction with Starbucks, and that that would be great for our business. It was important for us to build a web app that was as full-featured and pleasant to use as our native apps. That meant that it had to be reliable, fast, and engaging.

Our customers tend to use higher-end devices, but they experience a wide variety of network conditions. We have stores all over: suburban shopping centers, basements of office buildings, and airports, for some examples. Customers at these stores might experience 3G, unpredictable Wi-Fi, like captive Wi-Fi portals, or have no connection at all. When a customer walks into any of these locations, they need to be able to open their app, scan their barcode to pay, and be on their way quickly. We address this by using some of the web platform's capabilities. JavaScript, HTML, CSS, and other static assets are cached locally. Don't pay too much attention to the code you see up on screen; it's there to illustrate just how little code is actually required to implement runtime caching and pre-caching, thanks to Workbox. User-specific data is encrypted and stored locally with IndexedDB. This means that when a customer opens up the PWA, she can expect it to work.

Customers need our PWA to be fast. No one wants to be standing in front of the barista waiting for their app to load while a line develops behind them, especially if you're a millennial like Anujal. Some strategies that helped us make the PWA fast: webpack helps us keep our JavaScript bundles small. This means that devices only need to download and parse the code needed to power the experience they're trying to access. We use the Service Worker API and Workbox precaching to download additional JavaScript and CSS before the user needs it. Our CDN dynamically optimizes images, serving the right size and the right format at the right time. These and other techniques mean that our PWA's initial time to interactive is two times faster than the web experiences it replaced, and subsequent page loads are lightning fast.

Starbucks customers trust us to keep their personal information and payment data safe and secure. The Credential Management API gives us the ability to offer customers the opportunity to sign in using saved credentials, eliminating a lot of the friction associated with signing in. Today, 28% of customers using Chrome, across mobile and desktop, use the Credential Management API to authenticate.
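The sign-in call David describes is essentially one API call plus a fallback. A minimal sketch, assuming a hypothetical /api/login endpoint:

```js
// Hedged sketch of sign-in with the Credential Management API: if the
// browser has a saved password credential, log the user in without any
// typing; otherwise fall back to the normal sign-in form.
async function autoSignIn() {
  const cred = await navigator.credentials.get({
    password: true,        // ask for a stored PasswordCredential
    mediation: 'optional', // may show an account chooser; 'silent' never shows UI
  });

  if (cred && cred.type === 'password') {
    await fetch('/api/login', {
      method: 'POST',
      body: new URLSearchParams({ id: cred.id, password: cred.password }),
    });
  } else {
    // No saved credential: render the regular sign-in form instead.
  }
}
```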
Using Add to Home Screen, customers can quickly install the Starbucks PWA on their phone. Once on their home screen, it's indistinguishable from native apps. And paying for your order by scanning your barcode is only two taps away. And it's really a joy to use: throughout the experience, we've added thoughtful animations and meaningful user feedback. The experience is compliant with the Web Content Accessibility Guidelines. We use a pattern library to ensure consistency throughout the PWA and other web experiences at Starbucks. The pattern library is a collection of React UI components, shared styles, and commonly used utilities. Developers consume the pattern library via npm. We also expose it as a web app so developers can access documentation, and so that folks from our design and product teams can access components and interact with them, to see exactly how they'll behave in a browser.

The great thing about building a PWA is that it runs on a wide variety of devices that might not be supported by our native apps. Like this commenter from Hacker News, who was excited to discover that they had a Starbucks app that worked on their BlackBerry. We built our PWA to be mobile first. But because we built it to be responsive, it provides a great experience on desktop too. And we've discovered that customers love to order coffee from the desktop; in fact, 25% of our orders through the Starbucks PWA come from desktop browsers. In short, we've learned that customers love the Starbucks PWA, and it's been great for business. Since we launched app.starbucks.com and created an experience that's on par with that of our native apps, the number of customers joining Starbucks Rewards via the web has increased by 65%. So should you invest in the web? Yeah, absolutely. The web's reach is massive, and the barriers to creating a high-performing PWA are quite low. You've heard from Rizzo at Spotify, and now from me at Starbucks, that the results are real. Now I'd like to introduce you to Zach from Pinterest to tell you about how their team keeps their PWA fast.

Hi, my name is Zach Argyle, and I lead the web platform team at Pinterest. About a year ago, we teamed up with the growth team to rebuild Pinterest mobile web from scratch. But why? Well, for one, our users did not like the current offering. It hurts because it's so real. So Stephanie, if you're watching, my sincerest apologies. At the time, our mobile website was essentially just an upsell for our native app. However, we noticed that the number of users who actually downloaded the app, compared to the number that came to the mobile website, was very low, and we knew we could do better. So last year, we shipped a from-scratch rewrite of the mobile website in just under three months. It was four times faster to load for initial page loads and almost six times faster on subsequent visits. The initial JavaScript bundle size decreased from 650 kilobytes to less than 200, and our CSS payload decreased from 160 kilobytes to six, inlined into the app shell.

But what about now? It's done, we shipped, it's over. We can all go home. We did our job. Unfortunately, entropy is very real. At Pinterest, we have over 100 engineers committing to the web code base. So we had to ask ourselves: what could we do to make sure that this fast experience stayed fast as time went by? The answer was performance budgets. At Pinterest, this breaks down into three sections: logging, alerting, and prevention. There are two main metrics that we look at to ensure that the PWA stays fast, and the first is JavaScript bundle size.
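As an aside, Pinterest's own enforcement tooling is internal, but webpack can enforce a bundle-size budget like this out of the box. A hedged sketch; the numbers are examples, not Pinterest's:

```js
// webpack.config.js: a bundle-size budget using webpack's built-in
// performance hints, the off-the-shelf equivalent of what's described here.
module.exports = {
  performance: {
    hints: 'error',                // fail the build rather than just warn
    maxEntrypointSize: 200 * 1024, // ~200 KB budget per entry point
    maxAssetSize: 200 * 1024,      // ~200 KB budget per emitted asset
  },
};
```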
For logging, we wrote a webpack plugin that reports development and production bundle sizes to our internal logging system. This gives us the ability to track bundle size changes over time for all of our web applications, and then, using those real-time metrics, we built out dashboards with alerting enabled to make sure that all of our core bundles stay small. We know painfully from experience that one of the main offenders of JavaScript bloat is importing something with a large dependency tree: a single line of code can add tens or hundreds of kilobytes to your bundle. So to help prevent this from happening in our monorepo, we wrote an ESLint plugin to disallow imports from certain paths. For example, our mobile-web-only code base cannot import from our desktop-only code base (see the sketch at the end of this section).

The other metric we care about is a composite performance metric called pinner wait time, or PWT. It's a combination of time to interactive and image load times. When deciding what to measure, look to what your users care about. For us, that's images. Until the above-the-fold images are loaded, to our users, page load is not complete. So time to interactive was not enough, and we created a custom metric specific to our service and user expectations. We also track more granular performance metrics, like time to first paint and time to interactive, which are really helpful for debugging performance regressions. And we have client-side logging that includes browser, country, user state, and many other fields that are helpful for segmenting usage. Again, we set up dashboards with alerting enabled to make sure that all these core landing pages stayed fast. We did notice that one of the main offenders of page load regression was the hundreds of experiments that we had running at any given time. So we added performance regression information front and center in our experiment dashboard, so that engineers could be aware of any performance impact caused by their experiments. This also means that you can see the performance improvements of your experiment as well, which is really cool.

These are just a few examples of how we incorporated logging, alerting, and prevention into our workflows to ensure that the progressive web app stayed fast, and we saw incredible results from building out the experience. Year over year on the mobile web, we saw a 103% increase in weekly users and nearly a 300% increase in session length. And our mobile website has since been our number one platform for new user signups, surpassing Android, iOS, and desktop web. You've seen it from David and Rizzo: the future is truly in the browser, and performance is the foundation of it all. With these budgeting strategies in place, we've continued to see incredible growth, especially in emerging markets where bandwidth is limited and low-end devices are common. In some countries, a year after shipping our new experience, weekly users had increased by over 300%, and the number of pins seen had increased by over 600%. For Pinterest, the number of pins seen is directly correlated with our overall revenue. Investing in improving and maintaining performance has directly impacted the company's bottom line. So if you, like Pinterest, care about an exceptional user experience, regardless of device, network, or conditions, build a progressive web app with performance as a priority. If you'd like to read more about our one-year PWA retrospective, use the short link on the slide to visit Pinterest's engineering blog.
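On the import-restriction point above: Pinterest's plugin is internal, but ESLint's built-in no-restricted-imports rule can express the same guard. A hedged sketch with hypothetical paths:

```js
// .eslintrc.js: block mobile-web code from pulling in desktop-only modules,
// approximating the monorepo guard Zach describes with a built-in rule.
module.exports = {
  rules: {
    'no-restricted-imports': [
      'error',
      { patterns: ['**/desktop/**'] }, // keep desktop code out of mobile bundles
    ],
  },
};
```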
And with that, I'm happy to bring Anshal back on stage to share a quick recap for you all. All right, so it's day one. It's just the first hour of Chrome Dev Summit. Can you please join me in giving a thunderous round of applause to David, Zach, and Rizzo for summing up months and months of hard work into just five minutes. All right, so if you remember what I said at the beginning, you also know I have a very short attention span. So I was backstage making notes, and I have three things to share with you. This is the part of the talk where you remember to keep your phones out, to take photographs and screenshots if you're watching us on video. Very quickly, three business wins from Pinterest, Starbucks, and Spotify. So if, like Spotify, you want to get more users: remove friction. It seems pretty obvious, but the results are real. Run multiple tests and be convinced, just like they were. Rizzo told me it took eight or nine tests for them to do this. And the results: a 54% increase in activation, and for retention, a 14% increase in day-60 active users. If, like Starbucks, you want to meet your users where they are — airports, hotel rooms, tanking up on caffeine, wherever they might be — be sure to be using Workbox caching strategies. And my favorite thing about their progressive web app is the thoughtful animations that pop up to make sure that my 100 different customizations of my coffee are actually accurate. It's fantastic. A 65% increase in the number of customers joining the Starbucks Rewards program. Direct business impact. And finally, if you want to set a standard for web performance, be like Pinterest. Focus on performance, focus on setting a performance budget. A 600% increase in the number of pins seen. When Zach told me this number, I was like, are you sure? 600%? But it's real. This has positively impacted Pinterest's overall revenue. So now you're wondering: this is all great, how am I going to get there? We've given you the tools. We are here to support you. And this talk is going to be up on YouTube right after I say thank you. So please go and build top-notch web experiences. Do it, do it for the millennials. Thank you. That brings us to the first break of the day. Absolutely, so feel free to head outside to the Ask Chrome area for speaker Q&A. Also feel free to grab yourself a snack from just outside the forum. We will be back here at 11:30 for Paul and Elizabeth's talk about performance tooling. Thank you. I'm Rick Viscomi, host of The State of the Web. It's a new series committed to staying up to speed on all of the big web trends. We'll be using big data to shine a light on areas like security, performance, publishing, and more. You'll also hear directly from the industry experts who know how things are going and where they're headed. This is going to be a lot of fun. We've got some very exciting guests lined up. So please subscribe to the Chrome Developers channel and tune in for a new episode of The State of the Web every other Wednesday. Take your seat. And most importantly, log into... what was that question again? It's called Big Web Quiz. I'm sorry. Don't worry about it. You're crashing already. Because you were a bit jet-lagged as well, right? Yeah. We'll be dead by the afternoon. Right. Everybody take a seat. There's a few more seats up front. Can we get the Big Web Quiz screen up, please? Whoever's pressing the buttons backstage to do that. I notice that because we've got the scoreboard here — and we're not going to reveal that yet.
But we have noticed that a lot of people haven't opted into the leaderboard. Notably, some of the people in the top three, top ten haven't opted in. So if that's something you're interested in — it's almost like you don't want the prize that we showed. We've got fridge magnets as well. Yeah, yeah, yeah, yeah. So it's not only one prize; we have two other prizes for second prize and third prize. Yeah, we'll give some prizes at the end of the day as well. Yeah, so you definitely want to participate. Click on your icon after you log in and then opt in to the leaderboard. So, should we do a question? Yes. What are we doing this time? All right. Let's see. This is the question we're going to do. When it appears on the screen — here we are. HTML elements. So much like the CSS one, you're going to see a word come up on your screen, and you're going to judge if it's a real HTML element or not. And Jake's going to define what real means. Yeah, I don't know if we have many HTML developers in the room, but this might be the question for you. But here's the rules. By valid, we mean a tag that is a recognized opening tag according to the WHATWG HTML parsing spec. That's the rules. Okay, and you're going to get two seconds to guess each one. Here we go. Yeah, I recognize this. Yeah, some of those familiar, some less familiar maybe. Let's see what's happening on the next screen. Yeah. Oh, look at this. You know what, a few of them — I did talk about this tag last year. So, if they paid attention last year, they might know. Interesting. Okay, we've got a few more to go. 20 seconds left on the clock. We've got stuff here. People look very unsure about alt. Oh, wow. Lots of confidence when it comes to sarcasm. Sarkasm, yeah. Spoiler: lots of confidence. alt, altGlyph, interesting. Right, we're getting to the end of the round. So, get those answers in. It's almost closed. And that's it. Okay. So, let's have a look at some of the answers we've got here. Yes. Pretty confident that td is real. Less sure about tr? Okay. tt? No one? Did anyone do web development in the 90s? Come on. Excellent. So, we've got tt, thead, but... it's all real. Yeah. Tried to catch you out there with tt — it's nothing to do with tables. Let's go next. Oh, everybody thinks marquee is fake. That's the trick one we've put in. It's a real one. ruby is the tag that is used to put a pronunciation annotation on top of Chinese characters or kanji characters. So, yeah. My favorite HTML element. Ah, yes, I love this one. Almost 50-50 on the alt tag. Oh, and sarcasm? Most sure. It's not a tag, though. It's an attribute. So, I threw sarcasm in there to try and trick people, because we did talk about that two years ago. The closing tag for sarcasm is in the spec. But not the opening tag. Not the opening tag. altGlyph. Yeah, that is a real one. That is part of SVG. Yeah, sorry. Oh, big is spread out. Excellent, and the nobr element, that's my favorite one. And sup. Oh, that's so good. You pronounce it no-bar. Yeah, I'm British. I say n-o-b-r. I think that's the correct pronunciation. So, they're real. sup is real — superscript. Acronym. Now, we tried to split people on this one, because acronym does work in browsers, but it is not in the parsing spec at all. It has been so deprecated, it's no longer in there. We should not use this, please. OK, the last set. Oh, we tried to catch people out with image, but you spotted that one. feFuncA, right? Who fell on the keyboard? SVG fell on the keyboard. That's a real element. Yep.
The feFuncA SVG filter primitive defines the transfer function for the alpha component of the input graphic of its parent feComponentTransfer element. There you go. Look at your smug face. All right, are we going to look at the leaderboard? Should we look at the leaderboard? No, we'll look at the leaderboard before the next one. OK. Back after lunch. OK, cool. So, it's almost time for another session. Please welcome Elizabeth Sweeney and Paul Irish to talk about performance tooling. Huge round of applause. We talked about this. Your TTSL is kind of out of control. My TTSL? Yeah, your time to stage left. True. True, got to work on that. Well, we are excited, because nobody likes to wait. And we want to talk to you about all of the optimizations and measurement tools that we have been working on and can provide to you. We're going to start with an overview of user metrics — what we care about and why — then look at the latest developments in Lighthouse and the Chrome UX Report, or Crux, and talk about how we're unifying tooling across the board. All right, so when it comes to web performance, one thing you could say is: you can't improve what you don't measure. This was Peter Drucker, right? He was a management guy? This is actually true. Peter Drucker — really, really well-known business management guru. To be honest, I think he might have been a front-end developer as well. But this is just absolutely true. If you want to make something better, step one is to measure it. And let's get into that with web performance. To measure, we need to look at some metrics. And it's really important to make sure that your metrics are user-centric and really focused on the user. We heard a little bit ago about pinner wait time and some of those custom metrics. But we'd like to have metrics that really focus on what the user experience is. So let's take a page load, break it down, and look at a few key metrics in here. So, a little film strip here. We're loading a search results page. And we just have a little progression of a few images getting to our final result, where the page is done loading. Now, underneath the things that we see visually, a few things are happening. There's the main thread and network requests. And these are really important too, because they weigh in a lot on the actual user experience. So I want to point a few things out here. First of which is this point right here. This is the first time that text shows up on the page. It's the first time that content is there. So we call the duration from the navigation to the point at which text shows up the first contentful paint. Easy enough. We know this one. A little bit later, though, you can see, right after the main thread kind of quiets down a bit — this right here is an important moment. All the long tasks on the main thread are done. So the main thread has quieted down and allowed the page to be responsive to users once they choose to interact with it. Also, the network is quiet, so we know there's no one big massive script hanging out ready to run and take up time on the main thread. So the duration from navigation to this point — we call this time to interactive. And there's one more key metric I want to cover real quick. Now, in this page load, a user could really touch the screen at any point. They could interact with it here, once there's a paint on the screen, or a little bit later. But let's just say, for the purposes of this, they tap the screen at this point.
Now, the thing is, if they tap the screen at this point, take a look at what's happening on the main thread. A lot of things. We're in the middle of a big, long task. And that means the page is not going to be able to respond to the user. So we have to wait a little bit until the page can actually respond. So this duration — the time from the input until the end of the long task that we're dealing with — we call this the first input delay. This is an important metric, and I want to spend a little bit more time on it. If this is a main thread, well, it's a very open and available main thread. Really nothing happening. So let's say the user has some input. Well, piece of cake, we can just reply to it immediately. Your event handler is going to run — touchstart, or click — we're going to do style and layout and paint, ship a frame, and they're going to see something. So let's say they're tapping on the menu icon, and then the menu slides out. So we're good. But if there is a long task sitting on the main thread, well, we're just going to have to wait. So yes, we'll still do the event handling, but this time between when the input's first received and when the events get dispatched — that is the input delay. First input delay is just the first input on the page — the first time that a user touches a page. And the important thing here is that first input delay is a field metric. It really only makes sense to gather in the field. Time to interactive is a great and really powerful metric, but it makes sense mostly in the lab, in a lab scenario. And we've recognized that, out in the field, where real users are tapping on the screen as the page is loading, that really kind of messes with this metric. So there's a few metrics on the screen here, basically outlining which make sense to gather in the lab, and some that are exclusive to the field. I just want to point out that TTI and first input delay, or FID, are our interactivity metrics — really key for understanding how available the main thread is to the user. Okay. So all of these metrics are awesome, obviously, but where can we actually find them? All three of these metrics are readily available in their respective lab and field environments. So as Paul was saying, because FCP can be measured both in the lab and in the field with real users, it's available across the board: in Lighthouse, in the Chrome User Experience Report, or Crux, and as a web perf API. TTI is only available in the lab, and so it can only be accessed via Lighthouse and PageSpeed Insights. Now, FID requires real user input to measure, so it's available in Crux. And FID is exciting because it's actually going to be coming to Chrome in Q4 or early Q1 as a web perf API. So you'll be able to view it in a PerformanceObserver, just as you get FCP today, which is kind of cool (both are sketched below). Super cool. Yeah, excited about standardization of this stuff. It's really exciting.
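What the speakers describe — reading FCP, and eventually FID, from a PerformanceObserver — might look roughly like this; a sketch using the paint entries available then and the first-input entry type that later shipped:

// First Contentful Paint, from the paint timeline
new PerformanceObserver((list) => {
  const fcp = list.getEntriesByName('first-contentful-paint')[0];
  if (fcp) console.log('FCP:', fcp.startTime);
}).observe({ type: 'paint', buffered: true });

// First Input Delay: the gap between the first input and the moment
// its event handlers can actually start running
new PerformanceObserver((list) => {
  const [firstInput] = list.getEntries();
  if (firstInput) {
    console.log('FID:', firstInput.processingStart - firstInput.startTime);
  }
}).observe({ type: 'first-input', buffered: true });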
So for those of you who aren't familiar with Lighthouse — and we know that it's awesome and a lot of people know — Lighthouse is an open source, automated tool for improving the quality of web pages. You can run it against any web page, whether it's public or requires authentication, and it has audits for performance, accessibility, PWA, and more. I'm excited to tell you about some things that we've been doing with Lighthouse. One of those things is a PWA refactor. Currently, there is a broad spectrum of PWA definitions in the wild, which can make it difficult to say definitively whether or not you are a PWA. And while our PWA checklist is absolutely wonderful, and it gives helpful guidance towards what a PWA is, we want a machine-verifiable way to say yes or no. So today we're launching the new Lighthouse UI with a more binary badging system for the PWA category. And the badge groupings reflect that we want everybody to be able to achieve the fast-and-reliable badge. All experiences should be that, whether you're installable or not. In order to become a full PWA and get that badge, you have to successfully fulfill all the audits in the categories. Yeah. There's a few more things that we've been doing. In the new 4.0 alpha that's coming out for Lighthouse, there's a few nice changes that we've made. One of the things we've been working on is reducing the amount of time it takes to run Lighthouse. Nobody wants to sit around waiting for a long time. So we're happy to report that the median run time of all Lighthouse runs that we're aware of has dropped by about 50%, and at the 90th percentile, it's down about 66%. We're really jazzed about this. We want to make sure it's not a long wait for you to get the insights that are available. A few more changes. We've changed how scores are represented. So if you've seen, at the top of a Lighthouse report, these score gauges, right beneath them is this little scale, right? This is just how the numeric scale is mapped to a color. We made a change here. I just want to point out that none of the numerical scores and those calculations have changed in this new update; it's just deciding which color is applied. So this is basically the change: we've adjusted how the various numerical scores map to these colors. Yeah, so basically we're raising the bar on what our expectations are for a performant site. But if you're in the green, you should feel really good about it. Yeah, it's good. I know it's nice to go for the hundred. And I love the hundred. I'm excited about it. But yeah, if you're in the green, you're good. I just want to make that clear. All right, sweet. Now, a few more changes, and this one is about throttling. When it comes to throttling, a good mobile throttling preset shouldn't necessarily map to the particular conditions of a telecommunication system and its specification. A good preset maps to what real users feel. And so really what we want to do is capture the latency and throughput at, like, the 80th percentile — the frustrating experiences that users often hit. And we want to keep pace with this measurement as our global telecommunications infrastructure gets upgraded. A lot of people are moving from 3G to 4G, and we want to make sure that we capture that. And so we're making a change, but actually not in the latency and throughput numbers. This is actually just a labeling change. So wherever you see fast 3G today, you'll be seeing slow 4G. And it's because the preset that we use actually captures a 4G experience more than a 3G experience. So just FYI: same stuff, different name. It's all good. What's next? Oh yeah, there's a few other things going on with Lighthouse — some really nice projects making use of it. First up, check out some of the projects on GitHub taking advantage of Lighthouse, some of the dependent projects.
Really cool stuff in here, a lot happening in recent months. Many projects are looking into or building systems around using Lighthouse in a continuous integration setup, so that on every commit you run Lighthouse, store all that data, get graphs — some really cool stuff happening in here. So take a look. Lighthouse is also available in a number of different commercial products as well. First is Calibre — fantastic stuff here. Treo is another one. I think this is my site, which is doing okay; accessibility actually needs some work. But there's some nice stuff. And the last is SpeedCurve, which just added support for Lighthouse a few months ago. So we're excited to see Lighthouse becoming a part of the production monitoring ecosystem. And even internally, we're excited to see where Lighthouse is being integrated. One of those examples, as was announced in the keynote, is the new site, web.dev. And it's exciting to be integrating it with really prescriptive, actionable guidance. You can run Lighthouse on any URL, and it will provide you with a prioritized to-do list with that guidance, and interactive code labs for the specific things that you need to work on. What's so exciting about this is that, for the first time, tooling is directly integrated with the documentation. Yeah, it's pretty hot. And we wanted to also call out another wonderful partner who has done a good job of using Lighthouse. Squarespace was able to use Lighthouse as an out-of-the-box auditing and reporting system to build on top of. And it allowed them to improve their 50th percentile and 95th percentile TTI by over three times. So we were super excited by that. They used it to generate traces and dig deep into specific problems as they happened, as opposed to post-regression. So now we are going to talk a little bit about the Chrome User Experience Report — or Crux, as I've already said, I think, three times. Crux provides user experience metrics for how real-world Chrome users experience popular destinations on the web. It's a dataset powered by real user measurement of key user metrics across the public web, aggregated anonymously from users who have opted in. We're excited to talk about some of the updates that we've done here. One of the things — and it was actually featured in Anshal's talk earlier — was regional analysis. We heard loud and clear from developers that we needed to be able to break down this dataset in a country-specific way. And now you can do that. Via BigQuery, which is where you can interact and play with this dataset, you can now get separate country-specific datasets to pull it apart (an example query follows below). And yeah, this is how you've been interacting with the Chrome UX Report in the past — just working with BigQuery. But I heard that there's, like, a nice, new, shiny thing. Yeah, you can get at it way more easily now. The brand-new Crux dashboard, which was announced just a bit ago, allows you to understand how an origin's performance evolves over time. It's built on Data Studio, it's much more easily accessible, and it can be easily customized and shared with everyone on your team — and it doesn't require you to write your own script on BigQuery to access it. And it's automatically synced with all the latest datasets, so you're good to go. Also, to ensure consistency across all of our tooling — as we've mentioned, that's a huge goal for us — FID is now launched as an experimental metric in Crux.
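As an illustration of those country-specific datasets, a BigQuery query over Crux might look like this — a sketch; the table (US, October 2018) and the origin are examples:

#standardSQL
SELECT
  ROUND(SUM(IF(bin.start < 1000, bin.density, 0)) / SUM(bin.density), 4)
    AS fast_fcp_share  -- share of page loads with FCP under one second
FROM
  `chrome-ux-report.country_us.201810`,
  UNNEST(first_contentful_paint.histogram.bin) AS bin
WHERE
  origin = 'https://example.com'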
So when we announced this last year, the dataset only had 10,000 origins, and now we are at over 4 million. And if you're excited to see whether your website is in this dataset — I'm excited to see if PaulIrish.com is in there, and if it's not, that would be a shame. Yeah, we're working hard to improve it and expand it quickly, so if you're excited, check in soon, because we are working hard to move fast. All right, so one of the things that's really important to us is to have a unified story between our performance tools. So, okay, hand-raising time. Raise your hand if you've used Lighthouse. Raise your hand if you've used PageSpeed Insights. Yes, of course. Raise your hand if you've noticed that what you're seeing in Lighthouse and PageSpeed Insights isn't necessarily telling the same story. Hmm, yeah, I'm there with you too. Now, we saw this was a bit of an issue and we wanted to improve it, because we don't want two different tools that Google provides giving advice that kind of conflicts. So we've been working hard and collaborating with the search team on this, and today we're excited to announce that there's a brand-new, next generation of PageSpeed Insights, now powered by Lighthouse. And this is really exciting stuff. So now, if you use PageSpeed Insights, all of the data that you've been seeing in Lighthouse when it comes to performance is now in the report. All of the metrics and opportunities and diagnostics are right there. You still see the top score that you have been seeing in PageSpeed Insights — that score is now the Lighthouse performance category score. So we're speaking the same language. And if you've really enjoyed the Chrome UX Report data that has been available inside of PageSpeed Insights, that's there too. We'll play a quick little screencast of how this looks. So let's take a look at chrome.com in PageSpeed Insights. We're going. Come on. Yeah, great, good, awesome. This is in real time — I did not speed anything up, so you get a sense of the latency. So yeah, this should look fairly familiar if you've used Lighthouse. But up at the top we have field data, and by default PageSpeed Insights runs its analysis on both mobile and desktop at the same time and delivers you the results simultaneously, so you can check that out. This is live today. So go check it out, take a look, give us your feedback. Excited to have this out there. All right, thanks. Oh yeah, I mean, you can clap if you want. I mean, that's cool. All right. I have a tendency of opening up the DevTools on basically every site that I visit. You know, it's just a habit. So I opened up the DevTools on PageSpeed Insights and, lo and behold, like much of the web, it's just a thin web app that makes a call to an API — a RESTful API. It's actually the PageSpeed Insights API. So we were like, well, this kind of means that in order to do this, we're going to have to have all the Lighthouse data available over the API. So that's what we have. The new PageSpeed API v5 — consider it the Lighthouse API v1. All the same Lighthouse data, including all categories — not just performance, but all of them. And all the work is done for you: no waiting for your own Chrome to reload and do the analysis. We'll do the work for you. And the Chrome UX Report data summary is still added into the response. Basic usage — I don't know if you'd use it from fetch, client side, but if you did, it would look something like this. Just pass the URL.
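The slide itself isn't in the transcript, but a client-side call of the kind Paul describes might look like this — a sketch; the origin is an example, and an API key parameter is omitted:

const endpoint = 'https://www.googleapis.com/pagespeedonline/v5/runPagespeed';

fetch(`${endpoint}?url=${encodeURIComponent('https://example.com')}`)
  .then((response) => response.json())
  .then((data) => {
    // The full Lighthouse report, same data as running Lighthouse yourself
    console.log(data.lighthouseResult.categories.performance.score);
    // The Chrome UX Report field data summary
    console.log(data.loadingExperience);
  });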
There's a few other parameters to customize things. You get back the results, and they look a little like this. There's a lighthouseResult property, full of the exact same Lighthouse data that you'd get by running Lighthouse anywhere else. And inside that loadingExperience property — that is the Chrome UX Report stuff. So, really cool. Check it out. Details, documentation, and reference guides are here: PageSpeed Insights API v5. All right. And so this is really cool. Yeah. It means unified analysis — it's all the same. So when you're measuring, you're optimizing, you're monitoring, if you want to start making changes and testing things out, there's a place for you to go for that. So that's great. And we have all of these things aligned, but where do you go for what, and when should you go there? If you need a snapshot of a page's performance, as Paul said earlier, PageSpeed Insights is a good default to go to, because it provides you with both the field and the lab data and gives you a good benchmark. If you want to make changes, test and iterate, and really have that fast feedback, then the Chrome extension, the Audits panel, or working in the command-line interface is going to be a good place to go. And finally, if you want to set up production monitoring or set budgets, then the API is going to be fantastic. But across the entire development lifecycle, you are now completely powered by Lighthouse, which we're super excited about. Yeah. So to wrap up — well, I guess there's one thing, or four things, to take away from this. First up: measure well, measure often; you can't improve what you don't measure. Yep. You can now use PageSpeed Insights for quick Lighthouse analysis. The Crux real-world data really helps round out your view of what's happening with your users, and helps you understand the different percentiles where users are feeling pain and frustration. And finally, to evaluate performance at every stage — which is really important to us — you can now check out the API. So go use it. All right, I think that's it. Thank you guys very much. Thank you. It's that time again. Devices at the ready, folks. Big Web Quiz. Yes, so let's get the quiz screen up again. This round — when the screen is up, here we go — this round is going to be another one for the old-school developers in the room. I think I'm really looking forward to this one. Here we go. Mm, look at this. Now, no, no, no, no, no, no, no, no. All these people who like their loosey-goosey HTML5 doctype: put whatever you want in there. I was talking to Surma the other day and he said, I don't even bother putting a head in my documents. What a monster. This — this is how it was done back in the day. And what we're looking for here are the actual parts of that doctype. So you might see little snippets of the doctype, and you have to decide: is it in there or is it not? Again, it's two seconds per choice, so be quick. Here we go. All right. Right, here we go. Frameset, CR, HTML, W3C — is that in there? Confidence is flip-flopping. Because even though we wrote the answers for this, I can't remember them as well. For all my bravado, I actually can't remember either. I once saw Addy Osmani on stage write out this doctype manually. That's incredible. And he got an applause afterwards. That's well earned. Look at the confidence — very low. Very, very low. Transitional, people do feel the... oh no, wait, maybe XML. The question is closing now, so if you... Be quick. Select quickly. Ooh. Here we go. Oh, this one.
I think we've divided the audience, if I'm honest. And I'm with you — I'd be 50-50 on most of these. Yeah, CR, that's part of a lot of web specs, but it's actually TR that's in there. XHTML is in the rest of it? No, that's not. Let's go to the next bit. Well, 1.1 — is it in there? 1.1 was never transitional. No, it was always 1.0. The rest is part of it. What about this lot then? Feelings of sadness overwhelm. It was, of course, 1.0. It was, of course, Transitional, so those are in there. Not XML, though. Not XML, surprisingly. And there's TR — that's definitely part of it. The last block of stuff: is web part of it? You'd think it would be, it being part of the web, but no, it is not. It's nothing to do with it. And loose isn't either. Excellent stuff. So should we go into the next talk then? Absolutely. So please welcome to the stage, to talk about techniques for fast websites, Houssein Djirdeh and Katie Hempenius. Big round of applause. I'm Katie Hempenius, and I'm an engineer on the Chrome team, specifically focused on speed. Today, Houssein and I are going to talk with you about how you can make your site fast. We're going to focus on three things: images, web fonts, and JavaScript. We've chosen to focus on these three things because they are the three largest components of most websites. In addition, they're likely to be the three largest components of your performance budget. We hope that after this presentation you'll go home and make changes to your website. Know that during this process, you can lean on both Lighthouse and web.dev for additional resources. Almost everything we cover today can be audited by Lighthouse. In addition, at web.dev, you can find guides, code samples, and demos of everything we cover today. So let's start by talking about images. Images are taking over the web. On many sites, images alone consume the entire performance budget. And on some sites, they far exceed it. I think the reason why these numbers are so bad lies in the fact that performant images are the result of many steps and optimizations. As a result, they're not going to happen accidentally. A performant image is in the appropriate format. It is appropriately compressed. It is appropriately sized for the display. And it is loaded only when necessary. To be successful with images, it's imperative that you automate and systematize these things. Not only is this going to save you time, but it's going to ensure that these things actually get done. There's much more to images than meets the eye. At a bits-and-bytes level, an image is as much a byproduct of its image format and its compression as of its visual subject matter. You can think about image formats as choosing the right tool for the job. The image format that you choose will determine what features an image has — for instance, whether it supports transparency or animation — as well as how it can be compressed. The first image format that I want to talk about today is the animated GIF. You should not be fooled by their crappy image quality: they're actually huge in file size. This one-and-a-half-second clip is 6.8 megs as a GIF. As a video, however, it is 16 times smaller, at around 420 kilobytes. This is not uncommon. Animated GIFs can be anywhere from five to 20 times larger than the same content served as a video. This is why, if you've ever inspected your Twitter feed, you may have noticed that the content labeled as GIF is not actually a GIF. Twitter does not serve animated GIFs. If you upload an animated GIF, they will automatically convert it to video.
The reason for the drastic difference in file sizes between videos and animated GIFs lies in the differences between their compression algorithms. Video compression algorithms are far more sophisticated. Not only do they compress the contents of each frame, but they also do what is known as inter-frame compression. And you can think of this as compression that looks at the differences between frames. The first step in switching from animated GIFs to video is to convert your content. You can use the FFmpeg command-line tool for this. Next, you'll need to update your HTML and replace image tags with video tags. The code I have up on the screen is technically correct, but it's probably not what you want to use. Instead, you want to make sure to add the four attributes I've highlighted up on the screen. That's going to give your video that GIF look and feel, even though it's not a GIF. Now we'll switch gears and talk about a much more modern image format. And that, of course, is WebP. WebP is no longer a Chrome-only technology. Last month, Microsoft Edge shipped support for WebP. In addition, Mozilla Firefox announced their intent to ship WebP. Currently, 72% of global web users have support for WebP. And given these recent developments, you can expect this number to only increase. This is a big deal because WebP images are 25% to 35% smaller than the equivalent JPEG or PNG. And this translates into some really awesome improvements in page speed. When the Tribune added support for WebP, they found a 30% improvement in page load times on WebP-supporting browsers. By far the biggest hesitation I see around adopting WebP is a fear that you can't both serve WebP and support non-WebP browsers. This is not true. The picture and source tags make it possible to do precisely this. You can think of the picture tag as a container for the source and img tags that it contains. The source tag is used to specify multiple formats of the same image. The browser will download the first — and only the first — image that is in a format it supports. So in the example I have up on the screen, a Chrome browser would download the WebP version, and a Safari browser would download the JPEG version. The great thing is that all major browsers have supported the picture and source tags since 2015; but if, say, a 2014 browser were to encounter this, it would still work, because those browsers would just download the image specified by the img tag (this markup is sketched below). You may have noticed I've been talking about image formats, but I want to go on a tangent and squeeze in a mention of the AV1 video format. The reason I wanted to squeeze it in is that it is the future of video on the web. And the reason it's the future of video on the web is that it compresses video 45% to 50% better than what is currently typically used on the web. It's still fairly new, so it's not really practical for you to be implementing it on your site yet. However, I encourage you to attend Francois and Angie's talk at 3:30 today, where they're going to be diving into AV1 in more detail. Image compression is a topic that's tightly coupled to image formats. Image compression algorithms are specific to the image formats that they compress. However, all image compression algorithms can be broken down into lossless and lossy compression. Lossless compression results in no loss of data. Lossy compression does result in loss of data; however, it can achieve greater file size savings.
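Reconstructing the on-screen examples from the surrounding description — the FFmpeg conversion, the video markup with four attributes that give it that GIF look and feel, and the picture/source fallback; the file names here are placeholders:

# Convert the animated GIF to MP4
ffmpeg -i animation.gif animation.mp4

<!-- autoplay, loop, muted, and playsinline make the video behave like a GIF -->
<video autoplay loop muted playsinline>
  <source src="animation.mp4" type="video/mp4">
</video>

<!-- The browser downloads only the first format it supports;
     older browsers fall back to the img tag -->
<picture>
  <source type="image/webp" srcset="photo.webp">
  <img src="photo.jpg" alt="A photo">
</picture>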
At a minimum, all sites should be using lossless compression, no questions asked. However, for most people, it's going to make sense to be slightly more aggressive and use lossy compression instead. The trick with lossy compression is finding that sweet spot between file size savings and image quality for your particular use case. Many lossy compression tools use a scale of zero to 100 to represent the image quality of the compressed image, with zero being the worst and 100 being the best. If you're looking for a place to start with lossy compression, we recommend trying out a quality level of 80 to 85. This typically reduces file size by 30% to 40% while having a minimal effect on image quality. By far the most popular tool for image compression is imagemin, and it can be used with just about everything. imagemin is used in conjunction with various imagemin plugins, and you can think of these plugins as implementations of different image compression algorithms. Up on the screen, I've put the most popular imagemin plugins for various use cases; however, these are by no means the only imagemin plugins available. Image sizing is something I think many sites can be doing a much better job at. We have so many types of devices — and specifically, sizes of devices — that access the web these days; however, we insist on serving them all the exact same size of image. Not only does this have transmission costs, but it also creates additional work for the CPU. The solution, of course, is to serve multiple sizes of an image. Most sites find success serving anywhere from three to five sizes of an image, and in fact, this is exactly what Instagram does. Instagram uses this technique throughout their site; however, one use case where they were able to measure its impact was with their Instagram embeds. For context, Instagram embeds allow third-party sites to display Instagram content on their own site. As a result of serving multiple image sizes, Instagram was able to reduce image transfer size by 20% for their Instagram embeds. Two popular tools for image resizing are Sharp and Jimp. The biggest difference between the two is that Sharp is faster — and when I say faster, I mean faster at image processing — however, it requires compiling C and C++ to install it. In addition to creating multiple sizes of your images, you'll need to update your HTML. You'll want to add the srcset and sizes attributes. The srcset attribute allows you to list multiple versions of the same image. In addition to including the file path, you'll also want to include the width of each image. This saves the browser from having to download the image to figure out how large it is. The sizes attribute tells the browser the width that the image will be displayed at. By using the information contained in the srcset and sizes attributes, the browser can then figure out which image it should download (a sketch follows below). Lazy loading is the last image technique that I'll be talking about today. Lazy loading is the strategy of waiting to download a resource until it is needed. In addition to images, it can be applied to resource types like JavaScript. Image lazy loading helps performance by easing the bottleneck that occurs on initial page load. In addition, it saves user data by not downloading images that may never be used.
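Sketches of two of the techniques just described — lossy compression with imagemin (the mozjpeg plugin and quality value are example choices, using the current imagemin Node API) and the srcset/sizes markup (file names, widths, and the breakpoint are placeholders):

const imagemin = require('imagemin');
const imageminMozjpeg = require('imagemin-mozjpeg');

imagemin(['images/*.jpg'], {
  destination: 'build/images',
  plugins: [imageminMozjpeg({ quality: 82 })], // the 80–85 sweet spot
});

<img src="photo-640.jpg"
     srcset="photo-320.jpg 320w,
             photo-640.jpg 640w,
             photo-1024.jpg 1024w"
     sizes="(max-width: 480px) 100vw, 50vw"
     alt="A photo">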
Spotify is an example of a website that uses this technique very effectively. On this particular page that I pulled up, image lazy loading was the difference between loading one meg of images on initial page load and 18 megs of images on initial page load. That's a huge difference. Two tools to look into for image lazy loading are lazysizes and Lozad.js. And you implement them both more or less the same way: add the script to your site and then indicate which images should be lazy loaded. However, just because this is a fairly simple-to-use technique does not mean that it's not important. In fact, it is so important that native lazy loading is coming to Chrome. Native lazy loading means that you'll be able to take advantage of lazy loading without having to add third-party scripts to your site. It'll be available for both images and cross-origin iframes. And you can truly be lazy when it comes to implementing it. If you make no changes to your HTML, the browser will simply decide which resources should be lazy loaded. If you do care, however, you can use the lazyload attribute to specify which resources should or should not be lazy loaded. Fonts can cause performance problems because they are typically large files that are downloaded from third-party sites. As a result, they can take a while to load. This leads to the phenomenon known as the flash of invisible text. And shockingly, this affects two out of every five mobile sites. The flash of invisible text looks like this. Instead of a user being greeted with text on your site, they're greeted with invisibleness. Not only is this frustrating, but it also looks bad. What you want instead is the flash of unstyled text. And this is when the browser initially displays text using a system font and then swaps it out for the custom font once it has arrived. The good news here is that this fix is literally a one-liner: everywhere in your CSS where you declare a font face, add the line font-display: swap. This tells the browser to use that swapping behavior that I just talked about on the previous slide.
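Sketches of those last two fixes — the lazysizes pattern (the script path and image name are placeholders) and the font-display one-liner (the font name and URL are placeholders):

<!-- lazysizes: include the script, then mark images to be lazy loaded -->
<script src="lazysizes.min.js" async></script>
<img data-src="photo.jpg" class="lazyload" alt="A photo">

/* The one-line fix for the flash of invisible text */
@font-face {
  font-family: 'MyWebFont';
  src: url('/fonts/mywebfont.woff2') format('woff2');
  font-display: swap;
}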
Now I'm going to hand the mic over to Houssein, who's going to talk with you about techniques you can use for your JavaScript. So, Katie showed a number of techniques that can be quite useful for the images and web fonts on your site, as well as a few exciting things coming to the Chrome platform in the near future, like native lazy loading. For the rest of this talk, we'll go over some other important things you should be doing, but for the JavaScript that makes up your application. Earlier in the session, we saw how images can make up the majority of a site with regards to the number of bytes sent. However, we also send a significant amount of JavaScript to browsers. If we take a look at HTTP Archive data once again: as of last month, the median amount of JavaScript that we shipped to mobile web pages was about 370 kilobytes. For desktop, the number was about 420. Now, JavaScript code still needs to be uncompressed, parsed, and executed by the browser. So in reality, we're looking at about a megabyte of uncompressed code that the browser needs to process for an application of that size. Users who try to access this with low-end mobile devices will experience much poorer performance. But why are we as developers shipping way more JavaScript code than we've ever done before? There are a number of reasons, one of them being the number of dependencies that we pull into our applications and how easy that process has become. Frontend tooling has come a long way in the past decade, but there has been some cost. So what can we do to continue to build robust and fully-fledged applications, but not at the expense of user experience? The very first thing we can and should consider doing is splitting our bundle. The idea behind code splitting is that instead of sending all the JavaScript code to your users as soon as they load the very first page of your application, you only send them what they need for their initial state, and then allow them to fetch future chunks on demand. The easiest way to get started with code splitting is by using dynamic imports. Now, dynamic imports have been supported in webpack for quite some time, and they allow you to import a module asynchronously, where a promise gets returned. Once that promise finishes resolving, you can do what you need to do with that piece of code. The idea behind dynamic imports is that you want to make sure they fire on certain user interactions, and you want to do this to make sure that you only fetch code when it's actually needed (a sketch follows below). If you happen to be using another module bundler, like Parcel or Rollup, you can still use dynamic imports to code-split where you see fit. Now, a number of JavaScript libraries and frameworks have provided abstractions on top of dynamic imports to make the process of code splitting easier with your current tooling. With Vue, for example, you can define async components, which are just functions that return a promise that resolves to the component. By using that with dynamic imports, you can attach async components to your routing configuration, so that the code that lives in a component is only fetched when its route is reached. Angular has a very similar pattern. In its router, you can use the loadChildren property to connect a feature module to a specific route. With Ivy — the new rendering engine that the Angular team is working on — you'll be able to define loadChildren as a dynamic import. With this approach, all the code — all the components, all the services that live in the feature module — will only get loaded when that route is reached. In the meantime, you can still use loadChildren; you just need to use a relative file path to the feature module. With React, libraries like react-loadable and loadable-components have allowed us to code-split at the component level while taking care of other things, like showing a loading indicator or an error state where applicable. However, with React 16.6, the lazy method was introduced, and this allows you to code-split while using Suspense. Suspense is a feature that the React team has been working on for quite some time, and it allows you to suspend how certain component trees update their state or update the DOM, depending on whether their child components have finished fetching their data. Another very useful technique that ties in well with code-splitting your bundle is using preload. Preload allows us to tell the browser that we have a late-discovered resource — a resource that's fetched late in the request chain — that we'd like to download sooner, because it's important. By doing this, we're telling the browser to prioritize it. To use preload, you only need to add a link element to the head of your HTML document, with a rel attribute that has a value of preload. The as attribute is used to define what type of file you'd like to load.
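A sketch of both ideas — fetching code on interaction with a dynamic import, and preloading a late-discovered script; the module name, function, selector, and paths are hypothetical:

// Fetch the chart code only when the user actually asks for it
const button = document.querySelector('#show-chart');
button.addEventListener('click', async () => {
  const { drawChart } = await import('./chart.js');
  drawChart();
});

<!-- Tell the browser to download a late-discovered script sooner -->
<link rel="preload" href="/js/chart.js" as="script">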
Now, as developers, it's also important to make sure that the code we write works well in all the browsers people use to access our site. So if we happen to include ES2015, ES2016, or later syntax, we also want to include backwards-compatible formats so older browsers can still understand them. This usually involves adding transforms for any newer syntax that we use, and polyfills for any newer features. Now, because transpiling means we're adding code on top of our bundle, our application ends up being larger than it was originally written. One way to make sure that we only transpile the code that's actually needed is by using Babel preset-env. This preset takes the hassle out of trying to micromanage which plugins and polyfills we need to add. And it does this by allowing us to specify a target list of browsers and letting Babel handle the rest. You can add this preset to the list of presets in your Babel configuration, and you can use the targets attribute to define the set of browsers that you'd like to reach. Now, this is a browserslist query, so if you've used tools like Autoprefixer before, you may already be familiar with it. Using a percentage, like here, is one type of query you can use, and it allows you to target browsers above a certain global market share. The useBuiltIns attribute allows us to tell Babel how to handle adding polyfills. The usage value means that Babel will automatically include polyfills in files only when they're actually needed, for features that need to be transpiled. Now, this is the behavior we all want: to only transpile code when it's required. So although Babel preset-env means that we can limit the amount of transpiled code and make sure that we only include what's necessary for all the browsers we plan to target, what if there was a way to differentially serve two different types of bundles — one that's largely untranspiled, for newer browsers that don't need nearly as many polyfills, and another legacy bundle that contains more polyfills and is a bit larger, but is needed for older browsers? We can do this by using JavaScript modules. Now, JavaScript modules, or ES modules, allow us to write blocks of code that import and export from other modules. And the amazing thing about using modules with Babel preset-env is that we can have modules as a target instead of a specific browser query. One site that's actually using this module approach today is The New York Times, and they're using it for one of the flagship articles of the year: polling in real time for the 2018 midterm elections. They're using Sapper as their client-side framework, which has a number of progressive enhancements baked in, like automatic code splitting. But they're also using Rollup to emit module chunks as well. They're using a fairly simple heuristic to make sure that users on older browsers download a larger, more polyfilled bundle, while users on newer browsers download the smaller, slimmer modules. A very simple way to make sure that users who access your app download only one or the other is the module/nomodule technique. When you define a script element with type module, browsers that understand modules will download it normally, and they'll know to ignore any script element that has the nomodule attribute. Similarly, browsers that don't understand modules will ignore any script elements that have type module; but since they can't identify what nomodule means, they'll download that bundle instead. So here, we get the best of both worlds, shipping the right bundle to our users depending on what browser they use (both pieces are sketched below).
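A sketch of both pieces — a Babel config using preset-env with a browserslist target and useBuiltIns, and the module/nomodule markup; the query and bundle names are placeholders:

{
  "presets": [
    ["@babel/preset-env", {
      "targets": "> 0.25%",
      "useBuiltIns": "usage"
    }]
  ]
}

<!-- Modern browsers download the slim module bundle and skip the other -->
<script type="module" src="app.module.js"></script>
<!-- Older browsers ignore type=module and download the legacy bundle -->
<script nomodule src="app.legacy.js"></script>

To target module-supporting browsers instead of a market-share query, preset-env also accepts "targets": { "esmodules": true }.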
If you happen to have critical modules that you'd like to download sooner, you can do that by preloading them as well — you just need to specify a modulepreload value for the rel attribute. So, we've talked about a few things you can do to improve the code that you ship to your users. But if you're thinking of adding any of these optimizations, it can be useful to try and keep an eye on things, and there are tools out there that can make this easier. The Code Coverage tab within Chrome DevTools allows you to see the size of all your bundles, as well as how much of each is actually being used. You can access it by opening the command menu and just typing in coverage. If you're using webpack, Webpack Bundle Analyzer can be a very handy tool. It gives you a nice treemap visualization of your entire bundle; you can zoom in and see which parts of your bundle are larger and which parts are smaller. And if you've ever wanted to find the cost of a specific library, you can use Bundlephobia. You can type the name of any package and see how large it is, as well as how much of an impact it can make on your application in terms of download time. You can also scan your package.json file to see how much of an impact all your packages make. Now, as useful as it is to use tools to manually keep an eye on how things are doing with your bundle size, it can be especially useful to also include checks in your build workflow. One tool that can help here, and that allows you to set performance budgets, is Lighthouse CI. So instead of only running Lighthouse in the Chrome Audits panel or as a Chrome extension, you can also run Lighthouse in CI and have it included as a status check in your workflow. You can specify certain Lighthouse categories and set scores for them, so that merges and pull requests only get included if those scores are met. Now, a site that's actually taking steps to add a number of these optimizations is Uniqlo. They're a clothing retailer based out of Japan, and they're taking steps to improve their entire web architecture, beginning with their Canadian site. They identified a number of critical resources and decided to try and download them sooner, and they're doing this by preloading them. They've done this with some images, some core fonts, as well as a number of cross-origin fetches. They then also identified that they could code-split and try to get some wins that way as well. They took the correct first step of code-splitting at their route level, and just by doing that alone, they noticed an almost 50% reduction in their bundle size. They also moved on to code-split their localization package and noticed that they could get their bundle size down to 200 kilobytes. After this, they added even more optimizations, such as using a Preact compatibility layer for their React bindings, to get their bundle size to about 170 kilobytes. While doing all of this, they made sure to also set budgets so their whole team can stay in sync, and they're using another open-source tool to help here, called bundlesize. They've set 80-kilobyte budgets for each one of their chunks, and that allowed them to stay under a 200-kilobyte total for all of their scripts.
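Budgets like Uniqlo's might be declared in package.json like this — a sketch using bundlesize's config format; the paths are hypothetical, and the 80 kB figure is the per-chunk budget from the talk:

{
  "bundlesize": [
    { "path": "./dist/main.js", "maxSize": "80 kB" },
    { "path": "./dist/*.chunk.js", "maxSize": "80 kB" }
  ]
}

With this in place, bundlesize runs as a CI status check and fails the build when a chunk exceeds its budget.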
While adding these optimizations, they noticed a two-second time-to-interactive reduction for users on low-end mobile devices with weaker connections. Now, you might think two seconds is not that much, but it can make an impact for your customers. After these optimizations were added, they noticed a 14% reduction in bounce rate, a 31% increase in average session duration, and a 25% increase in pages viewed per session. Now, there were other things also being added to the site at the same time, but they know that performance played a very big part here. So, we've talked about quite a few things that you can do today to improve how your site performs. But what can Chrome do, as a browser, as well? For users that opt in to Data Saver mode, Chrome will try to show a lightweight version of the page where possible. And it does this by minimizing the data used, as well as showing cached content whenever it can. Now, as developers, you can also tap into this, by using the Network Information API. If you look at the navigator.connection.saveData attribute, you can identify whether your users actually have Data Saver enabled, and you can try to serve a slightly different experience to make sure things are fast for them as well. You can also use the effectiveType attribute to serve different assets conditionally, depending on what kind of connection your users are on. The very last thing that I do want to mention: although Katie and I have talked about a lot of things that you can do to improve your site, every application is built differently, every team is different, every toolchain is different. So this isn't something you need to start doing wholesale, including everything at once. By setting budgets and keeping an eye on your bundle size from the very beginning, you can include performance enhancements step by step and make sure your site never regresses. Performance doesn't need to be an afterthought. Almost everything we've talked about is on web.dev, so I highly suggest you take a look if you haven't already. We hope you enjoyed this talk as much as we enjoyed giving it. Thank you.
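A sketch of the Network Information API checks Houssein describes; the two asset-loading functions are hypothetical, and the feature check guards against browsers without the API:

function loadLowResImages() { /* hypothetical lighter experience */ }
function loadHighResImages() { /* hypothetical richer experience */ }

if ('connection' in navigator) {
  // Respect users who have opted in to Data Saver
  if (navigator.connection.saveData) {
    loadLowResImages();
  } else if (navigator.connection.effectiveType === '4g') {
    // Serve richer assets only on faster effective connections
    loadHighResImages();
  }
}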
Oh, I'm still thinking about that last round of Big Web Quiz. We're going to do another one. Yes. Devices, right? What are we doing this time? Well, that is a great question. Yeah. Databases. Certainly, databases have, like... mm. Oh, yeah. Well, I think this might separate the field. We're not going to show the leaderboard just yet, but I can tell you that it is very close at the top. Oh, yes. That's true. Yeah. So, ready? Set. Let's go for it. See what happens. So then, are they ready? I mean, it is the Big Web Quiz. So we have databases of the big web, CMSs, and so on. Pervasive PSQL. What have we got? Acerbase there. Mm. EBDB. I'm a big fan of that one. All Your Base. Oh, the confidence is just — it's fluctuating wildly here. Wow. JakeDB. Confident, are you, on that one? And a biscuit base. Mm, interesting. Wow. That was very split, wasn't it? Yeah. Everybody's unsure about the database names. You ready to find out? I'm ready to find out. Yep. Let's reveal. Interesting. Interesting. Max and I... Wait a second. The last one. Wow. Wow. All right. Let's keep going. Acerbase. Is it fake? Who knows? It's... I've got a feeling that some of these might be the wrong way round. We might have to do a quick test at the back. And then, last but not least, let's take a look. JakeDB: is it fake or real? We're all wondering. Mm. Is it, is it? Fake. Oh. I think this is the wrong way round. I'm going to check it. I think, yeah. I think JakeDB is real. I think we need to go backstage and, like, debug this. Have we got a bug? Possibly we do. Yeah. While we figure that out, we should have our next speakers. So we will invite to the stage Ewa Gasperowicz and Phil Walton to talk about caching strategies. Hey, everyone. My name is Philip Walton. I'm an engineer on the Chrome team working on web performance and service worker tooling. And I'm Ewa. I'm also an engineer on the Chrome team. Today we're going to talk about service worker from the speed and resilience perspective. We'll look into how our decisions regarding service worker implementation can influence — both positively and negatively — our site's performance. And we hope that by the end of this talk you'll have a really good understanding of all the trade-offs involved in using this more and more mainstream technology. So Ewa, I noticed you said service worker was mainstream. What makes you say that? Well, service worker is now supported in all modern browsers, which means you can move on from treating it as a pure progressive enhancement and treat it as a core part of your site's architecture. Service worker can do many things, but in this talk we'll focus especially on its caching capabilities. Often, when we think about service worker and caching, we associate it with providing offline support. After all, one of the main achievements of service worker was that we could finally get rid of the offline downasaur and, you know, send him into a well-deserved retirement. But apart from that, service worker can also be a great tool for improving the performance of your online site, especially for your returning users. When used right, it can give you a serious boost in terms of speed on repeat visits. That's right. And on the other hand, when used incorrectly or without proper analysis, it can actually hamper a site's performance, or even derail the whole experience altogether. So as developers, it's critical that we understand how a service worker affects the performance of our site. As with any technology, there are both costs and benefits, and we want to maximize the benefits while minimizing the costs. Well, using service worker brings a lot of benefits in terms of performance. In many cases, it allows you to overcome network latency entirely. For example, if you cache your entire app, you don't need to go to the network at all. Also, if you have some cached content, you can show it immediately — even if it's a bit stale — and look for updates in the background at the same time. It can also make your requests smaller on average — for example, in the app shell model, where you're fetching just a partial piece of your page rather than the full HTML each time. Finally, there are some more subtle benefits. For example, we know that JavaScript needs to be parsed, compiled, and executed before it can be used, and this can take time. So engines like V8 use some heuristics to see if they can store the bytecode from these phases, to be reused on repeat visits and avoid this cost. Now, with a service worker, the chances of a repeat visit are pretty high, so V8 can opt in to that optimization automatically for you. This means it stores the script as bytecode by default, making repeat visits faster. All of these are really good performance reasons to implement service worker on your page. But it's not free. Phil, what are the costs of running service worker on your page? Yeah, so the first and arguably most often overlooked cost of service worker is that it can take time for the service worker process to start up if it's not already running.
And this can happen if a user hasn't visited your site in a while. Let me show you what I mean. Consider a basic network-first strategy, in which the service worker just forwards the request from the web app on to the network and doesn't touch the cache at all. Since this web app is running a service worker, every request then has to go through that service worker. And if the service worker process isn't currently running, the web app has to wait for the browser to spin it up before it can make any request. So let's take a look at the total time it takes to make a request in various scenarios. In the case where the web app isn't using service worker, the total time is just the network latency. In the case where the web app is using service worker and that service worker is already running, there's a little bit of extra cost because the request has to go through the service worker, but that cost isn't usually too high. However, in the case where the service worker is not running and needs to boot up, that startup time can really delay the response. So in cases like this where the service worker is actually not running on a page, how long does it usually take to boot up? So that's a good question, and the honest answer is it depends on the user's device, but fortunately there's an easy way you can measure this cost yourself. So this code here uses the performance timeline to get performance data for a particular URL. If the request for the URL went through the service worker, then the workerStart property on the performance entry will mark the moment right before the service worker was run, and the requestStart property will mark the moment the service worker received the fetch event. So the difference between these two timestamps is the total time it took for the worker to start up (there's a sketch of this below). And if the service worker process was already running, this time will usually be zero or close to zero. And so I actually measure service worker startup time on my website, philipwalton.com, and here's what I found when looking at my own data. When a user visits my site for the first time after installing the service worker, it's already running only about 25% of the time. That means 75% of the time the service worker is not running and needs to take some time to start up. For those cases, I found that on desktop it's usually between 20 and 100 milliseconds to start up, but on mobile it can be more like 100 to 500 milliseconds, and at the 95th percentile, sometimes it's more than a second. So let me reiterate, these are stats from my website. The numbers you see might be different, but this should give you kind of a general idea of what's possible. Another cost of using service worker is that cache reads aren't always instant, and this affects any caching strategy where the service worker has to wait for the cache to either miss or error before it can go to the network. We saw before that a network-first strategy can be slower than not using a service worker at all, but what about strategies that use the cache, or start with the cache anyway? A cache-first strategy will initially look for a response in the cache; if one is found, it'll be used, but if it's not found, or there's a timeout or error, like I mentioned, it will fall back to the network. So here's how the performance of cache-first strategies breaks down in different scenarios.
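A sketch of that measurement, using the Performance Timeline properties Phil names (the asset URL is just a placeholder):

```js
// A sketch: measure service worker startup time for a request that went
// through the service worker. workerStart marks the moment just before the
// worker was run; requestStart marks when the worker received the fetch event.
const [entry] = performance.getEntriesByName('https://example.com/app.js');

if (entry && entry.workerStart > 0) {
  const startupTime = entry.requestStart - entry.workerStart;
  // A value near 0 means the service worker process was already running.
  console.log(`Service worker startup took ${startupTime.toFixed(1)}ms`);
}
```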
The most common case is going to be the one at the top, which is very fast when there's a cache hit, but this is not the only possibility. There could also be a cache miss, there could be a slow cache, you could have a timeout. Remember, there's also the possibility that the service worker isn't running, and so that could delay it as well. All of these bad cases could happen at the same time. So while it's definitely possible to have a cache-first strategy that's faster than not using a service worker, look at how many of these examples end up being slower. So Phil, I'm wondering, how likely is it that the slow cases actually occur? Is it measurable somehow? Yeah, so this is also measurable. It's a little bit trickier than the last example I was showing. This, in the same way, uses the performance timeline, and you can look at the entry's transferSize property. If the transfer size is zero, that means the request either came from the cache or it came from the service worker. For requests that you know are being handled by the service worker, because you set up a route for that URL, you can look at the time between the responseStart property and the requestStart property and see the total strategy time (there's a sketch of this below). Of course, that doesn't tell you if it was just handled by the cache, or if it was handled by the cache and the network. If you need more granular timing data than that, then you have to add performance marks in your service worker itself. You can use something like performance.now and then postMessage the data back to the window. It's a bit clunky at the moment, to be honest, but we have a new API proposal for fetch event worker timing that should make this easier in the future, and it is a proposal right now. If you want to offer feedback on the design, go to this short link here on the slide; we'd love your feedback and you can chime in. The last cost that we want to point out is that requests made from within the service worker can sometimes compete with higher priority requests on the window. The cause of this is usually overaggressive precaching, or precaching before the window has finished loading all of its resources. For example, if you're precaching a lot of assets on your website, you could potentially get into a situation where those precache requests get queued ahead of more important requests that the user needs. So while APIs like Priority Hints can solve this issue somewhat, the recommended approach right now is just to wait to register your service worker until after the load event. Okay, thanks Phil for the thorough walkthrough of the costs. So once we know all this and we know how to measure it, how does this translate into the design of service worker? Well, usually there are three sources the service worker gets content from: the network, the cache, or it can also generate content on the fly, for example by using some, you know, templating logic. When designing service worker, your role is to find a combination of these three sources that is the most efficient for the use cases on your web pages. This is the serving strategy of your service worker. Of course, in order to use anything from the cache, we need to populate it with the resources first, and this is the caching strategy of your service worker. Taking all these aspects into account can be quite daunting, but fortunately there are tools that make it easier.
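Before getting to those tools, here's a sketch of the transferSize check just described, for URLs you know your service worker handles:

```js
// A sketch: spot cache-served responses and approximate the strategy time.
// transferSize === 0 means the response came from a cache (HTTP cache or
// service worker); responseStart - requestStart approximates how long the
// service worker strategy took.
for (const entry of performance.getEntriesByType('resource')) {
  if (entry.transferSize === 0) {
    const strategyTime = entry.responseStart - entry.requestStart;
    console.log(`${entry.name} served from a cache in ~${strategyTime.toFixed(1)}ms`);
  }
}
```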
For example, Workbox is a set of libraries that make it easy to cache assets and take advantage of service worker features and related APIs. In this talk, we'll focus on general design principles that you can implement yourself in service worker, but we will also call out some of the Workbox features that can help you out in some common scenarios. Okay, so let's start with looking at the serving strategies for service worker implementations. And given what we said about the costs that can be associated with using service worker, you're probably wondering: is it possible to avoid these costs entirely and make sure that my site actually loads faster than it would have without the service worker? So I think the most important step in building a fast site with service worker is understanding which requests are most important to optimize. Broadly speaking, there are two types of requests: navigation requests and resource requests. Navigation requests are for your full HTML pages, and resource requests are for the assets like JavaScript, CSS, and images that those pages then reference. In my experience with talking to other developers about service worker implementations, I do see a lot of people responding to resource requests from the cache, but unfortunately I don't see a lot of people responding to navigation requests from the cache. And that's really too bad, because navigation requests are typically where the biggest performance gains can be made. So here's why. The key difference between resource requests and navigation requests is that resource requests are likely already being cached by the browser. In addition, you can already use APIs like link rel=preload to warm the HTTP cache for future requests. So if all your service worker is doing is precaching static resources, you're essentially just recreating what the browser is already doing for you. Navigation requests, on the other hand, are completely different. In general, it's not recommended to put caching headers on pages that you navigate to, because obviously the content might change but the URL doesn't, and so that can get you into trouble. Which means that navigation requests can't be cached in the same way that resource requests can. They also don't work with APIs like link rel=preload. And to top all that off, navigation requests are typically the ones that will encounter a service worker that's not running. Whereas for resource requests, by the time they're made the service worker has already started up for the navigation request, so those kind of work just fine. So don't get me wrong. I'm not suggesting that you ignore resource requests and don't cache them. But you should respond to navigation requests from the cache as well. So here are three practical and concrete ways you can speed up navigation requests and avoid most of the service worker costs that I mentioned earlier. First, as I just said, respond to navigation requests from the cache. Even if you eventually need to go to the network, you should respond with something right away so the user understands that it's happening, that the request is working. A simple way to do this with Workbox is to use either a cache-first strategy or a stale-while-revalidate strategy. Personally I like stale-while-revalidate, because it gives you an opportunity to check for updates in the background, and then you can notify the user if there's new content.
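A sketch of what that could look like inside a service worker, assuming the Workbox v3-era API that was current at the time of this talk (the version string is illustrative):

```js
// A sketch, not the speakers' exact code: respond to navigations from the
// cache with stale-while-revalidate, updating the cached copy in the background.
importScripts('https://storage.googleapis.com/workbox-cdn/releases/3.6.3/workbox-sw.js');

workbox.routing.registerRoute(
  // Match navigation requests (full HTML pages).
  ({event}) => event.request.mode === 'navigate',
  // Serve from the cache immediately, revalidate from the network behind the scenes.
  workbox.strategies.staleWhileRevalidate({cacheName: 'pages'})
);
```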
In terms of performance, as you can see, responding from the cache is generally faster than not using a service worker, and that's even true in cases where the service worker is asleep or the cache is slow. The only situation where it's worse is when there's a cache miss and you have to go to the network anyway, but that's to be expected. So I know what many of you here are probably thinking: there's absolutely no way that I can respond to all navigation requests with cached content. I need my content to be up to date, I need it to be fresh. And this is true. Many apps simply cannot provide value with stale content; they have to have fresh content from the network. But that doesn't mean you need to fetch the entire HTML page from the network. My second tip is: when network content is truly required, fetch just the minimum amount of content you need, and then combine that content with other parts of the page which should already be in the cache. And even better is if you can deliver that combined content in the form of a streaming response to the user. Let me show you an example of what I mean by that. Most pages have a head section, a header, a navigation sidebar, a footer, and a lot of these sections of the page repeat on every single page throughout your site. The only thing that changes is often a single content area, like you can see here. So in these types of situations, clearly it's more efficient to just fetch the stuff that's changed, rather than fetching all of that stuff every time. If you've ever built a single-page application, you'll be familiar with this idea. Here's a comparison between traditional network responses and a streaming response that combines network content with cache content. So in the example on top, since the network request is for the full HTML page, you can see it takes a long time, and in the example on the bottom, the network request is for a smaller part of the HTML page, so it adds less to the total time it takes to make the request. And lastly, and this is the best part about streaming: since it is a stream, we don't have to wait until all the content is available to start sending something to the user. As soon as we have the cached content from the start of the page, the header section, we can send it to the user right away, and then we can just add to the stream once we get more network content in later. So that's what we're going to do right now. We're going to do a little bit of a quick tutorial. If you've never used streams before, you might be a little bit scared of the idea, you might think it's complicated, but actually with Workbox, it's really easy. So what you do is you just register a route like you would do with any Workbox strategy, and then you invoke the strategy function from the Workbox Streams package. In this case, I'm using cache-first for the header part and the footer part, and then I'm using network-first for the content so I can make sure that it's fresh. And that's really it. Workbox takes care of merging this content together in a stream, and if the browser doesn't support streams, it automatically falls back to a single text response.
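A sketch of that streaming-partials route, again assuming the Workbox v3-era API and that workbox-sw.js is already imported as in the earlier sketch. The /partials/... and /content/... URLs are hypothetical, and the talk's version uses full cache-first and network-first strategies for each piece; this simplifies them to plain cache and network lookups:

```js
// A sketch: stream cached header and footer partials around fresh content
// fetched from the network. Assumes the partials were precached at install.
workbox.routing.registerRoute(
  ({event}) => event.request.mode === 'navigate',
  workbox.streams.strategy([
    () => caches.match('/partials/header.html'),  // cached, sent immediately
    ({url}) => fetch(`/content${url.pathname}`),  // just the changing middle
    () => caches.match('/partials/footer.html'),  // cached
  ])
);
```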
Of course, the actual speed differences will depend on the size of the content area relative to the entire HTML page, and that will vary from site to site, but in general, streaming cached content with network content is one of the fastest ways to respond to navigation requests. And I say one of the fastest ways because there actually is one more way to respond to navigation requests, which allows you to make the navigation request in parallel with the service worker starting up, essentially eliminating that service worker boot-up cost that I mentioned before. So by now you've probably seen this chart many times; you understand that service worker boot-up time can delay the navigation request. With navigation preload, the browser starts the navigation request in parallel with spinning up the service worker, and it adds a Service-Worker-Navigation-Preload header to that preload request, which allows your server to respond to it as it would have had the request come from the service worker itself. So for example, if you're using the streaming partials strategy that I just discussed, you could respond to the navigation preload request the same as you would have if it had come from the service worker. Navigation preload is very easy to use. You'll want to feature detect it first, and then you can enable it at any point in the lifecycle really, but it's often best to do it in the activate event. And then once you've enabled it, fetch events for navigation requests will have access to a preloadResponse property, which you can then use however you want (there's a sketch of this at the end of this segment). And looking at the performance of this technique, you can see the response no longer has to wait for the service worker to boot up. One last thing I want to say about navigation preload is that you really only want to use it in cases where you know you're going to have to make a network request on navigations. If you can use just a cache-first strategy to respond to navigations, then that ends up being faster, and navigation preload would just waste a network request. So in general, only use it if you know that you have to use content from the network. So to summarize this: you can easily overcome these costs, and you can end up with an even faster loading site than what you could have had without service worker. Okay. Now let's talk a bit about the caching part of the service worker. When we think about cache management, we usually want to achieve the following. We want to store the right resources at the right time, while controlling the overall size of our application. We definitely want to control that size because, as developers, we do have quite a bit of storage space on users' devices, but it's not unlimited, so we need to stay within it. And we also want our resources to be as fresh as possible, which means we need to have efficient updates. Now, when it comes to the right resources, it's good to understand what you actually want to put in the cache in the first place. When we talk about core scripts or basic styles, these should really get the highest priority in terms of caching. Then there are the non-critical assets. For example, images that are not visible straight away, or some big media files, or some additional widgets. We cache them on a best-effort basis. And finally, there's trash, which means stuff that should not be there in the first place: oversized images, dead code, unused scripts, and so on. Why do we have trash on our pages? Well, because we're humans and sometimes our projects are simply not perfect.
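Here's a sketch of navigation preload in a service worker, covering both the feature-detected enabling in activate and the use of preloadResponse in fetch:

```js
// A sketch: enable navigation preload (feature-detected), then prefer the
// preloaded response, which was started in parallel with worker boot-up.
self.addEventListener('activate', (event) => {
  event.waitUntil((async () => {
    if (self.registration.navigationPreload) {
      await self.registration.navigationPreload.enable();
    }
  })());
});

self.addEventListener('fetch', (event) => {
  if (event.request.mode === 'navigate') {
    event.respondWith((async () => {
      // preloadResponse resolves to undefined if preload isn't enabled.
      const preloaded = await event.preloadResponse;
      return preloaded || fetch(event.request);
    })());
  }
});
```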
This moment, when you're considering how to shape your caching strategy for service worker, is a great time to stop for a while and clear that trash out. Reviewing your app and understanding which resources are critical and which are not so critical will really help you later in designing and updating your cache in an efficient manner. Now, what about timing? Usually, we cache assets either in the install event of the service worker or at runtime. Caching in the install event is pre-caching, where the assets are packaged with your app. It's great for caching the critical content, since we know it's most probably going to be needed anyway. It's relatively easy to implement, and to manage, and to update, because you can just, you know, replace the whole package with the new version when needed, and also because the size of such a package is known beforehand. On the other hand, when using pre-caching, you need to make a lot of arbitrary decisions about what the user is going to need even before they start interacting with the page, which might lead to a situation where you cache many more resources than necessary. Also, as we mentioned before, it can cause network congestion and compete with the other network requests from the window. Runtime caching, on the other hand, is really great for non-critical assets, because you can draw conclusions from the user's input. For example, you can cache only images they've already accessed, or cache different parts of your app depending on the user's entry point. On the other hand, the problem with runtime caching is that you need to be really careful about updates, because assets might end up in the cache at different moments in time. For example, if resources depend on each other, like a script that depends on markup, you need to be really careful about versioning. Also, an important thing is that in this case, the cache will grow over time. The size is not known beforehand. As the user interacts with the app, the size will change. Here's an example. This is a very simple e-commerce app I built recently. If I cache pages in this app at runtime, at first the cache holds just the home page and its images. But later, when the user navigates to a new category, like the accessories page, that gets added to the cache and the overall size grows to 300 kilobytes. As the user interacts with my app, the overall size grows. If the app is really big, it might be hard to predict ahead of time how much space it will take on the user's device. This is why it's so important to control the size of your app throughout the process. So how do we do it? Can we do it programmatically somehow? You can. If you ever need to check your app's current storage usage, you can use the Storage Manager API, which has an estimate method that returns both the total quota as well as the current amount that's being used (there's a sketch of this below). It's really cool that we can estimate that, because the quota depends both on the user's device and on the amount of currently available space on the device. So you can't just throw assets into the cache and assume it will never fill up. You always need to have a plan for how to remove old or unnecessary assets. After all, you don't want a situation where you can't update some critical script because your cache is full of cat videos. Here are some things that can help you to stay below the quota. As we mentioned before, you can store partials of your pages instead of full HTML, to avoid duplication and save some space.
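A sketch of that StorageManager check:

```js
// A sketch: report how much of the origin's storage quota is in use.
async function logStorageUsage() {
  if (navigator.storage && navigator.storage.estimate) {
    const {usage, quota} = await navigator.storage.estimate();
    const percent = ((usage / quota) * 100).toFixed(1);
    console.log(`Using ${usage} of ${quota} bytes (${percent}%)`);
  }
}
```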
You can also separate your critical and non-critical assets into different caches with different names, so that you can evict the non-critical ones when needed without touching the rest. Finally, you can also put maximum size or maximum number of entries constraints on your caches, and this is something Workbox can help us with. With Workbox, you can easily manage the rules for how and when cache entries should expire with the cache expiration plugin. You can use it with any of the Workbox strategies. You can configure both a max number of entries per cache or a max age for each entry. You can also configure the plugin to purge the cache on quota errors, which is usually a good thing to do when you have a non-critical asset cache (there's a sketch of this at the end of this segment). Now a few words about updating the cache. The simplest solution is what I call the nuke approach, which means you clear all your caches and start fresh every time your service worker updates. It's very easy to implement, but it's not very efficient, nor very kind to your users. If you want to be more granular, you need to properly tag your assets so that you know which ones are compatible with each other. You can use content-based hashes in the file name of the given resource, or provide revision data on each asset, so that you can manage it in the service worker later on. This process can be very error prone when done manually, so fortunately we have tools like Workbox that can manage the update process for you. Workbox has an asset manifest that maps file URLs to their revision hashes, and that allows it to remove old assets and fetch new ones without having to touch any of the unchanged assets. It makes the upgrade process really efficient. Workbox also has both gulp and webpack plugins, as well as a CLI, so you can easily generate this asset manifest yourself. As you can see, Workbox makes a lot of things easier for us as developers, so that we can focus on what matters most, and that's the user. Users can really vary. They can be of different backgrounds. They can use our app at home or on the go. They can use more or less advanced devices, or have different access to data and connectivity. There are some things we can do to accommodate those differences. First of all, when working on performance, never assume the environment you work in is representative of your whole user base. For example, you should always throttle your network to 3G speed when testing, to get a more realistic feel for your app's performance. Secondly, keep in mind those underpowered devices with little storage, and really control the size of your app. Remember that the overall size of your app might grow over time if you use runtime caching, and plan accordingly. Also, sometimes there are actually explicit hints from the user that you can use in your decision making. For example, you can refrain from speculatively precaching future resources if Data Saver mode is turned on. When the user enables this feature in Chrome, the Save-Data header is sent with each request, so you can detect it and, for example, refrain from aggressively precaching a lot of future assets. Similarly, you can use the network conditions to adapt the behavior of your application. Finally, you can also consider scenarios where you give the user full control over the experience. For example, you provide a "save for later" button, where the user can explicitly opt in and decide to get something stored for future use.
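A sketch of the expiration plugin on a non-critical image cache. This uses the Workbox v4-beta class style mentioned later in the talk, since the quota-error purging option (purgeOnQuotaError, if memory serves) arrived there; the CDN version string is illustrative:

```js
// A sketch: cap an image cache at 50 entries / 30 days, and let Workbox
// purge this whole cache automatically if the origin hits its quota.
importScripts('https://storage.googleapis.com/workbox-cdn/releases/4.0.0-beta.0/workbox-sw.js');

workbox.routing.registerRoute(
  /\.(?:png|jpe?g|webp)$/,
  new workbox.strategies.CacheFirst({
    cacheName: 'images',
    plugins: [
      new workbox.expiration.Plugin({
        maxEntries: 50,
        maxAgeSeconds: 30 * 24 * 60 * 60,
        purgeOnQuotaError: true, // assumption: v4-beta option name
      }),
    ],
  })
);
```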
Putting the user first can really benefit the quality of your app, especially over a long-term development horizon. So, to wrap up: I know we've presented a lot of content today, and we don't expect you to remember everything. If you want to learn more about service worker best practices, we've launched a new section on Web.dev with content dedicated to building fast and resilient web applications with service worker. So definitely check that out. Also, we've just released a v4 beta of Workbox with lots of cool new features, and we'd love your feedback on GitHub before the public release. And finally, just a few key points we want to leave you with. First, definitely have a plan. You can't just assume that adding service worker to a site will magically make it faster, because without an optimization plan, it probably won't. Second, don't just reinvent the HTTP cache inside of your service worker, and don't just cache static resources. If that's all you're doing, you're not actually optimizing your site; you might even be making it slower. Third, remember that navigation requests are the most important requests to optimize, and you should always try to respond to them from the cache. Fourth, measure the real user performance of your implementations, and make future performance decisions based on data. Don't just guess. Fifth, control the size of your app and how much you store on the user's device. And last, but definitely not least, respect the user, respect their data and respect their preferences. So, thank you. If you have any questions, feel free to find either of us afterwards, or hit us up on Twitter and ask questions. That's it, yeah. Thank you. Oh, we figured out the bug. Where is the last one? It definitely had a bug. Yeah, so we were backstage. Now, the thing to know is that... We are the ones standing here. Yeah, but Surma is the one that built the bug in. So, Surma, do you want to come out and talk to the nice people? Hashtag Surma bug. Hi. I'm Surma. I'm a web developer, and I don't write tests. Good. Now, tell them what happened. So, in this entire thing, the entire Big Web Quiz is built with Firebase, the real-time database, and we have this entire collection of rounds and questions, and we have this array with the questions, like the MaxDB and the JakeDB. And then, in another property, we have the correct answers. And so, in the editor I wrote, which, like, writes the questions into the database, when you delete one of the questions, it only deletes the question and not the answer. So, you were just like... Different lengths, and of course, I don't check that anywhere. And so, now, they were, like, out of sync, and that's why everything was wrong. But, don't fear. We have your questions, and I corrected it. We can recalculate all the points. You already did that. You didn't lose out. It was... And the good news is, it's not really a Big Web Quiz unless something goes a little bit funny in the middle. That's part of the character. It's fixed now. It's all gonna... Right. So, it's time for lunch. Lunch is served at the forum space. So, go ahead, and then head to the forum space. If you have a dietary restriction, that's also labeled in the forum space. We will be back here at 2:30. Thank you. Here's what's new in DevTools. We've got a lot of new features. Open the Layers panel, and then rotate the perspective a bit so that it's easier to see the layers. There's now an overlay that shows your page's FPS performance in real time.
In terms of accessibility, the contrast ratio affects how readable text is. Here's a tip about a lesser-known DevTools feature. Let's try and clear things up. The User Timing specification. Okay, whoa, whoa, whoa, whoa. Slow down. What you just saw are Supercharged Microtips, and they're fast, but not quite that fast. They're about JavaScript and web development, in under two minutes. But on Supercharged, we also do live streams where I invite a guest and we go live on air, and you can chat to us while we build something together. They look like this. We're on the Chrome channel with Surma. Hi, I'm Surma. Oh, you want me to say my name? Joking aside, what are we talking about today? Web workers. Streaming service worker. We're going to be doing a little bit of WebGL. We're going to build a web component. A server-side rendered, flaky-network-resilient comment section with Firebase. We're going to return something here later. So let's see if this works. Yeah. This is what you can find on the Chrome Developers YouTube channel, and I will see you there. Hello. Everyone came back. Wow, you're all seated and everything. Wonderful. Excellent stuff. Good break? Yeah, good lunch? Yeah. There's sleep. Okay, fine. Did you notice anything? You weren't wearing that before? No. So you've changed? Yeah. Why? Look, dudes. He came in. Oh. Oh, I feel personally attacked. Sleep in everything. Oh, that's harsh. Look, I don't go to fashion. I wait for fashion to come to me. And how many times has fashion visited you? I'm still waiting. I'm still waiting. I'm doing the same with hair. Should we do a Big Web Quiz question? Yay. Get your phones out, or whatever it is you're playing with. This will wake you up, I'm sure. Oh, this question, this question... as the server wakes up. Google Fonts. So we need some audio coming through. We should have sound as well. Hopefully that will come back during the round. So, the question is: are these fonts to be found on the Google Fonts website? Yeah, how well do you know your fonts? Some of them are amazing. Yeah. Ready when you are? Let's take it away. Right, we should be seeing the first set of things. Here we are. Sans Francisco. I like that one. What have we got here? Lobster 2. I didn't realize there was a sequel. Remember, you've only got a couple of seconds per question here. Lobster 2, not very confident on that one. That's a 50-50 split. I Shot the Serif. I like that one. Yeah, that's very good. Times Old Roman. Oh. I often feel like font names are invented by the same people who invent the names for horses in races. Yes. Look, Mark, no Sans. Could it be? Right. Round is closed. Should we see how it went? Yes. We're thinking Oxygen's fake. How do I pronounce that third one? Sans Francisco. That's the second one. The third one. Yanone Kaffeesatz. You're making it up. No, Surma told me about this. He gave me the pronunciation lesson. I've probably done a very poor job of it, so I apologize to anybody who can speak German. I believe so. Otherwise, Surma was making it up too. Let's have a look. San Francisco. It should be a font. Why have we not got a font called that? Right. Unreal. Right. Let's have a look. Are these real ones? 66% of people. Closed Sans, you all thought it was real. It's fake. Open Sans. Open Sans, not closed. They're all open source. I Shot the Serif is fake, apparently. Is it? We had a lot of fun making that one up. I made that one up. Who came up with that one? Times Old Roman. Turns out no. That's a fake. Let's move on to the next one. Look, Mark, no Sans.
Quite even on the split here. Except for Look, Mark, no Sans. Which is fake. I was really proud of that one. Last set. Here we go. Lobster. That's a real font, is it? A Font of Knowledge. No. It's fake. Of course, fake. Excellent. Right. Are we going to look at the leaderboard? That is well remembered. Can we get the screen back up? We're going to do the leaderboard. Let's see where you're all at. Remember, you only show up if you opted in. Ah, yes. A leaderboard. Look how close that is. 54, 54. I did tell you it was close. Good to see GDEs representing as well. Excellent. OK. Should we get the next speakers on? We should. Here to talk about rendering performance, please welcome to the stage Jason Miller and Adam Argyle. Off you go. I hope you're feeling good and you're ready for some knowledge. I'm Adam Argyle. Jason Miller. We're here to talk to you about smooth moves. So, when we were thinking of what to talk about at CDS this year, we wanted to demonstrate some techniques for optimizing the performance of existing applications. We were trying to think of, like, an app that Google doesn't have, something not in our portfolio. It took us a while to come up with something, but I think we kind of hit the nail on the head. A chat app? They don't have a chat app. No. It's French. This is an app for cats. Sorry, I knew this slide would be really confusing. Yeah. So, it's a chat app. We've been working on an app that actually facilitates communication between cats. Yeah. Now, cats are working on this with humans, of sorts. And we figured that's sort of, like, an opportunity for an app that helps out with this, with the planning and whatnot. Yeah. So, this is what our app looks like. The design is sort of a mix of typical Material stuff, and it takes some inspiration from Android Messages, among others. We wanted the UI to be realistically complex and well-designed, so it would be challenging enough for the browser and we could measure our optimizations. And obviously, the text that's coming from the app, I don't know what's going on. I do, but I refuse to make a demo. So, next, we turned on the performance monitor, because it looked good on my machine. Let's measure it. It seems like everything's decent enough. Yeah. So, we saw a bit of jank, and some of the early user feedback we got was that the app felt slow. We applied a bunch of the techniques that Katie and Houssein talked about in their talk earlier today. And there was still something else wrong. So, we decided to load the app up on a Nexus 5 and record some video. It's really useful to have a device like this sitting around as you develop for the web. As developers, we often have the privilege of fast devices on fast connections, and that can kind of trick us into thinking that we're meeting our performance goals when we're actually not, out in the real world. So, watching the video, you can see it's far from smooth. Many types of jank are present. So, smooth. We hear it a lot when we're talking about user interfaces. It's true, and it can be difficult to describe, since it's sort of an invisible metric. If your app is smooth, no one really notices anything is wrong. So then, what does smooth really mean? We thought smooth could destructure into these attributes really well. So, let's break them down. Cool. Smoothness is something we believe happens when the user feels connected to the app that they're using, like they're directly controlling it.
So, when we feel connected to the interface and the application doesn't respond quickly to input, it breaks this connection and we feel distanced from the task at hand. It's best when it feels like magic paper. Right. Yeah, so when an application animates irregularly, it actually distracts us by breaking something called the illusion of motion. You can see an illustration of what I'm talking about up here on the right. The eye responds to light changes about 10 times per second. So, if we show the eye a sequence of 10 images per second, we can actually still see each individual frame. Yeah. So, the brain doesn't necessarily interpret that as motion. If we increase this to 20 frames per second, somewhere in and around the red orb there, we start to interpret this as motion, because we've exceeded two times the number of light changes we can detect. And this effect increases as we approach 60 frames per second, where we start to see some diminishing returns. So, basically, smoothness relates to the human perception of performance. It's a measure of the question: is this application keeping up with me? Does it feel like it's helping me get something done, or is it getting in my way? The way the user feels when interacting with software is really important. Because we've all been in situations like this. He was obviously trying to send a tweet. And it just wasn't working out. He probably should have used some newer hardware. Yeah. So, this is backed up by statistics, too. In a 2017 study, 46% of people who had an interruptive mobile experience said they wouldn't purchase from that brand again in the future. And in that same study, they found that 79% of people said they're more likely to revisit or share a mobile site that was easy to use. Another metric, and it's a good one: 100% of Adams say they doubt a thing's reliability if it hiccups while performing its task. Maybe you agree. Right. So, anytime we talk about the perception of performance, it's useful to frame things using the RAIL model. RAIL provides a set of goals that cover the four ways we all perceive performance and speed when using software. Those are response, which is reacting to events in 100 milliseconds; animation, which is producing frames in 16 milliseconds; idle, which is maximizing idle time; and load, which is loading in under one second. And these are based off of, like, a few years, a few decades actually, worth of research. So they're very unlikely to change. And you know what this slide looks like to me? Click, run, swipe, and load. Yeah, we could have used a different acronym there, I guess. Yeah, we'll get there. Cool. Yeah, so there's timings that we have to hit for all of these. And we believe responding to user input quickly is the first and arguably most important thing to do in your app for it to feel or appear smooth and snappy. Right. Even sticking to RAIL's goals, though, idle work can potentially delay input by up to 50 milliseconds, which we're going to hear about in a second, so we recommend responding to input in 50 milliseconds in order to have the combined time never exceed 100. That's why you'll see the difference between the goal and the guideline. Animation is a similar story. There's more than just our code that has to run in order to get a frame on the screen, so to account for a reasonable amount of browser rendering overhead, we recommend shooting for 10 milliseconds of your JavaScript time to produce that frame, leaving 6 left over for the browser. Browsers are so cool.
We're talking about 6 milliseconds, 10 milliseconds. Who could do anything in that amount of time? And the browser is doing crazy stuff. Right. So rad. So, in addition to deferring as much work as possible, it's also good to defer your work in 50 millisecond chunks. This is where that response guideline gets its number from. If you have 50 millisecond blocking idle tasks, there's a chance that any one of those could cause input to be queued for 50 milliseconds, which means if you need to respond in 100, you only have 50 milliseconds left. So that's where that metric comes from (there's a sketch of this at the end of this segment). And the load goal is based on research showing that we generally lose focus after about one second. We start to feel a bit disconnected from the activity we're doing, and then at 10 seconds we become frustrated. We go grab a sledgehammer and probably light something on fire. That's how that picture happened. So RAIL's guideline for load is kind of right in the middle of those two numbers. We set it at 5 seconds. And this is a good metric for something like a page load, where you're waiting for a decent amount of new content to come in, even a whole screen's worth of content. For little interactions though, like the send button in our chat app, it's actually best to shoot for that one second goal. There's only a small amount of content being changed on screen, so it's not as perceivable to the user. Yep. And with RAIL, we have a nice mental model for figuring out why the app feels slow. It helps us set a useful challenge, and you'll notice on each slide as we continue here, there'll be a little badge in the top right. It'll have an R, an A, an I, and an L, and it'll pertain to the RAIL model. So we're pretty confident that we can meet RAIL guidelines on a $1,500 laptop, or maybe some other very popular laptop. Yeah, that's a cushy laptop there. It kind of looks like it's on Jon Snow's bed or something like that. I've never seen Game of Thrones; I assume that's a Game of Thrones reference. So we can do it on the laptop, but can we meet our guidelines on a $100 phone? Doi, the web is awesome. Yes. So we want our apps to be smooth everywhere, and what we need to do to ensure that is to optimize for the lowest common hardware. So before we embark on a challenge like this, though, we need to be able to measure whether we're meeting those guidelines or not. Right. So all the tools that we have available to measure these things live in Chrome DevTools. DevTools! We have seen a bunch of them earlier today in Paul and Elizabeth's talk. They talked specifically about DevTools. We're actually going to show a couple of features in DevTools that are maybe a little bit less commonly used, some of them potentially even more commonly used. So let's get into them. The first tool is down in the drawer, which you can pull up using the Escape key. It's the Rendering panel. It lets you visualize important things that we're going to get into today, like paint flashing, layer borders, and a live FPS meter. Right. The second useful tool is up in the three-dot menu in the upper right. It's called Layers. It shows a real-time view of the layers that make up your page. And the next tool is back down in the drawer again. It's the graphs we showed earlier, called the Performance Monitor. And here you can monitor various aspects of your app's performance charted over time. Plus, it looks so good in the dark theme. Just love that. It's pretty useful to find hiccups. Right. Finally, there's Lighthouse.
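Here's a sketch of that 50-millisecond chunking idea using requestIdleCallback, whose deadline is capped at 50 milliseconds for exactly this reason; `tasks` is a hypothetical queue of small functions. Then, back to Lighthouse.

```js
// A sketch: drain a queue of small tasks during idle time, never running
// past the ~50ms deadline, so queued input waits at most ~50ms on us.
function processDuringIdle(tasks) {
  requestIdleCallback((deadline) => {
    while (deadline.timeRemaining() > 0 && tasks.length > 0) {
      tasks.shift()(); // run one small unit of work
    }
    if (tasks.length > 0) {
      processDuringIdle(tasks); // resume in a later idle period
    }
  });
}
```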
We've seen Lighthouse used a couple of times today for tracking load performance metrics. But you can actually find it useful for input response delay, too. Lighthouse doesn't track for very long after page load, but it does for a little bit. And this might be exactly the kind of heads-up you need in order to figure out that there are some optimizations left for you to do on your website. Cool. All right. It's business time. We're equipped with some tools and strategies on how to investigate performance. Let's dive in. Let's dive in. So we came up with three types of issues. We have efficient animations, reading and then writing, and then sort of a grab bag of things that we're calling lazy wins. So that's sort of a mix. So we'll start off with efficient animations. Animating things effectively in the browser really requires a somewhat deep understanding of how browsers render. Let's say we want to build a, you know, like the chat UI that we showed in the intro. Chat UI. Right. Chat. Sorry. Let's start off with HTML. Give it a bunch of fancy styling. And then arbitrary magic happens and you have pixels on a screen. That level of understanding is not enough for us to be able to optimize here. So let's dig into it. We start off with some HTML. This all gets parsed and turned into a tree structure that we call the DOM. We're familiar with the DOM. And this sort of preps us to the point where we're ready to render, which is a four-step process. The first step is style calculation, which is resolving things to their final values: resolving CSS custom properties, inherited values, et cetera. The browser does this by going through all the elements and figuring out which CSS properties apply to them. With those values calculated, we can figure out the positions of all those elements on the screen; that's layout. You see those represented as boxes up here on the right. The cost of this calculation step varies quite a bit. Some types of layouts are multi-pass, and some are not. And other types, like position absolute, are just static, so they only need one pass. That computed layout has enough information to break things up into pieces that we call paint layers. Here, in sort of the painting process, we walk through all the paint layers and we convert the layout information that we had into draw commands. And these sort of look like the 2D canvas API, if you've used that. So Chrome takes these, and it sends them over to its graphics library, which is called Skia. And those get rasterized and sent back as image representations. Wait a sec. Chrome has skiers? I thought they were snowboarders. Yeah, no, all skiers. OK. So with the paint layers rasterized, the last step is to composite them. This step takes all the rasterized pieces, which are essentially images at this point, and lays them out on top of each other in order to make things look like the final page. So what you get is the result on the right, which is what we wanted. This whole process applies any time you have page updates that are triggered by JavaScript manipulating the DOM. So let's say we want to update the left property of a div, the left position of a div. The browser is going to have to recalculate styles. It's going to have to redo the layout, because it has changed. Repaint the paint layers that changed, and then composite them together to form the resulting page. Yeah, a lot of work. A lot of work.
And then if we set the transform property, for some contrast: transform, opacity, and filter are all what's called compositable properties. Modifying these doesn't actually trigger layout, and can often bypass paint. So you go straight from style recalculation to compositing, maybe with a paint. Yeah, so a lot less work. These things get really important when you're animating. On every frame, we are now potentially computing the layout if you've set something like width or left. And this is a lot of work for the browser to do. This is going to be a low frame rate. Animating something like color, layout isn't required, because you're not changing the bounds or the position of the element. So on each frame, you're recalculating styles, painting, and compositing. You skip the layout step. A little better. This used to be a performance concern, but in modern graphics pipelines, it's actually pretty fast. Finally, animating compositable properties, like transform and opacity: all that work is handled by the compositor. So this is really good for things like responding to pan gestures, because it's important to keep that pipeline as short as possible. The one thing to keep in mind, though, is that compositing is not free. If you're on resource-constrained devices, you really need to watch your composited layer count. If it gets too high, you can hit memory limits. Yeah, then you pretty much have the jank permanently, yeah. So we had a send button in our application, and depending on whether it was enabled or disabled (whether there was text in the text field beside it), it would have a large or a small box shadow and change its color. So originally, we just did a CSS transition on the box shadow. Box shadows are paint-based animations, because they physically cannot affect layout. And that meant that on every frame, what we had to do was send a bunch of new draw commands over to Skia, like we saw in those demos. So here's an example of what that might look like in a three-frame animation. And you see it's pretty much repeating the same process each time with new variables. We tried switching this to a composited animation, and the way we did that was we duplicated the button using a pseudo-element: the foreground was the element, and the background was a pseudo-element with the same shape but a box shadow applied to it. And then all we have to do is change the opacity of the box shadow pseudo-element behind the button, and we can get any of the frames of the animation we want between zero and 100 using just composited animations. So any frame can be constructed from just those two paints. We do not need to repaint. You can see this in action too. So looking at DevTools... I love this GIF. It's so cool. Yeah, we've got a paint-based animation on the top. That's just box shadow. The center one is animating the opacity of a shadow element, and then the bottom one is sort of the ultimate: animating the transform scale of the shadow element. Hey, why does the middle one paint at the end and beginning? Yeah, so it paints at the beginning, paints at the end. This is when it's being promoted to and then demoted from a composited layer. That was so quick. It was barely promoted. Right. That's a bummer. So we did this experiment. The profiler told us that this was a good idea. Obviously, there's a lot less main thread work going on in the second two animations. These are in order.
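A sketch of the cross-fade idea in code. Pseudo-elements can't be targeted from the Web Animations API, so this uses a hypothetical real .shadow child element in place of the pseudo-element the speakers describe:

```js
// A sketch: instead of transitioning box-shadow (which repaints every frame),
// cross-fade the opacity of a pre-painted shadow element behind the button.
// Opacity is compositable, so the animation can skip layout and paint.
const shadow = document.querySelector('.send-button .shadow');

function setRaised(raised) {
  shadow.animate(
    {opacity: raised ? [0, 1] : [1, 0]},
    {duration: 200, easing: 'ease-out', fill: 'forwards'}
  );
}
```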
And so this might be something we would want to do, except if we're targeting really low-end mobile devices, where they might not have the best GPUs. The reason why this is good is because GPUs are really good at transforming composited layers. It's important to remember that this is not a silver bullet, though. You always want to make sure that you profile, and that your performance improvements on desktop don't become problems on mobile. Yeah, measure. Oh, and please, so we don't have context for this particular slide, but please don't animate max-height. This is something that feels really easy and really good when you do it, but it can be really, really brutal down the line for performance. I also don't think that's, like, what people even wanted to do. It's like, you didn't want to take something that was squishy and all warped and make it unwarped. I think you really wanted to reveal the element, right? I think when we were doing our performance optimizations, we just switched our max-height animation to be a slide-in. Slide-in. We just used transform. Better anyway, no? Okay, so we talked about efficient animations, but there's another whole aspect of rendering that's really important if you want to hit that 60 frames per second target. And that's this idea of reading and then writing. To understand this, we need to take a trip down to the rendering assembly line. Yay. So ideally, when we look at an app in the Performance tab of DevTools, we see little chunks of this sequence. And these are the frames. And in an ideal situation, there's a lot of white space in between these, which means you have a lot of idle time and your app is, sort of, main-thread jank-free. This looks like Jake Archibald's socks. I think those were the color profile we went with. Yeah, so we peeked at the anatomy of a frame earlier. We have style recalculation, layout, paint, compositing. We didn't really dig into the script portion, that initial portion that leads off this whole thing. And as it turns out, it's really important. Not all scripts are created equal. So our interaction with the DOM affects every other step in the rendering process. And we can split this type of scripting into two groups: read and then write. On the left, you can see some examples of properties that are DOM layout reads. Some of these are pretty obvious, like offsetTop. Obviously, if you wanna know where the element is, you need to lay it out. Some of these are not so obvious, though. innerText is interesting: it's a string, kind of like textContent, but there are newlines in it based on layout. If you have paragraphs or elements with display block, the string has newlines in it. And it's impossible to return that string without calculating layout first. If you're using innerText in your app and you're not using it specifically for that feature, I would strongly recommend going with textContent. On the right is maybe a little bit more obvious. If you change CSS, layout, add elements, mutate the DOM tree structure, you're going to trigger layout. I wish it was as simple as, like, getters on the left and setters on the right. Yeah, yeah, yeah. So let's say we have a timeline like before, but this time there's two bits of script that are gonna run. The first sets the width of an element to 10 pixels. The second script, maybe it's a different script, asks how much space is left beside foo. How much space is left beside that div? And in order to do this, it's gonna use the offsetWidth property.
This is where things fall apart. We haven't had a chance to calculate style and layout yet. So we have to do that synchronously now, which is a forced synchronous layout. We have to block the main thread, do all that work, and then come back to the script that asked for that width. And the worst part is, that script actually did this to conditionally then set the width of foo, which invalidates the layout and the style calculations we did, which means that they have to happen again. I smell code smell. Right. Yeah, so this is a problem, and this is really common. You might be tempted to use requestAnimationFrame to fix this. That's actually often a good strategy. Let's see how that would work. So requestAnimationFrame callbacks fire just before the browser does all of its rendering work. And let's say we read some values synchronously in the script block on the left, and then enqueue an animation frame callback where we do our DOM writes. So you're doing read and then write using this rAF callback. For really well-behaved code, this is actually an awesome way to give the browser more control over, and insight into, your rendering code. Yeah, it's, like, polite. You're like, hey, whenever you're ready. Right. So if you pop open the profiler on an arbitrary website, you might find this technique in play where it's not actually panning out. And the reason for that is that deferring reads using requestAnimationFrame doesn't really fix this problem. It's still going to trigger forced synchronous layout inside of the rAF callback, which can sometimes be worse. That's tricky. Yeah. It's going to end up in the same situation we had before. So reading properties that require layout information before modifying layout is an important thing to keep in mind when interacting with the DOM. This brings us to our variety portion of the show, which we have called lazy wins. All right. And so earlier we were talking about smooth as described as connected, silky, and asynchronous. And I feel like this crowd is savvy about the topic of sync versus async and how to be polite to threads, or thread. But it's not often enough that we are considerate of our user's thread. They're often pressed for time and resources as well. So we should do our best to keep their thread free of locking. Our first version had synchronous interactions, just like the animation here. It's kind of painful to watch, right? You're like, hey, what? I hope it's done. So here's a demo of us sending a message. It's called rough moves. You'll like that. So here's how this one works: a new message or thought can't be started until the previous one is finished. And it's not too bad when you send one message, but when you're sending many, it's gonna be super disruptive. This user interaction implementation was easier for the engineers, because they could handle the flow synchronously. It's sort of, like, less work for me. But we redid it. And so here's the same thought process being typed out. And these messages are being sent using asynchronous, also known as optimistic, interactions. And the user is never blocked. They're free to express themselves at the rate at which they desire and are capable. And this can increase the perceived performance of your application, because it doesn't appear to stall or think too hard about certain tasks. Measuring smoothness as a UX metric can be a difficult thing to quantify or qualify.
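A sketch of the read-then-write pattern from the rendering discussion above: batch all layout reads first, then all DOM writes, so nothing forces a synchronous layout.

```js
// A sketch: resize a set of elements without layout thrashing.
function resizeAll(elements, maxWidth) {
  // Phase 1: read. Querying offsetWidth can force layout at most once here.
  const widths = elements.map((el) => el.offsetWidth);

  // Phase 2: write. No reads after this point, so the browser can batch
  // these mutations and lay out once before the next paint.
  elements.forEach((el, i) => {
    el.style.width = `${Math.min(widths[i], maxWidth)}px`;
  });
}
```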
But luckily, on that measurement question, the experts at Google have devised a fantastic framework for evaluating the quality of your UX, and therefore its perceived performance. It is called HEART. And I wanted to throw this in there because it's sort of part of the lazy wins mix: it can help you measure the impact of your UI interaction paradigms. Cool. So one quick way to improve your application's performance is to just find a browser primitive that is more efficient than something manual you're doing today. A great example of this would be position sticky. The manual implementation of position sticky requires querying for layout information while you're scrolling, in the scroll handler. This is really expensive. It triggers all the bad performance problems we saw before. Another one would be to leverage native scrolling by swapping out your custom scroll smoothing implementation for scrollIntoView with behavior smooth. This is relatively new. This feature is awesome. It made building our chat UI really easy. The only thing was that we were firing it a lot. So we actually debounced it using requestIdleCallback, so we only scrollIntoView once the main thread has settled down and messages have stopped being sent. That was a really slick one, too, because on an old device, we had a test where we hammered messages in there. We sent, like, 30 of them over the span of, like, three seconds. And an older device was able to not be overwhelmed, because we were nice about asking when to smooth scroll. So the messages would pour in, and then it would smooth-scroll down to the bottom. It'll wait. It'll wait. And on a nice machine, they just poured right in, because it was powerful enough. It was easy. OK, so just like scrolling, panning is a gesture where we expect to feel like we're physically pushing content around on the screen. We want to have that connection between our thumb and the pixels. In a well-built custom panning implementation, touch input received by the browser is sent to the page, and then the JavaScript input handling code responds by updating, probably, the transform property on an element. And then that is rendered by sending it back to the compositor, which results in pixels on the screen. So this actually works pretty well, but we can do better. In optimal browser scrolling, we don't have to wait for JavaScript event handlers to complete. We can just directly translate that layer on the compositor. Events will eventually get sent over to the page, but in an ideal setup where passive event listeners are being used, they can be fire-and-forget. And this is going to get you your shortest time to pixels. So here's an example of that manual kind of panning, the first kind. It's a simple carousel. These are my pets, by the way; my wife required that I said that. So it's using pointer events, and it sets a transform property on the layer. There are some problems with this, though. We're getting composited animation, which is good. But the implementation is non-trivial, especially if you want to integrate really well with the browser's own page scrolling. If you're panning and the person's thumb starts to move more up than left or right, you really need to cancel that animation. You get that uncanny scrolly valley. Yeah. So another option here would be to use something called scroll snap. This is really elegant. Basically, instead of all the manual implementation, you just put two CSS properties on that carousel. The first is scroll-snap-type.
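First, a sketch of that debounced smooth scroll using requestIdleCallback, with a hypothetical messageList element; the scroll-snap setup continues right after.

```js
// A sketch: instead of smooth-scrolling on every incoming message, wait for
// an idle moment so a burst of messages triggers one smooth scroll.
let scrollScheduled = false;

function scrollToLatest(messageList) {
  if (scrollScheduled) return;
  scrollScheduled = true;
  requestIdleCallback(() => {
    scrollScheduled = false;
    messageList.lastElementChild.scrollIntoView({behavior: 'smooth'});
  });
}
```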
So cool. And what this looks like is this. It requires a lot less code to implement. It's still composited, but this one doesn't rely on the main thread at all. And it feels natural on each platform, because the easing, rather than being manually implemented, is just the platform's native scroll. Yeah, you get to bounce at the end for free. You get to fling it. I mean... hold on. So to sum things up: make sure you're keeping the browser's rendering pipeline in mind when you're animating, and always try to avoid animating layout. Sequence your DOM access so you read properties that require layout information before mutating the DOM and causing layout. All right, and consider using optimistic interactions, because they can sometimes be better than optimizing rendering, especially if you're just getting started. Right. And finally, take advantage of places where you can offload the most performance-critical work and rendering to the browser. If you can leverage something like scroll snapping or position: sticky to implement your designs, that might be the biggest performance win of all. It's the lazy way to win. Right. Cool. So thanks, everybody. Thank you.

It's really hard to introduce two people who've been on stage all day, and they give me two minutes to do an intro. So I'm gonna say: who's gonna break first, you or me? It's me. Please welcome Jake and Mariko. We're just gonna keep walking. Bye. Thank you. Don't go. I need a partner for giving this talk. Oh, it's nice to be back on stage, isn't it? It's been a while. Yep, yep. Right, so let's go back to February 1993, on the www-talk mailing list. This man, Marc Andreessen, he's a creator of Mosaic, one of the earliest web browsers. He wrote this: I'd like to propose a new optional HTML tag, image. And this man replied. So Tim Berners-Lee, he wrote back suggesting something more general, an SGML-style ICON entity declared with a SYSTEM URL. And Marc replied: quick, Tim, look over there. And two months later, Mosaic shipped with the image tag. It's really fair to say that the image has been pretty popular on the web for the past 25 years, right? In fact, according to HTTP Archive, the average web page loads about 650 kilobytes of images. That means images are 52% of page content. Yeah, and images are less blocking compared to things like CSS and scripts, but they can take bandwidth away from more important things. And of course, if the image is your primary content, loading it quickly is very important. Yeah. Unfortunately, websites throw those bytes away by using really inefficient encoders, like the one that comes with Photoshop. There are better options as well. So this is MozJPEG from Mozilla, but there is also WebP, which is probably supported by the majority of your visitors. As you can see, WebP has beaten MozJPEG, but both of them do really, really well compared to Photoshop's JPEG encoder. Yeah, but like, except like, you can't really see it, you know? You can see some text and numbers, and these command-line tools are great for batch processing and automating your build process and everything. But when you are trying to build a site and trying to choose something, it's really not a good experience to change the options blind, because you can't really see the effect on the image while you're doing it. So, like, a lot of the innovation in these compressors is hidden behind these text interfaces.
Yeah, and it winds me up every time I see a new image format come out, because you get these really bold claims and a few sample images, and I'm left feeling: I don't believe you. I don't believe you unless I can try it in an easy way, using my own images, not cherry-picked examples. So we thought, let's make image compression easy, using the world's most accessible GUI: the web. Yeah, so that's what we did. I'm really excited for this slide because I get to do this, like, announcement-of-the-product thing. Yeah. Well, here we go. Today, we are excited to announce a new Progressive Web App: Squoosh. Yay, okay, fair enough. Did you like that animation? It was a beautiful animation, well done. Okay, so now the most terrifying part of any presentation: we are going to give a live demo a go. And you know, with the Big Web Quiz failing earlier, luck isn't on our side. Actually, with the Big Web Quiz stuff, I had noticed some of the answers changing from underneath me as I was correcting them and editing them yesterday. And when that happened, I kind of thought, well, what could have happened here? Could Surma have coded it wrong? Or am I so jet-lagged I don't really know what's going on? And I went for that one. I just didn't believe Surma could have done it wrong. Okay, are you ready to go? Yep, okay, okay. Oh, it's gonna be fine. So okay, this is Squoosh, this is squoosh.app. So all we need now is an image to compress. And there are some example ones along the bottom here, but let's pick a real-world example. We're gonna go to the Google I/O website. This is the one from this year. Now, that header image there is a massive 1.7 megabytes. And that is a large portion of the page weight here. So I'm gonna save that image and we're gonna drag it into Squoosh. Squoosh supports the file picker. It supports copy and paste. In this case, we're just gonna use drag and drop. There we go. So on the left-hand side here, we've got the original 1.7 megabyte image. And on the right-hand side, we've got MozJPEG. As you can see, straight away, MozJPEG has taken it from 1.7 megabytes to 500K. That's huge. And to my eyes, I can't really see a difference between the two. If we zoom right in on the folks on the screen here, we can start to see the difference. We drag from side to side. There you go, slight difference. But it's not gonna be displayed like this on the website. It's displayed zoomed out, and you can't see a difference. All of this interface works with touch events and stuff as well. You can touch the screen and do it. So it works great on a Chromebook. I was paid to say that. But it is nonetheless true. But because it's displayed like this, this big image optimized for high-resolution screens, we can tweak the settings based on that. So we're gonna bring the quality down to 44. Obviously a number that we rehearsed. And that's just because it's gonna be resized down like that. And we can't really go lower than 44, because it starts picking up banding issues, especially around the sky a bit there. But we've really dropped the file size. It's almost another 200K gone. Now, lossy formats work by removing data that humans aren't very good at seeing anyway. So for instance, we're more sensitive to changes in brightness than changes in color. So these lossy formats, they'll store color data at a lower resolution than the brightness data. But again, since this is aimed at high-density screens, we can go even lower.
So we're gonna dive into these advanced settings here. We try to expose everything that these codecs can do. So there are some kind of scary things here, like trellis, multi-pass; what does that do? Well, I have absolutely no idea, but I can check this. I can see the impact of it. It's really cheap for me to try it; I'm not committing to anything. Yeah, the great thing about having a UI is that you can see it right here, right? And you can figure it out as you go. Yeah, you can start to learn. And if it's a command-line tool, you can't really see until the thing finishes, right? Absolutely. And one of the options we are gonna look at is the auto subsample chroma. This is the thing that's changing the resolution of the color data. By default, it halves it, but we're gonna quarter it. And that's like another 20K gone as well. So we've now reduced it by like 83% of the original that is on the I/O website. If we zoom right in, it's looking like a bit of a mess. You can really see the difference there. But again, zoomed out, which is how it's displayed on the site, it's really difficult to tell the difference. Mozilla have done an amazing job at getting the most out of an old image format like JPEG. But let's compare it to WebP. So on the left-hand side, we're gonna switch to WebP, and we'll drop the quality down to 20. WebP is a much more modern image format. It's based on the VP8 video codec. It's supported in Chrome. It's coming to Edge, Edge 18, Firefox 65. But straight off now, we're gonna be sort of round about, there we go, about another 100K less than the JPEG; like I said, a tenth of the size of the original. If we zoom in to those people on the screen again, we can start to see the difference between the two codecs. JPEG goes for a sort of very blocky look, whereas WebP loses details through smoothing. In fact, the WebP at this size kind of looks in some ways worse than the JPEG. I don't know, it's subjective. But that's also because we've been able to drop the quality of the WebP a lot more, because it doesn't suffer from those banding issues that we get with JPEG. And again, zoomed out, there's barely a difference. So now we've done that, we can download the two images and hopefully send them to the people who run the I/O site, because that's 1.4 megabytes saved for every visitor that supports WebP, 1.3 megabytes saved for people who just support JPEG. So that's lossy compression. But we support lossless compression as well. The Squoosh logo here, this is an SVG. Vector formats are great, right? Because they're maximum resolution at any device density, any size. But it's worth looking at how this comes out in a non-vector format. This looks a bit broken. This is because we're using JPEG, which doesn't support transparency. So we'll change to another format. We'll change to OptiPNG, which is an optimized PNG encoder. Once we do that, it's bigger. It's over twice the size, which is not great. But we can optimize for these kinds of codecs. So we're going to reduce the palette. We're going to bring it down to, I don't know, 70 colors. And we can turn off dithering, because we're happy with just a flat color there. And once we do that, we are now smaller with the PNG versus the SVG. It's just a few K. If we zoom right in, we can start to see, I don't know, very subtle differences. But zoomed out, it's not a big deal at all. But let's throw this at WebP's lossless mode.
So we're going to copy the settings from one side to the other so we can keep the reduced palette data, switch to WebP, and put it in lossless mode. So WebP's lossless codec is kind of a completely different codec. They should probably have called it something different, because it's much more similar to PNG. But now you can see we've cut even more off the file size. We're like 34% smaller than what we were with the PNG, and smaller than the SVG, even. But this is yet another cherry-picked demo. Like, we have scripted this to look good on stage, right? What I am saying to you right now is literally written on this screen, including that, and that, and that, and this. Excuse me? Yeah, but all we are saying is: just don't trust our demo. Just go to our app, throw your own image at it, and try it out, and see how many bytes you can shave off that image while keeping acceptable quality. So we can go back to the slides now.

OK, so that's the good parts. But Squoosh is still in beta. It isn't finished yet. There are a few rough edges, and we do want to be honest about that. For instance, right now, if you open it in Firefox, you get this. We're a little bit behind on our browser support. We ran out of time. We are hoping to fix this. Yeah, we had the Big Web Quiz to build and everything. Everything's built. We are joking, of course. Come on, right. Yeah, as well as being a useful tool, which we hope you find useful too, we wanted to practice what we preach. So we wanted Squoosh to be a great PWA. So Squoosh works in the latest version of all modern browsers. Well, this is kind of... Only Googlers could get a round of applause for just making things work in the browser. It is kind of sad that we got applause for this. Yes, in fact, what you're seeing here is Firefox 63 opening a WebP of my stupid cats. And this is in Firefox 63, which does not support WebP. WebP support doesn't land until Firefox 65. Yeah, but we didn't want the key features to be missing. So we needed to make sure that Squoosh could read and create images in all modern browsers using those codecs, which brings us on to: let's talk about how we built Squoosh. Yeah, and this is an idea I had around about five years ago. It's something I'd been wanting to build, and I tried to build it five years ago. But browsers weren't quite smart enough; but more importantly, I wasn't quite smart enough. So Squoosh is a real team effort, with contributions from all of these talented engineers, and Paul Kinlan. The key thing that made... you got the joke in really well, thank you. The key thing that made Squoosh possible is WebAssembly, which has been supported in all modern browsers for over a year now. I know we have WebAssembly sessions later in this conference, but as a little primer: Wasm, or WebAssembly, is a compile target for the web. So you have your code written in C, Rust, Go, or whatever your choice. You would usually compile it to run on a particular OS as an app. But using WebAssembly, the technology, you can compile the same code and run it in the browser. So for Squoosh, the MozJPEG, the WebP, the OptiPNG, all of those codecs are written in C. So we used the tool called Emscripten to compile that C code into WebAssembly so that we can run it in the browser. Yeah, and this was really, really essential for the project, because browsers do actually ship with image encoders like JPEG and PNG, which you can access via the Canvas API like this. But they suck. They are really, really bad.
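For reference, the built-in Canvas API route being referred to looks roughly like this; a sketch that assumes img is an already-loaded image element:

```js
// Draw the image into a canvas, then ask the browser's own encoder for a JPEG.
const canvas = document.createElement('canvas');
canvas.width = img.width;
canvas.height = img.height;
canvas.getContext('2d').drawImage(img, 0, 0);

// The only knob the built-in encoder gives you is one quality number, 0 to 1.
canvas.toBlob((blob) => {
  // blob now holds the JPEG-encoded image.
}, 'image/jpeg', 0.75);
```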
We have made them available in Squoosh, so you can use them if you want. They are actually quite fast; that's the one thing they've got going for them. We thought, why not? But you only get this one quality option, and they're just rubbish. This is Chrome's JPEG encoder on the left compared to MozJPEG. And the Chrome version is just a lot rougher. It's horrible. It's actually way worse in Firefox. The JPEG encoder in Mozilla's browser is terrible compared to Mozilla's JPEG encoder. Yeah, we see the same thing for PNG, too. So that Squoosh logo that we looked at in the demo: if you use a browser's native compressor, those are the numbers you get. Firefox's is significantly better than the other browsers', but it's still not good compared to OptiPNG, which Squoosh uses. And this is for exactly the same image, pixel for pixel; just the compression, different numbers. Exactly. So Wasm let us bypass the browser encoders and use much better ones. It also let us bring in the WebP decoder, to use in browsers that don't have native WebP support. Now, I know almost nothing about C, so that part of the project was really down to this man. So please welcome to the stage, so good we've named him once already: it's Surma. So you made most of the WebAssembly stuff work. Yeah. How was it? It was good. Surma, ladies and gentlemen. Yeah, well, thankfully, Surma wrote an article about it, so we have some things to share. Yeah, compiling a codec to Wasm involved writing a little bit of C++, which includes the codec and some Emscripten stuff, and then we wrote a function. This is the function we want to expose to JavaScript. It takes our image data, and it takes it as a string, because that's how Emscripten represents binary data in C++. So on the JavaScript side, we're passing in a Uint8Array, and it ends up in C++ as a string. We also have our other arguments, width and height, and the codec settings; in this example, I'm just using quality. And then we call out to the actual encoder under the hood, passing it all the settings, getting the output, and then we return it. And this val and typed_memory_view stuff, this is an annotation to tell Emscripten to treat the return value as a Uint8Array back in JavaScript land. And then all we need to do is tell Emscripten which methods to expose. That's it. I'm a web developer; you completely lost me. Do you get any of this? No, I don't understand it either, but I copied the patterns from Surma's article, and hey, it just works. It's great. I was able to take those patterns and use them for different kinds of C projects as well, like text compressors and stuff. Yeah, it's pretty good. Pretty good. Yeah, so compiling that code with Emscripten gives you a JavaScript file and a Wasm binary. So now you can include that script on your page, which also loads the Wasm binary. And once all of that is ready, you can just call that encoding function as if you're calling JavaScript.
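A sketch of what calling that from the page might look like, assuming the module was built with Emscripten's MODULARIZE option; the initCodec and encode names are illustrative, not Squoosh's actual exports:

```js
import initCodec from './mozjpeg_enc.js'; // Emscripten-generated, loads the .wasm too

async function encodeToJpeg(imageData) {
  // The factory resolves once the Wasm binary is compiled and ready.
  const codec = await initCodec();

  // Call the exposed C++ function like any other JavaScript function.
  // Thanks to the typed_memory_view annotation, the result is a Uint8Array.
  return codec.encode(imageData.data, imageData.width, imageData.height, 75);
}
```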
Yeah, using Emscripten feels a little bit rough and ready, but the docs are really good, and Surma's article is good enough to get you started. Yeah, but in this case we are dealing with large C projects here, and those are really CPU-intensive. So that brings us to performance, which we were kind of worried about at the beginning of the project, because we knew that was gonna be a thing we'd need to take care of. Yeah, we're halfway through the talk, so we're actually finally starting to get to the point. Yeah, so we use Preact to orchestrate the DOM and Webpack to bundle it all together. We use Preact because it does what it does in 3K, which is kind of amazing. We also had Jason on the project; that's probably another reason, we wouldn't want to upset him. And we use Webpack because there isn't really anything else that does what Webpack does, certainly not in the way we wanted it done; we wanted a lot of control over it, so it was really only Webpack. Yeah, so as a result, our app is 400 kilobytes gzipped, all in all, everything. And this is well below the median from HTTP Archive, which is 1.5 megabytes. So we're quite happy about that. Yeah, we're winning, we're winning, we're happy with that. The vast majority of that size is the codecs and the processors, like the stuff we brought in from C land: 300 kilobytes of that stuff. And I'm pretty sure there's some waste and duplication here, but like I said, Emscripten's a little bit rough and ready, and code sharing between Wasm modules doesn't seem all that easy right now. It's something we'll look into, but we were able to limit that damage by lazy-loading these using workers. Yeah, as we've already seen, compressors, like, are CPU-intensive. So depending on settings, encoding can take like 30 seconds, to sometimes minutes. And when that thing is happening and occupying the CPU, we don't want the UI to be frozen; we want users to be pinch-zooming and moving around and testing, and possibly changing the options too. So. Yeah, workers give us this lazy-loading stuff for free, which is great, but the main benefit is concurrency. We create up to two workers, and both of them are just gonna be pointing at the same script: one for the left-hand side of the image and one for the right-hand side of the image. This means the left-hand side can be encoding a JPEG, and the right-hand side can be optimizing a PNG, and all the while the main thread stays responsive. And that was really all the concurrency that we needed. Yep, but it came with another benefit too. So let's say the left-hand-side worker is compressing an image, right? But what if the user changed a setting? The current job is out of date; we need to update it. But those compression encoders are written in a synchronous way, so there's no abort API, there's no way to kill it. However, we put it into a worker. So in this case, we just terminate the worker, spin up a new one, put the new job in, and then start it back up again. Yeah, and the implementation for this is pretty simple. So it's just like this, oh, oh, oh, go on, you click. This is a problem, right? We've got two slide clickers, and we needed to rehearse who was doing the changing for each slide. So what you saw there was us messing it up. Okay. The implementation is roughly like this: if busy, terminate and restart. But also, we only spin up one of these workers when we need it, like, just in time. So if you're doing something kind of like what we were doing in the demo, we had the original image on one side, JPEG on the other, we only spin up one worker, because we only have one side doing compression at a time. And also, if one of those workers is idle for 10 seconds, we kill it as well, and that's really just to be kind to the user's device, to bring the memory footprint down. Yeah. The worker is almost empty when it starts. The heavy stuff is imported when it's needed. This keeps the worker startup time down, and also you don't really need to download and parse and wait for the OptiPNG and MozJPEG stuff if the user is just encoding WebP.
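A rough sketch of that just-in-time, terminate-and-restart lifecycle; the names and the worker script path are illustrative:

```js
let codecWorker = null;
let busy = false;
let idleTimer;

function runJob(job) {
  if (busy && codecWorker) {
    // Synchronous Wasm encoders can't be aborted mid-run, so kill the whole
    // worker and start fresh with the up-to-date job.
    codecWorker.terminate();
    codecWorker = null;
  }

  // Spin the worker up just in time, only when there's actually a job for it.
  if (!codecWorker) codecWorker = new Worker('codec-worker.js');
  clearTimeout(idleTimer);
  busy = true;

  codecWorker.onmessage = (event) => {
    busy = false;
    // ...use event.data, then terminate after ~10s idle to keep memory down.
    idleTimer = setTimeout(() => {
      codecWorker.terminate();
      codecWorker = null;
    }, 10000);
  };
  codecWorker.postMessage(job);
}
```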
Yeah, and using import like this inside a worker is part of the platform. It's supported behind a flag in Chrome, but it isn't supported in stable or in other browsers. Thankfully, it's something that Webpack can just handle. It sort of polyfills it, and it just works. Now, the final piece of this puzzle is how we actually talk to the workers. Yeah, so request-and-response communication with workers is a little tricky. The way a worker works is that you send the worker some job, and at some point, it comes back with the results. But you need to associate those requests and responses. So in this code example, we just give each job a unique ID, and when the worker is done with that job, it gives back the ID. But you have to write your own logic for this. And then it's like, you know... It's rubbish. I really don't like it. It's one of the things that puts me off putting stuff onto another thread. You know, it's like, oh, I'm gonna have to do the postMessage thing. Yeah, another layer to worry about, right? Yeah, yeah, yeah, yeah. But again, Surma to the rescue. He built Comlink, a little library that you can get from npm. To use it, it's just like this: you create a worker as you usually would, and then you tell Comlink all of the things that you want to expose. These can be classes, values, but we were just using functions. And then over in your page, you just kind of hook Comlink up to the worker, and now you can call those functions just as if they're part of your page. Comlink takes care of all of the bookkeeping, the postMessage stuff, all of that mess.
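A minimal sketch of that pattern, using the expose and wrap exports from current versions of Comlink; the file names and the encodeJpeg function are illustrative, and a bundler is assumed to handle the worker-side imports, as Webpack did for them:

```js
// codec-worker.js: tell Comlink what to expose.
import { expose } from 'comlink';

expose({
  async encodeJpeg(data, width, height, quality) {
    const codec = await initCodec(); // the Emscripten factory from earlier
    return codec.encode(data, width, height, quality);
  },
});

// page.js: hook Comlink up to the worker, then call it like a local function.
import { wrap } from 'comlink';

const api = wrap(new Worker('codec-worker.js'));
const jpegBytes = await api.encodeJpeg(data, width, height, 75); // in an async context
```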
So once we'd split out those codecs, our app came down to 35 kilobytes. Yeah, but I want to make clear that the user will still have to download those 300 kilobytes if they want to use all of the codecs; they just don't need to load them up front. But 35 kilobytes: it still contains the pinch-zoom logic, the options panel, the fancy slider, this thing, that thing, this thing. Ooh, that thing too. And what these have in common is that none of them are needed here, on our first page. Yeah, imagine going to a restaurant and asking to see the menu, and 10 minutes later, you still haven't been given it. So you go and ask, like, excuse me, where's the menu? And they say, oh, well, we're busy preparing every single dish we do, and then we'll give you the menu, so when you pick something, bam, we will give you it straight away, it's ready. Yeah, you laugh, but much of the web is built this way, especially JavaScript-heavy apps. Yeah, because you might look at the menu and decide, oh, I don't want any of this stuff, or you might want something simple. You're certainly unlikely to want everything, so why should getting the menu and making a choice be blocked on them preparing all of their most complex, time-consuming dishes? Yeah, same for a JavaScript app that prepares everything up front before doing anything. The user really shouldn't have to wait for all of those things to download, parse, and execute. They just need to get to this page. And like most web apps, like most apps, Squoosh has a limited set of first interactions. For us, it's dropping an image onto the page, selecting one via the file picker, or selecting one of the demo images. So we split all of that out. This is roughly how things looked at the start. We had our intro, we had our compression UI. If no file had been selected, we showed the intro; otherwise we sent that file to the compression UI and displayed that. We changed this so the whole compression UI part was loaded asynchronously, and the toolchain actually made that a lot easier than I expected. Yeah, so first up, we removed the regular import, and instead dynamically imported it in the constructor and set it as state. Dynamic imports are supported natively in Chrome and Safari, and using Webpack provides similar functionality in all of the browsers. Yeah, so the compression UI is going to start downloading as soon as the intro loads, but there is a small chance that the user is going to be able to select a file before it's ready, in which case we just show a spinner. It's very unlikely that the user is going to see that, but we put that in just in case. Yeah, so remember, with the codecs split out, we'd chopped that 300 kilobytes off, but with the code splitting, we got that down to 15 kilobytes. And this is despite our app being JavaScript-driven and using a framework. And 15 kilobytes is even with an empty cache on the user's device. Exactly, and that means on a slow 3G connection, the user gets the first interaction in 3.3 seconds. And that's a slow connection on a slow phone, but it's not the worst conditions. Even on 2G, we are still interactive in less than five seconds. Now, I don't think many people in this room spend a lot of time on 2G, I know I don't, but this kind of performance means the app is really usable even in emerging markets. And this is why I'm a huge fan of these micro frameworks, like Preact, but also lit-html, HyperHTML, Svelte. Like, we couldn't have actually hit these numbers with React, because that's 30, 40 kilobytes out of the door. We couldn't have done it with Vue, because that's 20 kilobytes. Angular is 60. Ember is bigger still. But let's not get carried away. 15 kilobytes, eight of which is JavaScript, is actually a lot of JavaScript for these couple of interactions. Yeah, so for instance, let's look at how we might build this user interface with just vanilla JavaScript, right? So here's a minimal implementation of drag and drop, just a few lines. And then to go with that drag and drop, there's the file-selection code; that's not that much. And then this is for selecting one of the demos, clicking and loading the image. So all of that code, minified and gzipped, it's only 550 bytes. Oh, 350 bytes, whatever. I'll go with the number on the screen. Can't read the number; yes, 350 bytes. And that's why I get really, really grumpy when I see sites with, like, a 600K or more JavaScript bundle, and that's for the first interaction. I have friends who build native apps, and when they talk about their two-megabyte Android Instant apps, I love being able to go: Instant? Two megabytes? Great. 15K. With your smug face on. I think it's called a shit-eating grin, anyway. But then they can point to large parts of the web that have thrown this advantage away, parts of the web that have taken on the worst of native, but none of the benefits. Yeah, so if you take anything, anything, out of this talk: please, please, please study your website or your web app, find out what the first interaction is, and ship a reasonable amount of code for that first interaction. Now, a reasonable amount of code, how do we find out? We have a lot of tools. Yeah, just experiment and find out what is reasonable to you.
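The lazy-loading change described a moment ago looks roughly like this; a sketch assuming a Preact-style component, where './compression-ui', Intro, and Spinner are illustrative names:

```js
import { h, Component } from 'preact';

class App extends Component {
  constructor(props) {
    super(props);
    // Start fetching the compression UI as soon as the intro loads...
    import('./compression-ui').then((module) => {
      this.setState({ CompressionUI: module.default });
    });
  }

  render(props, state) {
    if (!props.file) return h(Intro, null);
    // ...and if a file is somehow picked before it arrives, show a spinner.
    return state.CompressionUI
      ? h(state.CompressionUI, { file: props.file })
      : h(Spinner, null);
  }
}
```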
So you also need to keep an eye on the bundles between builds. And with Webpack, right now, that unfortunately means going and looking at minified code, because that's where it does things like tree shaking and dead code removal; it's part of the minification process. Yeah, we found a few bugs because of that, right? Yeah, we're still not quite sure if it was a bug in Webpack or if our expectations were different to what Webpack was doing, but it was when we dived into the minified bundle that we found that stuff out. Alternatives like Rollup are much better here. They will show you the dead code removal during development, so it's easier to see when something's not going as you expect. But Rollup doesn't have the kind of holistic asset graph that we really needed for this project. Anyway, my point is, there are huge gains to be had by keeping an eye on this stuff. Yeah, so like we said, we use Preact to orchestrate the DOM, but we also use Web Components. When we were building this, we were talking to a lot of developers, like, yeah, we're building this cool app, and we're using Preact, and we're also using Web Components, and they were kind of surprised, because they were asking questions like, well, aren't Web Components an alternative to frameworks? Yeah, but no. I don't know why you would think that. Well, actually, I do know why you would think that, and I think it's our fault. Well, I think it's Google's fault. I think our messaging suggested that Web Components are the same thing as Polymer, and Polymer is an alternative to the frameworks. But they're not the same thing. Web Components are the lower-level primitive that Polymer uses. Yeah, we did use the custom elements polyfill for Edge, but for other browsers, we just ship vanilla Web Components. Yeah, we use custom elements for some of our leaf components that contain a minimal amount of state. For instance, this is the pinch-zoom element: just put stuff inside it, and now you can pinch-zoom it, and it has an API. We also did this with our side-by-side, slidey-thing comparison. We call it two-up. We call it a two-up, yeah, because we didn't know what else to call it. A fancy range input, a drop target: we made components for all these bits and pieces. Yeah, so we could have done all of these as Preact components, because we were using Preact. But using Web Components, we can take them and put them into some other project using a different framework, or just use them as-is in a project that doesn't need a framework. Yeah, and if you're open-source inclined, Web Components are the best way to share things like this. So we did. We have released a few of our Web Components as a separate project, and we'll make more available as we extract them from the main app. We were hoping to release more, but we ran out of time, whatever. We use Preact, but these things can be used in Vue, Svelte, Angular, React, whatever. Yeah, most frameworks are fine with custom elements, but some have a few rough edges. So if you're curious, check out custom-elements-everywhere.com to see how custom elements work with whatever framework you're using. We also released libraries for some of the other things we built, like a pointer-event helper thing and a Webpack plugin for dealing with asset inlining and optimisation.
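Using one of those published leaf components looks roughly like this; the pinch-zoom-element package name and the markup are an assumption for illustration, not necessarily the published API:

```js
import 'pinch-zoom-element'; // assumed package; registers a <pinch-zoom> element

// Just put stuff inside it, and now you can pinch-zoom it.
document.body.insertAdjacentHTML(
  'beforeend',
  `<pinch-zoom>
     <img src="cat.jpg" alt="A cat">
   </pinch-zoom>`
);
```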
So it has been... well, we're vastly running out of time, 20-odd minutes in, and I haven't yet mentioned service workers. A new record, at the very end, the very end. A new personal record. Yeah. Yes, the site does work offline. Our service worker approach isn't particularly novel. We cache the things, we serve the things; that means it works offline, and we also let the user know when there's an update available. Yeah, but there's one unique thing about Squoosh. That is, when the user first visits the site, we cache everything needed for the next interaction, but we do not cache any of the codecs, the 300-kilobyte bit, until the user actually drops an image and gets to this view. Yeah, and this was just being kind to the user's bandwidth. When the user just visits the site, it feels rude to download the whole 400K app, so we wait for some signal from the user that they're actually interested, and that's when we cache the chunkier parts. Yeah, and this is what we love about the web, right? Write it once, build it, and then put it onto phone, tablet, desktop, at a fraction of the size, yeah? Yeah, of the equivalent native app, yeah. And those native apps are only targeting, like, one operating system, whereas we can hit loads of them, and to share it, you just copy the URL and send it to someone. That's the best part of the web, yeah. So if you want to dig into the code, it's all on GitHub. There are lots of things we want to do, there are bugs to fix, codecs we want to include, and if you're interested, please come join us to build this up. We really want this tool to be really useful to everyone. Yes, absolutely. So thanks for listening. Now go squish some images. Please never say that again. Thank you very much. Thank you very much.

No, no, no, no, no, no, no, no, no, no. No, no. Come on. You tried to walk off, didn't you? Last time I was out here, I tried to just do this by myself. It did not work out well, so you can stay here and help me. Okay, okay. Well, my brain is fried, so... That will make this all the better, because we're going to do a Big Web Quiz question. Oh, excellent. Devices at the ready. You're ready for this one? I am. Give them a few seconds. Great, okay. Okay, let's announce the round. It's going to be... Ooh. A long time. We need to get Surma out again. Okay, here we go. Reserved ES6 keywords. Now, if you're not clear on reserved keywords, they are words that may or may not be used... Or may not. They may not be used. Reservedly may not. But they might be used in the future. And so they're reserved as part of the spec so that you can't just write them in your... Oh, I thought I... You may not use them as a variable name, that sort of thing. Exactly. Okay. So you get two seconds per answer. So you've got to go fairly quick here. All right. What are we having a look at? Yes, we've got a blank screen. That's fantastic. That's looking good. I'm very excited about this. Probably not what we wanted. If we switch on to the next thing. Oh, here we go. Look at that. Jake is in there. Excellent. People are quite confident on Jake. With transient... What else have we got? Arguments, float, package, synchronized. Let's see how this goes. Some of these are really... done. Really done. And we're out of time. Whoa, that flew by. That was quick. I didn't have time to read those. I apologize. But there you go. That's how this works. Okay. All right. So we're saying void is not a reserved word, but we're kind of saying the rest of them are. Should we reveal that? Let's see what happens there. Oh. Interesting. Double is not reserved. But it was reserved in ES2. Right. And it stopped being a reserved word in ES5. I love ES5. And the question was specifically ES6. That was very nerdy. Oh, people don't think Jake is a reserved keyword. I like this 4% who do, though. Thank you. Thank you. I would reserve... Oh, it's not. Okay.
But these other ones, like transient and native, that's the same story again: reserved in ES2, but not in ES5 and onwards. You're doing surprisingly well, given your brain is not really switched on at the moment. I mean, none of this is true. I'm just making stuff up. It is that. Arguments, float, package, and synchronized. Right. So mostly yeses on here. Shall we find out? Oh. Let's find out. Oh, okay. Is float in the same camp as double? Yes. Yes, they all are. All... apart from Jake, all of them are ES2 words that were no longer reserved in ES5. Well, that's very exciting. Oh, it's easy to remember. That's why I remember it. I wish I'd known that before we stepped out here. Well, that is us for now. We're going to pass you over to our next speaker, who is Francois. Oh, excellent. Round of applause.

You are tired. I can feel it, so I will cut to the chase. Media on the web matters. And I'm not just saying it because I deeply believe in it. Let me share some numbers with you. Almost 40,000 years of video are watched every day in Chrome. You may want to take some time to digest this. 30% of all time on Chrome for Android is spent watching video, and it's about 15% of all time on Chrome on desktop. It is huge. And I'm not the only one to have noticed it. Media PWAs across the world have seen business impact. Spotify globally and Gaana in India are both great successes. And it's just the beginning. In the next 20 minutes or so, Andy and I will work through four topics we think will help you build great and modern media experiences. We'll cover multitasking with Picture-in-Picture, bandwidth saving with the AV1 video codec, a brand-new way of switching codecs seamlessly, and finally, playback quality predictability with the Media Capabilities API. Let me tell you a little bit about myself first and how I work. I love to multitask on my computers, doing many things at the same time. Browsing, obviously, writing some code, sharing news on social media platforms, watching educational videos, and so on. This is what I do on a daily basis. And I'm quite sure I'm not the only one to do that. But you may wonder: Francois, this looks cool, but are you good at it? Does that make you more productive? This does not matter. I love to do that, and I want to be efficient at it. But it's not always easy. For instance, watching a video while I'm coding, how does that work? First, I have to open a separate YouTube window, move it to a corner of my screen, making sure other windows are not covering it, and only then can I start to enjoy. But can you imagine my frustration when window positions are not remembered, or when some new window opens in the middle of my little video? But that's okay. Because today, a brand-new web API called Picture-in-Picture solves that very specific use case. Picture-in-Picture, also known as PiP, is a common feature on televisions that allows users to watch video in a floating window, always on top of other windows, so that they can keep an eye on what they're watching while interacting with other sites or applications. The BBC website shipped Picture-in-Picture a month ago, and they are quite happy with the early results they got. Now, like I said earlier, I like to write code, so let me show you some code. To enter Picture-in-Picture on a video element, you simply have to call requestPictureInPicture on the video element. And because this call is asynchronous, it will return a promise that can either resolve or reject, and I'll explain why it can reject in a bit.
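A minimal sketch of that call, triggered from a user gesture as the next point requires; the selectors are illustrative:

```js
const video = document.querySelector('video');
const button = document.querySelector('#pip-button');

button.addEventListener('click', async () => {
  try {
    await video.requestPictureInPicture();
  } catch (error) {
    // Rejected: e.g. metadata not loaded yet, PiP unsupported, or not allowed.
    console.error('Could not enter Picture-in-Picture:', error);
  }
});
```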
The important thing to notice here is that the user has to interact with the page first to be able to enter Picture-in-Picture. In this example, I've used a button. Making this button a toggle button, sorry, is quite straightforward. By checking that the video element is not document.pictureInPictureElement, in other words not already in Picture-in-Picture, I'll proceed as before. If it is, let's call document.exitPictureInPicture. Requesting Picture-in-Picture may reject for several reasons, the most common ones being: the video metadata is not loaded yet, Picture-in-Picture is not supported by the platform, or it's simply not allowed by the user. The full list of reasons is available in the documentation. Updating your website when the video is playing in Picture-in-Picture is crucial. And you may think that waiting for the requestPictureInPicture promise to resolve is good enough, but it is not. What if the video enters Picture-in-Picture from another path, for instance? What if the user clicked the browser context menu, for instance, or the browser triggered Picture-in-Picture automatically, like Chrome does on full-screen video on Android? This is why I strongly recommend you update your layout in enterpictureinpicture and leavepictureinpicture event listeners. Now, having a 4K video playing in a small window may not be what you want. So to adjust the quality of the video, sorry, based on the Picture-in-Picture window size, you can simply check the width and height attributes of the Picture-in-Picture window available in the enterpictureinpicture event. Too many Picture-in-Pictures, I know. Adding a resize event listener to this object would also let you know when the user resizes the Picture-in-Picture window, so that you can update the video quality. By the way, Andy will walk you through later how to change, seamlessly, the video quality and the codec container to help with that. And finally, detecting the availability of your custom Picture-in-Picture button should be as easy as checking the boolean value of document.pictureInPictureEnabled. But it is not, because you want your website to be perfect. So you'd also have to check whether the HTML video attribute disablePictureInPicture is present. And finally, for real this time, you'd have to check that the video is actually ready to play. And only then, you'd get a perfect implementation for your custom Picture-in-Picture button in your media player. I'm glad to say that we have shipped the Picture-in-Picture API last month in Chrome 70 on Linux, Mac, and Windows; Chrome OS and Android are coming soon. And we're looking forward to seeing other browser vendors implement this API as well. You'll find all the documentation and samples at this URL. Now, what if I told you this is just the tip of the iceberg? In Chrome 71, currently beta, we'll support media stream video in Picture-in-Picture. These two little lines of JavaScript do what you think they do: your webcam video stream in a Picture-in-Picture window. And this already makes me happy. And in case you're wondering, those are real fake glasses. But wait, there's more. Soon, a brand-new web API called Screen Capture will allow a website in Chrome to capture a screen to a media stream, for recording or sharing over the network. This API will enable a number of web applications, including screen sharing. Imagine now if you combine this API with Picture-in-Picture. Let's have a look at this code.
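A sketch of the combination he's about to walk through, written as if inside an async function; the variable names are illustrative:

```js
// Capture the screen (which can include the PiP window)...
const screenStream = await navigator.mediaDevices.getDisplayMedia({ video: true });
// ...and the microphone for voice.
const micStream = await navigator.mediaDevices.getUserMedia({ audio: true });

// Merge them into a brand-new stream: screen video plus voice audio.
const stream = new MediaStream([
  ...screenStream.getVideoTracks(),
  ...micStream.getAudioTracks(),
]);
// `stream` can now feed a MediaRecorder, or go over the network via WebRTC.
```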
After getting the screen with getDisplayMedia and the voice audio stream with getUserMedia, I create from scratch a new media stream that contains the screen video, including my Picture-in-Picture window, and my voice as the audio stream. This code is simply gorgeous, in my opinion. This is it. There's nothing more. So let me show you what this code does, with a demo I've created just for you. Can we switch to the demo, please? So on the left is me, Francois, the hardcore gamer. On the right, it's Timmy, but this time as a casual viewer. And what you're going to see on the left is me sharing my screen, including the Picture-in-Picture window, while I'm playing the dino game. So this is me. Hi, mom. And let's see if I'm not bad. So, like I said, I'm a hardcore gamer. Okay, sorry, I lied. So keep in mind that the code you've just seen, with 10 more lines of JavaScript involving the MediaRecorder API and some WebSockets, is pretty much the entire code for this demo. Can we switch back to the slides, please? The demo is available for you on Glitch if you want to play with it later on. Now, some of you may have noticed that the Picture-in-Picture window contains only two buttons: a play/pause button and a close button. Those are built in there. We've talked to other browser vendors interested in Picture-in-Picture about this, and we're happy to share that the Media Session API we talked about last year at Chrome Dev Summit will be used in the near future to add and customize some actions on the Picture-in-Picture window. Think of seek backwards, seek forwards, previous track, and even new ones. If you are already using it for your mobile website, this will come for free. To illustrate these upcoming possibilities, imagine a web app that shows the poster image of a podcast show, for instance, in a Picture-in-Picture window, always on top, and uses those window media controls to tailor the media experience. I think that looks cool, and this is coming as well. So to summarize: Picture-in-Picture is great for multitasking. And in the near future, it may also be used to record your screen with your webcam, or even to create a custom media center, always on top of other windows, that the user can access easily. I think we all agree that Picture-in-Picture improves a lot the user experience in general. And you know what else improves it a lot? The video codec. And to talk about this, let me introduce Andy Chiang, a software engineer working on video compression.

Thank you, Francois. Hi, everyone. I'm Andy from Google's WebM team. Today, I'm going to share with you that a new-generation video codec, AV1, was launched recently. So we have three main goals for AV1. First off, we want AV1 to provide state-of-the-art compression efficiency among codecs. Secondly, we want AV1 to be accessible by everyone, so we made it an open-source project, and it's royalty-free. Finally, we want to deploy AV1 widely and quickly. So before I jump into what we did, or will do, to achieve these goals, let me explain a little bit why video compression is important for users. To visualize the importance of compression for a video service, let me give you an example. Using H.264, a five-minute HD compressed video will take about 300 megabytes. On the other hand, the uncompressed version will take about 25 gigabytes, 80 times larger than the compressed version. This means that without video compression, watching video online would eat up all your internet bandwidth, not to mention the skyrocketing cost of storage on the cloud. So more compression gives users a better user experience. We have seen the proof of this with VP9, the predecessor of AV1.
YouTube did a comprehensive A/B experiment when launching VP9, compared to H.264. Performance improved in a number of ways, ultimately resulting in higher watch time. This is due to VP9's superior coding efficiency. With AV1, we have done it again. The new-generation codec provides a 30% bitrate reduction over VP9 across a variety of visual qualities. This means that, at the same quality, AV1's video size will be 30% smaller than VP9's. This project integrated over 100 algorithmic tools, including technologies from open-source projects like Mozilla's Daala project and Cisco's Thor project. And again, AV1 is an open-source project, and it's royalty-free. Its development community, the Alliance for Open Media, has attracted 40 companies to join and to contribute their technologies to AV1. There are content providers, streaming service providers, and hardware companies, which cover a wide spectrum of the ecosystem for video on the web. Having Google, Apple, Microsoft, and Mozilla means we can get AV1 to work everywhere on the web platform. So we have been working hard toward this goal. This quarter, an AV1 decoder is supported in the Chrome browser, and we started serving AV1 video content from YouTube. Firefox and Edge also launched support on their beta platforms last month. And we plan to integrate AV1 with WebRTC and deploy AV1 onto the Android platform in the following years. Hardware support is also under development, first arriving in 2020 for TVs and mobile handsets. For web developers, using AV1 is easy. You simply need to call the isTypeSupported function to make sure the browser supports AV1, then you can start playing the video. So AV1 is very exciting, but rolling out AV1 is not easy. To ease the process of the codec transition, a new changeType function is supported by the browser, which allows you to switch from one codec to the other seamlessly. So, for example, when AV1 is not performing well on some low-end devices, the streaming service can always fall back to VP9, which is less complex and may have hardware support. Here is some simple code for using changeType. So initially, we add a SourceBuffer using VP9, and later on we can call the changeType function to switch to AV1.
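A sketch of that Media Source Extensions pattern; video is an existing video element, the segment variables are assumed to be fetched elsewhere, and the codec strings are illustrative:

```js
const mediaSource = new MediaSource();
video.src = URL.createObjectURL(mediaSource);

mediaSource.addEventListener('sourceopen', () => {
  // Start out appending VP9 segments...
  const sourceBuffer = mediaSource.addSourceBuffer('video/webm; codecs="vp9"');
  sourceBuffer.appendBuffer(vp9Segment); // assumed fetched elsewhere

  // ...and later, once the buffer is no longer updating, switch the same
  // buffer over to AV1 seamlessly and append AV1 segments from then on.
  sourceBuffer.addEventListener('updateend', () => {
    sourceBuffer.changeType('video/mp4; codecs="av01.0.05M.08"');
    sourceBuffer.appendBuffer(av1Segment); // assumed fetched elsewhere
  }, { once: true });
});
```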
Let me give you a demo of changeType. So we start playing the video using H.264; you can see that in the top-left corner. By clicking the button, we can switch to AV1. Yeah, now we are using AV1. By clicking the button again, now we switch back to H.264. You can't tell there's a transition, right? I think this is really cool. Let's go back to the slides. This is a support slide where you can find information about the AV1 decoder and changeType. Please welcome back on stage Francois, who's going to talk about playback predictability with the Media Capabilities API.

Thank you, thank you, thank you. Did you ever wish you could predict the future? I know I did. Be some kind of fortune teller. I personally would love to jump into my future, have a quick look, and just come back. Out of curiosity, I asked people around me what they would do with this kind of power. Some said they would love to know if they would become grandparents. Some would like to see if that project is going to be successful. And some, obviously, would simply look for the upcoming lottery numbers. In a sense, the Media Capabilities API allows you to predict the future, but only for media playback on the web. Until recently, web developers had to rely solely on web APIs such as isTypeSupported or canPlayType to discover whether media could be decoded. While this told them whether media could be played at all, it didn't provide an indication of whether the media playback would drop lots of frames or rapidly drain the device battery. In the absence of this signal, developers had to create their own heuristics, or just assume that if a device could play back a combination of codec and resolution, it could do so smoothly and power-efficiently. For users with less capable devices, this often led to very poor experiences. By using the Media Capabilities API today, you can get more information about the client's video decoding performance, and make an informed decision about which codec and resolution to deliver to the user. In other words, it helps ensure adaptive video streaming only selects resolutions that will play back smoothly on a specific device. Here's how it works in Chrome. The Media Capabilities API uses metrics from previous playbacks to predict whether future playback with the same codec and at the same resolution will be smoothly decoded. When you ask this API about a specific media configuration, it will asynchronously return three booleans. Is this configuration supported? This is the same result returned by isTypeSupported. You can use it, for instance, to detect whether the AV1 video codec is supported, by the way. Is playback going to be smooth? This is currently true if less than 10% of frames have previously been dropped for this media configuration. Is playback going to be power-efficient, or am I basically going to drain the battery? This is true if more than 50% of frames have been decoded by the hardware for this media configuration. Now, warning: this is not some kind of magic API that will tell you what to play. You are in control, and you have to make the decision about which media configuration to play, eventually, based on the results of this API. Speaking of results, YouTube experimented with the Media Capabilities API to prevent their adaptive algorithm from selecting resolutions that a device could not play back smoothly. For users who were part of the experiment group, the mean time between rebuffers, also known as MTBR, increased by 7.1%, while the average resolution served to that group, measured by video height, only declined by 0.5%. This result, as obscure as it may be to some of you, basically shows that some users saw the frustrating buffering animation on YouTube less frequently. So, just for that, thank you. The Media Capabilities API is available today in Chrome, Firefox, and Safari Tech Preview. As usual, you'll find all the documentation and samples you need at this URL.
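For reference, a sketch of that query, written as if inside an async function; the codec string and numbers are illustrative:

```js
const configuration = {
  type: 'media-source',
  video: {
    contentType: 'video/webm; codecs="vp9"',
    width: 1920,
    height: 1080,
    bitrate: 2500000, // bits per second
    framerate: 30,
  },
};

const info = await navigator.mediaCapabilities.decodingInfo(configuration);

// You stay in control: for example, only pick this resolution if it's
// supported and predicted to be smooth and power-efficient.
if (info.supported && info.smooth && info.powerEfficient) {
  // select the 1080p VP9 stream
}
```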
Now, if you remember one thing from this talk about those features, it is that media on the web matters, and the web platform is the best place to serve efficient and delightful experiences. One last thing: you can find all the audio and video updates in Chrome by simply searching for "Chrome media updates" on your favorite search engine. This will allow you to stay up to date with the amazing media features that Chrome is adding to the web platform every release. And with that, I humbly thank you for your time.

Right, it's time for a break. From the CSS4 spec, you can now create... when it's time to turn the corner more efficiently and with less jank. I'm pretty excited about it. It's been ten years since Chrome was first released, but of course, there's plenty more. I'm Pete LePage. I'll be right here to tell you what's new in Chrome. We were here for like 45 minutes. I'm bored of myself. It's time for the quiz. Yes, let's do another Big Web Quiz round. So, get out your laptops and phones. Get onto the website. I think this one is something that you're quite interested in. Quite passionate about. Yes, the language of my country: emojis. So, you are about to see emojis come up on the screen, and you are going to say whether each one is made of three or more Unicode code points. Yeah. Which we think might be an advantage to people with, like, older phones that might not see these things combined; they'll see them all separately. So it becomes a counting exercise for them. So, that's pretty good. So, what is the timing for this one? It is two seconds per question again. So, be quick. So, you need to decide whether it's more than three or not. More than three. Right. So, it does say that on the screen. I should have read that. Here we go. Right. So, we'll start seeing the votes coming in. Here we go. When the server wakes up. Here it is. There you go. Yeah. So, if you're not aware, these emojis are compound emojis, which means each one is composed of characters plus glue code points to create one emoji. So, if you have a really good eye, you can kind of count it and guess how many code points it would be. So, if I just count the people, do I win? Not exactly. Not exactly. Okay. Fair enough. Yeah. But that's the hint for you. Right. And the time is up. Gotta be quick. So, what are we thinking here? So, more than three for the top one? I mean, there are four people there, so according to the rules before... Yeah. And this third one here, this is one that Chrome is not displaying. So, that's interesting. I'm learning. So, what are the answers here? Ah. So, they're all more than three. Oh, except this one. Yes. The top three are compound emoji, and then the bottom one is one code point. So, how does this work, then? Let's look at the top one. Yeah. There's two ladies. Yes. And then two babies. Yes. And in between the four of them, there's a glue character called the zero-width joiner. Okay. Which brings it to seven. All right. Cool. Right. So, you get the idea. So, next one. Oh, okay. People got the idea; people are pretty confident. Right. Yes. The middle one with two men kissing: it's the two men and the kiss and heart emoji, I think, and then the glue code points in between. Huh. Excellent. But the two dancing ladies is less than two. What's so... Oh, yeah. So, this is one of the original set. So, this one's from 1999. It started off as the two people together, and that's one code point. Oh, okay. Fair enough. Next screen. What have we got going on here? Ah. So, we've got another one that's displayed separately. Yeah. It's already giving away the answer. Ah. Okay. Because on some screens this would display as the one emoji, the person with the haircut. Brilliant. Okay. Right. So, lesson learned. And those ones are more than three. Yeah. I think this is the last set. Ah. Ah. It's a tofu string. What does that mean? So, tofu is the square box; because it's a white square box, it's like tofu. So, when the system does not support the glyph, that's the glyph it displays as a default. Brilliant. Oh, I like it. We've got to the point in the quiz where everyone is just getting 50-50. No one cares anymore. Just press some buttons. It doesn't matter. Ah. I have so many things to say, but, you know, I already had my stage time, so... Yes. You've had... We've had half an hour here, so we should go somewhere else. So, welcome to the stage, to talk about e-commerce and the modern web: it's Cheney and Ramya. Round of applause.

Hey, everyone. My name is Cheney. And I'm Ramya.
We work on a technical consulting team at Google, and we get to see companies all around the world push the boundaries of the modern web. And while you might have heard some great sessions today talking about different modern web technologies, we are here to focus on the real-world challenges faced by e-commerce companies in implementing these technologies. So, since we're talking about the real world, a real-world metaphor: maintaining a website is a lot like maintaining a house. This right here is the house that I grew up in. It's probably not like the house that you grew up in, just like the website you work on is different from, say, that person's over there. But some things are the same. You have maybe a place you sleep. Maybe there's a dining room. When something breaks, like the AC, you might say, well, I can fix it. And your friend might say, no, you should just call somebody. Over time, once you've moved in, once you've moved to that new platform, it might look like there's room over here on the left. But technical debt and feature creep build up, because Black Friday and the holiday season are coming up, you're trying to get things in, and it gets kind of messy. And I think the house is a great metaphor, because an e-commerce site these days is not just about landing in one place and you're done. It's about the journey. You're going from room to room. You're going from page to page: from the home page to the search page to the checkout page. And if the room is like this, you get kind of confused. You don't want your user to feel confused and get lost in it. You need to upgrade your foundations. Obviously, to remodel, we've talked about a lot of different things throughout today and other talks. And like all tools in the toolbox, things like Progressive Web Apps, AMP, Google Sign-In, Google Pay are great tools. But just like I can't go to the store and buy a power drill that I saw a great commercial for and be like, I'm going to nail this remodel, I could very well destroy everything. So like all tools, just because they fit... they're kind of like puzzle pieces: the two pieces fit, but you might not actually be making progress towards the big picture. So what's the big picture? At Google, we like to say: focus on the user, and all else follows. And so we really think that the full picture you're building towards, whatever you choose to do in the e-commerce journey, is that you need to reduce user friction. Think about it: if you're trying to add service workers, if you're trying out some of the cool new APIs like WebXR and things like that, how is your solution actually improving the user's journey? Because there's no silver bullet here. To talk about this, I guess, Ramya, you had an example? Of course. Saatva is a company that sells luxury mattresses, and they wanted to ensure that their users have a very smooth and fast online shopping experience. They decided to use AMP as a technology to enable that. In the first iteration, they created an AMP version of their product details page, but the user actually had to click from the AMP page to the product details page, and then click again to get to the cart page. They realized that this was not the best user experience. So in their second iteration, they changed the design such that the user can see the full details of the product within the AMP page, and is able to click to add to cart, and even see the cart, from the same page.
This reduced friction for the user, going from three taps to two, all on the same page instead of across three pages as before. The results were astounding. Between the two iterations of AMP, they saw a 50% improvement in conversions. Thanks. So it's not just AMP. If you're creating things like a progressive web app, you might run into this too. So here's an example from Expedia. Expedia has been working on a brand-new front-end platform. It's four times faster on slow 3G on lower-end devices. It's really great. Their CEO even recently said in an earnings call that speed benefits deliver conversion benefits. Usually, we'd say: we're going to stop here, great case study. Instead, we're going to go into an interesting learning they had as part of this process. So Expedia, like many of the sites that you might work on, is a large e-commerce site. They're not just going to move from an old platform that's clearly working over to a new platform overnight. They need to do this in a piecemeal way, because maybe some teams are on the new platform and they need to socialize it to other teams. And in doing that, they actually learned an interesting lesson: users value consistency. Some users, during the migration process, landed on the old experience. And then, after clicking around, they got put into the new experience that's faster. And they might move back and forth a couple of times. That population, to no surprise, actually performed worse than if they had just stayed in one experience or the other. It's very much like if I was up here wearing one brown shoe: you probably couldn't take me quite as seriously as you're taking me right now. Hopefully that's fairly seriously. So I think users value user journeys that are consistent. So make sure your solutions, not only on the page you're building but across the entire journey and in how you're measuring, are actually reducing friction. Of course, what I'm saying means that if your journey needs to be consistent all the way from, say, the entry point on the search results page through to checkout, you're probably going to run into a lot of challenges. When we work with e-commerce partners, we run into three main challenges. Number one is something that many of you probably think about: organizational alignment. Then you get to the technical approach, and then finally measurement. So we'll talk about those today. Let's talk about the first challenge, organizational alignment. When you talk to people in e-commerce companies and ask them, "Do you think investing in performance projects to improve user experience will help improve your core business metrics?", the answer is an immediate yes. However, when it comes time to prioritize performance projects, there are always several hurdles. Lack of time, lack of resources, you name it. One of the key ingredients that's missing here is organizational alignment. Performance is not just an engineering priority. Why? Consider this. Your content creation team wants to generate great content for the users to consume. However, they might create images that are too heavy or not optimized, because they are not going through the image optimization layer. The marketing team spends serious money to bring traffic to the site and cares deeply about the user experience. However, they might also have tags that are old, redundant, or obsolete.
The design team cares very much about that end-to-end experience, but they might ship some really cool new features that do not use shared components. In each of these scenarios, one thing is clear: performance needs to be a priority for every team that affects the website and cares about user experience. Let's look at two approaches, top-down and bottom-up, for creating organizational alignment, and we'll do this using two partner examples: 1-800-Flowers, which is a global flower brand, and JustFly, which is a Canadian online travel agency. 1-800-Flowers believed that the modern marketing department is really marketing plus technology. This is a very big shift in mindset, and what they did was break out of organizational silos and create a cross-functional team comprising product, marketing, design, development, and analytics. This was led by the CMO of the company, and they were given a very clear shared goal, which was to drive growth for the company by investing in emerging product experiences. This allowed the cross-functional team to work really closely and invest in not just one or two, but almost the entire spectrum of modern web technologies, and as you can see, this yielded some great results. Seeing a 50% uplift in conversions does not come easy. Let's look at the other example, JustFly. Even if you don't have a top-down mandate, you can use data to prove your case and build consensus. JustFly did just that. They started off with bite-sized proofs of concept and experiments and built consensus amongst their management and cross-functional teams to invest in a progressive web app. They started off with the higher-level goal of improving the booking experience on their site, and they used data and experiments for just about everything, from testing different JS libraries to testing different checkout flows and designs, and they also had a few users install the add-to-home-screen icon to see how those users behaved differently from others. And with all of this data, they not only got the buy-in from product and marketing, but they also got the headcount they needed to build the PWA. And they merged the code bases of two of their brands so that they could launch both of them as PWAs in the same time frame that it would have taken for one. Now that's efficiency. And as you can see, with a 35% improvement in bookings, this is a great result for them. All in all, whether you use a top-down approach or a bottom-up approach, it's important to keep in mind that organizational alignment is key not just to prioritizing your performance projects, but also to gaining the right visibility, so that everybody who cares about user experience also cares about your project. Well, once you get the organizational buy-in, you still have to build the project. We're not here today to talk about all the different ways to build a great progressive web app or modern web experience. We'll share a little bit about the successful traits we see in good projects out there. One big pattern is that we see success in having a long-term vision and then having achievable goals towards that vision. To give an example: I was recently in Spain with my wife and we visited La Sagrada Família in Barcelona. I think it's a great example of this. It has been under construction for five generations, and yet it still adheres to the architect Gaudí's original vision.
Despite that, even though it's still under construction to this day, it opens its doors to 2.5 million tourists a year. It's actually very much like some of the e-commerce sites that you have. On Black Friday, not only are people going to check out, you're probably running, like, 2.5 million A/B tests through some platform. So I think it's really interesting to think about that. A lot of teams have that long-term vision, and they often tell me, "We're going to get time to interactive under five seconds." But oftentimes you find yourself saying, "We ran Lighthouse and we're really far away from it." And what happens is, well, we'll get there someday, and then there's nothing achievable to get there. It's just something you see off in the distance, on the horizon. Oftentimes, teams come back and say, well, we're just going to have short-term goals, and this is great. They have sprints and everything to get quick wins. Not that quick wins are bad, but oftentimes you get into a mode where there's something you can achieve before the holiday season and that's all you do; you go in circles, or you regress a little after it. It's really important that you have both a long-term vision and short-term goals, and that you make sure they are actually related: the short-term goals should be making meaningful progress towards the long-term vision. Let's make this a little bit more concrete. We all like to stand up on the stage with the other speakers and say that the vision is to improve speed. What does that mean? There are a lot of different ways to measure. Let's go back to what I was saying earlier, from a user perspective. Improving speed might be to focus on user navigations. When we say you want to optimize your cold page load, or maybe your return page load to see if you're using caching, or when you're comparing a single-page app to a multi-page app, a lot of the time we're talking about the latency between me clicking in the user interface and the response showing up in your UI. That's the speed in that regard. So to go back to the house really quickly, I think doors are a great example of what navigation is. An e-commerce site is like a house: you move from room to room. You might come in the back door, the front door, or something like that, and throughout the process, to complete the journey in the house, like to complete the journey on your website, you move from room to room. On Thanksgiving, when you're holding the hot turkey in the kitchen, do you want to mess around with the door where you have to find one key and then grab another key to get to the door in the back? It's probably not the door you want. Likewise, there are probably experiences in your e-commerce journey, like sorting, filtering, or searching, where you don't want the user to have to pay the cost of that locked door. There might be a good place for you to have a door on the right here. I'm not saying that it's always client-side routing or anything like that, but there are different techniques for making that as seamless as possible. So we'll jump into three examples in a minute. I'm going to talk about companies at different parts of this modern web journey and dive into how they're breaking down this process. Walmart is a great example.
An extremely large, leading e-commerce site, they recently set a North Star goal to reduce time to interactive by 70% on low-end devices such as the iPhone 6S, and this is an extremely aggressive goal, going back to what I was saying earlier. I like that they were able to set that goal and get engineering teams making meaningful progress towards it, and in the last couple of months they got 25% of the way there with improvements on some pages. But just because you make good progress from zero to 25% doesn't mean the same quick wins or the same process will take you from 25% to 70%. That work is still important, though, because with it they were able to show significant business improvement and use that to socialize the effort across their teams. Going from 25% to 70% might not be quite as simple. Take this... it's not really my kind of room, but let's say it is. I could organize this room by just taking some books out, reshuffling, and making sure they're all in the same order. But if I was going to take that strategy and do that for everything in this room, I don't know that I'd make meaningful progress and finish in time. If I'm going to clean up this room, I might actually say, well, look, this is how I would like one shelf to look, but let's take all the books away and move the shelves around to actually get a nice-looking place. Likewise, because Walmart had a North Star goal and made short-term progress, they're now able to start thinking about what other, larger shifts they can make to get to the goal they want. Is it building a progressive web app? Is it adding service workers in some meaningful way? I think we're really interested to see what they're going to work on in the next year. Speaking of service workers, eBay is actually using one of those service worker techniques to improve user navigation on their site. In one of the earlier talks, we talked about how you can use service workers to improve user navigation. There are lots of different ways; eBay is just testing one of them right now. They're doing predictive prefetching of listing pages. So for example, right now, if I go to eBay and say I'm looking for a camera for Christmas for my dad, and Dad, you didn't hear that, I don't think you're tuning in: if I see a camera in the search results, I have to wait until I click on the listing, and then the browser makes another HTML request and I'm waiting for that as a user. And depending on what connection I'm on, that could take a long time. Well, eBay said: if we can predict with high accuracy what you're likely to click on, say this camera right here, what if we postMessage to the service worker the particular file you'll need, and the service worker pulls it in the background while I'm still considering which camera to click on, so that when I do click through, it's already there on my phone and the navigation is as fast as possible? (A minimal sketch of this pattern follows below.) Of course, what if you don't have the service worker architecture in place yet? You're waiting on Q3 for that to be done. Or maybe some of the quick wins for initial page load aren't there and you're kind of stuck. Because you have a long-term vision of improving user navigation, you might run into a wall at some point on the short-term goals.
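Here is a minimal sketch of that eBay-style predictive prefetch, using the general page-to-service-worker postMessage pattern; the listing URL and cache name are hypothetical, and this is not eBay's actual code.

```js
// On the page: tell the service worker which listing the user will probably open.
navigator.serviceWorker.ready.then((registration) => {
  registration.active.postMessage({
    type: 'prefetch',
    url: '/listing/camera-123.html', // hypothetical predicted listing
  });
});

// In sw.js: fetch and cache the predicted page while the user is still deciding.
self.addEventListener('message', (event) => {
  if (event.data && event.data.type === 'prefetch') {
    event.waitUntil(
      caches.open('predicted-listings').then((cache) => cache.add(event.data.url))
    );
  }
});
```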
But because you have that setup, you can start thinking about different ways of getting to that long-term vision. And this is where Airbnb's thinking comes in. Airbnb said: sometimes rebooting the JavaScript application on every single page load is a cost we don't want to pay. So they looked at updating only certain parts of the page, say in the search results, so that only that region refreshes when the user clicks on something. In doing this, they measured the user-perceived speed of that load, and they saw that loading the search results page in that manner was a seven- to eight-fold improvement. So they started diverting more user traffic to take advantage of client-side routing, and now 88% of search results loads are leveraging that seven- to eight-fold speed improvement. Because at the end of the day, it's the user's journey that matters. Hopefully, this is a quick example of how you can keep an eye on a long-term vision, whatever that may be, set short-term goals, and realize that there are a lot of different ways, when you hit walls in those short-term goals, of getting to that long-term vision. Let's talk about the third challenge, measurement. Let me ask you something. Please raise your hands if you've ever worked on a project with great potential to improve user experience, but the plug got pulled at some point because you couldn't prove the impact. Anyone? Quite a few of us. And this is certainly not uncommon, right? Even if you have great organizational alignment and a good technical approach to your project, if you don't have the right measurement strategy, it could still fail. Because of this, even though this is the third challenge we are covering here today, it is one of the first things that I've seen successful companies address when thinking about their performance project. A good measurement strategy is one that is shared, automated, and actionable. What do I mean by that? It needs to have shared metrics. To Cheney's earlier point, when you have the North Star goal, you need specific metrics that map to that North Star goal and that are shared amongst the people who care about that user experience. Having these shared metrics is really crucial, and you have to define them in a way that people understand exactly what those metrics are and how they are calculated. The second piece here is automated tools. These tools are necessary not just to measure these metrics but also to monitor them over time, so that you can gain real insights from them. And once you have those tools, it's important to understand the actionable next steps. Now that I have this insight, what changes can I make in order to meet that North Star goal? Let's dive into shared metrics a little deeper, because it's often not as well understood. With shared metrics, we have four big milestones when we talk about a user journey: acquisition, engagement, conversion, and re-engagement. Let's focus on the conversion piece. Let's say you really want a seamless checkout experience for your users. One way to measure that is to think about the average time it takes for, let's say, 90% of your users to go from the cart page to the order completion page (a toy calculation follows below). Understanding from a user perspective how easy or difficult it is for them to convert when that intent is clear is a good way, potentially, of measuring whether you have a seamless checkout flow.
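As a toy illustration of that metric, here is one way to compute a 90th-percentile cart-to-order duration from a list of per-session timings, using the nearest-rank method; the sample numbers are invented.

```js
// Nearest-rank 90th percentile of cart -> order-complete durations (in ms).
function p90(durations) {
  const sorted = [...durations].sort((a, b) => a - b);
  return sorted[Math.ceil(0.9 * sorted.length) - 1];
}

const sessions = [800, 900, 1000, 1100, 1200, 1400, 1700, 2100, 2600, 5000];
console.log(p90(sessions)); // 2600: 90% of these sessions check out within 2.6s
```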
Another important point to consider is third parties. Third parties are an integral part of the user experience on your site, and therefore they should not be ignored when it comes to your measurement strategy either. One way you can hold your third parties accountable is to say that their server response time should be under a certain budget that you set for them. Speaking of budgets, you've been hearing the term performance budget throughout various sessions today. A performance budget is a framework that allows you to understand which changes represent progress and which changes represent regressions. It takes the set of shared metrics, gives each of those metrics a budget, and surfaces them in an automated tool so they're made actionable. Let's take a look at how three different e-commerce companies took this idea of a performance budget and implemented it so well that they have seen some amazing results. Are you ready? Okay. Cdiscount is a French leader in e-commerce that had a problem not unlike many of ours in e-commerce: they saw a big chunk of their traffic on mobile, but mobile sales were not at a similar ratio. They realized that this is not an easy problem to solve, right? So they broke organizational silos and empowered their marketing team to run deep analysis and come up with a set of shared best practices. They then implemented these best practices as budgets within their tooling, and using the integration between SpeedCurve and Slack, they ensured that when the budgets were not met, automatic alerts would be sent into a Slack channel shared amongst all the cross-functional teams. By collaborating on making all of these different changes, they saw some really, really good results. Their mobile website conversion rate was 50% higher than their native app, which has 4 to 4.5 stars in the app stores. This is something that all of us really can relate to, right? It's not easy to get there. Let's take another example. Carousell, a Singapore-based online marketplace, used the idea of a performance budget because they really wanted their users in Southeast Asian countries, where the predominant network speed is 3G, to have a great experience on their site. And so they started off with this performance budget. Note that the first line here is about the critical rendering path resource bundle being under 120KB, irrespective of whether that bundle includes first-party or third-party code, because they had a commitment that that bundle needs to stay really small. They were so intent on following this performance budget that they included it as part of their continuous integration process, such that code could not be merged until the budget was met (a minimal version of such a CI gate is sketched below). Doing that was not easy at all. They had to collaborate very, very closely with their design, marketing, and analytics teams, and that paid off very well, as you can see on this next slide. Not only are these metrics really good, but there's something here for every team that was part of this project. The product team cared about the page load time. The marketing team cared about organic traffic. The ads team cared about the click-through rates, and the transactions team cared about how many buyers and sellers could interact with each other to complete transactions. So these were great results all in all.
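A CI gate like Carousell's can start out very small. Here is a hedged sketch in Node of the simplest possible version, not their actual tooling; the bundle path and budget constant are assumptions.

```js
// ci-budget-check.js: fail the build when the critical bundle exceeds budget.
const fs = require('fs');

const BUDGET_BYTES = 120 * 1024;        // the agreed 120 KB critical-path budget
const BUNDLE_PATH = 'dist/critical.js'; // hypothetical bundle location

const { size } = fs.statSync(BUNDLE_PATH);
if (size > BUDGET_BYTES) {
  console.error(`Over budget: ${size} bytes > ${BUDGET_BYTES} bytes`);
  process.exit(1); // a non-zero exit code blocks the merge in CI
} else {
  console.log(`Within budget: ${size} of ${BUDGET_BYTES} bytes`);
}
```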
Let's look at a third example. You heard earlier today about Wayfair, the home goods company here in the US, which really wanted to improve user experience and make sure they sustained that improvement. When they first started off with the performance budget, which they said was one of the most crucial things they did, it did not take off much, because the budgets existed but were not very clear to everybody. While they made many changes towards optimizing their m-site, they said that one of the most important things they did was to socialize their budget. And the way they did that was to make it clear using performance indicators, as you see here, in colors. So this is how it worked. They had an acceptable budget with an acceptable variance of plus or minus 10%. If you were within that range, you were orange. If you exceeded the budget by more than 10%, you entered the red area. And if you were better than the budget by more than 10%, you'd hit your goal. Congratulations, you're green. They also had a competitor line, just to make things fun, marking where the market sat. And they created a whole set of budget pages, since budgets were set at a page-type level for every route on the site. So the moment you look at the mobile site within the company, you're able to see these indicators. You can see over here that once they hit the goal, the bar was raised, so to speak, and the budget tightened up. By the way, isn't this the kind of graph we all want to see, going from 2.3 seconds to 0.9 seconds? That's really awesome. This is their performance portal. As you can see, it has a clear set of shared metrics, not just the metrics but also the page-level indicators, and a call to action at the bottom for users to reach out to a Slack channel if they have questions about these metrics. Whether you're using the performance budget idea or the general idea of shared metrics, automated tools, and actionable next steps, thinking about your measurement strategy at the beginning of your project is crucial to the success of the project. Another thing to remember about performance budgets is that they need governance: someone to think about the metrics at a page or route level, and to maintain those budgets and metrics as moving targets as you go forward. They also need to be enforced in a way that brings everybody together to meet or beat those budgets. That makes it very, very successful. We would love to go deeper into some of these partner stories, but true to form, we might be slightly over our performance budget for time. So hopefully you learned something from today's talk. I think we want to focus on making sure your tools actually improve the user journey. And along the way, you might run into these three challenges: organizational alignment, technical approach, and measurement. This isn't anything new to us, and we hope some of these partner examples help you move a little further along on that journey. Which means the next time you work on a performance project, you may need to do something differently: build consensus and visibility for your project, cross those technical hurdles to reach the North Star goal, and not just prove the impact, but sustain the improvement that you worked so hard for.
When you have all of these considerations in place for your project, I'm sure that working on performance projects can be very exciting and rewarding, as many of these partners would attest. I'd like to say a special thanks to all of the partners that allowed us to share their stories with you today, and the many others who influenced this talk in some way or another. We would love to hear more from you about your specific challenges in adopting modern web technologies. Please come find us at the Ask Chrome section during the break. Thank you. Thanks, everybody. I think we know what's next. It's the big web quiz. Of course it is. One-trick ponies, the pair of us. Absolutely. The software was built. We're going to use it. Keep phoning it in. Okay. Not even joking. Yeah, we're doing this all tomorrow as well. Oh, yeah. If you're not enjoying it, sorry. That's how it's going to roll. We're enjoying it. Yeah, we're loving it. So this next question, I'm quite fond of this one. I'm really fond of this one as well. Is it undefined? I mean, undefined is not a function. I've seen that enough. Yeah. So what are the rules for this one? Is it undefined? Quite simple. Pretty simple. Again, a couple of seconds for each one. So guess quickly. We'll start having a look. Oh, here we are. window.window. document.anchors. Number.MAX_INTEGER. Oh, low confidence. Oh, it's just changing a lot. What's in the next group, Jake? Oh, sorry. I was trying to figure them out. I'd kind of fallen asleep. Ooh, I like that one. Undefined has a 50% confidence. That's interesting. That's going very well. What does that mean? createElementByTagName. Oh, I don't know. We must be close to finished. We are close to finished. I spent too long talking about that. Is undefined undefined? I don't know. That's so meta. Okay! What a split! Did you know, with the window.window thing, you can keep doing it! window.window.window.window.window.window.window.window.window.window. As long as you want to keep doing it, you can do it. Yeah, it's MAX_SAFE_INTEGER. Or there's MAX_VALUE or something like that as well. It's not MAX_INTEGER. MAX_VALUE and MAX_SAFE_INTEGER. Right, prototype contains. Sure. It's prototype includes. We do have some APIs that have contains, like classList and such. That was undefined, undefined. A lot of people think yes. I think yes. And they're right. Right, come on. Yeah, it is absolutely undefined. Yeah, no smooch. Awkward. Oh, yeah. And then the bottom one: there's a 0th entry, and you tried to get the first entry, so that didn't exist. It's kind of the cruelness of people having to guess so fast, right? console.log at the top there: it is undefined. Yeah, the reason it's undefined is because it's a function call, and the function returns undefined. Not even slightly sad about that one. Very smug. And self itself is the same as the window thing. Yeah. Stuff that Jake, no, he's made that up. It's undefined. It's a shame. I'd like that. Yeah. Right. Should we go away? We shall go away. And we will introduce our next speakers, who are Alberto Medina and Weston Ruter. Big round of applause. I'm Alberto. I'm a developer advocate on the content ecosystem team at Google. I'm super happy to be here with my teammate, Weston Ruter. We are going to talk about bringing the pillars of modern web development to the space of content management systems. But let's start with the end goal in mind.
Our goal as web creators is to maximize the joy that our users get when they go to the websites we build or the content that we create and publish on the web. And although it is true that user experience is a complex matter, there are certain elements that are commonly found in experiences that users love. Specifically, users love experiences that work. Number one, they love sites that load fast consistently and offer good runtime performance. They love sites where they feel safe when they entrust their data or do transactions, when they decide to do so. They like sites that feel accessible and that are integrated with the full capabilities of their devices. And they love sites that offer great content quality. The good news is that we can build experiences with these pillars today if we take advantage of the capabilities that the modern web has to offer. Doing this is what we call progressive web development. So in essence, progressive web development is developing user-first experiences using modern workflows and modern web APIs, following coding best practices and performance best practices, and taking advantage of the effective incentive mechanisms and validation structures that are available to us. Now, we can take advantage of these principles in different ways, depending on how we go about the creative process. How do we create things on the web? In general, the things that we create on the web fall into two categories. On one hand, we can build websites or web apps from scratch, in which case we have full control of the whole creative journey, from the build process to the functionality of the app to the look and feel. Or we can take advantage of some kind of content management system, or CMS for short: software platforms that add a layer of abstraction on top of the open web to offer capabilities to create, publish, and consume digital content on the web. If we follow the first route, we have full control, and that flexibility is very powerful. But it also requires a lot of expertise and resources. If we follow the second route, we can take advantage of the capabilities that the CMS has to offer. But then, at the end of the day, the experience that we can offer our users depends on the characteristics of the CMS that we choose. Now, it turns out that nowadays the majority of content on the web is created and consumed via some kind of content management system. Specifically, 54% of sites today are built on some kind of content management system, and we saw 11% year-over-year growth this year. These are staggering statistics. And there are other systems that do not think of themselves as CMSs but offer the same capabilities, making it very easy to create and publish content. So in reality, these numbers are even higher. This basically means that the CMS space is very large and very complex. CMSs come in many shapes and forms, and they vary along certain dimensions. For example, you can have static versus dynamic, open source versus closed source, special purpose versus general purpose. But although there are differences between them, they also share a lot of commonalities. And specifically, many of them face common challenges when it comes to integrating progressive web technologies into their platforms. Many of these systems are very large and complex. They have many interacting components. They have large code bases. They have legacy code, technical debt, and so on.
Some of them have also evolved on top of initial architectural choices that make it difficult for them to evolve and become progressive. They also often suffer from fragmentation, in terms of things like the quality of the components that make them up, the expertise of the developer community, or the types of users that are creating and consuming content on the platform. And often, and this is very important, CMSs lack effective incentives for developers to do the right thing. For example, exposing coding and performance metrics so that users of the CMS can choose solutions based on quality and not just on popularity. Now, we want to tackle these challenges in the CMS space. And in order to reason about how to do that, it is very useful to abstract our thinking about these systems in terms of the components that make them up, okay? A CMS, one way or another, has certain components that form a unique ecosystem. There is usually a platform core that offers the basic functionality of the system. There is usually some kind of extension mechanism that allows you to extend the core functionality; these extensions are usually called modules in some CMSs, plugins in others, or themes if they have to do with the look and feel of the system. There is usually a developer community in charge of evolving both the core and the extensions of the platform. And then there is the user base of the CMS: the people either creating content or consuming content on the platform. So when we talk about progressive content management systems, we are talking about bringing the principles of progressive web development to the components of each of these ecosystems, okay? For example, how do we bring coding and performance best practices to the developer ecosystem so that they can do the right thing? Can we bring tooling that makes it easy for them to do the right thing in the first place? Can we bring incentive mechanisms so that they are compelled to do the right thing? Now, Google is working with many CMSs to support them in their efforts as they integrate progressive technologies into their platforms. So in the next few slides, we are going to show you the results of a particular effort that my team has pursued in the context of a specific CMS. And then later on, we are gonna touch upon some of the great work done by other CMSs. Our team has been pursuing efforts to bring progressive web technologies to the WordPress ecosystem. This little fellow here is Wapuu, the mascot of the WordPress ecosystem. It's not a mouse, although it looks like a mouse. WordPress nowadays powers about a third of the web, so there are a lot of users creating and consuming content on this platform. Now, given the size and complexity of the WordPress ecosystem, if we address the challenges that it faces, we will not only have an impact on a very large swath of the web, but we will also be able to shed light on how to address these challenges in other CMSs. So our goals with these efforts are twofold. One is to tackle the challenges that the WordPress ecosystem faces, and the other is to gather the learnings we pick up along the way and bring them over to other CMSs in the form of objective guidance, tutorials, technology transfers, and so on. Now, the approach that we follow to make WordPress more progressive consists of two parts. The first part is leveraging the capabilities of Accelerated Mobile Pages, or AMP for short.
AMP basically provides a well-lit path to modern web development in WordPress. AMP is an open source web component library which provides, out of the box, a set of capabilities and optimizations that are essential to achieving awesome experiences on the web. For example, coding and performance best practices, so that you don't get things like unresponsive pages while content is loading, and good usability, so you don't have content shifting in front of your eyes when you are trying to engage with it. It also provides an incentive mechanism, for example the capability of getting near-instant loading when AMP content is loaded from the AMP cache. And it provides a validation framework that helps developers bring their sites to a high-performance state and keep them in that state as the site evolves. So if we enable AMP content creation in WordPress, that means WordPress developers can take advantage of all these capabilities without deviating from the normal content creation workflow that they follow on the platform. We have achieved this integration by means of an AMP plugin for WordPress. Right now, it is at version 1.0 Release Candidate 2, and the stable version will be released at the end of this month. Now, this plugin has a lot of capabilities and features, and unfortunately we don't have time today to go over them in detail, but I want to mention one specific, important one: a tool that assists WordPress developers as they build themes and plugins that are fully AMP compatible. The plugin comes with a compatibility tool that supports the development of AMP content in WordPress. In a nutshell, for any URL on a site, the plugin gives us all the information about any error that exists at that URL, in very minute detail. It even tells you the markup that is generating the error, and also the component of the WordPress site that generated the markup in the first place. Was it a theme? Was it a plugin? Was it WordPress core? So, if you are a WordPress developer, you can take advantage of this tool and all the other capabilities that the plugin provides to create AMP content on your website. If you are a developer on another CMS, the good news is that they are also implementing AMP, and we are gonna tell you more about this in a little bit. Now, the second part of our approach to bringing progressiveness to the WordPress ecosystem consists of integrating modern web capabilities and APIs into WordPress core. We want things like service workers, streaming, background sync, all the goodies that you have seen today, to be supported natively in the WordPress platform. The goal of the integration is a consistent and unified approach, so that all WordPress developers can take advantage of these capabilities as they develop functionality for core, themes, or plugins. Now, last year, our colleague Surma, whom you may have heard of and who, by the way, has a complete disregard for the impossible, decided to show that a PWA with WordPress is actually possible by building a WordPress theme that "does all the things," as he called it. He successfully proved his case and presented his work at CDS last year. But he also faced many challenges along the way, and he recognized that it would have been super great if the WordPress platform had provided him with the building blocks he needed to build his does-all-the-things theme in a much easier way.
So, in addition to our work on AMP for WordPress, our team focused this year on devising and implementing the integration of some of these building blocks into the WordPress core platform, okay? And we emphasized the requirement of doing that in a way that is seamlessly integrated with how WordPress works in the first place. So, I would like to call up my teammate, Weston, to tell us more about where we are in this effort. Gracias, Alberto. Yeah, Surma certainly did build a WordPress theme that did a lot of things. He had a service worker that was installed by the theme, and that service worker powered things like offline reading and background sync. It facilitated smooth page transitions as you navigate around the site. And while his proof of concept was compelling, it faced a lot of challenges, because WordPress didn't allow for doing the things that he wanted to do. But these challenges are not unique to WordPress. They are common to other CMSs as well. So, there were a few fundamental challenges that needed to be addressed. The first challenge is that there needs to be a central API in the CMS for generating a service worker. Because there wasn't one in WordPress, the theme itself had to generate the service worker. And this is a problem because, as with Highlander, there can be only one service worker at a time. Multiple service workers could be installed, but only one is active. And so a CMS like WordPress needs to be able to generate that service worker centrally. The next challenge is that writing service worker logic can be complex, and the CMS should provide abstractions as part of it that eliminate a lot of the boilerplate code that would otherwise be repeated over and over again. And the last challenge, as Alberto referred to, is that WordPress's extension system, like those of other CMSs, doesn't have sandboxing. And so a theme or plugin can do things that make it difficult to maintain advanced capabilities like the App Shell model. So, we implemented abstractions in WordPress to solve these challenges, and what we did is applicable to more than just WordPress. I'm going to stand right here. First of all, we integrated the service worker API into WordPress so that themes and plugins can each include their own service worker logic, to be installed by the single service worker that is generated by the CMS. But this is just the base requirement. There also need to be, as I said, abstractions to make it easier to generate the complex logic that is needed to power a service worker. So, our integration includes Workbox out of the box, so that, as you heard about earlier today, you can take advantage of all the complex logic for caching strategies and other advanced functionality like background sync that is part of Workbox, and you don't have to reinvent the wheel. And since WordPress developers have historically been PHP-first developers, we have included a PHP abstraction that allows you to write PHP instead of JavaScript, so that you can have a PHP call that gets translated into the equivalent JavaScript. To give you a sense of how this API is used in WordPress: in this first example, you can see a caching strategy being leveraged, stale-while-revalidate, to match any request made to the AMP project CDN, so that it will be served from the cache while being updated in the background (a JavaScript sketch of this follows below).
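For reference, here is roughly the JavaScript that such a PHP call stands in for, written against current Workbox module names; treat it as a sketch of the stale-while-revalidate route for the AMP CDN, not the plugin's actual generated output.

```js
import { registerRoute } from 'workbox-routing';
import { StaleWhileRevalidate } from 'workbox-strategies';

// Serve AMP CDN requests from the cache immediately, refreshing in the background.
registerRoute(
  ({ url }) => url.origin === 'https://cdn.ampproject.org',
  new StaleWhileRevalidate({ cacheName: 'amp-cdn' })
);
```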
And here you can see an image being precached by the service worker and then served directly from the cache without hitting the network at all. And if these two examples look familiar to you, that is no mistake, because they're essentially PHP versions of what you would otherwise be doing in JavaScript via Workbox. But if there is no PHP abstraction for what you need, you can also hook directly into the service worker to add your own arbitrary JavaScript for more advanced functionality. So, given these service worker capabilities, one common application is the App Shell model. App Shell facilitates those seamless page transitions and offline interactions that you often see inside native applications. But building an App Shell is often very difficult and challenging to do in WordPress, because themes and plugins can add absolutely anything to the page, like I just said. And so this, again, is where the AMP plugin plays a key role, because it enforces the constraints to make sure that the content behaves in a way that doesn't break the App Shell model. And AMP is also specifically designed to be embedded inside these kinds of experiences. Here is a demonstration that we built that shows the standard Twenty Seventeen theme with a nice spinning logo for fun. Keep watching that logo. Notice as I open the nav menu and start clicking around: the content is changing, but the logo keeps spinning, unchanged. The header nav menu remains expanded and you don't lose that context as you navigate around the site. So it's a single-page application built in WordPress using a standard theme as the basis. And we could do it with just a minimal amount of code, thanks to the foundation that the AMP plugin, and the AMP project as well, provides. You can look at the pull request on the AMP plugin repo to see more specifics and how you can get started with your own themes. To give you a glimpse of that: here you can see the theme just has to declare theme support to enforce those constraints and make sure that everything is served as valid AMP; the whole theme has to be able to serve every page in AMP. So you add this to your theme. Next, you indicate the region of the template that contains the content that dynamically changes as you navigate around the site. And lastly, you tell the AMP plugin that you're opting into App Shell, which then causes the AMP plugin to generate the service worker logic to precache the App Shell, and to add the JavaScript to the page that will intercept the clicks and do the history management and all that. And the bottom line here is that the theme developer doesn't have to know the complexities of building an App Shell. They just extend what is already in the platform. But service workers are applicable to more than just App Shell. They are also useful for other advanced capabilities, including offline access. If you've developed a WordPress theme, you're probably familiar with the 404.php template, which is used to serve a page to the user when WordPress can't locate the content that you're trying to load. In the same way, we have extended WordPress's template system to introduce an offline.php, which will be served to the user when the browser can't access the site (sketched below in Workbox terms). And so they get that nice branded experience from the theme, without the dinosaur.
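In Workbox terms, the offline.php behavior looks roughly like the following sketch. The precached /offline/ URL and its revision are assumptions, and the module names follow recent Workbox releases; this is an illustration of the idea, not the plugin's actual output.

```js
import { precacheAndRoute, matchPrecache } from 'workbox-precaching';
import { setCatchHandler } from 'workbox-routing';

// Precache the rendered offline template (hypothetical URL and revision).
precacheAndRoute([{ url: '/offline/', revision: '1' }]);

// When a navigation fails, typically because the user is offline,
// serve the branded offline page instead of the browser's dinosaur.
setCatchHandler(async ({ request }) => {
  if (request.mode === 'navigate') {
    return matchPrecache('/offline/');
  }
  return Response.error();
});
```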
And to opt into caching pages that you visited while online, you can opt into the stale-while-revalidate strategy for navigation requests, so that those pages will continue to be available when you go offline. And lastly, service workers are useful for commenting offline as well. Here, if I try to submit a comment while offline, the service worker can intercept that request and then replay it when the user comes back online, via background sync. And here, the developer doesn't have to do anything, because it's just part of the integration itself. So with that in mind, I'll pass it back to Alberto to talk about the road ahead. Thank you, Weston. It's super good to see all these things running. We certainly have made a lot of progress in the context of WordPress, and other CMSs have also made a lot of progress. But we are still not quite where we need to be. So in the context of WordPress: the AMP plugin is available to you today. You can download it from the GitHub link, so please download it and start experimenting with your site. We are going to continue expanding this work, integrating AMP content creation into WordPress, specifically in the context of Gutenberg, the new editing experience on that platform. In the area of integrating PWA capabilities, service workers and all the rest, into WordPress core, we have made a lot of progress. Right now, that functionality is available in the form of a feature plugin that you can also download and experiment with if you are a WordPress developer. And we are going to continue working with the WordPress ecosystem to help them integrate these technologies into their work at all levels. Now, one aspect that we are working on that is very exciting is enabling content creators to easily create content that takes advantage of powerful new formats such as AMP Stories. As you can see in this example, AMP Stories is a mobile-focused format for delivering rich information in the form of visually rich, tap-through stories. We are building an AMP Stories editor for WordPress, which is also integrated with Gutenberg, the new editing experience of WordPress. To give you a glimpse of where we are in this work: what you see here is the Gutenberg editing experience. And the good thing about Gutenberg is that everything corresponds to a block. That design premise lends itself very well to creating AMP Stories, which are also formed from components. So the goal of the editor is to allow users to manipulate all the components of an AMP Story, control the parameters, and so on. We want powerful content formats like this to become available across all CMSs. And one interesting development that is happening is that Gutenberg is being ported to Drupal, which is another popular open-source CMS. And this is great, because then they are going to be able to take advantage of this editor right away. So much more on this will be coming next year. So stay tuned. So to conclude, I can say that the CMS space is moving forward steadily along the progressive web road. Google is working on supporting many CMS platforms to help them advance on this path. And we are seeing excellent results on some fronts. Here we have some examples. Drupal, which, as I mentioned, is another popular open-source CMS: their developer community is working on the AMP and PWA modules. They are available to sites today, and they are working on enhancing them. Magento, which is a popular e-commerce platform, is releasing Magento 2.3 this month.
And that release includes PWA Studio, a set of guidelines and tools for creating awesome web experiences on their platform. Adobe Experience Manager, which is a popular CMS in the enterprise arena, has enabled creating and publishing single-page applications directly from the page editor. And they are working on providing service worker integration natively in their platform early next year. At the end, we have EC-CUBE, one of the leading e-commerce platforms in Japan. They are working with an agency called Sunday Systems on an AMP plugin that will also enable the creation of AMP content on their platform. So we want to extend special thanks to XWP, an excellent WordPress agency that has been partnering with us. They have made awesome contributions in the areas of bringing AMP and PWA technologies to the WordPress platform. With that, I want to say thank you, and stay tuned for more progress: it's coming soon to a CMS near you. You know what? It's the last question of the day. Quick show of hands. Big web quiz. Quick show of hands. Who uses NPM? Good, good, good. Let's have a quick question on this. Is it a real NPM module? Or not. Or not. Let me tell you, it was really hard to find fake names. Yeah. So now you've got to really figure it out. Let's start the round. We've given you a little bit longer on this one. Wow. Look at that. Really? OK. What's up? Whatever that is. Bunyan. Yeah. Offensive name. Could be. Bantaklaus. I like that one. Interesting. Nanananabatman. Jake. Jake again. Seriously. A little bit longer here, so you can take a little bit longer. Quite confident on boom. Surprisingly confident. But most of these seem quite low confidence. Interesting. All right. We're all out of time, I think. Any moment now, I'm expecting. Here we are. There we go. Right. How's everybody feeling? Confident, not so confident? Well, let's have a look at the answers in the first group then. Interesting. Fair enough. OK. I mean, really, I'm as confused as you are at this point on these. Let's see what happens. Just waiting for this to catch up with me. Literally, when we were writing this question, we would just throw any noun at it and then search NPM. And it's like, ha, right? Also, this has completely frozen. There we go. Let's go back. There we are. There we are. Right. This next group. Offensive name is fake. I'm surprised at that. I thought it'd be the kind of thing you'd find. Well, you're just like, I just want to be sure that nothing bad goes in. Bantaclaws. I really want to register that now. I don't know what it would do, but I'd really like to register it. Jake, it's lit. People are unsure about if it's... I mean, that was the last question of the day, and you're probably a little bit fatigued, but really? I mean, if it's real now, I'm going to feel... Oh, it's real. It's real. I'm genuinely surprised. Wow. OK. We should do our final talk of the day, shouldn't we? Yeah, yeah. So our last session of the day is about search discoverability. Please welcome to the stage Martin and Tom. Well, good afternoon, everyone. I'm Tom Greenaway, and I'm a developer advocate based in Google Sydney. And I'm Martin Splitt. I work out of the Zurich office, and I'm a webmaster trends analyst, which is just a fancy name for being a developer advocate for search and the web ecosystem. Today, Martin and I are going to tackle this topic: the best practices for ensuring a modern JavaScript-powered website is indexable by search.
But what do we really mean by this sentence? Well, by best practices, we mean what every developer should know: the techniques, knowledge, tools, and process. And by modern JavaScript-powered website, I know it's a bit of a mouthful, we mean websites which use modern JavaScript frameworks for their front end and are probably rendering their HTML in JavaScript on the client side. They're typically single-page apps, but this applies to any website that uses front-end JavaScript, and they might be powered by, say, AJAX or WebSockets for content. And lastly, what do we mean by indexable by search? Well, by indexable, I mean the content can be understood. And by search, we mean Google Search. But these rules apply to other web crawlers too, such as other search engines or social media services. Right, like Facebook, Twitter, and all the other wonderful ones. Cool. Cool. So now that we know what we want to address here, how are we going to split this up? How are we going to go through this? Well, first things first, I think it makes a lot of sense to quickly go over how Google Search actually does the indexing and figures out what content there is on the web. Then we're going to look into something that, as a web developer, I have really, really wanted all of these years, and we have it now: the tools to help you all, and us, debug the things that we are seeing. And last but not least, we don't want to just debug things after they happen. We want to make sure that we're getting ahead of that. So basically, we're going to talk about a few best practices for indexable content. Cool. So with that out of the way, it's good that I have you here, because I have got something to talk to you about. Okay, that sounds a bit intense. Yeah, don't worry. It's not that intense. It's just I have... How do I put this? I have a friend. Let's call him Marvin. And they are building a single-page web app, and they have done that and followed most of the things that you do these days, like PWA and all that kind of stuff. But they have issues getting users to find their content online. I see. And unfortunately, that's not an isolated issue. Right, right. So you ran a Twitter poll, and other people are encountering this as well. Yeah, yeah, I got a bunch of responses like, "So how can I check if my stuff is findable in search?" or "Hey, my stuff's not showing up, but I don't know why." And I think that needs a little bit of a dive and an explanation, right? I guess that's a good idea. Well, there are a lot of tools available for debugging on Google Search, but perhaps before we get into those, how about we just go over how Google Search sees and indexes the web? That's a fair point. That sounds good. Well, here's a basic diagram of how websites were traditionally indexed. Googlebot, our search crawler, found pages, downloaded them and processed their content, put them into an index, and then performed more crawling based on the links it found. Yeah, okay, but what happens in the process step? That seems to be where the magic is, right? Well, when we fetched a webpage from a URL and it was a traditional website, that webpage was complete when it arrived, and we call this the rendering of the page, or, you know, the construction of the HTML.
Right, so when you say rendering, you don't mean stuff like putting the pixels onto the screen or dealing with DOM transitions and animations; it's just about where the HTML gets constructed, like server-side rendering versus client-side or hybrid rendering? Exactly. Traditionally, websites were rendered entirely on the server, and any JavaScript that was used was probably just for cosmetic purposes. Right, okay, cool. But now that we have figured out that this is about dealing with the constructed HTML: it's no longer that way, right? We have changed our web architecture quite substantially, so what happens there today? In the processing step? Yeah. Yeah, it's a good question. Basically, there's now a renderer inside the process step. More specifically, we have a version of Chrome that opens the content of the page, runs the JavaScript, and then spits out the final HTML. But we also have a queue as well, which is quite important. And that leads into this next point, which John Mueller and I revealed at I/O earlier this year: in a nutshell, the rendering of JavaScript-powered websites in Google Search is actually deferred until Googlebot has the resources available to process that content. Right, deferred. So what kind of timeline are we talking about? What's the delay? Well, it could take minutes, or maybe an hour, or maybe even days, up to a week, before the rendering is actually completed. A week? Yeah, I know. What? Okay, that's... Wow. It's a shock, I know. But you have to understand that the web is really big, right? It's quite huge. In fact, we've found over 130 trillion documents on the web so far. Okay, that's a mouthful, and this number is two years old, and I guess the web's growing. So that, okay, I understand that. Cool. But is there anything that we could do to help the crawlers a little bit? If I remember correctly, when I attended the session at I/O, you said something about dynamic rendering. Is that something that would come in here? Yeah, exactly. So dynamic rendering is a technique that allows us to sort of short-circuit the rendering pipeline, by delivering a server-side-rendered version of your normally client-side-rendered website: rendering that client-side JavaScript on the server with a mini client-side renderer. For example, a headless browser like Puppeteer could be used. Oh, right, yeah, that's pretty cool. So how does that work in detail? Well, here you can see how a server identifies that the device requesting the page is a user's browser, and then it serves a payload of HTML and JavaScript that gets rendered on the client, right? So that's basic stuff. But when a crawler like Googlebot makes a request, the server sends a different payload. Instead of sending the HTML and JavaScript directly to the crawler, we send what is normally sent to the browser to the dynamic rendering service, and we run the payload through that service. The dynamic renderer then spits out a completely statically rendered HTML payload for the crawlers. Ha, that's pretty smart, okay. And to be clear, that dynamic renderer could be an external service, or it could be running on the same web server infrastructure. All right, yeah, that makes sense. I guess for this kind of stuff you can use tools such as Puppeteer, which you mentioned already, but you could also probably use Rendertron, which is a higher-level abstraction.
Like, Puppeteer is basically an NPM module that you install, and it remote controls, or programmatically controls, a Chrome instance that runs headlessly, which is great. But I like something more high-level, and I think Rendertron steps in there, where you basically just have a Rendertron server running, which uses Puppeteer under the hood, and you give it a URL to render and you get the rendered static HTML back. That's pretty cool, okay, it's fantastic. I guess you can also deploy that pretty easily. I think there's this thing called Google Cloud Platform that's probably pretty easy to deploy to, but I guess you can also deploy it pretty much anywhere else, right? Do you have an example? I do have an example, actually. So, there's an NPM module that is called the Rendertron middleware. So if you're using, let's say, Express.js, you can use that as the middleware. What you're doing here is basically, as a first step, you require it, you need to get that into your project, and then you configure it to do the right thing for you. In this case, we want to specifically jump the rendering queue for Googlebot. Rendertron, by default, doesn't do pre-rendering for Googlebot, because Googlebot does run JavaScript, but we want to get the advantage here anyway, so we can just take the pre-configured, pre-built list of user agents that it pre-renders for and add Googlebot to it. And then, once we have that configuration ready and have imported it, we can go on to actually use it in our application middleware stack. So we can basically point it at the running Rendertron server somewhere and say, for all these user agent patterns, pre-render. By the way, now that I have got you here, because you never respond to emails in a timely manner, which is fine, neither do I, no offense, and I mean, Chrome Dev Summit is a big event. So this rendering does cost a bit of resources, so I'm wondering, what does Rendertron really do to figure out what's Googlebot, and how can I verify that it really is Googlebot and not just someone pretending to be Googlebot? Yeah, well, the easiest way is to do user agent sniffing for the Googlebot string. Here's an example with the mobile user agent for Googlebot, but you might want to do this for other services as well that you want to serve pre-rendered content to, like social media services. And for Googlebot, you can additionally do a reverse DNS lookup to confirm that the request really is coming from Google. And like I said, this is the mobile user agent, so you can detect the desktop user agent as well, and that URL will give you a list of all the different user agents. Right, that's nice. Which reminds me, actually, I should sync with John Mueller and check if there are new tools in Search Console for this stuff. Or maybe you can tell me. I was about to say, why do you bring John into this? Come on, I'm here, I'm, like, literally a meter away from you, or I don't know how many miles or feet or inches that is, I have no idea. But basically, we have a bunch of stuff for you, and I'd like to walk you through that. So, you know the Google Mobile-Friendly Test already? Yeah. So, you know, this is kind of useful. It shows you if your page is mobile-friendly, it gives you a screenshot of what Googlebot is seeing, and it's pretty easy to use, you just paste your URL in there.
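To go back to that middleware example for a moment, a minimal sketch of wiring rendertron-middleware into an Express app could look like this; the Rendertron instance URL is hypothetical, and the botUserAgents list and makeMiddleware options reflect the package's documented API as I understand it:

```js
const express = require('express');
const rendertron = require('rendertron-middleware');

// Rendertron doesn't pre-render for Googlebot by default, since Googlebot
// runs JavaScript, so we add it to the built-in list of bot user agents.
const BOTS = rendertron.botUserAgents.concat('googlebot');
const BOT_UA_PATTERN = new RegExp(BOTS.join('|'), 'i');

const app = express();
app.use(rendertron.makeMiddleware({
  proxyUrl: 'https://my-rendertron-instance/render', // hypothetical URL
  userAgentPattern: BOT_UA_PATTERN, // pre-render for these user agents only
}));
```

And for verifying that a request claiming to be Googlebot really comes from Google, a sketch of the reverse-DNS-plus-forward-confirm check in Node could look like this:

```js
const dns = require('dns').promises;

// Reverse-resolve the IP, check the hostname is on googlebot.com or
// google.com, then forward-resolve it and confirm it maps back to the IP.
async function isRealGooglebot(ip) {
  try {
    const [hostname] = await dns.reverse(ip);
    if (!/\.(googlebot|google)\.com$/.test(hostname)) return false;
    const { address } = await dns.lookup(hostname);
    return address === ip;
  } catch (err) {
    return false; // unresolvable IPs are not Googlebot
  }
}
```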
But the Mobile-Friendly Test does more than just that, because it also gives you this, which is what I always wanted to have: when Googlebot does not render what you expect it to render, you get the JavaScript console log messages that you would normally get from Chrome DevTools. So you can really debug the JavaScript. You can really debug this. And you know, here's my favorite. I mean, we had this "is it undefined" question earlier on? Apparently it is undefined, and undefined is not a function, which is unfortunate, but that happens. Also, do you know about the new URL Inspection tool that we've got in Search Console? I don't think so, can you remind me? Yeah, sure. So if you have a verified property in Search Console, you can basically paste any of the URLs belonging to that property into the URL Inspection tool, and you get, like, when we crawled it, whether it's on Google or not, a bunch of other information, and you can run a live test as well. So this blog post here is, drumroll, not in the index, but that's fine, whatever. We also have something else. It's actually a pre-announcement that we're going to make at Chrome Dev Summit right now. So can we get a little bit of a drumroll? We'll get the live code editing feature in Search Console. OK, what does that do? Fair enough, I think, yeah, OK. So imagine you're building a website. Let's say there's an afterparty today, so you build the website for the Chrome Dev Summit afterparty, and you want to check if your structured data works to get highlighted in search results, right? And you want to check that as quickly as possible. Yep, makes sense. You want to be able to iterate quickly, and you don't want to have to wait for deployments of your site. Yeah, exactly. I want a development cycle that makes more sense. So what you can do is pop that into this wonderful tool. And here we have an example. We are using JavaScript to create a script tag that contains the structured data, and we have all our wonderful structured data for the event here. And then we can click on the little test code button, and what it gives us is this. And we're like, yay, our event is available, and there's a code editor over here. And what we see over there is that it's missing the performer for the afterparty. Oh, wait, that's a shame. I think it's meant to be the Chrome Dino. It's the Chrome Dino, so we should add this performer. So what we can do is go straight back into the code editor, click the button again, and it live updates as we retest our code. So we can do all of this in the browser, in a single tool. And I think that is pretty fantastic, really. That's pretty good stuff. OK, so that is definitely a neat way of testing and trying our code out and iterating quickly. And that's something that Search Console generally tries to do, right? You have this really nice flow. So let's say someone in your company or agency or wherever has access to Search Console. I don't normally have access to Search Console as a developer, because I have so many other things to worry about. And then someone finds an issue through one of the reports. So how do they get this information to me? Well, one way of doing so is they can just go and see this issue from the reports page, where there are a bunch of samples. In this case, the content is wider than the screen. And in the second stage of this, they can basically just say, all right, our developer might be an external developer.
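To picture the kind of snippet being pasted into that live code editor: a minimal sketch of using JavaScript to inject JSON-LD structured data for the event might look like this, with all the event details being hypothetical stand-ins:

```js
// Build the structured data for the (hypothetical) afterparty event,
// including the performer that was missing in the first test run.
const eventData = {
  '@context': 'https://schema.org',
  '@type': 'Event',
  name: 'Chrome Dev Summit Afterparty',
  startDate: '2018-11-12T19:00:00-08:00',
  location: {
    '@type': 'Place',
    name: 'Yerba Buena Center for the Arts',
  },
  performer: {
    '@type': 'PerformingGroup',
    name: 'Chrome Dino',
  },
};

// Inject it as a JSON-LD script tag so crawlers can pick it up.
const script = document.createElement('script');
script.type = 'application/ld+json';
script.textContent = JSON.stringify(eventData);
document.head.appendChild(script);
```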
Because that developer might be external, we don't want to give them access to all the data, so we share just this particular issue with them. And they get a link. They don't have to sign in or anything. They can just use the link that is shared to see what the issue is and get access to the documentation that explains what the problem is and how to fix it. And then, last but not least, when I as a developer then go, like, I fixed this, I have this under control. We know that that's often not true, so what Search Console offers us is this validate fix button. So if I'm like, Tom, I gotcha, I fixed this, you or I can press this button, and it goes and re-checks the fix. Yeah, right, okay. So it really establishes a flow. And it is a really nice workflow, and it works across departments, which I think is pretty fantastic as well. Yeah, that's nice. But actually, there is another addition to our DevTools. Right, you were talking to me about that. Yeah, exactly. I'm sure everyone obviously knows about Lighthouse. They've been to the forum space and they've seen the awesome statue we've got there. Oh, that's fantastic. But what if I told you that there are actually SEO audits inside of Lighthouse? In fact, we've got a few more coming soon. So basically, this can automatically detect things like whether your HTTP header responses are correct, and meta tags as well, like whether you've got correct title and description tags, or that you've got hreflang set up correctly if you're using that for localization or internationalization. And also descriptive link text, even for anchors. So, you know, exactly, you want to avoid "click here", because it doesn't really communicate what the thing you're linking to actually is, right? Number five is going to surprise you. And then robots.txt, and the new features we're adding are automatic detection of the size of tap targets and the margin around tap targets, to make sure that they're nice for users, and also structured data testing as well, which we were just talking about. Yeah, yeah, we're gonna get more of that, that's fantastic. That's really good to see, yeah, cool. All right, so I think from the tool side, we have Lighthouse audits for SEO, we have Search Console, we have the Mobile-Friendly Test and the Rich Results Test with editing features. I think we're pretty good on that front, but do you have any recommendations in terms of best practices that I should tell my friend Marvin? Yeah, exactly, my friend. No, I'm serious, this is a friend of mine, it's totally not me. Any best practices that we should recommend to them? Yeah, yeah, let's go through a few. Okay, cool. Well, firstly, remember how I said that Googlebot is running Chrome nowadays? That is fantastic, finally, we have a modern browser. Well, actually, wait a second. It is Chrome, but it's not actually the latest version. It actually runs Chrome 41. Right, 41, it's not even 42, the answer to life, the universe and everything, it's just 41. It's not quite there yet, but Chrome is working on it. But seriously though, since it's Chrome 41, and Chrome 41 was released in 2015, it doesn't support all of the latest features of modern browsers. So, for example, it doesn't actually support ES6, so the latest language features aren't available. And while it has web components, it's actually version zero of the spec. And another thing to note is that it's actually stateless, which I'll explain in a little bit. Ah, okay. Yeah, it's an interesting one, I'll break it down.
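As an aside on the Lighthouse SEO audits mentioned above: if you'd rather script them than run them from DevTools, a minimal sketch using the lighthouse and chrome-launcher npm packages could look like this; the target URL is a placeholder:

```js
const lighthouse = require('lighthouse');
const chromeLauncher = require('chrome-launcher');

async function runSeoAudits(url) {
  // Launch a headless Chrome instance for Lighthouse to drive.
  const chrome = await chromeLauncher.launch({ chromeFlags: ['--headless'] });
  const result = await lighthouse(url, {
    port: chrome.port,
    onlyCategories: ['seo'], // run just the SEO audit category
  });
  // The category score is reported on a 0 to 1 scale.
  console.log('SEO score:', result.lhr.categories.seo.score);
  await chrome.kill();
}

runSeoAudits('https://example.com'); // placeholder URL
```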
So, back to those Chrome 41 limitations. For example, with this code, how many ES6 features can you spot? A few. That's a good answer, yeah, okay. More than zero. Yeah, exactly. Now, we might forget that some of these are relatively recent features, like enhanced object literals, const declarations, backticks, and variable substitution in template literals. Yeah, you use them every day, right? Yeah, exactly, yeah. So one way to deal with this, if you need a solution, is to use something like Babel, which allows you to transpile ES6 code down to ES5 automatically. And you can easily compile a set of files or a directory into a single file for serving. And using presets, you can also declare a minimum browser version and use that as a baseline, so you can ensure that browsers which support ES6 get the ES6 code, and browsers that can't get the ES5 transpiled code. Right, that makes sense, okay. And now, while Chrome 41 does have web components, it's actually an older version of the spec than you're probably used to. So after Chrome 41 shipped, several features, such as custom elements and Shadow DOM, had changes made to their specs. So depending on exactly which version-zero features you use, it might be very simple to migrate, or it might get more complicated. But the most important thing is to be aware that there are differences. Yeah, there are differences in this. And lastly, this probably shouldn't come as too much of a surprise, but Googlebot basically doesn't really have any memory. And what I mean by that is that every time it accesses a web page or a site origin, it always acts like it's the first time it's ever encountered that website. To achieve that, we turn off a bunch of things. So we don't have service workers running, so we don't have a service worker cache, and we don't have local storage or session storage, and so forth. Makes sense. If you click on a search result, that's like you're coming to that page for the first time. So we... Yeah, what is the first-time user experience when you encounter it? Makes sense. So Martin, do you have any suggestions for how we can substitute for some of these things? So I think if you look at things like web components and a few of these features, like Intersection Observer and all these things, I guess polyfills are a good way, once you've done your homework and implemented progressive enhancement, or graceful degradation at least. But the problem with polyfills, I feel, at least, is that there's a bit of a risk of shipping dead code to people, giving them a bunch of stuff over the network which, depending on where they are and what plan they are on, might actually be pretty expensive and time-consuming. So you wanna reduce that. And actually, I really like this library called polyfill.io. So polyfill.io basically sniffs on the user agent, not on the user, of the browser requesting it. So if the crawler comes by, it's like, oh, this is the Googlebot crawler on Chrome 41, so I give it a bunch of stuff that a normal user on a more recent Chrome or Firefox or Edge or whatever doesn't need. So basically, it tries to give you the right amount of code that you need to make this work. So that's pretty fantastic, I think. That's kinda cool. Yeah. Is there any place where I can figure out more about these feature issues, where features are missing in Chrome 41 that are there in the modern one? Yeah, definitely.
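Picking up the Babel suggestion for a second: a minimal sketch of a babel.config.js using @babel/preset-env with Chrome 41 as the baseline could look like this (assuming @babel/core, @babel/cli, and @babel/preset-env are installed):

```js
// babel.config.js: transpile modern JavaScript down to what Chrome 41,
// and therefore Googlebot's renderer as described above, can execute.
module.exports = {
  presets: [
    ['@babel/preset-env', {
      // Chrome 41 is the baseline: anything it can't run gets transpiled.
      targets: { chrome: '41' },
    }],
  ],
};
```

With the CLI, something like `npx babel src --out-file dist/bundle.js` would then compile a directory into a single file for serving, as mentioned above.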
So, on where to look things up: if you check out caniuse.com, it's a great resource for this sort of thing, because you can check features across various browsers, and you can also specify specific browser versions. You can say, hey, what's different in Chrome 41 specifically? Cool. That's pretty nice. And also, the golden rule of any kind of indexability, and of building a website for search crawlers and that kind of thing, is to just make sure you test really frequently. Yeah, fair enough. Test, test, test. Fair enough. Cool. Going back to dynamic rendering for just a second, and kind of the tools discussion. So if I test my stuff and figure out, oh no, this feature is really tricky to work around and I don't wanna change my code, I guess I can use dynamic rendering for that. But I guess there are trade-offs, and there are situations where I shouldn't be doing that. So what are the sites that should, generally speaking, look into dynamic rendering? Well, because the rendering queue can introduce some delay, if your site has lots of frequently changing content, like a news publisher or something like that, where you've got maybe breaking news and articles coming out and changing very frequently, you might wanna look into using dynamic rendering to overcome those delays. And if your pages use features that aren't available in Chrome 41, and it's not possible to work around those limitations, at least in the short term, then dynamic rendering is a useful workaround until either Googlebot catches up or you have the time to adapt your own implementation. Right, that makes sense. And also, while Googlebot supports JavaScript, other crawlers might not. So for example, if your site relies on social media interactions a lot, those crawlers tend to not run JavaScript. So if I share a link on social media and I want a nice preview card to be created, the service probably wants to access the image and the title and the meta description or something like that. So if you're actually using client-side rendering for those things, then you might just get the raw template variables that you're using, instead of the image and the title. Yeah, in the preview, which would be bad. So in order to get better representations there, you can also serve a dynamically rendered version of the page. Cool. That sounds pretty good. But ultimately, you know, dynamic rendering is a powerful technique, but it's still a workaround. So, Martin, do you have any idea if we've got plans to improve this on the Google side? Yeah, way to put me on the spot. So I don't like to make predictions on that kind of stuff, but definitely, the way that our infrastructure is set up, there is a bit of a gap between when we actually execute the JavaScript and do the rendering, and then the indexing bits and the crawling. So we try to bring these closer together, which is an interesting technical challenge. And at the same time, we're trying to catch up with Chrome. But we don't want to just catch up once, because then we're going to have the same freaking conversation in a few years, like, oh yeah, Chrome is running version 70, and everyone's like, oh, when does Googlebot get into three-digit land? So we're basically working on a process that we're hopefully gonna start very soon.
So I make no promises on when, but we are working on figuring out a process to stay up to date with Chrome, so that Googlebot runs on the Chrome release schedule and we are basically giving you an evergreen Googlebot. Yeah, right, that would be fantastic if we could do that, yeah. And also to shorten the delay. Yeah, exactly, that goes hand in hand, really, because we have to touch the rendering infrastructure anyway, so we might as well do both things. But I think we might get one update quicker than we get the two things together. But okay, cool. We'll see, yeah. Cool, all right. So thank you so much for walking me through how the indexing works, in bits and pieces. That was pretty fantastic. I learned a bunch of new stuff. Yeah, and I hope this helps Marvin as well. Marvin is gonna love what I'm gonna tell him, I'm sure. Okay, yeah, well, thanks for showing me all those Search Console tools as well. Oh yeah, they are pretty fun. And if developers need more support and help, like Marvin, where else can they get it? So, I'd try to get Marvin to go to our documentation, because the documentation is expanding quite quickly, and we have a bunch of cool documentation coming up and good documentation already there. We have webmaster hangouts, so you can pop by and talk to us over a Hangout and ask us questions there as well. They are recorded, so if you wanna go back to one that has happened and there was an interesting question, you can find it there as well. We have what's called the Webmaster Forum, where a bunch of fantastic people called Product Experts are also helping if there's anything coming up with Search. And last but not least, if you've been to the forum space before, we have a Search Console booth where you can try out Search Console and get stickers and stuff, so definitely check that out tomorrow. So yeah, that's a bunch of resources. Awesome, yeah. Cool, well, yeah, thanks, everyone. Thank you very much for staying with us for a moment. I'm composing myself. Now, at this point, we're going to add a dart. Fluffy, fluffy, fluffy, fluffy, fluffy, fluffy. It's very fluffy there. Let me put my teeth back in. Fluffy, fluffy. Sorry, I kicked that. And flatten it. Hold on, hold on, you can't flatten it. Um, can we go again? And that's all there is to it. Why are we still filming? Oh, this is it. This is it. Half of it done. Yeah, half of it done. But before we go, we've been doing the Big Web Quiz all day. Don't worry, there's not another question. It's fine. Chill out. But we are going to take a look at the leaderboard. Ooh. Because the main prizes are going to be tomorrow. It's a big poster that we show. The big poster. Yeah, but we did prepare a tiny token of appreciation. Yeah. Can you see it? It's a fridge magnet. Yeah. Oh, yeah. Yeah. So, should we have a look? I'm very excited. I want to see who won. Okay, here we go. Here is the top three. Ooh. Liz, Ray, and... Oh. To be super clear, these questions ranged, as you noticed, from the highly ridiculous to the massively irrelevant. Um... But don't let that stop you from playing. So, just to be super clear, like, well done to these people, but also, you know, don't feel... Bad about it? Bad if you didn't get any of these. Because, I mean, honestly, as you saw, I was still there going, Jake, it's not really... It's not an NPM module. It's an NPM module. Yeah, exactly. So, I mean... He's just silly. And I actually wrote some of these questions.
So, you three, come to the front at the end, and we'll give you... That fits my head. Beautiful prizes. Yeah. And, yeah, that is it. End of day one. So, we will see you back here 10 a.m. tomorrow. Goodbye. So, I mean... What do we talk about? Um... Um... We're supposed to be doing this now, aren't we? All right. Let's, uh... That's our intro. Okay. What is it? Give me the TLDR. This is not going to be TLDR. This is going to be long. Is it? Me-me-me-me-me-me. We don't want to rock the boat till it's in the bag. And it's like, don't put boats in bags. Like... Ah, no. Farts are still funny. We don't have enough memory for all these tabs. It doesn't matter. We don't need all the tabs. I just want to have a count of how many videos we started with. So... Yeah. We'll keep doing this as a kind of regular thing. So, I've essentially just wasted everyone's time. Well done. Welcome to HTTP 203. But also, the web comes up, right? Oh, yeah. We do talk about the web. And that's all on the Google Chrome Developers YouTube channel. Yep. Hey, everyone. Sam here. This is a quick web series about solving web... We'll solve common web problems with the... These techniques are... Okay, let's go. Again. Yep. Hey, everyone. It's Sam here. As you can see, I definitely can't do these videos in one take. But I love the web and educating folks about web standards. So, a little while back, I kicked off The Standard, covering the simple things you can do, all with standard HTML, CSS, and JavaScript, like: interacting with your device's gyroscope to create interesting mobile sites. Speeding up your page's rendering by using layers correctly. Using icon fonts, a great way to have cheap, fast, cross-platform iconography for your sites. Sending down compressed videos, not GIFs, and saving your users' bandwidth. My favorite, implementing drag-and-drop on your site using just the right events you need and making the entire page your drop target. Using legacy code in your site with WebAssembly. And what happens when your users leave your page and how your site can deal with it. The web is moving fast, and on the Chrome Developers channel, you can hear from folks talking about the latest and greatest. But for me, I love helping bring every developer up to the standard with these simple, framework-agnostic, actionable tips. So remember, The Standard is live on the Chrome Developers channel, and these tips work on any browser and platform. I'll see you on the next tip.