Hey, good afternoon. I'm Alex Russell. I'm a software engineer on the Chrome team, and I've been one for longer than I should probably admit on a stage this big. These days, I lead the team that has helped bring progressive web app technologies to Chrome, in collaboration with folks in the web development community and our partners at other browsers. If you saw Rahul and Dave's talk yesterday, what you may have taken away from it is that we're serious about helping you adopt and succeed with two technologies this year: progressive web apps and Accelerated Mobile Pages, or AMP for short. We don't have time to get into all the technical details of each of those technologies today, so I recommend that you check out Jake Archibald's talk on progressive web app architectures and instant offline, and Malte Ubl's talk on how AMP achieves its performance. They go into details that I wish I had time to get into today. What we're going to talk about today is end-to-end performance across multiple pages, and in some cases even across multiple sites: how we can build reliable experiences for users. I think we all know that performance is money. We know it intuitively, but the data really helps drive it home, and a big shiny graph doesn't hurt either. SOASTA has shared this data with us, and what it shows is that users who experience slow sites bounce away from those sites much more frequently than users who experience fast pages. So we talk about fast, and we talk about slow, and frequently we use confusing terms for that. Do we mean when something's loaded? How smoothly it animates? How responsive it is to me tapping on a button? Something different than that? What do we mean when we say something is fast or has good performance? To get clarity on this question, last year Paul Lewis and Paul Irish introduced the RAIL model for web application performance. RAIL is an acronym that stands for Response, Animation, Idle, and Load, because we like acronyms, I guess, on the Chrome team. Apologies. To recap RAIL: the R stands for Response. We want to respond within 100 milliseconds to any user action. We also want to animate at 60 frames a second, which means that we have to get frames on screen consistently every 16 milliseconds. Now, the browser has to do some work when we change HTML, CSS, or the DOM: it has to apply that change and then paint it all the way out to the screen, so we've actually got less than the full 16 milliseconds. A good rule of thumb is to try to get your main-thread work for animations done in eight milliseconds, or half the total frame time. We also want to avoid timers that run forever in the background, waking up the CPU and draining your battery quickly. So when we're idle, we want to do work in collaboration with the browser, but we also want to break that work up into small chunks, preferably around 50 milliseconds each. At that size, we can make sure that when the user taps next time, we can respond within the 100 millisecond window. The last bit of RAIL is Load. We're going to talk a lot about loading today. The goal here is to try to get something to the user within a second. If it takes longer than a second, the user starts to lose focus. The user experience research on this is decades old now and pretty conclusive: people aren't able to keep themselves in the task they were trying to accomplish.
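The idle-time chunking advice from a moment ago is concrete enough to sketch. Below is a minimal, hypothetical example using requestIdleCallback, which hands your callback a deadline describing how much idle time the browser is willing to give you; the tasks themselves are stand-ins for real deferred work:

```js
// A minimal sketch of idle-time chunking. The tasks are hypothetical
// stand-ins; each one should take well under 50 ms.
const tasks = [
  () => console.log('build search index chunk'),
  () => console.log('prefetch next article'),
  () => console.log('report analytics'),
];

function processTasks(deadline) {
  // Stay inside the browser's idle budget so the next tap can still be
  // answered within the 100 ms response window.
  while (deadline.timeRemaining() > 0 && tasks.length > 0) {
    tasks.shift()();
  }
  if (tasks.length > 0) {
    requestIdleCallback(processTasks);
  }
}

requestIdleCallback(processTasks);
```

Because each unit of work stays small, the main thread is free almost immediately whenever the user interacts.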
If you aren't familiar with Ilya Grigorik's High Performance Browser Networking, I recommend that you read it. It's maybe a good thing to put you to sleep at night, or something to read on the way home from I/O, but it goes into great depth and has lots of research about how mobile networks specifically conspire against us. Ilya's research shows that because of resource transitions at the radio level, plus DNS, TCP, TLS, and HTTP setup, it may never be possible for a user on a 2G or 3G connection to hit the L, that one-second load time. You can't even get Hello World delivered in a second if you have to transition out of a low-power radio state on a mobile device. And Ilya said something to me the other day that really rang true: a 4G user isn't a 4G user most of the time. I commute using public transit in San Francisco, on a train that is occasionally connected, and I see my phone jumping through a bunch of states all the time. It frequently starts on LTE as I descend into the station. Then I go down to the train, I get 3G for a bit, then I'm off the network, then I'm back onto something else; it gets up to 3G again, and then it transitions up to LTE. Even when I have full bars and that 3G connection, it can still take seconds for something to come back, and the reason for that delay is the radio resource control and all of that setup time. So the web today isn't particularly reliable, even when we have full bars. It's also the case that we wind up using confusing language about what it means to be loaded, so even trying to measure this experience is difficult, because we don't share a common vocabulary. When some people say loaded, they might mean DOMContentLoaded. You'll see this in DevTools. DOMContentLoaded fires when most of the work has been done to construct the DOM, but if you've got asynchronous scripts or you're delay-loading content, it may not actually correspond to what the user cares about. Same for onload: it doesn't take into account the asynchronous work you might have happening. So the structure of your page matters, and it makes these unreliable indicators of loading. Then there's DOM stability, which we think of as when the document has actually finished constructing, and visually complete, which is what a lot of industry professionals use when thinking about web page performance today, because it's less subject to the sort of error and gaming you see from the other metrics. But visually complete doesn't mean interactive. A page can be visually complete while I still can't use it. I think what we really care about when we say something is loaded is time to interactive. When can I start using this thing? You put something on the page; when can I tap?
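To make those milestones concrete, here is a small sketch, assuming only the Navigation Timing API, that logs the two traditional numbers; note that neither one tells you when the page actually became interactive:

```js
// Log the traditional 'loaded' milestones relative to navigation start.
window.addEventListener('load', () => {
  const t = performance.timing;
  console.log('DOMContentLoaded at',
      t.domContentLoadedEventStart - t.navigationStart, 'ms');
  console.log('onload at',
      t.loadEventStart - t.navigationStart, 'ms');
  // There is no built-in 'time to interactive' event; it has to be
  // approximated, e.g. by watching for long main-thread tasks to stop.
});
```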
Our tools today take us away from that time-to-interactive goal in some really weird ways, and we're starting to see a new generation of frameworks and tools that are optimized for the first experience, the first paint, even though it may not be interactive. Paul Lewis illustrated this pretty brilliantly on Twitter the other day, and I'm shamelessly borrowing his charts. This is basically the first architecture you could choose, and our tools make it possible today pretty fluidly: you can build a single-page application, which trades away initial load time for eventual fast interactivity. And this is great. Once that initial JavaScript is loaded, I can do things in Gmail very quickly indeed, but I have to wait a long time. So this is great for experiences that you live in, but it's terrible for experiences that are transient. When you just want a thing, watching the loading bar is basically the worst thing in the world. More and more we're seeing the uncanny valley, where you have a server-side framework that takes application state, dehydrates it into some HTML that captures the current view state, reinflates it on the client side, and from there boots up all the JavaScript to finally make it interactive. This is super unsatisfying, and it's particularly unsatisfying on low-power devices on flaky networks, like phones. So I think what we're really after is interactivity. The PRPL pattern that Taylor Savage showed you yesterday really highlights how we can do better here, by putting interactive pixels on screen and only loading the resources that we need right now. It's also the case that articles wind up in this uncanny valley today because of third-party content; AMP was designed explicitly to address this for article content. So AMP and PRPL and a lot of the new technology we're putting into browsers and the web platform are necessary because today the web is slow. The web feels terrible because of all this third-party stuff: analytics, tracking, poorly structured pages, badly considered font decisions. And it's easy to miss how bad this is, because we wind up doing most of our development, in DevTools, on the desktop. The computer that I lug around with me every day is a quad-core i7 MacBook Pro. It's got six megabytes of L3 cache, four cores with a 14-stage-deep instruction pipeline, and it can dissipate 47 watts. Its GPU is a separate chip. The other device I carry with me all day, every day, is my Nexus 5X, and if that thing dissipated 47 watts, it would release the magic smoke that causes computers to work. These differences are everywhere, and they're most obvious in script execution. This is why mobile is harder than it looks. Here, for instance, is last year's I/O site. It was made with a pre-1.0 version of Polymer; Polymer's gotten a lot faster since. What we see here is DOMContentLoaded at 700 milliseconds on my desktop, on a Wi-Fi connection. This is a fast connection and a fast machine, and it looks great. Onload happens at a second and a half, and the animation after onload is super smooth. The total JavaScript time for this load is only 600 milliseconds. The animation transitions out very nicely, and we get to interactivity at four seconds. Pretty good. I'd take that. Feels good. Now, this is the exact same document on the exact same Wi-Fi network, except on my Nexus 5 connected over USB to chrome://inspect. Same DevTools, but a very different picture. In this version of the same site, we get DOMContentLoaded at 2.5 seconds. Script blocks the UI thread for two full seconds. Onload happens at 5.5 seconds, and we've got four seconds of total JavaScript running. We still don't get smooth animations for all the work that we're doing, and interactivity slips a full three seconds from where it was on the desktop. Again, this isn't the network conspiring against us. This is just the CPU. OK, so mobile's hard. This is Framework X; the names have been changed to protect the slow. The first render feels downright instant on my desktop.
It's interactive at 500 milliseconds. The total JavaScript time is less than a second, and on my MacBook Pro this absolutely meets RAIL. This is an outstanding experience, and they've done server-side rendering. On the Nexus 5, the first paint is really fast, but it goes downhill from there. The script gets started pretty quickly, and again, I'm on Wi-Fi, but it locks the UI thread for 10 seconds. That loading spinner that they put up in the initial paint stops spinning, because the CPU is too bound. That's pretty bad. And we don't get interactivity until 12 seconds. Our desktop dev tools are lying to us. This isn't good enough. This is nowhere near good enough. So we've been working with a lot of partners over the last year or so, and mostly I just sit down with them with a phone, plug it in, and show them what it actually feels like. When we do this, engineers get serious feels. It's not great. Sort of sad-making. Now, it is possible to continue to use a framework, but we have to do it differently. We have to use a different style of constructing our applications. We can't continue to pile into the JavaScript clown car and assume that it's all going to be fine. It doesn't work that way. If you saw Taylor or Kevin's talks yesterday, you might have gotten wind of a new style of constructing applications that the Polymer team is pioneering. They call their pattern PRPL. Here's the Shop demo, again on my desktop MacBook Pro. Like Framework X, it's interactive at 400 milliseconds. So what, right? Everyone can do that. The total JavaScript time is 200 milliseconds, so it's less, but meh, they hit RAIL. So what's to see? It only shows up when you go to mobile. It only shows up when you're on real hardware. And the differences are huge. The first paint quickly acknowledges to the user that we're doing work on their behalf, even though it isn't the full UI; it's just a thing at the top that says, hey, we've got some stuff. But when we do paint, those painted pixels are actually interactive. You can use them immediately. And the reason they're interactive is that the additional work that happens later, in the background, to fill out the rest of the application is all happening in small chunks. We can continue to scroll and interact with everything that's on the screen. This site feels fast on a 3G device, and on mobile it's interactive at 1.7 seconds, which, I don't know, I'd take that. It's doing a full second and change of JavaScript, and we saw earlier how two seconds of JavaScript can be terrible. But this is different. That initial wad of JavaScript gets run in, what, 460 milliseconds? After that, it all gets broken up into fine-grained little chunks, so you don't feel 1.3 seconds of JavaScript. You feel that the site is interactive. And the way they accomplished that is to use the platform: to lean on the platform to schedule the work, to provide the component model, to provide the loading mechanism, to provide the caching mechanism, and to do granular dependencies with HTML Imports to make sure that things only happen when they need to. I can't stress this enough. What we're doing today isn't good enough, and if we use the platform, we can get to good enough.
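The granular-loading piece of that can be sketched with the HTML Imports API Chrome shipped at the time. This is a hypothetical helper and view URL, not the Polymer team's actual implementation:

```js
// Load an HTML Import fragment only when it's actually needed.
function lazyImport(href) {
  return new Promise((resolve, reject) => {
    const link = document.createElement('link');
    link.rel = 'import';
    link.href = href;
    link.onload = () => resolve(link.import); // the imported document
    link.onerror = reject;
    document.head.appendChild(link);
  });
}

// e.g. fetch a view's element definitions on first navigation to it;
// the custom elements it defines upgrade in place once loaded.
lazyImport('/views/detail-view.html')
    .then(() => console.log('detail view ready'));
```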
It isn't just device limits. Our other assumptions screw us, too. We tend to be connected on Wi-Fi when we're in the office testing stuff out, and Wi-Fi has its own problems, but it's nothing like a real mobile network. Even DevTools' emulation of mobile networks doesn't really accurately model the wild transitions that the physical layer goes through, and our protocols aren't well adapted to those transitions. TCP doesn't know what it means to have a physical layer that does this. A lot of our tool chains also have legacy desktop assumptions built into them, from analytics frameworks to font loading tools to your JavaScript framework, to start with. They're making fundamentally wrong decisions for mobile, and that really shows up when you're on a real device. These problems can seem culturally and organizationally impossible to overcome, and that makes the goal of using the platform to help us out seem impossible, even when we know what the right things to do are technically. That's what motivated the AMP team, in part, to develop a set of off-the-shelf web components that bake appropriate assumptions for mobile directly into the format. It's been amazing to me to watch the AMP team's progress over the last year, because once upon a time I helped lead the team inside the Chrome organization that designed web components. In 2010, we expected that there would eventually be a diversity of frameworks and toolkits built on top of web components that would be interoperable as a result, and what we're seeing today is that diversity in the wild. You can use AMP to build really fast articles, or Mozilla's A-Frame to build 3D environments, or Polymer or Vaadin to build really expressive, immersive applications. Seeing all that diversity really drives home to me how many problems we're solving with the web, and it also shows how an interoperable component model can make us, as developers, significantly more productive. What we didn't anticipate, though, was what the AMP team did: they built a validator for their subset. AMP takes away a bunch of the misfeatures of HTML's legacy design decisions, and it marries that subset with a bunch of fixes that AMP brings along for the ride. And it ensures, through the validator, that all AMP content meets AMP's design goals. So what are those goals? I'd say, basically, that AMP is trying to enforce content modesty. To do that, AMP follows some rules, and those rules are all enforced by the validator. AMP only allows JavaScript to run in the main document if it comes from AMP. That cuts out a huge set of anti-patterns, but it also means that you can't add custom non-AMP behavior to your pages. Next, AMP elements lay out only once, and they always have proportional sizing, which means you'll never see that thing where the content you're about to tap shifts out from under your finger. That doesn't happen in AMP documents. AMP also goes to great lengths to get content on screen as fast as possible, delaying things that aren't critical to the user experience. It batches DOM work to make sure that the layout engine can process things as efficiently as possible. It removes the scourge of the modern web, which is multiple analytics frameworks loading in, all to instrument exactly the same thing every single time; it federates that out from a single listener. And lastly, thanks to all those restrictions and some cleverness about knowing what state a document is being loaded in, AMP makes it finally possible to do smart pre-rendering from AMP viewers. If all of this sounds restrictive, well, that's kind of the point.
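To make those rules concrete, here is an abbreviated sketch of what a valid AMP document looks like; the URLs are placeholders, and the mandatory AMP boilerplate style block is omitted for brevity:

```html
<!doctype html>
<html ⚡ lang="en">
<head>
  <meta charset="utf-8">
  <!-- The only script allowed in the main document: the AMP runtime. -->
  <script async src="https://cdn.ampproject.org/v0.js"></script>
  <link rel="canonical" href="https://example.com/article.html">
  <meta name="viewport" content="width=device-width,minimum-scale=1">
  <!-- Required AMP boilerplate <style> omitted for brevity. -->
</head>
<body>
  <!-- amp-img declares its dimensions up front, so layout happens once
       and content never shifts out from under your finger. -->
  <amp-img src="hero.jpg" width="600" height="400" layout="responsive"></amp-img>
  <p>Article content goes here.</p>
</body>
</html>
```

The explicit width and height on amp-img are what make AMP's single-pass, proportional layout possible.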
But the results speak for themselves. The AMP team reports that they're seeing under one second average load times from Google search results, and even at the 99th percentile, AMP page loads come in around eight seconds. Taking the same corpus of pages and looking at their other formats, you see 22 seconds at the median. That's pretty bad, and AMP does so much better. AMP's careful design also allows AMP content to be hosted on a CDN, and that's what's happening here. The Washington Post has published these articles, which I see badged here with that AMP lightning bolt on the lower left, and when I tap on them, the transition is roughly instant, because the documents were hosted on Google's AMP CDN. They're the same documents that the Washington Post put on their own server, but they can be pre-rendered. So that's what happens here: once I load Google News, at some point later, once the news has loaded, Google News goes and preloads the thing that I might tap on next. And it can do this intelligently. Now, this is only possible because AMP knows how to avoid doing too much work when it's being pre-rendered. If we tried this with arbitrary web content, it would make my phone really, really, really slow, but AMP's restrictions make it sane to do. And what it adds up to is a reliably fast experience. When I tap, I get that document. This is a real trace I took: when I tap, I get that document rendered and interactive in half a second on a real device. That's incredible. So thanks to clever preloading, content restrictions, and cooperation between AMP viewers and AMP content, we can finally get a reliable experience transitioning from browsing to reading. Let's see exactly what that looks like on a Nexus 5. Here we've got AMP, via the Google News site, helping us transition to that experience as fast as possible. It's great. And what this does is put publishers like The Washington Post and their content in the very best possible light. Their brand experience starts instantly. It's a great way to get introduced to users. So, changing gears just a bit, I want to talk a little about progressive web apps. If AMP is all about putting restrictions on content to improve first load, progressive web apps are all about improving experiences over the long haul. They're app-like. They blur the lines between web content and apps, but they keep the strengths of the web. They add reliability, home screen access, and push notifications to an already strong set of web capabilities. You keep the same easy discovery, but you become app-like through use. This reduces the cost of acquiring new users. They're more engaging, and they can be re-engaging over time. So why are they called progressive web apps? Because they become apps progressively. You just start using them in a tab, and then over time they wind up being first-class citizens. This is airhorner.com, the canonical demo. You can go try it out on your phone now. No, no, not now. Someone's going to do it, aren't they? If I go to airhorner.com repeatedly, eventually the browser will prompt me to ask if I want to install it. It's something I use, and therefore something I might like. Once it's installed, it can launch full screen in an immersive mode. It can even be its own top-level activity in the task switcher, and the splash screen that gets generated for the app is pretty nice. Of course, just like a regular native app, it works offline. It'll always work.
It doesn't require you to build a separate package, though. You didn't have to put anything in an app store. You didn't have to do a dance and pray for the update gods to let your new version through. It's just a website, but it's immersive, and it's exactly the same code that you would run in any browser. Obviously, this works in every browser, but it gets supercharged in browsers that support progressive web apps: browsers like Chrome, Opera, Firefox, and Samsung's S Browser. So they're progressive in two ways. Websites become progressively enhanced with new technology: you start from the same base of stuff that always worked and add new stuff in, and the content isn't broken on browsers that don't support the new stuff. They also become progressively app-y, which is to say, they become apps because you choose to upgrade them, in your experience, to be apps. All of these capabilities are available à la carte, including push notifications. I don't even have to have a site on my home screen in order to get re-engaged through push notifications, but I can do that too. For more on that, I recommend you check out Owen Campbell-Moore's talk on YouTube. It's also worth noting that these things install instantly. There was no sleight of hand there. When I tapped that button and said add to home screen, that was true: it was already available offline. There wasn't a process to wait through, because the same resources that get used in the full-screen experience are the ones you were already interacting with. By the time we decide to prompt you to keep it, the site will have already cached that stuff offline. You also don't have to worry about finding a login, digging it out of an Evernote document, or fishing it out of your Chrome password manager someplace; it uses the same cookie jar. What we're doing here is using the web's superpower, URLs, to bootstrap a deeper experience with users over time. You don't have to choose between the web and something engaging. The web can be something engaging. Now, to get there, we have a quality bar, and the first part of that quality bar is to make sure that the icons on your home screen are actually good icons. To do that, we introduced the web app manifest format. The web app manifest is a lightweight JSON file that you host on your site, and it contains the metadata we need to understand what the app-like behavior of your site is going to be. It needs a few properties to ensure a high-quality user experience. This is the Washington Post's manifest file. First, we need a short name, which is what you're going to see on the home screen, and a long name so we can give users more context. It'll have to have some icons, including one that's at least 144 by 144 pixels, so we can provide a high-quality icon on high-DPI devices. You provide a start URL, which is the URL that gets launched when you tap on the icon. You can tell us which mode to display it in, in this case standalone. And for the splash screen, you configure the background color using, obviously, the background_color property.
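A minimal manifest with the properties just described might look like the following; the values are illustrative rather than the Post's actual file, and it gets hooked up with `<link rel="manifest" href="/manifest.json">` on each page:

```json
{
  "short_name": "The Post",
  "name": "The Washington Post",
  "icons": [
    {
      "src": "/images/icon-144.png",
      "sizes": "144x144",
      "type": "image/png"
    }
  ],
  "start_url": "/pwa/",
  "display": "standalone",
  "background_color": "#ffffff"
}
```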
The other part of the quality bar is that we require a service worker for progressive web apps. Users expect apps that are on their home screen to be reliably fast. They have to start instantly, and service workers are a requirement as a result. To get the add-to-home-screen prompt, airhorner.com had to make sure that the URL listed in the manifest's start_url works offline. Service workers are supported today in Firefox, Chrome, Opera, and Samsung's browser, and Microsoft committed this week to implementing service workers in Edge. I'm very excited about that. So, service workers: what are they? They're a programmable local proxy in the browser. For a lot more on that, you should check out Jake Archibald's talk, which I think is already up on YouTube. Let's see how it works. Traditionally, you need to traverse the network every single time you want to put pixels on screen, and that can fail, particularly on mobile networks. So we're going to have to make that transition from the browser to the server at least once, right? But once we're there and we've got content back to the user, we can install a service worker asynchronously in the background. Once the service worker is downloaded and installed, it will intercept navigations, and from there it can hand back content directly from the cache. That is to say, you can boot the document without going to the network. You can do the same thing for content, and you can check for updated content on the fly. You could provide a loading screen or a spinner to say, hey, I'm getting you new stuff, while showing the user the old stuff. This is what the best native apps do, and we can do it now too. If we get new stuff back from the network, we can send it to the document and show an updated UI. This is pretty good. It allows us to be reliable in the face of transient and flaky networks: we got pixels on screen without ever having to go to the network. It's also worth noting that this behavior isn't something we baked in. It's not a one-off. It's something that you wrote. The developer of example.com, in this case, provided the service worker code, and they can choose what to do in each of these cases. It's super powerful. The lifecycle goes like this, because we don't want you to break the web by requiring service workers; again, progressive enhancement for the win. The first time you go to a page, it doesn't have one. That's OK. But once you do decide to install one and it downloads in the background, it'll get an install event, and that install event is a chance to cache assets offline. Once you've done that and told the browser, hey, I'm done installing, you'll eventually get an activate event just before the next navigation. If the service worker goes idle at some point, the browser will kill it and reclaim the resources it might be using, but the next time it's needed, say, to handle a fetch event, we restart it and deliver the event then. The way to think about service workers is that they're progressive enhancement for the network level.
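A minimal service worker covering that lifecycle might look like this sketch; the cache name and shell URLs are placeholders:

```js
// sw.js: a sketch of the install/activate/fetch lifecycle described above.
const CACHE = 'app-shell-v1';
const SHELL = ['/', '/app.js', '/styles.css'];

self.addEventListener('install', (event) => {
  // Install: cache the app shell before this version takes over.
  event.waitUntil(caches.open(CACHE).then((cache) => cache.addAll(SHELL)));
});

self.addEventListener('activate', (event) => {
  // Activate: drop caches left behind by older versions.
  event.waitUntil(caches.keys().then((keys) =>
      Promise.all(keys.filter((k) => k !== CACHE)
                      .map((k) => caches.delete(k)))));
});

self.addEventListener('fetch', (event) => {
  // Fetch: answer from the cache first, falling back to the network,
  // so the document can boot without a network round trip.
  event.respondWith(caches.match(event.request)
      .then((cached) => cached || fetch(event.request)));
});
```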
So we're now at the point where, between manifests and service workers, we can build applications that deserve to be on the home screen. They can be trustworthy. They can be reliable even on flaky networks: you can always load the app shell instantly and show stale data or UI state while you go to the network to get new stuff for the user. I think this is an underappreciated advantage that native apps have had over the web until now. We don't trust the web because it isn't reliable, and we wouldn't know which pages to trust even if it were. By pulling some sites out of the tab and onto the home screen, and ensuring that those applications meet this quality bar, we now have a way of communicating to users that these apps deserve to be there. And we can do it without mangling our URL structure or putting something in a store. Think of a site that you use every day. That site is probably going to be a painful experience in the commuting scenario, when you might be heading into a train station or going underground. But now think of some content that you want right now; say you want to check something on Wikipedia that you've never looked at before. It would be a real pain in the butt to have to get that from a store. Why should I sit through an application install to get it? So websites have been painful for continuous use, and native apps have been painful for up-front, one-off use. Why is that? I think it's because native apps have been able to amortize the cost of that up-front download over multiple interactions. There's a tipping point at which you use an app often enough that the lack of variance in the time it takes to load the app and get to the content you want tips the balance in its favor, and you don't feel the pain of that up-front install quite so much. With service workers and progressive web apps, we've brought that power to the web without having to give up the lightweight first use. The web is now the best way to transition occasional users into engaged users. It lowers your acquisition costs, and it gives you the same great reliability that native apps have always had. So progressive web apps are websites that have earned the right to be on the home screen by delivering trustworthy performance. Now, this stuff has been pretty new for a while, and debugging it has been a pain. Over the last couple of months, one of our teams has been building a new tool called Lighthouse. It's a verification engine that helps you check that your site meets the progressive web app installation criteria. It automates checks for whether you've got a service worker, whether your site works offline, whether it has icons of the right size, all that stuff. It also verifies a bunch of things that are fiddly to determine without a real browser running, and it does this by automating Chrome. It works on the command line, but what you see here is Lighthouse wrapped in a handy Chrome extension; these are the Shop results. You can get it on GitHub today, and even though it's in a bleeding-edge state where it only works in Dev and Canary versions of Chrome, it has saved me tons of time. Highly recommended if you're building a progressive web app. Speaking of which, the Washington Post built a progressive web app. You might have seen it yesterday. It's an incredible reading experience, and I'd like to introduce Chris Nguyen, a senior engineer at the Washington Post, who's been involved in bringing both AMP and progressive web app reading experiences to their users. Hi, I'm Chris Nguyen, and I've been working on making AMP a first-class citizen of the Washington Post publishing platform. Our newsroom really loves speed and performance. Literally, we have monitors all over the newsroom that display average page load times for our desktop pages, for our mobile site, and more recently for our AMP pages. So I want to talk a little bit about that. As a baseline, our responsive mobile site loads in about 3,500 milliseconds, and based on the data since we launched with AMP, our AMP pages are more than twice as fast.
And when our content is served from the AMP CDN, it's well under the one second espoused by the L in the RAIL performance model. It's impossible to ignore the performance benefits of AMP, but AMP does have some limits. Not having custom JavaScript means that AMP renders consistently fast and feels fast for users, but it also makes it difficult to build interactive content, and that's something publishers really like to do: visualizations, quizzes, and so on. So how do we bridge this gap? As we've seen from previous talks, PWAs don't have these restrictions. They're normal websites that are progressively enhanced with new features, and it's up to us as developers to design and build the experience. We have full control over what we want to do. In designing our PWA, we didn't want to simply recreate our existing mobile site. We took some inspiration from our native apps and some from our AMP pages, and we used this tech demo as a chance to rethink what's possible on the web today. Can we switch to the video? We open up a browser and navigate to wapo.com/pwa. As sections load, they start caching stories for offline reading, and when you tap on the sections you're interested in, those sections also become available offline. As a result of this caching, tapping on an article feels instant, because it's coming from the cache and not from the network. You can also swipe back and forth to move between articles in the same section, and that feels instant too. But what happens if we go offline? We go into airplane mode, and now you can see that sections which I haven't visited yet are marked as such, but the sections that are in the cache can still be viewed. Lastly, if we keep using this app, Chrome will eventually prompt us to keep it on the home screen. Let's go back to it. Ah, there we are. Now you can add it to the home screen, and we've got an icon, and launching it from the home screen brings me to an immersive version of the Washington Post reading experience. But we didn't have to ship a different app or upload it to an app store. In building this app, we wanted to make sure that the app shell was really light, because application assets are a major factor in load time, and we still want to design for browsers where we can't rely on an installed service worker. To do that, we made the decision to forgo libraries and write the app in bare-bones JavaScript, and the overall application shell is only 280K. That sounds kind of big, but it includes scripts, CSS, images, fonts, and the manifest resources: everything needed to load the app offline. On a 3G connection, on a first visit, the experience is responsive in three seconds, and we're already working on improvements to shave off more time. So how does it work? We automatically cache each section the first time it's visited, during the time the user would typically be scrolling through headlines. If you swipe to the next article, what happens is a cache hit, so the user doesn't see us going out to fetch another article or another piece of content; it just appears on the screen instantly. That removes mobile connection variance from the load-time equation. On top of that, if you're on the subway and you lose your connection for a few minutes, you can keep reading. You don't get the offline dinosaur.
One thing to note is that even when a service worker is intercepting content fetches, we want different caching behavior for different types of content. Naively caching all the articles might result in displaying stale content, and we don't want to waste the user's data by downloading something they're not interested in. So to give readers a great experience, we looked at all the network calls we have to make, such as our application assets, our section metadata, and article content, and we handle them differently through the service worker based on their usage characteristics. We took advantage of some great tools, namely sw-precache and sw-toolbox, which you can find on GitHub, to help us manage that caching strategy. Let's take a look at the configuration we used with sw-precache. Highly dynamic content, like the section listing, goes to the network first, so the user doesn't miss out on breaking news; the top stories change pretty fast. But in the case of a flaky connection, we fall back to the section list in the cache. Article content doesn't change as often as the list of articles in a section, so there we employ the "fastest" caching strategy: we look up the article in the cache and fetch it from the network in parallel, and return whichever comes back first. The network fetch also updates the cache, so the next time around you don't see stale content. With respect to images, those aren't really going to change, so we don't go to the network at all. If it's in the cache, why bother? And in the case of no connection, we try to fail gracefully by providing a visual cue that uncached content is not available. Through the magic of service workers, that's what makes the app still work. And now we have the same offline reliability as native apps in our web experience.
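In the spirit of what Chris describes, an sw-precache configuration that routes those three content types differently might look roughly like this; the URL patterns are illustrative, not the Post's actual ones:

```js
// sw-precache config: precache the shell, route runtime requests by type.
module.exports = {
  staticFileGlobs: ['app/**.{js,css,html,png,woff2}'], // the app shell
  runtimeCaching: [
    // Section listings change fast: go to the network first, and fall
    // back to the cached copy on a flaky connection.
    { urlPattern: /\/sections\//, handler: 'networkFirst' },
    // Articles: race the cache against the network ("fastest") and let
    // the network response refresh the cache for next time.
    { urlPattern: /\/articles\//, handler: 'fastest' },
    // Images rarely change: serve from the cache, touching the network
    // only when we don't have a copy yet.
    { urlPattern: /\/images\//, handler: 'cacheFirst' },
  ],
};
```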
This experience loads quickly, and as you can see from the waterfall, when the service worker installs, it begins to fetch article content in the background, so there's always something to read. Despite this preloading, though, the overall size of the whole experience, the app shell, the articles, and the images, is just over a megabyte. And the user didn't have to go through an app store to get this offline-ready experience. We also have a native app, and it features some of our best content and works reliably offline. But to get it, you have to go to the Play Store, accept some permissions, and then download it. That's about 40 megabytes, which is fine if you're on Wi-Fi, but a pretty hefty toll on cellular. And in contrast to the web experience, I can't use that app at all until it's fully downloaded. Wapo.com/pwa gives me content instantly, and it's also available offline, which seems pretty great. Something to keep in mind is that service workers are constrained by the web's same-origin model. Earlier, we saw how AMP enables the edge of your site to extend outward into other hosts, like the Twitter and Google News AMP viewers and the Google AMP CDN that powers many of these page views. When users see a Washington Post article in one of those contexts, they're absolutely experiencing Washington Post storytelling, even though it might not be served from washingtonpost.com. Once users are on washingtonpost.com, it's possible to install service workers to give them some really cool features, but what about users who have never visited the site before? Let's look at one user flow. Alice goes to news.google.com. She sees a Washington Post AMP article front and center, so she taps on it, and it loads instantly because of AMP's clever restrictions and the smart preloading they enable Google News to adopt with confidence. Next, Alice taps on a link to the Washington Post PWA from the sidebar. If she hasn't been to washingtonpost.com before, this might be unreliable or slow, or it might even get her the dreaded offline dinosaur. Assuming Alice gets to the Washington Post, the PWA can bootstrap its service worker, and subsequent visits will be reliable and speedy. But can we do better? Can we get rid of that offline dinosaur? As of a few weeks ago, it turns out the answer is yes. Working with our friends on the AMP team, we identified a way for the existing amp-install-serviceworker element to register a bootstrap file across origins. Here at the Washington Post, we're now including the snippet in all of our AMP documents. The AMP element checks the origin it's currently running on. If it finds that it's running on the same origin as the URL in its src attribute, it directly calls the service worker registration method on the URL provided, which means that if you visit an AMP document on washingtonpost.com, the PWA gets bootstrapped directly. But what if the AMP document is on the CDN? In that case, the element looks for the data-iframe-src attribute. If it decides there's a reasonable thing to load, mostly by checking that the AMP document's source origin matches the URL in the attribute, it creates an iframe for that URL and loads it. Here's the bootstrap file we're using on washingtonpost.com right now, and as you can see, it's not complicated. All it does is load the service worker registration file. So let's take a look at that. If the browser supports service workers, we call the browser's registration method to ensure that serviceworker.js starts downloading and installing asynchronously in the background. From here, we can handle a few special cases if our app needs to. For instance, if there's already a service worker registered, we can listen to see whether a new version was installed or isn't being used yet, and we can also log errors back to the server in case the installation fails.
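Pieced together, the snippet and registration logic just described look roughly like the following. The element usage follows AMP's documented amp-install-serviceworker component; the file names and URLs are placeholders rather than the Post's actual ones:

```html
<!-- In each AMP document: load the component, then declare the element. -->
<script async custom-element="amp-install-serviceworker"
    src="https://cdn.ampproject.org/v0/amp-install-serviceworker-0.1.js"></script>

<amp-install-serviceworker
    src="https://www.washingtonpost.com/serviceworker.js"
    data-iframe-src="https://www.washingtonpost.com/sw-bootstrap.html"
    layout="nodisplay">
</amp-install-serviceworker>
```

And a registration file along the lines Chris walks through:

```js
// Loaded by the bootstrap page: register the service worker if supported.
if ('serviceWorker' in navigator) {
  navigator.serviceWorker.register('/serviceworker.js')
      .then((registration) => {
        // Optionally watch for a new version being installed.
        registration.onupdatefound = () => {
          console.log('A new service worker version is installing.');
        };
      })
      .catch((err) => {
        // Optionally report installation failures back to the server.
        console.error('Service worker registration failed:', err);
      });
}
```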
So let's look at our flow again. Alice goes to news.google.com. She sees the Washington Post AMP article, and tapping on it loads it instantly. But this time, something different happens: the AMP document starts to install the bootstrap file from washingtonpost.com. So now, when Alice visits washingtonpost.com/pwa, instead of a slow network experience or an offline dinosaur, she's guaranteed a reliable, instant load. So it doesn't take a lot of data to install, and it runs quickly when we're in the browser. But how about when we start from the home screen? Are progressive web apps really able to feel as fast and fluid as a native app? We took a half-speed recording of both apps starting up and stitched them together to find out. This is in slow motion, but you can see that once it's installed on the home screen, the PWA loads as fast as the native app. I can start using either of these experiences at roughly the same time. I think this is a step in the right direction for the mobile web. For more alternative design patterns and future directions, let's go back to Alex. Thanks, Chris. I really love the progressive web app that the Washington Post launched. It has replaced my native feed reader, which, weirdly, didn't support offline. I'm using it every day. Chris showed us that this is a great way to get end-to-end reliability for web experiences, starting at the edges with AMP and getting to deep engagement with progressive web apps. As a user, I can move between those experiences with confidence: AMP gets me a fast first load, and progressive web apps keep it snappy ever after. This is pretty great, but we can do a lot more. The approach Chris outlined isn't the only way to build experiences that combine AMP and progressive web apps. We've prototyped progressive web apps that act like AMP viewers, much the way the Google News viewer does. Thanks again to the amp-install-serviceworker element, it's possible for a user to experience AMP documents directly the first time they land on a site, taking advantage of that fast first load. And once the service worker installs, subsequent navigations can serve up an AMP viewer from the local cache, exactly the way the Washington Post progressive web app works today. You can get content from the same URLs, but it can become much more immersive. It can get snappier. It can load custom behavior. I think this is the future of AMP and progressive web apps. The web has always excelled at providing content on demand, and now we can extend that reach even further. Amortized across all the interactions users have with your apps, the web is now the single best way to deliver those experiences with low friction. In addition to the tools we've talked about today, and the talks we've recommended you check out on YouTube, the code labs on developers.google.com/web are a fantastic way to get started building progressive web apps. If you've got questions about these approaches or want to chat, Chris and I are both on Twitter, and you can try out the Washington Post progressive web app at wapo.com/pwa. Thanks for coming, and we can't wait to see how much faster you make the web with AMP and progressive web apps.