Hello, everyone. Welcome to Building Instant Loading Offline First Progressive Web Apps. It turns out if you put all the buzzwords in your title, they let you have the big stage, which is great. I'm Jake Archibald. I'm one of the designers of service workers and one of the editors of the spec as well. What I actually want to talk to you about is spam phone calls, which I get a lot of. You know the sort. It's kind of like, have you been in an accident in the last 200 years? Would you like car insurance for all of your pets, et cetera, et cetera. I actually get enough of these phone calls that I've started inventing games to play. In my favorite game, I become the most indecisive person that has ever existed. It goes like this. "Hello, Mr. Archibald. Are you interested in saving money on your mobile phone bill?" And I reply, "Well, I suppose, if..." At this point, the caller usually tries to hurry things along, but it's very important that you do not let them. "Well, we've got a great deal on..." "Wait, wait, wait, wait. Do you? Is it one of the..." And you get a point for every second that they aren't talking. It's a really difficult game, because the caller gets frustrated pretty quickly. You're directly blocking a conversation, which is a very synchronous transaction where each person expects an instant response, and you're breaking that model. We expect a similar model when it comes to getting data from a computer, but that wasn't always the case. Like 25 years ago, our expectations were pretty low. If you wanted to know the directions somewhere, first you would have to go to the room with the computer in it and turn it on, and the fans would start whirring. You'd get that static crackle as the CRT monitor whirred into action. Windows 3.1 would start booting up. And then eventually you would get your desktop, and then it did this, because this was an era where booting up successfully was a fanfare-worthy moment.
But even after that, you had to find and insert the map CD and print out directions, and then off you went. These days, we don't need to boot up the one computer we own, because we have a computer in our pocket that's already booted up. We can ask it for directions. But that data often comes from the internet, and if you have zero connectivity and you ask the internet for something, the web's answer is often no. I remember when I first realized how problematic this is. A few years ago, I was working at a web agency, and I found myself needing to go to the toilet following a lunch that my stomach was unhappy about. There were five cubicles to choose from, but in this instance, the first four were occupied. That's usually OK; even in this situation, I felt that one cubicle would be enough for me. But from previous experience, I knew that mobile connectivity and the office Wi-Fi only extended to the first four cubicles. And I thought for a moment and decided, no, this is not acceptable. I returned to my desk, and I waited until later, despite being in some discomfort. That's the day I discovered that, as a human being, I required an internet connection in order to take a dump. So this is a problem worth solving. And until recently, there was nothing you could do about it, especially during the initial page load, but it all changes with the service worker. I was told that this slide wasn't impactful enough. I don't know, it's one of those management buzzwords I don't really understand. But I did give it another go and came up with this. Apparently, this has branding issues. It's got loads of brands. I don't understand. Bruce Lawson, the deputy CTO of Opera, gave it a go as well. He came up with this, but it's kind of freaky, and if you stare at it, it almost looks like the colors are changing. The colors are changing; it's got a filter on it. But Ben Jaffe, who at the time worked at Udacity, had a go and came up with this.
"For too long, users have been left staring at a white screen. For too long, they've been let down by the cruel seas of network connectivity. And for too long, we've been powerless to help. We've been left waiting. But no longer. A new browser feature has arrived. A total game changer. A feature that lets you control the network, rather than letting the network control you. What does this new feature promise? What does it bring? Introducing the Service Worker." It's a bit much, isn't it, really? I prefer mine. It's got a TIE fighter with a cat's head in it. But what does this all mean? What can Service Worker actually do? Well, we're going to take a look at Emojoy, which is a little progressive web app. You can find it at this URL. It's basically a simple version of Hangouts, so I guess it's already out of date; it should be Allo. Anyway, it's like Hangouts, but it only lets you enter emoji. It started life as a mere website, but over time it was built up to become a fully progressive web app. And it didn't require a full rewrite to do this; it was something that happened incrementally, bit by bit. This here is v1. It runs at 60 frames a second. It's really simple, so it's only like 25K all in, so really fast to load. Unless, of course, you're offline. I mean, it's fast, but the user experience is lacking somewhat. At least, this is what it was like when I first launched it. Here's how I fixed it. To begin with, I registered a Service Worker. Now, this isn't some magic manifest or a config file. It's just JavaScript, because why should we reinvent some new thing when we have a world full of JavaScript developers and loads of tooling already out there? Oh, but of course, we should wrap our register call in a basic feature detect, because there are still browsers out there that don't support Service Worker, and a simple feature detect prevents them from hurting themselves and others around them.
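As a rough sketch, that registration step looks something like this. The script path here is illustrative, not necessarily Emojoy's actual file name:

```javascript
// Feature-detect first, so browsers without service worker support
// simply skip this block instead of throwing.
if (typeof navigator !== 'undefined' && 'serviceWorker' in navigator) {
  // '/sw.js' is a placeholder path; the registration scope defaults
  // to the directory the script lives in.
  navigator.serviceWorker.register('/sw.js')
    .then(reg => console.log('Service worker registered, scope:', reg.scope))
    .catch(err => console.log('Service worker registration failed:', err));
}
```

Note that register returns a promise, so failures (such as a network error fetching the script) surface in the catch.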
But anyway, in that script, I'm just going to put a simple log for now. So now, if I load the page and open Chrome's DevTools console, there it is. Cool. Also, if we have a look in this new Application tab, there's the Service Worker section, and there's our Service Worker in there. As you can see, it was last modified in 1970, meaning this Service Worker predates the internet, which is pretty cool. That's a bug. We're going to fix that in a couple of days. So how has this actually changed things? To find out, we're going to pit the original online-only site against our shiny new Service Worker version. To do that, let's go to the Comparinator. Fight! First up, the online experience. So, okay, pretty much the same. Both load reasonably quickly. I recorded these with a throttled internet connection, or as most people in the world call it, their internet connection. Also, I did close the browser before clicking the shortcut, so the load time you're seeing there includes the browser opening. What about the offline experience? Okay, so both are failing. Now, you've probably noticed not much has changed. Well, nothing's changed. Okay, one thing happened: it logged "hello", because that's all we told it to do. And that's how Service Worker works. It only does what you tell it. And that's great, because I'm sick of these magic APIs. Compare it to AppCache. If we gave Emojoy an AppCache manifest, and that manifest just contained the words CACHE MANIFEST, which is required to make it valid, even online that would turn the render of Emojoy from this to this. And I can't help feeling that's not what I told it to do. AppCache is a bit of a disaster. It had a very simple format but a massive, complicated rulebook, and if you didn't like any of those rules, tough. You were restricted to the way that the designers of AppCache wanted you to work, and those designers had not created many offline web experiences.
So this whole thing kind of gave rise to the extensible web manifesto, which most browser vendors are now fully behind. Here, we acknowledge that browser developers and standards developers are not better at building websites than web developers, and we should stop tossing out scraps from our ivory towers, like AppCache, or like CSS features that only do, say, a reflection in one particular way. Instead, we should give developers full control: give you as much information as we can, as many hooks as we can. By providing you with this kind of low-level access, you can create things we didn't consider. You can use patterns we didn't invent. And those become evidence for us for higher-level features, so we can make the common stuff easier or faster. ServiceWorker was built to this model, so the things I show you today are just the kind of patterns that I use, but you'll find your own patterns, and they'll probably be loads better than mine. The ServiceWorker is driven by events, and one of these is fetch. So we've got a listener for that there, just with a debugger statement. Before I run that, I'm gonna check this "update on reload" checkbox. I'll cover why in a little bit. But now, if I refresh the page, I'm gonna hit that breakpoint. The event object there has a request property, and this represents the request for the page itself. You can get the URL, the headers, the type of request. But I also get one of these for every request that the page makes: the CSS, the JavaScript, fonts, images. I get the event for these avatar images, even though they're on another origin, so you get all of the requests. By default, requests go from the page to the network, and there's not a lot you can do about it, really. But once you introduce a ServiceWorker, it controls pages, and requests go through it. And like other events, you can prevent the default and do your own thing.
So instead of triggering the debugger, I'm going to call event.respondWith, and this is me telling the ServiceWorker, hey, I'm actually going to take control of this fetch, and I'm gonna respond with a response that says hello. So let's give that a spin. I'm gonna refresh the page, and there it is. Instead of going to the network, the ServiceWorker just took care of it. So this example works offline. I mean, it's rubbish, but it does work offline. You don't have to respond to every URL the same, either. You can parse the URLs, so you can pick out the component parts. If the path name ends in .jpeg, you could respond with a network fetch for a cat JPEG. event.respondWith takes a response object or a promise that resolves with a response. fetch returns a promise for a response from the network, so these compose together really, really well. So now, if we refresh the page, it's back, but all the avatars are cats. Instead of doing something special based on the request, we can do something based on the response as well. So here, I'm gonna respond with a fetch for event.request. This is telling the browser to just do the thing it would have done anyway, but because it's a JavaScript API, we can actually get hold of the response before we send it on to the browser, and we can take a look at it. So if the status is, like, 404, we could respond with something else, like some SVG or whatever. Otherwise, we'll return the response we got from the network. So now, if we refresh the page, the avatars are back, but if I navigate to some sort of nonsense URL, we get the 404 message. So ServiceWorker lets you intercept requests and provide a different response, and you can do that based on the request, the response, or the response of a completely different request. You can do what you want. But this stuff is just playing around, really. You probably wouldn't use a ServiceWorker for a 404 page; you'd let your server do that.
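Pieced together, those interception patterns look roughly like this. The image paths are made up for illustration, and this runs in the service worker context, not the page:

```javascript
self.addEventListener('fetch', event => {
  const url = new URL(event.request.url);

  // Pattern 1: respond to any .jpeg request with a fetch for a cat picture.
  if (url.pathname.endsWith('.jpeg')) {
    event.respondWith(fetch('/imgs/cat.jpg'));
    return;
  }

  // Pattern 2: do the fetch the browser would have done anyway,
  // but inspect the response before handing it back.
  event.respondWith(
    fetch(event.request).then(response => {
      if (response.status === 404) {
        // Serve something friendlier than the server's 404.
        return fetch('/imgs/404-cat.svg');
      }
      return response;
    })
  );
});
```

Because respondWith accepts a promise for a response, the fetch calls slot straight in without any extra plumbing.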
So let's do something a bit more practical. A good way to dip your foot into the ServiceWorker pool is to make an offline fallback page: something to show the user if the page fails to load, because the current state of things is pretty bad. The user comes to our site wanting content, and this is our moment to shine, but without a connection, we crap ourselves to the extent that Mama Browser has to step in and defend us, and it does this by blaming the user. "Chrome can't display this page because your computer is not connected to the internet." If we're going to be competing with native, this is like an operating system error. We can do better than this. So I created a custom error page. I mean, it's still an error, but at least we're owning it this time, and it's something we can build on later. I want to show this when there's no connection, so it has to work offline, and I need somewhere to save it, and I need to do that upfront, when the user does have a connection. The ServiceWorker has an event for this: install. And I pass a promise to event.waitUntil to let it know how long the install is taking and whether it worked or not. The install event is fired when the browser runs the ServiceWorker for the first time. It's your opportunity to get everything you need from the network: CSS, JavaScript, HTML, and stuff. For storing these requests and responses, there's a new storage API, the cache API. This specializes in request and response storage, but unlike the regular browser cache, stuff isn't removed at the browser's whim. So we put all of our stuff in there, and once that's done, the ServiceWorker can start controlling pages. So let's do that. I'm gonna open a cache, and you can call it whatever you want. I'm gonna call it static-v1, and then we add the offline page and the CSS it needs.
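As a sketch, that install step might look like this. The URLs are placeholders for the versioned offline page and its CSS:

```javascript
self.addEventListener('install', event => {
  // waitUntil tells the browser how long the install is taking,
  // and whether it succeeded or failed.
  event.waitUntil(
    caches.open('static-v1').then(cache => cache.addAll([
      '/offline-v1.html',  // placeholder: the offline fallback page
      '/styles-v1.css'     // placeholder: the CSS it needs
    ]))
  );
});
```

cache.addAll fetches each URL and stores the request/response pairs; if any one of them fails, the whole promise rejects.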
If the cache fails to open, or runs out of space, or any of those fetches fail or return a 404, the promise rejects, and that signals to the browser that the install failed. If that happens, this ServiceWorker will be discarded. It'll never control pages. Note that both the offline page and the CSS have a version number in their URL. This means we can give them good HTTP caching headers, and we just change the URL when we change the content. You can actually work around bad caching headers with ServiceWorker, but it's much better to work with good caching, so that's what I'm doing here. But now we need to use this cache. So over in the fetch event, I'm gonna respond with a match in the cache, one that matches this request. Matching is done similarly to HTTP, so it matches on URL, method, and vary headers, but it ignores the freshness headers. match returns a promise for a response. So if we request the offline page directly, or its CSS, it's gonna come straight from the cache, and that's great. But if there's no match found, it resolves with undefined, so we need to deal with that. So if the response is falsy, which undefined is, we're gonna fetch the request from the network. If fetch fails, which it's going to do offline, the promise rejects, so we'll catch that, and if the request was a navigation, we'll return the offline page. We only want to return this offline page for navigations, because it doesn't make sense to return this HTML page in response to a request for some JavaScript or an image or something like that. But that's it. We can now give the page a refresh. Doesn't look like anything's changed. Great. But over in the Application tab, there's this Cache Storage section, and we can see static-v1; there it is. So if we simulate offline, which we can do in the ServiceWorker panel (there's a little offline toggle there; you can also do it in the Network panel), and refresh the page, there we go. There's the offline page. We can ship this. This is shippable.
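The fetch handler just described, sketched out. The offline page URL is a placeholder matching the install step above:

```javascript
self.addEventListener('fetch', event => {
  event.respondWith(
    caches.match(event.request).then(response => {
      // Cache hit: serve it. Otherwise, go to the network.
      return response || fetch(event.request).catch(() => {
        // The network failed. Only fall back to the offline page
        // for navigations, not for scripts, images, etc.
        if (event.request.mode === 'navigate') {
          return caches.match('/offline-v1.html');
        }
      });
    })
  );
});
```

The || works because caches.match resolves with undefined (falsy) on a miss rather than rejecting.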
And that's what the Guardian did. On their developer blog, if you don't have any connectivity, it serves you this custom sorry page. But as a rather nice touch, it gives you this crossword that you can do in the meantime, which I think is a really nice Easter egg. I'm no good at crosswords. Actually, I didn't even look when I recorded this screencast, but clue one across is a Californian city, three letters then nine letters. I probably should have managed to get that one. Of course, we may want to make changes to this in future: add things like a refresh button, or some JavaScript which keeps checking for a connection. And to do that, we need to work with ServiceWorker's update system, which we've been avoiding so far thanks to this checkbox. So let's take those training wheels off. Say we wanted to change this text here to be "no connectivity" rather than "no connection", just a simple change. Well, we change the HTML, of course, but we need to update the ServiceWorker too. The URLs are generated from the file's content, so we'll need to update that. We'll make the same change in our fetch event, so we're returning the correct page from the cache. I'm also going to change the version number of this cache here from v1 to v2, and I'll show you why in a moment. But let's give that a spin. I'm going to reload the page to pick up those changes, and once again, I'm going to change the network state to offline and reload the page again. But you can see here that the text hasn't changed. It still says "connection", not "connectivity". Here's what happened. We reloaded the page, which triggered the browser to go and check the ServiceWorker for updates. It fetched the ServiceWorker and went, huh, this one is different to the one I have. And it spins that up as version two, running alongside the old version. The old one remains because the new one isn't ready yet.
And this is because the new one has to go through its install event, so it gets everything it needs from the network, including the new offline page that we've changed, and then it puts them in the cache. This is why we gave the cache a different name, so it wouldn't overwrite the stuff that version one was still using. Now, by default, the new version waits. It doesn't take over while the old version is still in use. That's because having multiple tabs open to the same site running different versions is a source of some really nasty bugs that a lot of us as web developers very rarely cater for. We can actually see this happening in DevTools. So here, we've got the activated service worker, but below it, there's one waiting to activate, with a different version number. We can also see in the Cache Storage section that static-v2 is there as well. This new service worker will stay there until the old one is no longer in use. So when this page navigates away or closes or whatever, there's nothing left to control. The old version isn't needed anymore. It becomes, well, redundant, and it goes away. But that means the new version can move in and start controlling pages. We can make that happen by navigating away. I'm still in offline mode; I just navigate to about:blank, then I click Back, and there we go, the text has changed. We still have that old cache hanging around, though, but we can deal with that. Once the old service worker is gone, the new service worker gets an activate event, and we can use that to perform cleanup, because we know the old version is gone now. You can migrate databases, delete caches. I tend to have an array of all the caches that I expect to be there, and then I use the activate event to go through all of the caches and delete the ones that I don't expect to be there. It's a slightly ugly piece of code. It's kind of a bit of boilerplate right now.
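That cleanup boilerplate looks something like this, with an illustrative cache name:

```javascript
// Every cache this version of the service worker expects to exist.
const expectedCaches = ['static-v2'];

self.addEventListener('activate', event => {
  event.waitUntil(
    caches.keys().then(keys => Promise.all(
      // Delete every cache that isn't in the expected list,
      // i.e. caches left behind by older versions.
      keys.filter(key => !expectedCaches.includes(key))
          .map(key => caches.delete(key))
    ))
  );
});
```

Wrapping it in waitUntil means the browser won't treat activation as complete (or send events like fetch in some browsers) until the cleanup finishes.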
I think it's one of those things we'll develop a higher-level API for pretty soon. This behavior of one service worker waiting behind another is the default, but it doesn't have to be that way. Your new service worker can call skipWaiting(), which means it doesn't want to wait behind the older version. When you do this, it just kicks the old version out and takes over straight away. But when you do this, you need to be aware that you're now controlling a page that was loaded with some older version of your service worker, not necessarily the last one, some older version. You can track this from your page as well, so when you detect this happening, you can show a message to the user, like "refresh for the new version", or maybe you could just trigger the page to refresh automatically, if that's gonna be an okay user experience. But for most of development, I really recommend this "update on reload" thing, this checkbox in the service worker panel. It changes the update flow to speed things up. With update on reload, you hit refresh, the browser fetches the service worker from the network, and it treats it as a new version even if it hasn't actually changed. So it goes through an install, picks all the latest stuff up from your server, and puts it in a cache. And once that's done, it kicks out the old version, moves in, and then the page refreshes. That's kind of a lot, but the product of this is you just hit refresh and you get the latest service worker and your latest assets on every load. Okay, that was a lot to take in, but how are we doing? Well, to find out, we must return to the Comparinator. Fight! First up, online. Okay, so we haven't really changed anything. Content is still arriving in a reasonable amount of time. How about the offline experience? We're now taking responsibility for network failures. We're catching that error.
This is something we can ship, but we're not quite a progressive web app yet, because we need to tell the browser that we're ready to offer a native-like experience. So in the head of your page, you can declare a theme color, which Chrome uses to style the location bar. That's a quick win; it integrates with the operating system a lot better. We want users to be able to add this to their home screen. There are a lot of tags you used to need to cover all the browsers. You'll need to specify an icon, an icon, probably another icon. Did I mention the icon? But most of this metacrap isn't needed until the user opts into adding the site to their home screen, and meanwhile, it's being sent down with every page, increasing the time to first render. So we got rid of all this and replaced it with a single reference to a manifest, and that's only downloaded when it's needed. It no longer clogs up the load of every page. Furthermore, it was a great time to standardize all that metacrap. The manifest looks like this. And once you have one, and a service worker, Chrome will start asking the user if they want to add the site to their home screen, if the browser thinks they're engaged enough with the site. This icon here comes from the icons field. You can specify many icons of different sizes, and Chrome will pick the nearest one to the size it wants to use. Here, I'm just serving one 512 by 512 icon and letting Chrome do the scaling. You should serve smaller icons if your large icon is a big download, but in this case the big icon is 5K, so that wasn't a worry. The name comes from the name property. You see how this works. If the user taps Add to Home Screen, they get an icon on their home screen. We've already seen where the icon comes from, but this name here comes from the short name. If your short name and your name are the same, you can omit the short name. It doesn't need to be there.
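A minimal web app manifest along these lines might look like the following. The field names are the standard ones; the values here (colors, icon path) are illustrative, not Emojoy's exact ones:

```json
{
  "name": "Emojoy",
  "short_name": "Emojoy",
  "start_url": "/",
  "display": "standalone",
  "background_color": "#673ab7",
  "theme_color": "#673ab7",
  "icons": [{
    "src": "/icon-512.png",
    "sizes": "512x512",
    "type": "image/png"
  }]
}
```

It's linked from the page with a single `<link rel="manifest" href="/manifest.json">`, which is what replaces all those per-browser meta tags.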
Later, when the user launches your app, they get a splash screen, and that's displayed while the browser's spinning up and the page hasn't rendered yet. This icon comes from the icon set; it will go for a bigger icon for the splash screen. The name comes from the name. The background color comes from background color, and the color of the status bar here, that's from the theme color. Then, once the page is ready, the splash screen goes away, and the page that's loaded is the one listed in the start URL, so it doesn't have to be the same page the user was on when they added it to the home screen. You'll also notice that the URL bar isn't there, and that's because the web app is displayed standalone. All this adds together to make the whole experience feel like a native app, and that's why it becomes so important that we, at the very least, own our connection errors, because we don't want the default browser error breaking this native feel. Once again, this is something we can ship. It's an incremental improvement. When developing offline-capable sites, a common next step is to start iterating on this: you know, make it show cached messages, make it a fuller offline experience. But that's an online-first approach, and while that works fine when the user is truly offline, zero connectivity is not the worst thing we face. This is. I call it lie-fi. This is when your phone says it has connectivity, but it doesn't. If you have lie-fi and you ask for content, the web says, um, well... and that's it. This is worse than offline. With offline, you get a quick answer. It's no, but it's an answer. But here, you're just left waiting, and I'm sure you've had this before yourself. You don't want to give up. You keep thinking, well, maybe if I just wait a few seconds, a few more seconds, the page will arrive. But does it? No. You're forcing the user to stare at this or give up, and with every passing second, they hate the experience a little bit more.
Our current pattern, the online-first pattern, works great when the user has a good connection: they get the latest messages pretty fast. It's great when the user's offline, because they get some, you know, cached data or a failure page. But with lie-fi, that's it. Chrome removes the splash screen when the page gets its first render, and with lie-fi, that never happens. So we've improved things for offline users, but lie-fi users are in the same hell as they were before. This is the problem with online first: we're giving users with some connectivity a worse experience than those with no connectivity. And this isn't always just down to poor connectivity on the user's device. I don't know if you've had this before, but I get it all the time: my phone's reporting full signal, but I cannot get a byte down it at all. A lot happens to get data from the web. The phone sends a request off to the Wi-Fi router or the cell tower, then on to the ISP, through intermediate proxies, potentially across to the other side of the world. And eventually the request reaches the destination server, but that's only half the journey, because the server responds, and the response has to go all the way back across the world, through proxies, through ISPs, over the air, and land safe and sound, hopefully, on your phone. But if something along the way goes wrong or runs slowly, the whole thing runs slowly, and therein lies lie-fi. You don't know how good the network connection is until you give it a go, until you try it. And that takes time. There are a couple of APIs on the web that attempt to predict the network, such as... that's interesting. There's supposed to be a slide there. This isn't part of the act, by the way. This is just some fun Wi-Fi. No, this entire presentation works offline. Maybe I'll unplug it and plug it back in again. Do we have a dodgy connection here? Don't present from... no, I do want to present from my own laptop. How are we for the...
Oh, my God, it's starting to work. This is amazing. My colleagues always tell me I'm stupid for writing my own slide framework. There we go. Let's see if this continues to work. Oh, my God, it's working again. Yes! Okay, here we go. Where was I? Okay, so there are a couple of APIs on the web that attempt to predict the network, and these are things like navigator.onLine and navigator.connection.type. But these are weak signals. Those APIs only know about that first bit; they don't know about any of the rest. For instance, when navigator.onLine is false, you have no connection. That much is certain. When navigator.onLine is true, you have... not no connection. navigator.onLine is true when you're connected to a cell tower or a router, though that router may only be plugged into some soil; navigator.onLine will still be true. Anything after that first hop cannot be predicted. You have to make a connection and see, and that takes time. If the user wanted to come to our chat app and look at past messages, why should they need a connection for that? Why should the user have to wait for a connection to fail just to see stuff that's already on their phone? The great thing about local data is you don't need an internet connection for it. This is why the gold standard is offline first. Offline first solves these problems. With offline first, we get stuff from the cache first, and then we try to get content from the network. And the more you get to render without a connection, the better. You should think of the network as a piece of progressive enhancement, an enhancement that might not be there. So we need to rethink our approach a bit here. I'm gonna create an application shell, and that's just the site without messages. We'll leave it to the JavaScript to populate it. So we're gonna change the install event so it caches the app shell, the CSS, and the JavaScript.
Meanwhile, over in our fetch event, we're gonna start by parsing the URL so we can read its component parts. Then, if the request is to the same origin as the service worker and the path name is just a slash, so it's the root page, we're gonna respond with the app shell from the cache. Done. Otherwise, we'll try to respond with cached content and fall back to the network. So altogether, we're gonna fetch the HTML, CSS, and JavaScript from the cache, and that gets us a first render. Then the page's JavaScript is gonna go off to the network and get the messages for us, which gets our content render going. If that fails, the JavaScript can show some kind of connection error message as well. So by doing all that, what do we win? It is time to return to the Comparinator. Fight! I enjoyed that jingle the first time, but it's feeling like diminishing returns now. Anyway, how are we doing? The online experience: look at that, we've massively improved the render time by getting to first render without the network, and the messages are still coming from the same throttled network connection, but they get on screen faster because the download starts much earlier. What about offline? Great, the app shell loaded, and the page's JavaScript showed the "no connection" error. But what about lie-fi? Well, we've defeated the blank screen at least. I mean, our JavaScript could do better here; it could show a spinner or something. But things are looking loads better. You can see the benefits of offline first versus online first: rather than improving things for one connection type, we've improved things across the board. We're back in control of the user experience, and it's taken us very little code to get there. We can ship this. What about caching chat messages, so we can display those before connectivity too? Aside from the initial page load, messages arrive one by one, and this continual feed of data doesn't really map well to the cache API, which is request and response based.
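The routing decision at the heart of this fetch handler can be isolated as a pure function. The function name is my own, and the commented-out service worker wiring assumes a hypothetical cached /shell.html:

```javascript
// Pure helper: should this request be answered with the cached app shell?
// True only for the root page on the service worker's own origin.
function isAppShellRequest(requestUrl, swOrigin) {
  const url = new URL(requestUrl);
  return url.origin === swOrigin && url.pathname === '/';
}

// Inside the service worker it would be wired up roughly like this
// (browser-only, so left as a comment here):
//
// self.addEventListener('fetch', event => {
//   if (isAppShellRequest(event.request.url, location.origin)) {
//     event.respondWith(caches.match('/shell.html'));
//     return;
//   }
//   event.respondWith(
//     caches.match(event.request).then(r => r || fetch(event.request))
//   );
// });
```

Keeping the routing logic pure makes it easy to unit-test outside the browser.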
Instead, we want a store that we can add and remove messages from, and the web platform has such a thing. It is called IndexedDB. IndexedDB has a bit of a bad reputation among developers, I think it's fair to say, but that's only because it's the worst API ever designed in the history of computer science. Other than that, it's pretty good. But seriously, 60% of the awful comes from this weird event system it uses, because it predates promises. If it was invented today, it would use promises, and there is an effort underway to patch it up as best we can without breaking compatibility. I much prefer teaching the web platform rather than libraries or frameworks, but I am going to make an exception here. idb is a little library that I threw together that mirrors the IndexedDB API, but it uses promises where IndexedDB should have used promises. Other than that, you're still using IndexedDB, all the same method names and everything, just with 60% of the awful eliminated. It's 1.2K, so it's really small. There are bigger, higher-level libraries out there which you may want to consider, things like Dexie and PouchDB. This library only eliminates the very worst of IndexedDB. But let's use it. Let's create a database for our messages. The messages look like this; they're in JSON format. So here's how we build a database for it. I'm going to start by opening the database, giving it a name, emojoy, and a version number, 1. Then we get a callback to define the schema of the database. We need somewhere to actually store the messages. Relational databases call these tables; IndexedDB calls them object stores. I create one called messages, and I'm going to tell it the primary key is id. We're often going to look at messages in date order, so I'm going to create an index for that, called by-date. And that's it, not too painful. Now we can take this database promise and add messages to it as they arrive. So say we had a function like this that gets called every time a new message arrives and is added to the page.
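Using the idb wrapper, that database setup would look roughly like this. This sketch follows the older idb.open-style API from around the time of this talk; current versions of the library have a different entry point:

```javascript
// Open (or create) the database. The upgrade callback runs whenever
// the version number is new, and is where the schema is defined.
const dbPromise = idb.open('emojoy', 1, upgradeDB => {
  // Object stores are IndexedDB's equivalent of tables;
  // 'id' is the primary key.
  const store = upgradeDB.createObjectStore('messages', { keyPath: 'id' });
  // An index so messages can be read back in date order.
  store.createIndex('by-date', 'date');
});
```

dbPromise resolves with the open database, so every later operation just chains off it.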
We get our database from the promise, create a read-write transaction on the message store, and then add it. I don't expect you to remember all this code, I'm just trying to convince you that IndexedDB can be a little bit less scary when you involve promises. Getting messages is not too bad either: transaction, go to the object store, get the index where everything's date-ordered, get them all. Done. Of course we can't just keep adding messages to the database, we need to perform some cleanup at some point. Say we wanted to delete everything but the newest 30 messages. I'm gonna create a transaction, get the date index, and I'm gonna open a cursor so I can go through them one by one. The prev here means we're gonna go through the index backwards, starting with the newest message. I'm gonna advance past the first 30, we wanna keep those, and then I'm gonna loop through the rest, calling delete on each one. Okay, so this code example isn't quite as pretty as the others. Like I said, the library only adds promises in, you're still exposed to the rest of IndexedDB's ugliness. But this is loads cleaner than it is with just straightforward IndexedDB. Having a database full of messages means we can fetch the app shell, the JavaScript, the CSS from the cache. That gets us a first render, but then we can render with messages from the database as well. We get content on the screen without going to the network. And then we go to the network for newer messages and avatars, and we can update the page. So if this network request fails, that's not a big deal, we're still displaying content, that's pretty good. That's a great offline experience. If the network request is slow, yes, that's okay as well. If the user's just coming back to check past messages, that's fine. To see the benefits of this, we must once again gaze upon the comparinator. Fight. Okay, the online experience. Check out that performance difference. You know, that's huge. What about the offline experience?
We get content and we get it quickly. Okay, the avatars have failed, we'll deal with that in a moment, but this is loads better. This is way better than the sorry message we had before. How about the Lie-Fi experience? We've gone from the most frustrating experience in the world, the white screen of eternal misery, to instant content. We can ship this. The only thing missing in terms of a full offline-first experience is the avatars, but yeah, we can fix that. Here's our current fetch code. It's what we wrote before. We want to do something special for the avatars, so let's rewind a bit. If the request is to Gravatar, which is where I'm getting the avatars from, I'm going to call out into another function, handleAvatarRequest. Otherwise, we'll just carry on doing what we were doing before. So what does handleAvatarRequest do? Well, we could fetch the avatar from the network, and if that fails, we could serve some kind of default from the cache, and we'd cache that as part of the install event. That's cool, we can ship that, it's good enough. But later, we could do something even better. When we get the request for the avatar, we can try and get it from the cache. And if we get a response, we'll send it back to the page. That gives us the avatar without going to the network, but we should go to the network too, not only in the case that we didn't have something in the cache, but also to update the one that is in the cache if there is one, because users change their avatars a lot. Showing an old avatar is great, that's fine, but we should update it for next time. So off to the network we go, and if we get a response back, we put it in the cache. And that's us done, unless we were unable to give an avatar to the page from the cache, in which case we'll send them the one from the network. And that's done. This is what HTTP calls stale-while-revalidate, it's one of the cache-control options.
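Rewinding to that routing for a moment, a sketch of what it might look like; the Gravatar hostname test is my assumption, and handleAvatarRequest is the strategy function just described:

```javascript
// Pure helper: is this a request for an avatar hosted on Gravatar?
function isAvatarRequest(requestUrl) {
  return new URL(requestUrl).hostname.endsWith('gravatar.com');
}

if (typeof self !== 'undefined' && typeof self.addEventListener === 'function') {
  self.addEventListener('fetch', event => {
    if (isAvatarRequest(event.request.url)) {
      // Avatars get their own strategy, e.g. the network with a cached
      // default as a fallback, or stale-while-revalidate.
      event.respondWith(handleAvatarRequest(event.request));
      return;
    }
    // Otherwise, carry on as before: cache first, falling back to the network.
    event.respondWith(
      caches.match(event.request).then(cached => cached || fetch(event.request))
    );
  });
}
```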
Stale-while-revalidate is an experimental feature in Chrome right now, we're busy implementing it, it's behind a flag. But we don't need to wait for it to ship, we can emulate it inside the service worker. People can have it today. Thankfully the code for this is actually, well, I think it's a lot simpler than trying to describe it with a diagram. We're gonna start by making a network request, because we always wanna do that. Sometimes it's just to update the cache, sometimes it's to send back to the page as well. We use waitUntil to say, hey, we're gonna do some additional work as well as providing a response. And in here we're gonna take the response that we get from the network, and we're gonna clone it. And the reason we clone it is because a response can only be read once. The body of the response can only be read once. And this is how the browser works, this is how you can receive, like, a three-gigabyte video and watch it, but that three gigabytes never needs to be in memory all at once. We're gonna clone it because we might use it twice. We're gonna send it back to the page and we're gonna put it in the cache. We're gonna open the cache called avatars. Unlike our static cache, we're gonna preserve this one between versions. We're not going to change the version number. And then we put the avatar in the cache. Meanwhile, we're gonna return a response from the cache, and if the cache doesn't have one, we're gonna fall back to the network one. And that's it. I mean, sooner or later you're gonna have to write some code to go into the cache and look for avatars you don't need anymore and delete them, but this helps a lot. So to see how this affects things, for the final time, I promise, we approach the comparinator. First up, the online experience. Quick, full content. The offline experience. Quick, full content. What about Lie-Fi? Once again, quick, full content. In fact, the experience is the same. Thank you. The experience is the same with every connection type.
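The stale-while-revalidate emulation walked through above can be sketched as a standalone function. Factoring it so the cache, the fetch function, and waitUntil are passed in is my choice, not the talk's code; in the service worker you'd call it from respondWith with the avatars cache:

```javascript
// Stale-while-revalidate: answer from the cache if we can, but always
// hit the network too, so the cached copy is refreshed for next time.
async function staleWhileRevalidate(request, cache, fetchFn, waitUntil) {
  const networkPromise = fetchFn(request).then(response => {
    // A response body can only be read once, so clone it: one copy
    // for the cache, one (potentially) for the page.
    waitUntil(cache.put(request, response.clone()));
    return response;
  });
  const cached = await cache.match(request);
  // Serve the (possibly stale) cached copy, else wait for the network.
  return cached || networkPromise;
}

// Inside the service worker, it would be wired up roughly like:
// self.addEventListener('fetch', event => {
//   event.respondWith(
//     caches.open('avatars').then(cache =>
//       staleWhileRevalidate(event.request, cache, fetch,
//                            p => event.waitUntil(p))
//     )
//   );
// });
```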
The network only matters when it comes to fetching new content. We can ship this. So we've achieved network resilience, right? Well, we're doing great when it comes to sending data to the user, but not as great when it comes to the user sending data to us. I really hate this because, for me, the user's transaction is complete. They have said, here are some smiley faces, please send them to people, done. That's all they have to say about it. But no, we're requiring them to watch it through to completion. We can do better than this. Background sync landed in Chrome a couple of months ago. It's a service worker event that you request. You're asking to do some work when the user has a connection, which is straight away if they already have a connection, or sometime later when they do. So say we had a function that was called whenever the user typed a message and hit send. We'll add the message to an outbox using IDB or whatever. This is a function we'd write ourselves. But then we'd get the service worker registration and register for a sync event, giving it whatever name we want. That can fail, of course, if the browser doesn't support it or the user's disabled it or whatever, in which case we'll catch it and just send the message normally, where the user has to stare at their phone. Otherwise, over in the service worker, we get this sync event, and we can check the tag name so we know what we're supposed to be doing here. We use our old friend waitUntil to let the browser know how long we're going to be doing work for. So then we get the messages from the outbox, from IDB or whatever, send them to the server, and remove them from the outbox. The effect of this is the user can be on Lie-Fi or totally offline, but they can use the app as if it were entirely online.
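Both halves of that flow, sketched; the tag name send-messages and the outbox helpers (addToOutbox, sendMessagesNow, getOutboxMessages, sendToServer, removeFromOutbox) are hypothetical names for the "IDB or whatever" pieces you'd write yourself:

```javascript
// --- In the page ---
// Called whenever the user types a message and hits send.
function onSend(message) {
  // addToOutbox is a hypothetical helper that stores the message in IDB.
  return addToOutbox(message)
    .then(() => navigator.serviceWorker.ready)
    .then(reg => reg.sync.register('send-messages'))
    .catch(() => {
      // No background sync (unsupported, or disabled by the user):
      // fall back to sending right away while the user waits.
      return sendMessagesNow();
    });
}

// --- In the service worker ---
if (typeof self !== 'undefined' && typeof self.addEventListener === 'function') {
  self.addEventListener('sync', event => {
    // Check the tag so we know which job we're supposed to be doing.
    if (event.tag === 'send-messages') {
      event.waitUntil(
        // Hypothetical helpers: read the outbox from IDB, post to the
        // server, then remove the sent messages from the outbox.
        getOutboxMessages()
          .then(messages => sendToServer(messages).then(() => messages))
          .then(messages => removeFromOutbox(messages))
      );
    }
  });
}
```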
When they send a message, so what I'm going to do here is go straight to airplane mode, so it's completely offline, and now I can type some sort of message, I don't know, some pictures of some cats. And as soon as I hit send, we can add it to the flow, even though they have no connection. It just says "sending" there in tiny letters, but it doesn't have a lot of emphasis, because the user's free to lock their phone and go about their day. They can even close the browser if they want, it doesn't matter. Then at some point later, when they regain connectivity, the message will be sent in the background. There it goes. And the user didn't know that this happened, you know, didn't know that it was sent in the background, because from their point of view, their transaction was complete. You know, they'd already said, please send this. The first they get to know about it is when they receive a push notification with a reply from another user. By using background sync and push messaging, we get out of the user's way. They don't have to stare at the screen while stuff sends, they don't have to check for new messages, we tell them about that. All of this massively improves the user experience. And if you do this sort of stuff, I think it's totally cool to brag about it. This is something the I/O web app does, so it says, yeah, caching complete, this now works offline. I think this is great, but I do hope that it goes out of fashion. Now, remember those little site badges that we all used to use that said, this site was built using CSS2? And now when you see them, you're like, oh, CSS, well done you. I hope one day this will seem as ridiculous, but before that, we do need to build up user trust. I don't know if anyone's seen something like this before. This is what I'm greeted with in the bathroom on board the trains I commute to work in. First, you have to press D to close the door, and then you press L when it's flashing to lock the door.
Note that there's Braille there as well, so even blind people know they have to wait for the flashing light. But the buttons aren't proper buttons either, they're kind of flat touch-sensitive things. That's horrible. I don't trust this. I don't trust this because once it failed on me, and I was slowly revealed to the carriage like a bad game show prize. Similarly, users don't trust the web to work offline, because it's failed them before. So messages like this do help, and yes, all the epiphanies I've had about user experience happen in bathrooms. But this is why Chrome requires there to be a service worker before it will show the add-to-home-screen banner, and in future we're gonna tighten the rules there to try and detect some kind of offline-capable experience. We want everything that ends up on the home screen to be competitive with native apps. We want to make the web a first-class part of the operating system in the user's mind. So on that note, I wanted to compare the launching of a native app, Google Photos, a well-built and well-optimized one, to Emojoy, launching them at the same time. Oh, it's really close. Like, Emojoy is like 0.2 seconds slower to show content. That's 200 milliseconds, it's almost nothing, and that's a well-built native app. But that's also starting from cold, with the browser not in memory at all. If the user had looked at the browser at some point recently, and let's face it, the browser is a fairly popular app, so that's quite likely, this happens. This is a progressive web app. That's beating a native app to content render by almost half a second. That is the power of service worker and the power of offline first. And achieving that wasn't a matter of rebuilding the entire app, it was incremental, improving the experience at every step for everyone. Things get faster for users with decent connections, things stop being frustrating for users with Lie-Fi, and things become possible for users that are offline.
A few people today asked me about Android Instant Apps and what that means for the web. Well, progressive web apps are possible today. One app across thousands of devices, operating systems and browsers, already beating a pre-installed native app to render. Service worker is in the stable versions of Chrome, Firefox and Opera right now, and it's a high-priority implementation for Microsoft Edge. It's under consideration by Apple, but progressive enhancement means you can use it today, as we've seen people talking about on the stage earlier. And if you use service worker, sites will become way faster in Chrome and Firefox than in Safari, and that will give Apple more reason to implement service worker. As web developers, you're in the position of power here. You get to guide the future of the extensible web. Service worker lets us create great user experiences, from becoming faster and network resilient to polyfilling new network features. And I know I've gone through a lot of stuff at lightning speed, but there is a free Udacity course, which is fully interactive, where you take a website from online-only to fully offline-first, covering everything I've spoken about in more detail and more. Don't worry about remembering the URL, just Google for Udacity offline, it shows up. But with that, it's been another pleasure. Thank you very much.