Hey, everyone. I'm Matt with the web developer relations team. I'm here to talk to you about progressive web apps and how tools and libraries can make building them easier. Progressive web apps are the latest buzzword craze in web development. Progressive web apps are cool, and they use all of the latest exciting technologies. And that's probably enough to get quite a lot of developers interested. But far more importantly, building progressive web apps is about building web experiences that are useful, enjoyable, and accessible for your users. And I hope that's enough to interest the rest of you. Part of what makes progressive web apps possible now is a shift in the way that the web platform itself is built. In the old web, you got custom-designed, high-level features for achieving the things that the W3C thought web developers wanted to do. People want images, so you get an image tag. People want to lay things out in tables, so you get a table tag. The new idea in the web standards world is called the extensible web. And it says that instead of creating simple APIs for specific things, new features should be low-level, deep, and powerful, enabling a much broader range of things. So rather than a tag for showing images, we get a tag for drawing arbitrary graphics. And rather than a tag for showing tables, we get CSS properties that allow us to lay stuff out however we want. This gives us the features we need to upgrade our pages into apps. But we also have this gap now between the level we want to work at and what the platform gives us. And we can fill this gap with libraries. So in the extensible web, the community is given the responsibility of providing the simple, easy-to-use libraries for the specific things that developers want to do. This is great, because it's much quicker and easier to iterate on the API of a library than it is to iterate on the platform itself.
And it also means that, even as we speak, other people are out there solving hard problems so that you don't have to. So today, I'm going to talk about some service worker libraries. I'm going to talk about Chrome DevTools and the progressive web app features that are being added there. And we're also going to talk about how tooling can help you answer: is my app a progressive web app, and is there anything else that I need to do? So the most important new technology as far as progressive web apps are concerned is service workers. A service worker is a background thread for your application. And it opens up all sorts of new features, like offline, push messaging, and background data sync. For the offline use case, your service worker gets to act as a network proxy right in the client. Whenever your page requests any resource, be it an image, a script, or the HTML for the page itself, an event fires in the service worker that you get a chance to respond to. The service worker can then use the Cache and Fetch APIs to do something fancy, like serving up cached content. But it's not actually just about offline. Even being online doesn't mean that your users will have a good time. You could have an incredibly slow connection. Maybe you're in a place where data is going to cost you a lot of money. Maybe you're trying to use the Wi-Fi in a Mountain View hotel room. You don't just want to work offline. What you want is connectivity independence. It's about making your app work well regardless of the network situation. But of course, with complex apps come complex service workers. There'll be a lot of code to implement some of these things that you want to do, and there are lots of new APIs to learn. So we have a library, Service Worker Toolbox. It was created by Google to abstract away some of the common patterns for connectivity independence. So let's run through a pretty simple example of a service worker that's written with Service Worker Toolbox.
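For context, the "network proxy" idea described above, stripped of any library, is just a fetch listener in the service worker. Here's a minimal sketch (not the Toolbox example itself): it answers each request from the cache, falling back to the network.

```javascript
// service-worker.js: a bare-bones fetch handler. Every request the page
// makes fires a 'fetch' event here, and respondWith() lets us answer it.
self.addEventListener('fetch', (event) => {
  event.respondWith(
    // Serve a cached copy if we have one; otherwise go to the network.
    caches.match(event.request).then(
      (cached) => cached || fetch(event.request))
  );
});
```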
This is the full script. This is everything you'd need for this particular example site to go offline. So first of all, we import Service Worker Toolbox. And then we have this line, toolbox.precache. Now, when a service worker is installed, it gets a chance to act. And Service Worker Toolbox uses that opportunity to download the resources that you've told it to with this precache line and stick them in a cache, so that those resources will always be available to your application for the whole lifetime of the service worker. And you'd use this for things like the shell of your app: the resources that are shared across all your pages, perhaps the resources needed to show your homepage experience. Or if your app is something that can work entirely offline, maybe the whole app goes here. As an important aside, whenever the service worker changes, the install event happens again. So this precache will happen every time you update your resources, as long as you change the service worker as well. So next up, toolbox.router.default = toolbox.fastest. Now, toolbox.router is the part of the system that handles matching parts of your site to the behavior that you want to have. So here, we're setting the default behavior, and we're setting it to fastest, which I'll talk about in a moment. And then you can have different behavior for different parts of your site. So here, we use toolbox.router.get to say that GET requests for things beginning with /api/ should use the network-first behavior. And we can pass in some options. Again, I'll talk about what network-first is in a moment. So the methods we saw, called fastest and network-first, are what we call strategies. Typically, you need to think carefully about what's the best strategy for each part of your site, or for each kind of resource on your site. Service Worker Toolbox comes with five built-in strategies.
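Since the slide itself isn't reproduced here, a script along the lines just described might look like this (the file paths are illustrative, not from the original example):

```javascript
// service-worker.js, written with Service Worker Toolbox.
importScripts('sw-toolbox.js');

// Downloaded at install time and kept for the lifetime of this
// service worker; runs again whenever the service worker file changes.
toolbox.precache(['/index.html', '/styles/app.css', '/scripts/app.js']);

// Default behavior: race the cache against the network.
toolbox.router.default = toolbox.fastest;

// GET requests under /api/ use network-first, with options passed in.
toolbox.router.get('/api/*', toolbox.networkFirst, {networkTimeoutSeconds: 5});
```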
So fastest: when a request comes in, we race both the cache and the network, and whichever one comes back first delivers its result to the page. And obviously, if the resource is already cached, it will probably be the cache that wins. But this way, you will still get the network if it's not already in the cache. Another useful feature is that when the network request succeeds, if you're online, it will update the version in the cache. So you're never quite up to date, because each refresh gets you the last version rather than the current one. But this is good for stuff that's allowed to be slightly out of date and that you want returned very fast. On the flip side, though, this does always make the network request. So you are potentially costing the user money by using up their data plan. So as an alternative, you could use network-first. With this one, you try the network, and only if that fails do we go to the cache and return the response. Again, if the network succeeds, this will update the cache. But because it goes to the network first, it means that when you are online, you get the latest resource. So this is good for stuff that should be fresh if you can get it, but that you still want to work offline. For example, your latest tweets or the emails in your inbox: you want to be able to reload the app and show whatever you had before when you're offline. And this one comes with a pretty important option that I want to highlight. A network timeout on a mobile device can be two minutes. So if you request a resource using the network-first strategy and your device thinks it has a connection, but really it's never going to get anything, your users could be waiting two minutes for that to time out. So we allow you to set a different network timeout. And if that amount of time (here, five seconds) elapses, we just give up and return from the cache anyway.
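The timeout behavior can be modeled with plain promises. This is not sw-toolbox's actual implementation, just a sketch of the race it performs; `fetchFromNetwork` and `readFromCache` are stand-ins for the real Fetch and Cache API calls.

```javascript
// Race the network against a timer: if the network hasn't answered
// within timeoutMs, fall back to the cached copy instead.
function networkFirstWithTimeout(fetchFromNetwork, readFromCache, timeoutMs) {
  const timeout = new Promise((resolve) =>
      setTimeout(() => resolve(readFromCache()), timeoutMs));
  return Promise.race([
    // A network failure also falls back to the cache.
    fetchFromNetwork().catch(() => readFromCache()),
    timeout,
  ]);
}

// Simulate a network that takes 50ms against a 10ms timeout.
const slowNetwork = () =>
    new Promise((resolve) => setTimeout(() => resolve('network copy'), 50));
const cachedCopy = () => Promise.resolve('cached copy');

networkFirstWithTimeout(slowNetwork, cachedCopy, 10)
    .then((result) => console.log(result)); // prints "cached copy"
```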
Any time your users are waiting for a resource to download, you should probably do this, and your users will thank you. Cache-first is sort of the opposite of network-first: this will go to the cache, and only if that fails will it try the network. Again, if it does go to the network and it succeeds, it will update the cache. But because it's always going to the cache first, it means that once it has succeeded once and put the resource into the cache, it will always use that old version. So this is good in some ways. You're not going to the network; you're not even trying the network if it's something you've already got in the cache. But it will always be stale after that first time. So this is good for resources that never change, but that aren't part of your shell, so you don't want to precache them. Imagine the case of a blog: if you have 10 years' worth of articles, you don't want every user who lands on your page to download all 10 years' worth of articles. But if someone's going backwards and forwards, you do want to just use the version in the cache. Obviously, a trick here would be to version the URLs of your articles, so that if you do want to make a change in the future, users will still get it. Now, cache-only is slightly less useful, but it's handy for the stuff that you've precached. A request comes in, we go to the cache, and if that fails, that's it. That's all we do. It never tries the network. So if it's something you've precached, or something where you have some other way of updating the cache, this is a good way of saying: don't even try the network; I only want to show cached stuff. And then network-only, which is the flip side: only go to the network, and if it fails, it fails. This is basically like not having a behavior set for a particular route with your router. But if you've set a default, this can allow you to override it to get back to that original default behavior.
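Put together, choosing a strategy per route might look like this with Service Worker Toolbox (the URL patterns here are made up for illustration):

```javascript
importScripts('sw-toolbox.js');

// Versioned article URLs: fetched once, then always served from cache.
toolbox.router.get('/articles/v1/*', toolbox.cacheFirst);

// Precached shell assets: never even try the network.
toolbox.router.get('/shell/*', toolbox.cacheOnly);

// Requests that only make sense live: network or nothing.
toolbox.router.get('/live/*', toolbox.networkOnly);
```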
Now, if that doesn't give you the control that you want, you can create your own strategies. So here, I've created a function. This code is more or less lifted from this year's I/O website. We show profile images for speakers on the I/O website, and we don't want to download hundreds and hundreds of small images and fill up the device straight away. So what we do is, when we try to fetch those images, we try to fetch from the network. And if that fails, we just return a fallback image from the cache. So obviously here, we need to make sure that we've precached the fallback image, so that it's actually available when we request it. And then we set up a route that actually uses this strategy. And you can also get fine control over the cache. So here, we've set up a route for posts. This could be blog articles; this could be Google+ posts, something like that. And you want to be able to cache them. You want people to go back to the things that they've looked at recently and get them again, but you don't want to just endlessly fill the device as they browse around. So we can set a maximum number of entries, in this case 500, and a maximum number of seconds that those should be available. I've set this to five days. But even with all this control, the precaching part can be tricky. You need to make sure that the service worker changes in order to get a new one installed. And if you've got lots of assets in your site, it can be tricky to know exactly which things need to be precached, whether things have changed, and whether you actually need to ship a new version. And the other thing is that Service Worker Toolbox's precache method, when that install event fires, redownloads everything. It has no way of knowing what might have changed on the server. So it redownloads everything and sticks it into the cache fresh. So we have another tool, called sw-precache, that will help you solve these problems.
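A sketch of the two ideas just described: a custom fallback-image strategy, and a route with cache limits. The names and paths are illustrative, not the actual I/O site code.

```javascript
importScripts('sw-toolbox.js');

// Make sure the fallback image is precached so it's always available.
toolbox.precache(['/images/profile-fallback.png']);

// A custom strategy: try the network, and on failure serve the
// precached placeholder instead of a broken image.
function profileImageFallback(request, values, options) {
  return fetch(request).catch(() =>
      caches.match('/images/profile-fallback.png'));
}
toolbox.router.get('/images/profiles/*', profileImageFallback);

// Fine control over the cache: at most 500 posts, kept for five days.
toolbox.router.get('/posts/*', toolbox.fastest, {
  cache: {
    name: 'posts-cache',
    maxEntries: 500,
    maxAgeSeconds: 60 * 60 * 24 * 5,
  },
});
```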
Based on a few options, this will actually write a service worker for you. So you tell it which files you want to cache. It will take a hash of each of those files and include that data in the service worker it generates. And because it includes the hashes in the service worker, it means that any time a file actually changes, it causes a new install. But if you have exactly the same files and none of them have changed, it won't. This also means that during the install step, it can compare the hashes of the files that are meant to be precached with the files you already have in the cache. So you only have to redownload things that have actually changed. Now, this can be used as a CLI tool. You install it from npm, and you just run the sw-precache command. Here, I've passed in an option to say which folder is the root of my application and where to write the service worker. You can also use it as a node module. So here, we require in sw-precache, and we use the write method. We say where the service worker should go and which files should be cached. Here, it's all of the HTML and CSS in the project. Now, that works great for the shell of your application, but aren't we losing the power of SW Toolbox for dynamic resources like API requests? Well, as you see here, we can combine the two. The runtimeCaching option that we're passing to sw-precache allows you to say what your SW Toolbox rules should be. It uses the same patterns and maps to the same handlers. And sw-precache will write out the code for your SW Toolbox rules into its service worker. And if you want full control yourself, you can instead tell sw-precache to just include some other file in the service worker. So you could write the SW Toolbox rules manually for your dynamic content, and just tell sw-precache to load that as well. Now, service worker isn't supported everywhere, and appcache is.
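As a sketch, using sw-precache as a node module along the lines described might look like this (the globs, patterns, and file names here are illustrative):

```javascript
const swPrecache = require('sw-precache');

swPrecache.write('app/service-worker.js', {
  // The app shell: hashed, precached, and redownloaded only on change.
  staticFileGlobs: ['app/**/*.{html,css}'],
  // Dynamic resources handed off to the generated SW Toolbox rules.
  runtimeCaching: [{
    urlPattern: /\/api\//,
    handler: 'networkFirst',
  }],
  // Or pull in your own hand-written SW Toolbox rules instead.
  importScripts: ['sw-toolbox-rules.js'],
});
```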
So shouldn't we actually be using appcache? Well, appcache works, but I wouldn't use it for any new sites. Appcache comes with a whole ton of problems. It has security flaws, which is why it has actually recently been restricted to secure origins only. And you don't get any of the control that you get with service worker. There are a whole bunch of things that I've already mentioned that are hard, if not impossible, to do with appcache. However, if you already have an appcache, we have a tool to help you with that. It will help you transition from your appcache site to a service worker. So you import this library, and then you write a fetch handler inside your service worker, and you just tell it that you want to use the legacy appcache behavior. What this will do is look at the page that you used to register the service worker and find its appcache manifest. And then it will just try to behave exactly like the appcache, except that it gets around a few of the more gnarly problems with appcache. And sw-appcache-behavior, which I failed to call out the name of, is actually just the first of many service worker helpers that we're releasing as part of our sw-helpers project. As an example of the sort of thing we've been working on, we have offline analytics. This was originally developed for last year's I/O website and is used again this year. It provides an SW Toolbox caching strategy function that handles failed requests to Google Analytics, queues those requests up, and then, the next time the user is online, sends them off. And it actually handles putting a timestamp on those requests, so that the requests are attributed to the correct time, when the user actually made the action. So you can gather metrics from users who are offline. And we're going to be adding a whole bunch more tools into this repo over the coming months. And now we move on to other tools. I just used the same transition. This one's not as exciting.
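The registration pattern described for the appcache shim looks roughly like this, based on the library's documented usage; treat the exact file and function names as assumptions:

```javascript
// service-worker.js
importScripts('appcache-behavior-import.js');

self.addEventListener('fetch', (event) => {
  // Delegate to the legacy appcache logic, which reads the manifest
  // from the page that registered this service worker.
  event.respondWith(goog.appCacheBehavior.fetch(event));
});
```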
So no talk about developer tools for the web would be complete without mentioning Chrome DevTools. And there are some great progressive web app features coming in the latest Canary. Most of these features are only available in Chrome 52, which is, as I say, at the Canary stage. So the Resources panel has been renamed to Application, to better reflect that these are the things that web apps need, rather than just the resources for your site. So here we have the Manifest pane. The web app manifest gives the system information about your app for things that are used outside of your page: for example, the icons to use on the home screen, the theme color to set the UI with, and the images to use in a splash screen. And this pane gives you a diagnostic view of the manifest that was found and the values that were in there. Now, the manifest is an important part of the add to home screen process. You'll get an add to home screen prompt for your app if you meet certain criteria, including having a manifest. And to test that out, added to this manifest screen is a new button, Add to homescreen, which doesn't actually add the app to your home screen, but triggers the beforeinstallprompt event. So you can trigger that prompt to test how things are working with your manifest. Then we have the redesigned Service Workers pane. The service worker pane that used to be here was pretty cluttered with all sorts of things. So now you just see a list of the service workers, along with their current state. You have the same controls that you had before, but in a simplified view. But we also get these cool new features at the top. The Offline checkbox sets things up so that whenever you make a request from your page, it just never goes to the network. It just assumes the network will fail, so that things always get handled by your service worker.
Update on reload means that every time you refresh the page, it will run the service worker install event again, even if nothing has actually changed, so that you can test the install process for your service worker. And Bypass for network, which is sort of the opposite of the offline case, means that whenever a network request is made by your page, the service worker won't actually be asked to get involved. So this is helpful if you're testing something that would normally be cached by your service worker, but you want your old save-and-refresh workflow back. This is one of my favorite little features that's been added: the Clear storage section. Obviously, in the main Chrome UI, you have the ability to clear browsing data. But the controls you get are pretty limited. First of all, it will clear things for every site that you have browsing data for, rather than just the current site. And you only get to control how far back you want to go. Here, we have a more developer-oriented system. This allows you to say: for this current origin, I want to clear all these things. It will unregister service workers, clear IndexedDB, clear the Cache Storage, et cetera. And the last thing that I'd like to point out is the Cache Storage viewer. This has actually been in Chrome DevTools for a while, but a lot of people don't know about it. It just lists, for a given cache, what's actually in there, which can be great for debugging your apps. So we've looked at a bunch of tools that will help you fulfill all of the progressive web app criteria. But how do you know if you're done? What would be cool is if there was just a button you could click that would scan your site and tell you if you were missing something. So the Chrome team built one. It's called Lighthouse. And they have a very cool, professional-looking logo: engineer art at its finest. So this works as a Chrome extension or an npm module.
As a Chrome extension, you load up your page, you click the button, and it will use the remote debugging API to gather all sorts of information about your page. It'll reload your page a couple of times, and then produce a report. It gives you a score based on the progressive web app criteria and tells you how you're doing. And at the end, it also gives you information on some best practices: things that we're not necessarily going to score, but that you might want to check out. So do you have ARIA attributes? Is your manifest set up correctly? The CLI tool does basically the same thing. You install this via npm again, which I missed earlier. You just tell it a URL to go to. It will load Chrome in the background, load up the site there, and do all the refreshing. And it can output pretty-printed results to the console, it can output JSON, which you can parse yourself, or it can output the same HTML that you saw in the extension report. And you can also include it as a node module. So you can require Lighthouse and tell it which site to go through, and then you get the results as a bundle of JSON. And using either this or the command line is how you can... Erhana is great. You should all check it out, but don't use it in talks. At previous conferences, they've used that to signal it was the end of your talk. And I was like, but I have 20 minutes left. So you can use this to hook up the results into your tests or into your continuous integration. Now, I want to draw attention to the alpha in the corner here. This is pretty early stages. The rules aren't final. And it only works in Canary. Well, it works in Chrome 52+, which is currently Canary. You can check it out on GitHub. We encourage you to file issues, or to contribute if that's your thing. So we've talked about the fact that progressive web apps are made possible by the extensible web. They're made practical by libraries and tools. Google has made a bunch of libraries for service worker, with more coming.
Chrome DevTools is awesome, as always. And Lighthouse lets you know what you need to fix. So thank you very much.