OK. So hi, as Paul was meant to say, I'm Mat Scales on Google's web developer relations team, and I'm here to talk to you about tools and libraries for building progressive web apps. Now, part of what makes progressive web apps possible now is a shift in the way that the platform itself is being developed. On the old web, we got custom-designed, high-level features for achieving the things that the W3C thought web developers wanted to do. Developers want images, so we'll give them an image tag. Developers want to lay things out in tables, so we'll have a table tag. The new idea is this thing called the extensible web, and it says that rather than building simple APIs for specific things, we should be getting low-level, deep, powerful APIs that enable a much broader range of things. So rather than a tag for images, let's have a tag for arbitrary graphics. Rather than a tag for tables, let's have CSS properties that let us lay things out however we want. This is great and allows us to upgrade our pages into apps, but there's a gap between the level the platform gives us and the level we'd like to work at. And we can fill that gap with libraries. In the extensible web, the community is given the responsibility of providing simple, easy-to-use libraries to handle the specific things that developers actually want to do. And this is great, because it's much easier to iterate on the API of a library than it is to iterate on the web platform itself. It means that even as we speak, someone is out there solving hard problems so that you don't have to. So today I'm going to talk about libraries and tools for service workers. I'm going to talk about the new web app features in Chrome DevTools. And I'm also going to try to answer the question: is the thing that I built actually a progressive web app, and is there anything else I need to do? So obviously the most important new technology as far as progressive web apps are concerned is service worker.
So, just a quick recap: a service worker is a background thread for your application that opens up new features like offline, push messaging and background data sync. And for the offline use case, it acts as a network proxy right in the client. Whenever your page requests a resource, the service worker gets a chance to respond to that request; it gets a chance to get in the way and do whatever it wants. And just to remind you, this isn't necessarily about working offline, because even being online can be a terrible experience. If you have lie-fi, or if you just have a slow connection, maybe you're connecting to hotel Wi-Fi, or maybe you're somewhere where data costs a lot of money, what you really want is network independence. You want the experience of your app to be great regardless of the network situation. But in order to achieve that, complex apps are going to require pretty complex service workers. There'll be a lot of code, and there are a lot of new APIs to learn. So we're going to help with that with a library we built called Service Worker Toolbox, or sw-toolbox. This was created by our team at Google to abstract away the common patterns for connectivity independence. So here's a pretty simple example of a service worker written with sw-toolbox. You import the script from wherever it's residing, and that gives you a global object called toolbox that exposes the API. Here we call toolbox.precache and pass in a list of resources. This says that when our service worker is installed, go ahead, fetch all these things and stick them in a cache, so that we know that whenever the service worker is running, it has access to these resources. You use this for your app shell and perhaps any really common small resources throughout your app. And then we use toolbox.router to match different behaviour to different parts of our application. So here we're going to set a default.
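As a sketch, that precaching step looks like this. The import path and the file list are assumptions about how your app is laid out; `importScripts`, the `toolbox` global and `toolbox.precache` are sw-toolbox's actual API:

```js
// sw.js — importing sw-toolbox gives us the global `toolbox` object.
// The script path and the resource list below are hypothetical.
importScripts('/sw-toolbox/sw-toolbox.js');

// At install time, fetch the app shell and store it in the cache.
toolbox.precache([
  '/index.html',
  '/styles/app.css',
  '/scripts/app.js'
]);
```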
We're going to say that the default behaviour for any route will be something called toolbox.fastest, which we'll get to in a moment. And here you can see an example of a more specific route. This is based on Express.js routing, for anyone who's built a server in Node: toolbox.router.get, and then there's a URL pattern, slash API slash, followed by anything. And the behaviour here will be toolbox.networkFirst. So let's talk about toolbox.fastest and toolbox.networkFirst. These methods are what we call strategies. Typically you need to think pretty carefully about exactly what behaviour you want for different parts of your application, so you potentially need to choose a different strategy for each route. And that's why sw-toolbox comes with five built-in strategies: fastest, networkFirst, cacheFirst, cacheOnly and networkOnly. Let's go over what they are. With the fastest strategy, a request comes in, and we race the network and the cache; whichever one comes back first is returned to the page. In this example, the cache is going to come back first, which is probably pretty obvious, though obviously the network will win if the resource wasn't in the cache in the first place. If and when the network ever does succeed, it updates the cache, so that the next time this happens, even if the response comes from the cache, it's a slightly fresher version of the resource. So this is good for stuff that you want to be fast, but that is allowed to be maybe a little bit out of date. And just as a note: because this one always uses the network, if your goal is to save your user's data plan, this is obviously not necessarily the best thing to do for all your resources. Network first: a request comes in, and first of all we try the network and give it time to succeed or fail. Only if it fails do we then go to the cache and return that to the page.
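Putting the routing together, the example being described looks roughly like this. The `/api/` pattern is illustrative; `toolbox.router`, `toolbox.fastest` and `toolbox.networkFirst` are the real sw-toolbox names:

```js
// In a service worker that has already imported sw-toolbox.
// Default behaviour for any route: race the cache against the network.
toolbox.router.default = toolbox.fastest;

// Express-style route: for API calls, prefer fresh data from the network,
// and fall back to the cache only if the network fails.
toolbox.router.get('/api/(.*)', toolbox.networkFirst);
```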
Now, if the network request does succeed, that updates the cache, even though we're not serving from the cache this time around. So the next time we try it, when we're offline or the network times out, we get a more up-to-date cached version. So you can imagine fastest and networkFirst are good for slightly different things. If you imagine a Twitter client, then when you first load the application, your highest priority is getting stuff on screen so that the user has your application. So perhaps you use fastest to load the latest tweets, because it's better to show old tweets than to show no tweets. Whereas when the user does a pull-to-refresh, that's a pretty strong signal that they actually want the freshest data. So go to the network, and only serve from the cache if you can't use the network. And there's an important extra option to this one, because it turns out that on mobile devices that network timeout can be two minutes. So if you have lie-fi, and your device is absolutely convinced it has a connection but it doesn't really, then you can take some action, and it can be two minutes before you even decide to try the cache. So we added an option for the networkFirst strategy that lets you give a more reasonable timeout. Here we're saying: after five seconds, I don't care about what's on the network; I'd rather show something. Cache first: go to the cache, and only if it's not there do we go to the network. Sort of the opposite of network first. Again, the network will update the cache, but one of the important things to realise here is that once that network request has succeeded once, it will be in the cache, and so the cache will be consulted every time and it will never be updated. So this is still pretty good for some cases. If you have versioned URLs, if you have some resource where the URL will change whenever the content changes, this will work fine. And this might be for things...
that you want to be able to cache — the things that don't change but aren't part of your application shell. So the example I've been using is blog posts. You don't want to download ten years' worth of blog posts the very first time someone comes to your blog, but if they have been to your site and downloaded a few posts, it's reasonable to keep them around. Cache only: go to the cache, and if it's not in the cache, fail. This is good for the stuff that you pre-cached, because you know it's there. And then network only: go to the network, and if it fails, it fails. This is what you get without a service worker, and the only real reason to use this is that if you've overridden the default with something like toolbox.fastest, this allows you to go back to the original behaviour just for one route. And if those don't do everything you need, you can also define your own strategies. So here we have a function I've created called fallbackImage. What this does is take a request and try to fetch it from the network. If that fails, it will respond from the cache, but it won't try to respond with a cached copy of that specific request; it will always respond with this fallback.jpeg. You can use this, for example, if you have hundreds of profile images on a page somewhere; perhaps it's not actually that important to have those in your offline experience, and you don't want to fill the user's device with those images. So you have a single image that you use as a fallback instead. And to make this useful, you have to make sure you've pre-cached the fallback image and then set up a route that actually uses it. It's just the same as before, except instead of using toolbox-dot-whatever, we've used fallbackImage. And finally, with sw-toolbox you can get fine control of the cache as well.
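The custom strategy can be sketched like this. To keep the sketch self-contained, `fetchFn` and `cacheMatch` stand in for the service worker globals `fetch()` and `caches.match()`; the URLs and the `makeFallbackImage` name are made up for illustration:

```javascript
// A custom strategy is just a function from a request to a promise of a
// response. This one tries the network and, on any failure, responds with
// one fixed pre-cached fallback image rather than a cached copy of the
// specific request.
function makeFallbackImage(fetchFn, cacheMatch, fallbackUrl) {
  return request => fetchFn(request).catch(() => cacheMatch(fallbackUrl));
}

// In a real service worker you would then pre-cache the fallback and wire
// the strategy to a route, roughly:
//   const fallbackImage =
//     makeFallbackImage(fetch, caches.match.bind(caches), '/images/fallback.jpg');
//   toolbox.precache(['/images/fallback.jpg']);
//   toolbox.router.get('/profile-images/(.*)', fallbackImage);
```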
So by default, when you call precache, or when you use cacheOnly or fastest or whatever, the cache that it uses is shared across everything in your whole application. It's a default cache that sw-toolbox creates for you. And you can, on an ad-hoc basis, say: for this route, I want to use a different cache, for whatever reason. So here, we've passed in an option called cache and we've set a name, so it will be a different, named cache that it uses. And now we can set options on that cache, so that this cache can only have up to 500 entries, and entries can only be in there for up to five days, and sw-toolbox will go in and clean up periodically. Now, something we've kind of glossed over with sw-toolbox is that the pre-caching step is actually trickier than it looks. There are a few problems. One is that in order to get a new install event, you need to change the service worker script — not something that the service worker script imports, but the actual original service worker script. So you have to remember to update that every time you do a release that changes some of the resources, even if the service worker logic itself doesn't need to change. Another problem is that when the install event happens, sw-toolbox will take all those resources and just download them all again, even if none of them have actually changed. And then you also have to maintain a list of which resources need to be pre-cached. It would be quite easy to miss something out of that array and then find that in your next release, your offline experience is slightly broken because a file didn't make it into the list. So we created a tool to help with this, called sw-precache. What this does is take a few simple options and write a service worker for you. It's something you can stick in your build step. You tell it which files you want to cache.
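Those per-route cache options look like this in sw-toolbox. The route pattern and the cache name are hypothetical; `name`, `maxEntries` and `maxAgeSeconds` are the options it supports:

```js
// Serve blog posts cache-first, but keep them in their own named cache
// that sw-toolbox will periodically trim for us.
toolbox.router.get('/blog/(.*)', toolbox.cacheFirst, {
  cache: {
    name: 'blog-posts',
    maxEntries: 500,                    // at most 500 cached responses
    maxAgeSeconds: 5 * 24 * 60 * 60     // evict anything older than five days
  }
});
```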
It will take a hash of each file, along with the file name, and write those directly into the service worker. So if any of your resources change, the hash will change, the service worker itself will change, and you'll get a new install. It also means that when that install event fires, the service worker has a list of every resource with its hash, so it can compare it to what it already has and only download things that have changed or are new. Now, this can be used as a command-line tool, installable via npm. You run sw-precache, and here's the simplest option set-up: you say where the root of your application is, and it will just precache everything in that folder. You can also use it as a requireable Node module: you require sw-precache, you call swPrecache.write, you say which service worker file you want to write, and then you can pass in some options. So here we've said we're going to use this glob pattern to say which static files we actually want to cache. And if you're wondering whether that means we've now lost the ability to do the dynamic caching that sw-toolbox gave us, there are actually two ways to bring it back. The first is with this runtimeCaching option. You pass in an array of objects which specify the routing rules that you had in sw-toolbox. So you have the URL pattern, and then you have which strategy to use, and you can even pass in options — here we have the networkTimeoutSeconds option. This is great for most simple cases, particularly if you're only using the built-in strategies. And if you want to do something a bit more fancy, if you want a bit more control, you can just say that you want to import a service worker script that you've written yourself. So you could let sw-precache handle all of the static resources for your app shell, and then do the dynamic caching using an sw-toolbox script that you wrote yourself, and it will inline that into your service worker.
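As a sketch, using sw-precache from a build script might look like this. The glob and the URL pattern are assumptions about your project; `swPrecache.write`, `staticFileGlobs`, `runtimeCaching` and `networkTimeoutSeconds` come from its documented options:

```js
// build.js — run as part of your build step to (re)generate the service worker.
const swPrecache = require('sw-precache');

swPrecache.write('service-worker.js', {
  // Static resources to pre-cache; each gets hashed into the generated file.
  staticFileGlobs: ['app/**/*.{js,html,css,png}'],
  // Bring back sw-toolbox-style dynamic routing for runtime requests.
  runtimeCaching: [{
    urlPattern: /\/api\//,
    handler: 'networkFirst',
    options: { networkTimeoutSeconds: 5 }
  }]
});
```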
And this also allows you to add things like push notifications and background sync into sw-precache's service worker as well. Now, service worker doesn't work everywhere yet, as Paul said, and AppCache does. So just to reiterate: should I use AppCache in a new project? No, because it's terrible and doesn't do what you want, and the more you contort your application to try to work with AppCache, the more you find that it's still not going to work and you have to contort even more. And it has security problems, and it's just generally bad. Can't stress that enough. However, if you've already got an AppCache for your application, we have a tool that will help you transition. sw-appcache-behavior is a little library we've created. What it does is: you import the library into a service worker script — this could be your entire service worker script if you wanted. You create a fetch handler, and in that fetch handler you just say: I want to respond with whatever the legacy AppCache behaviour would be, essentially. What this will do is get your AppCache manifest, parse it, work out what the correct thing for an AppCache to do would be, and then do that. But it also allows you to get around some of the security issues, and some of the problems like getting into a state where the AppCache is never updated — service worker won't actually allow you to get into that state. Now, sw-appcache-behavior is just one of a set of things that we're releasing as part of our sw-helpers repo. As an example of something else we're doing, there's offline analytics. This was written for last year's Google I/O website and was used again this year. What this does is: you set up a route for any analytics requests that uses the strategy function that's provided by the library.
And whenever one of those analytics requests fails, it will stash it away somewhere, queue it up, and then when the user comes back online again, it will replay it, adding a parameter so that the event is correctly attributed to the time it actually happened. And we're hoping to add many more things to this repo over the coming months. So that was service worker. What about tools? What are the things that we can help with? No talk about developer tooling would be complete without talking about Chrome's developer tools, and there are some great new progressive web app features coming in the next release. Almost all of this is only available from Chrome 52, which is currently in beta but moving towards stable in a few weeks. So first up, the resources panel has been renamed to application, to better reflect that this is now where you go to look at things to do with a web app. This first panel allows you to debug your manifest. It lists what the browser has detected as the name of the application, what icons to use, what theme colour to use, what the start URL is, things like that. It also gives you this add to home screen button that allows you to trigger the onbeforeinstallprompt event, so you can test code that delays the install prompt. The service worker panel has been redesigned to hopefully be a lot clearer. All of the service workers for the current application will be shown in a list — which I should probably have got a screenshot of — but it will be less confusing about which one's active and what you can actually do to each of those service workers. It has all of the same features that it did before, but some extra ones too. At the top here, there are these three checkboxes. Offline is a shortcut to the same thing in the network panel that says you want the network condition to be offline.
So that's great for testing the offline behaviour of your application and making sure that your service worker is working correctly. Update on reload means that whenever the page is refreshed, it will check for a new service worker and, if it's changed, do the whole service worker install dance, regardless of how long it has been since it last checked. Bypass for network says that the service worker should load, the install and activate events should happen, it should still be called for push and background sync, but the fetch event should never be fired: whenever any request comes in, it should not go through the service worker. This is good if you've got your save-and-refresh workflow going for a resource that would be cached for a long time by the service worker; it allows you to just go back to your normal workflow in development. The clear storage panel: as a consumer of Chrome, you've probably seen the clear browsing data feature. It allows you to get rid of cookies and things like that for a period of time — you say, I want to get rid of browsing data from a day ago, the last week, or forever — whereas this is a bit more developer-oriented. This is: for the current origin only, I would like to clear these things. And the options it gives you are things like service workers, the cache, IndexedDB — things that a developer might be more interested in. I also want to call out a feature that's actually been there a while, but people don't necessarily know about. Down here is this cache storage viewer that lets you see what's actually in the Cache API caches that you're using with your service worker. This can help you debug issues with pre-caching, or when a request is failing for something you think should be in the cache. Finally, I was going to talk about the question of: is my thing actually a progressive web app?
I don't think we can really answer that question, because it's a bit of an open question, but we can try and get a bit further. What we'd all like is a button we could click that would just tell us: is it a progressive web app? Am I there yet? So the Chrome team built one, and we called it Lighthouse. This is both a Chrome extension and a Node-based command-line tool, which has a whole bunch of different tests in it that it runs against your site, giving you advice on things that you might be missing. And it also has this very cool, professional-looking logo. As a Chrome extension: you load up your page, you click the button, and then it will reload the page a few times, connect with the remote debugging protocol, gather a whole bunch of metrics, and then spit out this report. It'll give you a score, which will give you a vague idea of what it thinks of your application, and it will tell you anywhere that it thinks you might have missed something. It also has this best practices section, which is things that it's not going to score you on — maybe you don't actually need to do these things, but if you do, you should check on them. It just gives you a bit of a guide. You can also run it as a command-line tool. Just for anyone who's thinking of typing in URLs they see on the screen: airhorner.com makes a noise. I gave this talk at I/O, and about 30 seconds after this slide came up... yeah, from the audience. So you run it on the command line, and by default it will output pretty-printed to the console, but you can also output JSON that you can parse yourself, or the same HTML report that the extension gives you. And it's also requireable as a Node module, so you can call lighthouse with the URL, and that returns a promise which resolves with the JSON that you would have got from the command-line tool.
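For reference, running Lighthouse from the command line looks roughly like this (flag names are from its CLI help; the URL is just the demo mentioned above):

```sh
# Install the CLI and run it against a page; by default the report is
# pretty-printed to the console.
npm install -g lighthouse
lighthouse https://airhorner.com

# Machine-readable output, e.g. for continuous integration:
lighthouse https://airhorner.com --output json > report.json
```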
And obviously the CLI and Node module are both good ways of adding Lighthouse to your regular tests or continuous integration. I'd like to draw attention to the big alpha in the corner here. I just want to be clear that this is early stages: none of the rules are final, and I'm sure there's a lot of discussion to be had about what the rules say and whether they're good rules. If you would like to contribute to that discussion, or better yet contribute pull requests, you can find us on GitHub. And I'd also like to point out that this currently only works in Chrome 52+, which is currently in beta. So, to recap: progressive web applications are made possible by the extensible web. They're made practical by libraries and tools. There are a whole bunch of service worker libraries out there, from Google and from others, with more coming. Chrome DevTools is awesome, as always. And Lighthouse attempts to tell you when you're done — it's probably a bit too definite on this slide. OK. Thank you very much. I hope that was useful.