Hey, folks, my name is Jeff Posnick, and I'm a member of the web developer relations team at Google. As you've heard, I'm going to go on a journey. Everybody who's watching this, either in person or via the live stream, cares about building truly progressive web apps. And the thing is, many of us have put time and effort into our current projects. Building a progressive web app doesn't mean starting from scratch, though. As developers, what we need is advice about areas for improvement and tactical fixes that we can apply to our existing code base. In that spirit, let's set sail on a journey together. Little by little, we're going to transition an existing single-page web app into a progressive web app.

All right, so this is our starting point. We've got this web app, something I put together. It uses the iFixit API to access repair guides. Pretty straightforward. The initial implementation is a single-page web app that uses client-side rendering. And while this project uses React, the concepts we're going to talk about really apply to any framework, or to vanilla JavaScript. It just so happens I was using React for this project.

Let's take a quick look at the initial experience of running this in, let's say, Firefox. Let me just confirm I have my static dev server running right now. Go up here, open it up. OK, we see some repair guides. Click around a bit. It's all doing client-side rendering. Maybe not the fastest in the world, but it works. You can view your content. Now let's take a look at something else: the experience in Safari. And this happens to be Safari with JavaScript disabled. Some users have JavaScript disabled. Some users are in browsers that don't support JavaScript. This is an issue. And this is what they see for a single-page web app: not great. Again, this is a function of being client-side rendered and relying on JavaScript.
So let's see what we can do. Now that we have a feel for what the app does, let's try to figure out how we can improve on the current implementation. We're going to use that tool that Matt just talked about, called Lighthouse, to automate a whole suite of diagnostic tests covering areas important for progressive web apps. What we're doing right now, at our starting point, is running it to establish a baseline. It's going to give us an idea of what's working well and where we can improve. We're going to start the tests via the Chrome extension in this case, but as mentioned, there's also a command-line tool with the same functionality if you prefer that. And the tests are actually really cool. They use the Chrome debugging protocol, and they simulate a host of different real-world conditions to see how your web app actually responds.

So let me pull that up. I'll do this in Chrome Canary. I'm kind of old school and just do everything in incognito windows whenever I'm working on anything. Let's get our current version loaded. Lighthouse is ready to go over here, and we click Generate Report. You can actually see it doing its job: it's using DevTools' ability to simulate different viewport widths and all sorts of other things. And we get our scores; hopefully this is big enough for folks to read. We're at 30 out of 100. OK. It's giving us some nos here, and it's pointing out areas where we can improve. Service worker: not there. Meaningful paint times: not great. We have missing content when scripts are not available. We have a bunch of other things we could improve. The main thing is that we have a really good starting point. We know what we need to focus on. Before we really dive into the results and interpreting them, I just want to make this point: Lighthouse will help us identify areas of improvement, but it's not our end destination.
It's not the ultimate source of truth as to whether something is or is not a progressive web app. Ultimately, what matters is the experience that your web app provides to your entire user community. So in addition to running Lighthouse, we're going to try the web app across a wide range of browsers, operating systems, and network conditions. And I recommend that everybody does that as part of their normal testing and release process. Don't just rely on Lighthouse.

So we have those results. Here are our opportunities for improvement. First, our speed scores could be better, and there's no content on our page when JavaScript is disabled. What we can do for both of those is introduce server-side rendering. Second, it looks like our web app won't respond well offline, so, no surprise, we can add in a service worker. And finally, there's missing metadata about our application, as per the Lighthouse report. That can be addressed by adding a web app manifest, along with thoughtfully chosen meta tags to accompany it. So we've got three concrete steps, and we're going to walk through each of them now, using Lighthouse and cross-browser testing to measure how effective our work is as we go.

All right, let's first look into server-side rendering and what that's all about. What it means is that our browser gets a fully populated, functional page as part of the initial HTTP response, rather than relying on multiple requests to get content on the screen. And equally important, it means that our web app will work in browsers in which JavaScript is unsupported or switched off entirely. Now, not every web app is going to be able to run with full fidelity with JavaScript disabled, but in our case, we should be fine. It's fairly straightforward to add server-side rendering to our React application using universal JavaScript.
That means we get to share a lot of code between the client and the server, so it's not too much work. We're going to use Express, which is a Node-based web server, and it plays really nicely with React Router. And while we're talking about React now, other frameworks like Angular and Ember have their own solutions for server-rendered JavaScript, so definitely check those out if you happen to use them.

All right, let's take a look at the actual changes we're going to make in order to implement that. I happen to have a handy GitHub diff open. This is the real code, and I'm going to make it a little bit bigger so folks can see. Just scrolling up a bit: nothing here should surprise folks who've done this sort of thing before. We're adding in a serve task that's going to bring up our server. Scrolling down a little: this stuff is important, but we won't focus on it right now. What I'm going to scroll down to is the part that actually implements our routing and figures out what to return to the client. This is very lightly adapted from the canonical React Router server-rendering example. We take our existing routes, the ones we're already using on the client side, and match the incoming request URL against them on the server. We have a few things to handle different error conditions. Then we make the requests to the iFixit API from the server: it figures out what data we need, makes those requests, and provides the data to the client immediately, without the client having to fetch anything else. And we have a little bit of code down here: once all the data fetching is complete, we take our initial component and render it to a string using the ReactDOMServer helper method.
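The server-side flow just described can be sketched in plain JavaScript. To keep it self-contained, this uses a simplified stand-in for React Router's matching and for the rendered HTML; the route table, placeholder names, and helpers here are hypothetical, not the project's actual code.

```javascript
// Simplified, framework-free sketch of the server-rendering flow:
// match the incoming URL against the client's routes, then inject
// the rendered HTML and serialized state into an index template.
// Routes, placeholders, and helpers are hypothetical.
const routes = ['/', '/guide/:guideId'];

function matchRoute(url, routeTable) {
  for (const route of routeTable) {
    // Turn '/guide/:guideId' into a regex like ^/guide/([^/]+)$.
    const pattern = new RegExp(
      '^' + route.replace(/:[^/]+/g, '([^/]+)') + '$');
    const found = url.match(pattern);
    if (found) {
      return {status: 200, route, params: found.slice(1)};
    }
  }
  return {status: 404};
}

function serializeState(state) {
  // Escape '<' so a state value containing '</script>' can't break
  // out of the inline <script> tag in the template.
  return JSON.stringify(state).replace(/</g, '\\u003c');
}

function renderPage(template, html, state) {
  // In the real app, `html` would come from
  // ReactDOMServer.renderToString().
  return template
    .replace('%%HTML%%', html)
    .replace('%%STATE%%', serializeState(state));
}

const indexTemplate =
  '<div id="app">%%HTML%%</div>' +
  '<script>window.__INITIAL_STATE__ = %%STATE%%;</script>';
```

An Express handler would call something like `matchRoute` on the request URL, fetch the iFixit data on a match, and send `renderPage`'s output; a 404 falls through to an error page.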
We're also going to take all that initial state that we have on the server and serialize it into something we can deserialize on the client, so the client can take over smoothly once that initial payload comes down. And finally, there's a pretty straightforward index template that just has placeholders for the state and for the React HTML. Nothing too weird there. All right, so that's what changed in this step.

What we're going to do now is run that new code and confirm that our changes have the effect we wanted, both in Lighthouse and in actual browsers. So let me just switch back over here. It's not really live coding, but it's kind of close. We're going to run our build process again and see how that goes. Our new server is going to start up, and this time we're going to be on port 8080. Once that's done, all right, let me just start this up again, and we'll test again in Firefox. Things look good. It behaves the way we want it to behave. That's great: we didn't break anything, at least. But let's go back to that Safari that still has JavaScript disabled and see what the experience is like there. And we actually have content this time. So that's great; it's a net win for those users. We click around a little bit, and sure, it's not the same smooth transitions and other things you'd get with JavaScript enabled, but we're building an application to view this content, and a subset of our user population can now view it. We should definitely consider that a win. All right, let's also go and rerun Lighthouse. Close this down, start it up again, generate another report. And hopefully we'll be able to put a number on the impact of that change in our Lighthouse results. OK, so we were at 30 out of 100, and we're up to 44 out of 100. It's progress.
And most importantly, we see the changes where we expected to. "Page contains some content when scripts are not available" is now a yes. So that's a win. We also see an improvement in page load performance as Lighthouse measures it; it's significantly better. We went from 2.4 seconds down to 647 milliseconds for first meaningful paint. So server-side rendering was a win there, too. We're heading in the right direction.

Next up, let's try adding a service worker to address some of the Lighthouse feedback about speed and offline functionality. First, an important reminder you've heard a lot today: we need to treat the service worker as a progressive enhancement. Not all browsers support service workers yet, and we want to be careful that we don't do anything to degrade the experience on browsers that don't support them. It's also important to keep in mind that even on browsers that do support service workers, the very first time users visit your page, the service worker won't be installed yet. So having a good experience for that case is super important.

With that out of the way, we're going to use that app shell plus dynamic content model that's been mentioned a few times. It happens to work really well with single-page applications, and since we're starting with a single-page application, it's natural to adopt it here. We take the shell, which is all the local HTML, JavaScript, and CSS we need to render the outline of our application, and make sure it gets loaded directly from the cache whenever possible, completely bypassing the network. Taking the network out of the critical path for getting things on the screen is the surest way to get consistently fast performance. So that's super important.
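The progressive-enhancement point boils down to a feature check before registration. Here it's written as a function that takes the navigator object as a parameter purely so the logic is easy to exercise; in a page you'd call it with `window.navigator`, and the service worker filename is whatever your build produces.

```javascript
// Register the service worker only when the browser supports it.
// Browsers without support skip registration entirely, so the app
// keeps working exactly as before: a progressive enhancement.
function registerServiceWorker(nav) {
  if (nav && 'serviceWorker' in nav) {
    // '/service-worker.js' is an assumed output path.
    return nav.serviceWorker.register('/service-worker.js');
  }
  return null;
}
```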
Next, we're going to use a runtime caching strategy to handle the requests for our dynamic content, which in our case comes from the iFixit API. That's going to populate the page after the shell is loaded. Under the hood, not surprisingly, we're going to use a tool that Matt talked about called sw-precache. It's going to generate our service worker for us, and it's going to implement all the caching strategies too. So there's not too much code we have to write in this particular case. But what we really do have to do is understand our network traffic and what we're actually requesting. This is important because we have to set up those routes; we can't just put in arbitrary URLs. You need to know what's going on.

I always turn to the DevTools in Chrome for this; other browsers have fantastic DevTools as well, but let's use Chrome for a second now. We'll do this again in a fresh incognito window. We open up DevTools, make it a little bit bigger, and go to the Network panel. Let's see what happens. I do this all the time; it's just part of how I approach a new application or a new service worker. You really need to understand what's going on in your network traffic, because you're basically replacing the network layer. This is crucial. You have to make peace with this. So you can see some of the requests going on. This is basically our app shell stuff: our local CSS, our local HTML. Then it starts making requests to, in this case, cloudfront.net for the images being displayed on the screen. And lots of images. We load in some JavaScript and things like that. Let's click around a little bit. What ends up happening, not surprisingly, is that it makes a request to the iFixit API, on the client this time. The initial iFixit API request was made on the server, but now the client is taking over.
It's just making requests to pull in information about that guide. So that's being pulled in, along with more images and things like that. OK, we have a feel for what's going on now. Let's take a look at how we actually implemented the service worker in this particular case. All right, this is the appropriate diff. So this is what's changed. We're pulling in the sw-precache library, and we've added a task to our build process. In this particular case we're using gulp, so we're using the syntax for that, but there are other ways of using sw-precache. Folks who really like npm scripts can run it from the command line and read the config from JSON; we're pretty agnostic when it comes to that. There's some boilerplate, but here are the interesting bits for what we're doing.

First of all, staticFileGlobs. This is a bunch of patterns that match all the local static files, the things that get requested as part of our app shell. We just need to tell sw-precache where to look. And if we end up adding anything later, if we make changes to our JavaScript or add extra images, it will automatically be picked up as part of the build process, which is great. Less stuff to worry about.

Next, we have this dynamicUrlToDependencies section. This is unfortunately a mystery to some folks; I definitely could do a better job of explaining it, so here's my attempt. You can pretty easily figure out when something that's local on disk has changed. If you have an image or a CSS file in a local directory, you can just calculate a hash and figure out that it's changed. But here we're dealing with dynamic URLs: these don't correspond to actual HTML files on disk. One example in this case is /shell, which is the URL we're using for our app shell, and it's effectively server-rendered.
It's a composite of a bunch of different things: some JavaScript, some CSS, and the underlying template for the overall structure. All of those things go into uniquely determining the content of /shell. So in order to make sure we have a cached copy of /shell that's automatically kept up to date, we need to tell sw-precache: take a look at all these local files, and if any of them change, consider the cached /shell content invalidated and refetch it so we have the latest. Again, if you're not doing server-side rendering, this isn't as important, and you can get away without it. But it's a pretty good solution for this fairly common use case.

Very closely related to that is the next section, which implements brute-force service worker routing. It's kind of "trisomorphic" JavaScript, sort of: the server has to know about routing, the client has to know about routing, and now the service worker has to know about routing too. Because once you navigate to some arbitrary URL for your initial navigation, like /guide/12345 or whatever, that's not a URL the service worker knows about. And we don't want to just panic, give up, and pass the request on to the server. We want to say: hey, whenever you see something like that, a URL that isn't explicitly in the cache, fall back to the cached content of /shell. So it's doing real brute-force routing. We might improve this in the future and add a little more flexibility, but for now we're just going to route everything to the shell. And finally, as mentioned previously, you can use sw-toolbox strategies for dynamic caching within your sw-precache config. It saves having to maintain things in two separate places.
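Put together, the generation step looks something like the following sw-precache configuration. The option names (staticFileGlobs, dynamicUrlToDependencies, navigateFallback, runtimeCaching) are sw-precache's real ones; the file paths, URL patterns, and cache limits are hypothetical stand-ins for this project's actual values.

```javascript
// Sketch of the service worker generation step; paths, URL
// patterns, and limits below are hypothetical.
const swPrecache = require('sw-precache');

swPrecache.write('build/service-worker.js', {
  // App shell assets: everything matching these globs is
  // precached, and new files are picked up on the next build.
  staticFileGlobs: [
    'build/css/**.css',
    'build/js/**.js',
    'build/images/**.*'
  ],
  // /shell is server-rendered, so there's no file on disk to
  // hash. Listing its local dependencies tells sw-precache to
  // refetch the cached /shell whenever any of them change.
  dynamicUrlToDependencies: {
    '/shell': ['build/js/main.js', 'build/css/main.css']
  },
  // Brute-force routing: navigations to URLs that aren't
  // explicitly cached fall back to the cached /shell content.
  navigateFallback: '/shell',
  // sw-toolbox strategies for the dynamic traffic we observed
  // in the Network panel.
  runtimeCaching: [{
    // API responses: serve from cache, refresh in the background.
    urlPattern: /^https:\/\/www\.ifixit\.com\/api\//,
    handler: 'fastest'
  }, {
    // Images at a given URL don't change, so cache-first is safe;
    // maxEntries expires old entries via IndexedDB bookkeeping.
    urlPattern: /cloudfront\.net/,
    handler: 'cacheFirst',
    options: {cache: {name: 'image-cache', maxEntries: 50}}
  }, {
    // Everything else: try the network, fall back to the cache.
    urlPattern: /./,
    handler: 'networkFirst'
  }]
});
```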
We already went through the process of looking at the Network panel in DevTools and figuring out what our dynamic traffic is. We have these iFixit API requests, and we know what they look like. Let's use the fastest strategy, which is basically stale-while-revalidate, for all of those. And for the images, let's do something a little different: we're going to use cache-first, because chances are an image at the same URL on that remote server is not going to change. It should be safe, at least for our case. And we have a dedicated cache, and we're using the maxEntries option there. This is super important. It's something you can't actually get with just the service worker primitives right now. We had to build on top of those primitives and use IndexedDB to keep track of all the entries in your service worker cache, and then expire the oldest ones once you've reached that limit. Definitely think about this. Folks don't have to use sw-toolbox, and they don't have to use sw-precache; you can write vanilla service workers if you want. But please keep this in mind: if you're building something and you're just caching things as you go, you really do need to think about your users' caches eventually filling up. Then we just have this default over here of network-first. We have a little bit more boilerplate, we add in the task, and finally we make sure that we only attempt to register our service worker if there actually is service worker support. So that's what's changed.

Let's start up the modified version at step three now. As part of this process, you can actually see the service worker generation: sw-precache will log a bunch of output telling you what it's caching and what the expected size of the precached initial payload is. So you see that extra output over here, and our server is running again. All right. We're going to confirm that in Chrome right now; you remember what this looked like before.
This was the previous version, fetching stuff from the network. So again, let's close things out and start fresh. In this particular case, the very first time, everything is still coming from the network, and that's because the service worker isn't in place yet. So that's expected. The next time I reload the page, though: we see our initial stuff coming from the network, we reload, and now we see what we expect. Over here it says "from service worker." This is how we can confirm, just visually in DevTools, that things are behaving as they should. Our app shell is coming from the service worker. You can actually see the fastest strategy in practice over here, where it's pulling in an additional copy from the network in addition to returning it from the cache. So you are making a network request, but most of the stuff is coming directly from the service worker, which is great.

Let's see how Lighthouse feels about that. Unfortunately, we kind of lost our previous reports, but I think we were at 44 out of 100, and hopefully we're a little bit higher now. All right, so we moved up a bit: we're at 59 out of 100. We are making progress in the right direction. And again, importantly, the things that we expected to change actually did change, confirming that this is not all for naught. We have a registered service worker, and it responds when we're offline. That's all great. Also, let me try going offline in Chrome Canary. I'm just going to toggle offline over here and do a navigation. It worked, if you trust that that actually simulates being offline. So OK, we're definitely making progress in the right direction.

And finally, let's address the missing application metadata. These are the options that allow you to control how your web app behaves when it's added to a mobile home screen; that's mainly what they're used for.
We'll add the metadata in both the web app manifest format and, wherever it makes sense, equivalent meta tags, with the goal of supporting a wide range of browsers. Don't just set this metadata indiscriminately, though; really give some thought to each of the tags. Don't just copy things you saw in an example. Some of them have really important meanings, like whether you're going to behave as a standalone application that hides the browser's URL bar. Maybe that's not appropriate for your particular case, and that's fine. As a developer, you get to make that decision, and you can do what you think is best for your users. That should be the ultimate goal.

All right, let's take a look at what's different. We have our wonderful icon that's been added in. We have our manifest, where I'm specifying the icon. This part is a little interesting: I'm using this utm_source=homescreen parameter as part of my start URL. So if you had Google Analytics running for your web application, you could keep track of where it's been launched from. sw-precache actually knows about these parameters and won't make you cache both a version of / and a version of /?utm_source=homescreen; it takes care of that for you, which is nice. In our case, we're using display: standalone. And here we have some related meta tags: we're setting theme-color, setting the apple-touch-icon, and setting the mobile-web-app-capable content="yes" tag. Again, not everybody needs to use these exact settings.

All right, let's take a look after making those changes and see what effect they have. We're going to run through the build process one last time. And what I'm going to do first is fire up iOS Safari running in the device emulator. I want to test what that experience is like, because we've added some metadata that's relevant to mobile Safari.
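Before looking at the result, a manifest along the lines just described would look something like this. The name, colors, and icon path are placeholder values; start_url, display, and the other members are standard web app manifest fields.

```json
{
  "name": "iFixit Repair Guides",
  "short_name": "Repair Guides",
  "start_url": "/?utm_source=homescreen",
  "display": "standalone",
  "theme_color": "#4285f4",
  "background_color": "#ffffff",
  "icons": [{
    "src": "images/icon-192x192.png",
    "sizes": "192x192",
    "type": "image/png"
  }]
}
```

Alongside it, the equivalent meta tags mentioned above would be along the lines of `<meta name="theme-color" content="#4285f4">`, `<link rel="apple-touch-icon" href="images/icon-192x192.png">`, and `<meta name="mobile-web-app-capable" content="yes">`, with the same caveat that the values here are placeholders.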
Let's see if it's actually doing what we want. I think I have it running over here: localhost:8080, our basic web app. Cool. Let's see what happens when we try to add it to the home screen. Add to Home Screen. All right, it picked up our icon. In this particular case, we're adding to the home screen from the main page, so it's going to use the main page as the URL. Paul Kinlan earlier talked about some interesting ways of handling the case where a user might add to the home screen while on one of the subpages, and redirecting to make sure that launching the shortcut always takes them back to the initial page. But for our particular case, it's pretty straightforward. We click Add, go over here, scroll over, and we have our web app saved over there. OK, that's what we want.

Let's take a look again, starting with a fresh incognito window, and this time at that new Application panel. This all looks good. We have our metadata; we can see that it's been parsed, and we can see the actual theme color live. So everything seems pretty good. Let's see if we've made Lighthouse happier with us. Go through the report generation process again. I think we were at 59 out of 100 last time, something along those lines. Cool, we're up to 94. So yeah, we went from 59 out of 100, and where we had a bunch of nos over here, we have some yeses now. So that's great. I think this process is definitely useful, as Matt mentioned previously, as part of continuous integration: making sure that you didn't accidentally introduce some regression that causes a really big drop in your Lighthouse score. That's super important, just as one additional check. OK, so we're getting to the end of this talk. But as developers, our journey really is never over.
New features like constructable response streams, which Jake was talking about earlier, will present additional opportunities for enhancement as they're rolled out more widely. And I hope you'll be inspired to follow my lead and use Lighthouse to guide you towards the ultimate destination: a progressive web app that delights your users. Focus on your users. Here are a few links, and I know I'm a little short on time, but definitely check those out. I'll highlight in particular the progressive web app code labs. You can do them in person here, and for folks who are watching this via video, you can go to that URL. They'll walk you through a lot of the stuff we talked about in this session in a hands-on way. So thanks, everybody. Please grab me afterwards, or ping me on Twitter, if you want to chat more about the progressive web app journey.