Hello, I/O. My name is Addy Osmani. You might have heard of progressive web apps. And it might seem like they went from being this great idea to now being talked about everywhere; it sort of escalated kind of quickly. Now, for many of us in this room, maybe you're working on an existing app. Or you're thinking about the next app you're going to be building. Maybe you're building it with a JavaScript library or a framework. And maybe you have progressive enhancement in mind, because you want to make sure that the experiences you're crafting work for as many users as possible. We're going to keep that in mind today. Now, before we continue, it's probably useful to remind ourselves what a progressive web app is. So progressive web apps use modern web capabilities to deliver app-like experiences. They evolve from pages in browser tabs to top-level items that exist on your user's home screen. And they exhibit reliable performance. Now, this is an app called Smaller Pictures that I've been working on. It's basically a photo app that lets you compress images on the go. And it's got features like the web app install banner, splash screen, and offline support. And when I got to Mountain View, I thought it would be useful to test this app out. So I took a picture of the bathroom in my hotel. Something that I discovered after doing this was that that bathroom actually has a better GitHub contribution graph than I do. Now, you might be wondering, well, are progressive web apps unique to Chrome? They're often talked about in the context of the Chrome browser. And the answer is absolutely not. In fact, if you take a look at Opera on Android, you'll see that they've got a strong progressive web app story as well. They've got things like the web app install banner working, splash screen, and offline support, thanks to Service Worker. And it's not just them. If you take a look at Firefox on Android as well, they've similarly got a lot of explorations in this area going on.
Add to home screen features, offline support, and web app install banners are being experimented with at the moment. When you're building a progressive web app and you have progressive enhancement as sort of a core tenet of the experiences you're trying to craft, they just work for everybody, regardless of whether they're on Safari on iOS or they're trying to look at your content from an area that might have seriously limited connectivity. When you're crafting these experiences with architectures like server-side rendering in mind, it just makes sense to be able to ship faster experiences for everybody. You might also be wondering, well, are progressive web apps a Polymer-only thing? At I/O this year, we did show a first-class experience for building progressive web apps using Polymer. But you can use any tech stack to create progressive web apps. In fact, over the last few months, we've seen an increasing number of really large apps launch using progressive web app features. The first one is 5miles. It's sort of a Pinterest meets Craigslist. They launched a full progressive web app with an application shell architecture, add to home screen, and a splash screen. This was built using AngularJS. And they found that their add to home screen experience led to 30% better conversions. Similarly, Flipkart saw three times more time on site with their progressive web app, and a 70% better conversion rate from their add to home screen users. That's an app built using React. And just last week, we saw the Selio app, which is a marketplace app for buying and selling goods locally. This is an app built using React, Express, and Flux, and it uses universal JavaScript. They found that their average session length inside the progressive web app equals native. And they also found that their user acquisition costs were 10 times cheaper than native. Because they used web push notifications, they saw improved user retention.
And they used server-side rendering and found that that reduced bounce rates with things like sharing posts and buying ads. So today, we're going to talk about how to build progressive web apps using React, Angular, and Ember.js. Before we do that, let's talk about waiting. Waiting is something that we all have to do as a part of our daily lives. It's sort of this frustrating thing. You could be at a restaurant waiting for someone to bring you a menu or waiting for someone to bring you your food. And 20 or 30 minutes could easily pass by, and you're still wondering, should I go up? Should I ask someone for a menu? Should I ask someone if the food is ready? You could be waiting maybe an hour, or maybe two hours. It gets to a point where you start questioning your life choices and you start wondering, should I just eat my wallet instead? On the web, the equivalent of this is your users considering leaving your site or your app. And that's not something that you want. So a key differentiator between the web and native is that native apps have got reliable performance. And that's something that we'd like to strive for with the experiences we're shipping on the web. So how do we deliver native-caliber features while still being able to use JavaScript libraries and frameworks? Well, before we talk about that, there are a few key moments that we need to keep in mind when it comes to user experience and performance. The first is: is it happening? Time to first paint. Now, this is when you're showing your user a splash screen or a loading indicator, basically some feedback that communicates that navigation has started on your page. Next, you have time to first meaningful paint. This is the is-it-useful moment. This generally focuses on paints of above-the-fold content, headline text, something that the user can actually use and find useful. Then you've got time to first meaningful interaction. So, is it usable?
If the user goes and starts tapping around your UI, is something useful actually going to happen? Now, at I/O this year, we announced a new tool called Lighthouse. Lighthouse is a Chrome extension and a CLI that checks that you have your progressive web app features in place. It also has initial support for loading performance metrics. Also on the slide is WebPageTest. It's a tool that's been around for quite some time, and it basically helps you performance-profile your sites on real-world devices with different types of network connectivity profiles. Now, one metric that both of these tools look at is the Speed Index, a metric that looks at how visually complete a page is. And for the apps that we're going to be looking at today, the React apps, the Angular apps, the Ember apps, our target on cable is going to be a Speed Index of 1,000. So ideally, things loading in under a second. On 3G, our target is 3,000, so under three seconds. And on repeat view, using Service Worker, our goal is to make sure that we're loading things as close to zero as possible. If it's under 1,000, that's great, but as close to zero as possible. So at I/O, we've been talking about killing the offline dinosaur. Ideally, you want to be able to build experiences where you're no longer having to play that game; you're actually giving your users something valuable. I spent two hours on this slide, and I make poor life decisions. I'm sorry. The underpinning of this is Service Worker. It's basically a script that runs in the background, separate from your page, responds to events like network requests, and basically gives you the ability to define really well-crafted offline experiences. Now, Service Worker gives us the opportunity, really, to start rethinking our application architectures. One such architecture is the application shell architecture. And the idea here is that we try shipping down the wire, as quickly as possible, just the content necessary to load up our application UI.
So things like the toolbar, the drawer, maybe some cards. And then later on, we dynamically populate that view with our content. Now, this type of model works particularly well for single-page applications, but it's nuanced. You might be working on a content-heavy site, and it might be more important for you to get actual text down to your users first. So just keep that in mind when you're evaluating your options when it comes to these types of architectures. But if the application shell architecture does make sense for your app, the way that you achieve some of these improvements, in addition to Service Worker, is doing things like relying on critical-path CSS, so inlining the critical styles for your application shell in the document head, and then asynchronously loading in the CSS and JavaScript needed for the current view. We can then use things like the async attribute on script tags and libraries like loadCSS from the Filament Group to asynchronously load in the content that we need. And the performance wins of such a model are actually quite stark, which you'll see, if this demo is working: on repeat view, it's almost instant. This is like the simplest version of the application shell architecture we could come up with. And you've got instant performance for the repeat visit compared to the first visit, where it took much, much longer. So let's talk about React. React is a UI library for creating interactive, stateful, and reusable UI components, built by Facebook. It uses a virtual DOM to try to do the least amount of DOM manipulation needed to keep your components up to date. And it works really well on both the client and the server. Now, I'm a big fan of Hacker News. I use it every day. And I thought maybe it would be useful to be able to try using Hacker News offline. Unfortunately, the Hacker News site doesn't work offline. You'll get the dinosaur if you try using it. By the way, this is the dinosaur that we have over in the mobile web area.
You might notice that it seems to have four legs, for some reason, and two arms. I just wanted to clarify, on behalf of the Chrome team: the back two legs have been deprecated, but they haven't been removed just yet. Moving on. So back in 2014, Hacker News actually announced that they now have an API, an open API that anyone could go and build a Hacker News client with. And that API was based on Firebase. This is a real-time API. Any time someone goes and makes a comment, you can actually stream that latest content through to your Hacker News clients. And so I decided to go on GitHub and try to find out if anyone had been hacking on a really interesting Hacker News client. And I found this one by Jonny Buchanan. This is a really nice client. It lets you take a look at all the different views available in Hacker News. You've got new, comments, show, ask, jobs. And it'll highlight any time a new comment is added or someone has gone and edited their comment. Unfortunately, this didn't work offline either. And so I thought it would be kind of interesting if we went in and explored what it would be like to take this complex React app and turn it into a progressive web app that worked offline. So the first thing we needed to do was make it responsive. And I know that sounds silly, but it's still quite a common thing for people to forget this one step. So we went in and we optimized things, like the amount of viewport real estate being used on mobile. We wanted to make sure that it actually used up as much space as possible on iOS and on Android. And we wanted to avoid things like overlapping menu items. So we fixed those as well. All of this is just done using media queries, and it's nothing particularly special. The next thing we added was the web app manifest. So I went into this project, and I noticed that it had a really tiny favicon, and that wasn't going to cut it.
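For reference, a minimal web app manifest for an app like this looks something like the following; the names, colors, and icon paths here are illustrative rather than the project's actual values:

```json
{
  "name": "React Hacker News",
  "short_name": "React HN",
  "start_url": "/",
  "display": "standalone",
  "background_color": "#222222",
  "theme_color": "#222222",
  "icons": [
    { "src": "icons/icon-192.png", "sizes": "192x192", "type": "image/png" },
    { "src": "icons/icon-512.png", "sizes": "512x512", "type": "image/png" }
  ]
}
```

The background color is what's painted behind the splash screen, the theme color tints the browser UI, and the icons array is what add to home screen and the splash screen draw from.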
So I redid all of the artwork in Sketch, exported all the icons out, and set up a web app manifest so that we could launch this thing from the home screen and get a splash screen. This is what the manifest looks like. It's nothing particularly crazy. We basically have a background color set for the splash screen, another color set for the theme color, and we've got our icons defined in there as well. Now, this application is actually a lot more complex than it might initially seem. We have this large set of complex real-time data being streamed to the client, and then we have this application UI, this skeleton, that we're using across all of our different views. So we're going to split this problem into two parts. The first is caching our application skeleton, and the next part is going to look at content caching. Now, for caching our application skeleton, we're going to use two libraries: SW Precache, which is a build tool that will go and help you precache all of the assets that you need for your first application shell render, and SW Toolbox, which is a library for handling all of your dynamic views and content that might get rendered later on. Now, in this application, like many React apps, we're just using webpack behind the scenes for our module bundling. But SW Precache can actually be added on after the fact. So here I've just got some npm scripts set up for this project. I've got SW Precache set up as just another step that I run after my build is completed, and I'm configuring it using a precache config file referenced in my package.json. So this is what the config file looks like. It's basically importing in SW Toolbox, it's importing in some rules for runtime caching, and I'm able to configure things like what other files I might like to have cached in there. Now, in runtime caching, I can specify what other origins I would like to have cached.
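A config of that shape looks roughly like this; the file globs and URL patterns here are illustrative, not the project's actual values, but `staticFileGlobs` and `runtimeCaching` are the real sw-precache options, and the runtime rules get handed off to SW Toolbox:

```javascript
// sw-precache-config.js, a sketch. File globs and patterns are illustrative.
const swPrecacheConfig = {
  // static assets to precache for the first application shell render
  staticFileGlobs: [
    'dist/index.html',
    'dist/js/*.js',
    'dist/css/*.css'
  ],
  // runtime rules that sw-precache hands off to SW Toolbox
  runtimeCaching: [
    {
      // anything coming back from the Hacker News Firebase API
      urlPattern: /^https:\/\/hacker-news\.firebaseio\.com\//,
      handler: 'networkFirst'
    },
    {
      // web fonts coming back from the Google CDN
      urlPattern: /^https:\/\/fonts\.(googleapis|gstatic)\.com\//,
      handler: 'cacheFirst'
    }
  ]
};

module.exports = swPrecacheConfig;
```

The handler names, like networkFirst and cacheFirst, are SW Toolbox caching strategies, so you choose per origin whether freshness or speed wins.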
So in this case, I've set it up to cache anything coming back from Firebase, from the Hacker News Firebase API. And I can also set it up to cache things like Google Fonts or anything coming back from the Google CDN. My last step is going and actually registering my service worker. So I add the boilerplate code to my index file and just make sure that it's registering the service worker that's being generated by SW Precache. And once I go and load up my page and look at a repeat view, I can see in DevTools that all of my static assets are now being served from the service worker cache. So I go into airplane mode and I check out this app. And I'm happy. I've already opened this up before, and I'm starting to look at content I've seen previously. But there's something wrong here. Although my application shell consistently renders really, really fast offline, as soon as I close that app, while I'm still in airplane mode, and I launch it from the home screen, I get this. No content. The reason for that is that Firebase's offline support is limited to the session that the user currently has. As soon as you close that off and you try going and relaunching from the home screen, you're not going to be able to get that same session data back. You need to actually cache it a little bit separately. So that leads us on to content caching, and something I've spent the last four weeks trying to solve. So Firebase, generally, because it's real time, works using WebSockets. This is just inspecting some WebSocket frames in DevTools. You've got little bits and pieces of comment data and metadata that you might use to construct a comments page. Now, I first thought maybe it would be an interesting idea to store anything that comes back here in IndexedDB. Unfortunately, I'd forgotten that IndexedDB is the worst API known to man and should probably be burned. Luckily, there exist a number of really good abstractions for working with IndexedDB, things like localForage, Dexie, and idb by Jake Archibald.
And so here we are inside of our React app once again. I'm just consuming the Firebase API the same way you would today. And I thought maybe I could be a little bit clever. What if I built some middleware that would basically proxy anything coming back from the Firebase API to IndexedDB, and we just stored that? It seems like a not-terrible idea, right? Seems like it would be fine. Unfortunately, what happened there is that if you went and viewed any story on the Top Stories page and you opened it up, about 10 to 15 seconds later you'd end up with 3,000 or 4,000 records in IndexedDB. You wait another minute, and it'll grow to 4,000 or 5,000, because you're dealing with real-time data constantly updating IndexedDB as you go. On mobile, if you open up an app where you've had a large, large number of records stored, because you've been looking at lots of pages and you want all of them to work offline, what'll happen is, in this app, we had to iterate over the entire collection. It ended up grinding Chrome to a halt. And that just wasn't good enough. So I then said, OK, well, maybe I should rethink this problem a little bit. What if we just used web workers? So I was using localForage, and I thought, OK, I'm going to batch all of my writes for this application and only try storing them maybe every 30 seconds instead. Unfortunately, IndexedDB is still really bad when it comes to these types of problems. And I found that that just still wasn't good enough for this app. And then I had this revelation. I was sitting in an airport lounge, and I thought, well, I don't really need this data to be entirely fresh. I don't need it to be updating every second so that I have every single comment. And I realized that Firebase actually has a REST API. And the REST API allows you to basically go and fetch a static version of content from a JSON endpoint. And that's something that's much, much easier for you to cache. So imagine we have our comment page, and we have all of these different comments.
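The endpoint shape comes from the published Hacker News API docs, and it can be sketched as a tiny helper; the Cache API call in the comment is a sketch of the idea, not the app's actual code:

```javascript
// The public Hacker News API exposes every item (story, comment, job) at a
// static JSON endpoint. URL shape is from the published API docs.
function itemUrl(id) {
  return `https://hacker-news.firebaseio.com/v0/item/${id}.json`;
}

// In a service worker, a response from one of these endpoints can be stored
// in the Cache API (sketch):
//   const cache = await caches.open('hn-content');
//   await cache.add(itemUrl(story.id));

console.log(itemUrl(8863)); // https://hacker-news.firebaseio.com/v0/item/8863.json
```

Because each item is just a static JSON document, the usual HTTP caching strategies apply, which is exactly what the WebSocket stream made so awkward.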
I've now got JSON endpoints for every single one of them that I can go and cache. And that's what we do. So I introduced an offline mode into the React Hacker News client. Here I am. I'm still online. I've launched my app. And I'm just going to go and take a look at one of the Google articles here. You'll see that this is a page that has a lot of comments. There's a lot of content here. We're scrolling, we're scrolling. Maybe I'm on the tube or on a bus or something, but I still want to be able to view that content offline. So I'm going into airplane mode, and I'm just going to relaunch that app. And with content caching in place now using the REST API, I can actually cache all of that data in the Service Worker Cache API instead of just using IndexedDB. And as you'll see, it just works. I can load up any of the pages I've previously visited, and they'll work fine offline. I ended up delaying a 300-person flight 10 minutes just so I could get the pull request in for this feature, because that was how happy I was to get this working. Now, one of the lessons there was that I had to rethink what I wanted my offline experience to be. The technical side of getting Service Worker set up and caching set up, that wasn't hard. It was crafting the right user experience. Next, we're going to talk a little bit about universal rendering. So Paul Lewis put up these really interesting graphs a few weeks ago. He said that in a JavaScript-based render, you're usually reliant on all of the script being downloaded, parsed, and evaluated before you can render the page. This ends up wasting a lot of time between when the HTML arrives and when you give the user something meaningful on the screen. Something slightly better is the situation we see a lot of frameworks moving to at the moment: a server render with hydration, where you send a view to the user.
The downside of this is that you're still reliant on the JavaScript actually being loaded up before someone can interact with your application. This can result in a little bit of an uncanny valley, where the user sees the application, but they tap and nothing actually happens. Somewhere it would be amazing to see a lot of frameworks move to is this idea of progressive rendering and bootstrapping, where you send a functionally viable but minimal view in HTML, JavaScript, and CSS, and the app progressively unlocks more features as the user starts traversing and navigating around it. So I'd love to see more of us move in that direction. For the server-side rendering portion of the React HN app, React Router was just amazing. It worked really, really well on the server. We found that we were able to reuse most of our routes with very little effort. We just put together a very quick Express server to get this set up. Some of the little lessons that I learned along the way were around making sure that we properly guarded ourselves against globals. This app was still making use of things like local storage and session storage, so we had to guard ourselves so that it wouldn't fall over. We had to minimize reliance on things like the DOM. The previous author was doing things like using the DOM to parse out the host name, so we had to switch away from doing that. Now here's where we ended up. We started off in a place where, on cable, we had a speed index of 2,063. And by the time we'd added Service Worker and server-side rendering, and we used all these content caching techniques together, we got to a place where we had a speed index of almost 1,000, so close to perfect. Now on 3G, where conditions are even worse, we're still in a pretty good place. We started off with a speed index of almost 4,000, and we ended up in a place where, with Service Worker caching and content caching, we've got a speed index of 1,400.
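Those two server-side guards, checking for browser globals like localStorage and parsing host names without the DOM, can be sketched like this; the in-memory fallback is an assumption for illustration, not the app's actual code:

```javascript
// Guard access to browser globals so server rendering doesn't fall over.
// On the server, fall back to a trivial in-memory store.
const storage = typeof localStorage !== 'undefined'
  ? localStorage
  : (() => {
      const data = new Map();
      return {
        getItem: key => (data.has(key) ? data.get(key) : null),
        setItem: (key, value) => data.set(key, String(value)),
        removeItem: key => data.delete(key)
      };
    })();

// Parse out the host name without touching the DOM, using the URL API
// instead of the old createElement('a') trick:
function hostnameOf(url) {
  return new URL(url).hostname;
}

console.log(hostnameOf('https://news.ycombinator.com/item?id=1')); // news.ycombinator.com
```

The same code path then runs unchanged in the browser and in Node, which is the whole point of universal rendering.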
Now, these types of experiences: this is something that I've done just over the last couple of weeks on an existing large React code base. So if you're in a position where you have something similar and you want to turn it into a progressive web app, it's very, very possible. The tools are just there. So I encourage you to try that out. Next up, we have Angular. So Angular is a framework that's been around a very long time. It tries to tackle building complex applications, and it originally focused very much on being data-first and using declarative HTML and data binding. I'm going to talk about Angular 2, but before we do that, I know that there are still people that build on Angular 1 or have existing legacy Angular 1 apps. And you might be wondering, well, all of this progressive web app stuff is great, but does it work with Angular 1? Yes, but with caveats. So I tried building an Angular 1.5 app just to demonstrate this. And I was trying to think of what app I would build. I initially thought of building an app that would show me the nearest places I could go and skydive. And then I realized that I don't skydive. The closest thing to me skydiving is probably zooming in on a Google map really, really fast. And that's basically the same thing, so I'm fine with that. So this is the app that I built. It's called Cherry. Built it in about an hour or two. This is using Angular 1. As you can see, we've got add to home screen features, a splash screen, and offline support with Service Worker. Unfortunately, the downside to this app is that it just doesn't meet our performance budgets. On 3G, take a look at where our goals are for fast first paint: it's taking 2,500 milliseconds for first paint in this app. Our speed index metric is 8,400. That's crazy. We shouldn't be taking so long for our users to actually be able to start using this application. Luckily, with Service Worker in place, we are able to slash these numbers in half.
So if you're in a place where you can't, in the near term, move over to rewriting your application to use a modern framework, I would consider using Service Worker. You're still going to get some wins there. But it's not going to be quite as good as something that has support for server-side rendering and is developed with mobile in mind. That takes us to Angular 2. So Angular 2 is component-based, uses directives, and has an improved dependency injection model. And the Angular mobile team have recently been focusing a lot of their time on trying to make it easier to build progressive web apps using their tooling. So we're going to talk a little bit about that. Now, to get started with Angular 2 apps, a lot of the time they'll recommend using the Angular CLI. This does everything from setting up your TypeScript and typings for you, to bootstrapping your app, to using System.js to load up your app. And so we're just going to use that to very quickly scaffold up a new app. Now, one thing to keep in mind is a new flag called mobile. What mobile will give you is all of this. When you create a new app, it's going to give you a few extra things that help you make a progressive web app: things like scaffolding out a web application manifest for you, a build step to generate an application shell from your application's root component, and Service Worker support for app caching. So things like add to home screen are possible. You'll still get the splash screen. You'll still get your add to home screen icons. All of those are fleshed out for you. And the manifest doesn't look too different from the manifests you've looked at today. It's exactly what you'd expect it to be. Now, most of the time when we're developing apps, we use a site called realfavicongenerator.net to generate our icons for us. It just avoids you having to handcraft these yourself. Next up, we've got the application shell.
So by creating a new project with the Angular CLI and the mobile flag, the application already has a build step configured to go and generate you an application shell from your root component. So here, we're going to just use the main component that the Angular CLI created for us to create our application shell. We're going to do something with Angular Material. So let's say that we want to add a toolbar to this application. We npm install the dependencies we need for this toolbar, and then we register the toolbar inside of our main component, hello-mobile in this case. So we see that we've got application shell directives that are going to help us with setting up the application shell. We've got our toolbar in place. This isn't particularly useful just yet. So we're going to go and add a proper toolbar in there, inside of our template. So it says, hello, mobile. So far, this is fine. This isn't particularly exciting. What is exciting is these two little helpers. So these two helpers are shell render and shell no render. And these are kind of badass. Shell render will allow you to define any UI that you want to display on the screen before your main Angular bundle has completed asynchronously loading in. Shell no render is what's going to be displayed after the entire application has fully rendered out. Meaning that you can define your application shell declaratively, right inside of your component. So in this case, we can do things like make sure that the toolbar and at least a loading spinner are rendered, so that thing that we wanted to achieve for first meaningful paint is in place. And then we can use shell no render for anything else that comes after the fact. Now, the Angular mobile toolkit comes with support for generating a service worker that will automatically cache your assets for you. It doesn't yet entirely support things like dynamic caching, but it does support caching for your static assets.
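Used in a component template, the two shell directives look roughly like this; treat the directive names and markup as assumptions recalled from the toolkit rather than a definitive reference:

```html
<!-- Shown immediately, before the main Angular bundle has loaded: -->
<md-toolbar *shellRender>
  <span>Hello, mobile</span>
</md-toolbar>
<div *shellRender class="loading-spinner">Loading…</div>

<!-- Rendered only once the application has fully bootstrapped: -->
<main *shellNoRender>
  <!-- the real application views go here -->
</main>
```

The build step pre-renders the shellRender branches into the static shell, so the toolbar and spinner paint before any JavaScript runs.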
And you can go and start using this service worker support on serve, using the prod flag, and on build, again using the prod flag. I've been hacking on a little Dribbble API client called Berry using this stuff, and I found it particularly good for that. Offline support wasn't very difficult to get in place. And using the CLI, it's actually going to scaffold out one of these files for you in the background. This is an Angular service worker manifest. It contains basically all of the files that are going to be cached, as well as a hash that allows the service worker to know whether or not they're stale. So what do the numbers look like? We're going to take a look at an Angular weather app that the team has been working on. And this uses the application shell model. It uses the Angular CLI. It uses a lot of the tooling that they've recently been working on. And with Lighthouse, these are the metrics that we currently get. So first paint is about 1,083 milliseconds. That could be better. Speed index is 2,271, and the speed index on repeat visit is 599. Now, these aren't terrible numbers. These are actually pretty good numbers, but we can do better. So recently, the Angular mobile team have also been looking at offline compilation. Now, in Angular 1, template compilation can happen many times during your application's lifecycle. Angular 2 tries to move some of this work into a build step, making it a lot more efficient. So offline compilation happens during a build step, as I said, and it no longer happens in the browser. We convert things like templates into code on the server, so we don't have to ship the compiler at all. And what this leads to in terms of application performance is some stark differences. Fast first paint is happening at 479 milliseconds. Speed index is almost exactly where we want it to be; it's very, very close to 1,000. And repeat visit is close to nothing. You're getting what's basically an instant application on repeat visit. This is kind of awesome.
Now, we also wanted to take a look at what server-side rendering would look like for these applications, because we can still squeeze out a little bit more performance here. Getting your users a first view really, really fast is kind of critical these days. And Angular Universal basically helps you achieve this with pre-rendering on the server. So the way this works is, whenever a user goes and clicks on a link to your web app, Angular Universal is going to server-side render the initial response and get it to the user, so they have at least some UI in front of them that they can take a look at. A couple of seconds later, it'll then asynchronously load in the rest of the bundle, it'll bootstrap, and the client will take over. This is sometimes called hydration. And it just means that you end up with pretty good perceived load time performance overall. One of the interesting things that I came across in this experience over the last couple of weeks was that this notion of user events being stuck in an uncanny valley is pretty much consistent across React, Angular, and Ember. Even with server-side rendering your first view, you're still going to end up in places where your user can end up clicking and touching pieces of UI, trying to make things happen, and they'll just be stuck until your entire app bundle has arrived. So in the Angular community, gap events is the term that they use here: things that the user is going to try accomplishing while they're still waiting for bootstrapping to occur. But thankfully, the Angular team have been working on an interesting solution called Preboot that basically allows you to record and play back events. Now, I'm not saying this is the perfect solution to this problem, but it's kind of very interesting. If the user does try interacting with your app, as soon as it has finished bootstrapping, it will replay all of those events for you so that they're not having to redo work.
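The core record-and-replay idea can be sketched framework-agnostically; this is a toy illustration of the concept, not Preboot's actual API:

```javascript
// Toy sketch of record-and-replay: buffer events that arrive before the app
// has bootstrapped, then flush them to the real handler once it's ready.
class EventBuffer {
  constructor() {
    this.buffered = [];
    this.handler = null;
  }
  // Called for every user event; records until a real handler takes over.
  dispatch(event) {
    if (this.handler) {
      this.handler(event);
    } else {
      this.buffered.push(event);
    }
  }
  // Called once bootstrapping completes: replay everything in order,
  // then hand live events straight through.
  takeover(handler) {
    this.handler = handler;
    for (const event of this.buffered) handler(event);
    this.buffered = [];
  }
}

// Usage: a click fired pre-bootstrap is replayed on takeover,
// and later events go directly to the handler.
const buffer = new EventBuffer();
buffer.dispatch({ type: 'click', target: 'buy-button' });
const seen = [];
buffer.takeover(event => seen.push(event.type));
buffer.dispatch({ type: 'keydown', target: 'search' });
console.log(seen); // ['click', 'keydown']
```

Preboot does considerably more than this, such as capturing events from the server-rendered DOM and mapping them onto the bootstrapped one, but the buffering-then-replay shape is the essence of it.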
It's kind of interesting. So to get this set up, again, we use Express. We just import in Angular Universal, which doesn't take a lot of work. And this is what the numbers look like. We start off with a speed index of 2,640 on 3G. Once we've added offline compilation, Service Worker, and universal rendering, we're down to almost nothing. You're in a place where universal rendering has made a huge difference. And this is very much true of the React app we were working on earlier as well. Server-side rendering makes a huge difference, a really huge difference. And as you can see, with Service Worker in place again, repeat visit is close to nothing. Users are going to feel it being instant. Now, the Angular mobile team are working on making all of the developer conveniences around this stuff a lot more low-friction. Some of this work is still very much early on. They're working on improving their Service Worker support, adding pass-through caching support, a data savings mode, making sure that they're looking at HTTP/2 and how that interacts with Service Worker, and just improving the docs. So if you're working on an Angular 2 app at the moment, check out the mobile toolkit. And finally, we have Ember.js. Now, Ember is a framework for building ambitious web applications. It utilizes components, and routing in Ember is a core tenet there. And the Ember router, similar to React Router, works just fine on both the client and the server. Now, interestingly, frameworks have a bad rap when it comes to performance, right? Henrik Joreteg recently said that, in his opinion, the data for Angular, and I think he was talking about Angular 1 there, and Ember flat out disqualified them for mobile use. And this is mainly down to bundle size. Ember as a framework still has a relatively large bundle size. Now, the Ember team acknowledged this a couple of months ago, and they've been looking at how they could improve the situation with mobile in mind.
So they've taken a look at both first boot and repeat boot for this problem. On first boot, they're looking at FastBoot, their solution for server-side rendering. And I really love FastBoot; we'll talk about it in a minute. On repeat boot, they're looking at Service Worker. They're also looking at some other interesting ideas, things like string loading, which basically means shipping JavaScript modules as strings so that you only pay the evaluation cost for modules that you're actually using, and "svelte" builds for stripping back features of Ember that you're not using. I was trying to think about what progressive app I'd build for this, and I came across the idea of building a puzzle app that would just say "go outside" when you complete it. Maybe not the best idea. But I ended up on Super Cool Blog, which is basically a blog app. This is the final thing that we end up building for Ember. Basically, this is an offline-first Ember application that works instantly, meets our performance targets, uses server-side rendering, and works without JavaScript enabled, by the way. So let's take a look at how something like this can be constructed. Now, applications in Ember are crafted using the Ember CLI. You go and install it, and it lets you scaffold out your application architecture, a very basic file structure. Go and serve it, and you get a very, very basic app out the end of that. Now, making sure that your routes are right and you have a good routing setup will make things like server-side rendering so much easier. So we make sure that in this application we've got our routes figured out. We've got very simple routes: one for the different categories we've got going on, and one to display articles. And all of the content for this application actually just exists in JSON files. I could be using a back-end, an API of some sort; I could be caching stuff coming back from Firebase. In this case, I just have all of my data locally.
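The Ember CLI workflow described above is just a handful of commands. A sketch, with a hypothetical app and route name:

```shell
# Sketch: scaffolding an Ember app with the Ember CLI.
npm install -g ember-cli        # install the CLI globally

ember new super-cool-blog       # scaffold the basic file structure
cd super-cool-blog

# Getting the routing setup right early makes server-side
# rendering much easier later on.
ember generate route category   # e.g. a route per content category

ember serve                     # serve the very basic starter app
```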
And it just makes caching a little bit easier. So this is what our very, very basic application looks like at the very start. We've used Ember CLI, we've scaffolded out our views, added some CSS, and we're just using templates to load in the content. It's nothing particularly complex just yet. Now, the first thing we're going to do is add in the web app manifest. Similar to the other sites, it's not particularly difficult; it's something that takes five minutes a lot of the time. So we add this in, and this just makes sure that when the user comes to add the site to their home screen or launch it, they've got home screen icons and a splash screen in place. Next up, we have service worker support. Now, about a year ago, the Ember team said that they'd been watching some of Jake Archibald's service worker videos, and they were really excited about them. One of the things they wanted to do, and this is true, I think, of a lot of frameworks, is take web platform features and try to make them usable for their users with as little friction as possible, automating that process as much as they can. So Ember wanted to make this something that's free. They're not quite there yet with baked-in support for service worker, but luckily, the community put together an add-on for Ember CLI called Broccoli Service Worker. Now, one of the cool things about Broccoli Service Worker is that it happens to build on top of Service Worker Toolbox, that thing we were using for our React app earlier on. Now, the configuration for this looks a little bit like this. Basically, inside our Ember environment config, I'm just saying that I'd like debug messages enabled and the service worker enabled. But I actually get control over a lot more of my offline experience, being able to set things like network-first URLs and paths to include and exclude.
And when it comes to Service Worker Toolbox, you actually get quite a lot of control over different network strategies. We're able to say, well, I'd like you to go out to the network first and then fall back to the cache, or use the cache first and then fall back to the network, or race both strategies and make sure that whichever wins delivers you the content. You've got a lot of flexibility there. So once we have offline support in place, I'm able to launch my application in airplane mode and view any of the content I've previously seen. I can go and read articles, I can go and look at menu items, and all of it just works offline pretty seamlessly. Now, next up, we've got server-side rendering with FastBoot. I've got to be completely honest: when it comes to adding server-side rendering after the fact to a framework-level application, Ember has got this down to an art with FastBoot. I found this the easiest thing to set up, and it took very, very little configuration. So FastBoot is basically middleware for rendering Ember apps on the server. The idea is that in a traditional client-side app, your browser first has to request a bunch of different static assets from the asset server, and while those are loading, your users just end up looking at a white screen. The way they solve this with FastBoot is by running an instance of Ember in Node on the server. When a request comes in, we already have the app warmed up in memory; FastBoot takes the URL you're trying to reach, renders it on the server, and sends it back down. So you install ember-cli-fastboot, an add-on for Ember CLI. And the one thing I had to keep in mind here was making sure that anything I'm using that fetches data from other origins or an API takes into account that this code is going to run on both the client and the server.
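The network strategies described at the top of this section map directly onto sw-toolbox's built-in handlers. A sketch of how they're picked per route inside a worker built on the toolbox (the URL patterns are hypothetical; `networkFirst`, `cacheFirst`, and `fastest` are real sw-toolbox handler names):

```javascript
// Inside a service worker built on sw-toolbox: choosing a network
// strategy per route. Patterns here are hypothetical examples.
importScripts('sw-toolbox.js');

// Try the network, fall back to the cache (good for article content
// you'd prefer to be fresh).
toolbox.router.get('/articles/*', toolbox.networkFirst);

// Serve from cache, fall back to the network (good for the app shell
// and static assets).
toolbox.router.get('/assets/*', toolbox.cacheFirst);

// Race cache and network; whichever responds first delivers the content.
toolbox.router.get('/images/*', toolbox.fastest);
```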
So you end up needing to use an isomorphic, or universal, fetch implementation. And thankfully, Ember have got one of those. So I just swapped in the fetch implementation I was using, so that any time I fetched client-side data, it would work fine on both the client and the server. And I end up with this: I'm basically able to get server-side rendering working fine. This is actually working pretty fast on Safari on iOS as well. It might not have service worker support yet, but server-side rendering just works fine. Now, one of the benefits of this is, of course, that the content for my application is all just there. I'm not waiting for any JavaScript in order for the user to actually be able to read the articles. If they're in a place where they have limited network connectivity, everything will just work. Another benefit is that this works without JavaScript enabled as well. Now, you might be wondering, well, OK, great, I can read content, but what about my UI? I spent a little bit of time recrafting the drawer menu for this application entirely in CSS. And that just means that my user is able to use some UI with the content in place, even if my JavaScript bundle is still loading with the rest of the framework. And then we've got a little bit of final performance optimization to squeeze as much as we can out at the very end. So this is what our index looks like for this application. It's using some templating helpers. And one of the things I want to add is that critical-path CSS that we talked about earlier for the application shell. So we install an add-on called Ember CLI Inline Content and configure it inside of our Ember CLI build. Basically, this application hasn't got a lot of styles, so I don't really need to do anything special to get my critical-path CSS; I'm just going to inline the entire style sheet inside of my document head. And finally, we've got that one problem of large frameworks still being large frameworks.
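Swapping in the isomorphic fetch looks something like this in a route. This is a sketch using the ember-fetch add-on's `'fetch'` module; the route and data path are hypothetical for the blog app:

```javascript
// app/routes/category.js (sketch): fetching data with an isomorphic
// fetch so the same code runs in the browser and in FastBoot's Node
// process. Uses the ember-fetch add-on; route/paths are hypothetical.
import Ember from 'ember';
import fetch from 'fetch'; // ember-fetch shims this on client and server

export default Ember.Route.extend({
  model(params) {
    // Works identically client-side and during server-side rendering,
    // so FastBoot can render the articles before any client JS runs.
    return fetch('/data/' + params.category + '.json')
      .then((response) => response.json());
  },
});
```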
And unfortunately, this is something that the frameworks still have to solve on their own. Now, Ember, before you gzip it, is almost 700 kilobytes in size, and I don't want that to block rendering. Ideally, I would just asynchronously load both my vendor scripts and the main application code for my app. But in this case, I'm probably going to end up having to use something like defer, because Ember doesn't actually concatenate scripts for you by default. We've also got things like gzipping that we could be doing for the application, and that's something we can take care of on the server at the same time. So the async attribute allows scripts to be downloaded in the background without blocking the page. Defer claims to do the same thing, except it's supposed to guarantee script execution order. The spec says that it should download all of your scripts together and execute them before DOMContentLoaded. Unfortunately, in some older browsers, that's not exactly the case: in IE versions older than 10, it might execute your second script halfway through the execution of your first script. Yay, browsers. But we're still going to use defer in this case to avoid blocking. So at the very end of all of this, we've got server-side rendering in place and we've got service worker in place as well. We've taken a speed index of almost 3,000, trimmed it down with server-side rendering using FastBoot, and trimmed it down even further with the optimizations we've made. And we end up in this place where, on repeat visit, it's close to instant, close to nothing. Now, I'm really, really excited about the Ember team exploring how to bake more of these ideas directly into the CLI. I'm also excited about the Angular team doing this, and really excited about the React community embracing service worker as a core tenet of the applications we're building there as well.
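The script-loading choice above comes down to two attributes. A sketch, with hypothetical bundle file names:

```html
<!-- Sketch: loading the large framework bundles without blocking render.
     defer downloads in the background and (per spec) preserves execution
     order before DOMContentLoaded; async gives no ordering guarantee,
     which matters when vendor.js must run before the app code. -->
<script src="assets/vendor.js" defer></script>
<script src="assets/super-cool-blog.js" defer></script>
```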
So the three takeaways for today are: do less and be lazy. If you're going to be shipping bytes down to your users, try to ship the thinnest, smallest bundle possible. This will benefit absolutely everyone. And be lazy just means lazily load in everything else that you need after the fact. Design for constrained environments. A lot of the time when we're building things these days, we're testing them out on high-end phones or high-end computers. A lot of our users aren't going to have that. They're not going to have a perfect internet connection all the time. They're going to be in places where they'll have limited connectivity, maybe using devices that are CPU-bound or aren't as powerful as the things we're testing on. So if you develop with constrained environments in mind, you're going to be able to ship experiences that benefit everyone and will hopefully lead to performance improvements there. And go progressive, because now is a good time. Looking at all of the frameworks and libraries we've covered today, all of them are exploring things like server-side rendering. All of them have the ability to use Service Worker as a first-class citizen, have it work well, and have it lead to reliable performance, that thing we're trying to hit with progressive web apps. Everything else, everything from setting up your web app manifest and so on, is still quite trivial to do. And that's not to say that the instant applications I've shown you today are the end; it's really just the beginning. There's still a bit of work to do on both the JavaScript engine side and the framework side when it comes to performance after load. I've been focusing on load performance a lot today, but there's still a lot we can do after the fact, so that when your users are interacting with the app, it stays instant after that.
So please go check out our new progressive web app page on Web Fundamentals. I hope that this has been useful. Thank you.