Last year, I was up on this stage talking about the potential for building progressive web apps with frameworks, the promise of being able to use some of the world's best DX to ship some of the best UX on mobile. In the last year, we've seen massive progress here. We've seen many large global brands start to ship progressive web apps as part of their default mobile experience, and many others starting to experiment with this stuff too. So today, we're going to talk about some of the technical insights and the journeys that many of these large companies took to ship these experiences using frameworks.

Now, before we get started, I've got a little bit of a confession. Right before I/O, my wife, Ellie, had to tell me to stop playing video games and start working on my I/O slides. So I did stop playing video games, and I did work on my slides. I built my I/O deck as a video game. And my main character is my wife, Ellie. I'm going to get in so much trouble for this. Let's get started. Today, we're going to play Progressive Web Gaps.

So let's get started. We go on our first level. We start our quest to build a fast progressive web app using a framework, and we run into Alex Russell, who says: use a JavaScript framework that gives you headroom on mobile. Ignore the fact that he's shooting us in the face with an arrow. What Alex means here is that you need to understand the cost of your abstractions, especially on low-end devices. You see, on desktop, the time it takes to parse and compile JavaScript is often several times faster than it is on mobile, where it can sometimes take four or five times as long to process. This is one of the reasons why we see a lot of the web apps being built with frameworks today taking about 15 or 16 seconds to get interactive on mobile, unless you're keeping an eye on how to optimize that.

Now, in order to help you get fast and stay fast, your framework needs to give you the best possible headroom to succeed. Think of it like a budget. If you're trying to get interactive in five seconds, you want to make sure that you have enough budget for your application code and that your framework isn't taking up that entire budget, right? Because if you're going to start slow, you're very likely to stay there. Now, thankfully, over the last year, we've seen a huge increase in the number of good options for building progressive web apps on mobile with low parse and compile times. Polymer 2.0 joins Preact, Vue.js, and Svelte here. These are all excellent options, and I'm starting to see an increasing number of companies consider them.

Now, over the last year, one of the main pieces of feedback that I've had from large teams that are used to building with frameworks and trying to ship progressive web apps is that they wish they had more reference material. They wish they had better demo apps that showed them how to properly hold a framework so that it would perform well on mobile. And so I got together with a number of different framework authors, and today we're happy to announce a new project that's the successor to TodoMVC. We call it HNPWA: Hacker News readers as progressive web apps. This is a new project that has an entire suite of progressive web app reference implementations built using some of the best practices for today's frameworks. And in case you're wondering, hnpwa.com is a PWA itself. Now, our suite of HNPWA apps includes apps built using Preact, Polymer, Vue, Angular, React, and lots of other frameworks.
The Preact, Polymer, and Vue ones are interactive in less than three seconds, even on emerging-market connections. The Polymer one is using the PRPL pattern, which we'll talk about later, and it happens to be hosted on Firebase Functions using H2 server push, as is the Preact one. The way that we validate these implementations is using Lighthouse. In fact, we link out to a Lighthouse report for every single implementation. And we use WebPageTest to test these apps out on real devices, showing you how long they take to get interactive. So check out hnpwa.com, I hope you'll find it useful.

Now, the first framework that we're going to talk about today is React. React has made it painless to create interactive UIs and build components that manage their own state. And just last week, React crossed a million npm downloads a week, a huge milestone for them. And it's being used in many PWAs. The first one we're going to talk about is Twitter Lite. Now, some folks on our team, Alex Russell and myself, had the honor of getting to work with Twitter on some pieces of Twitter Lite, and I thought I'd talk about some of their journey. Twitter took their old mobile website and rewrote it as a PWA, and they saw massive improvements in both time to interactivity and engagement: a 76% increase in tweets sent and a 2.7% increase in page views. Now, one of the other awesome things about Twitter Lite is that it's interactive in under five seconds on 3G. Using a framework like React to accomplish this on mobile requires a little bit of additional work, so let's dive into what they used to look like. The old version of the site, when they were first rewriting it, had a few issues. It was interactive in about 15 or 16 seconds. It was heavily dominated by script. They were using suboptimal loading patterns, and so they had to put in a ton of work in order to get to a good place.

And so we return to our quest, and the next person we run into is performance guru Sam Saccone, who has purple bats that are trying to eat our faces off. Now, Sam has got a pattern that he suggests. He says: use PRPL. Only load what you need for the current view. So what's PRPL? Well, PRPL is a pattern for structuring and serving progressive web apps with an emphasis on the performance of application delivery and launch. It encourages you to prioritize loading the code that a user is immediately going to use, deferring the loading of other code until idle time. This is a pattern the Polymer team discovered last year, and it has great promise.

Now, we're initially going to focus on the push and preload part of this. An idealized version of the PRPL pattern tries to avoid multiple round trips to the server, where you first fetch your HTML, parse it, and then have to go and fetch all the other resources necessary for the page, by taking advantage of H2 server push. Because everybody in this room, as authors, knows what's important to their page better than we, the browser vendors, do. And so you can take advantage of things like H2 server push to push, in one round trip, the resources that are most important to your page. And you can avoid having to over-push by taking advantage of service workers, so that on repeat visits you're just reading from the local cache instead. Now, Twitter weren't able to use H2 server push, at least early on in their journey.
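To make the push and preload part of PRPL a bit more concrete, here's a minimal sketch, not Twitter's or the HNPWA apps' actual server code, of a Node/Express handler that emits Link headers for a route's critical resources. On hosts that support HTTP/2 server push, the same header can trigger a push; elsewhere it still acts as a preload hint. The file names here are hypothetical.

```js
// Minimal sketch: declare the critical resources for this route via Link
// headers so a push-capable server or host knows what to send early.
const express = require('express');
const path = require('path');
const app = express();

app.get('/', (req, res) => {
  res.set('Link', [
    '</static/app-shell.css>; rel=preload; as=style',
    '</static/main-bundle.js>; rel=preload; as=script'
  ].join(', '));
  res.sendFile(path.join(__dirname, 'index.html'));
});

app.listen(8080);
```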
So they decided to focus on using resource hints and link rel preload. Now, in this case, the first thing they looked at was link rel dns-prefetch. DNS prefetching with link rel dns-prefetch is an attempt to resolve domain names before a user tries to follow a link. Twitter were connecting to multiple servers, and they found that by dropping in link rel dns-prefetch for some of these endpoints, they were able to see an 18% improvement in their initial load time.

They then explored using link rel preload. Now, link rel preload is amazing. It's a declarative fetch that forces the browser to make a request for a resource without blocking the document's onload event. Preload can decouple the load event from script parse time. And in many cases, you're able to get this set up with low friction: even if you're in an app that happens to be using React, Preact or Vue, there are Webpack plugins that you can use to wire up preload for asynchronous chunks and important scripts in your pages. In Twitter's case, they saw a 36% improvement just by switching to this. Before preload, the network request usually started far further down in the network waterfall; after preload, it shifts to the left, starting much earlier, which is great.

Next, we move on to render, so getting meaningful pixels on the screen. Now, Twitter is a heavily multimedia-based application. As you scroll through your timeline, you're going to end up seeing lots of pictures, lots of videos, lots of animated GIFs, and those have a cost. One of the things that Twitter ended up doing was taking more intelligent control of scheduling inside their application. They used requestIdleCallback to accomplish this. requestIdleCallback is a web platform feature that allows you to schedule work when there's free time at the end of a frame, or when the user is inactive. And what Twitter Lite found was that by using requestIdleCallback to defer the loading of images using JavaScript, they were able to see a 4x improvement in render performance.

Now, something that we're all still very much guilty of is shipping images that just have way too much bloat, and Twitter Lite wasn't an exception. What they found was that in many cases they were originally shipping images with far larger dimensions than were actually displayed: they weren't using all the pixels being shipped down, and those images weren't properly compressed. By properly optimizing those images and only shipping down images with the correct dimensions, they were able to reduce both bandwidth and image decode costs, in some cases taking decode time down from 400 milliseconds all the way down to 19. Another thing they did was introduce a data saver mode. As we're on the go, some of us end up on very data-limited plans, and so having a data saver mode that can blur images and videos until they're tapped can actually end up saving us on our data plans. This introduced a 70% improvement for many users. Twitter were also investigating the Save-Data client hint, which is another nice web platform feature here.
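Here's a rough sketch of the requestIdleCallback image-deferral idea mentioned a moment ago. This is not Twitter Lite's actual implementation; the placeholder markup and attribute names are made up for illustration.

```js
// Swap a lightweight placeholder for the real image once the browser
// reports idle time, falling back to a short timeout where
// requestIdleCallback isn't available.
function lazyLoadImage(placeholderEl, src) {
  const load = () => {
    const img = new Image();
    img.alt = placeholderEl.getAttribute('data-alt') || '';
    img.onload = () => placeholderEl.replaceWith(img);
    img.src = src;
  };

  if ('requestIdleCallback' in window) {
    // Don't wait forever on a busy page; force the load after 2 seconds.
    requestIdleCallback(load, { timeout: 2000 });
  } else {
    setTimeout(load, 1);
  }
}

// Usage: each placeholder carries the real URL in a data attribute.
document.querySelectorAll('.img-placeholder[data-src]')
  .forEach((el) => lazyLoadImage(el, el.dataset.src));
```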
Now, I've been talking about a bunch of web platform features, but there are also React-specific things that Twitter ran into. On slow devices, they discovered that it could take a long time for their main navigation bar to respond to taps, in some cases anywhere up to two seconds. And one of the reasons for that is that mounting and unmounting large trees of complex components, like timelines of tweets, can be very expensive in React. Ideally, you want to defer mounting and unmounting those complex trees. And so what they ended up doing was using a double requestAnimationFrame trick that was discovered by Owen Campbell more recently, and they created a small higher-order component on top of that to improve perceived performance when an expensive component was about to mount. This effectively allowed frames to complete, letting other components update and re-render before mounting an expensive component. This led to almost instant changes as soon as you were tapping through their navigation.

Another thing that they ran into was a number of cases where unnecessary updates were hurting their performance. Now, in React, whenever a component's state changes, it's going to re-render the component and its children. Occasionally, the component and its children may not have really changed all that much, and yet you end up rendering everything. In this case, in this video, clicking the heart would result in the entire conversation component also needing to re-render. Using React's shouldComponentUpdate allows you to bypass re-rendering through the virtual DOM completely, and in this case it meant fewer updates were necessary as well as CPU cycles being saved.
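Below is a simplified sketch of the double requestAnimationFrame deferral described a moment ago. It is not Twitter Lite's actual higher-order component, just an illustration of the technique: skip rendering the expensive subtree until a couple of frames have passed, so the tap response can paint first.

```js
import React from 'react';

// Higher-order component: render nothing for the first couple of animation
// frames, then mount the wrapped (expensive) component.
function deferRender(WrappedComponent) {
  return class DeferredRender extends React.Component {
    constructor(props) {
      super(props);
      this.state = { shouldRender: false };
    }

    componentDidMount() {
      // Two nested requestAnimationFrame calls guarantee at least one frame
      // has been committed before the heavy subtree mounts.
      this.raf = requestAnimationFrame(() => {
        this.raf = requestAnimationFrame(() =>
          this.setState({ shouldRender: true })
        );
      });
    }

    componentWillUnmount() {
      cancelAnimationFrame(this.raf);
    }

    render() {
      return this.state.shouldRender
        ? <WrappedComponent {...this.props} />
        : null;
    }
  };
}

// Usage (hypothetical component name):
// const DeferredTimeline = deferRender(Timeline);
```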
Next, we move on to pre-cache, and we go back to our quest. So who's going to be the next person we run into? Well, the next person we run into seems to have what looks like a JavaScript poop monster, but it's Jake Archibald. And Jake says to leverage the service worker. He promises it won't bite too much. Now, Twitter had a very incremental journey with adding support for service worker. They initially used the service worker to cache their static assets: their JavaScript bundles, their CSS, the emoji that you use when you're DMing someone. They then switched to having a custom offline page, and eventually also adopted the application shell model, so that you're able to load the UI for the experience locally instead of having to keep going back to the network. The result of this is that instead of taking over six seconds on a good 3G network to load, on return visits with everything pre-cached it took less than 1.5 seconds. This was a 75% improvement for most users. It was definitely worth investing in. In 2017, if you aren't considering using service worker, you're leaving potential performance wins on the table. So consider using it.

And then we move on to lazy load. In React, Preact and Vue, we often end up using Webpack to accomplish this. So we go back to our quest. Now, many people's first experience with Webpack can feel like walking through fire. I love Webpack, still. And here we've got the guy who maintains Webpack. Here on this screen, we've got Sean Larkin and Prithi, who work on Webpack, telling us to always bet on it. It is worth using in the long run. The bundle savings that you can accomplish using Webpack are phenomenal. And one of the things they've been investing in recently is a Webpack CLI, to help people migrate from Webpack 1 to 2, as well as helping people navigate the Webpack complexity waters a little bit better.

But back to Twitter. They tried to get things like code splitting set up initially, but it was tricky. They ended up with three JavaScript assets totaling over a megabyte in size. That's about 420 kilobytes gzipped. The parse and compile time of that was still really high on mobile devices; we're talking somewhere around five and a half seconds for most people. So what they ended up doing was investing in code splitting, using require.ensure with Webpack 1 and the CommonsChunkPlugin for extracting common modules across all their chunks, effectively moving them towards route-based code splitting. This meant that they could get faster timeline renders, and they broke up the entire experience into 40 on-demand JavaScript chunks that are amortized over the lifetime of the entire Twitter Lite experience. This meant that it only took three seconds on a 3G connection, on a real phone, for this JavaScript to actually process and load. To learn more about Twitter Lite's journey, we just published a case study this week that you can check out. And Paul Armstrong on the Twitter Lite team, who's one of their web performance experts, recently wrote an awesome drill-down into their experiences here, so check it out.

But they're not the only ones that have been experimenting with some of these ideas. Tinder are also experimenting with a progressive web app. They're using React, React Router and Webpack. Now, they've seen near-instant repeat loads using service worker. They've seen a 50% reduction in time to load code just by adopting code splitting and link rel preload. And similar to Twitter Lite, they've also been deferring non-critical work using requestIdleCallback. Now, the important thing to note here is that as you hear about all of these stories, the Twitter Lites, the Tinders, the Housing.coms, the Flipkarts, you'll start to notice patterns form of what different teams are independently running into as things that help their overall performance on mobile. I'm excited to see Tinder's PWA continue to evolve, and they're not the only ones that are working on PWAs right now. The NBA are also working on a PWA using React, and I'm excited to see their work hopefully get released at some point in the near future.

And so we return to our quest, and one person that we very regularly bother when it comes to React best practices is Dan Abramov. Dan created, well, he worked on Create React App, a zero-configuration tool for making a lot of things easy. And so we've been working with Facebook for the last while on something that I think is a little bit special. I'm very excited about it. This change literally landed at 6am in Europe, like, yesterday. Create React App, one of the de facto ways of building React apps, will now give you a PWA by default. Thank you. I'd like to thank Dan Abramov, Tom, and Jeff Posnick on our team for all of their work in making this possible. This is a huge shift for the ecosystem. As we start to see more and more frameworks and their tooling adopting progressive web apps by default, we're able to shift that baseline closer and closer to tools like Polymer App Toolbox. And so what does Create React App give you out of the box now? It gives you web app manifest support with a service worker for offline caching. It gives you code splitting with dynamic import. It gives you support for Webpack 2, where you can import ES modules, as well as support for performance budget tracking, so you can stay on top of your performance. It gives you helpful overlays for uncaught errors, and it has Jest built in there as well for snapshot testing. I'm really excited about this release. This is one of the biggest Create React App releases that has been out in a while.
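One of those features, code splitting with dynamic import(), looks roughly like the sketch below. This is a generic illustration rather than code from Create React App itself; the component and file names are hypothetical.

```js
import React, { Component } from 'react';

// Load a route's component only when it is first needed. Webpack turns the
// dynamic import() into a separate chunk that's fetched on demand.
class SettingsRoute extends Component {
  constructor(props) {
    super(props);
    this.state = { SettingsPage: null };
  }

  componentDidMount() {
    import('./SettingsPage').then((module) => {
      this.setState({ SettingsPage: module.default });
    });
  }

  render() {
    const { SettingsPage } = this.state;
    return SettingsPage
      ? <SettingsPage {...this.props} />
      : <div>Loading…</div>;
  }
}

export default SettingsRoute;
```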
So I hope you go and check it out. A normal, you know, global install of Create React App will give it to you today. It comes with a decent Lighthouse score out of the box. And in terms of the amount of headroom that this gives you on mobile, Create React App will give you about 1.5 seconds of that overall five-second budget for time to interactive for your own application code. Now, the thing to keep in mind there is that you probably want to make sure you're using code splitting, so you're shipping a thin core for your application's initial view and initial routes, and then using lazy loading to defer the rest of that loading across the rest of the experience. So I'm really excited about that.

Next up, we have Preact. Now, most UI frameworks are large enough to be the majority of an application's JavaScript size. Preact's a little bit different. It's small enough that your code is usually the largest part of your application. That means less JavaScript to download, parse and execute, and more budget for your own application code. And what we've been seeing is that many companies that are starting to build progressive web apps are taking advantage of Preact in production. The first one we're going to talk about is Treebo. Treebo is India's top-rated budget hotel chain, and they operate in a segment of the industry worth $20 billion. They recently shipped a new progressive web app using Preact, and what they saw was a 70% improvement in time to first paint compared to their old experience, and a 31% improvement in time to interactive. You can check out treebo.com for their experience, but I'd like to dive into some of the things that they had to do in order to get this experience out.

So they started off with Webpack, and using Webpack's default setup they ended up with a monolithic JavaScript and CSS bundle. This had a first paint of about 4.4 seconds, and first interactive came roughly after that point. Now, like some companies, they thought, you know, let's try optimizing our first paint a little bit and see how far we can get, and so they invested in trying out server-side rendering. Now, it's important to note that server-side rendering is not free. It optimizes one thing at the cost of another. However, in Treebo's case, using server-side rendering dropped their first paint times and improved their perceived performance. The user still got a full page with JavaScript disabled, and it was still good for SEO, but the con was that it had a negative impact on time to interactive. The browser had to first wait for the server-side rendered HTML to come down the pipeline, receive that payload, and then go and fetch and execute all of the JavaScript. This meant that first interactive happened about 7.7 seconds in, which is also not ideal. By the way, this is how they're basically doing server-side rendering: they're just using React's renderToString, nothing particularly fancy, and injecting the output into the page on the server side.

So the next thing that Treebo looked at was code splitting, route-based code splitting. What they did here was they split out their vendor code, their Webpack runtime manifest, and their routes into separate chunks. This reduced the time to first interactive down to 4.8 seconds. The con was that the current route's JavaScript download only started after their initial bundles had executed, which was also not ideal, but it did at least have some positive impact on the experience.
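As a rough illustration of that vendor/manifest/route splitting (not Treebo's actual configuration; the entry points and module list are hypothetical), a Webpack setup along these lines separates third-party code and the Webpack runtime from route chunks so they can be cached independently:

```js
// webpack.config.js (excerpt-style sketch)
const path = require('path');
const webpack = require('webpack');

module.exports = {
  entry: {
    app: './src/index.js',
    vendor: ['react', 'react-dom', 'react-router'] // hypothetical vendor list
  },
  output: {
    filename: '[name].[chunkhash].js',
    path: path.resolve(__dirname, 'dist')
  },
  plugins: [
    // Pull the listed third-party modules into a separate 'vendor' chunk.
    new webpack.optimize.CommonsChunkPlugin({ name: 'vendor' }),
    // Extract the tiny Webpack runtime manifest into its own chunk so the
    // vendor hash stays stable across builds.
    new webpack.optimize.CommonsChunkPlugin({ name: 'manifest' })
  ]
};
```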
So what could they do next? Well, they could go ahead and do server push, or, if you can't do that, at least experiment with link rel preload, and that's exactly what they did. Now, again, for route-based code splitting in this experience, they're doing something a little bit more implicit. They're using React Router's declarative support for getComponent, with a callback to asynchronously load in chunks. With preloading, they used link rel preload to fetch the current route's JavaScript ahead of time. This had the impact of dropping that code into the cache, so the work needed for first interactive was already in the cache by the time their main bundles executed, shifting the time down a little bit. First interactive now happens at 4.6 seconds. The only con they had with link rel preload is that it's not implemented cross-browser yet. However, there's an implementation of link rel preload in Safari Tech Preview, so I'm hopeful that it's going to land in stable.

The next thing Treebo tried was HTML streaming, to introduce a sense of progressive rendering into their application. They would stream the head tag with link rel preload tags set up to preload their CSS and their JavaScript early. They then perform the server-side rendering and send the rest of the payload down. The pro of this was that resource downloads started earlier on, dropping their first interactive and first paint times. The con was that it kept the connection open for a bit longer between the client and server, which can cause issues depending on your server setup. So what they're doing with HTML streaming is effectively defining an early chunk with the head content, a main chunk with the content, and a late chunk, all of these being injected into the page. What this looks like is a little like this: the early chunk has got their preload statements for all their different script tags, and the late chunk has got anything that's going to include state or actually use the JavaScript that's being loaded in.

Finally, they switched over to shipping Preact in production. This had the impact of dropping their vendor bundle size from 140 kilobytes all the way down to 100 kilobytes, all gzipped, by the way. The pro was that it dropped first interactive times: they were interactive in 3.9 seconds on average mobile hardware, which is awesome. The con was that they did have to end up putting together a few workarounds to get Preact working with all the different pieces of the React ecosystem they were using. This has been one of my experiences as well: Preact is great for the 98% use case, but there are still a few edge-case bugs. Thankfully, Jason Miller, who works on Preact, is somewhere in the audience, there's Jason. If you run into any bugs, bug Jason. Switching from React to Preact in production is relatively straightforward. You can do this in your Webpack config by aliasing react to preact.

Treebo have been using a lot of open-source software. In return, they've actually open-sourced most of their Webpack configuration, as well as a boilerplate that contains a lot of the setup they're using in production. They've also committed to keeping that up to date, so as they evolve, you can take advantage of it as another reference. They've seen a 38% increase in total conversions across browsers. They've been experimenting with Preact for the last two months, and it's now sticking in production. They saw a 15% average improvement in their time to interactive, a median TTI of about five and a half seconds, and huge savings on their overall start-up time.
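Here's what that alias typically looks like in a Webpack config, as a minimal sketch: pointing react and react-dom at preact-compat lets existing React code, and most of its ecosystem, run on top of Preact.

```js
// webpack.config.js (excerpt-style sketch)
module.exports = {
  resolve: {
    alias: {
      react: 'preact-compat',
      'react-dom': 'preact-compat'
    }
  }
  // ...the rest of your existing configuration
};
```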
And it's not just Indian companies that are considering using Preact. Forbes.com has also recently been investing in progressive web apps. In March, they shipped a new PWA as their default mobile web experience, and this is an experience that was initially using React. They switched over to using react-lite, which is a little bit like Preact, and they're currently in the process of evaluating Preact for production. Overall, this is an experience that gets interactive quickly on mobile. And this is something we keep seeing with Preact: your own code, rather than the framework, ends up being the largest part of your application.

So we get to the next level of our game. If this particular level represented making a framework that's smaller than Preact, which is about 3 or 4 kilobytes, and does as much as it does, I don't think I could pass it. I tried passing this level about 15 times, and whatever Jason's doing, whatever magic Preact voodoo it is, I just couldn't pass it. And I was like, what if we did this with the PRPL pattern? What if we tried combining these two things together? And so we started working with a number of people in the Preact community on a new project that I'm really excited to share with you today.

Today we're announcing Preact CLI, a brand-new toolbox for creating PRPL-first applications using Preact. We think this is going to make it a lot easier to ship better loading strategies, and I believe that Jason's probably tweeting out about it right now. So Preact CLI is available today, and in addition to giving you a 100 on Lighthouse right out of the box, it also includes support for things like automatic code splitting across routes. It's got built-in tracking for bundle sizes, zero-configuration pre-rendering, and server-side hydration. It's got support for CSS modules. It's got support to help you transparently code-split any component, and it goes out of its way to make it easier for you to deploy the PRPL pattern with H2 server push support on Firebase Functions. So I hope you'll check it out. Preact CLI is available today. It's just an npm install away: npm install -g preact-cli. If you notice any issues, please file them, but I'm really, really excited about this. I think this is a really great opportunity to take some of the great learnings from Polymer's App Toolbox and bring them to the rest of the ecosystem.

So what headroom does this give you? Well, Preact CLI will give you three seconds of that five-second interactivity budget for your application code. This is huge. The framework setup itself only takes up two seconds of it, and so you as an author have got more application budget to write code that's going to be useful to your users. If you're able to use H2 server push, even better; we've tried to make it as easy as possible for you to deploy H2 server push using Firebase. We've got a built-in server for it, we've got manifest generation, and we're also excited to take a look at the Polymer team's prpl-server-node implementation to see what ideas we can share there. So that's Preact CLI. The base that it gives you sits at about four kilobytes for Preact and the Webpack runtime, with a few more kilobytes that can be conditionally loaded in for you. Check out the Preact HN implementation, too; it also takes advantage of a lot of the patterns that we've baked in here.
Next up, we've got another framework that I'm a little bit excited about, Vue.js. Vue is designed from the ground up to be incrementally adoptable. It's got a 19-kilobyte core, it's had over 3 million downloads over the last year, and it has a lot of active users. Some of the things coming to Vue.js this year that I'm excited about are over here: things like support for progressive booting, hydration using requestIdleCallback, support for custom elements, CSS variable theming, and seamless support for Webpack code splitting. I would in fact say Vue has some of the best server-side rendering support in the industry, with support for streaming and for component caching. And in fact, they take that even further. Vue.js, if you're using its server-side rendering support, will attempt to infer which JavaScript chunks in your application are important enough to link rel preload, which aren't important enough and should just use link rel prefetch, and whether or not there's critical CSS that can be loaded in. I think that this illustrates a step forward we can take with our tooling, where we're trying to make the best decisions for you, and I love that Vue.js is experimenting with a lot of these ideas.

Now, one of the companies that recently shipped a brand-new progressive web app using Vue.js is Ele.me, the biggest food ordering and delivery company in China. They recently switched from an Angular 1 mobile site, this is their old site, over to a Vue.js progressive web app, with a time to interactive of about 1.2 seconds on their target devices. They're using Vue.js because it boosts their productivity: Vue's support for single-file components, which actually look a lot like Polymer's HTML imports, enables them to easily share components across pages and adopt things from Vue.js's ecosystem, like Vuex or Vue Router, very easily.

Now, Ele.me is interesting because it's not a traditional single-page application. This is a multi-page app. They actually have a number of pages that are their own dedicated microservices, a number of pages that are technically single-page apps in their own right, and yet they wanted to ship a progressive web app that takes advantage of some of the other ideas that I've talked about today. And so, they looked at the PRPL pattern. One of the first things they tried doing was HTTP/2 server push for their API responses. They found that this cut time to first byte by about 500 milliseconds over regular 3G, and they want to bring this to more of the mobile experience. Now, we've been talking about link rel preload as well, and they tried that out. This is one of the first examples where link rel preload actually didn't make a massive difference. See, routes in multi-page applications tend to naturally fetch only the code that that particular page or that particular route needs, and so they have a relatively flat dependency graph. And so, Ele.me found that by using link rel preload, they didn't actually see that many gains in time to interactive. If, however, you are building an SPA, I would consider using it.
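For a Vue single-page app, that kind of route-level code splitting is fairly low-friction with vue-router and Webpack. Here's a minimal sketch, not Ele.me's code, with hypothetical view names: each route component is pulled in with a dynamic import, so Webpack emits it as its own lazily loaded chunk.

```js
import Vue from 'vue';
import Router from 'vue-router';

Vue.use(Router);

const router = new Router({
  mode: 'history',
  routes: [
    // Each of these becomes a separate chunk, fetched when the route is visited.
    { path: '/', component: () => import('./views/Home.vue') },
    { path: '/orders', component: () => import('./views/Orders.vue') }
  ]
});

new Vue({ router }).$mount('#app');
```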
The next thing they tried to do was just improve their overall rendering times. Now, for every single page they effectively have something that's a lot like the application shell model. They pre-render the application shell for the page, and they try to make sure that they're only loading in the script that's necessary for the current page, keeping overall load times down wherever possible. This is something that's a little bit more straightforward for them because they're not worried about, you know, the performance of subsequent views quite as much as individual pages.

Then they got on to pre-cache, and their performance pre-cached with service worker was significantly better than their old experience without it. Now, this got interesting. Because the site isn't exactly a single-page application, every URL is effectively its own HTML page, so they needed to cache the entire HTML page for each URL. Now, in this application, again, because it's a multi-page app and they're relatively, you know, diligent about only shipping down the JavaScript needed for each page, what they found was that their time to interactive scores were relatively decent with Vue.js. It's got a low start-up time. And so Vue itself was actually their main bottleneck: they found that the Vue runtime, components, and other libraries you might drop into the page were their major bottleneck. However, they put a lot of extra work into making this multi-page application setup still have a good user experience. They invested in skeleton screens for transitioning from one page to the other, and they also deferred a little bit of the execution of the Vue framework itself by using setTimeout around nextTick. Now, Ele.me have documented a lot of the challenges and workarounds that they ran into in a new technical case study that I hope you'll check out. But after adopting the PRPL pattern, they effectively saw their time to interactive scores drop all the way down to 1.2 seconds.

Now, I do have to caveat this: Ele.me operate in a part of China where their users have a lot of powerful phones. About 85% of their users, in fact, are using Wi-Fi or 4G and have at least something that's equivalent to a Nexus 5 or better. And so it would be unfair to say that they've got a really good time to interactive without comparing this on a Moto G4. On a Moto G4, which we consider to be roughly average mobile hardware, they're interactive in under 3.5 seconds, and under 3 with service worker enabled.

Another site that recently shipped support for Vue.js is Truecaller. Truecaller is the world's largest phone number directory, with over 2 billion searchable phone numbers in there. They recently added support for Vue.js, Vue 2 with the Vue CLI, and Webpack 2 with code splitting. They've been using link rel preload, and they've actually seen some decent wins: their time to interactive scores dropped significantly, from 3.6 seconds to 2.1, just by using link rel preload. Now, in terms of the Vue ecosystem, they're using a global event bus, but they're considering using Vuex. They're experimenting with Nuxt.js, which is a server-side rendering framework that works really well with Vue and tries to do intelligent things around link rel preload for you as well. And they're using service worker caching.
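Since both Ele.me and Truecaller lean on service worker caching, here's a rough sketch, not either company's actual worker, of what the multi-page approach can look like: precache the shared static assets at install time, then cache each HTML page as it's navigated to, so repeat visits are served from the local cache. The asset paths are hypothetical.

```js
// service-worker.js
const CACHE = 'mpa-cache-v1';

// Precache assets shared across pages.
self.addEventListener('install', (event) => {
  event.waitUntil(
    caches.open(CACHE).then((cache) =>
      cache.addAll(['/static/app.css', '/static/vendor.js'])
    )
  );
});

// Serve navigations cache-first, populating the cache from the network the
// first time each HTML page is visited.
self.addEventListener('fetch', (event) => {
  if (event.request.mode !== 'navigate') return;
  event.respondWith(
    caches.match(event.request).then((cached) => {
      if (cached) return cached;
      return fetch(event.request).then((response) => {
        const copy = response.clone();
        caches.open(CACHE).then((cache) => cache.put(event.request, copy));
        return response;
      });
    })
  );
});
```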
And so we get to the final level of our game. Now, I must not have had enough sleep the day I drew this, because it's supposed to be the author of Vue.js and it doesn't really look like him at all. I've tried. But Evan You and Sarah Drasner, who are both Vue.js experts, say to use a progressive framework. Building progressive web apps with a progressive framework is something that can enable you to get productive and stay productive for an extended period of time.

We've been talking today about this idea of trying to move the ecosystem forward. So we've seen Create React App, which now supports outputting a PWA by default, and Preact CLI, which outputs a PWA by default. And today we're also happy to announce a brand-new template for Vue.js that's available via the Vue CLI: vue init pwa will give you a progressive web app by default. Thank you. Now, in addition to giving you over a 90 on Lighthouse right out of the box, it will give you code splitting with dynamic import. It gives you version hashing for long-term caching. It's a fantastic Webpack setup, by the way. It's got bundle size analytics, once again, to help you stay on top of your JavaScript bundle sizes, so that you can make sure you're getting interactive as quickly as possible. And it will also intelligently link rel preload or prefetch your bundles, depending on how soon they're needed. So that's what it looks like today. You can go and check out the Vue CLI: install vue-cli, and then vue init pwa will give you Vue's progressive web app boilerplate. It looks a little bit like this. It will, of course, make sure to pass all of the tests in Lighthouse, so I hope that this is useful. And with respect to the headroom that Vue.js's new PWA template will give you on mobile, it gives you about two seconds to ship the user down something useful and get interactive.

Now, I don't have a lot of time left, but there was one more framework that I wanted to give a nod to: Angular. I think that the Angular team have done a fantastic job of slowly chipping away at both bundle size and execution time for people trying to use it to build for mobile. We took a Hacker News application built using Angular a few months ago and worked with Houssein Djirdeh in the community to try adopting new features that the Angular team were shipping over the last while. So we started off with a 245-kilobyte bundle that was the default with Angular 2.4. This got interactive in about 23 seconds, so much worse than many of the apps that we saw in our research. With Angular's support for ahead-of-time compilation, this dropped all the way down to eight seconds. Moving to Angular 4, with its new View Engine, dropped the bundle size down to 132 kilobytes, and it got interactive in 6.6 seconds. And by using code splitting, something I know the Angular team are trying to make a default for all their users as well, we got our interactive scores all the way down to 6.3 seconds on real devices. I think that the Angular team are making excellent strides here.

What we're trying to accomplish this year is fast being the default for everybody. We want it to be the default for React users and Preact users and Vue users and Angular users. And of course, Polymer users get it for free with Polymer App Toolbox. So, we're almost out of time. We've announced a few things today that I hope you'll find useful: hnpwa.com, your reference for how to build fast progressive web apps with frameworks; Create React App with support for PWAs; Preact CLI; and vue init pwa. And so with that, we're almost at the end of our journey.
I hope that from this talk you take away the idea that it is possible to incrementally ship an instant-loading progressive web app using a framework, if you're willing to put the work in. The way that I like to think about incremental development is: first do it, then do it right, then do it better. And many companies have demonstrated that this is possible. So with that, I would like to ask for a big round of applause for all the members of the community that contributed to making this stuff possible today. This has taken a few months of work. For their restless nights, I would like to thank the React team; I would like to thank Jason Miller and Chris, working on Preact; I would like to thank the Polymer team, Steve, Dan, Kevin, and everybody else on Webpack and the other tools that have been trying to make it a little bit easier for us to ship progressive web apps. So with that, thank you. I hope that this was useful.