I feel like an enterprise Java developer or something. Good morning everybody and welcome to day two of Chrome Dev Summit! We're gonna have a great day filled with talks. We're gonna have some code labs later on. There'll be lunch; that's gonna happen, probably at lunchtime, same as yesterday. Like every good conference should, we have a code of conduct. It is printed out around the venue and on the website. But the gist of it is really that we need to all work together and share ideas with each other. The aim is that we all leave here happier and smarter than we arrived. Perfect. Okay, well, we should get on with the show, as it were. And in order to do that, we're gonna kick off the day with housing.com. Give a massive round of applause for housing.com. Good morning everyone. It feels great to be here. We are really excited to present our journey of how we built housing.com, the mobile version. Okay. So I'm Rahul. I lead the front-end team at Housing. As the name suggests, we are into home buying. And for most of our users, home buying is a once-in-a-lifetime event. And it's a long journey before they settle on their dream home. We have native apps to facilitate that experience, with native performance, better re-engagement, and an offline experience. But we also had a few challenges, like poor internet connections, mostly 2G and 3G, and low-end devices in terms of computation, memory, and storage. These kept our users from downloading our apps and thus hindered our business from reaching its goals. We also had our mobile website, which looked something like this. The problem with this was it was a monolithic code base with desktop and mobile tangled together, and the components were kind of bloated because of the if-else branches in our JavaScript and the media queries in our CSS. That eventually affected our performance.
So we knew we had to cater to the growing needs of our mobile user base, and we chose to upgrade our mobile website to something that could compete with our native apps. The reason is simple: the web has better discovery and a wider user base than any other platform. Also, the cost of bringing a user to our mobile web was 50 times cheaper for us than bringing the same user to our native apps. So that sealed the deal. We started off building our mobile website. The first and foremost goal was to support all the major browsers out there that our users use, on the more than 2,000 different devices that they have. So this was our first aim, and then we thought, once we have this part done, we'll upgrade that experience to compete with our native apps. So we built Housing Go, and we were happy with the kind of metrics that we saw. We were able to bring down our page load times by 30%. We saw 10% longer user sessions, and the bounce rate was cut down by 40%. On top of all this, the most important part was 38% more conversions on our mobile website. That really helped our business realize its goals much sooner and more effectively. And yeah, I'd like to call up on stage Ritesh, who will be taking us through the journey of how we built this. Good morning. So I'm Ritesh. I'm a front-end developer at housing.com. So let's talk about how we actually built it. From the start, we were focusing on four key areas. The first one is that we wanted to deliver assets fast. Then we wanted to bring down the time to first meaningful paint, and also the first JS-enabled interaction time. And at the same time, we had to improve the experience of our returning users. So most of the other performance metrics actually depend on when your assets are delivered. This is a waterfall of a traditional website. First, your whole HTML loads, and only then do the other asset requests go out. So you have to wait for the whole HTML to load before making any other request.
So when you analyze your code, you'll find that there's a certain part of it that needs no computation and no API request. So let's talk about HTML streaming. This is how it looks on the client side. First, we send the initial chunk that only contains the code that needs no computation. You can see the preconnect, the preload, and the critical inline CSS. By sending the preload hints this early, we actually start the request for critical JavaScript sooner; I'll talk more about them in a bit. So now, this is the full HTML. After the server has made its API requests and has received all the responses, it sends the rest of the HTML. This one has the body, which the initial chunk didn't have, and it has all the content. Now, the size of the full HTML is around 15 KB, but the first response that the server sent was around 4.2 KB. This is it. So, preload. Most of the time, the developers already know that a particular route is going to need a few critical resources. You can load them in advance, and so by using HTML streaming and preload in combination, we were able to start the request for critical JS much earlier than the other assets. So after we were done with asset delivery, we went ahead to improve our render time, and by render, I mean the first meaningful paint. Now, the difference between first paint and first meaningful paint: on the left side, you see first paint. First paint is when there's anything, any pixel, on your phone; most of the time, that's not relevant, and the user feels like they're waiting. When the relevant content is there, that's the first meaningful paint. We wanted to bring the difference between these two down to zero. So we experimented with server-side rendering. I'm saying experimented because you should always measure before you implement. So this is a traditional App Shell model on first load. Till 2.2 seconds, there's a white screen of death. You have nothing to see. Then there's a state where you have something, but it's not relevant to the user.
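The early-flush pattern Ritesh describes can be sketched with a plain Node request handler. This is an illustrative reconstruction, not housing.com's actual code: the asset names and the API call are made up, and `handle` is written against a minimal response interface so the two flushes are easy to see.

```javascript
// Early chunk: everything that needs no computation or API data.
// The preconnect/preload hints let the browser start fetching
// critical JS while the server is still waiting on API responses.
const earlyChunk = [
  '<!doctype html><html><head>',
  '<link rel="preconnect" href="https://api.example.com">',
  '<link rel="preload" href="/static/critical.js" as="script">',
  '<style>body{margin:0;font-family:sans-serif}</style>',
  '</head>',
].join('\n');

// Stand-in for the real (slow) API call — hypothetical.
const fetchPageData = () =>
  new Promise((resolve) => setTimeout(() => resolve({ title: 'Listings' }), 50));

// In a real Node server this would be the request handler;
// res.write flushes the early chunk before the API data arrives.
async function handle(res) {
  res.write(earlyChunk);                 // first flush, ~4 KB
  const data = await fetchPageData();    // server waits on APIs
  res.write('<body><h1>' + data.title + '</h1></body></html>');
  res.end();                             // rest of the ~15 KB document
}
```

Wiring this into `http.createServer((req, res) => handle(res))` reproduces the waterfall from the slides: the browser can parse the head and request `critical.js` during the server's API round trip.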
He still has to wait, and only at 7 seconds does he see the first meaningful content. Now, the region between 2.2 seconds and 7 seconds is what we call the loading screen of purgatory. The user doesn't know what's going to happen. He may receive the content; he has to wait for a certain amount of time; and if there's any error, he will be stuck in that state forever. So we wanted to improve this. We wanted to remove it entirely. So this is after SSR is enabled. When we implemented server-side rendering, the first meaningful paint happened at 2.3 seconds. And that was quite an improvement. And all these tests have been run with WebPageTest; it's written at the bottom. There's also a bonus when you implement server-side rendering: the basic meaningful content is available for everyone. As Rahul said, we have users on more than 2,000 types of devices, and so there is a variety of users with different browsers, some of them older versions. The basic content is rendered for all of them. So that's a bonus. So till now I've talked in bits and pieces about how we improved JS-enabled interaction. But the main thing that you need to do to improve JS-enabled interaction is to ship less code, less JS. The less JS, the faster the interaction time. Now, when you ship less code, you will have to lazy-load resources on demand. And this brings us to code splitting. We are using Webpack 2 for code splitting, and we split both our JS and our CSS. Generally, the chunks that we make are divided into two categories. The first one is route-based chunks. When a user lands on a particular route, first we make a call for the main JS file that the view will require, and in parallel, we make a request for the corresponding CSS file. When the CSS file has been loaded, we allow the navigation.
And after the browser is idle, we make a call for the next probable route that the user might navigate to, so that when the user navigates to that route, the transition is almost instant. So next, I come to the second category: intent-based chunks. These are the chunks that are only required when the user performs a particular kind of interaction, like scrolling or a click, that doesn't involve any route change. Let's take an example. This is our listings page. At the top right, you can see a notify button. From our analytics data, we know that this is the kind of button that gets clicked once in a few sessions. And the corresponding view that it requires is around 32 KB. So it makes no sense to load that with the main JS bundle, because most of the time it will go unused. So we request it only when the user has actually clicked on that button. So after doing all this, currently we are at a stage where our first meaningful paint happens at around 2.3 seconds, and the JS-enabled interaction starts at around 4.2 seconds. By JS-enabled interaction here, I don't mean DOMContentLoaded. We have defined a custom metric: we measure when the component actually mounts. Since we are using React, for us that is componentDidMount. So this is the time when componentDidMount gets triggered. So, I've talked about how we push critical resources using preload, how we improved render using SSR and inlining critical CSS, how we are pre-caching assets using service workers, as Rahul will tell you, and how we are lazy-loading resources on demand. So we are very close to what the PRPL pattern promotes. Till now, what I have discussed has improved the experience of first-time users. We also had to improve the experience of returning users. So I'll call Rahul back to tell you more about that. Yeah. So, as I told you, finding your dream home is a long journey.
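An intent-based chunk like that notify view is typically expressed with Webpack's dynamic `import()`. The module path and the small caching wrapper below are illustrative, not housing.com's real code; the commented lines show where the actual chunk request would sit.

```javascript
// loadOnIntent wraps a chunk loader so the chunk is requested only
// on the first matching interaction; repeat clicks reuse the same
// in-flight or resolved promise instead of re-requesting.
function loadOnIntent(loader) {
  let chunk = null;
  return () => (chunk = chunk || loader());
}

// In the real app the loader would be a Webpack dynamic import,
// which Webpack splits into its own file, roughly:
//   const loadNotify = loadOnIntent(() => import('./views/notify-dialog'));
//   button.addEventListener('click', async () => {
//     const { NotifyDialog } = await loadNotify();
//     NotifyDialog.open();
//   });

// Simulated here with a counter to show the network is hit once:
let requests = 0;
const loadNotify = loadOnIntent(() => {
  requests += 1;
  return Promise.resolve({ NotifyDialog: { open: () => 'opened' } });
});

loadNotify();
loadNotify(); // second click reuses the cached chunk promise
console.log(requests); // 1
```

The ~32 KB view then costs nothing for the many sessions where the button is never clicked, which is exactly the trade described above.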
The user comes online multiple times to make his searches, to compare the houses, the properties he's seeing. So it was very important for us to build a compelling returning-user experience, to make that journey as smooth as possible. We use service workers to pre-cache a few resources on install, and to act as a proxy for subsequent network requests, so that resources can be served directly from the cache when they are requested a second time. By doing this, we have been able to bring down the first meaningful paint, which happens at around 2.2 seconds for a first-time user, to 700 milliseconds, and the first JS-enabled interaction from 4.2 seconds for a first-time user to 1.1 seconds. So, yeah. We also implemented the add-to-home-screen feature to give users instant access to our app directly from their home screens. And we implemented push notifications. I'd like to mention that the conversion rates we are seeing from push notifications are almost beating some of the other channels we have. So that's one thing that's taking us closer to our apps. Then, offline navigation. This was important for us because when a user goes for an actual site visit, the properties are generally on the outskirts of cities, where the network is very flaky or there's absolutely no network. So this experience helps them revisit their session and re-look at the properties they had already seen on mobile. We used the Credential Management API to keep our users logged in virtually all the time, so that their information is synced across devices smoothly. Yeah. So once you are done building the app, the main question that we had was: how do we maintain this? We have done a lot of things to keep the first paint time and the first JS-enabled interaction time in check. But as the product evolves with a lot of features, it's very difficult to keep these metrics in check.
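The "pre-cache on install, proxy from cache afterwards" behavior Rahul describes boils down to a cache-first fetch strategy. This is a sketch under assumptions, not housing.com's code: the strategy is written against injectable cache/fetch interfaces so the logic can run outside a browser, and the commented lines show how it would sit in a real service worker.

```javascript
// cacheFirst: serve from cache when possible, otherwise hit the
// network and store the response for the next visit.
async function cacheFirst(request, cache, fetchFn) {
  const cached = await cache.match(request);
  if (cached) return cached;               // repeat visit: no network
  const response = await fetchFn(request); // first visit: network
  // A real Response must be clone()d before caching; plain test
  // doubles won't have clone(), hence the guard.
  await cache.put(request, response.clone ? response.clone() : response);
  return response;
}

// In an actual service worker (browser-only) this would be wired as:
// self.addEventListener('install', (e) =>
//   e.waitUntil(caches.open('v1').then((c) => c.addAll(['/', '/app.js']))));
// self.addEventListener('fetch', (e) =>
//   e.respondWith(caches.open('v1').then((c) => cacheFirst(e.request, c, fetch))));
```

Serving repeat requests straight from the cache is what turns the 2.2-second first meaningful paint into the 700-millisecond figure for returning users.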
So we came up with our own system of continuous integration with Webpack and WebPageTest. And we made it as easy as just putting a label on a PR in GitHub. So if you are done with your code and you raise a PR, you just need to add a "run test" label, and we'll put all the information needed to track these metrics right on the GitHub PR, like the chunk sizes. It helps a reviewer know how this PR is changing or modifying the chunk sizes that we already serve. There are a few route-based statistics, like first paint versus speed index, for the few critical routes that we take care of. Also, the network and the timeline view. This helps the reviewer get a sense of what resources are loading and how they are loaded. So I think this was very important for us, to close the loop between development and maintenance. We are yet to do a lot of things to make the app faster. One of the things that we are experimenting with is moving from React to Preact. We have seen in our initial prototypes that moving from React to Preact shaves around 120 KB off our vendor bundle. And that's huge. That's around 700 to 800 milliseconds of JS-enabled interaction time gained on the client side. We are also experimenting with AMP to let our users have an almost instant experience when they come through Google's results page. So thank you. Thank you all. Hello. Over here. Hello. Yeah, Paul, I'm here. We were asked to do a quick handover, so we've cut the bit where we walk on stage. That was housing.com. And here's Lyft. Yay! Well, good morning, everyone. I'm Malcolm Ong, product manager at Lyft. And here with me today is my colleague, Mohsen Azimi, lead engineer on our project. And we're going to tell you a little bit about our journey of how we built our progressive web app at Lyft. So just a show of hands: how many of you have taken a Lyft before? A lot of you. Awesome.
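The chunk-size part of such a PR check can be approximated from Webpack's stats JSON. This is a hypothetical sketch, not housing.com's actual tooling: the `budgets` map, asset names, and sizes are invented, and the real system would feed the report into a GitHub PR comment.

```javascript
// Compare each built asset's size against a per-chunk budget so a
// reviewer can see at a glance which chunks a PR grew past budget.
function chunkSizeReport(stats, budgets) {
  return stats.assets.map((asset) => ({
    name: asset.name,
    size: asset.size,
    budget: budgets[asset.name] != null ? budgets[asset.name] : null,
    overBudget:
      budgets[asset.name] != null && asset.size > budgets[asset.name],
  }));
}

// `stats` would come from `webpack --json`; faked here:
const stats = {
  assets: [
    { name: 'vendor.js', size: 182000 },
    { name: 'home.js', size: 41000 },
  ],
};
const report = chunkSizeReport(stats, { 'vendor.js': 150000, 'home.js': 50000 });
console.log(report.filter((r) => r.overBudget).map((r) => r.name)); // [ 'vendor.js' ]
```

Posting this table on the PR is what lets the reviewer catch a regression like the vendor bundle creeping past its budget before it ships.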
Well, for those of you unfamiliar, Lyft is the fastest-growing on-demand transportation service in the U.S. It actually came out of a hackathon project from our original product, Zimride. And similarly, our ride.lyft project also came out of a company hackathon. Certainly, in the very beginning when we came up with the idea, there was a lot of internal skepticism as to why we would build a web app at all. And if we did, could we actually build a web app that would be a viable alternative to our native apps? And certainly this makes sense, because historically Lyft has been a native, mobile-first company and has invested a lot of time, resources, et cetera, in the native apps. And so, you know, we said, well, let's go ahead and take a stab at it and see what we come up with. And I think we're pretty happy with our work. This is a desktop view of the app that we built. And our hackathon project gathered enough internal excitement that we said, you know, why don't we go ahead and try to productize this, and see if our users would also be just as excited. And so, some of the reasons why we built this. From our standpoint, a PWA could be a great supplement to native apps for various reasons. Number one, greater reach: essentially allowing a lot of our users who are on unsupported or aged-out devices to use Lyft. Number two, reduced friction: pushing users through an app store funnel is very, very inefficient. And number three, faster deploys and experimentation. So let's talk through each of these. So, greater reach. On the pie chart that you see on the right, approximately 8% of iOS users in the market and 3% of Android users will soon be unsupported as we slowly deprecate older OS versions. Our progressive web app allows us to keep supporting these users.
Furthermore, it reaches 100% of Windows Mobile and 100% of Amazon Fire devices, simply because we never had an Amazon Fire app until our PWA, and we still don't have a native app on Windows Mobile either. So this allows us to support these users. And in addition, it obviously reduces a lot of operational costs, technical costs, and resources, because it means less code and potentially fewer incidents or support tickets. Number two, reduced friction. As we all know, sending users through an app store funnel is highly inefficient. There are high drop-offs and high costs per install. In the funnel that you see on the right there, we can potentially go from six steps, from web entry all the way down through the app store towards sign-up and finally a first ride, down to three. It's been said that at every step of this funnel, you could essentially lose 20% of your users. And what are the reasons for this? Maybe it's because users don't like the permissions that you're asking for. Or maybe it's because they don't have enough storage on their phones; they have all those Snapchat videos that they saved. So we can basically change this to a three-step process. In other words, imagine the PWA replacing the white portion of that funnel. Another interesting thing is deep linking. Imagine we had a developer partner app, like Google Maps, and Google Maps has integrated Lyft. They want to send users on a seamless journey from Google Maps to Lyft. Right now, if you don't have the app installed, it sends you to the app store. Not as seamless as it could be. So deep linking from developer partner apps straight into our progressive web app is a much more seamless way to do that. And finally, faster deploys. Deploying fixes, code, and experiments on the web takes hours, not weeks. There's no need for the app-approval process.
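Those two numbers, six steps versus three with roughly 20% loss at each, compound quickly; a one-line calculation makes the gap concrete (the per-step loss is the rough figure quoted above, not Lyft's measured data):

```javascript
// Fraction of users completing a funnel, assuming an independent
// ~20% drop-off at each step.
const completion = (steps, lossPerStep = 0.2) =>
  (1 - lossPerStep) ** steps;

console.log(completion(6).toFixed(3)); // app-store funnel: 0.262
console.log(completion(3).toFixed(3)); // PWA funnel:       0.512
```

So under this rough model, the three-step PWA funnel would land nearly twice as many of the users who start it.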
And so: running experiments faster, A/B testing, and certainly our app is always up to date; you don't have to constantly update your app. In terms of productivity and timeline, our team actually started with basically one engineer, Mohsen, and our alpha MVP that he built was built on top of an Angular stack, primarily because Lyft has historically been on Angular. And we were able to do that in roughly two months' time. And so we proved that our PWA had a lot of promise. We, of course, ate our own dog food and leveraged our public API to handle a lot of the server-side things. We had a little bit of support from Design and QA. But for the most part, this was pretty quick compared to developing a native app. And once we quickly proved the potential of this, we got enough internal support to put more engineers on the team and build a new app, a beta version of this, from the ground up in React, and we were able to do that in roughly one month's time. And so now I'd like to bring out Mohsen to talk a little bit more about the technical aspects of the app. Hello, everyone. I'm here to tell you a story. A story about a ride. Meet Valerie. She just saw that DJ Khaled is giving undercover Lyft rides. So she wanted to try it; maybe he'd be her driver. So she goes to Lyft.com. And what we have on Lyft.com is this big pink button that says "request a ride." We don't force users to download the app. They can just request a ride right there on the web. This is really important, because if we forced them to download the app, they would have to download almost 75 megabytes of data just to start. So this is where the PWA wins. You might think, okay, if she downloads the iOS app, it's going to be a better user experience; it's going to load faster. But in fact, our PWA loads faster than our iOS app. This is under an LTE network with a good device. But even with slower networks and slower devices, we get good results.
Acceptable results. So now that she's in our PWA, she needs to sign up. For sign-up at Lyft, we ask for your phone number and a payment method. If you have your web payments set up, you don't need to type in your credit card; you can just use the Android Pay API. With two taps, she has an account and she can pay for her ride. Oh, before she requests a ride, I forgot about this: I talked about how the PWA is a better user experience. And that means on every front, animations, everything, it's the same as the native app. OK. So now that she wants to take a ride, when she taps "request ride," what we do is register a service worker that listens to push notifications from our servers. So if her driver is coming in two minutes, she can put her phone in her pocket and wait for the driver. When the driver is here, we send this push notification that lets her know the driver has arrived. This push notification does another thing: it has a payload that updates the ride information in our app. That means if she taps on this push notification, we take her to our PWA without making any network request, because the push notification carries all the data. When she is done with the ride, we also send a push notification. That push notification includes the ride's information as well. So even if she is offline, she can open up our PWA and access this screen, which has all the information about the ride: who the driver was, how much it cost, and things like that. This is really cool. But it doesn't stop here. She can submit her feedback while she is offline. When she does this, what we do in the service worker is make a POST request, put it in the cache, and then the service worker waits for the online event. When that happens, the POST request gets submitted. This happens without her interaction; it's happening in the background. I think this is really, really cool. So making this progressive web app was a really, really interesting journey for us.
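The offline feedback flow, park the POST and replay it when connectivity returns, can be sketched as a small queue. This is an illustrative model, not Lyft's service-worker code: the `send` function, the event wiring, and the persistence strategy are assumptions.

```javascript
// A queue that tries to send immediately, parks failed requests,
// and replays them when flush() is called (e.g. on the online event).
function createOfflineQueue(send) {
  const pending = [];
  return {
    submit: async (request) => {
      try {
        return await send(request);
      } catch (err) {
        pending.push(request); // offline: keep it for later
        return null;
      }
    },
    flush: async () => {
      while (pending.length) await send(pending.shift());
    },
    size: () => pending.length,
  };
}

// In the service worker this would roughly be:
//   self.addEventListener('online', () => queue.flush());
// with `pending` persisted via the Cache API or IndexedDB so it
// survives the worker being killed between sessions.
```

A production version would also re-queue a request if `send` fails again during `flush`; the sketch keeps only the happy path the talk describes.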
We've been at it for about two months. It's very early, but we've learned so many lessons that I want to share a few of them with you. As other speakers here have told you, less JavaScript is better. As much as we love to add new libraries and new dependencies to our code, we should always be concerned about the amount of JavaScript we push to our users. This is really, really important for mobile users. The MacBook Pro that you develop on is far faster than the phone that your users are using. That's why, at Lyft, we use real devices for development and testing. We forbid ourselves from using our Macs and these powerful computers for development. It's not the most pleasant development experience, but it keeps us close to the experience our users actually get. Another thing that we did was minification. I'm sure all of you minify your JavaScript. But we did something new, thanks to the Angular team and their project that emits Closure-Compiler-annotated JavaScript from TypeScript. With that, we were able to minify our JavaScript even further. A method name will not change in a regular minification pass, because the minifier doesn't know everywhere the method is being called, so it can't rename it all over the place. But with TypeScript and Closure Compiler, we were able to minify it even further. The other thing we did with our JavaScript was dead-code elimination with Closure Compiler, which is now much easier than before with Webpack and the new minifiers. Here is a quick look at our technology stack. We're not using a lot of dependencies, as I said; just React, a little bit of Webpack, and things are working pretty great. Our bundle size is less than 40 KB. And the last challenge was the other platform. At the same time, all these PWA APIs are pretty, pretty new. Sometimes we found the MDN documents were out of date. There aren't many Stack Overflow questions to look at; we ask questions and don't get answers. So that is an interesting challenge to have.
The other one is not in our control, so hopefully that's going to be fixed next year. I have one little lesson we learned, and that is animating opacity. If you run an infinite CSS animation on opacity, it can crash your browser. Don't do that. So now I'm going to hand it over to Malcolm to talk about the business impact of this PWA. Thank you, everyone. Thank you, Mohsen. So, putting this all together, what was our early business impact? The initial response to our alpha was certainly very, very positive. We exceeded our initial weekly-rides goal by 5x. We were able to launch an app, essentially a wrapper around our PWA, on the Amazon Fire app store very, very quickly. And if you're interested, there's a link up there to learn more. And finally, we estimate that we'll see up to a 50% improvement in forced-upgrade churn. These are, again, the folks that we eventually have to force-upgrade due to having older OS versions. And so, next steps. This is certainly still a very, very early beta. We're still iterating on it a lot. It will be buggy, but we encourage you to try it out. Number one, for next steps, we'd definitely like to improve and optimize our conversion funnels. Number two, experiment a lot. And number three, finally start adding a lot more features so that we can actually reach feature parity with our native apps. And so I want to thank everyone here for listening. Some of our team members will be around at the conference later today to answer any questions you might have, or just to chat. And if you'd like to give it a try, go to ride.lyft.com, and in fact, we have a coupon code for you to use for 20% off your ride. All right, thanks everyone. Well, indeed. Indeed. Do you know what I think we should do now? Yes. I think they all know. I love how ridiculous that is. What? Ridiculous. I won't take that. I'm not saying I had a lot of fun making that. Well, we have made a couple of changes since last night.
Every question is now worth double the points it was yesterday. However, we have also reset all your scores to zero, so the fact we've doubled the scores doesn't actually make a difference at all. We have also doubled the prizes. Oh, yes. You can also get more mugs; not saying that we got loads made and we've got to get rid of them, but there might be a few on hand today. But look, I mean, who wouldn't want that? So get yourself onto the page now, because we can start running the first question. Here we go. Excellent. Which of these CSS properties creates a new stacking context? Position relative? Will-change: transform? Position sticky? Or overflow hidden? Select all that apply. What is a stacking context? Does that give it away? Okay. Let's have a look at the answers coming in. Well, okay. So we've got one sort of low confidence there. The audience again, we saw quite a lot yesterday. So what are we saying here? Position relative and position sticky, higher confidence. Will-change: transform? It's not position static; I can see the train of thought. Will-change: transform is a bit of an unknown, I think, because it's still relatively new. Yeah. So just overflow hidden isn't? Do you know why? Okay, that was a good explanation. Let's move on to another question before the next talk. So this one's on HTML parsing. Which of the following tags is explicitly handled in the HTML parsing spec? This is the WHATWG spec. Is it the closing sarcasm tag? How would you even pronounce that? The image tag, spelled out in full? The listing tag? Select all that apply. If you've only selected one, you can go back and change the answer. Let's have a look and see what's coming in. Oh, look at that. 72%. That's a yes. That's happening. Oh, and 12%. Close to it. I bet that's the sarcasm one. Okay. So low confidence in sarcasm, low confidence in listing. How do you feel? You feel tense, right? Do I know this? Let's find out. Wow. Can you tell them the story of the sarcasm one?
In the HTML parsing spec, there is a section that says: when you encounter a closing sarcasm tag, take a deep breath and go to the section for handling unknown tags. It is explicitly handled. The image tag just says: alias of img; go to the img handling. What's listing? Listing is an ancient HTML 2 element. Ancient. Well, it's like 20 years old, right? That's pretty ancient in terms of technology. And it's very similar to the pre element. I think it was initially intended to auto-escape HTML. Pre, but with escaping. Yes, but I don't think any browser ever did the escaping. There it is. Once again, we'll take a look at the leaderboard a bit later on and see how you're all getting on. I guess it's time for our next talk. It'll be Paul Bakaus. He's going to be talking about mixing up PWAs with AMP. PWAMP, I guess? No, we can't call it that. That's not going to become a thing. It sounds like an elephant's fart. Paul, on that lovely note: ladies and gentlemen, to the stage, Paul Bakaus. Thank you. Hi, good morning, everybody. Yeah, my name is Paul. I work on AMP, and I'm also on the DevRel team at Google. And I'm going to mostly skip this slide, right? I mean, unless you literally just arrived, you've heard about progressive web apps. They help you turn your site into a reliable, fast, and engaging experience. But what if you build the most amazing progressive web app and nobody discovers it? Or nobody waits long enough for the service worker to install your app shell and make subsequent loads snappy? Keep in mind that the service worker is awesome, but it doesn't help at all with the first load. Even though the Service Worker API allows you to cache away all of your site's assets for almost instant subsequent loads, like when meeting someone new, it's really the first impression that matters. If the first load takes more than three seconds, our latest DoubleClick study has shown that more than 53% of all users will drop off.
So half of your audience will never, ever get to see your content. Now, three seconds, let's be real, is an already brutal target on mobile connections that often average around 300 milliseconds of latency and come with other constraints such as limited bandwidth and an unstable signal. And so you might be left with a total load performance budget of less than a second to actually do the things you meant to do to initialize your site or app. But don't worry, it gets worse. By the way, you really want to load in under one second, says RAIL, deriving that number from a book on usability by Jakob Nielsen. Does your site need more than three round trips to the server? Well, sorry, but that's not going to happen. Don't feel too bad, though. The overall landscape of today's web looks a lot grimmer. The average mobile page loads in about 19 seconds, with 77% of pages taking more than 10 seconds to load. Now, the crazy thing about the 10-second mark is that at 10 seconds, essentially 100% of your users bounce. So at 10 seconds, they never get to see your site. And the average page makes 214 server requests, 50% of which are ad-related requests. So on first click, that first impression is what matters. We wanted to get rid of slow-loading pages, but also solve runtime performance, so you can scroll really efficiently, and other usability issues at the same time. And fortunately, we have a solution for it, or we think we have a solution for it: AMP, short for Accelerated Mobile Pages. AMP is an ecosystem consisting of a web component library that allows you to declare a different dialect of HTML, which we call AMP HTML because it's both a superset and a subset of HTML, and of AMP caches, basically CDNs, to be more technically correct. You can either use AMP like a library and build just one page, so you have one canonical page for everything.
Or you can use a link tag in the head and generate two pages in your CMS: one HTML page and one AMP HTML page. But that's really up to you, whatever you prefer. In a nutshell, it turns authored pages into highly portable, fast, and user-friendly units that platforms like Google, Bing, or Pinterest can safely and quickly embed. And it's a really rich and growing library of web components. Now, a lot of the baked-in performance optimizations you could probably do yourself if you're an experienced developer. But the AMP cache in itself is actually a very important component, not just because it's a free, super-fast CDN. The AMP cache works tightly together with AMP's prioritized loading and static layout system. Documents served from the AMP cache are much cheaper to pre-render, because AMP knows where each page element is positioned even before any assets are loaded, allowing you to load just the first viewport without any low-priority third-party stuff. And the actual site owner won't ever know about the preload, which is another very interesting point. That's super important for privacy reasons, as the site could otherwise write cookies and mark the page as seen. If you're searching for diarrhea on Google, you might not want every one of those pages to actually write a cookie on your behalf and give you diarrhea ads everywhere. Sorry for putting that image in your head, by the way. So AMP pages are really just HTML and CSS. You can't have any user-authored JavaScript on the page; that's one limitation. Instead, a lot of custom elements and sandboxed AMP iframes still allow you to do everything, if you want something like a crazy animated graph or something that is really custom to your page. And then, the AMP open source library is the same everywhere. It's hosted on one CDN URL and you include it from there. That means it's evergreen. It's highly cacheable.
We can upgrade all AMP pages in less than a couple of days — a couple of hours, maybe, if we have an important fix to ship. And it defines the behaviors for those custom elements and manages rendering and resource loading to optimize performance. Now, the important question that might come up in your head is: do you go AMP or PWA? You learned a lot about Progressive Web Apps already in the last couple of sessions, and we've been hearing that question constantly. In order to be reliably fast, you need to live with some constraints when implementing AMP pages. So if you go AMP, you won't get the biggest Progressive Web App benefits on the first click — but from a Progressive Web App, you won't get AMP's instant first-click experience. With AMP, you don't have a custom service worker when the page is loaded from the AMP cache, so you can't do anything fancy in there. It means no push notifications, no web app manifest served from the cache. And if something isn't available as a component, you can't just hack a script together — for instance, to support web payments or push notifications. Now, those of you that have already read about the individual advantages and disadvantages of Progressive Web Apps versus AMP have surely struggled with this question before. On one hand, you have almost-instant delivery and optimized discovery, but then again no user scripts, and it's mostly for static content. On the other hand, on the PWA side, you have advanced platform features — of course you can build everything you want, really, highly dynamic — but it's a slower first delivery. And again, that first delivery is highly important, and it's not as easily embedded in third-party platforms as AMP is. So what if there was a way to combine those two, to really reap the benefits of both? In the end, what I think matters is the user journey.
The first hop to your site should feel almost instant, and the browsing experience should get more and more engaging afterwards. Now, AMP and Progressive Web Apps are both critical components to make that happen: AMP pages for the first navigation, and then your Progressive Web App for the onward journey. Let's first talk about AMP as a Progressive Web App, because I won't cover this in detail in this talk, but it's important to note that many sites won't ever need things outside the boundaries of AMP. ampbyexample.com, for instance — an examples page the AMP team uses to showcase a lot of AMP components — is both an AMP page and a Progressive Web App at the same time. So it has a service worker, and therefore allows offline access and more, and it has a manifest prompting the add-to-home-screen banner. Now, you might have heard me saying, well, you can't have those things — but that only means you can't have those things in AMP when it's served from the AMP cache, meaning the first time you click on it on a platform like Google Search, you won't get those benefits. But the next time, you actually do. So once you arrive on the origin, the service worker installs and does a lot of fancy things. When a user visits ampbyexample.com from search, then clicks on another link on that site, you navigate away from the AMP cache to the origin. The site still uses the AMP library, of course, but since it now lives on the origin, it can use a service worker, prompt to install, et cetera. AMP By Example uses that technique to do just that. But I had some fun with ampproject.org, at least on a local branch, and thought: what if we already have a service worker intercepting, and we insert more random stuff into the page? I mean, we can pretty much do everything we want in that fetch event, right? How about things that AMP doesn't like? Because the cache doesn't see the service worker.
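As a sketch of what that fetch-event tampering could look like — please don't ship this; the injected snippet and the splicing helper are purely illustrative, not anything from the actual demo:

```javascript
// sw.js — illustrative only: rewrite navigation responses on the fly.
// The injected markup below stands in for the demo's 90s-cursor gag.
const INJECTED =
  '<style>body { cursor: url("/sparkle.png"), auto; }</style>';

// Pure helper, kept separate so the splicing logic is easy to test:
// splice the snippet in right before </head>.
function injectIntoHtml(html, snippet) {
  return html.replace('</head>', snippet + '</head>');
}

// Wiring only runs inside a real service worker context.
if (typeof self !== 'undefined' && typeof self.addEventListener === 'function') {
  self.addEventListener('fetch', (event) => {
    if (event.request.mode === 'navigate') {
      event.respondWith(
        fetch(event.request)
          .then((response) => response.text())
          .then((body) => new Response(injectIntoHtml(body, INJECTED), {
            headers: { 'Content-Type': 'text/html' },
          }))
      );
    }
  });
}
```

Because the AMP cache only validates the static document, nothing stops a service worker on your origin from amending what actually renders.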
It doesn't really care about the service worker, so we can do pretty much everything we want. So I got a little nostalgic — hey, why not throw in some 90s DHTML magic? I found this fancy cursor that I think improves my page. So here we go: we're using the Service Workers panel, I reload the page, and yes, I get a JavaScript-animated backdrop and a really fancy cursor. And I think it dramatically improves the experience of our documentation pages. So first: what have I done? Second: please don't do this at home. But it's a technique that allows you to make amendments to what an AMP page can do — to just insert more stuff. So now to the more interesting bit: transitioning a user smoothly from AMP page to Progressive Web App. There are two steps to combining the two, which I personally call AMP up and AMP down. AMP up is the background bootstrapping of your Progressive Web App shell while the user is enjoying your AMP page. And AMP down describes reusing AMP pages as a data source for your Progressive Web App. The basics of AMP up are that the first click lands on an AMP page, usually served from the AMP cache, and then any links on that page navigate to your Progressive Web App. Normally that second click would still be considerably slower than the instant-feeling, preloaded first click on your AMP page. But there's a powerful component baked into AMP — the amp-install-serviceworker tag — that, without you writing any custom JavaScript, installs the service worker for your origin. So yes, even when your AMP page is served from the AMP cache, it installs the service worker from your origin, downloads an app shell, and bootstraps your Progressive Web App. Which means that by the time the user has read your article, your Progressive Web App is already loaded and ready to go. And The Washington Post actually does this.
The service worker can warm up and pre-cache while the user reads. When the user now clicks on a link on the page, or a call to action at the bottom, your Progressive Web App shows up instantly. Alex Russell calls this pattern "start fast, stay fast." This is what I call AMP up. But now you're in the Progressive Web App, and chances are most of you are using some Ajax-driven navigation that fetches content via JSON or some other data backend. Now, you can certainly do that, but then you have these crazy infrastructure needs: two totally different backends, one generating AMP pages and one offering a JSON-based API for your Progressive Web App. But think for a second about what AMP really is. It's not just a website. It's designed as an ultra-portable content unit. It's also a data format. So the AMP team asked themselves the logical next question: what if you could dramatically simplify backend complexity by ditching the additional JSON API and instead reusing AMP as the data format for your Progressive Web App? We started with a proof of concept many months ago and iterated on it for a while to see if this pattern actually works, rewriting many parts of AMP to make it a reality. So how did we do it? Well, one easy model would be to simply load AMP pages in iframes. But iframes are slow, and you'd need to recompile and re-initialize the AMP JavaScript library over and over. Today's cutting-edge web technology offers a better way, and it's called Shadow DOM. In the old world, our worldview was simple: one window, one instance of the AMP library, one document. In the new world, there's one window, one instance of the AMP library, and multiple documents. This results in super-fast transitions between AMP documents, as the library only needs to be compiled once. The process looks like this.
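Sketched in a few lines, based on the Shadow AMP API as presented in this talk — attachShadowDoc is the call named on stage, but the exact signature may have evolved since:

```javascript
// Sketch: embed a fetched AMP document into a shadow root and hand it
// to the single, already-compiled AMP runtime (window.AMP).
async function embedAmpDocument(url, container) {
  // The PWA's hijacked navigation fetches the AMP page itself...
  const html = await (await fetch(url)).text();
  const doc = new DOMParser().parseFromString(html, 'text/html');

  // ...creates a host element whose shadow root will hold the document...
  const host = document.createElement('div');
  container.appendChild(host);

  // ...and tells the runtime: hey, I have a new document for you.
  return window.AMP.attachShadowDoc(host, doc, url);
}
```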
The Progressive Web App hijacks navigation clicks, does an XHR to fetch the requested AMP page, puts the content into a new shadow root, and tells the main AMP library: hey, I have a new document for you, check it out. It does that by calling attachShadowDoc on the runtime — the runtime being what we call the AMP library. Even cooler, we've added a conditional CSS class on shadowed AMP documents, so you can automatically hide stuff like headers in embedded mode; the amp-shadow class does that. And shadow slots, which are coming pretty soon, allow you to insert advanced widgets and functionality into pages that live in a shadow root in your PWA. And of course, you can also insert and remove things manually — it's really your XHR call, so you can access the source of the AMP page, like I previously showed with the service worker example. But the API above makes it really easy for everyone, and the class makes it super, super simple. And finally, after AMP up and AMP down, here's AMP sideways — up, down, whatever, Konami code, right? Time for an advanced pattern to wrap things up. We have a pretty good experience now, but if you're in the Progressive Web App, copy a link, and share it on Twitter, that link will open the Progressive Web App directly, right? Because you're not on an AMP page anymore. And for a new user who doesn't have a warmed-up service worker pre-cache, it won't feel instant. That's a problem we can solve in the final step of our development journey. Instead of creating a separate URL space for the Progressive Web App — so, for instance, pwa.yourdomain.com, like we did before — we just reuse the existing AMP URLs to load the Progressive Web App on your site's origin. And we do that by having the service worker simply intercept the navigation request.
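A sketch of that interception — the shell URL and the fallback handling here are made up for illustration:

```javascript
// sw.js — sketch: answer every navigation with the cached PWA shell,
// which then XHRs the AMP document it actually wants to show.
const SHELL_URL = '/shell.html'; // hypothetical app-shell location

// Pure helper so the routing decision is testable on its own.
function isNavigationRequest(request) {
  return request.mode === 'navigate';
}

if (typeof self !== 'undefined' && typeof self.addEventListener === 'function') {
  self.addEventListener('fetch', (event) => {
    if (isNavigationRequest(event.request)) {
      event.respondWith(
        caches.match(SHELL_URL).then(
          (shell) => shell || fetch(event.request) // fall back to the network
        )
      );
    }
  });
}
```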
All we need to do is listen for that navigation request in the service worker, and then, instead of serving a cached AMP page, serve the cached PWA shell — which then does an XHR to fetch the requested AMP doc. This means that in just one request, your Progressive Web App shows up along with the requested content. So you've got one AMP, one Progressive Web App, and one request. Best of all, we're now progressively enhancing our AMP pages with our Progressive Web App, and no matter how the user arrives, they get a super-fast experience, regardless of AMP or PWA. Browsers that do not support service worker simply see AMP pages. And if you think, well, I still want my Progressive Web App — even if it might not have all the bells and whistles — to work on browsers that don't support service worker, we have a solution for that too. We have something we call fallback URL rewriting, which literally just landed this week in the latest canary version, which is sort of our beta channel for AMP. It means that if you arrive on an AMP page in a non-service-worker browser, AMP will realize that and rewrite all outgoing URLs on your page — by pattern matching, based on a pattern that you can select — to redirect to a fallback PWA URL. So browsers that support service worker just use one URL space, while browsers that don't will still go to the Progressive Web App. And we even load a hidden iframe that warms up the browser cache. It's not as good as having a service worker, but it's still better than nothing. And now it's time for demos. This is one demo that the AMP team has built — a pretty cool React-based demo for you all to try out and get inspired by. But I'm not going to talk too much about it, because I thought it'd be even cooler to show you a real-world example. In fact, the first real-world example that we know of.
And that's Mic, who've made a heroic effort over the last weeks to get it to its current state. Please welcome David Bjorklund, lead engineer at Mic, to show you the new Progressive Web AMP experience and the lessons learned along the way. Thank you. Thank you. Hi, everyone — can you hear me? Yes? Awesome. So let's go to the demo. I wanted to start with loading an article. This is an AMP article — as you can see in the source code on the right side, we're using the AMP tags. So if you were to come from search, for example, this would be the instant experience. This is what a user sees when they come to this site for the first time, having never been here before. So let's navigate to the menu. I don't know if you noticed what happened here, but when we loaded the menu, we actually reloaded the page, because we want to take you from our AMP article to our PWA as quickly as possible. And in this case, since we've loaded the service worker in the background on the AMP page, it has loaded all the data that we need to show the PWA. And since we're sharing the URL space, the URL is still the same. So we're now in the PWA. This means that if I click on a link, it's going to be a single-page app experience. It means that if I go to an article, it's going to load much, much faster than it otherwise would have done. But it also means that we can use things that we can't do in AMP natively — so we can do push notifications. Now, I showed the menu, but this would be the same if you came back to the same article, or if you came to another page. The service worker is going to be there; every time you come to this site, you're going to get the service worker. So let's go back to the slides. Actually — let me take the opportunity to show you that we cache the articles.
If we go back to the same article again, it loads instantly. But can we go to the slides? Yeah, awesome. All right, so I want to talk briefly about how we built this. We set out to do a proof of concept. We gave ourselves two weeks, and we had three engineers who could work on this. As a team, we knew of PWAs — we'd seen the videos, we'd read a couple of articles — but we didn't have any hands-on experience building one. But since we could reuse our existing AMP rendering pipeline unchanged, that really jump-started our PWA journey. It was just a few days into the work that we could start on the fun stuff: the service worker, all of these PWA features, because we already had the AMP side in place. That meant we had time to work on performance improvements. We did what a lot of others have talked about today — we made sure our JavaScript bundle was as small as possible. But it also gave us time to experiment, to play around a little with UX improvements, like the menu I showed earlier, where we do this reloading. And it gave us time to really dig in and learn as much as possible about the service worker, so we could get closer to something like the perfect cache. And, as I said earlier, it gave us time to implement web push. In the time frame that we had, and with the resources we had, we never would have had time to do any of this if it hadn't been for the simplicity that embedding AMP in a PWA gave us. When I did the slides, Paul asked me to be honest with you and talk about the pain points. So I went through it and tried to make a list, and I talked with my team about how this experience had been. But it was actually sweet — it worked as advertised. Of course, we had certain pain points doing a PWA.
But everyone has those — that's what we've been talking about for the last few days. When it comes to actually embedding AMP inside the PWA, it just works as advertised. And with that, I'd like to bring up Paul again to wrap up. All right. Thank you, David. So, to wrap up: there you have it. We successfully combined AMP with a Progressive Web App. The user always gets a fast experience, no matter what. Your site is progressively enhanced. You have less backend complexity, because you just have one data source, and you profit from the built-in performance of AMP everywhere, even in your Progressive Web App. And that really reduces the overall investment. Now, before I leave you, keep in mind that this is just one pattern to build sites — and it won't work for everyone. You probably shouldn't build the next Airhorner or Gmail with it; focus instead on sites that have a lot of leaf nodes — individual pages with lots of static content. By all means, find out if the pattern is the right one for you, and feel free to get in touch to discuss. So check out our React-based demo. Check out beta.mic.com to see a live experience. And learn more about Progressive Web Apps on developers.google.com/web, and of course more about AMP on ampproject.org. I really can't wait to see what you build. Thank you. Right — break time. You have until 11:30. This afternoon, there is a browser vendor panel, very similar to the Chrome leadership panel yesterday, except that it's folks from multiple browser vendors, if you want to ask any questions. It's not just Chrome people, right? It's not just Chrome people. It's not just a group of people agreeing with each other, like all the other panels are. Exactly. So if you've got questions, pop those into the Slack channel, and we will pick them up and try to get through the ones that we can. So don't forget to do that.
And that'll make this afternoon a lot more fun. We'll probably have the microphones up front as well for the folks here. So enjoy that. And before we bring Addy out, we're going to do a couple more quiz questions. These are fun ones — some of our favorites, we think. Right, let's move on to the first question. What does the following bit of JavaScript evaluate to? Lovely noise that just went across the audience there. Everyone likes emojis. I mean, I don't even know how you read that out loud. It's just, like, American-flag-emoji dot length, and then a spread of the emoji, dot length. What is that? Let's see how everybody's voting. Oh, we've got low confidence — it's clearly spread between those two in the middle. Somebody's trying to copy-paste that code, aren't they? So what are people saying? The most popular answers are four and two. Two, four, six, eight — who do we appreciate? The correct answer, which is six. Now, the reason: the flag comes out as four — four bytes for the emoji, right? Well, it's not quite bytes. What's the correct term? Four items of a string... UTF-16... someone shout it out. UCS-2, thank you — code units. The flag is two code points, each a surrogate pair of two code units, so you get four of those in dot length. And when you break it apart into an array, the string iterator walks code points, so you get two. I mean, you wouldn't do that in production. I don't think. I mean, you might — I don't know what app you're making. A quiz, probably. Shall we? I'm really looking forward to the answer to this next one. This is hot off the press; we just thought this one up. Yes, here it is: which of the following are valid CSS properties in Chrome 54? So this is Chrome Stable.
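A quick footnote on that emoji question: it reproduces in any modern JavaScript console. The exact expression on the slide isn't legible in the transcript, so the version below is a reconstruction that simply matches the four-plus-two-equals-six explanation given above:

```javascript
// The US flag emoji is two "regional indicator" code points
// (U+1F1FA, U+1F1F8), written here as escapes.
const flag = '\u{1F1FA}\u{1F1F8}'; // 🇺🇸

// .length counts UTF-16 code units; each code point here is a surrogate
// pair, so 2 code points × 2 units = 4.
const codeUnits = flag.length; // 4

// Spread uses the string iterator, which walks code points, so: 2.
const codePoints = [...flag].length; // 2

console.log(codeUnits + codePoints); // 6 — the quiz answer
```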
So we're looking at CSS properties that have been around for a little bit. What we have here: caption-side, mask-repeat, text-combine-upright... it's a nasty one, isn't it? Here we go, let's have a look at how people are voting. Yeah, so we're quite confident about mask-repeat. Less confident about the upright one. caption-side, pretty confident. unicode-range — and very confident on clip-path. Interesting, a lot of clip-path going on. I'm really looking forward to the noise everyone makes in a second when they see the answers. Ooh. Now you might dispute — you might dispute, as I did backstage, about unicode-range. Well, let's go through them. clip-path is for clipping; it's set to replace clip as the new one. I'd say it's tricky because clip-path hasn't been implemented everywhere, but clip is deprecated, so you're kind of somewhere in the middle. I'll come back to unicode-range. caption-side is where you can change the position of a table caption between the top and the bottom — never come across that before. mask-repeat, we just made that up. That's a lie — we combined a couple of existing ones; think background-repeat, but what if you had mask-repeat? text-combine-upright is something for Japanese and Chinese characters, combining them in particular ways when they're in vertical text. And unicode-range is a CSS descriptor, not a CSS property — it lives inside an at-rule, @font-face in this case, so it's not a property, it's a descriptor. Smugness in this place. "Don't know if you knew, but I've heard of descriptors. Can I tell you what they are?" Very pleased with yourself. Really unpopular right now. So, on with the show. We built this quiz using Preact, in fact. Me and Paul used a framework — to learn more, to figure out what we don't know. And it turns out quite a lot, as you discovered. Don't look too closely at the code. Some of this was written on the plane. I had been drinking.
Some of the bugs were written on that one flight. Yeah, he misspelled colors — he actually committed it with colours spelled with a U and then referred to colors without the U. A British Airways flight. And I was like, how did this even run on your machine? And he said: I don't know. Anyway, here to tell us what we could have done better, and how to do Progressive Web Apps with frameworks — ladies and gentlemen, it's Addy Osmani. That was worth the hour it took to render. So, in many ways, Polymer has been a sort of Tesla vehicle for the Chrome team, highlighting one path for how you can ship fast, high-performance Progressive Web Apps that work really, really well on mobile. But we work in a really diverse community. Everyone is using different tech stacks. And today we wanted to talk a little bit about how you can use other libraries and frameworks, like React, to build fast Progressive Web Apps — looking at what you need to do in order to make things built with React qualify as instant experiences on real devices. Flipkart are going to get up right after me to talk a little bit about their experience shipping React PWAs at scale, and all the lessons that they learned. And we have a little surprise for you at the tail end of this talk that you'll see in a short while. So let's start off with a statement: frameworks can be fast if we put the work in. I firmly believe this. I think that we're at a point where fast is not the default for a lot of libraries and frameworks, and I think a lot of framework authors acknowledge that we can do better when it comes to performance on real-world devices. But let's take a look at what's possible today. So this is Flipkart on a real device. They're doing really, really well. They're interactive in just a few seconds. They're shipping just the minimal functional code to get a route interactive very, very quickly, and deferring a lot of the work that's not needed for this route to a future point in time.
And they're taking advantage of techniques like code splitting and PRPL in order to accomplish this. Housing.com are similarly doing really great work in this area — again, interactive in just a few seconds. But we talk a lot about speed and what it means to be fast at CDS. What do we actually mean by fast? There are a few key moments when it comes to modern loading performance, and some of these metrics are things you might be familiar with — the idea of first paint, first meaningful paint. But really there are three phases here: the "is it happening?" moment, "is it useful?", and "is it usable?". Now, we're increasingly trying to focus on the "is it usable?" phase — time to interactive. At what point during loading is your app actually engageable by the user? If they tap on different things inside the app, can they actually accomplish things that are useful to them? Time to interactive is really that point where I can tap and get something useful. We're saying that, ideally — regardless of what you're using to build these apps — it'd be great to be interactive in under five seconds on a real device under real, representative network conditions. So, 3G. If you happen to be using service worker caching, you're going to benefit by ideally hitting an instant repeat load, and your time to interactive is going to be even better in those cases. So service worker caching is definitely worth looking at. In this slide there's actually nothing on this person's phone screen, and I think they're going through withdrawal of some sort. So, Lighthouse has been mentioned — Darin mentioned it in his keynote. Lighthouse is currently one of the best ways to easily track things like time to interactive. It includes a number of different performance metrics. There's a Lighthouse extension, and it's also available as a CLI; time to interactive is included in the performance audits.
And if you want to take a look at how well you're doing, what I recommend is trying out Lighthouse over remote debugging, testing with a real phone. It'll give you an eye-opening look at your performance on real-world devices, so that's definitely worth spending some time on. Recently, I was very curious about how the React community were shipping down code, how they were tackling things like module bundling. So I put out a call on Twitter, asking people: have you shipped React in production, and what were your experiences doing that? I've published a little bit of the data on that, but let's dive into it. So: what JavaScript module bundler do you use? The majority of people are using Webpack. That breaks down into about 65% of people on Webpack 1 and a smaller number on Webpack 2, though those numbers are increasing. The rest of the numbers are Browserify and other bits and pieces. So Webpack is kind of a big deal — let's take that from this particular slide. I then asked people if they were using code splitting to chunk up their JavaScript, and I got a very surprising answer: 58% of people thought that they were. Now, this surprised me, because when I talked to the Webpack community, when I talked to the Webpack authors, they were like: we don't think any more than 10% of people are really using code splitting. And there's something interesting there. Maybe there's a breakdown in terminology. Maybe people are using code splitting, but not necessarily doing it the right way. And I don't really blame them, because configuring Webpack is so fun. It's the best. But I think we have an opportunity to improve that. Other concepts people were looking at: 11% of respondents said they were exploring service worker support — that's good, I'd love to see more people doing that — and 14% were looking at HTTP/2 and what would be involved in more granularly shipping stuff down.
And 19% were looking at tree shaking. So, interesting stats. Now, we've mentioned the Polymer Shop demo quite a lot, and the reason for that is it's using PRPL. It does really, really well on real-world devices under 3G. On throttled 3G, it gets interactive in about 4.3 seconds — call it four seconds. If you're looking at it on a really, really bad 3G network, something with more packet loss, it's still doing pretty good at 5.8 seconds. Let's take a look at Flipkart and housing.com next. Flipkart is in between these two; I did the averages, and they're getting interactive in about 4.5 seconds. So still fairly fast, fairly good. About 6.9 with packet loss, but they're still doing pretty well. These folks are using basically all the tooling, all the performance best practices that we're encouraging people to take a look at, to ship these experiences down in ways that are ideally going to benefit their users at the end of the day. So here's the crux of the study. I ended up profiling over 150 React apps that people submitted over the last couple of weeks — crunched the numbers quite a lot of times; it was fun, so fun — on real devices. And what I discovered was that the average React app in that survey was interactive in about 11 seconds. So there's quite a gap between what's possible and where the average app is right now. With packet loss, we're looking at 12 seconds. Probably the worst app in that particular study was interactive in 24 seconds — the user is going to be in uncanny valley, just tapping around the screen and not really seeing anything happen. So this is a timeline trace of what the average React app built with Webpack looks like. In this case, I saw hundreds of kilobytes of script being shipped down just for a single route, and a lot of it wasn't being used. They are using code splitting, but it's taking eight seconds before all of their script and their common chunks have been shipped down.
Thousands of milliseconds are being spent in parse and eval time. And for anyone that's followed Paul Lewis and Paul Irish's guidance over the last couple of years about trying to ship a frame in 16.6 milliseconds — well, these guys have got a frame that lasts 7,970 milliseconds. Doing really great there. Great. But we can do better. So the first piece of advice is: try not to keep the main thread busy. If you're shipping down really large bundles of JavaScript, it's going to take longer to load, parse, execute, and run, and it's definitely going to peg the main thread. Now, some advice comes with nuance, and nuance is something we often lack in these conversations — it's really tricky to pack it into a short amount of time. But basically: if you're working on a page that isn't going to be useful to your user in any way unless you ship that amount of script, you're probably better off shipping it. If you can, however, trim it down so that you're just shipping the minimal functional stuff that's going to be useful to your users, please consider doing that. It's going to help them out, because they don't need you shipping all of the script for the entire site or the entire app in one go. Other things that can impact the main thread being busy, and time to interactive, are suboptimal back-and-forth between the client and the server. Sam Saccone touched a little on the idea of JavaScript parse, compile, and eval execution times being quite different between desktop devices and mobile. Here we have a meg of script, about 250 kilobytes minified — and the amount of time it takes to parse and compile it on what a lot of us use... I see a lot of MacBooks in the room; this is how long it takes to parse and execute on a MacBook Pro from 2015. And take a look at the difference — how much our assumptions are broken when it comes to the average phone, something like a Moto G.
This is taking about three seconds to parse, compile, and execute — and that's not even looking at load time. If you're trying to get interactive in under five seconds, this is just not going to cut it. But all of this, again, has nuance. You need to make sure that you're measuring before you're optimizing, and that you're ideally doing the right thing for users. Test on real phones and real networks. This is something we've mentioned in a few talks at Chrome Dev Summit, and I cannot stress enough how important it is to test on real devices. Emulation is only going to get you so far. You can be testing with 3G throttling on, with CPU throttling on, on desktop, and the difference between that and the stats you'll get out of a real phone is still going to be multiple seconds. I think there are opportunities for us to do better at a tooling level. But real devices have got different mixes of cores, GPU, and memory, and there are going to be packet-level differences on different networks. So do try to make chrome://inspect your best friend, and use it. When Alex Russell carries around all those phones, he's not crazy — mobile web speeds do kind of matter. In fact, on average, faster experiences tend to lead to longer sessions. And I think it was perhaps the DoubleClick report published recently that said publishers who optimized performance were seeing anywhere up to two times the mobile ad revenue. So: test on real devices, make real money. Let's riff on this other idea: less code, loaded better, helps everyone. This is another one of those items that requires nuance, but if you're able to load less code for a route in order to get it useful, please do so. The nuance again comes from the fact that you may require more script — me shipping down 300 kilobytes of script may be very different from someone else doing it. There are going to be different parse and eval times at play. So again: very important to measure.
But let's riff on this idea of less code loaded better. We're gonna use Webpack. A lot of you may be familiar with what Webpack is; for anyone that hasn't used it before, it's basically a popular JavaScript module bundler. It packs lots of modules into smaller bundles so you can ship them down to your users. And so we're gonna look at some of these ideas around the PRPL pattern and how you can serve these things down to your users. The first one is code splitting. So I've been talking about trying to ship the minimal code down to your users. Code splitting is one answer to this problem of serving people monolithic bundles. It's the idea that by defining split points in your code from view to view, for example, route to route, you can split it into different files that get lazy loaded on demand. That can improve your startup time and help you get interactive much, much quicker. Now with Webpack, there are two main ways of doing this. Actually, there are quite a few ways, not just two. With Webpack 1, you can use require.ensure. Webpack is going to take a look at anywhere you're using require.ensure and create a chunk for you based on that. That's how you define a split point. In Webpack 2, they currently use System.import from the loader spec to accomplish the same thing. I do believe Webpack are also a little bit future facing, looking at what else is happening in the loading space. But basically, these are two ways to do code splitting. There are great articles that cover this in more depth. There are other ways that you can do code splitting as well. The bundle-loader is another option. If you don't like the pattern that you just saw on screen, you can use bundle-loader and prefix the things that you want to require into your page, and it will automatically wrap those things in a require.ensure for you and take care of the rest.
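As a sketch, defining a split point with those two APIs might look something like this. The `./views/profile` module name is hypothetical, and this only works inside a webpack build, not in plain Node:

```javascript
// Webpack 1: require.ensure defines a split point. Everything required
// inside the callback is emitted as a separate chunk that webpack
// fetches lazily the first time this code path runs.
function showProfile() {
  require.ensure([], function (require) {
    var ProfileView = require('./views/profile'); // hypothetical module
    ProfileView.render();
  });
}

// Webpack 2: System.import (from the loader spec) returns a Promise
// for the lazily loaded chunk instead of taking a callback.
function showProfileV2() {
  return System.import('./views/profile').then(function (ProfileView) {
    ProfileView.render();
  });
}
```

Either way, the profile view's code stays out of the initial bundle until someone actually needs it.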
It's also possible to wait for that chunk asynchronously before you do anything with the code. And finally, if you happen to be using React Router, it's actually got really great support for working with require.ensure. There's a declarative option, and it's also got a slightly more imperative one. But basically, when I'm defining routes here, I'm able to use an asynchronous getComponent. And inside there, I can say, please go and get me the user profile view, and then I can do stuff with it. So it doesn't necessarily need to be included in a big monolithic bundle upfront. The next thing is the PRPL pattern. So Sam talked about the PRPL pattern yesterday. It's basically a pattern for structuring and serving progressive web apps with a heavy emphasis on performant app delivery, looking at how you can do things more granularly at a route level. But it focuses very heavily on giving you a minimal time to interactive. So the idea here is: Push the minimal functional code for a route, Render that route, Pre-cache the remaining routes, and Lazy-load routes on demand as needed. Again, lots of nuance here, but we do have a guide on that that you can go and check out. Now with Webpack, it's possible to do something a lot like PRPL using require.ensure or System.import with an async getComponent in React Router. And there are a few different options here. So Sam talked a little bit about the differences between preload and H2 push. Let's unpack some of the ideas there. So link rel=preload, if you haven't used it before, is basically a declarative fetch directive. In human terms, it's a way to tell the browser to start fetching a certain resource, because you as an author know that you're probably going to need it.
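In markup, a preload hint for a route chunk you know you'll need soon might look like this (the chunk file name here is hypothetical):

```html
<!-- Tell the browser to start fetching this route's chunk now,
     with the right priority, without executing it yet. -->
<link rel="preload" href="/static/profile-chunk.abc123.js" as="script">

<!-- Later, when the route is actually requested, the script is
     already in the cache and can execute immediately. -->
<script src="/static/profile-chunk.abc123.js" defer></script>
```

The `as="script"` attribute matters: it lets the browser apply the correct request priority and reuse the response for the later script request.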
Some people have done really interesting experiments here where they've used things like their Google Analytics data to decide what routes should get preloaded based on the navigation paths of their users. But with Webpack, you can use things like the assets-webpack-plugin in order to wire the chunks that are generated at build time up to your markup. There's more you can read about link rel=preload. I believe housing.com may have mentioned some of their experience with preload earlier today as well. If you're exploring HTTP/2, there's a really violently named plug-in called the AggressiveSplittingPlugin. I'm not sure why it was called that. But this is another option for going a little bit more granular with the chunks that you're using. Nuance, again: different JavaScript engines might treat the way that you split things up differently. There are going to be cases where, in fact, shipping a larger chunk will just mean the engine is able to stream that JavaScript in and parse and compile it a little bit faster than you going and fetching yet another chunk. So know that this exists. Try it out if you're interested in the idea of HTTP/2 with Webpack. But nuance, once again. Now, another piece of interesting data that came back from my research is that code splitting itself does not solve everything. In fact, I focused on the apps where people self-identified as code splitting. What I found was that they were interactive in 9.8 seconds. So definitely not where we thought they would be. We expected them to be a little bit closer to those Flipkart and housing.com numbers. What I discovered after profiling them in slightly more depth was that a lot of folks were shipping down chunks for a route that were 600, 700, 800 kilobytes, in some cases 1.2 megs of script. And then they were lazy loading even more right after the fact for some crazy reason.
But this is something I don't entirely blame people for, because our current tooling doesn't do an amazing job of highlighting these issues. It doesn't really put performance in your face. So ask yourself what's in your bundle. I think it's very, very easy for us these days to npm install the entire world. It's very easy to include more modules than we necessarily need when we're shipping down code for routes. But I thought that maybe it would be interesting to see what we could do about this at a Webpack level. So I put together an RFC for an idea I call Webpack performance budgets, or Webpack performance insights. And Sean Larkin, who's in the audience over there, has actually been helping me with this. And I thought that it would be interesting to give you a preview of what we think could be a better way of highlighting some of these performance issues earlier on in your development process. So here is what the output you'd normally get with Webpack looks like today. I've got a build here with almost two megs of script across two of these bundles. And as a user, if I'm not really that familiar with web performance, I don't know that there's an issue here that I need to solve. These numbers are quite large on purpose, but the point is that maybe Webpack could tell me I have an issue. So we looked at implementing the proposal that I put together, and this is what it looks like. You go and run Webpack on your project, and it includes this output for you. Let's try to unpack some of the ideas that are here. So the first thing it does is tell you if you have particularly large chunks in your bundle. You'll see at the very top, instead of just listing all of our different JavaScript output in green, it's highlighted in yellow the chunks that are particularly large and cross a specific performance budget that Webpack defines by default.
If it notices that you're doing that, so in this case, I've actually customized this a little bit and said that the maximum size for a chunk is 100 kilobytes, it's going to warn you and say, this is an issue. It can also highlight large entries. So this is about defining what budget you're crossing for an entire route or an entire view, because you might easily have multiple chunks that compose something, and you don't wanna be one of those people shipping down a meg of script if you don't need to. So large entry tracking is gonna help you with that. And finally, at the moment in this proof of concept that we've got, we also highlight patterns. So if we see somewhere where we think you're gonna benefit from doing something like code splitting, using require.ensure or System.import, we'll tell you about it. Now, again, this is a very early proof of concept. We've just been hacking on it over the last couple of days, but I think that we have an opportunity to work together with tooling vendors like Webpack to try solving some of these performance issues together in a meaningful way that will hopefully end up giving users better time-to-interactivity scores. So something you might also be wondering is, can I configure this stuff? And yes, you absolutely can. Using the performance object, you can set the maximum asset size, the maximum initial chunk size, and turn those hints on or off. There's a preview available today that you can go and check out. At this point, with all of the UX you've seen, you might think that that's a really long report in your CLI, but we'd welcome people to try out the proof of concept we've got today and let us know if it helps. Let us know if you've got any feedback on the UX at all. I think that this is just the beginning. Size alone is just one aspect when it comes to script loading performance.
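A configuration along the lines described might look like this. The option names follow the proposal as previewed here and may well change, so treat this as a sketch rather than a stable API:

```javascript
// webpack.config.js (fragment) -- performance budget hints as
// previewed in the proposal; option names are subject to change.
module.exports = {
  // ...entry, output, module/loader config as usual...
  performance: {
    hints: 'warning',         // or 'error' to fail the build, false to disable
    maxAssetSize: 100000,     // warn when any single emitted asset exceeds ~100 KB
    maxEntrypointSize: 300000 // warn when an entry point's combined assets exceed ~300 KB
  }
};
```

The idea is that the budget lives next to the build itself, so a chunk quietly growing past the limit shows up as a warning in every developer's CLI rather than in a postmortem.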
There are also things like parse and eval times, execution times, and so on. There are interesting opportunities for us to use this as a baseline for building up more tooling that then benefits all Webpack users. I'd love to explore at some point in the future what things like code coverage could even mean for these experiences. So that's our first preview. Please go check it out and let us know what you think. Now, another thing I wanted to recommend: there's gonna be a point where you're optimizing your progressive web app and you hit a point where you can't optimize the size of React DOM any further. And something that I found is actually really great for just swapping in is Preact, which is a much smaller, roughly three kilobyte alternative to React with the same ES6 API. I believe Jason Miller, who worked on Preact, is in the audience, so thank you, Jason. And a lot of the traces that I've done of Preact apps, again on a real device with a real network, are showing them interactive in under five seconds. I was taking a look with Source Map Explorer. It's a nice tool, a little bit like the bundle analyzer tools that Sam was showing in his talk; it gives you something very similar. So this is what my dependency graph looks like when I have React in place, on the very right. So lots of stuff going on. When I switched over to using Preact and preact-compat, this changed quite significantly. This is with almost the same API. I did run into one or two bugs, I will say that, and Jason kindly fixed them very, very quickly. But this is definitely something to consider: if you've tried optimizing your app down and you're still finding a bottleneck, Preact is definitely worth checking out, especially if you care about keeping your time to interactivity small. Setting this up with Webpack is actually quite trivial.
You can use resolve aliases to map react to preact-compat, and react-dom to preact-compat too. Definitely worth checking out. Now, in previous years, Jake has talked a lot about offline and the benefits that you get from instant loading using Service Worker. And I'd like us to consider layering our apps so the network is an enhancement a little bit more. When you do this, you're able to give your users those almost instant experiences on repeat visits, and you crush your time to interactivity. In this case, this is housing.com. On first visit, on a 3G network, on a real phone, they're getting content on the screen in 3.5 seconds. On repeat visit, it's almost instant, under a second. And the amount of script and everything that they were trying to load up initially is no longer an issue. That's already cached using the Service Worker Cache API, and they're able to get interactive really quickly. So definitely something worth taking a look at. A lot of the time when we talk about progressive web apps, we talk about the application shell model, which is this idea of caching your shell and loading in content using JavaScript. There are many different variations of this pattern; this isn't the only one. But if you're trying to get Service Worker caching in place, I highly recommend the sw-precache-webpack-plugin. This will integrate with your Webpack build process. It'll generate a Service Worker that pre-caches your static assets, like your application shell, and it generates a hash of all your file contents as well. It bakes in a lot of best practices for you out of the box. It's worth checking out if you've tried writing vanilla Service Workers and found there's a little bit of boilerplate there; it just helps you with the rest of your workflow. Jeff is going to talk a little bit more about sw-precache and sw-toolbox in his talk. Now, another thing that Lighthouse tries to highlight is progressive enhancement.
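Putting both of those ideas together, a webpack config might look something like the sketch below. The cache id, file paths, and globs are illustrative, not taken from any real project:

```javascript
// webpack.config.js (fragment) -- swap React for preact-compat and
// generate a pre-caching Service Worker. Names/paths are illustrative.
var SWPrecacheWebpackPlugin = require('sw-precache-webpack-plugin');

module.exports = {
  resolve: {
    alias: {
      // Any require('react') / require('react-dom') in the app now
      // resolves to the ~3 KB preact-compat shim instead.
      'react': 'preact-compat',
      'react-dom': 'preact-compat'
    }
  },
  plugins: [
    // Emits a Service Worker that pre-caches the static assets the
    // build produces, so repeat visits load from the Cache API.
    new SWPrecacheWebpackPlugin({
      cacheId: 'my-app',            // hypothetical cache id
      filename: 'service-worker.js',
      staticFileGlobs: ['dist/**/*.{js,css,html}']
    })
  ]
};
```

Because the alias is applied at bundle time, your application code and third-party components keep importing `react` as usual; only the resolved module changes.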
And I think that this is one of those super contentious topics. Luckily, I'm on stage, so I can't look at Twitter in any shape or form to see people's opinions on PE. But I do like this idea of supporting all of your target users using progressive enhancement, and trying to target all the people that are in your market so that your app at least works for them. I think that progressive enhancement has evolved over the last few years as we've gotten support for better primitives like Service Worker, so that instead of necessarily optimizing for people that have JavaScript disabled, you're optimizing for network resilience. So if you're using patterns like PRPL, and again, PRPL isn't the solution to everything, you can end up shipping so little code to users to get them useful that maybe things like server-side rendering aren't as beneficial or as necessary in those places as you might expect them to be. However, as Flipkart are going to talk about a little bit later, there are still benefits to things like server-side rendering for SEO bots. And there are places where you might need to get content on the screen quicker. For those cases, React supports this idea of server-side rendering, or universal JavaScript rendering. It also has a really good story around things like universal data flow and data fetching. So React provides you this method called renderToString for rendering markup on the server as part of its story around universal JavaScript. And it's this idea of: you ship down your HTML, then you hydrate as soon as React and the rest of your components have loaded up, attaching event listeners and so on, so that the person can actually interact with the app. So React has got a good story around this. This stuff is actually not too difficult to get set up, as demonstrated by folks like Celio who are using server-side rendering with React.
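In broad strokes, the server and client halves of that setup look something like this. `App` is a hypothetical component, and the markup around it is a sketch, not a production template:

```javascript
// server.js (sketch) -- render the app to an HTML string on the server.
var React = require('react');
var ReactDOMServer = require('react-dom/server');
var App = require('./app'); // hypothetical root component

function handleRequest(req, res) {
  // Synchronous: this blocks for the duration of the render,
  // which is exactly the time-to-first-byte caveat with renderToString.
  var markup = ReactDOMServer.renderToString(React.createElement(App));
  res.send(
    '<!doctype html><html><body>' +
    '<div id="root">' + markup + '</div>' +
    '<script src="/bundle.js" defer></script>' +
    '</body></html>'
  );
}

// client.js (sketch) -- rendering over the same server-produced markup
// reuses it and attaches event listeners ("hydration"):
//
//   var ReactDOM = require('react-dom');
//   ReactDOM.render(React.createElement(App),
//                   document.getElementById('root'));
```

Until that client bundle loads and runs, the user is looking at markup with no event handlers attached, which is where the caveats that follow come in.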
However, universal JavaScript has got issues. I don't think this is something that's talked about enough in the community. I think it's something that we can probably share more data on, definitely. It's very, very easy to get stuck in the uncanny valley when you're server-side rendering, where your user is able to see content, they can tap around it, but they can't actually really do anything, because they're still waiting on the rest of your JavaScript chunks and modules to load up in order to attach those event handlers. renderToString has also got issues around being synchronous, so it can affect things like your time to first byte. Streaming server-side rendered React can actually help here, and I'd recommend checking out projects like react-dom-stream. renderToString can also monopolize the CPU and waste resources when it comes to re-rendering components. Component memoization can help there, so take a look at things like react-ssr-optimization and other projects that try to help with this stuff. But don't consider things like universal JavaScript or server-side rendering with React a given solution that's going to be fast. Once again, there will be nuance here, and it's important to measure. If you'd like to learn more about any of the stuff that I've been talking about, I recently published a series of articles called Progressive Web Apps with React, and you can go and check those out. But I'd like to invite to the stage Avinav, who is going to talk about Flipkart's experience shipping production progressive web apps with React at scale. Thanks, Addy. So I'm Avinav Rastogi. I'm a developer on the web team that built Flipkart.com. I spent most of 2015 working on Flipkart Lite, a cutting-edge mobile progressive web app that some of you may have heard about in recent times.
And this year I've been working mostly on leading the team bringing that PWA goodness to the desktop side. So let me introduce you to Flipkart. Flipkart is the largest e-commerce site in India, and a first-class progressive web app across all form factors and browsers, and by that I mean across mobile and desktop. We got the opportunity to showcase our new mobile website at Chrome Dev Summit last year, and this is what it looks like now on the slide. It's virtually indistinguishable from our native app, both feature- and design-wise. So Alex tweeted this morning that for all of us coming from desktop to mobile, a change in outlook is crucial. Mobile is much less forgiving. And I wholeheartedly agree with this. Luckily for us, we were going from mobile to desktop, so we carried our learnings along, and this is what our desktop site looks like now. So let me quickly go over the kind of technologies that we are using to build this. At a very high level, we are using a combination of React, React Router, and Flux or Redux on mobile and desktop respectively, with Webpack to bundle it all together, along with a bunch of other technologies that help us build this and pack it together. That includes ES6 and the latest JavaScript features, fetch, promises, and Node on the back end. So let me talk a bit about the architecture. At a very high level, both the mobile and desktop sites have a very similar architecture. Let's see what that is. We use route-based code splitting on both. We have smart preloading of chunks, and we implement the concepts of PRPL, which we have heard about. We have partial server-side rendering and a concept of build-time rendering. And we have, obviously, Service Workers for caching different kinds of resources. But an important thing to keep in mind is that the implementations are different based on the requirements.
There are significant differences in how you need to treat mobile and desktop users. The requirements are different. The user behaviors are definitely different. The attention spans are different. Network conditions are definitely different: your mobile user will likely have a flaky network, 2G or 3G, while desktops tend to have a more stable and faster connection. Device capabilities are very different, as Alex mentioned yesterday. And browser fragmentation and distribution, of course. For example, in India, the browser distribution on mobile is such that UC Browser takes a fair chunk of the pie, a majority chunk. But on desktop, it's the latest version of Chrome which takes the majority chunk. So in how you approach development, and which one you target first, you have to take the least common denominator: you solve for the one which is probably going to cause you the most problems, and you build up on top of that, supporting more and more features, treating things like network and CPU as a progressive enhancement. So let us look at the differences in implementation, like I pointed out. On the mobile site, we have a concept of build-time rendering, which essentially means that we build the app shells out of our code and create static HTML files, which we serve to the user when we get a request. So there is no request-time processing needed; it's a simple file. We have a Service Worker in place which caches that shell, and obviously, after that, it can work offline-first. And our mobile site is a composition of multiple single-page apps, which I will talk about in a bit. On the other side, on the desktop, we have partial server-side rendering. That means we try to optimize what content needs to be rendered on the server. We don't have a concept of build-time rendering, and we don't have a concept of app shells. Now, the reason for this is simply the user's requirements and the user experience.
I feel, and that's what we feel at Flipkart, that the user experience of an app shell can work really well on a mobile device, where you can show a header, a footer, maybe a loader, and some content. But on a desktop, showing just a header and a loader still leaves you with a pretty big blank page. It's not a very good experience. So therefore, we went for a partial server-side rendering approach. Apart from that, we have a chunked response for our first request, the HTTP response, which allows us to achieve a faster time to first paint. I will explain that in a bit. And we use a Service Worker for caching other things, like data and resources such as images. So here is the output from a Webpack build. Webpack supports code splitting out of the box, like Addy was just mentioning, and it figures out the split points based on how you include your components. It also takes care of loading the appropriate chunk when needed, for example, on navigation. The benefit here is that you significantly reduce the amount of JavaScript that you need to render the first fold of your page. For example, in the screenshot that I've put up here, the combined build that we had for our website at some point in time was around 206 kilobytes. With code splitting based on routes, we were able to split it up, so the home page only needed 32 kilobytes of JavaScript to render, and similarly, other pages needed anything from seven kilobytes to 100 kilobytes. This really helps a lot. But there is an important caveat here. As I said, out of the box, Webpack will try to load these files on navigation. When the route changes, it figures out, okay, this is the route, the JavaScript is not present, and it has a map somewhere which tells it, okay, load this JavaScript file. Which means it is downloading, parsing and evaling that JavaScript after you have clicked on a link, which is a very bad user experience.
So to solve that, PRPL comes to the rescue. Implementing these concepts of chunking, streaming and code splitting, you get a picture which looks like this. The first one at the top is what we saw before all these improvements. You've got your HTML parsing in blue at the top, and all your static resources, JavaScript and CSS, start loading only once the HTML is parsed, and you get a render time of around 2,500 milliseconds and a page complete, a DOM complete, around 3,500. With these optimizations in place, you get a first paint of around one second, with your resources loading in parallel to the parse of your HTML. This is achieved using things like preload, script defer, and similar techniques. But this only solves for first paint. What about time to interactive and meaningful content? So we think that your entire content doesn't need to be rendered together for it to be meaningful. For example, what we do is our first paint, the first render that we put on the user's page, contains the search box, and it functions without any JavaScript, which means the user is able to interact with the plain HTML that we serve to them, which gets rendered even before any JavaScript has started downloading. Since a lot of our users start their journey by searching, rather than navigating and looking for products on that page, this really helps us a lot. So some major wins for us from this migration, this adoption of progressive web app concepts on both desktop and mobile: route-based code splitting amortizes the high cost you have of single-page apps and frameworks over the session of the user. You don't load all the JavaScript upfront; you load it across the session. Similarly, smart preloading of those chunks, using the PRPL concepts, makes the experience seamless. The user doesn't have to wait after clicking on a link for the JavaScript to load.
Thirdly, chunked encoding allows us to download JS chunks while the HTML is still being parsed. An interesting approach we took, based on the user requirements that we figured made sense for users in India, was to solve for repeat visits on mobile specifically, and for first visits on desktop. Of course, we care about both on both platforms, but we decided to focus on one over the other. Let me talk about the impact now. We saw up to 2x conversion during sale events after we migrated, because of the high speed and reliability and the benefits of progressive web apps we have talked about, and we have a significantly reduced bounce rate. Interestingly, a lot of people have had concerns around search engine optimization. How will the crawler crawl the website? What's the impact on SEO? After doing all this, we have seen a 50% reduction in the time taken by search engines to crawl our pages, and a 50% increase in the number of pages crawled by Google Search. That's a significant improvement. Apart from that, we have also seen a massive 70% reduction in the tickets that are raised, the issues that we get on the website. There are fewer errors in general. Plus, it's much easier and faster to develop, it's more developer-friendly to get new developers on board and fix errors, and it's easier for us to maintain. Of course, there are a bunch of gotchas. Webpack has been a super useful tool for us, that's what we use, as I mentioned, and its documentation is going through some very well-deserved improvements. So working with PRPL and code splitting, you're bound to run into a bunch of interesting issues. And Webpack does provide a lot of help to solve them, but some of it is buried really deep in the documentation; you have to really search for it. And mostly, you find the answer on Stack Overflow before you find it in the docs. So the first issue we ran into was cross-origin resource sharing and route-based code splitting.
So an interesting thing happens, which might be true for a lot of us here: our JavaScript files and static assets are generally served from a CDN, which is on a different origin compared to your website. Now, when you do a link rel=preload, you can tell it to load the resource as a script, and you can mark it as cross-origin, anonymous. So you can declare that it's loading as a cross-origin resource. But when Webpack tries to load a script, like we mentioned, based on the chunks, when it sees it needs a new JavaScript file, it will by default not load it as a cross-origin script, and your browser may end up blocking it, which causes quite a lot of headaches. Helpfully, Webpack does provide a config option you can specify which makes it load those chunks as cross-origin scripts. It takes care of that internally. A second one: as we know, cache invalidation is a very big problem, apart from naming variables. When you create a chunk, usually for long-term caching purposes, the name of the chunk, the file name, will contain a hash, right? That's how you determine whether a file is a newer version, whether the content is new. Now what happens is that when Webpack creates these chunks, it needs to maintain a lookup table in your entry chunk, which is loaded at page load. It needs to know that when this route is opened, this is the JavaScript file it needs to download. Now that URL, that file, is gonna change at some point in time. So for example, you have route-based chunks, like I mentioned before: you have 15 routes on your website, and you have those 15 JavaScript files correspondingly. Now suppose one of them changes, say you make a change to one, like the product details page. Ideally, only that one JavaScript file, that chunk, should get invalidated in the cache. Only that should need to be downloaded again by the user.
Others should still be served from the Service Worker or the HTTP cache. But what happens is that because that chunk has changed, its file name has changed, so the manifest, the lookup table in Webpack's entry chunk, will also change, which means the entry chunk will change, which means the user ends up downloading extra JavaScript which has not actually changed. For that, Webpack provides a thing called the Webpack manifest. It's pretty simple. In the CommonsChunkPlugin, you just define a name for the manifest, and you end up with a separate file, around 500 bytes or so, which will just have that lookup table. And your entry chunk becomes independent of the content of your other chunks. So it's these kinds of small things which we ran into, and a lot of you may run into, when you are implementing these kinds of things. So what's next for us at Flipkart is making things faster. We are looking into things like HTTP/2 for enabling smart push of these resources. We are also working on AMP to make the first visit faster. So that's all from my side. You can reach out to me on this Twitter handle or to my team at Flipkart.com. It's great to be here. Thank you. So I've got one more thing. I'd like to tell you a quick story. I don't have a lot of time, but I'd like to tell you a quick story about how a small group of us got to write some code for NASA. So a while back, a few years ago, NASA released a master list of software projects they've cooked up over the last couple of years. This is more than just stuff you run on your personal computer. It's apps that would help with robotics and cryogenic systems and space simulations and all sorts of things. And they had these in a bunch of different places: GitHub, GitLab, SourceForge. It was all over the place. But it was all part of a government initiative to try open sourcing more stuff, and it was kind of neat to see.
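Both of those gotchas, cross-origin chunk loading and the entry-chunk manifest, come down to a couple of lines of webpack config. A sketch of what that might look like (file name patterns are illustrative):

```javascript
// webpack.config.js (fragment) -- fixes for the two gotchas described:
// CDN chunk loading across origins, and long-term caching of chunks.
var webpack = require('webpack');

module.exports = {
  output: {
    // Make webpack add crossorigin="anonymous" to the <script> tags it
    // injects for lazily loaded chunks, so a CDN on another origin works.
    crossOriginLoading: 'anonymous',
    // Content hashes in file names enable long-term caching.
    filename: '[name].[chunkhash].js',
    chunkFilename: '[name].[chunkhash].js'
  },
  plugins: [
    // Extract the chunk lookup table into a tiny separate "manifest"
    // file, so the entry chunk's hash no longer changes every time
    // any route chunk changes.
    new webpack.optimize.CommonsChunkPlugin({
      name: 'manifest',
      minChunks: Infinity
    })
  ]
};
```

With the manifest split out, a change to one route's code invalidates exactly one hashed chunk plus the few-hundred-byte manifest, and everything else keeps being served from the cache.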
So off the back of that, NASA released a site called code.nasa.gov that looks a little bit like this. The idea here was that at any time you could come to the site and take a look at what NASA engineers were hacking on in the open, which is kind of cool. But I discovered this on Hacker News one day, and my friend Sam Saccone also discovered it around the same time. And we tried looking at this on a real device, and it basically crashed my phone. We ended up profiling this a little bit, and there were a number of interesting quirks with this particular implementation. It kept the main thread pegged for quite a long time. In fact, we ended up working on a number of performance audits; there's actually a performance audit that I'll be publishing shortly on this whole thing. But we ended up trying to make this existing implementation as fast as we could. This was an Angular 1 app, and at that time, that framework wasn't really built with real mobile devices in mind, and we ran into all these interesting issues, like digest cycles taking up to a second. This particular app had 10,000 watchers, for some reason. They had a GitHub embed for every single entry, so they had 300 or 400 projects listed on this page, and a GitHub embed for every single one so that you could go and fork the project. So that was an additional 300 or 400 network requests, for lulz. They also had a ton of web fonts and other interesting issues that I don't think are atypical; if you were new to this stuff, you'd probably run into some similar problems. And so we started optimizing this as much as we could, but we reached a point where we thought, this just isn't worth it. It's probably worth taking a look at rewriting this thing.
And I know that today I've been talking, we've been talking quite a lot about React and Preact and other libraries, but I like this idea of best practices being automated. I think that some of the ideas we've talked about today, around PRPL and code splitting and so on, are things that we can do a better job of building in, by default, into today's tooling. I'd love to get to a point where things like Create React App and Angular CLI and Ember CLI and so on, Next.js, whatever it is that you happen to be using, are considering some of these approaches and looking at where they can provide real improvements to developers, so that we balance developer experience with user experience. So Polymer does this kind of well with the Polymer App Toolbox. I consider it a good reference for how to do this stuff. Sam and I think Taylor mentioned some of this stuff. It's got PRPL with code splitting built in, and lazy loading and offline caching and support for H2 server push. But using the Polymer App Toolbox allowed us to actually ship a completely brand new version of code.nasa.gov. This is NASA's very first progressive web app, and we deployed it last night. Thank you. I've got to give big props to Frankie over on the Polymer team and Keanu, Hannah Lee and all the folks that helped us get this shipped. But basically everything here is faster. Here we were looking up, as you would, code for the Apollo 11 mission from all those years ago. Looking up ways in which NASA would publish projects or even share projects with other people. All of these views on a real mobile device perform really well. It's a massive improvement from what they had before. We spent a lot of time on things like making sure that the infinite scrolling for their project list view was really, really fast, so hitting 60 frames a second. And this experience works really great on desktop as well.
So the experience there is, again, it's responsive. We can see the list there, and you're able to search things really, really quickly; there's no lag in place. All of the views work just as well there, just showing you a slightly different look and feel. But we profiled this using Lighthouse on a real device with a real network, and this thing was interactive in under four seconds, so under 4,000 milliseconds. We were really happy with that, because we actually spent less than a week redoing this site. It's not a complex site by any means, but the idea that you could completely throw away an old code base and try exploring something like the PRPL pattern in such a short amount of time with a very small team was, I thought, kind of cool. So we really enjoyed hacking with NASA on that site. And I encourage you to contribute to code.nasa.gov. Just being able to tell your mom that you hacked on NASA code is kind of neat. So that's always an opportunity. But it's all open source. This entire app is open source. You can go and check it out on NASA's GitHub organization, at github.com/nasa/code-nasa-gov. I am certain we will get pull requests from folks mentioning things we've done wrong, but I welcome all of those. So please feel free to check that out and let us know if there's anything we can improve. In closing, I hope that some of the ideas in this talk give us inspiration to perf the web forward together, because we're all in this together. I see browser vendors as being in a good place to tell you about the engine and the performance targets we should be hitting. I see framework authors and tooling vendors as being people that, you know, ideally want to make sure that developers are able to ship the right experiences that benefit their users. So let's work together. If you're working on any of this stuff, please talk to me. Please talk to us.
And let's, you know, let's move things forward together. Thank you. Right. So. Yes. Right. Okay. Awkward. Hmm. That whole mask-repeat thing in the quiz, turns out that exists. Does exist. However, in our defense, it's WebKit prefixed. So we've decided it doesn't count in Chrome. It's actually, yeah. So a good shout out to Unicravits for spotting that. But, you know, we like to think that basically we thought it up and the spec writers were so impressed that they got it in there really fast. Yeah. That is pretty much it. I was at TPAC, which is the big W3C meetup where a lot of spec writing happens. And it's a terrifying place. Like, I was just walking around, you know, after the service worker meetings, and I walked into one room and I just heard someone say, "I think we really need a child-piercing API." And I was like, nope, I'm done with this room. Was that where you met Sir Tim Berners-Lee? Yeah. Because that would terrify me if I met him. Because he's the inventor of the web. Right, we're running behind. Yes, we are. Should we get the next talk? We really should. And it's Jeff Posnick. Jeff Posnick. And he's here to talk about tools and libraries for progressive web apps. A big welcome to Jeff Posnick. Yes. Hi, everyone. I'm Jeff Posnick from Google's Developer Relations team. So I'm happy to be here today talking about the tools that can help provide the foundation for your service worker, which is a key piece of any progressive web app. And we'll be covering both existing tools as well as providing a preview of the next generation of service worker libraries. So you might be asking yourself, why use tools to build your progressive web app? Many folks have seen the canonical service worker code samples that turn up in articles and getting started guides out there. It can be tempting to just copy, paste, and then ship that code into production. Now, I don't want to sound too negative here.
I'm personally responsible for writing a lot of those early service worker code samples, and they're very useful for learning the basics of the service worker API. So I don't want to say that they're not useful. But there are certain things that you need to worry about, and pitfalls, when using those code snippets in production. So for instance, here is a common code pattern for pre-caching a set of resources during your service worker's installation. The URLs to be pre-cached here are stored in an array, and you need to increment your version variable whenever you decide that you need to change one of those resources, if you want to trigger the service worker update. But in a production web app, if you are following modern best practices, you won't be loading in this list of friendly file names in your urls array. You're much more likely to have file names that have some sort of unique hash based on the contents of each file. So those unique file names then need to be in your urls array. And also, what about that brand new image that you added to your page but forgot to include in your urls array? And what about scenarios where you have an existing file, like that index.html, but you make a change and then you forget to update the version? These are all some common pitfalls you might run into. And these pitfalls extend to the code that you use to clean up your pre-cache entries, which is something that normally happens inside the activate handler in a service worker. So you can see the code snippet here used to delete old pre-cached content whenever a new service worker activates. But what if your service worker updated because of just a small change to a single file? Why throw away everything else that was previously pre-cached instead of reusing the entries that haven't changed? And finally, you may have seen samples like this of a fetch handler that performs runtime caching. And in this case, it's using a cache-first policy.
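The hand-maintained pre-caching pattern described above can be made concrete with a small sketch. This is not the slide's exact code; it uses a tiny in-memory stand-in for the Cache Storage API so the logic can run anywhere, and all the file names and versions are made up:

```javascript
// A sketch of the hand-maintained pre-caching pattern described above, with a
// tiny in-memory stand-in for the Cache Storage API. The pitfall to notice:
// both CACHE_VERSION and urlsToPrecache must be edited by hand every time any
// file changes, or users keep getting stale content.
const cacheStorage = new Map(); // stand-in for the browser's `caches`

const CACHE_VERSION = 'precache-v1'; // must be bumped manually on any change
const urlsToPrecache = ['/index.html', '/app.js', '/styles.css'];

// Equivalent of the install handler: open the versioned cache, add all URLs.
function onInstall() {
  const cache = cacheStorage.get(CACHE_VERSION) || new Set();
  urlsToPrecache.forEach(url => cache.add(url));
  cacheStorage.set(CACHE_VERSION, cache);
}

// Equivalent of the activate handler: delete every cache that isn't the
// current version, throwing away still-valid entries after a one-file change.
function onActivate() {
  for (const name of [...cacheStorage.keys()]) {
    if (name !== CACHE_VERSION) cacheStorage.delete(name);
  }
}

cacheStorage.set('precache-v0', new Set(['/index.html'])); // a stale old cache
onInstall();
onActivate();
console.log([...cacheStorage.keys()]); // only 'precache-v1' survives
```

Notice how `onActivate` deletes the whole old cache even though `/index.html` in it may be byte-identical to the new copy; that's exactly the wasted re-download the talk is warning about.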
The problem with that fetch handler is that the code adds to the runtime cache but never cleans up entries. So you may not realize it, but the Cache Storage API just ignores the cache expiration HTTP response headers. When you're dealing with the Cache Storage API, it's not going to look at the max-age and things like that and automatically expire entries for you. So you can easily clog up every user's device with large resources that were needed for one specific page maybe a month ago, but are never going to be used again. So hopefully I've opened your eyes to a few of the things you want to avoid. But on the flip side, let's talk about what you should implement when you're building a production-ready service worker. So first, your service worker should use an asset manifest generated based on the actual files that you're deploying to your site, including whatever fingerprints might be in the file names and things like that. That manifest should determine which resources get pre-cached, rather than a hardcoded list. And this means that you don't have to worry about leaving out crucial files that you added late in your build process and forgot to update the service worker to pick up. Second, the asset manifest should keep track of the hashes or fingerprints of each file for you, and make sure that only files that have changed are invalidated and downloaded again. So anything that hasn't changed should be kept around rather than being thrown away. And this means you never have to worry about remembering to bump that version variable each time you change a file. And it also means that your activate handler isn't going to throw away content that's still valid and useful. Finally, you need a least-recently-used expiration policy for your runtime caches. You might use a maximum number of entries, or you might want to have a maximum age for the entries, or you might want to have a mix of both.
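An expiration policy like the one just described can be sketched in plain JavaScript. The class name and the numbers here are illustrative, not from any shipped library; the point is the bookkeeping the Cache Storage API won't do for you:

```javascript
// A minimal sketch of a least-recently-used expiration policy: entries are
// evicted when the cache exceeds maxEntries, or when an entry outlives
// maxAgeSeconds. A Map's insertion order doubles as the LRU order.
class LruExpiration {
  constructor({maxEntries, maxAgeSeconds}) {
    this.maxEntries = maxEntries;
    this.maxAgeSeconds = maxAgeSeconds;
    this.entries = new Map(); // url -> last-used timestamp, oldest first
  }

  // Record a use: re-insert so the entry moves to the "most recent" end.
  touch(url, now = Date.now()) {
    this.entries.delete(url);
    this.entries.set(url, now);
    this.expire(now);
  }

  // Drop anything too old, then trim the least recently used entries.
  expire(now = Date.now()) {
    for (const [url, lastUsed] of this.entries) {
      if ((now - lastUsed) / 1000 > this.maxAgeSeconds) this.entries.delete(url);
    }
    while (this.entries.size > this.maxEntries) {
      const oldest = this.entries.keys().next().value;
      this.entries.delete(oldest);
    }
  }
}

const cache = new LruExpiration({maxEntries: 2, maxAgeSeconds: 60});
cache.touch('/a.jpg', 0);
cache.touch('/b.jpg', 1000);
cache.touch('/c.jpg', 2000); // '/a.jpg' is evicted: over maxEntries
```

In a real service worker you'd run this bookkeeping alongside `caches`, deleting the evicted URLs from the underlying cache as well; SW Toolbox does exactly this kind of accounting for you.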
And this prevents your caches from growing indefinitely and ensures that frequently used content is kept around, while assets that were only used in the past will be cleaned up. All right, so how can you make sure that all the production-ready checklist items are taken care of? Hopefully everybody wants to go out and build something right now. And not surprisingly, the answer is tooling. So despite that tongue-in-cheek text on the slide, I'm actually a big fan of using the right tools for a given job. And there are a few tools available today that we could specifically recommend for building a production-ready service worker. Okay, so first is SW Precache. It's a build-time tool that generates your service worker file for you. The generated file contains an asset manifest and install, activate, and fetch handlers that follow the best practices we've been talking about all along. There's also SW Toolbox, and this is a runtime library that extends the behavior of an existing service worker and specifically focuses on runtime caching strategies. So it implements a number of common strategies that you could use right out of the box, without having to write your own code or, more likely, go to Jake's offline cookbook and copy and paste that code. We've done that for you. So even if you don't realize that what you see in this diagram is a stale-while-revalidate strategy, you can make use of it right away. And SW Toolbox also takes care of cache expiration for anything that's added to these runtime caches. So SW Precache and SW Toolbox complement each other, and they can be used together to handle both the pre-caching and the runtime caching for your progressive web app. In fact, we've got a few projects that are preconfigured to use both of the libraries out of the box. First of all, we have Web Starter Kit, and it provides boilerplate for common web development scenarios.
SW Precache and SW Toolbox are baked right into the build process that it uses, making your new web app offline-first by default. Then there's Polymer Starter Kit, and it's another great jumping-off point for developers who wanna build their progressive web app with web components. And the service worker libraries are included here as well, so they ensure that the assets needed to render routes are loaded quickly and that everything works offline. But we know that not everybody's gonna be beginning from scratch with a starter kit, though. Many developers with existing projects are using a webpack-based build process, and SW Precache works for them too. A member of the community, Will Farley (shout out to Will), is kind enough to maintain the SW Precache webpack plugin, and we appreciate his hard work there. In fact, Lyft talked a little bit about their new PWA; this is what they're using under the hood to generate their service worker. And we know there's folks out there who like doing everything using command line tools, either manually or wrapping them in npm scripts to kick off a build process. So we've got you covered as well. SW Precache also has a command line interface that can be used to hook into these build processes, and you can trigger it as part of your normal build setup. So I've barely scratched the surface of what SW Precache and SW Toolbox can do or how to configure them. But I'd encourage folks to visit our new service worker libraries landing page to learn more. We have a ton of examples there, links to some previous videos that we've done, and articles that you can read talking about best practices. And we've talked about these libraries a lot, but rather than just taking my word about the value, I wanted to dive into a real-world deployment that you might have heard of: the Washington Post progressive web app. So let's see how they're using SW Precache and SW Toolbox. We're looking at a very lightly edited version of their entire SW Precache configuration.
Passing this configuration in during their build process is all it takes to generate the service worker file that they've deployed to actual production use. Let's dig into some of those specific settings to see how they're making use of them. First, they're using this option, staticFileGlobs, to define a list of patterns that match files in their build directory. And this is that alternative to hardcoding a list of URLs. Anything that matches these patterns is automatically pre-cached, and the cache entries are automatically versioned and kept up to date by the service worker that's generated. Next comes their runtime caching configuration. This is actually a way of using both SW Precache and SW Toolbox together very easily, and it tells SW Precache to automatically include SW Toolbox in the generated service worker file and to configure it based on the provided settings. So in this case, they define a URL pattern matching things that are going against their content API, and they apply a network-first strategy to those requests. They're also able to define that cache expiration that we talked about, which means that entries that haven't been used for a given period will be expired from the cache instead of just building up over time. And they're also pulling in some additional code that we haven't mentioned yet, and that's a library that automatically queues and retries Google Analytics hits that take place while the user is using the site offline. So they're using sw-offline-google-analytics, which sets up fetch handlers that automatically queue any failed Google Analytics requests using IndexedDB. And the failed requests are retried for up to a day, which is kind of an interval that makes sense for Google Analytics. And they're retried whenever the service worker starts up.
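A configuration along those lines might look like the sketch below. The globs, URL pattern, cache names, and file paths are invented for illustration; they are not the Washington Post's actual values:

```javascript
// sw-precache-config.js -- an illustrative sketch of the kind of
// configuration described above (all values here are made up).
module.exports = {
  // Everything matching these globs is fingerprinted and pre-cached, and
  // only changed files are invalidated on a service worker update.
  staticFileGlobs: [
    'build/**/*.{js,css,html}',
    'build/images/**/*.{png,svg}',
  ],

  // Tells SW Precache to inline SW Toolbox and configure runtime caching:
  // content API requests go network-first, with LRU-style expiration.
  runtimeCaching: [{
    urlPattern: /\/api\/content\//,
    handler: 'networkFirst',
    options: {
      cache: {
        name: 'content-cache',
        maxEntries: 50,
        maxAgeSeconds: 24 * 60 * 60, // one day
      },
    },
  }],

  // Pull extra code into the generated service worker, e.g. the offline
  // Google Analytics library mentioned above.
  importScripts: ['scripts/offline-google-analytics-import.js'],
};
```

Feeding this file to the SW Precache CLI or the webpack plugin during the build is what emits the final, deployable service worker.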
So the library preserves the original event time automatically for you, meaning that assuming the request is able to make it to Google Analytics once the device comes back online, the data has the correct timestamp attributed to it. So everything works as you would expect, and you don't lose the nuance about when an event actually happened just because the site was offline. All right, so you've seen a real-world example of using the libraries to generate a service worker file. But how do you confirm that your progressive web app is using the service worker effectively? If you've been listening to any of the previous talks, this is probably not a surprise: we've got a tool for that, called Lighthouse. And Lighthouse automatically tests for many of the things a progressive web app should do, including whether the service worker is behaving as expected. So this screenshot shows the Lighthouse Chrome extension interface run against the Washington Post progressive web app. You can see confirmation in the highlighted section that their PWA has a registered service worker and that it serves content even when the network is disabled. And Lighthouse will also generate useful performance metrics, allowing you to judge other stats that are very relevant to any progressive web app. So Lighthouse is a very useful tool, but I'm gonna hammer home a point that we've been making again and again: it's something to supplement testing on actual devices. So please don't use it exclusively, but also confirm the behavior of your service worker on real-world devices, on real-world networks. While I've been talking about the Washington Post up to now, they're not alone when it comes to production SW Precache and SW Toolbox deployments. So here's just a subset of partners that are using these libraries to power their service worker in production. And you can feel confident knowing that these libraries are ready to use and battle-tested.
But the flip side of being tried and true is that the libraries were originally written back in 2014. I'm sure there's some formula, equivalent to dog years but for JavaScript libraries, and by whatever metric, these libraries are in their distinguished elder statesman phase. It's getting on a little bit. But I wanna reinforce that they're not deprecated. We're not deprecating anything. Everything is still supported. And if you're already using them or starting a new project today, they really do remain the right choice. But best practices are always changing, and we recently started thinking about what a modern service worker framework would look like. We're still very early on, but we wanted to give everyone a sneak preview of what we're thinking about. All right, so I just wanna outline a few of our high-level goals for this project. Developers who wanna use just a small bit of functionality, like cache expiration, should be able to import just the code that they want by bundling in ES2015 modules with very little overhead. At the same time, developers who opt in to using all the functionality should feel like they're using a single coherent framework. We kind of wanted to get rid of that weird divide between SW Precache and SW Toolbox, where they feel like two separate projects. And most importantly, we have a strong set of features that we know developers need in production, and we wanna have parity with those features in our new offering. We don't want this to feel like a regression in any way. All right, so let's dive in a little bit to what we're thinking. Conceptually, we split the new framework into three layers: routing, runtime handlers, and request behaviors. So here's an overview of each of those. First up is our routing layer, which is responsible for setting up fetch handlers that respond to specific types of requests.
So we're envisioning built-in classes to handle regular expression and Express-style routes, similar to how SW Toolbox is currently configured for runtime caching. The Router class also lays the groundwork for more complex routing in the future, and we're excited to see what we could build on top of that. Going down one layer from routing is our runtime handler layer. This is a set of classes that implement common runtime caching strategies, like stale-while-revalidate or network-first. By default, they won't modify the outgoing requests, and they'll just use an appropriate cache based on the service worker's registration scope. But we wanted to provide a flexible way of opting into different behaviors to customize those defaults in a way that makes sense for your application. And that's where we get to the request behaviors layer. This is the innermost layer, and it allows you to configure the runtime handlers and take specific actions in response to one or more custom callbacks that the runtime handlers know how to trigger. To start with, we're thinking about three custom callbacks, and those are requestWillFetch, fetchDidFail, and cacheDidUpdate. So here's a quick look at how those callbacks fit into the request lifecycle. Prior to contacting the network, a runtime handler will trigger any registered requestWillFetch callbacks, and this allows the service worker to modify the request before it's made. If that network request happens to fail, any fetchDidFail callback handlers get triggered. And finally, if the handler was successful, you have a new response from the network and the cache gets updated. This is our opportunity to call any cacheDidUpdate callbacks. So let's take a look at some of the behaviors we're planning to implement on top of this system of callbacks and triggers. First, we're exploring a responsive image behavior that would be triggered during the requestWillFetch callback.
It could take the current device's capabilities into account and modify the outgoing image request URL accordingly, and do some pretty smart things in terms of what it actually loads. We're also thinking about a background sync queue, and the idea is that this would be triggered by fetchDidFail whenever there's a network request that fails. This would be a more general version of what we have implemented, and that I talked about a little bit earlier, for Google Analytics; it would just allow you to opt in to the same sort of behavior for your own types of requests. And additionally, it would probably make sense to use that new sync event that gets fired in the service worker for doing smart retries of those requests. I'd also like to take the cache expiration logic that is currently deeply embedded in SW Toolbox and make it available via a reusable behavior. This would be triggered by cacheDidUpdate: if something gets written to the cache, you'd be able to control how the cache is modified, cleaning up old entries and things like that. And we also have plans for another behavior triggered by cacheDidUpdate, and this one would use the new Broadcast Channel API, which is super cool, and it would let pages know when a previously cached resource has been updated. The client page could then take appropriate action, like prompting users to reload the article that they're reading and see the latest updates based on the new cache entry. All right, so what does it look like when all these pieces are used together as part of a new framework? Let's walk through the code sample. The first thing we're doing is setting up a request wrapper to configure our caching and also to configure the type of callbacks that are triggered. In this particular case, we're using the broadcast cache update behavior. Next, we're configuring a route to automatically apply a handler whenever a condition is met.
So here we're just checking to see if the URL ends in .json, and if there's a match, we'll apply the stale-while-revalidate handler using the wrapper that we just configured for the cache behavior. Finally, we take our route and we use it to configure a new router. And this also lets us set up a default handler to use for other requests that aren't explicitly matched by our route. So you can see here the entirety of the code snippet and how everything hopefully fits together as a cohesive framework. But we know that not everybody wants to opt into using a full framework, and we don't want to leave those developers behind. So here's an example of how you could write your own service worker code to manage caches, but just pull in the broadcast cache update behavior and manually use it, independent of those automatic triggers. So if you want to mix it into your existing code, that should work too. We'd also like the request handlers to work in a standalone environment, allowing you to write your own fetch event handler while still taking advantage of a canonical implementation of a given strategy. And you can see that here. So we're still at the very early stages of implementing these new libraries. There are use cases we plan on addressing but don't have as much to share about yet. That includes things like generating a manifest and doing a lot of the things that you currently use SW Precache for. And it also includes build-time tooling and ways of automatically integrating into the builds that you have today. So we are definitely thinking about those; we just don't have as much to share quite yet. That being said, if you're the adventurous type and you really want to dip your toes into the water of these new libraries, we do have some very, very clearly alpha-quality releases up on npm right now. Expect the interfaces to change leading up to the official release.
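Pulling that walkthrough together, an end-to-end sketch might look like the following. The module paths, namespaces, and class names here are approximations of those early alpha releases and are likely to change, so treat every identifier as an assumption rather than a stable API:

```javascript
// sw.js -- a hypothetical sketch of the alpha-stage framework walked through
// above. All module and class names are approximations, not a stable API.
importScripts(
  'sw-runtime-caching.js',
  'sw-routing.js',
  'sw-broadcast-cache-update.js'
);

// 1. A request wrapper: which cache to use, plus opt-in behaviors (here,
//    broadcasting a message whenever a cached entry is updated).
const requestWrapper = new goog.runtimeCaching.RequestWrapper({
  cacheName: 'articles',
  behaviors: [
    new goog.broadcastCacheUpdate.Behavior({channelName: 'cache-updates'}),
  ],
});

// 2. A route: any request whose URL ends in .json gets the
//    stale-while-revalidate handler, configured with that wrapper.
const route = new goog.routing.RegExpRoute({
  regExp: /\.json$/,
  handler: new goog.runtimeCaching.StaleWhileRevalidate({requestWrapper}),
});

// 3. A router ties it together, with a default handler for every other request.
const router = new goog.routing.Router();
router.registerRoute({route});
router.setDefaultHandler({
  handler: new goog.runtimeCaching.NetworkFirst(),
});
```

The layering is the point: the wrapper (behaviors) plugs into the handler (strategy), which plugs into the route, which plugs into the router, so each piece can also be used on its own.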
We're really just releasing this out there as a way of getting feedback from folks and giving the community a chance to play around with these in a non-production environment and let us know what the developer ergonomics feel like. And we want feedback, so please reach out to us. We have this kind of big GitHub issue open that details some of what we're planning on doing. Let us know what you think, what areas you'd like to see us focus on, or the areas that seem like they wouldn't work for you, and we're going to definitely take that into account. So last but not least, there's a great new service worker push messaging library that was just released by my colleague Matt Gaunt, and it provides a production-ready approach to using Firebase Cloud Messaging push notifications within your service worker. We unfortunately don't have time to dive into that right now, but Matt will be at the service worker breakout session later today for folks here live, and he'll be demonstrating a bit there. So thanks to everybody for their time, both in person and on video, and we hope that you'll take advantage of all these tools and build a production-ready service worker for your progressive web app. Thank you. Right, it's time for lunch, which means if you can be back here for 2 PM, that would be great. And if you have special dietary requirements that you've already told us about, on the ground floor in the corner, there's the bar thing over there; that's where your food will be. See you back at 2. See ya. We'll listen to the answer. So the ante has been upped. Indeed. Yesterday you heard Alex Russell mention, what was it, a sort of performance challenge about mobile devices, or as some people around the world call them, my phone. And he has very kindly donated three of these. It will be the worst phone you've ever owned. But we wanted to bring the mug and the phone to let you know not only has the ante been upped, but this is what the stakes are.
Were you to win the Big Web Quiz today. Shall we take a look at the leaderboard as it stands right now? Indeed, let's settle into that one. Ooh. Here it is. Ooh, and we've got some clipping because I've made a mistake again. I did check that. I checked it. You can deploy while I'm on stage. I think it's fine, I'll just do my talk. Yeah, straight to production. So, I mean, I have to say that there are actually a lot more people with 30 points, so there's a lot to play for. And I'm not sure what we'll do if there's a tie. We'll figure something out. Yeah, I'm just going to go with whichever order the leaderboard comes up with. It's a random enough way to do it. It's whichever way MongoDB does it. We just do a Math.random round where you get random points. Shall we do some questions? Yeah, well, we've got to separate that out, because we've got two people in second place and that's not good. Okay, here we go. Here comes the first question. So, to the nearest thousand, how many links are there in the HTML spec? 8,000, 28,000, 48,000, or 68,000? And the good news is I'm pretty confident you won't get to the spec and get the single-page version downloaded in the time that it takes for us to actually close the question. Now, it'll be funny if we fail to do this quiz because the Wi-Fi goes down, because everyone in the room is pulling down eight megabytes worth of documents. Make it go. Ooh, we've got some. All right, so it's a low-confidence one. So what are we seeing here? So, 28,000; not many people picked the two extremes there. So it's down between the 28,000 and 48,000. Let's reveal the answer. And the answer is 48,000. Some of the crowd happy. It's a big document, eight megabytes. Yeah, eight megabytes of documents. Turns out there's quite a lot of work involved in spec-ing HTML. Who knew? Right, oh, this, I like this one. This is a good one. Here we go. Does the following promise fulfill or reject?
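The code on the slide isn't captured in the transcript, but judging from the discussion that follows, it was along these lines (a reconstruction, not the exact slide):

```javascript
// A reconstruction of the quiz snippet: what happens when you resolve a
// promise with a *rejected* promise?
const p = new Promise(resolve => {
  resolve(Promise.reject(new Error('nope')));
});

p.then(
  () => console.log('fulfilled'),
  err => console.log('rejected:', err.message)
);
// prints "rejected: nope"
```

Resolving with a rejected promise adopts that promise's state, so `p` itself ends up rejected.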
I like how you consider yourself a minifier, with const p. Yeah, that's probably the wrong place to do that. Yeah, I could have afforded the extra bytes there. It would have been okay. Yeah. Let's have a look to see how it's going. Oh, we have a crowd that feels they know promises. A clear winner there, with most of the audience saying it'll reject. And the answer is, of course, it will reject. And the reason is? Well, resolve is not the opposite of reject. There we go. Oh, is that it? You'll take that. You'll take that as an answer. There you go. If you resolve a promise with a rejected promise, it becomes itself rejected. In the same way that, you know, if you throw, it becomes a rejection as well. So, there you go. Ta-da. It's clear how I get to feel really smart about things I looked up in advance, you know. But I mean, isn't that how we know everything? Yeah. You know, I shouldn't feel too bad about it. It's only a search query away. You didn't know it until someone told you or you looked it up. So, there we are. So, I guess we should get on with the next talk. Our next speaker is going to be talking about storage. It's Drew Knox, everybody. Drew Knox. Hi, everybody. So, as they said, my name is Drew Knox. I'm a PM on the storage team. I work on a few other projects, but storage is really what I'm here to talk about today. Before I get started, though, my mom told me right before I came on that my grandma was going to be watching this talk. So, please laugh at all of my jokes; otherwise, it'll be cripplingly embarrassing throughout. So, again, thank you for that. So, before I get started, I want to do a show of hands. I'm the talk right after lunch, so you guys are all probably in food comas, not really paying attention, catching up on email. So, a little calisthenics to get you guys going. First, how many of you are still capable of raising your hands?
You didn't eat too much. You can, yeah. All right, good. A few people lost, but that's okay. All right. Now, on to the real question. How many of you have used client-side storage in a meaningful way, not just playing around with service worker in a demo app, in one of your sites? All right, now keep your hand up if you viewed that primarily as a critical performance optimization, not offline. That's about right. So, when we look at these kinds of numbers through Chrome usage metrics, we see that about 2.5% of page traffic uses things like IndexedDB or Cache Storage. So, my goal today is to convince all of you, so that hopefully you'll all have your hands up next time around at CDS, that client storage is the most important performance optimization you can make for load time in all browsers. And most importantly, because we all know that caching and all this is important, I wanna convince you that it's available today and that it's kind of low-hanging fruit for you to pick up everywhere, for all of your users. So, why is this important? We've heard this number repeated a lot, which is that you lose half of your users if your site takes more than three seconds to load. I won't belabor the point, but it's pretty scary, right? It's kind of a horror movie. But when you think about it, it's actually a lot worse than that, because on the average 2G network, it takes three seconds just to get the first byte. So, we're kind of already hosed, right? We're fighting an uphill battle, and what's worse, 320 milliseconds is how long it takes to load one megabyte off the network. This is really hard. The deck is kind of stacked against us. So, we need some tools to help us not just improve our loading performance, but avoid the need to hit the network at all. All right, so, we know it's important. We know we've gotta do something, but I don't just wanna preach a horror movie. I wanna give you guys some actionable steps.
So, in my talk today, I wanna walk through how you should reason about spending your time on client storage, where the biggest wins are, the least amount of work for 80% of the value, some technologies that you can use along with some libraries that'll make it easier and more ergonomic, how much storage space you have available, and then, if you guys are all really good and you laugh at all my jokes and my grandma's really proud of me at the end, I'll give you guys a view of some of the future things we're looking at that are kind of exciting. Now, before I move on, I was told I should explain the first line, because nobody thought these emojis were conveying "how you spend your time." My girlfriend said it made no sense, but she's also an iOS developer, so what does she know? How are you going to invest your time? Web developers are pulled in a thousand different directions. Lots of us are full-stack engineers. Unfortunately, we're working in a place where Flexbox is still one of our most powerful layout primitives. We don't really have infinite resources to do infinite things. So, before I get started, I want you, for the purposes of this talk, to think about storage as cache, not offline support. Offline support is really important and it's something that's been touched on a lot. I just want to focus on cache as a performance optimization here today. All right. As a cache, you kind of have this spectrum of investment that you can make. Browser cache, all the way on the left, is sort of the default. It's relying on the browser to get things right for you, hoping that your responses are cached and that they aren't cleared before the next time the user visits. And on the other end, you're building a spaceship. This is service worker, cache storage. You're optimizing everything to the nines. You're hitting like three-second load times. This is sort of what you've been hearing from a lot of folks.
So looking at the first one, browser cache, doing nothing, it does have some real benefits. You speed up repeat visits for your users, and that's not insignificant. But unfortunately, it only works for network responses. It's unpredictable. You don't know when it's gonna be cleared out. And it's got pretty coarse granularity. It's at the level of the files you served up. So this is not great. It's kind of sad. Optimized browser cache is probably what a lot of you are doing today, and it's a really good step. This allows you to not only get repeat visits sped up, but you can actually get some proactive page load improvements using things like link rel=preload or any number of things to try and load things before they come in. But it still only works for network responses. So these are a lot of the optimizations you've seen people suggesting sort of offhand as they've been giving talks. It's still unpredictable because, again, you're relying on where the browser is storing things. And there's not much granularity here either, because it's still at the level of network resources that you've served up. Still not great. Content caching is where the first big step function can come in in terms of improving performance. You get proactive page load improvements like before, but now it can work for all response types. So when I say content caching, I mean things like saving image blobs in cache storage, if it's available, or in IndexedDB, and then serving your image tags with a blob URL, all kinds of things like that. You have some predictability, right? Because the things that you're storing in cache storage or in IndexedDB, you have control over. But you're still using network responses for some other things, so it only gets a yellow here. It's not perfect. You have content granularity, and this is really important. Granularity is something where you want to be able to change something and not have to redownload your whole bundle.
So the more you can break things up and have your cache invalidate for only small pieces, the better. So again, you get granularity for content but not for your network requests. So it only gets a yellow. This is still pretty valuable, though. It gets a smiley face from me. Full cache control, the spaceship. This is a lot of work. I'm gonna be honest with you guys. I've never effectively done it, and I work on this team, so I should theoretically be able to say I've done a thousand of these. So it's great when you can nail it, and you've seen a lot of really big production apps that have, but with this comes a lot of work. You get proactive page load improvements like before. You get all response types again, that's great. It's fully predictable, because now you're pulling even the network responses into a cache that you control, and that's really valuable. You can guarantee your user a certain performance level. You also have content granularity for your network requests and the content that you're serving. And as a major bonus, you get offline support, which people have talked about a lot. So I would be crying tears of joy if everyone started building their apps like this today, but I understand that that's pretty hard. Realistically, I think you guys are gonna wanna sit somewhere between the optimized browser cache that most people are doing today and content caching. This is kind of the sweet spot, right? If you can serve all of your content from IndexedDB or cache storage or something like that, you really have access to storing all of your content even if you're not building that spaceship with service worker. So you can get full performance levels for your site, not just the app shell or something like that. I've talked a lot about how this is maybe not too hard, it's kind of low-hanging fruit, it's really important. But let me put my money where my mouth is and dig into some code.
So first of all, as I put this together, I really fretted about whether or not I should put my thens on a new line or attached. I was afraid I would get flamed one way or the other. I went with their own line, but please don't hurt me. That's my best go. So here I'm using a Redux app. It could work with any framework, right? You could use your own view binding library. I just create my store, get it set up. Here's the magic part. I'm using an IndexedDB wrapper library to store my state essentially whenever there's a change. This is done asynchronously, so it's not going to block the main thread. And then later, when I'm re-inflating my state, instead of hitting the network or pulling from Firebase or something like that, I grab it and fire a database-loaded event, which will reinstate my state without having to hit the network. So this kind of pattern, as you can see, was only three to five lines of code, and it can avoid an entire network hop for your whole app, right? You have your entire app state all saved to local disk really, really easily. So this is a pattern that I think works really well with Redux-style apps, but it really can work for anything. It maybe takes a little more work if you don't have a single object you're trying to save. If you wanted to tweet this, here's a good slide with all the syntax highlighting. In general, there are a few best practices. I hinted at them, but just to make them clear: when you're managing your cache on the user's device, you want to make sure you're doing client-side chunking. This means you might pull in an initial bundle and then kick off requests for smaller, more granular pieces so that you can invalidate only small chunks as things change. This pattern is a little more complex but can really save your network bandwidth. You also want to preload pages the user might be about to visit.
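The pattern described above might look something like the following hypothetical sketch: persist a Redux-style store's state on every change, and re-inflate it on the next visit instead of hitting the network. The talk used an IndexedDB wrapper library; `storage` here is a Map-backed stand-in with the same promise-returning get/set shape so the sketch runs anywhere, and the store, reducer, and key names are all made up for illustration.

```javascript
// Stand-in for an IndexedDB wrapper: same async get/set shape.
const storage = {
  _db: new Map(),
  async get(key) { return this._db.get(key); },
  async set(key, value) { this._db.set(key, value); },
};

// A minimal Redux-style store (create, dispatch, subscribe).
function createStore(reducer, initialState) {
  let state = initialState;
  const listeners = [];
  return {
    getState: () => state,
    dispatch(action) {
      state = reducer(state, action);
      listeners.forEach((listener) => listener());
    },
    subscribe(listener) { listeners.push(listener); },
  };
}

const reducer = (state, action) =>
  action.type === 'ADD' ? { items: [...state.items, action.item] } : state;

const store = createStore(reducer, { items: [] });

// The "magic part": persist asynchronously on every change, so writes
// don't block the main thread.
store.subscribe(() => storage.set('app-state', store.getState()));

// On the next load, pull the saved state from disk instead of the network.
async function rehydrate() {
  const saved = await storage.get('app-state');
  return saved || { items: [] };
}
```

In a real app the `rehydrate` result would seed the store before first render, which is what avoids the network hop the talk describes.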
So if you imagine you're on some news site, you might want to load all of the articles that are shown above the fold or something like that. You also want to save commonly repeated components. So if you have a hero image, a logo, anything like that, just save as many of them as you can. Get rid of as many network hops as you can. What should you be using to do this, though? You guys might be aware that the web is not really one for having a single answer to a problem. There's lots of different ways to do things, but thankfully it's pretty simple in terms of what you want to use in the browser. So if your data is URL addressable, you should use cache storage where it's available. It's really simple. It's kind of just a key-value pair. Works really great with service worker. So it's your no-nonsense, easy, easy solution. If you've got structured data, or if you have a lot of users who don't have access to cache storage, IndexedDB is where you want to go. These two combined are asynchronous, they're modern, and they're getting lots of attention from browser makers. This is sort of your bread and butter. This is where you want to be doing all your work. Now in terms of availability of cache storage, I have here a caniuse usage-weighted slide. It's available in a lot of places already. So I know some of you are thinking, oh, I don't want to deal with having to do progressive enhancement or fall back to IndexedDB, but you can hit a lot of your users with cache storage today, and it's only going to improve in the future. So I have here just a few libraries that we on the Chrome team think are great for helping to improve your interactions with IndexedDB. They all give promise support. Some of them give database sync. Some of them even try to recreate SQL syntax. These are all great libraries in terms of ergonomics, but they've also thought a lot about staying small when minified so that they don't impact loading performance. So that is a really big win.
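The cache-storage approach for URL-addressable content can be sketched as a cache-first lookup. The strategy is written here as a plain function that takes its cache and fetch implementations as parameters so the logic is easy to test; in a browser you might pass `await caches.open('images-v1')` and the global `fetch`. The cache name and URL are made-up examples.

```javascript
// Cache-first: serve from disk if we have it, otherwise hit the
// network and store a copy for next time.
async function cacheFirst(cache, fetchFn, url) {
  const cached = await cache.match(url);
  if (cached) return cached;              // served from disk, no network hop
  const response = await fetchFn(url);
  await cache.put(url, response.clone()); // store a copy for next time
  return response;
}
```

For the image-blob technique mentioned earlier, the response could then feed an image tag, roughly `img.src = URL.createObjectURL(await response.blob())`.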
For cache storage, it's a newer API, so there's not quite as much available. We heard from Jeff about ServiceWorker Toolbox and ServiceWorker Precache. Webpack has the offline plug-in, but otherwise there aren't quite as many things that are available now for use. All right. If I had to guess, I would say this was probably the area you guys were most skeptical about when I started the talk. We're making websites, right? We aren't supposed to be using device resources. That's the whole point of the web. It's ephemeral. It doesn't stick things around. I'm sure that's been changing as we've been talking about service workers and kind of convincing you guys of offline, but it's still a real question. How much space do you get, and how reliable is it? So at first I started looking at empirical ways that I could answer this, right? Small demo apps to try to fill the cache, fill the storage partition, and see what would happen. But then I realized, why don't I just email all the different storage teams at the different browsers and ask them how much space is available? And it turns out that worked way faster. So the browser quota limits today kind of fall into two camps. We have percentage-based. Chrome gives you 6% of free disk space per origin. Firefox gives you a little more. It's 10%, shared across eTLD+1. So, for example, play.google.com and movies.google.com would share storage. Safari gives you at least 10% of free disk space. And Edge is, well, a little bit more complicated, but thankfully it's still fairly reasonable, right? Edge is largely a desktop browser, so you can rely on being in one of the higher tiers. So you don't have to worry about all four of these all the time. And based on usage statistics and sort of looking into our own telemetry, we found a simple rule of thumb: you have 50 megabytes available on all devices and all browsers today.
So this will get higher as you're working on higher-end phones, but you can think of this as your minimum budget that you can use to try and improve performance on your site. If you remember my slide from a ways back, if it takes 320 milliseconds to load a megabyte across the wire and you've got 50 megabytes available to you, that's 16 seconds of load time that you can save, averaged across all your users' visits. That's pretty huge, right? 16 seconds could take that 19-second loading app down to three if you were able to condense all of those network hops into something that you could cache. If I could, I would take this mic off and I would do a mic drop, because I think that's probably one of the most exciting things to me about client-side storage. All right, but with great power comes great responsibility, right? We're now using resources on the user's device, and this is kind of reinventing what we think of as the contract we make with users. So first and foremost, you need to make sure you're measuring and thinking about your app's overall storage footprint. This means something like figuring out your eviction strategy to make sure you don't just balloon up to 6% after three visits. But the whole point was we wanted to be using the user's device. So we can't just keep arbitrarily lowering our storage footprint, right? That would get us back to where we are today. So the second number that comes in is your read-to-write ratio. And this is something that the Chrome storage team thinks about globally as well, where we try to make changes to our eviction policies that reduce the storage footprint without lowering this ratio. That means we're trying to clear out data that's never gonna be read again. Now sometimes you'll clear something and maybe it was gonna be read in three months, right? And sometimes that's right, but sometimes you actually do want your cache to stick around for three months.
So another metric that you can look at is, when you store something, check to see if you had cached the resource before. And if you had, look at the time difference between the two, and that'll give you a sense of how long of a horizon it was sitting there for, kind of useless, on the user's device. So these are the three numbers that Chrome looks at and Chrome really cares about. And I think it's a really useful way to think about storage, but I would love to hear from other people if they have other metrics that they think are important to track. It's a really interesting space. All right. Your eviction strategy is not the only thing at play, though. The other browsers have eviction strategies of their own, or at least some of them do. So for Chrome and Firefox, when Chrome's quota or the disk is full, we evict the least recently used domain from the list. Now it's important to note, this is very rare. Chrome clears a domain's storage less than 0.1% of the time. So for the most part, when you store something it sticks around, but it is something to keep in mind. Safari and Edge, however, don't clear IndexedDB, so you can treat that as persistent. Now on Firefox and on Chrome, there is something to try and help you work around the eviction policies when it's important. It's called persistent storage. It shipped in Chrome 55, and it's in development in Firefox. The way this works is you essentially request the persistent storage permission, and then Chrome will exempt you from automatic clearing. It'll also, when the user clears browsing data, pop up a prompt for any persistent storage sites that says, you're also gonna clear these, is that okay? So this is trying to help make sure that if you've entered a contract with your user that something should be available offline, you can guarantee that.
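Requesting that exemption is a one-call affair. A minimal sketch, assuming the standard `navigator.storage.persist()` API described above, which resolves to a boolean: true when the origin is exempted from automatic eviction. The wrapper function and its fallback behavior are my own framing, not part of the API.

```javascript
// Ask the browser to exempt this origin from automatic storage eviction.
// Resolves false when the API is unavailable or the request is denied.
async function requestPersistence(storageManager = navigator.storage) {
  if (!storageManager || typeof storageManager.persist !== 'function') {
    return false; // persistent storage not supported here
  }
  return storageManager.persist();
}
```

Passing the storage manager in (defaulting to `navigator.storage`) keeps the logic testable outside a browser.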
Now unfortunately, when you do user surveys and you ask them about storage, it becomes pretty clear that it's not something they either want to or are able to effectively reason about upfront. If you ask a user, hey, I'd like to store 100 megabytes of your data, is that okay? They'll either freak out and say no, or they just won't really understand the question. But they are very good at reasoning about "your storage is full, which site would you like to clear?" So because of this, we try to avoid showing a permission prompt when you request durable storage or persistent storage. Instead, we have a heuristic, which we use to either automatically grant or deny. And the heuristic is essentially: if you're treated like an app, you'll get app-like storage persistence. That means if you've been added to home screen, if you have push notifications, if you've been bookmarked, or if Chrome has tracked that the user has engaged with your site a lot. So if any of those hold, you'll get the permission; otherwise it'll be denied. We recommend that you hold off on showing offline UI until you've received that permission, so that you know it'll be around. Certainly not a requirement, but it's a little bit of a best practice. And then use the quota estimate API to make sure that you aren't ballooning your storage, or, if you have a regression where you get some kind of storage leak, so you can clear it up, because now you can't rely on the browser to protect you anymore. You have to kind of take your life into your own hands. All right, that's sort of the end of the practical area of my talk. You guys laughed at a few of my jokes, so I like you. I feel like we've grown close over this time. I want to give you a look into what we're thinking for the future, because I think it's important. I think it's going to change the way we think about storage and apps and offline and all of these things.
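The quota estimate API mentioned above can back a simple self-check. A sketch, assuming the standard `navigator.storage.estimate()` call, which resolves to an object with `usage` and `quota` in bytes; the 80% default threshold here is an arbitrary example value, not a browser rule.

```javascript
// Report how much of the origin's quota is in use, and flag a possible
// storage leak or regression before eviction kicks in.
async function storageHealthCheck(storageManager = navigator.storage, maxFraction = 0.8) {
  const { usage, quota } = await storageManager.estimate();
  const fraction = quota ? usage / quota : 0;
  return { usage, quota, fraction, overBudget: fraction > maxFraction };
}
```

A site might run this periodically and proactively clear old entries when `overBudget` comes back true.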
So first and most importantly, we want to give you guys more space. We're kind of channeling our inner Elon Musk: more people with more space, all the time. This is kind of a new paradigm. On Chrome, we're thinking about how we can start giving PWAs, or web apps in general, access to as much device storage as native apps get. And we want to do this because we think that as PWAs become more common, the divide between apps and PWAs and all these things is going to become less clear. And so we want to give developers all the tools that they need to make great experiences. Some of you might be cringing or freaking out. It is a little bit of a departure. You could go to a website and it could take all of your storage. So this is something we're thinking about very carefully. And again, those three metrics I talked about before, we're tracking those very closely, and we're going to be ratcheting this up slowly to make sure that bad ecosystem changes aren't coming into play. So this is something to look for, though: storage increasing over time. We're also thinking about some new functionality, and it kind of falls into two stages, right? We have one set of things that are in development, and they're actually pretty far along. IndexedDB observers are a way to help you synchronize transactions with IndexedDB across tabs. So if you have a kind of UI that uses something like that, it's really effective for making it quite simple. Async cookies are going to be a big win. They'll also be available in service worker, so that's pretty awesome. Both of these are WICG specs that have some degree of implementation, so they're coming sooner rather than later. Then we've got a couple areas of exploration that we're looking at. So I mentioned a lot of libraries that will give you promise support for IndexedDB. We know it's great. We know it's the way that people wanna work with async code moving forward, or at least a large subset of people wanna work with it that way.
But it turns out layering promises onto IndexedDB transactions is kind of thorny. It turns out that hitting all the edge cases is hard. So this is an area that, in terms of baking it into the platform, we're still thinking about and figuring out. The last one is kind of exciting, and it's personally very, very cool to me. We're calling it writable files, and it's the idea that we wanna start giving web apps the ability to get reusable handles to files on the user's device. So I'm sure a lot of you have gone to some website where they ask you to upload a file, or they offer to download a file and you downloaded it as a zip, or every time you made an edit you had to re-download or re-upload the file. It's not a great user flow. So instead, what we wanna do is create a way for apps to get a handle to a file such that they can just track changes to it like a normal app would. Again, this is an area of exploration. We're figuring out how to get the privacy and security models right. But it is a spec that's available in the WICG, and we would love to get people commenting there, telling us what use cases they might have for it, and what concerns or mitigations they have in mind. It's all on GitHub. We would love to get collaboration from everyone. All right. I hope that I've made an okay, hopefully great, case for the idea that caching with client-side storage is a huge win that you have available to you today. It's not the easiest or the first thing you'll jump at, right? Performance optimization is, in a lot of ways, at least for me when I'm developing, a secondary impulse to making it look right or adding a cool new feature. But it can be a huge win, and it can really translate to increased bottom lines. So it's very important.
And I hope that I've convinced you guys that there's stuff that you can do via link rel=preload, loading blob URLs for images, storing things in IndexedDB, that works across all browsers, that you can do today, and that it's not too much work to get it working. A few concrete takeaways: storage isn't just about offline. Think about it as a performance optimization just as much, if not more. Offline is amazing, but until your page is loading quickly, there's not gonna be anything available offline anyways. 50 megabytes is available to you on all browsers, on all devices. And this is gonna go up as time goes on, and it will go up when you have users who are using higher-end devices, but this is sort of your bare minimum budget you have for improving performance. IndexedDB is for structured data. Cache storage, where it's available, is for URL-addressable data. And that's it. Thank you guys. I really appreciate you taking the time. Right. Super duper. Should we crack on with the next talk then? Absolutely, no time to waste. Then I can sit in the back and be petrified about my talk. Yes. You're sweating for an entire hour. Well, it's just this finely crafted jacket. I can't even take it off because the shirt is quite transparent. So it's... Nobody wants that. Breaking all kinds of rules if I do that. Yeah, so let's not do that. So in which case, we better hand over to our next speaker. It's gonna be Seth Thompson talking about V8. I wanna know when they're gonna release V9, because it's been ages. Oh my goodness. I think I just died a little on the inside. On that horrible note, let's talk about V8 and wasm. It's Seth Thompson. Hey. Hello, everybody. All right. Make sure it's set up. Okay. Hi, my name is Seth Thompson, and I'm a product manager on the web platform team. I work on V8 and DevTools and WebAssembly. And today I'm gonna talk a bit about JavaScript and V8 and WebAssembly too. So what is V8? Well, V8 is an engine.
It's part of Chrome, and it runs the JavaScript in webpages. So the V8 team has a simple mission. It's right here: to speed up real-world performance for modern JavaScript and enable developers to build a faster future web. So really, there's just two things we care about. We care about making the JavaScript that's already on the web fast. And then we care about making it easier to write performant JavaScript. And that's things like our language efforts and standardizing new parts of the ECMAScript language. So what do we mean by real-world web? Like, what is the real-world web? Well, the real-world web is different for everybody. It's whatever sites you are most likely to visit. Whether it's the Onion, the New York Times, pictures of cats, these are the things that people browse most frequently. So in our testing, in order to try to benchmark V8 and come up with a workload to drive our optimization strategies, we wanted to make sure we understood what this real-world web was and what its performance characteristics were. So we set out to pick a variety of sites. Internally, we just call them the top 25 sites. They're not necessarily the top 25 in the Alexa ranking, but they're meant to be representative of a variety of different types of sites, different content, and most importantly, from around the world. Fun fact: internally, we actually use Taylor Swift's Twitter page as the Twitter example. So there's a variety of sites we use, and the reason we have ended up here, where we're actually measuring the performance of V8 against real web pages, is not sort of obvious. It took us a while to get to this stage where we're actually using real websites to measure performance. Here's a simplified history of how we measured performance on V8. In the first era, sort of at the very beginning, most JavaScript VMs used micro-benchmarks to measure the performance of code. Now, micro-benchmarks test individual language features in isolation.
So you might run a loop to push an element to an array 10,000 times. That's a micro-benchmark; it just tests array push. Now, this sufficed for a while, but we quickly realized that measuring things in isolation, in a vacuum, didn't necessarily yield performance optimizations that translated to real applications or real, bulkier code. So the next era was really static test suites, things like Octane, you might recognize the name of that benchmark. Now, Octane is more representative than a micro-benchmark because it includes real code. Octane includes a Game Boy emulator, actually, and a ray tracer. So this is real code that people have written, but benchmarks like Octane are static. We haven't updated Octane since we released it. And in addition, not all websites run ray tracers. As I mentioned earlier, Taylor Swift's Twitter page does not have a ray tracer embedded on it. So this third era was necessary because we realized that, in addition to the previous workloads that we used to measure performance, we needed something that was even closer to what users found when they browsed every day. Another way to think about this is that the sophistication of your performance measurement strategy is probably inversely correlated with the distance from your target. So if you're starting out on a performance problem and you are four or five or eight times slower than where you need to be, a quick and dirty workload like a micro-benchmark could be all you need to just close the gap a bit. But as soon as you start getting closer and closer to your target, or closer and closer to the optimal performance of something, the importance of how representative your workload is rises drastically. And that's why we find ourselves moving beyond the era where micro-benchmarks sufficed, and even static test suites like Octane, and trying to measure against real websites. So what sort of insight do we glean from measuring websites?
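The kind of micro-benchmark described above could look like this sketch: timing one language feature (`Array.prototype.push`) in a tight loop, in isolation. The function name and iteration count are illustrative; the point, as the talk says, is that a number like this says little about real page loads.

```javascript
// A classic micro-benchmark: time array push in isolation.
function benchArrayPush(iterations = 10000) {
  const arr = [];
  const start = Date.now();
  for (let i = 0; i < iterations; i++) {
    arr.push(i);
  }
  return { elapsedMs: Date.now() - start, length: arr.length };
}
```

Optimizing for a workload like this can make `push` faster without making any real website load faster, which is exactly the limitation that pushed the team toward static suites and then real sites.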
Well, we instrumented V8 so that we could see where it spent time, this is where it spent time in the V8 engine, when loading websites like the following. We run more websites as well, but this was just a sampling. And you can see here that we have really detailed insight into what parts of the compiler and what parts of the VM contribute to the load time of pages like this. And this helped us change some assumptions we made about where we should spend our time. You can see here that a lot of websites run compiled JavaScript, or at least a lot of websites spend a portion of the actual load time running compiled JavaScript. And there's a bunch of other parts of the V8 engine that contribute even more to the total page load. So this was an insight that came from moving beyond static benchmarks and from looking at actual websites, and it has really driven our optimization in the last year or so. So I'm happy to announce, actually, that we've improved the median page load for these top 25 websites we measure by 5%, which in some sense seems like a small number. But if you think about how long Chrome has been optimizing page loads, and if you multiply that savings across every time that somebody around the world accesses one of these sites, 5% is huge. It's an incredibly large savings. And the improvements on some pages were even higher than that median. Facebook, YouTube, Wikipedia, Instagram, and Discourse all saw big performance improvements. And most of these came from improvements in the runtime and in the parser, not necessarily our optimizing compilers. And I mentioned that static test suites were not as representative as real web pages, but sometimes they can be useful. And in fact, we look at all of these types of workloads when we're working.
And the Speedometer benchmark, even though it's a static test suite, actually maps quite closely onto these real-world websites, because what it's doing is measuring the performance of frameworks, things like React, Angular, Ember, and we were happy to see that the performance work we did for the real websites correlated to Speedometer. And so in the last six months, we've improved Speedometer performance by 15% to 25%, depending on the device. So that's performance, but there are many aspects to making JavaScript run efficiently on a computer or a phone. And in particular, the memory consumption of V8 matters a lot, especially on low-end devices. These are devices with less than 512 megabytes of memory. And in fact, the number of these devices out in the wild is incredibly large. So we focused on reducing the memory footprint on these devices in particular, and actually reduced Chrome's overall memory consumption by 35% by tweaking the heuristics we use and the trade-offs we make in the heap size and zone memory. So that's a big number. Another brief engine update: I've talked previously about the different parts of V8. There's an optimizing compiler, there's a baseline compiler. Well, we have been hard at work introducing an interpreter to V8. Now, you might be thinking, an interpreter, isn't that supposed to be slower than an optimizing compiler or a JITing compiler? Well, an interpreter has a benefit, which is that it has a lower memory footprint than an optimizing compiler, which has to compile all JavaScript into native code. And in addition, the interpreter is a simpler piece of software, so it's easier to make optimizations with a simplified execution pipeline. And we can also choose, we don't have to use the interpreter always, we can choose sort of how we tier up to full speed. So adding the Ignition interpreter actually brought sizable memory savings.
But in addition, it allows us to make some improvements which made ES6 generators almost three times faster. So we're excited that this has shown real wins already. So let's talk about ES6 features, or ES2015 and beyond. V8 supports ES6, ES7, and parts of ES2017, or things that haven't actually been formalized into an ECMAScript standard yet. And this slide doesn't properly capture just how large of an update to the JavaScript language these iterations were. So if you've been programming using ES5 and below, it's definitely time to take a look at the powerful features and the idiomatic features that these language updates bring. It's also time, if you look at the Kangax compatibility table, a lot of browsers have support for these features. So if you're using Babel or another transpiler, it may be time to start considering whether you can turn off transpilation of some features and whether your users have support for them. I'd also like to announce, and this is something that we've been spending a lot more time on, how V8 embeds in Node. I think Paul talked a lot yesterday about how many developers are full-stack developers, and the browser JavaScript environment is just half of the JavaScript environments that you care about. So as of Node 7, all of these features are available by default, with the exception of async await, which will be in the next major Node version. And we've also been working on the performance of individual JavaScript features. You can see here that we've been working on things like spread, generators, for-of over arrays, and destructuring. And with all of these, we're getting closer and closer to our optimal performance. As well as built-ins, like Object.create, Function.prototype.bind, Array.prototype.push. These are things which are used all the time in framework-heavy code and otherwise. But I wanna focus on one feature in particular, which we launched recently. This is async await.
And this has been one of the most frequently requested features and most eagerly anticipated features of all of those language features on the previous slide. And that's because this code should look very familiar to you. JavaScript is inherently asynchronous and it's very, very easy to end up in callback soup. So async and await are primitives which, when combined with promises, can turn code like this into code like this, which looks like the synchronous code you might run or write in another environment. So async and await bring readability, they bring debuggability, and they make it a lot easier to write asynchronous code in a style that maps more onto the synchronous way that you might think about what your application is trying to do. So I'd like to give a brief demo right now and I'm actually working on a little site that looks kind of like Pinterest. So let's, let me make these a little bit bigger here. Okay. And what I want my site to do is, let me see if I can make this source even bigger there. Okay, could people read that? Maybe a little bit. So what I'm trying to do is I want a site that loads on a mobile device or on a poor network very quickly, with small thumbnails of the images before it loads the full-res versions. So I've gone into my network tab actually in DevTools and I've throttled the network down to 4G, which actually is still quite fast, but I think it'll be enough to see the effect we're going for here. And when I refresh now, you can see that I'm loading in these blurred thumbnails and then as the full images get downloaded, I'm replacing them into the page. Now for the sake of this demo, to show this in a simple way, I've used the Fetch API, which definitely is overkill and is quite contrived, but what demo is not a little bit contrived. So you can see here, I'm fetching a thumbnail, and fetch is an API which returns a promise. So traditionally the way that you would handle this is using a .then.
I'm decoding the response into a binary blob and I'm taking that blob and creating a URL which I insert into some HTML and insert into the page. Then after that's done, I am fetching the full version of the image and taking the async fetch API, getting a promise back, converting the response to a blob and then inserting, or rather updating, the source of my image tag, removing the thumb class, which is the thing that blurred it. So you can see here, pretty simple structure, I'm just inserting them into this grid I have. So this code is not terribly hard to understand, but it seems pretty verbose for what I'm doing and it's not quite clear to me that waiting for each of these previous then bodies to finish before executing the next ones is quite what I want to do, and I think that by converting this to async/await, we may be able to make some optimizations. So how do you use async/await? Well, await is a keyword which waits for a promise to resolve before the statement ends, but it doesn't block the thread. So the browser has an event loop, and the reason that JavaScript relies so heavily on asynchronous code is because the browser doesn't block on IO. So it's very important that even though your code might look synchronous, and async/await helps it do that, it's important to remember that under the hood, waiting for, say, the promise returned by the fetch API to resolve doesn't block the main thread, and that's really the power of async/await. So how do we do this? Well, I'm gonna do one thing first and that is I'm gonna take these snippets of code that are creating these HTML tags and I'm just gonna make them their own function. So let's do that. This will be insertThumbnail and I'm gonna take in my image, which is the string there, the name, and use that as an ID on the HTML. So I'll take that as an input, and then it takes a blob too. So let's see, that should work there and then I can replace this with insertThumbnail, with the image and the blob.
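The promise-chain structure being described looks roughly like this. This is a sketch, not the demo's actual code: fetchBlob, insertThumbnail, and updateThumbnail are hypothetical stand-ins for the demo's fetch-plus-DOM work, and the thumbs/full URL scheme is made up, so the shape of the chain is visible on its own:

```javascript
// A sketch of the promise-chain version of the demo. fetchBlob,
// insertThumbnail and updateThumbnail are hypothetical stand-ins for the
// demo's fetch + DOM work, injected so the flow can run anywhere.
function loadImage(name, { fetchBlob, insertThumbnail, updateThumbnail }) {
  return fetchBlob(`thumbs/${name}.jpg`)        // fetch the blurred thumbnail
    .then(thumbBlob => insertThumbnail(name, thumbBlob))
    .then(() => fetchBlob(`full/${name}.jpg`))  // then fetch the full image
    .then(fullBlob => updateThumbnail(name, fullBlob))
    .catch(err => console.error(err));          // errors caught at the end
}
```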
Great. And I'm gonna do the same for this other code which takes my full-resolution image and replaces the source of a particular image tag. So let's make a function of that, say updateThumbnail, and that also takes the image name, so it can find the HTML element, and the blob which it inserts there. Okay. So that makes things a little bit easier to see already. Let's make sure that things still work. Yeah, they do. Okay, perfect. So the first thing I'm gonna do here: if the await keyword appears in a function, there needs to be some indication to the browser that the function has awaits inside of it, or that it is following this other model of programming. So we use the async keyword to dictate that a function is gonna have awaits inside of it. So usually the async keyword just goes before the function keyword. But in this case, because we're using ES6 arrow notation here, you can also put the async keyword right in front of the parameters to your arrow function. So this is an async function now. And instead of taking a promise chain here, I'm gonna go ahead and write const response, and this is the first of two responses, await fetch. So what does this do? Well, what this does is, when this line executes, the fetch API returns a promise immediately, but the promise doesn't resolve until the network request for the image goes through. This line will actually await the resolution of that promise before assigning it to the response constant. But as I mentioned earlier, it won't block the main thread while it does this. So the code is as asynchronous as it was before, but to a reader is a single line of sort of synchronous code. So this is incredibly powerful. To continue down the line, we sort of do the exact same thing with all of those then chains. So let's go ahead and quickly write this out, and I'm gonna take my blob here.
Actually I'm gonna call it thumb, just so we can remember that this is a thumbnail, and this is awaiting responseOne.blob(). Great. And after that, actually, then I can just insert my thumbnail with image, the name of my image, and the thumb blob. Okay. So already I've gotten rid of all of this code into these couple lines which just look and make a lot more sense when you're reading through things. Okay. Let me just quickly do the exact same thing. I'm gonna copy-paste this, and this is my second response, and this is for the full image, and I'm getting back the full blob from this, which, rather than inserting, I'm updating my thumbnail with. Okay. So let's quickly see if this does what we expect, and it does. Okay. So you can see here that we've removed all of this code with the then chains and the then callbacks and turned it into six lines of what looks like synchronous code but is really, as I mentioned, asynchronous under the hood. Now there's one extra thing to show here. In my promise chain, I caught errors at the very end. Now the way to do that in async/await code is as you do it in synchronous code. You just insert a try block around what you want to try to run without errors, and you catch any errors if they come out at the other side. So that looks and reads just so much cleaner than before. All right. Let's make sure that runs, and we've successfully converted promise-based .then code to async/await. So you can see how powerful this is, but by converting things to async/await we also sort of can potentially better see a problem with this, and that is, if we go to the network tab, we can see that all of my mini thumbnails here, so those are these first 11 lines, they all finish before my full resolutions get fetched. Now why is that? Well, if we look at the code, it's because we're waiting on the first fetch to finish before we kick off the next fetch. So that's a problem.
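Put together, the converted version sketched from this description might look like the following, with the same hypothetical helper names standing in for the demo's fetch and DOM code, and the try/catch replacing the trailing .catch:

```javascript
// The same flow converted to async/await. fetchBlob, insertThumbnail and
// updateThumbnail remain hypothetical stand-ins for the demo's real code.
async function loadImage(name, { fetchBlob, insertThumbnail, updateThumbnail }) {
  try {
    // awaits the promise without blocking the main thread
    const thumbBlob = await fetchBlob(`thumbs/${name}.jpg`);
    insertThumbnail(name, thumbBlob);
    const fullBlob = await fetchBlob(`full/${name}.jpg`);
    updateThumbnail(name, fullBlob);
  } catch (err) {
    // errors handled just like synchronous code
    console.error(err);
  }
}
```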
I want to actually, I might as well be fetching both resources at the same time rather than waiting for one to finish before I kick off the next one. So this is one time where async/await has a little bit of a quirk, and that is that what you really want to do here is kick off the fetches straight away and keep track of the promises that they return up here, and then await the resolution of each promise in the proper place in this synchronous order, because I do fundamentally need to insert the thumbnail before I update it. But if we do things like this, instead of awaiting them inline, then the fetch actually kicks off right after those first two lines complete, and I'm not blocking or waiting on the resolution of that promise until the right time inline. So let's make sure this still works. It does, and you can see now that, let me make sure this is working. It's hard to see from the waterfall, but I believe that kicking off the first full resolution no longer blocks on finishing the download of the thumbnail. So that's an important thing to remember. Sometimes you want to kick off the promise before you await its resolution. Either way, I hope you can sort of see that this code is a lot easier to reason about and understand than the previous code, and so if you're using promises at all, absolutely you should take a look at updating your code to take advantage of async/await. And async/await is supported in Chrome Canary and will be in an upcoming release, the next release. So let's go back to the slides. Okay, so async/await, very, very cool feature and it just really simplifies the writing of code. So we talked a lot about JavaScript, but I want to move now to something that's a bit more on the horizon but promises a really powerful tool for you as developers. So WebAssembly is a cross-browser, plug-in-free, low-level language that makes it easy to run native code on the web. So there have been previous efforts to do something like this.
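A sketch of that parallel pattern, again with hypothetical stand-in helpers: the promises are created immediately, and only awaited at the point each result is actually needed, so both requests run concurrently but the thumbnail is still inserted before it is updated:

```javascript
// Kick off both fetches immediately, then await each promise where its
// result is needed. Helpers are hypothetical stand-ins, as before.
async function loadImage(name, { fetchBlob, insertThumbnail, updateThumbnail }) {
  const thumbPromise = fetchBlob(`thumbs/${name}.jpg`); // starts now
  const fullPromise = fetchBlob(`full/${name}.jpg`);    // also starts now
  insertThumbnail(name, await thumbPromise);            // await at point of use
  updateThumbnail(name, await fullPromise);
}
```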
Flash, you might even remember Java applets in the far-distant past, but each of those was encumbered by particular difficulties, and WebAssembly is the first cross-browser, open, standards-based solution. So we're really excited that we're off to a good start in the standardization of WebAssembly. Now, what are the benefits of WebAssembly? Well, WebAssembly is a binary format, so it's much smaller over the wire than a textual equivalent. You may be familiar with asm.js, which tries to achieve some of the same goals but by using a backwards-compatible JavaScript syntax. So WebAssembly is small from the start because it's its own binary format. And it's fast. In WebAssembly, you're dealing with memory and performing operations on numbers, int32s, int64s, float32s, float64s, in memory, rather than dealing with higher-level JavaScript primitives which have their own runtime associated with them. So WebAssembly is fast, with near-native performance. And as I mentioned, it's low-level. So it's possible to compile C and C++ to WebAssembly. And we're really excited about this because we think WebAssembly will unlock a new class of high-powered apps. If you thought it wasn't possible to run a Photoshop equivalent in the browser, it definitely is something that WebAssembly would be capable of running. So WebAssembly is designed to enable sort of high-performance computing and unlock things, not just games, but media applications and other things which traditionally have taken too much raw compute power to run effectively in JavaScript. So we have a demo on WebAssembly.org of an Angry Bots game, the Angry Bots game running. This is actually a Unity game that was compiled to WebAssembly. And you can see that the performance is a lot smoother than even the asm.js equivalent. And I definitely encourage you to check it out. And WebAssembly is already implemented behind a flag in Chrome, Firefox, and Edge. So this demo runs in more than one browser.
But today I wanna talk about how you as a JavaScript developer might take advantage of WebAssembly without digging into C and C++ or bringing out a compiler to compile WebAssembly. And that's because we think WebAssembly is also gonna unlock a really interesting functionality which might just be in a library. So you might be writing a traditional JavaScript web app and just use a WebAssembly module to perform some functionality which is computationally intensive. So if you're writing a progressive web app that encodes and decodes JPEGs, you could just offload the encoding and decoding of the JPEG file to a WebAssembly module and write all of the rest of the application, the user interface and whatever features you have, in JavaScript. So WebAssembly was created from the beginning to play well with JavaScript and interact just alongside of it. So just to give you a brief demo of this, this is far from an Angry Bots game, but I wanted to show you that as a JavaScript developer, if you receive a WebAssembly module, you don't have to really understand native code or C or C++ to still use it. So I've got an HTML page here that right now is just using this WebAssembly API, because it's exposed to JavaScript; the way you start and run WebAssembly is by calling a JavaScript API. So it's gonna load this WebAssembly module and instantiate it, which is what we have to do to run it. So let me quickly switch over my server and let's take a look at what this looks like. Okay, so briefly, before I forget, the other important thing here is to see what this WebAssembly binary is. Well, it's incredibly tiny. This is the binary representation, or the textual dump, of the WebAssembly module. That's all there is, and this WebAssembly module does something very small. It just adds two numbers together. So I mentioned this isn't a big game demo, but it should still be interesting to see how this interacts with JavaScript and HTML. So let's go to our new index here and this file.
So what we've done right now is, and I'm using async and await actually because all these functions are asynchronous, we fetch the module, we turn it into an array buffer and we use the WebAssembly API to compile it into a module and instantiate the module. You can think of a module sort of like an ES6 class and the WebAssembly instance as an instantiation of the class. The module can be instantiated more than once and you might want to use it in different ways. But regardless, the instance is sort of the unit of JavaScript object that we're going to use to invoke this WebAssembly module. So let's actually break here and see what we get. So instance is, let me just actually log it out here and we can see it directly. Oh, I don't know if that turned on. Okay. So what we get out here is, all right, let's go through what we're loading here. It's our server on the right page. Yes. I'm not calling load. That is, wow. Wow, thank you. I really, yeah. I told you this talk was about advanced JavaScript, but it goes beyond advanced to expert-level JavaScript. Thank you. Okay, so we're gonna load this and we might then see some results. Hopefully, I mean, you can never be sure here. Okay. And what we get out is an object which represents our WebAssembly instance. Wow. Okay. Now, WebAssembly instances in JavaScript have a simple thing on them. They have exports, because a WebAssembly module can export a bunch of different functions. So we see here, we get this object, it has an exports value and property, and addTwo is a value that it exports. So addTwo is the WebAssembly function. Now, the important thing to note here is addTwo is actually native code. So again, adding two numbers isn't a particularly impressive example, but you can see that from JavaScript, in an environment you're very familiar with, you can easily get access to something like a native function, and we can call it just like any other.
So let's take this, and I mentioned that all async functions return promises, so we'll have to use .then here, but we're gonna take our instance and we're gonna call instance.exports.addTwo. Let's take two numbers and let's log these to the console. Okay. And let's see what happens. Ah, I keep forgetting I didn't map my persistence over here. Let's do that one more time, and this is instance.exports.addTwo. Let's see what happens. Missing ) after argument list. Did I close this up correctly? We get 84. So you can see that even if you're not a WebAssembly developer, or even if you're not compiling these WebAssembly modules yourself, you can imagine a day in which, instead of downloading an npm module which provides a JavaScript implementation of something like a zip encoder or an image decoder or a PDF renderer, you could just rely on a WebAssembly version, or a WebAssembly implementation, of the same thing and use it with no knowledge of how it was created. So WebAssembly is currently in a browser preview period, and the roadmap we've announced is to collect developer feedback during this browser preview and launch WebAssembly on by default in browsers in early 2017. So WebAssembly is really close and it's gonna be really powerful, not just for native developers but for developers making any range of progressive web apps with some computationally intensive code. So thank you very much, that was my talk, and if there's one thing to take away, it's that V8 is continually investing not just in the performance of JavaScript but in unlocking new capabilities and improving the ergonomics of writing web applications in the first place. So thank you very much, my name is Seth Thompson. Thank you very much. While Jake gets himself set up. Oh, he's had the worst time of late with his setups, believe me, we'll talk about that, I get the feeling.
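For a concrete picture of what the demo did, here is a sketch using the WebAssembly JavaScript API. Rather than fetching add.wasm over the network as in the demo, a minimal module's bytes are hand-inlined (it exports a single addTwo function, the name used in the demo), so the snippet is self-contained and runs in any engine with WebAssembly support:

```javascript
// The bytes of a minimal WebAssembly module exporting addTwo(a, b) -> a + b.
// In the demo these bytes came from fetching add.wasm; inlining them here is
// purely so the snippet stands alone.
const wasmBytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,             // magic + version
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f,       // type: (i32, i32) -> i32
  0x03, 0x02, 0x01, 0x00,                                     // function section
  0x07, 0x0a, 0x01, 0x06, 0x61, 0x64, 0x64, 0x54, 0x77, 0x6f, // export "addTwo"
  0x00, 0x00,
  0x0a, 0x09, 0x01, 0x07, 0x00,                               // code section
  0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b,                         // local.get 0, local.get 1, i32.add, end
]);

async function load() {
  // Compile and instantiate via the WebAssembly JavaScript API.
  const { instance } = await WebAssembly.instantiate(wasmBytes);
  // Exported functions appear on instance.exports and are called
  // like any other JavaScript function, even though they're native code.
  return instance.exports.addTwo(42, 42);
}

load().then(result => console.log(result)); // logs 84, as in the demo
```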
Anyway, the browser panel is still gonna be going on in a little while, so if you've got questions, and why wouldn't you, then you can submit those on the Chromium Dev Slack channel and we will pick those up and, as I said before, we'll try and get through those. How's it going over there, Jake? Yeah, we're going well. Are you? Right, well, we've got, that's mirroring, so we'll make that not mirror. How's it going, Paul, you can keep talking. Yeah, I mean, you've had a fairly rough time recently with your presentations, haven't you? Like at Google I/O, you had the big stage and then everything bailed on you. Amsterdam, I think that bailed on you. Yes. Is it bailing on you now? That's better, look at that. I can see what you can see and that doesn't look good. Are you enjoying the feeling, Paul? How's that working for you? I'm enjoying watching you go. Yeah, that's great. I mean, I feel like you're all set. I am, the sound in this as well. Have you got sound? You're all set? That doesn't feel good either. That didn't go well, did it? No, see, this is the thing. The last two talks, the talk at Google I/O and the one in Amsterdam I mentioned, have both been: just in the middle of the talk, everything has just failed. So we've got these monitor screens here, and when I did this at I/O, I was just going through my talk, everything was fine, and then the one that has the slides on just went off and the one that had my notes on went green and then all of the colors in little squares, and I was like, well, the notes are gone. I'm just going to slowly turn around and hope my slides are there, and there was a big black screen up there. And you can watch the video back and it's like, I just kind of go, oh, I've just got to go to the laptop, right. Speaking of which, though.
And so what, yeah, basically what you're saying is, while we might not realize it, it's a huge goal, a landmark if you will, if he gets through this presentation without a massive disaster, at least technologically. So without further ado, since you seem to be ready. I think so. A massive round of applause for my friend, Jake Archibald. So I realize that you're all probably quite sick of me now, I've kind of been around all day, but this is actually only the second talk I've given at a Chrome Dev Summit, and the other one was the very first Chrome Dev Summit back in 2013 and it went a little bit like this. And the new thing is the service worker. Actually, I think this is the first talk on it. There's nothing to play with in the browser yet. So this was before anyone had ever written a service worker, there was nothing in the browser at all. But now we have like two fully independent implementations, in Chrome and Firefox, and that means we get the other Chromium browsers come along for the ride, things like Opera and Samsung Internet and others. Microsoft, they're working on their implementation now, it's a high priority, and bits and pieces are starting to land in their Insider builds as well. Safari still haven't made a public commitment, but they have been giving implementation feedback on the spec, so they've been looking at it in a lot of detail, and they've been implementing the Fetch API as well, which is a big part of it, it's a prerequisite if you are gonna implement service workers. But thanks to progressive enhancement, we've gone from having nothing in any browser to hundreds of millions of page loads handled by a service worker every day, and that's just in Chrome. And I'm not talking about service workers that are just there for like push messages and things, because there's loads more of those as well. I'm talking about service workers that are actually handling fetch events, like page loads.
So that means that today, which I couldn't back in 2013, I can stand here and talk about actual shipped things, because in 2013 I basically made stuff up for 30 minutes. I mean, this slide in particular is a total work of fiction. It's great. But I don't know. Look how happy I look there, not wearing a suit. Thanks, everyone. Oh, to anyone who's watching this on the video in the future, they voted that I had to wear a suit for this and it's horrible, thank you everyone. Anyway, but this talk, I enjoyed this talk, it was a bit of a laugh, so I'm gonna do it again. Because there's a lot of stuff we're starting to implement, or starting to think about, in service worker land, and I'd like to share it and sort of see what you think about it, which things you want in the browser right now and which things you're not all that bothered about. I probably should have called this talk seven things that don't so much exist right now, but I'm pretty excited about and you might be as well. It's gonna be a journey to the future. This is a real FAQ page for a train company in Wales and it's just this one question that says, can I buy train tickets for future travel? To which their answer is, yes. I've been to Wales before and it definitely feels like time travel. Maybe not forwards. So what have we got coming up? Okay, so we've got streams. I love streams. And there's a lot of streams already in the browser. You can fetch a URL, just by fetching and awaiting it like we saw before. Get a reader for the readable stream, and then we can sort of set up an infinite loop and we can call read on the reader. And this gives us an object back which is very similar to what iterators return. There's two properties, done and value. If done is true, we're done. And otherwise we've got the value. And I think this code could be nicer. I always get very nervous about kind of while-true code. I mean, this works, but I don't know, it makes me nervous.
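That read loop looks something like this. To keep the snippet self-contained, the stream here is built with the ReadableStream constructor rather than coming from a fetch, but the loop over the reader is the same shape:

```javascript
// A small hand-built ReadableStream standing in for a fetched body.
function makeStream() {
  return new ReadableStream({
    start(controller) {
      controller.enqueue('first chunk');
      controller.enqueue('second chunk');
      controller.close();
    },
  });
}

// The while-true read() loop described in the talk.
async function logChunks(stream) {
  const reader = stream.getReader();
  const seen = [];
  while (true) {
    // read() resolves to { done, value }, much like an iterator result
    const { done, value } = await reader.read();
    if (done) break;
    seen.push(value);
  }
  return seen;
}

logChunks(makeStream()).then(chunks => console.log(chunks));
```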
And that brings me to the first future feature that I want to talk about, async iterators. Now, I have learned from my mistakes in 2013. So this is the vagueness graph. And I'd say async iterators are about this vague. But do bear in mind that this graph is itself about this vague. And that's quite vague. I hope that clears everything up. Async iterators, they're being specced right now. They're at stage three of the ECMAScript process. So we can expect some implementations pretty soon. So how do they actually work? Well, instead of this while loop and getting a reader, we can just do this. It's much simpler: for await (const value of stream). And it works just the same way that our while-true loop worked before. And when these land in JavaScript, we'll start to see DOM APIs updated to use them. And so thinking about things like the cache API, you could have an iterator to go over caches, or over items in caches as well. I'd love to see this added to IndexedDB cursors, for going through an entire data set. If you want to know more about async iterators, that is on the TC39 GitHub page. I will tweet out all of the links I show in the talk. But if you can't wait for that, you can play with them today using Babel. This is it running here in the Babel REPL. I'm only showing you this because I have an excuse to say Babel REPL, which is very satisfying. Babel REPL. I really love the way we name things in this industry. We just don't care. I mean, look at this. This is a totally legitimate sentence in our industry. My tiny Yelp clone, built with Redux, is now up on Ember Twiddle. My tiny Yelp clone is now up on Ember Twiddle. I love it. I should have put it on the Babel REPL and completed the set. So when you stream values from fetch, each value is a Uint8Array of bytes. But often you don't want bytes. You want some sort of format, like text. And you can actually do this today using TextDecoder. So I'm gonna create a new TextDecoder there, loop over the stream.
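Until async iteration of streams lands natively, the same for await...of shape can be had today with a small adapter: an async generator wrapped around a stream reader. This is a sketch of the pattern, not a spec'd API:

```javascript
// An async-generator adapter that turns a ReadableStream into an async
// iterable, giving the for await...of loop described above even where
// streams aren't natively async iterable.
async function* streamAsyncIterator(stream) {
  const reader = stream.getReader();
  try {
    while (true) {
      const { done, value } = await reader.read();
      if (done) return;
      yield value;
    }
  } finally {
    reader.releaseLock(); // release the stream if the loop exits early
  }
}

// Consumption now reads like the proposed syntax:
async function logChunks(stream) {
  const seen = [];
  for await (const value of streamAsyncIterator(stream)) {
    seen.push(value);
  }
  return seen;
}
```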
But this time I'm gonna pass every value through decoder.decode. Now instead of logging bytes, it's gonna log strings. But having to call decode on each value, I don't know, it's a bit of a pain. It'd be nice just to have a stream of text. And that's gonna be a lot simpler thanks to the next feature, transform streams. Transform streams, I'd say they're about as vague as async iterators, maybe a little less vague. They're still being specced. There is a sort of JavaScript implementation, a proof of concept. And some implementation is happening in Chrome right now. So before we introduced the decoder, we were streaming stuff from the network straight into our log. Transform streams become this little bit that sits in the middle, that takes the thing in and puts something else out. In terms of code, they look like this. You create a new TransformStream. And then you pass in an object of methods: start, called straight away; transform, and that's called every time a chunk is received; and then flush, which is called when the incoming stream has ended. And what you get back is an object of two properties, which is a readable stream and a writable stream, the input and the output. And this works really well because you can pass just one of those bits to another piece of code without passing on the whole transform stream. So if we wanna create this text decoder as a transform stream, we'd start off by creating a function that's gonna return it, set up our decoder, the internal implementation, and return a fancy new transform stream. We only need the transform function. And in there, just every time we get a chunk, we're going to do controller.enqueue, which is passing a chunk out. And we're gonna call textDecoder.decode and pass that chunk through. So if we go back to our fetch code from before, that was logging out bytes, we can change this sort of round about here, and we take our stream and we pipe it through the decoder we just created.
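That transform might be sketched like this. One detail worth adding beyond the description: passing { stream: true } to decode() so multi-byte characters split across chunk boundaries decode correctly. The byte stream here is a hand-built stand-in for the fetch body in the talk:

```javascript
// A text-decoding transform stream, as sketched in the talk.
function textDecoderTransform() {
  const decoder = new TextDecoder();
  return new TransformStream({
    transform(chunk, controller) {
      // pass each chunk of bytes out as a string; { stream: true } handles
      // multi-byte characters split across chunks
      controller.enqueue(decoder.decode(chunk, { stream: true }));
    },
  });
}

// A stand-in byte stream (in the talk this came from fetch):
const bytes = new ReadableStream({
  start(controller) {
    controller.enqueue(new TextEncoder().encode('Hello, '));
    controller.enqueue(new TextEncoder().encode('streams!'));
    controller.close();
  },
});

// pipeThrough feeds bytes into the transform's writable side and hands
// back its readable side, now a stream of strings.
const textStream = bytes.pipeThrough(textDecoderTransform());
```

Browsers (and Node) did later ship exactly this as the native TextDecoderStream, so in current engines the whole thing collapses to bytes.pipeThrough(new TextDecoderStream()).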
And this, the pipeThrough, connects the readable we have to the writable of the transform, and returns the readable of the transform. So now all the logs will be text at this point. Now, like async iterators, once this lands in the browser, we'll start to see them appear in the DOM as well; sorts of APIs will be changed. Things like compression and decompression. There's a lot of that in the browser already, like gzip, et cetera. Image encoding and decoding, they already exist too. They're just not exposed to developers very well, and they'd be perfect for transform streams. But the first DOM API that is going to become a transform stream, and we've wasted our time by recreating it, is gonna be TextDecoder. So that's gonna be changed, in a backwards-compatible way, to be a native transform stream. And once you do that, you'll get a text stream out of the byte stream. So if you wanna dig into streams a bit more, check out the spec, and that's where you'll find the JavaScript implementation. I'm really excited about streams landing in JavaScript, in case you can't tell. I think it's about time, because streams have been behind the scenes of the browser for like 20 years. If a page is well built, you'll see it rendered gradually. And this is because the browser streams the content from the network and passes it through the HTML parser, which supports streams; it can process it as it's arriving. Wiki Offline, this is a Wikipedia PWA, and it makes good use of this ancient browser feature. On a low-end device, a 3G connection, emulated in Chrome anyway, with an empty cache, the HTML takes around about, you know, just under five seconds to download. All the while that's happening, the parser is processing what it receives, and that means we get a first render in sort of less than half a second. I think Chrome's throttling is actually quite kind here; on a real device that would be a bit later than that due to SSL setup.
So at this point, we're just displaying like the top banner, the title, we haven't got the full page of content yet, but at least the user feels like something's happening. And then at 1.8 seconds, we get the first page of content rendered, and rendering continues as more is received. As an experiment, I also built Wiki Offline as a single-page app, which is a popular pattern with JavaScript frameworks. So here, I'm just going to return this, a little bit of HTML, and then let JavaScript handle the rest. This actually changes the story quite a lot. The HTML fetching and parsing is way quicker because there's not a lot of it. And then here we get the first render, just the shell. So at this point, performance, it's neck and neck. But while this is happening, our JavaScript is downloading, and that needs to execute, and then it fetches the actual content it needs for the page and inserts it. Now we get to the content render almost two seconds later than the server-rendered version. And I'm being kind here, I think. We regularly see single-page apps taking a lot longer than this to get content on screen. It's a little bit of a misleading graph because it looks like the single-page app completes everything a lot sooner. The reason for this is, in the server-rendered version, as it's downloading the HTML, it kind of discovers things. It discovers things like stylesheets, images, fonts, all of that stuff. And it starts going, oh, actually, some of this is important for the top of the page, so I'm going to devote bandwidth to dealing with that. In the single-page app version, none of that can happen until that content is parsed, and that happens right at the end, at that render there. So it loads slower. What can we do about this performance problem? Well, we can bring in a service worker and we can store the actual page in the cache, and so that makes that a little bit shorter, that download time goes away. We can do the same with the script as well.
But the page content still comes from the network. You know, we can't cache all of Wikipedia. The problem we have here is that JavaScript initiates the content download. So we have to wait for the JavaScript to run before we can start fetching the content. We can avoid this using link rel=preload, which we saw earlier. So doing this means we can sort of run those two things in parallel. But so what? After all of that optimization, like service worker, preloading, caching, it was still slower than the empty-cache server render. So just as an update for everyone, the screen with my notes just went off for three seconds. It could be happening again. But we're still slower than the empty-cache render there, and that's because we're spending all this time downloading content and then not doing anything with it until we have all of it. So we've traded this gradual rendering model here for one where we just display nothing until we have everything. And it's just because there's no API that can take a stream of HTML and inject it into the page. And we really need that. I hope we get that one day. But until then, we shouldn't be breaking performance by using a single-page app, then just trying to limit the damage. We should be taking the well-performing server render and then making that even better. And streams, combined with service worker, let us do this. So like we saw before, this streams. The same is true if we put a service worker in the middle. It doesn't really change anything. If the content is coming from a cache, it will also stream, which is still important if it's like a large video file. You know, you still want that to stream from disk. But ideally, we want a mixture. So we want to serve a single HTML response where parts come from the cache, the static parts like the header, but the dynamic parts come from the network. And you can already do this in Chrome. In the service worker fetch event, I'm gonna get three parts of the page.
I'm gonna get the start from the cache, the middle from the network — like an include — and then I'm gonna get the end from the cache as well. Then I'm going to get readers for all of those, because we're gonna process those streams. I'm gonna create my own readable stream, and I'm gonna make a response using it, so I can just pass the readable into new Response and off it goes. Unfortunately, populating that stream is not so easy. It's like this, you know, a big bit of code. I'm not gonna talk through it. It's quite ugly, and it involves passing every chunk through JavaScript and dealing with it and processing all of those streams in order. This is actually gonna get a whole lot easier thanks to identity streams, which is the next of the 2017 features I wanna look at. I would say these are a bit more on the vague side, mostly because the API changed less than two weeks ago. So, you know, things are moving around, but I think it's pretty stable now. To use this in your service worker fetch event, just as before, I'm gonna get those three parts that I'm going to display, but this time I'm going to create an identity stream. An identity stream is just a transform stream that doesn't do any transforming. The input just goes to the output. So I'm gonna respond with the readable part of the transform, but then before I do that, this is how we deal with the writable. I'm just gonna do something asynchronously, so I'm gonna have a self-invoking async function there. For each of the response promises that we have, I'm going to pipe the body to the writable. I'm gonna say preventClose here, which is just saying, hey, once all of this stream has gone into that stream, don't close the other stream, because we've got more to do. We do that for each stream, and then we can close it out. And that's it.
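Here's a runnable sketch of that identity-stream stitching, with the cached and network parts stubbed out as plain Responses — in a real fetch event they'd come from caches.match() and fetch(), and you'd hand the readable to event.respondWith():

```javascript
// Three parts of the page: cached header, network middle, cached footer.
// (Stubbed as Responses here so the sketch runs outside a service worker.)
const parts = [
  new Response('<html><body>'),          // stand-in for the cached start
  new Response('Hello from the network'), // stand-in for the network middle
  new Response('</body></html>'),        // stand-in for the cached end
];

// A TransformStream with no transformer is an identity stream:
// whatever goes into `writable` comes straight out of `readable`.
const { readable, writable } = new TransformStream();

// Respond with `readable` immediately, then pump each part's body into
// `writable` in order. preventClose keeps the stream open between parts.
const done = (async () => {
  for (const part of parts) {
    await part.body.pipeTo(writable, { preventClose: true });
  }
  await writable.close();
})();

// In a service worker you would do: event.respondWith(new Response(readable));
```

Because both ends of the pipe are browser internals (network/cache on one side, the HTML parser on the other), the chunks never have to surface into JavaScript.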
And not only is this code simpler, it's also faster, because we're no longer passing every chunk through JavaScript. The browser can go, oh, hang on — the stream that we're receiving is from behind the scenes, it's either coming from the network or the cache. And the thing receiving the stream is the HTML parser, which is also behind the scenes. So it can just do the whole thing in the background and save a whole lot of processing time. So now we're getting the best of both. We're responding quickly from the cache but streaming the rest of the data from the network. And the result of that — so here's where we were before — we can optimize our server rendered version with the service worker and streams. The parse starts earlier because it receives that big lump of content right at the start, straight from the cache. And this means our first paint happens much sooner, but the important bit is the content happens way sooner. So we get that quick offline-first cached render, but still the benefit of the streaming render for the uncached content. So it's now over a second quicker for content than the hacky single page app. And with a model like this, I'm actually kind of happy with full page reloads when it comes to navigating around. So on the left here, I have a single page app. So every time I click a link, JavaScript is going to fetch the data and put it on the page. On the right, it's just a webpage. You click a link, it's going to reload and it's going to load that data. So I set them off at the same time. You can see that with all the complexity I added with making this a single page app and using pushState, et cetera, it's still slower than full page reloads, especially when they're supercharged by a streaming service worker. Your mileage may vary — it can depend on the amount of content you've got — but I'm not making this up. Although this is a demo, I actually got hit by a real-world case of this only a couple of days ago.
On Monday, I was at Heathrow Airport, browsing GitHub on airport Wi-Fi, which is not so great. Now, GitHub will use pushState and it will use JavaScript for all of its navigations — unless you're in a new tab, then it will do a server render. So what I'm going to do here is I'm going to click a link on the left here, and then I'm going to paste the same link into an empty tab. So here I go. Click the link, paste it, off we go. And we can see that the server render wins by a country mile. It's way faster, and this is not throttled or anything — well, not artificially, this is just airport Wi-Fi. And this is because on the left, it has to download everything before it can show anything. At GitHub here, they've written a lot of JavaScript to make this quite slow. Unfortunately, all too often I hear people say that a progressive web app must be a single-page app, and I'm not so sure — you might not need a single-page app. A single-page app can end up being a lot of work and slower. There's a lot of cargo culting around single-page apps, and I know what happens when you just sort of copy someone else without really understanding the situation. You see, I went out for a meal with Paul Irish. Yeah, that's right, a meal with Paul Irish. Anyway, I watched Paul taste some wine. This is amazing. He swilled it around in the glass and he took this huge sniff, like a huge sniff, and I thought, wow, Paul is so cool. Like, he really knows what he's doing. This is amazing. And anyway, a couple of months later, I was back in England, out with some friends, and we were at a restaurant and we had some wine, and I thought, I've got this. I know what to do here. I've seen this done. So I took the wine, I swilled it, and I took a big old sniff. But I took the wine glass just a little bit too far and dipped my nose in it. I don't know if you've ever snorted wine before. It is not pleasant.
I just kind of sneezed it out everywhere, and my friends were just staring at me, covered in a wine mist, like, Jake, why didn't you just drink it with your mouth? It's so much easier. The moral of the story is, you might not need a single-page app. A server render might be enough, especially when you've involved a service worker. And of course, if you're using a client-side framework, server rendering is an absolute must. I mean, React, Ember, Angular 2, Web Components — they all let you get something on screen in a streaming manner before JavaScript fetches. Just make sure you're not displaying things that should be interactive but aren't. So things are looking pretty good. However, Facebook had been prototyping with this stream stuff and identified a problem. If you're serving from a service worker, there is the startup time of the service worker to consider. And that's zero if it's already running, but the service worker shuts down if it hasn't done anything for like 30 seconds, to preserve memory. Depending on the user's device or other things going on, that startup can add, in the worst cases, a few hundred milliseconds. And that delays the content fetch just by a little bit. And we are looking to reduce that startup time, but it's always going to be more than zero if your service worker isn't already running. Are we just going to live with that? Not over my tiny Yelp clone, we're not! So we're going to introduce navigation preload. Now, I would say this is a little higher on the vagueness scale. We have an implementation in progress, but the spec is still kind of moving around a little bit. So take this with a grain of salt. Our goal here is to start the HTML fetch in parallel with the service worker startup, which you can enable just using this one line here. And you can do that whenever you want, but the service worker activate event is a pretty good place to do it.
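As a sketch, that one line plus a feature detect might look like this — the API was still settling at the time, so take the names with that grain of salt:

```javascript
// Service worker: turn on navigation preload once we're activated, so the
// browser starts the navigation's network request while we're booting up.
addEventListener('activate', (event) => {
  event.waitUntil(async function () {
    // Feature-detect, since the API may not exist in every browser yet.
    if (self.registration.navigationPreload) {
      await self.registration.navigationPreload.enable();
    }
  }());
});
```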
And this means for navigation requests, the browser will make the request to the network while the service worker is booting up. That response appears on the fetch event as preloadResponse. That's a promise, and it will resolve with undefined if it's not a navigation or if the feature isn't enabled. So it's always worth checking it — if it's falsy, just do a normal fetch if that's what you want. Now, what you do with this is up to you. You could respond from the cache and fall back to the network. But given that this preload can happen pretty early, it becomes realistic that the network may beat the cache API. So why not race the two of them and see which one comes back first? I'm going to pick up on a point Soma made yesterday, because he was very right that Promise.race is not your friend for doing this at all. When you give Promise.race an array of promises, it takes the result of whichever one settles first, not whichever one succeeds first. Like, take this race. I'd say this race is in progress, because no one has won yet. Promise.race, on the other hand, would say: she fell over, don't care about anything else, the whole race was a failure because of her. Promise.race is a dick. So you will need to write your own racing function here. You want it to resolve with the value of the first promise that successfully resolves. It's a few lines, but that's what you need. But what about our streaming code from before? A straight-up preload wouldn't work here, because we're not fetching the same thing that would be fetched if the service worker wasn't there — we just want the middle of the page, just that middle bit, because we've already got the top and the bottom in the cache. Thankfully, this is not a problem, because those preload requests are sent with a special header, this header here.
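A minimal version of that hand-rolled racing function could look like this — promiseFirst is a made-up name, and modern JavaScript's Promise.any now does the same job:

```javascript
// Resolve with the value of the first promise to *fulfill*; only reject
// if every promise rejects. Unlike Promise.race, a promise that rejects
// early doesn't spoil the whole race.
function promiseFirst(promises) {
  return new Promise((resolve, reject) => {
    let pending = promises.length;
    promises.forEach((p) => {
      Promise.resolve(p).then(resolve, () => {
        if (--pending === 0) reject(new Error('All promises rejected'));
      });
    });
  });
}

// In a fetch event, you could race the preload against the cache like:
// event.respondWith(promiseFirst([
//   caches.match(event.request).then(r => r || Promise.reject()),
//   event.preloadResponse.then(r => r || Promise.reject()),
// ]));
```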
And if your server sees that, you can go, oh, OK, I'm just going to serve the middle bit, because this is going to go through the service worker, and it knows how to deal with it. So back in our code, we can deal with that — just right here, use the preload response if it's there, otherwise falling back to fetch. And that means for navigation requests, that fetch will happen at the same time as the service worker's booting up. And this is something that we can improve on even more. With this feature, we can potentially look at starting it as the browser is booting up, which is particularly good for progressive web apps added to the home screen. And we hope we can get there as well — as soon as the user presses the icon, just as the browser is booting up, we can have that request started nice and early. If you want to dig into this a little bit more, there's a huge thread on GitHub about it. I'll post the links later on. What else have we got? Ah. So the current way the service worker works is that requests from your page go via your service worker. And that happens even if the request is to a different origin, like a font service. Your service worker decides what to do. And this is by design, because it means you can cache things like images and fonts, even if the destination server hasn't even thought about how that would work. The downside is that many sites may end up with similar logic for font caching or analytics, and can end up storing the same thing independently. In the future, we could look at ways of deduplicating that storage inside the browser, but the logic would still be duplicated. So to the rescue here comes foreign fetch. I would say this is a little vague still, only because I'm pretty certain parts of this API are going to change. But there is a version of it in Chrome Canary already, which you can actually test with real users. I'll put a link up on how to do that in a minute. So what is it?
With foreign fetch, the font service has its own service worker in storage. And if you make a request to the font service, it first goes to your service worker — you get the first shout at what to do. But if you send the request on to the font service, it goes to its service worker, and they get to decide what to do, which could be to get the stuff out of the cache and send it back. So that means now, if another website makes the same request to the font service, it can get that caching benefit — the same resource that the font service has already cached. So if you wanted to do this — if you wanted to be the font service and make this work — in your service worker you listen for this new event, and it's triggered when another origin requests something from your origin. And from there on it's kind of familiar, you know: respond with what you're going to respond with, however you want. Let's look to see if there's something in the cache, otherwise fall back to the network, and then return the response. And this is where things get a little bit different. Rather than just returning the response, or a promise for the response, you return an object which has a response property. Now when you do this, the requesting page will not have scripting access to the content of that response. It won't be able to get the text of it, but it will be able to include it as a script tag or as an image element or something like that — the same way CORS works today. This is like a no-cors response. It just won't be able to get at the text or the pixel data of the image. If you want the other origin to have that access, you add the origin property and you set it to the origin you want to have access: if I have visibility of this resource, I want them to have visibility of it as well — and you need to think carefully about whether that's what you actually want. Otherwise, you can set up some kind of whitelist or something. You could even get this information from IndexedDB.
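Putting those pieces together, a hypothetical font-service worker might look like the sketch below. Foreign fetch never made it past the origin-trial stage, so treat the event name and the {response, origin} shape as the talk-era proposal, not a current API:

```javascript
// Hypothetical service worker on the font service's origin.
self.addEventListener('foreignfetch', (event) => {
  event.respondWith(
    caches.match(event.request)
      .then((cached) => cached || fetch(event.request))
      .then((response) => ({
        response,
        // Omitting `origin` gives the requester an opaque, no-cors-style
        // response; setting it grants that origin scripting access.
        origin: event.origin,
      }))
  );
});
```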
It's your code — you can do what you want. So this is kind of a representation of CORS, but with JavaScript. So you can do a lot of different things. We talked about font and image APIs and analytics, but you can use this to create whole REST-like APIs that work entirely offline. One detail that's missing, though: how do we get this service worker installed on the user's machine? Because if it's like a REST API, or a font service where the fonts come from, the user's very rarely going to actually go there — and that's when the service worker would be installed. So to fix this, when you actually serve a resource to a page, you can also serve it with a special header which tells the browser about the service worker you have, and it will then go and install it. If you're keen on foreign fetch, there's an article by Jeff, who was speaking earlier — he covers it, and he also covers how you can actually use this on websites today as part of an origin trial. Oh yeah, earlier on, background sync was mentioned, which is a feature we shipped many months ago. It allows you to defer one-off tasks until the user regains connectivity. So say the user updated some setting in their profile or sent a chat message when they had no connection — background sync lets you queue that work, and now the user can navigate away, they can close the browser, and later, once they have connectivity, the service worker can wake up and send that stuff to the server. And this has shipped in Chrome — it's done — and it's great for small bits of data, like profile updates, sending a chat message, that kind of thing. The problem is that while the sync happens, the service worker has to be awake the whole time, and that's bad for privacy and bad for battery. So we're not going to do that — if a sync runs for too long, we just kill the process. But for large uploads and downloads, we're working on something else: background fetch. Now, it's quite early days for this one, so it's pretty vague.
Vaguer than the vague graph itself, so it's quite vague. All we have right now is a kind of API sketch, and we're starting to explore the issues, and then we're going to work on it as a cross-browser effort. So here's the idea. From your page or your service worker — whichever — you get hold of the registration and then call backgroundFetch.fetch, give it an ID, and then give it some requests. So for a movie, this could include the video resource, but also some metadata or something, a poster image or whatever. And that's it. That fetch will now happen in the background. Once the fetch completes, you get an event for it, and that will give you information about it. You can have a look at what the tag is. I'm going to actually cache this stuff, so I'm going to open the cache, and then event.fetches will be a JavaScript map of the requests and the responses that arrived, so you can do what you want with that. Of course, if you're uploading photos, you don't want to cache the results — you'll maybe just show a notification — so you've got the freedom there. And during the fetch, we can show the progress of the download, and because of this high visibility, and it being easily cancelable, we're hoping that we can deliver this feature without any sort of permission prompts. We just need to make sure the privacy aspect is correct and make sure it isn't too abusable. Is this something you're interested in? You can take part on GitHub. I will move that repo somewhere a little bit more neutral, like the WICG, the standards thing.
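Based purely on that early sketch — the API changed before anything shipped, so every name, ID, and URL here is provisional — the idea looks something like this:

```javascript
// From a page or service worker: kick off a background fetch for a movie.
// (IDs and URLs are placeholders.)
registration.backgroundFetch.fetch('movie-123', [
  '/movies/123.mp4',
  '/movies/123-metadata.json',
  '/movies/123-poster.jpg',
]);

// In the service worker: stash the results once the fetch completes.
self.addEventListener('backgroundfetched', (event) => {
  event.waitUntil(
    caches.open('movies').then((cache) => {
      // event.fetches: a map of the requests to the responses that arrived.
      for (const [request, response] of event.fetches) {
        cache.put(request, response);
      }
    })
  );
});
```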
Oh yeah, earlier on I showed you this thing here, the full page navigations being significantly faster. But I know why people go down the SPA route: because they want to do a nice transition from one state to the other. It makes me sad, because I've seen developers introduce large frameworks just for basic transitions, which is a little bit of a shame — especially having to reinvent the entire navigation stack just because you want a nice fade from one thing to another. And that's why we're going to take another look at transitions. I mentioned them yesterday. I really want to have a good plan for this in 2017, but right now the idea is very, very vague. In fact, we have to scale the whole graph down just to sort of see the top of it, so take this with a big bag of salt. And it's not the first time we've looked into this, either. In Internet Explorer 5, you could use this meta tag to specify a kind of enter or exit transition from a set of configurable presets. So with this page in Internet Explorer 5, the user would click the link, and Internet Explorer would crash, is what it usually did — that was my experience, anyway. And at the 2014 Chrome Dev Summit we pitched this transitions idea, we showed demos — it didn't really pan out. Mozilla have a proposal as well, but they're both solutions where I can only kind of declaratively say what should happen, and I don't think they're expressive enough. Stuff like this should be possible, and that would be a full page reload, utilising the full navigation stack of the browser and the streaming HTML parser — because when you do this, you get the back and forward buttons working for free. If we actually take a closer look at this transition, the first part we can do without any additional data. We already have the image, we know where it's going; if we have that title already stored, we can do that bit, and we can improve the perception of performance by doing this bit while the actual fetch is happening, and then we can bring in the content once it arrives. And if it arrives while we're transitioning, we can
bring it in earlier and sort of make it part of that sliding transition. The transition out is a little bit different, and we actually need more data to do that transition, because we need to know where we're sending the content back to, which depends on layout and the scroll position. I really think we need an API that allows this — something like a navigate event that fires when this page is going to be changed, and you can say, hey, I'm about to do a transition, so keep this document around for a bit. And at this point you can start doing the very first part of the transition, like getting everything into place where you think things are going to be. You get hold of the new window object, which will represent the page that's coming in — and that will resolve with undefined for a cross-origin navigation. I would like us to look at cross-origin navigations as well, but they have to be pretty restricted for security reasons. But once you've got this new window, you've got scripting access to it, and you can start doing what you want. By default, I think the new window will draw on top, but the transparent parts will show the page underneath. So here you can start looking at where elements are, what the scroll position is. Here I'm just going to set the opacity of the new document to zero, and then fade that document in. So that's a simple fading animation. This is a simple example, but it's as complex as you want to make it. So with this, you'll be able to do these expressive animations but retain all of the features the browser gives you for free in navigations. If that's interesting to you, the details are on GitHub; I intend to move this repo somewhere a little bit more neutral. The term progressive web app is just over a year old, but the work has been happening for years on this stuff, and we're not done. I think you've heard over the past couple of days how much we love the web and where we want it to go. But now it is over to you. We want your feedback on this stuff, be it in GitHub at the very early
stages, or playing with this stuff in Chrome Canary. So come and talk to us about it. I can't put it better than this shop window sign: "We're not till not happy..." wait... "We're not happy till you're not happy"? No, that's not it. Till... oh no, I don't know. Anyway, thank you very much. Well done, buddy. I've got to say, even though I knew it was you, I was like, that butler is really well informed. I was like, wow, he's really excited about the future of the web. You feel better now? Apart from the three second outage, did you have anything else go? It was just enough to give me all of the nerves. Good, excellent, excellent. Well, it's time for the big web quiz to come back, but obviously we're just waiting for AV. I was excited about the iTunes terms and conditions that just appeared. That was enjoyable. That looks good — might as well press F, there we go — just say don't ask again, that will do it. Well done. These are not the final questions of the day, are they? No, we've got a little bit more. There's still time to win. So once everyone gets logged in, we can do a couple of questions. Are you feeling good? Oh yeah, my wifi is working. Excellent. You ready?
Which one would you want? I'll do the CORS one. Do you want that one first? Here we go then. This should be appearing at some point... never... oh, here we go. Of course — in a request to another origin, what can you set... to the content type... hang on, "what can you set to the content type editor"? That's a bit bizarre, mate. What can you set the Content-Type header to without incurring a CORS preflight? Without incurring a CORS preflight: multipart/form-data, application/xml, application/x-www-form-urlencoded — which is of course very long, I can never remember that. I'm feeling it's like the Dreamweaver default, actually, when you made a form. Interesting. Now, it feels like the audience... oh, they're very confident on the 67% there. Oh yeah, two thirds. Oh no. So what are we seeing here? So we're thinking probably not application/xml; they're thinking, you know, pretty solid, not that comfortable with multipart/form-data, pretty confident with text/plain. Interesting. Let's find out which ones are correct. Interesting. And the reason for this is: all of those three you can do with an HTML form. You can set the content type to one of those, so it's kind of like, hey, you can do it already, so you don't need a preflight for it. application/xml — well, it feels wrong to say XML is new, but for this it is. Right, should we do another one? Yep, we got time. Here we go. This is a fun one. What is the prototype of this custom element before its class is registered? Is it HTMLElement, HTMLUnknownElement, or HTMLPendingElement? I've got a feeling this audience really knows this one. Well, there's definitely an unpopular answer. Yeah, but the other two... so it's basically pending element pretty much disregarded as not a thing, but unknown element versus HTML element. The correct answer is: it's unknown element, until such time as you tell the browser — and it used to be document.registerElement, it's now window.customElements.define for v1. Until that point it's
unknown, and then it becomes whatever it's supposed to be. Right, I don't know who's on next, so you'll have to wait and see... if you remember... Patrick! It is in fact Patrick Kettner, coming to talk, I believe, about keeping the progressive in progressive web apps. Ladies and gentlemen, Patrick Kettner. Hey everybody, how's it going? Good? Happy to be almost done, or happy-sad it's over? Anyway, I'm here today to talk to you about keeping the progressive in progressive web apps. My name is Patrick Kettner — unlike what it says on my badge; apparently they thought the Chrome team needed another Paul around. But my badge is right, I do work at Microsoft, despite what my t-shirt shows there. I'm just a conundrum of confusion. It's just a photo I take every year — for my birthday I get these pictures of cakes every year. My incredible partner Katrina — hi, if you're watching — is usually here with me. She's been spending a lot more time at home because we just had our first child. Holden was born — thank you very much — he's worth the applause. I know I'm pandering a little bit, but he's a really, really cool baby. It's funny — when you first have a kid, all your coworkers, all your friends, all your family give you the same collection of advice. They all say, enjoy your life while you still have one, or, you know, have fun eating while you still have time to do it, or have fun sleeping while you still are able to. And it's funny, because with Holden none of that really applied. He's been a super easy baby. He's really cute all the time, and he's just happy — literally happy all the time. He's fine now, but he got sick and had to go to the hospital, and he was adorable in the hospital. This is him with a fever, sick and everything — he's a crazy happy baby, he's cute all the time. But when we go on drives, he hates getting into the car seat. Not so much the drive itself, but being strapped in. I don't know, he doesn't like not being able to move or something, and he just freaks out. And that's fine. When
you first bring him home — we literally live a block away from the hospital he was born in, and so we walked home; like, we didn't have to drive. It took us a while to discover that he hated this, and, you know, it's fine. And then after a while — I love to travel, and Katrina puts up with me liking to travel — we were sitting at home getting a little cabin fever, and so we thought we would pack up and go visit Whistler, which is outside of Vancouver. I work at Microsoft, so I live in Washington, up by Seattle. And, you know, knowing that he hated being in the car seat, knowing he'd have to go into the car seat, and being a nerd, I do something that I think most of us would do, and I go and try to find something online to make it better. And so I look up how can I calm a baby, and look at all these different ways that people suggest — it's my first kid — and I found this really neat video. The frame rate is going crazy — it's usually slower than this — but it's basically this interesting thing: all it is is a video of dancing, kind of, dots. And it snaps him out of it, like that, every single time. And I'm not quite sure what the science behind it is, but effectively, just the combination of the contrast and the motion — instantly he forgets why he's crying, and he's just like, oh crap, there's all kinds of stuff. And, you know, he just keeps going. Literally works within seconds, every single time. And so I'm like, oh sweet, great, got it. Put him in the car, freaks out, show him the video on my phone, two seconds later he's fine, passed out, and we go get in the car, head up to Vancouver. It's a beautiful drive — if you've never been, the Pacific Northwest is where it's at. We have a wonderful day in the Whistler area — it's a great place to visit. We go and see the old Olympic stadium; that's him, still sleeping after getting there. Those dots really work. And, you know, we're there for a few hours, ready to go, and at the car door I strap him in, and he wakes up as soon as I put him in, obviously, and
he gets really upset as soon as I do it. And so I'm like, ah, got you this time. Pull out my phone — my phone carrier, I have like free international roaming — I hit play, and I get this... and I get this... and a baby's crying, and my partner Katrina is starting to get upset, because it's been a good two minutes now where I'm just standing at the phone, and I'm like, it's coming, don't worry. And I wait, and I wait, and it's been five minutes — literally five minutes standing at this screen with a baby screaming, getting more and more upset, and Katrina getting more and more upset, and me getting more and more frustrated. Because, I don't know about you guys, but as a person who works with technology, especially like web-related stuff, when the web fails me, I take it super personally. I'm like, I could have done this better; something you did was stupid. And, you know, eventually we're just like, you know, whatever, I'm just going to pack up, and he'll, I don't know, cry, and he'll be fine. And so we just hit the road. I'm just getting more and more frustrated, and I'm like, 2G just completely ruined my day. We had a nice time, but my phone is just so frustrating. And then, you know, eventually he cried himself to sleep, and I was driving, and I'm like, you know, it wasn't the 2G that ruined my day, because that could happen anywhere — you know, like, YouTube can't send everything in like one bit or whatever. It was really the fact that it was 45 megabytes of this video, and I was super lazy. I went out, I found a solution, and it worked really well on my machine, on my incredibly fast home internet, and it functioned, and I was happy with that — without thinking about the fact that it's shit for a ton of people out there. It really, really sucks when you're in Canada on 2G and you have no way to load this. Sure, I could have pre-downloaded — I should have done a lot of things to make it better as an end user — but I couldn't have expected
that to happen. And so eventually we get back over the border, eventually he is able to calm down again, we get back home, and, you know, I sit down, still frustrated, having been disappointed in myself for being a lazy engineer. And so I do what I feel like a lot of people do: I sit down, I open my text editor, and try to make a better solution — because we're JavaScript developers, so we re-implement stuff. And so I open up my text editor, I open up Vim, and I'm ready to go — and then I realize that I'm terrible at anything visual. You know, I'm like the maintainer for Modernizr, I can JavaScript around tons of people all the time, but anything visual-related I'm just horrible at. So I'm like, it's fine, it's just a bunch of dots dancing... and then I have no idea how to make that happen. Like, I can put a dot on the screen, I can do a basic canvas or something like that, but I don't know how to move stuff. And so I sit, and I wait, and I wait — I have no idea what to do. And a wise man once said: when you don't have a better idea, you can buy a novelty domain. That's exactly what I did, and I went out and I bought Hush Little Baby. And in the time that it took me to come up with this hilarious domain name, I was lucky enough to put out a tweet saying, like, can anyone explain how to do this? And Sara Soueidan — I'm sorry for pronouncing it wrong, Sara, I'm trying hard though — said, like, oh, you should check out GreenSock. And if you guys don't know, GreenSock is a phenomenal animation library. It's kind of like a jQuery for animation — but ignore the jQuery animate part — it's really, really good, it's super performant, it's super fast, it works really great for JavaScript-type manipulations. And it worked. And I was like, holy crap — I got this whole thing working literally within like an hour. It was less than 6K for the initial rendering and the initial animation. I lazy-load in like an MP3 file, so it gets, you know, up to like two megs or whatever, but that's fine, and it
works incredibly well. I did a service worker, obviously, because I didn't ever want to be stuck in Canada with a crying baby again. And it's a super simple service worker; this is one of the first ones I ever shipped in a personal project, and I was happy with how little code it was. I do the obvious feature detection for service worker, and then this was basically the entirety of it: just a simple pre-cache step. On install we grab that event, we have our array of URLs of files that we will definitely always have, we open up the cache we have for it, and once that resolves we prefetch every single one of those with a simple request and put it in our cache. And then we also have a very simple fetch step, so that in case I add something to the domain later and don't add it to the prefetch list, it's handled there as well: we check to see if the request is in the cache, and if it is we respond with it, and if it's not we fire off another fetch. That's literally the entire content of our service worker, just transpiled out of ES6. And it was great, we had all this super smooth working stuff. I was super excited, because Holden got upset when he ran out of a bottle before he finished it, because he eats a lot, and I picked up my phone and boom, he stopped, and I felt like a super dad. I was so excited, I ran over and showed Katrina, and she was excited, because no one likes a baby crying; you want him to feel better. So she takes him into his room later. We're in an old building, for America anyway; I know, I have British friends who make fun of that, but it's like a hundred years old, which is ancient for Seattle, and it has thick brick walls and everything, so in his bedroom there's terrible cell phone reception.
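The service worker he walks through can be sketched roughly like this; the cache name and URL list here are hypothetical stand-ins for the real site's assets:

```javascript
// sw.js -- a minimal pre-cache service worker (cache name and URLs are
// illustrative). Guarded so the snippet is inert outside a worker scope.
const CACHE_NAME = 'hush-little-baby-v1';
const PRECACHE_URLS = ['/', '/index.html', '/app.js', '/style.css'];

if (typeof self !== 'undefined' && typeof self.addEventListener === 'function') {
  self.addEventListener('install', (event) => {
    // Pre-cache step: open the cache, then prefetch every known asset.
    event.waitUntil(
      caches.open(CACHE_NAME).then((cache) => cache.addAll(PRECACHE_URLS))
    );
  });

  self.addEventListener('fetch', (event) => {
    // Respond from the cache when we can; otherwise fall through to the
    // network, so anything added to the site later still works.
    event.respondWith(
      caches.match(event.request).then((hit) => hit || fetch(event.request))
    );
  });
}
```

On the page itself, all that's needed is the feature detection he mentions: `if ('serviceWorker' in navigator) navigator.serviceWorker.register('/sw.js');`.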
So he starts crying, and as she's putting him into bed she pulls out her iPhone, goes to open up the app, and gets this. Once again I felt like an absolutely terrible problem solver; this same problem happened just because I had made it work perfectly on my phone, for my one situation. The problem is that it had no iOS support, because, as we all know, there's no service worker support in iOS. So I had to sit back and think about what to do, and a wise man once said: when you don't have a better idea, try out AppCache. And yes, that AppCache. It's terrible. It's awful. I'm not suggesting that people actually go try it out. What I am suggesting is that you don't immediately discount technology that you hear is terrible. There is a lot of stuff that browsers have been shipping for a very long time that might be able to meet your needs. I work at Microsoft; I hear about old crap all the time, stuff that we were doing in IE5 that is just now coming back, like Jake mentioned with navigation transitions. There's all kinds of stuff that we did a long time ago that might still be useful occasionally. It's worth checking out; at the very least, it's worth knowing what we did wrong on these old specs like ApplicationCache. And in this case, oh yeah, it's actually so bad that it's being removed from HTML. I don't know how many things you know of that are actively being removed from the HTML spec, but this is one of them; we all agree that it's terrible and needs to be purged from our collective memory. But AppCache is incredibly well supported across the board. As long as you're not developing for, like, Opera Mini or, god forbid, old IE, you are pretty much guaranteed to have AppCache support. And for a site as simple as the one I made, which is obviously much less complex than most of the things you would ever make, it's really straightforward and actually fits all of our use cases. So we just take our feature detect from before and we add another
little feature detect. A lot of people have actually never used ApplicationCache, so you might not be aware that in order to use it, it's an HTML attribute that lives on the html element, and it has to be there at the very top of the page at load time. You can't dynamically inject it; again, it's a horrible API, it's a giant douchebag, the attribute has to be there at parse time. So in order to get it in after the fact, you have to bend over backwards and do it this way: inside of this check, what I end up doing is creating an iframe, hiding it, and giving it the source attribute of a specific page that does nothing but load an AppCache; it basically just has that attribute. Then I append that at the bottom of the page. That's the entire contents of that HTML document: it's super small, does almost nothing, but it gets that AppCache in there and it starts downloading all the assets. So I took the assets from my prefetch cache and, through a little webpack transform, turned them into a cache manifest. Cache manifest is a really simple file format; that's one of the reasons it's terrible, because it's super simple if you do this one thing, and makes sense for nothing else. So we have CACHE MANIFEST, then the files that we had before, and then we also have to have this NETWORK star at the bottom. The reason being that AppCache automatically assumes it knows everything, and any URL that is not on the list will automatically 404, even if you're online. It's a terrible API, it's really, really crappy, but if you know these edges and you know how to get around the problems, it can be useful. Check out crappy APIs, because there might be a little glimmer of something useful inside. It is a douchebag, don't forget that, but check it out.
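The manifest file he describes is plain text, referenced from the markup as `<html manifest="app.appcache">`. A version with the catch-all network section might look like this (file names are illustrative):

```
CACHE MANIFEST
# v1 - change this comment to force a re-download

CACHE:
/index.html
/app.js
/style.css
/lullaby.mp3

NETWORK:
*
```

Without the `NETWORK: *` section, any URL not listed above would 404 even while online, which is exactly the edge he warns about.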
Don't immediately assume that a technology, whether it's something built into the browser or a library or anything else, is terrible just because you heard it's terrible. Find out for yourself and learn. In fact, while I'm on terrible ideas, I was remembering when I very first started in websites. Does anybody remember DHTML? Anybody ever have that on the resume? Yes, thank you. And did anybody ever write an HTA, a hypertext application? Thank you, thank you. So I made this into a hypertext application as well, because why not. An HTA, or hypertext application, is a proprietary offline application format shipped in Microsoft Internet Explorer 5. It's terrible. But it exists, and I figured I was already doing stupid shit anyway, so I wanted to have some fun. So after all those feature-detect checks, I check to see if it's old IE, and if you go to this website, hush little baby, in IE, you actually get this huge pop-up, because there's no reason to be using old IE unless you're checking out the fact that I was stupid enough to add an HTA. And if you download it you get this experience: this nice little one-window pop-up, with this little icon in the corner, and that's basically the only fancy thing about it. It has a Flash audio player, because if we fall back from the web audio player we load a Flash player, so you have the full experience again. Just because it's fun, but it's stupid. The whole point I'm making is that PWAs, like Jake mentioned, are only new as a collective concept. The collective concept of a PWA is new; Frances Berriman came up with it only a year ago, back in like June of last year. But the tools behind it, the fundamental pieces of PWAs, have existed for a very, very long time. The offline application is not a new concept. There's a lot of pieces of the web, and a lot of pieces of PWAs, that aren't new concepts; they're just finally good versions of concepts that we've tried to do multiple times. And
there's a lot of things you can do today to start implementing that. One of the things I hear regularly from people is that they're really excited about PWAs, but they can't ship them today because they can't do a full site rearchitecture, they can't implement all these different features at once. And that's just not true. You don't have to wait to do this stuff; you can start implementing pieces today. Sorry, I forgot my thought-leader line: it's not just a radical new way to create websites, it can also be a radical new way to update websites. You can do stuff today on the website that already exists. For example, the web app manifest, in case you haven't ever seen the spec, is a very simple document. All it is is a JSON object that exists as a file on your site. You have stuff like your language attribute, the text direction, the name of your document, a description so it can show up in something like an application manager page, an array of icons so that you can support a bunch of different devices, even stuff like orientation (you can finally lock your screen to landscape), or a theme color, so you can affect, say, the title bar; on Windows we're looking into what we can do with that. You can even specify related applications: if for some reason your company thinks a native application is better, but they're still cool enough to do a PWA, you can say, if the user has this native application installed, fall back to it; and you can toggle that with the prefer_related_applications flag. It's super simple to do this, and the great thing about it is that there's no JavaScript API to this concept. You have that one JSON document, you add it to your page, and you are on your way to having a PWA. And actually, sorry, one of the cool things I wanted to talk about is that we are really, really excited about PWAs on Edge.
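A minimal manifest along the lines he lists might look like this (every value here is illustrative):

```json
{
  "lang": "en",
  "dir": "ltr",
  "name": "Hush Little Baby",
  "description": "Soothing noise for a crying baby",
  "icons": [
    { "src": "/icon-192.png", "sizes": "192x192", "type": "image/png" },
    { "src": "/icon-512.png", "sizes": "512x512", "type": "image/png" }
  ],
  "orientation": "landscape",
  "theme_color": "#2b5797",
  "prefer_related_applications": false,
  "related_applications": []
}
```

It's linked from the page with a single `<link rel="manifest" href="/manifest.json">` tag; no JavaScript involved.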
One thing the Bing team is actually going to be doing is crawling the web to look for sites that have web manifests, and when they find them, we believe we're going to be ingesting them into the Windows Store automatically for you. You can easily opt out if you don't want to be part of it, but what will happen is that you automatically get introduced to millions and millions of Windows users; you have this ability to reach out to them. And once you're in our Windows Store, just by shipping this web manifest, you can start doing feature detections for the Windows WinRT APIs: basically any Windows API that you have access to, you have full access to in JavaScript. We started doing this work in Windows 8, and it's available there. So you can do stuff like integrate with the system calendar, just by doing a simple feature detection and checking to see if you're on Windows, and if you are, showing the full calendar. You can integrate with Cortana, you can integrate with a lot of low-level things, and have a truly native application feel in website code that you ship, automatically, just by adding a manifest. It's a really cool thing to check out. And to make it even simpler, I was sitting and thinking about how a lot of the information in that JSON object is already in your application, in all these meta tags that have shown up in the tag soup at the top of our pages for a long time now. So I created this Node module called manifestation. It takes a URL and gives you a callback that generates the most fully featured web manifest object it can create. For example, for language detection, it loads up the whole page using Cheerio, which is like a server-side jQuery, and CLD, the Compact Language Detector, a binary module that works very similarly to how Google
Translate works: it can actually look at the context of words and how they're used, and tell the difference between Portuguese and Spanish, or something like that. So we go ahead and load up the HTML of your page and check for the lang attribute. If that doesn't exist, we go and check elsewhere, because you might be making an HTA into a PWA for some reason. Then after that we check the DC.language attribute, Dublin Core, if we have any librarians in the house; it's a really old meta tag that no one uses, but if it does exist, we want to use it. And after that, we fall back to CLD, where we actually look at every single word on your page: it parses the HTML and tries to automatically detect the document's language, just like Chrome does when you open up a foreign-language page and it asks, hey, do you want to translate this from Portuguese? It's exactly the same type of concept. After all that, we send back the string of what that language might be. And that's just one of the detections we do; there's one for every single field in the web manifest. This is the top of a roughly 200-line file for images, where we scrape every single image that's declared as an icon, download it, check the size, ensure the MIME type is correct, ensure the extension is correct, and generate the full icon array for you automatically. In fact, it's so easy to do this that I made a website so all y'all can do it literally right now. You can go to web manifest, this website; it exists, it's this cool domain, and again, I love stupid domains. You just put in the URL, hit submit, and it will download a web manifest file automatically generated from the code already on your site. You don't have to do anything: you download it, you add it with that link tag, and it works. It's great. You're at the beginning of having a PWA, and you can be ingested into the Bing store soon.
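The fallback chain he walks through could be sketched like this; the function and field names are hypothetical, not manifestation's actual API:

```javascript
// Hypothetical sketch of a language-detection fallback chain:
// <html lang>, then the Dublin Core meta tag, then content analysis.
function detectLanguage(page, cldGuess) {
  return (
    page.htmlLang ||       // <html lang="...">
    page.dcLanguage ||     // <meta name="DC.language" content="...">
    cldGuess(page.text) || // content-based guess, e.g. via CLD
    null                   // give up: no detectable language
  );
}
```

Each step only runs when every earlier, more authoritative source came up empty, which is why the expensive content analysis sits last.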
You just get this pop-up, you download it, you've got it. It's awesome. So that's web manifest; it's a super simple thing. If you want to change what I generate, because we try to be fully featured, you might want to remove some of that stuff if you think it's unnecessary, and I added a short URL to the validator: you go there, paste in the JSON object you modified, and it'll tell you whether or not it's valid. If you have an invalid manifest file, it won't count as a manifest at all, so it won't work; you'd be blocked from doing a PWA. You want to make sure you have a valid piece of JSON. Also, the spec on this is really pretty simple. If you've ever been interested in doing W3C work, if you've ever been interested in getting more involved, this is a great one to dive into. We're all super friendly, we'd love to get more feedback on this stuff, and if you have things you want to be involved in, it's a really fun one to work on. So, while we're talking about what we can do today: you don't have to wait for all these new features to come. Jake just blew our collective minds with the stuff that's coming in the future, and there's a lot of really fun stuff, but there's also stuff that has been on the web for a long time that is amazing. Take, for example, web workers. They actually landed in WebKit back in 2008, and 2008 in web years is just geologic time. The Chrome icon here is wrong; it actually looked like that back in 2008. It was a long time ago, and a lot of things have changed; IE8 wasn't even released yet. But since it's been around for so long, it's phenomenally supported, pretty much everywhere, back to IE10. It's really, really cool. And if you're not familiar with web workers, it's a really simple API. Has anybody here ever implemented a scroll handler, like on scroll? Don't lie, we're among friends. You can jank up your whole page, and that's because everything in a browser is completely single-threaded: the browser doesn't know, are
you going to modify everything in the DOM in that scroll handler? So it can't re-lay out stuff until your scroll handler is finished, because it would be wasting work. It sucks, because most of the time you're probably not going to, but it can't trust you. What a web worker does is give us the concept of a background thread. It gives you a new JavaScript context that you send strings, or possibly objects, to, depending on the version of web workers you're working with, and you say: hey, go calculate this and send back the results. And it can do that completely in the background, so it can do some really hardcore, heavy number crunching without in any way affecting your scroll performance. You can do a lot of really cool stuff with it. Take, for example, Pokedex.org; I'm sure a lot of us saw this. Nolan Lawson, one of my amazing co-workers, created this website as a really advanced PWA. It has incredible performance, and part of what makes it so fast is that it's a React-ish powered thing: it's using virtual DOM, and it actually calculates the virtual DOM differences inside of a web worker and then sends back the diffs, so it's able to render stuff faster. (Something the React team is actually investigating currently is doing virtual DOM diffing inside of a web worker.) It has this concept where it computes the diff, sends it over to the UI thread, and the UI thread applies it. This slide looks important; I included it because it looks sciency. But the cool thing is: on this phone, a Nexus S, which is a five or six year old phone (I should know this, I'm at a Google conference, I'm sorry), you get incredible performance. He just did this last year, and it's phenomenally fast; look at how smooth that animation is. And one of the things he credits for being able to do this so well is doing his DOM diffing inside of a web worker. There's tons of crazy cool stuff you can do in web workers.
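The pattern he's describing is simple: post a message, crunch numbers off the main thread, post the result back. A sketch, with a prime sieve standing in for whatever heavy work you have:

```javascript
// Stand-in for expensive work you would not want blocking the UI thread.
function primesBelow(n) {
  const composite = new Uint8Array(n);
  const primes = [];
  for (let i = 2; i < n; i++) {
    if (!composite[i]) {
      primes.push(i);
      // Mark every multiple of i as composite, starting at i*i.
      for (let j = i * i; j < n; j += i) composite[j] = 1;
    }
  }
  return primes;
}

// In the browser this function would live in its own file, say worker.js:
//   self.onmessage = (e) => self.postMessage(primesBelow(e.data));
// and the page would hand the work off without ever janking a scroll:
//   const worker = new Worker('worker.js');
//   worker.onmessage = (e) => console.log(e.data.length + ' primes found');
//   worker.postMessage(1000000);
```

The main thread never computes anything; it only serializes a message out and gets a message back, so scrolling and animation stay smooth no matter how heavy the sieve gets.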
Take Modernizr, for example; like I said, I'm the maintainer. We have our website, and our dynamic builder is completely client-side. I built a whole lot of stupid stuff into it, like the HTA I showed you just now, and all these different things are checkboxes, and every single time you check one of those, it'll go ahead and download the entire module; they're all modular pieces, right? One of the problems we had on the Modernizr team is that people were including the entire build of Modernizr, and as we've grown to almost 300 modules, that can be a gigantic piece of JavaScript. And frankly, most of you don't need to know whether or not border-radius is supported in the browser, right? All these tests, all this code that isn't running, that we didn't want people to include. So to make people aware of what they're doing when they start checking off features, we have a gzip calculator up in the corner, and as you check stuff off, it updates. We do that full gzip comparison inside of a web worker, in the browser, without ever going to the server. And we can do that every single time you click something, because web workers are so cool: in no way does it affect your scrolling, your performance, the animations, or anything else. It's just a really silky-smooth, fast experience, and you get some really neat information. In fact, I love web workers. I like to push them as much as I can, so much so that I'm looking into a new project right now that I've been working on. It has a terrible name; if anybody has a better idea, let me know. I call it web worker preprocessors. We all use transpilers; we've all used stuff like Haml or Jade or other things like that. I love them, and I want to take them and run them on the client side. I love sites like JSBin or CodePen, but I want them to be a PWA; I want to be able to run that offline. So what this is doing is actually a toolchain that automatically creates cross-compiled versions: it
transpiles transpilers to run inside of web workers. I have a number of them already working, including some Ruby-based ones: Less support, Slim support, Pug and Jade (the old version of Pug), Autoprefixer, and even Haml, and I'm currently working on Sass. And they work. The Ruby stuff is actually transpiled using a cool project called Opal, which takes Ruby code and transpiles it into JavaScript. I have every single version of Slim (a Ruby templating language similar to Haml) throughout its git history, literally 58 versions of it, running inside of a web worker, passing their own full test suites, and it works really well. If you're interested, I'd love to talk about it, because this is what I've been doing at night. Web workers are cool; you can do a lot of crazy cool stuff. Cross-compile Ruby into JavaScript in order to compile your templates: it's complicated, but I think you know what I'm talking about. The whole point of what I'm saying, with all this nonsense and all these stupid ideas, is: when you build stuff, try out something stupid. There's a lot of really crazy cool stuff we can do on the web, and you shouldn't be thinking that you have to wait for something to be supported everywhere before trying it. Try something cool today in one browser. If a feature is supported in one place, be it Safari, be it Edge, be it Chrome, be it Firefox, try it out: feature detect, do it, and ship the feature. The way browsers actually decide whether or not to support a feature is based on what y'all do. If everyone had started using web components back when they were first introduced, then web components would be supported everywhere. It's this weird prisoner's dilemma, where people think they have to wait for browsers, but browsers wait for the people. Start using stuff that interests you, give the developers feedback, and make really cool stuff. You don't need to wait for a rearchitecture in order to start shipping a PWA. You can start shipping individual pieces, like the web manifest,
like a really simple caching service worker. Do it today, literally right now. Almost every single site can implement that stupid 18-line service worker that I had, in some way, and instantly... your whole site might not work offline, but you can have faster caching, you can have a faster load for almost all your users. Try out the new shiny features, and try them out today. Try something stupid. The web is really cool. Thanks, everybody.

Were you going to go off that way? No, that doesn't work. I've been around forever and ever. That's awkward, right? So, what do we know? It's a break, and breakouts and codelabs. Yes, and you've heard loads from us in the last day and a half, two days, and this is a conversation, it's a two-way street, so this is your opportunity to have conversations and give us feedback on various topics. And there is quite a lot going on. In the main auditorium, which is here, from 4 to 4:30 it's "From 0 to Service Worker" (never heard of service worker?), and from 4:30 to 5 it's scalable loading, which is kind of H2 push, that sort of thing. Floor 1, which is the ground floor (yes, because I would have sent you up there if I was in the UK, but I'm not, so it's down here): 4 to 4:30 is WebAssembly, and Firebase as well, so there's two out there; and at 4:30 to 5 it's push notifications and background sync, and WebVR. So England is 0-based on floor numbers; I've got the right thing, haven't I? I've got to be honest with you, I'm happy about that. Floor 2, from 4 to 4:30, it's AMP as a starting point, and at the same time media techniques and approaches; at 4:30 to 5 it's Houdini, which is all the kind of magic behind CSS (what if you could write your own layout engine, things like that? it would be amazing), and also at 4:30 to 5 the web platform predictability pain points session is out there as well. And in here. So off you go, get yourselves a drink or whatever, and we'll see you back here at 5 o'clock for the browser panel. Don't forget to submit your questions. I've got to pop this here for all the incoming things. Oh, I
think a lot of people are still out in the breakouts. Well, you snooze, you lose; they're going to miss out on the next couple of rounds of the quiz. Awkward. That means you can go ahead, huh: there's terrible phones up for grabs, and, lest we forget, the mugs. Oh yes, the mugs, of course. All right, shall we go for a question? Yeah, we shall. I'll read this one. Well, that screen's... so you probably have the question on your screens now, but this screen is frozen, so let's... there we go, there it is. We don't have sound, though. Maybe... nice, we have sound! Could we have sound? We'd love it. There we go: which of these block the display of subsequent elements while the resource loads? Sure we did this? Are you sure? I just feel like we did this. Oh, there we go. We've got an image element, we have a link rel=stylesheet with an async attribute, and a script element. So we've got kind of high confidence on one of them there, not so sure on the others. Is that our highest that we've had? I think so, yeah. I mean, people are really confident on that one, a 90-percenter we were saying; people are pretty confident the script is going to block, not so sure about the link rel=stylesheet thing there. Let's see if we can get up the actual answers. Interesting: so the async attribute on a link element is a red herring; it doesn't do anything there, it only works on HTML imports. If you are going to async-load some stylesheets, you've got to go via JavaScript. Yeah, in Chrome and Firefox, if you create the element in JavaScript and append it, it'll load async; Safari will still block, and there have been some recent changes in Chrome. Some interesting facts. Okay, you're going to read this question, right? What is the result of the following code: is it a reference error, does it log true, does it log false, or does it throw a syntax error? We'll say it's not a parse error; it's got those magic quotes, that's a copy-and-paste error. That is exactly it; it's almost like we had a document with these, it's almost like it did that. So we've got some confidence in one of the answers; one of them is polling more than the others.
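The JavaScript route for async stylesheets that they mention can be sketched like this; the document is taken as a parameter here only so the helper is easy to exercise outside a browser:

```javascript
// Load a stylesheet without blocking rendering: the async attribute on
// <link> is a no-op, but a script-created, script-appended stylesheet
// loads asynchronously in Chrome and Firefox (Safari may still block).
function loadStylesheetAsync(href, doc) {
  const link = doc.createElement('link');
  link.rel = 'stylesheet';
  link.href = href;
  doc.head.appendChild(link);
  return link;
}
```

In a page you would call it as `loadStylesheetAsync('/app.css', document)` once the critical content is on screen.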
They're all very steady there, hovering around... oh, it's dipping, there's a change. I feel like we need to settle a little bit before we call it. One question I have to ask: why does the event source keep falling asleep? That's the reason we had to refresh the machine. Very exciting. Do you want to look at it? Yeah, let's look at it. Reference error is the most popular answer, then logs false, logs true, syntax error. Let's reveal the answer: the answer is that it logs true. And the reason it logs true is that when you define a key of undefined, it goes, oh, I'll turn that into a string for you. Yeah, it's just weird; it seems to be the weirdness of JavaScript. Why do we even do this job? I don't know. So then, when you ask whether the string "undefined" is in the object, it says yes, it is. So, what's happening now, then? Oh, it's the browser vendor panel, and I, in fact, will be needing this bit, won't I? Yes, I need to know who it is, in fact, that is joining me on stage. You talk for a second. Jake: is everyone having a good time? We're moving forward to burning this suit, maybe even with me still in it, I don't mind. Oh, hello, Alex. This one's Alex Russell, my first guest, Alex Russell. Keep going. And we have Jacob Rossi from Microsoft, we have Harald Kirschner from Mozilla, Andreas Bovens from Opera (keep going, keep walking), Laszlo Gombos from Samsung, hello. And Alex Russell I've already done, and Tal Oppenheimer. There we go. Well, this was well rehearsed, wasn't it? Do I go away now? Yeah. Don't kick me around. Big round of applause for the panel, though. Now, you know that nightmare feeling where somebody invites you to a fancy dress party, and then, hours before, they let everybody know that it's no longer a fancy dress party, and yet you show up in your fancy dress? I don't feel like an idiot, not one bit. Now, the other thing that has occurred to me is that live questions are actually quite a lot of fun, and I thought, well, I have mugs, and they could act as an incentive.
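For reference, the coercion behind that quiz answer is easy to check: property keys are always strings (or symbols), so a key written as undefined becomes the string "undefined":

```javascript
const obj = { undefined: true };

// The key is the string "undefined", so all of these agree:
console.log('undefined' in obj);   // true
console.log(obj[undefined]);       // true -- undefined coerces to "undefined"
console.log(Object.keys(obj));     // [ 'undefined' ]
```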
So there are microphones there, and I think maybe over there, so if you have a question for our panel, please do feel free to avail yourself of the microphone, and afterwards you get to come and pick up a Big Web Quiz mug. That is exciting. We'll get on with our first question, then. Let me see. So, everyone's heard a bunch from Google about what Chrome's priorities are for 2017; for the other vendors, what are your 2017 priorities, please? I guess we can start with Microsoft; Jacob, do you want to go for it? I hate to say it, but actually it's progressive web apps. Patrick talked a little bit about this earlier, but we're in the process of building service worker and push and all these things, and I think we might be one of the first to try and bring a full PWA experience to the desktop, and that gives some weird challenges that we're trying to figure out. Can you elaborate on what some of those weird challenges might be? So if you think about adding a PWA to your home screen on a mobile device, you can hit the button, the browser chrome can kind of fade out, and now you're in the app experience. What happens in a windowed browser? Does it lift up out of the tab? Does it magically become super-powered? There's weird things like that that we have to figure out. There's also things like: how do you transition if you have it installed but go to the browser and type in the URL? What do we do? Do we keep you in the browser, or do we jump you back out to the app? There's things like that in a windowed environment. Cool. What else are we going to do? The general theme here was performance, so that's also something for us to focus on. You probably saw the announcement for Project Quantum, where we're trying to get pieces from Servo to parallelize things on the web and also make better use of the GPU. That will bring the web forward. With service workers and web push we can provide the greatest experience we can, and do even more things. So that's the part, at least, for the
user experience, on the platform level. On the browser level, we're working on web payments and other services that were also mentioned here; they're crucial to really getting, end to end, the lived experience of things like web payments and being able to sign in. Those are definitely the focuses, so that's what to look out for. We're also looking at the home screen for Fennec, the Firefox on Android version. There's a kind of beaten path there, but there's definitely room to explore other ways, like: how do you get people to add to the home screen? Things are not clear; for many users there are big benefits (we saw numbers on performance), but what are the benefits for users of adding to the home screen, other than for the three apps where you spend 80% of your time? I think there's some room to see how users understand apps and how they want to re-engage with them, especially if we hear stories here of how users first come to the web and then buy something just because it's really easy on the web. So what's the return value? It's interesting to hear you talk about web payments; coming back to you, Jacob, what's the status with Microsoft and web payments? We have a prototype of this, it's behind a flag, and we're in the standards group. We're excited about making the web an equal playing field with native apps in terms of just buying lots of stuff; it seems like a good thing for everybody. Good, okay. Andreas? We have been working on implementing progressive web apps over the last year and a half or so. I think we have an implementation that's pretty much on par with the implementation of Chrome on mobile; we have something missing, which is background sync, but it's coming real soon, as in next week or the week thereafter, so that's nice. In 2017, some of the things we want to bring to the browser are Web Bluetooth, and web payments is also coming. We also still want to experiment a bit further with how to bring the user back from a progressive web app into the
browser, sort of break out of the installed progressive web app in full-screen or standalone mode and bring the user back to the browser. We did some early experiments with that earlier this year, but it was an initial idea; we have to fine-tune it a bit further and see which kind of flow makes sense. Also general web app discoverability: should there be an icon in the address bar that indicates that a website can be installed as a web app, things like that? There is a lot of UX research we have to figure out there. One more area is permissions: how upfront should you be, should there be a modal, should there be a slide-up, what is a good way of interacting there, what's a good UX pattern? Those are the things we are exploring, trying to make the experience better, and less slow. So, I have to start with progressive web apps as well, just to make sure that the core experience is there: service worker, web push, and so on and so forth. We're also part of the Chromium ecosystem, so our implementation is generally on par with Chrome and Opera. In addition, I want to emphasize web payments as well: we have a lot of learning in that area from the Samsung Pay initiative, so we want to bring that learning to the web and add it to the Samsung browser. In addition, we are focusing on virtual reality. That is something we've been involved with for over a year now; we have a VR-only browser that basically only works in VR, and it's a separate download, so we have a lot of learning in that space as well, around 360 videos and VR. So that's a priority for us for next year as well. Mostly it sounds like progressive web apps from everybody, web payments, and then a smattering of VR on top for Samsung. We have a live question; afterwards, avail yourself of a mug, won't you? Go for it. Perfect, thanks, although you look really good in that tux. But you voted for me to wear it, didn't you? Yeah, I did. Thanks for nothing. Your
question. So my question is about HTML Imports. I know it's a touchy thing, and I know there are a lot of opinions about what we want and what would be a great thing, but what's the harm in implementing it? I get the sense that some of the audience would really like to see HTML Imports, so let's go for it. Happy to go first: we put it on hold until modules land, and it's not that we declined the spec, though it might have looked that way from the blog post if you read it that way. I think I read it that way; it seemed like a fairly firm maybe. So we were waiting to see how we would be able to use modules for it. We're a product team without thousands of engineers, so there's always a question of priorities: how far along is the spec, and are the concerns addressed? It's just the standard process, basically. I've heard that answer a lot before, but it looks like it should be pretty simple to implement anyway. I mean, it works in Chrome, and it would be really helpful for a lot of frameworks and a lot of web components. Really, I think the scenario is completely valid; I think you heard it from the Google folks yesterday. I think it was Dimitri who said this: we're going to have to solve this problem, but it's probably not going to be HTML Imports as it is today, which, as a browser vendor, is a red flag. I don't know how many of you like to go code something that doesn't end up making it to your customers. So you have to think about how these pieces fit together cohesively with modules, and that's part of our job in standards, figuring out how they all work together. Happy?
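For anyone weighing this today, the usual pattern is to feature-detect HTML Imports and fall back to ES modules. Here is a minimal sketch; `supportsHtmlImports` is a hypothetical helper name, and the document object is injected so the function can be exercised outside a browser:

```javascript
// Feature detection for HTML Imports: browsers that implemented the
// (now-deprecated) spec expose an `import` property on <link> elements.
function supportsHtmlImports(doc) {
  return Boolean(doc) && 'import' in doc.createElement('link');
}

// In a real page you would call supportsHtmlImports(document) and either
// use <link rel="import"> or load an ES module instead.
```

This is the same shape of check the panelists imply: code against the capability, not the browser, so the fallback path keeps working as vendors converge on modules.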
Can we get him a mug? There is an elephant in the questions that I want to address very quickly. I should start by saying that we actually did invite Apple to join us on stage, but they declined. We are glad that they are here at CDS, and I'm pretty sure I speak for everybody when I say that we would like it if they could be available for these kinds of panels. The question is: how does collaboration with Apple actually work? They're generally not present at these kinds of events, so I was wondering how that translates into the TC39 slash standards process. Alex, you've got a lot of experience at TC39. So, I obviously can't speak for anyone at Apple; I can only give you the perspective that we've got on the Blink team and from the standards team on the Chrome team. What I can say is that Apple has some of the strongest engineers that we've ever worked with from any browser, no offense. Apple employs some extraordinary people who really do want all the right things for the web, and there's no sugarcoating that; there's no asterisk, no question mark next to that. That's absolutely true, and we are grateful every chance we get to work with them. Now, we all work for companies, and those companies have needs and priorities, and so it would be amazing if we got more opportunities to work with them; we'd like that. So the answer is: it's great when we get to, and could we get more of it? It just keeps cropping up, and I thought we should handle that one right away; in fact, somebody asked "Apple and service worker: when?" and I'm thinking somebody from Apple would be the right person to answer that, so I'm not sure we really can. Good, an in-person question. I think you were there first. No? That's very decent of you; you could have just been like, whatever. Yeah, I was here first. Over this way then, quick: Mozilla announced the FlyWeb API to launch a web server from a page, and Google has the Physical Web, so I wanted to ask, are you guys talking?
What are the other vendors' plans for the Internet of Things around these recent APIs? Okay, Internet of Things. So FlyWeb is a different approach. With the Physical Web, everybody tries to put URLs on things; it's a very simple approach. The FlyWeb approach is a way you could do it offline: your light bulb could provide you a service, and you just open a page. It's a micro network; I just log into my light bulb and I just use it. Today I have to install an app for everything, I have to be online for everything I want to control, and there has to be a server that I communicate with, which then talks back to my device. So FlyWeb takes it to a more local level and gives the powers that the web has, just having a link to some local service, to this thing that you can then connect to. It probably doesn't scale to all devices; there are still micro devices that just want to upload their data to your router and then go to the cloud. But in a museum, say, you walk around and you can connect to different things; there are a ton of use cases where it's really interesting. At Opera, a while back we released a labs build with beacon support, so you could look at and list beacons around you. Back then we also thought it was a good idea to look at your geolocation and show you Wikipedia entries for, you know, objects or buildings around you: mixing virtual beacons, if you will, basically geolocation coordinates, with real-world beacons, to see how that would work. We learned a lot from that labs build, but we're still fine-tuning and rethinking a little how we want to move forward with it. In our case, the new Opera we've been building for Android has more of a card-based approach to items and so on; if you download the Opera beta you can see that. So if anything, we'll probably look further at whether we can do something with those cards to show you beacon info around you. So it's definitely still an area
of interest, and of course Web Bluetooth, as was already mentioned, which is also strongly related to the Internet of Things, is still a focus. Yeah, we also have a project which looks at how we can surface beacons for the user to discover, so we attack it from both sides: one part is actually in the platform, the other is in connected devices, and they take different approaches to how to do this. FlyWeb is definitely a newer approach, a very different angle on the problem, and we're excited that it's now out, so it will be great to see what people do with it. For us, there's a version of Windows 10 for IoT as well. It enables what we call hosted web apps, which you can kind of think of as a precursor to progressive web apps, without service worker and that type of thing, but running on the machine, pulled from a server; you can access it running on, say, a Raspberry Pi or something like that. I wouldn't say the Internet of Things is my personal area of expertise, though. These all sound like proprietary, or at least separate, ventures into this area; are they coordinated and I just don't know about it, or are these individual vendor things where one may survive or whatever? For our design, it's the same EdgeHTML engine that runs inside the Edge browser, so the same standards support and so on. Today it's hosted web apps; tomorrow it would likely be progressive web apps as well. So we're not in that realm; there's nothing really proprietary here. It's all based on the standards we're developing, for all devices. All right, there's a question here which I think is actually pretty good for everybody, so enjoy it: what are browsers doing to combat intrusive adverts on the web, which ruin the web experience for everyone? In the middle of reading an article, mid-page-load, suddenly there's a full-screen ad, or an ad starts playing video or audio, and it just makes me wish I'd used the native app. Ouch. And I guess the same goes for mobile
browsers, but generally speaking: well, obviously we've mentioned things like interventions, and actually starting to push back on some of these things that users find horrible. What is everybody else doing? I think a big part of it starts with talking with ad producers. One of the features that a bunch of us are working on, including Edge, is Intersection Observer, in terms of giving them the right tools to get out of the way and meet their needs, because their needs are very real, just like yours, and to help them build their business model without impacting your user experience. I think Intersection Observer is a great example of one of the tools we're giving them. So your take on this is to go to the ad networks first, rather than a stronger pushback? They're customers like anybody else, and some people's businesses thrive off of ads; I'm looking at a few people. They have real scenarios, they have real needs, and they're web developers also, and I think instead of simply hitting them with a ban hammer, we can try to help them build what they need in a more user-friendly way. I do think there are places where things like interventions play an important role in being a bit paternalistic and pushing them forward with the rest of us. Okay, everybody else? Yeah, in Opera we have a data savings mode, and it only compresses HTTP traffic, so it doesn't touch HTTPS traffic. But for HTTP sites you can check a box, which may be a bit controversial, but we see users checking that box and actually getting a faster browsing experience that way. As I said, it only works on HTTP sites, so for HTTPS it wouldn't have any impact. That said, it's more of a hack; it's putting some power in the user's hands when they're very concerned about data savings and they don't want to waste bandwidth. They can also turn off images. Opera is not against images, but it's just, you know, so
Opera is also not against ads, but it's more a tool that users can use to get a better web experience; they can also compress videos and things like that. That said, I think the whole intrusive-ad problem can only really be solved by talking with ad networks, and that's one of the things we've been doing, to see what can be done to make ads less intrusive, and also probably to tweak some of the APIs, such as, for instance, the Vibration API. It's now available, and you see some websites with really crazy intrusive advertising that makes your phone vibrate. I was on mirror.co.uk and I got a vibration ad in the background somewhere. Those kinds of things shouldn't happen, actually. So there it's a matter of, for instance: should it be possible for just any site that you've never visited to start using the Vibration API, or should there be a prompt for the user first? There are different things we can design to make those kinds of terrible ads at least disappear; it's a bit like pop-up blocking, for instance, for browsers to take care of. And then in our browser we also have a model where, on the client side, we allow blocking certain content or requests; we call this a content blocker extension. This is something where we felt there was overwhelming demand, but we don't necessarily think it's the end of the conversation on how to actually proceed. One of the interesting aspects of our solution is that it doesn't actually leak browsing history back to the extension; the engine is self-contained and executes these rules itself. Another interesting use of this feature we've seen is that people are actually more concerned about the tracking and privacy aspects of ad blocking, and not so much about the ads themselves, so we have a few extensions that specifically target blocking tracking.
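As a rough illustration of the gating being discussed for the Vibration API, a sketch of tying vibration to a user gesture might look like this. This is not a real browser mechanism; `makeVibrateGate` is a hypothetical name, and the navigator object is injected so the helper can run anywhere:

```javascript
// Sketch: only forward vibration requests after an explicit user gesture.
// `nav` would be window.navigator in a real page.
function makeVibrateGate(nav) {
  let userInteracted = false;
  return {
    // Call this from a genuine interaction handler, e.g. a click listener.
    markUserGesture() { userInteracted = true; },
    // Returns false when vibration is blocked or unsupported.
    vibrate(pattern) {
      if (!userInteracted || !nav || typeof nav.vibrate !== 'function') {
        return false;
      }
      return nav.vibrate(pattern);
    },
  };
}
```

An engine-level fix would live in the browser itself, but the idea is the same one pop-up blocking already uses: a powerful API only fires in response to a real user action.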
I think that shows how many-sided the problem is. As you mentioned, there's the full-screen ad, which is a UX problem; then it's a data problem; and there's a privacy problem. For that we have tracking protection, which basically just blocks trackers that don't obey Do Not Track. That's one angle: you just want a private, secure browsing session. The other angle is talking to ad networks and telling them: here are new APIs, you don't have to listen to scroll and make everything slow. And there might be other things, like the Payments API, that make the ecosystem healthier, so that websites have better ways to engage with users: if you want, pay for this content, or see ads like this. We talk to a lot of publishers on the web. Some have control over the ad network, sometimes it's in-house and they can actually talk to people, and sometimes they're basically bound to the ad bidding process and you get 40 redirects before you get the ad. So it's just a whole mess right now. I think the ad industry is moving forward; they definitely don't serve the user first in some cases, but it's feedback that has to be addressed going forward. Okay, I'm mindful of time, and I know you've been stood up for ages; I'm so sorry, you could have taken that spot earlier. It would have been great. Sorry, okay. I'm Lars. In Denmark we have a saying, something like: don't throw away dirty water before you have clean water to replace it. But that's more or less exactly what you Chrome guys did with the removal of Chrome Apps on non-Chrome-OS devices, and in our case, hardware connections in particular, we have a problem with that. So I was thinking, on a positive note: are the different browser vendors actively working together on getting some of those more useful features into Web Extensions sooner rather than later? Because right now, I'm sure it's not only us for whom the carpet was just pulled. Anything on that? I can say from the Chrome side that we're actively
working on a bunch of APIs to help connect you better to hardware. I think it was last year we launched Web MIDI, to connect you to MIDI devices. We've launched Web USB and Web Bluetooth as origin trials, which are experimental APIs you can try out if you register for a key, and those will connect you to a lot of the available hardware. It's not everything: it's not a serial port, I get it, and it's not all of Bluetooth, just the GATT profile. So there are some gaps in this. We're trying to work in the standards process to design APIs that address most of the needs that we hear about frequently. That does leave a long tail, and that long tail is difficult to deal with, so we are actively looking at ways of maybe addressing it. There's a new Generic Sensors API, for instance, a proposal out of Intel; we're interested in seeing where that goes. So those are the sorts of approaches that are coming. But, I'm sorry, the thing is that, as I say, Web USB and many other things are focusing a lot, if I can be frank, on hipster technology companies, not so much on legacy big-industry potential partners who have a lot of hardware out there. You can't just change it; many of them use off-the-shelf chips that you can't reprogram, and all this. I mean, they're left behind, where you have a huge opportunity in enabling those who worked with Chrome Apps. I think it's probably worth having a conversation afterwards. We will move on. We heard a lot about development in emerging markets in the keynote and in Alex's talk; what is the most important thing to do to try to break into those markets? So in general, my usual answer to that question is the progressive web app. Given this particular audience, that's probably too vague, but one thing we do see in general is that with progressive web apps, especially compared to native, it's a lot easier to get the user to actually interact with your experience in the first place. And when native app developers are facing entering these
markets, they're realizing that people might find their experience, but they don't have the room on their phone to actually download it, they don't have the data or the connectivity to download it, and even if they get it onto their phone, they don't actually keep it up to date. So it's sort of a one-time bet. All of that melts away with progressive web apps. But for those of you that are hopefully already invested in, or thinking about, progressive web apps, I think the biggest takeaway is actually quite similar to what we think when we think globally, which is performance. It just means a very different thing in a lot of these markets. It means different networks: most of your users are probably going to be connecting to your experience on a 2G network, and they'll be losing connectivity. It's not that they went into airplane mode because they got on a plane; it's that they just lost connection right now. And in line with that, they will sometimes go into airplane mode because they don't want to spend data. So the performance you're thinking about has to take a bit of a different angle, but I do think the key takeaway, as you think about progressive web app experiences for these markets, is around performance. At Opera as well, we have a lot of our user base there, so for us it's also very exciting to see the potential of progressive web apps in those markets. I had the pleasure of joining some people from Google on the Progressive Web Apps roadshow in sub-Saharan Africa, and it was really interesting to interact there with local developers. I really urge you, if you haven't talked to the folks building Konga, for instance, to go and talk with them, because they give a very interesting insight into how progressive web apps matter and how they can make a difference. One of the things there is connectivity, and service worker and so on; background sync also plays an important role. Also the small size of the initial download: not having to start with a
20-meg download of a native app, of course, makes a big difference. And also device storage: a lot of people run on underpowered phones, compared to the latest and greatest Western-market phones, so to speak. Progressive web apps fill that niche really nicely; they solve this problem by not taking up so much space on the device and by generally being much lighter. So this is very exciting. I would also add web fonts, talking about performance. A fascinating thing: if you chat to Monica Dinculescu, she was recently away and stuck on a 2G connection, and on web fonts she was like, they're the worst thing ever. And I'm thinking, if you go into an emerging market and you're blocking on web fonts, users might not be that appreciative of that fact. It was just a minor thing that occurred to me on the way through, while we're talking about performance. We can probably just sneak in one final question, if it's a quick one. You were there first, right? Yeah, you were. I'm not missing this one. Let's make this a quick one. Let's talk about progressive performance itself; I think it's not really true. One of the problems is that in emerging markets there are a lot of phones that are not really that good, and because they have these older browsers you end up with polyfills and shims, and you end up actually downloading more libraries. So there is this chicken-and-egg problem where developers like us, who build for emerging markets, wait for the features to be available in all the browsers. The point is, if we wait too long, and if you're looking for data showing that the developer adoption is there, I don't think that's going to happen. So even if it's a minimal feature, if it's available across the browsers, I think that should take priority, rather than moving fast and failing fast to find out developer adoption for these kinds of progressive features. It feels like progressive enhancement is part of that story there: if the plan requires lots of polyfills
and libraries and shims and so on in order to get going, this comes back to what Alex was saying about using the platform, and maybe just considering the user experience. I shouldn't really be answering these questions, but it feels like there's a fundamental question there about what experience is trying to be delivered, and I totally hear that some things you can't really polyfill or shim, like service worker. But if other things are standing in the way, like downloading large frameworks and libraries and so on before you go there, then maybe you should be thinking about the platform first. Anybody else? I think there's a trend now toward serving just what works for that phone. Because we want to minimize the network, the server, knowing hopefully enough about the phone, makes a good decision on what it needs to serve. Progressive enhancement can kind of kill the whole progressive web idea by adding all these polyfills: make it work this way, and if not, fall back to the other way. So I think there might be a trend where, if you get a low-end phone, you just cut down on the CSS and the fidelity and just make it work. I think also, if you marry Alex's talk yesterday with Patrick's talk earlier: you have to think of new approaches, start from a clean slate. A lot of these polyfills and frameworks were developed in a world where desktop was primary, from years ago, especially if you consider the year that the browsers you're talking about came out; the state of the art of polyfills then was very much a desktop world. They were also meant to be full-fidelity polyfills in a lot of cases, right? They were trying to be spec-compliant and prove that you could have full backwards compatibility. But maybe your scenario doesn't require that, and so going and looking through the
grab bag of older features that might be available on those platforms, and taking just the pieces of the platform that you need, is a strategy that helps. I think those two strategies, married together, are a good way to go. Well, we are all out of time. Don't go until you've got a mug; I'm not taking these back to the United Kingdom with me, I mean, they're highly prized. We are all out of time; please give our panel a warm thank you. Oh good, Jake is back. Hey, how's it going? Yeah, great, mate. Don't worry, we get one each then; that's great, one for each arm. Can we get the quiz up? We're going to do a couple more quiz questions, in fact the final questions of the day, so if that could appear on the screen, that would be immensely helpful. I can just keep hinting; someone will flick a switch eventually and put the quiz on the screen. There we go. Right, okay, let's do our second-to-last question, our penultimate question: what is the intrinsic default size of an iframe? 300 by 150, 200 by 200, 0 by 0, or 100% by 200 pixels? Iframes are always the wrong answer to every question; I just can't get my head around iframes, I never have. It's like: I need a kitchen, so I'll put a house in here. The answer is a little bit more than the others, isn't it? Just a little bit more than the others, though; not much. The rest are sort of equal. Shall we lock in? Interesting: 300 by 150 is the most popular answer there. Should we find out if you are correct? We shall. Here is the correct answer. Now, interesting: it is 300 by 150, although technically it renders at 302 by 152, because a one-pixel border is added around the renderable area. If I'd put 302 by 152 in the answers, that would have been hard. Yeah, I think so. The final one, let's do this one; it's about CSS padding. A padding-top of 50% is 50% of what? Is it the element's height, the element's width, or the parent's width? Fascinating. Well, hopefully... yes, wow, okay. Oh no, we've
got two equal answers coming through; it's like two of them are tied. So, since there were two options with height and two with width, I'm feeling that there's probably a match there. Interesting. As a guess, shall we find out? We're going to lock it in where you were. So is it going to be the element's height, or the parent's height? Here it is: the parent's width. I know. Isn't the web the worst sometimes? That's how the aspect-ratio trick works. I was going to say, for the people who got the answer right, I guess we know exactly what they've done: they've tried to make an element maintain an aspect ratio. We do that a lot, because if you've got an image on a responsive site and you want to reserve that space, you then discover there's just no aspect-ratio property. Can we sneak one more? Did we do one about CSS clipping? We didn't; we'll do it as a bonus round. The CSS clip property only applies if the position value is: static, relative, absolute, or fixed? I think it's a multi-select; this is a multi-select one. This is the clip property, not clip-path; this is the deprecated clip property, so we just want to mess with you, really. I think it's been deprecated for a long time. Oh, interesting: a lot of things being selected. I draw from that that we're selecting all the things now. Shall we just get the quiz over with and look at it? Yes, here we go. So we're saying, very confidently, absolute and relative, most of everything except static; that seems pretty reasonable. Absolute is one. Here are the answers. There's so much disappointment from the corner there. Do we really want, like, Paul Kinlan to give a talk, or should we just do more questions? He's our manager, so this is our way of revenge. It's interesting: he's the kind of person who, when charged with organizing an event like this, goes: who should do the keynote? Me. I'll do the closing keynote. Yeah, let's do one more. What have we got here? We've definitely done that one... we didn't do that one? No, I don't think we did
either. Let's do it. This one is about type coercion: what does the following JavaScript evaluate to? Is it an empty string, the string 1, the string 2, or the string 12? It's a tough one, isn't it? There's one answer no one is interested in at all... now it's got some, now it's exciting. There's definitely someone who can give a talk at some point. Shall we lock it in? So what are we saying? The string 1; that seems to be the most popular one. It's me just proving I can read percentages from a screen. Excellent. We're very tired. Here we go, let's see what the correct answer is. valueOf seems to have a higher precedence than toString here, but the fact that you've added an object to a string like that... I mean, you should feel awful. What happened there? So, we'll get our final speaker on, and we're going to do the leaderboard and prizes when we come back; but for now, give a huge welcome, a massive round of applause, to Paul Kinlan. Thank you. Little do they know I don't think I've actually worked out the bonuses yet, so it's fine. Hi everyone, my name is Paul Kinlan. It's great to see you all here. I've had a really great two days, actually. Has everyone had a good two days? Yeah? Cool. Everyone's been super awesome. I really like the conversations that are going on over lunch and over the breaks as well. There was a question before about web serial and stuff; I saw that's a big one that a lot of people are asking for, and it's kind of cool that you can go up to the Chrome engineers and actually pitch them on it. So I think that's pretty cool. But anyway, I'm here to talk about what comes next. I don't know how long I'll take; it'll be all right, I think we'll get you out pretty soon, it should be pretty cool. The thing I was going to say was that this part of the talk, in fact, was supposed to be just after Jake's talk, and Jake was supposed to talk about all the practical things, right? The things that we want to see from the
future of the web in terms of the infrastructural elements, you know: the improvements to the service worker and network stack with fetch and background sync, all those types of things. And then I get to talk about all the showbiz things, right, like WebVR and all that type of stuff. So, this was me experimenting with that in, I think, 2009 or 2010 or something, and it didn't work in the slightest, but whatever, it was kind of fun to play around with. I thought: I've got a gyroscope, I've got canvas, it'll be kind of cool. Nothing worked. I'm terrible. But anyway, one of the things I'd like to talk about, and I was trying to think about how to frame this talk: think back to a year or two ago, when Alex Russell was talking about how distribution is the hardest problem in software, right? And the web, for us, is actually a great way of distributing our software, because you just click on the link and go to the place. If you've got any experience helping your grandparents, or if you've ever worked in a big enterprise, you'll know these types of experiences where you have to go through and install the applications, and you go and download them, and it's just such a nightmare. Has anyone worked in enterprise deployment at all? A couple of people. It's an absolute nightmare: you build big pieces of software to deliver to enterprises, then you have to go on-site with massive teams to build out all this infrastructure. One of the things that I liked, when I worked in enterprise years and years ago, was that we moved from this type of model to the web type of model, and for us that was great, right? We could just go to the user, the customer, and say: go to this URL, log in with your account details, and you'll get this experience. It was a really great model for everyone. But the interesting thing was, we knew at the time it was
like way less capable, the browser; you know, although you had ActiveX and a bunch of other stuff, the browser didn't do a lot of the things that you'd expect a native application, or a native experience, to do at that point. But we traded that off: we just said that this model for delivering software out to users, to the enterprises and to any user out there, is way better than the model that existed in the past. And I tried to think about this model of distribution. So in the 1970s you'd buy an Apple machine, and you'd have to construct it yourself, take it home, and then program the software that ran on it. Later, in the 80s, you could go to the store and buy the software there. And by the 90s the web came in, right? And that's the start of the change. You could build web pages that were based on CGI at that point, with a little bit of JavaScript occasionally, and then actually start to build interactive experiences where, you know, you immediately got a lot of value from the web, and I think that's quite powerful. But at the same time, native platforms were starting to catch up, right? Especially around the time the iPhone came out; obviously we waited a little bit longer for native applications to come through on it, but at that point in time we got to the place where the web is great, everyone likes the distribution model of the web, and we need to solve this for the platforms that we're shipping on at the moment. And obviously things have changed: app stores have come along, and in the future, chat applications and other different types of experiences will enable people to distribute software more effectively. And I want the web to play a massive role, to be everywhere on all these platforms, and to be the key reason why you'd
actually deploy software, because the web is a great model. But the way I was trying to think about this, again, was: why did a lot of people, at least when you speak to a lot of developers, move to the native platforms and go with native hardware or native APIs? It's kind of weird, right? When the iPhone first launched, the web was the way that you delivered software; everyone said this is the way you're going to build applications. They introduced a whole bunch of new APIs: media queries, localStorage, Web SQL, AppCache; a whole bunch of different APIs got launched to support the ability to deploy comprehensive software on the web through mobile devices. But then everyone was like: yeah, that's cool, but we want these native APIs, we want this kind of distribution platform. And then that took off, and at the time the web was just like, we'll catch up at some point, and it continued on for a long time without that much change, right? We thought we had all the primitives on the platform to be able to deliver a comprehensive and compelling experience, but it wasn't until, I guess... all these numbers, by the way, are from about 2012, and I think 2013 was when Chrome came to Android... we didn't actually have a compellingly competitive mobile browser ecosystem at the time, and we weren't pushing out all the features that we needed. We knew we needed to solve payments; we knew we needed to solve all these other pieces; but we didn't really have the emphasis behind it to do it at the time. So I was thinking about: what is the game plan for the web? The whole thing about this is, I've seen a presentation by Paul Lewis where he draws the most amazing pictures; he has custom slides for every single thing. Well, I was like, I'm going to do better than that, I'm
better than Paul Lewis so I bought an iPad and a pen and that is all I could do so anyway the whole idea behind this was that I was thinking about like what is the mobile web game plan and at the time and like for the last three or four years maybe three years at least anyway like it's kind of everyone's like incentive to say we just need to we need to have a whole bunch of these features where we know that native is doing it and it's obviously it's very hard to kind of get this all going with the specification groups and you know other browsers kind of collaborating but you can start to see a trend right you can start to see more kind of involvement across the ecosystem to say yeah well what's this one we need to geolocation JavaScript came through straight away but you know we didn't have access to the camera so we've got get user media we've got all these other different APIs coming through we're not kind of completely compatible across the entire browser we're not going to be able to go system on them at the moment but we have the ability to try and solve those problems and I think it's interesting that we are reaching more of kind of the native parity at the moment which is kind of cool so actually I've lost the slide I was on sorry so this is the kind of the graph that I've got is like we've got all these new APIs coming in I think it's quite compelling that you know we've started to see a massive change in the industry but there is still a lot more to think about so we've got things like geolocations one big one we've recently moved out we've got HTTPS only it annoyed a lot of people that we made that change but we think it's the right thing to do for user security actually we've got cameras which is kind of cool like the interesting thing about cameras and I'll talk about cameras in a minute is like we do have access to the ability to do inline camera access we also have the ability to fall back as well so if you have like a native camera application like 
an iOS one, you can choose that instead. Again, this is limited to HTTPS, and that's a common theme across all the new APIs coming to the platform: they have to be on HTTPS. We think they're powerful and we want you to use them, but they have to be secure, and users have to be able to trust them. The microphone is an extension of the camera story — same restrictions: you have to be on HTTPS, and the user has to grant permission.

We have battery status. This one is a little contentious — it's been removed from some browsers — but the idea is that you can understand the state of the user's hardware, so you can give a different experience if the user is low on power: "hey, we're not going to do all the fancy animations." I don't think many people are actually using the API this way at the moment, but that's what it's there for.

We have the Permissions API, so you can build more considerate experiences: if you know you've already got access to geolocation, you don't have to prompt for it straight away. You can understand the state of the permission model — what the user has already accepted. There's a lot more to add here, and it's especially important when you're building full-screen applications.

We have network information. A lot of developers — especially when we've been out to India — want to understand the type of network the user is on so they can adapt the experience. I don't think we're using this to its full advantage yet. There's a property called downlinkMax, which basically says "we know the user has at least the speed of, say, a 2G connection — you might want to do something with that." Again, I don't think many people are using it right now, but you can start to think about how you adapt your user interface and your experience to the needs of the user based on the type of network they're on. I think that's quite compelling.

We've got autofill. It's kind of boring — really hard to get people excited about autofill — but we know it improves the overall experience of the web for users trying to enter data. Everyone hates filling in forms on a phone keyboard. We really encourage people to support autofill; not many do yet, but I think that will change over time, because we know it has a measured benefit for users.

Then there's the Credential Management API, which we saw yesterday — I think that's really cool. You get one-tap sign-in, synchronized across all your devices, and that's a great experience, especially when you're thinking about cross-device, cross-form-factor conversion. And then obviously the Payment Request API — it will be great to see how that comes through and how it's supported across multiple platforms. My whole point here — and I think Zach said this yesterday — is that once you know the browser can provide the user's payment information across the device, you can start to think: I don't actually need to sign the user in to take a payment. I can just take the payment and ship them the product on the back of it. I think that's powerful.

And obviously push notifications. Everyone's been talking about push for a while now. We know it has a material impact on engagement, revenue, and re-interaction, and — especially on mobile — it works even when the browser is closed. I'm not going to talk about it too much today, but this is one of those powerful
APIs where you don't need to build a full-on progressive web application to start receiving the benefit of using it.

And obviously we've got offline support. We've been talking about building offline-capable experiences for a long time now, and we've got the tools across most of the platforms — even if you want to fall back to AppCache, which we don't encourage. You can start to think about how you build these experiences, and it's not just about full offline support: it's about the resilience of your application in the face of an adverse network. I think that's pretty powerful. And finally there's installability: if you meet all the criteria, and we think your application could be installed, then we tell the user, "hey, we can install this," and it will be on the user's device. We think that's pretty powerful too.

But the thing is, that's just a big list of APIs, right? We talk about these APIs one after another, and it's always "the next API we build is the most critical one to solve." We built a whole bunch of APIs that we just thought were cool to start with — getting camera access, great — but it wasn't until the last two years or so that we said: actually, we want to build resilient applications that are great in the face of adverse networks, so we need service workers; we need to make applications installable; we need users to re-engage through push notifications. We've become a lot more tactical about how we implement those APIs, which I think is a good thing, but it's hard to actually see that strategy playing out. The point I want to make is that it could still degenerate into "everything is just a random API we start to build." I'm not picking on the Web Serial API — we know there are use cases for things like it — but we have to think tactically about how we bring APIs to the web, because there are some really important things we need to get done. We don't want every single API to be "hey, native has got this" — and I'm going to talk about some of those today, so I'm contradicting myself a little — but where the native platforms have something, we shouldn't just implement it directly; we should think about how it fits the context of the web.

The context of the web is the thing I'm really interested in. We were thinking about this on the Chrome team for a while, and people don't really like this acronym — if you've ever been to a Chrome Dev Summit you get RAIL, you get AMP, you get PWA; the world is full of acronyms at the moment. But the reason I like this one, SLICE, is that it codifies some of the reasons the web is important — the benefits the web has that other platforms don't necessarily have.

SLICE is kind of simple. First, it's secure. We've got a permissions model and a security model where everything is sandboxed. We've had some issues in the past, but the idea is you don't automatically get access to everything on the user's device: if you want access to the camera, you have to ask the user for it. It's secure and sandboxed — you can't just go and pull data out of another website the user might have visited. The whole web ecosystem is conscious about security, so
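As a sketch of that ask-first model — assuming a browser that implements the Permissions API ('camera' is a permission name from the spec; the `cameraPermissionState` helper is my own):

```javascript
// Sketch of the ask-first permission model: query the state of a
// permission before prompting the user. Assumes a browser with the
// Permissions API; elsewhere (e.g. Node) it reports 'unsupported'.
async function cameraPermissionState() {
  if (typeof navigator === 'undefined' || !navigator.permissions) {
    return 'unsupported';
  }
  // state is 'granted', 'denied', or 'prompt' per the spec
  const status = await navigator.permissions.query({ name: 'camera' });
  return status.state;
}

// Only call getUserMedia once we know we won't surprise the user.
async function maybeStartCamera() {
  const state = await cameraPermissionState();
  if (state === 'granted' || state === 'prompt') {
    return navigator.mediaDevices.getUserMedia({ video: true });
  }
  return null; // denied or unsupported: fall back gracefully
}
```

The point of the query step is exactly the one above: you can adapt your UI to the current permission state without firing a prompt at the user.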
SLICE is linkable. It's been really hard to find a set of links over the last two days that haven't been interesting, I suppose — but the idea is that once we have links, we can do really interesting things with them. We can build these types of sites, we can build indexes, we can build news.ycombinator.com; because it's a link, we can go to it and do things with it. And once you think about what you can do with links — indexability — that's the heart of Google from our point of view: we can go and archive, organize, and aggregate the world's information. Actually, does anyone know what the mission statement is? Sorry, I'm pushing a point for my boss here. No? Cool — that's the one. But that's the whole point: because the data is linkable and indexable, we can go out, discover it in an easily accessible manner, start to understand it, and do interesting things with it.

The next bit — and we know this from the whole start of the Ajax era — is that it's composable. We can take JavaScript from somewhere, we can take an iframe — I know, I pulled in the iframe thing before — and we can mash things together and build interesting applications just from the fact that other interesting applications and components exist on the web. I think that's incredibly powerful.

And then there's the idea of ephemerality. This is the Guardian Mobile Labs experiment — we were out in one of the breakouts before — which delivers you news via notifications. You go in, you install it, and you forget about the web page: you never have to go back to the page to keep experiencing the application. Normally the web lives and dies when the browser tab closes; service worker changes that a little, but we can build these experiences where you say, "I'm going to use it once." In this case I was using it — this wasn't built for Brexit, but it was running during Brexit — I fell asleep, and I watched Brexit play out via notifications. I cried a little bit, then closed everything. It was gone; I never received another notification. I think that's a very powerful model for the web: you don't have to go off and install something just to get some experience out of it. It can live and die however you want it to.

The thing is, SLICE is just a model — it doesn't cover all the other benefits we know the web has. It's accessible: it should be available for everyone to work on and use, irrespective of whether they can see it, hear it, or how they interact with it. It's installable, it's updatable, it's deployable, it's composable — I've said composable already — there are lots of properties we know the web to have that just don't fit this acronym. Actually, I was speaking to one of the PMs the other day, and the thing I liked about the way he phrased it is that the web is a massive ecosystem: you can pull in from all these other tools around the web, lots of web developers are building on it, and if one part of that industry goes away, it's fine, because more people will come in. And likewise there's no single owner. Excuse me — there's no single owner for the web, which means you're not behind a gatekeeper, ultimately controlled by their whims. You can go out and deploy, and as long as you give a person the link, they can access it and start to experience what you've built. I think that's incredibly
powerful. So for me, the question was: if it's not just about a feature race, what is it about? We've been doing a lot of work — and over the last two days you've seen some of it from Rick Byers and others — to smooth out the platform. We definitely want to reduce the feature gap, but we want to do it in a way that enables content and new levels of interaction that you're never going to see from any platform other than the web.

On smoothing out the platform: this is the second image I drew with the iPad. I was actually quite proud of it; everyone else hates it. The idea is that the web has a certain lumpiness — not every browser implements every feature — and as web developers we find that really frustrating. The interesting thing is that for the really big gaps, things like Bluetooth or ES6 — and I'm going to talk about some of these today — you know the feature isn't there. You can see it, so you can go around it and say, "when that becomes ubiquitous, I'll start using it." But then there are the really frustrating things, like Flexbox, where there were two different implementations, and it was really hard to work out which browser supported which version with which syntax. Those frustrations mean developers can't build great experiences that are responsive and accessible for everyone.

So one of the things we've been trying to do is smooth out those rough edges, and one of the most recent is position: sticky — something developers have always wanted: the ability to anchor an element to the top of the viewport. We had it in Chrome, and everyone said "that's great — right, Apple's got it, Chrome's got it, I think Firefox had it at the time." But we said "yeah, it's not that performant" — which might have been the right call in isolation, but ultimately it got to the point where the feature wasn't compatible. People couldn't rely on it, so you couldn't build these kinds of experiences without JavaScript: you either had to include a script for it or not use it at all. For developers, that's a really frustrating part of the experience.

Then there's Intersection Observer. Scroll events are slow, yet that's how you'd know when something has come into view, or keep something in view. This one isn't really about bringing ubiquity to the platform — I think Chrome is the only browser implementing Intersection Observer at the moment — but the idea is to provide a level playing field for performance too: you can understand when elements enter and leave the viewport, and then do whatever you want with them, which I think is really cool.

And when you start to think about the next part of the future — this one is really hard to show in terms of code, and there's not a lot of detail here; I stole this slide from Paul Lewis's Polymer Summit talk, which was actually really good — custom elements have been talked about for a long time and deployed in some browsers. We didn't deploy them completely, because we had a v0 and now there's a v1. Developers have been really frustrated that they couldn't build these types of experiences — it was complicated, that's the easiest way of saying it — and it's great to
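The viewport-watching pattern Intersection Observer enables — for example, lazy-loading images only as they scroll into view — might be sketched like this (the `.lazy` class and `data-src` attribute are illustrative names of my own, not part of any spec):

```javascript
// Sketch: swap in real image sources only when the placeholder
// scrolls into the viewport, instead of listening to slow scroll
// events. Assumes a browser with IntersectionObserver; returns
// null where it isn't available.
function lazyLoadImages() {
  if (typeof IntersectionObserver === 'undefined' ||
      typeof document === 'undefined') {
    return null;
  }
  const observer = new IntersectionObserver((entries, obs) => {
    for (const entry of entries) {
      if (!entry.isIntersecting) continue; // not visible yet
      entry.target.src = entry.target.dataset.src; // load the real image
      obs.unobserve(entry.target); // each image only needs this once
    }
  });
  document.querySelectorAll('img.lazy').forEach(img => observer.observe(img));
  return observer;
}
```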
see that this has now come to a lot more browsers. It's definitely in Chrome; the latest versions of Safari definitely have the template element, and now they've got custom elements and Shadow DOM as well. That whole part of the ecosystem is starting to play out, and it's great to see the browser vendors working together on the things developers have been telling us need to get done — finally we're starting to get the complete picture.

On that subject, another one: on the Chrome team, about two or three years ago, we made the decision not to support pointer events. We said we didn't want to introduce multiple pointer models, multiple interaction models, to the web — we didn't think developers wanted that. Microsoft said, "you know, developers do want it; we've got this experience." Developers want one unified model, whether they're interacting via touch or via a mouse pointer; they don't want to deal with all the different ways of doing it. So developers shouted a lot, and Rick Byers — who was on stage yesterday — was one of the engineers who started to implement and flesh it out, and now pointer events are in Chrome. We're trying to bring more compatibility to the web, to level out that part of the playing field, so that as a developer it's ten times easier to work out what you should support and how you should support it.

And then, as Darren mentioned in the keynote yesterday, we've been pushing progressive web apps for a long, long time — for about a year and a half now — and the whole idea is that if you want your application to, and the user wants it to, it should act and feel like a native experience. If it's installed on the device, it should appear everywhere. If you've ever actually installed one of these: yes, you can get it on the home screen, you can launch it, and it's in the tab switcher — but that's where the illusion breaks down. It's a nice model, but there's a massive uncanny valley, because these aren't actually native applications on the system: they don't live in the app drawer, and there's a whole bunch of other edge cases that every developer who has implemented a progressive web app — with push notifications or without — has been complaining about.

What Darren was saying is that we want these experiences to genuinely look and feel native. So this is the flow we've got — the new install flow. We've taken add to home screen, which was essentially a bookmark on the home screen with a special parameter that Chrome knew how to launch, and turned it into a fully native model: the application is downloaded and installed. It's still a progressive web application — everything is pulled from the web, nothing is packaged up — but it's a native application on the user's system, and I think that's incredibly powerful. You can experiment with this today; it's a flow we think is going to work, but we do need a lot more feedback. Once you actually get these applications installed, it's really good. One of the things we've seen is that developers — and users too — wanted their applications to appear in the app drawer and other parts of the system, and now they do. You can go and inspect the storage model, and it's attributed to the application, not just to Chrome as a whole, and then you can do a bunch of other stuff
as well: you can force-stop it, uninstall it, and see its battery profile, among other things. Your application is ultimately accountable to the system, rather than just to the browser.

We also get deep integration with links. If you own — in this case — airhorner.com, and the user clicks on a link to airhorner.com, they'll see your installed progressive web application open, and all you have to do is update the manifest to say how links should be intercepted on the user's system. Likewise for notifications: we had bugs on the system where you'd click on a notification and it would open the actual website rather than the thing installed on the home screen — essentially the same thing, but we didn't know it was installed, so we couldn't launch it. Now we're a lot more natively integrated.

We also continue to respect the launch information. When you tap the home screen icon, a link, or a notification, you obviously want the app to launch in portrait — or landscape, if it's a game — and we've had a lot of trouble keeping that synchronized across the entire device. The biggest thing, for me at least, is that we can now keep the application name, the launch profile, and all the other manifest information up to date. Right now, because it's just a bookmark, if you update your manifest we don't know your application changed — a new name, slightly different icons, those kinds of things. We now have the ability to say: we know it's changed, we know the user has it installed, and we can update it across the device as well, which I think is actually pretty powerful.

And the great thing is, if you're already building a progressive native web application — that's not the word to say, is it — a progressive web application, you don't really have to do anything. There's an optional scope attribute, and that's pretty much it: scope just says "this is the URL prefix that, if the user clicks a link inside it, will cause my native application — I keep saying native application — my progressive web app to open." I think that's really cool. It's experimental today — you'll be able to try it, and it's actually really interesting — but we do want a lot of feedback, because we want to make sure the model works for users and for developers, and then we can go further from there.

That was smoothing out the platform — I think a lot of what we've been talking about is just making developers' lives a little bit easier. I do want to talk briefly about decreasing the feature gap, because for me this is where some of the showbiz things come in. We're in a weird tension: there are a lot of new APIs coming to the platform, and some of them are not completely specified yet. In the past, you'd go to chrome://flags and enable an API to test it, but that's really hard for doing — as Alex Russell put it — science on the web at scale. If you want to know that an API works for your whole user base, and how users interact with it, you somehow have to get it onto a stable channel. But if it ships in stable and developers start finding and relying on it, you're back to prefixes — and the old prefix model causes a lot of long-term problems for developers. We don't want that to kind
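As a sketch of how that scope attribute behaves — the manifest fields mirror the airhorner.com example from the talk, and the `inScope` helper is my own illustration of the URL-prefix rule, not browser code:

```javascript
// A web app manifest with the optional scope member (illustrative
// values, modeled on the airhorner.com example).
const manifest = {
  name: 'Air Horner',
  start_url: '/',
  display: 'standalone',
  scope: '/'  // links under this URL prefix open the installed app
};

// My own sketch of the rule: a navigation is "in scope" when its
// path falls under the manifest's scope prefix, so the system can
// route the click to the installed progressive web app.
function inScope(path, scope = manifest.scope) {
  return path.startsWith(scope);
}
```

With `"scope": "/"`, every link into the origin would route to the installed app; a narrower prefix like `"/app/"` would leave the rest of the site opening in the browser.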
of happen. We want to be responsible about how new features and new APIs are designed, and also tested at scale. I'd encourage everyone to read Alex Russell's post on this, because he gives a lot of insight into how we're thinking about the model — and the name, which Alex alluded to in the panel session, is origin trials.

The idea behind origin trials is that we say: we think this API is going to be important — web Bluetooth, say, or the persistent storage work that Drew has been on — it's an important piece of the overall API ecosystem, and it's not fully specified just yet, but we want it tested. You sign up for the API — there's basically a link you can go to on any of these pages — and you drop the token into your web page; in this case it's a meta tag. The whole thing is designed to — I don't want to say fail, but it's designed to only run for a certain amount of time. As a developer, going in you know that at some point the API will change; it might change significantly, or it might be pulled entirely once we learn that developers or users don't want it shipped on the web. The point is that origin trials give you the flexibility to experiment with the API and give us a lot of feedback, so we can help the specification process move along a little more effectively.

One API behind an origin trial that's quite close to my heart is the Web Share API. I used to work on the Web Intents API, and the whole idea behind that model was that the user should be in control of the applications they use to perform common tasks: if you want to edit an image, you'd use the image editor that was on your device, whether that was a site or a native application. The problem was that it was too broad. We learned a lot about building ecosystems and building APIs with an undefined scope — an undefined range of how big this should be. We got a lot of feedback from developers: "I want an edit intent", "I don't want a save", "I'll do an edit-and-save intent at the same time" — and it got to the point where we couldn't feasibly deploy the API at scale. So we said we should go back to the drawing board and design smaller chunks: solve the sharing intent on its own, and tackle the different aspects of the original vision in isolation.

That's the Web Share API. We're testing it out at the moment and we want a lot of feedback, but it's a simple API and it works pretty well: you just share some data, and it gets passed to the underlying sharing mechanism — in the case of Android it will basically fire a send intent, the application picker comes up, and you share the data through it. It's still got some problems — we need to flesh out things like images — but I think it's a powerful API.

That's going from web to native, though, and what we're saying is that we want the web across the user's whole ecosystem. So — and this isn't ready yet; we're still trying to work it out — with the share target side, your web application should appear in the native picker too. We're trying to do that via the web app manifest and the service worker. This is one of those cases where the intent is clear: if the user installs them, web applications should act as first-class citizens. I think that's pretty powerful.

There's also a whole bunch of media improvements, and this is where things get a little more interesting, at least for me. The whole media team has been working on the idea that developers shouldn't have to
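A sketch of the share call described above, against the draft Web Share API of the time — `navigator.share` is per that draft; the `sharePage` wrapper and its fallback behaviour are my own:

```javascript
// Sketch: share a page via the Web Share API, falling back when
// the API isn't available (it was behind an origin trial at the
// time, so feature detection matters).
async function sharePage(title, url) {
  if (typeof navigator === 'undefined' ||
      typeof navigator.share !== 'function') {
    return false; // no system share sheet — fall back to your own UI
  }
  try {
    // On Android this fires a send intent and shows the app picker.
    await navigator.share({ title, url });
    return true;
  } catch (err) {
    return false; // the user dismissed the picker, or sharing failed
  }
}
```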
do everything; the platform can provide a lot of integrated experiences with the user's device. The first thing we did, about a year ago: if you're playing some media — normally no one puts their hands up — a notification gets generated on the user's device and passed across to your watch, and you can control playback from there. The developer doesn't have to do anything; you get that for free. Again, it just makes the platform a little richer for web developers. We've also got background play: you can be playing a movie or an audio file, close it down, let the screen go dark or go to the home screen, and still control the web experience. I think that's quite powerful — you can start to think about podcast applications and music applications that run continuously in the background but can still be controlled from the web and from the user's device.

Moving a little further into the rest of the media team's work — and this is one of my favourites; I'll do a little demo later — there's captureStream. Say you want to record something from a canvas into a movie file: a lot of people have been doing this to generate animated GIFs and other things from movies. There's a dedicated API for it now, canvas.captureStream(). It's behind a flag at the moment, in Canary, but you basically take the canvas and say "I want to capture this at 25 frames a second." In this case I just attach the result to a video element — probably not the best use for it — but the point is that it's a stream, and you can put it onto anything that can read streams. So you can put it onto a WebRTC connection, send it out to someone in Australia, and they'll be able to see what I'm doing inside my WebGL 3D game, which I think is pretty powerful. It's very hard to build those types of experiences on any other platform; on the web it's now three or four lines of code, and you can start to stream your experiences.

And once you've got the stream, you can think about recording it and saving it — persisting it to disk. This is the MediaRecorder API, which in this case takes the stream from the camera; as the data comes through you append it, and once recording is complete you get a blob. In the demo I wrote — a little bookmarklet — it writes a WebM file to your hard drive. It's 20 lines of code, and you get an experience I've not actually seen on the web before: record a WebGL game and throw it up to YouTube. Pretty cool and pretty powerful, I think.

And then the camera. This is the thing most people don't know: you've got streams coming in, you've got WebRTC, you can send a video stream — now you can send a canvas directly to the user. Everyone says we can already do a lot with the user's camera: we've got getUserMedia, which gets the stream from the camera, like a camera app. But — and we only found this out maybe six months ago — if you capture a frame from a getUserMedia stream, it's only around 1080p; it's not a raw, full dump of the camera frame. Now we've got the Image Capture API — again, in Canary at the moment — where you can take a getUserMedia stream, say "I want to take a photo," and it gives you the photo. So if you've got a 21-megapixel camera, in theory
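Putting those two pieces together — again a sketch, against the Canary-era API shapes described in the talk; `recordCanvas` and its callback are my own names:

```javascript
// Sketch: record a <canvas> to a WebM blob by combining
// canvas.captureStream() with MediaRecorder. Assumes a browser
// where both are available; returns false elsewhere.
function recordCanvas(canvas, ms, onDone) {
  if (typeof MediaRecorder === 'undefined' ||
      typeof canvas.captureStream !== 'function') {
    return false; // not supported in this environment
  }
  const stream = canvas.captureStream(25); // capture at 25 frames a second
  const recorder = new MediaRecorder(stream, { mimeType: 'video/webm' });
  const chunks = [];
  recorder.ondataavailable = e => chunks.push(e.data); // buffer encoded data
  recorder.onstop = () => onDone(new Blob(chunks, { type: 'video/webm' }));
  recorder.start();
  setTimeout(() => recorder.stop(), ms); // stop after the requested duration
  return true;
}
```

The resulting blob is what the bookmarklet demo saves to disk as a WebM file; the same stream could instead be attached to a WebRTC connection.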
you'll get a 21 megapixel image, which I think is pretty powerful. The more important thing is that you actually get to understand the settings and capabilities of the camera. We haven't had this before: we can take the media stream and ask what this camera can do. Can it zoom? Can you control the ISO? Can it autofocus? We now get that information back, and once you have it, the next question is: can I do something with it? The answer is yes, roughly. The idea is that if you know the range for zoom, you can say "I want to do a double zoom". And the idea here (this is the video, at least, where I was trying to record the slides and it didn't quite work) is that you have the camera, you change the properties, it updates in real time, and when you take a photo it uses those properties as well. Again, I think that's pretty powerful: we can build full-on camera applications on the web. And then one of the other ones, and this came in last night. This is one of those ones where I was speaking to one of the engineers on this, Miguel, and he was like, "Paul, I've got this API for you, can you talk about it tomorrow?" And I said, "What's the API?", because I'm going to run over time (and I've run way over time already). He said, "I can detect faces. I've got an object detection API. In the future it'll do QR codes and barcodes; right now it does faces." And he showed me the code: you basically create a face detector, you detect the faces in the image you just captured from the image capture API, it passes that to the underlying system behind the scenes, it finds the faces, and you get the information back. I think that's actually pretty powerful. I built a QR code scanner a couple of years ago, and getting it running at 60
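A sketch of the capture-capabilities-zoom-photo-detect sequence described here. The API surface was still shifting at the time of the talk, so these names follow the later spec and should be treated as illustrative rather than as the exact demo code:

```javascript
// Sketch: full-resolution photos and camera controls via ImageCapture,
// plus face detection on the result via the Shape Detection API.
async function takeZoomedPhoto() {
  const stream = await navigator.mediaDevices.getUserMedia({ video: true });
  const [track] = stream.getVideoTracks();
  const imageCapture = new ImageCapture(track);

  // Ask the camera what it can do: zoom range, ISO, focus modes, etc.
  const capabilities = track.getCapabilities();

  // If zoom is supported, apply a double zoom, clamped to the reported range.
  if (capabilities.zoom) {
    const zoom = Math.min(capabilities.zoom.min * 2, capabilities.zoom.max);
    await track.applyConstraints({ advanced: [{ zoom }] });
  }

  // takePhoto() returns the full sensor image as a Blob,
  // not a 1080p video frame.
  const photo = await imageCapture.takePhoto();

  // Face detection on the captured photo.
  const bitmap = await createImageBitmap(photo);
  const faces = await new FaceDetector().detect(bitmap);
  return faces; // each result carries a boundingBox for the detected face
}
```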
frames a second was an absolute nightmare, so one API that lets me do that is actually a really great thing for me. So that brings me on to the next bit. We have this idea of sensors behind the scenes. The face detector isn't really a sensor, but it is a thing that pulls data out of the underlying device. Now, the generic sensor API is an interesting one, because the idea behind it (I need to get this right) is that it provides a common abstraction for accessing hardware consistently inside the browser, so that browser vendors have a way of saying: we've got all these different APIs; how do we access them consistently, in a relatively sane and equal way, across all the different sensors? This has been in draft for about a year and a half, and it's only recently that we've started to actually put it inside the browser. (I'm really proud of that demo, because I knew it was going to annoy so many people.) The idea is that you can have a sensor, like the ambient light sensor in this case, which landed in Chrome, and it just reads the light values from whatever sensor you've got that can detect light levels. You instantiate the ambient light sensor, you put a handler on it for onchange, and then you start it, and it will deliver changes, at a specified frequency if you want, regularly, to your onchange handler. You can also poll: if you don't want an onchange handler always firing, and you only want to read it synchronised to a frame, you can ask for the value of the sensor and it will return the last value. I think that's quite interesting. Ambient light, I don't know how much use there is: you might
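The push and pull models described here, sketched against the Generic Sensor shape. Note the talk describes the early draft's `onchange` handler; the spec that eventually shipped renamed it to `onreading`:

```javascript
// Sketch of the Generic Sensor API shape, using the ambient light sensor.
const sensor = new AmbientLightSensor({ frequency: 10 }); // readings per second

// Push model: a handler fires as readings arrive.
// (Early draft name: onchange; shipped spec name: onreading.)
sensor.onchange = () => {
  console.log(`Light level: ${sensor.illuminance} lux`);
};

sensor.start();

// Pull model: instead of a constantly-firing handler, read the latest
// value synchronously, synchronised to the frame loop.
function onFrame() {
  const latest = sensor.illuminance; // returns the last delivered value
  // ...render using `latest`, e.g. toggle a dark mode below some threshold...
  requestAnimationFrame(onFrame);
}
requestAnimationFrame(onFrame);
```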
put a dark mode inside your application, or you might do something super annoying. But it gets a little bit more interesting when you think about a compass. To build a compelling compass for the web you actually need multiple sensors, and Kenneth from Intel gave me this demo, which I'm grateful for (I think he's up there... there he is, hello). The idea is that you need the accelerometer and the gyroscope to start to compute proper compass values, and at that point it's quite simple: you start both sensors up, you get the changes, you store the changes in some global state, and then you update and render. The logic behind this was harder than I thought, using quaternions and a whole bunch of other stuff, but the whole point is that you've got two or three sensors on the device, and you can start to do really interesting and compelling things with them once you get that data through, without necessarily having to rely on a browser vendor building a compass API just to solve those problems. I think that's actually pretty cool. (Did I just talk over the wrong slide? I didn't play the video; I'm sorry about that.) So those are the newer APIs coming through. I think some of them are pretty cool, and some of them are very hardware-driven. The one thing I do want to get across is that I want these web APIs to, I was going to say mimic native, but that's the wrong way of putting it: I want all the capabilities of the native platforms to be available to web developers, but I don't want us to lose our soul by insisting on exact parity with those APIs. There are very webby things that we can do that no other platform can do, and that's one of the things that I think is pretty cool, especially the whole ephemeral
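The start-both-sensors, store-state, render-each-frame structure described here looks roughly like this (the quaternion fusion from the real demo is elided, since that code isn't shown in the talk):

```javascript
// Sketch: a compass built from two generic sensors.
const accelerometer = new Accelerometer({ frequency: 60 });
const gyroscope = new Gyroscope({ frequency: 60 });

// Store the latest readings in shared global state...
const state = { accel: [0, 0, 0], gyro: [0, 0, 0] };
accelerometer.addEventListener('reading', () => {
  state.accel = [accelerometer.x, accelerometer.y, accelerometer.z];
});
gyroscope.addEventListener('reading', () => {
  state.gyro = [gyroscope.x, gyroscope.y, gyroscope.z];
});
accelerometer.start();
gyroscope.start();

// ...then fuse and render once per frame.
function render() {
  // Combine state.accel and state.gyro into a heading here (the real demo
  // uses quaternion maths for this), then rotate the compass needle.
  requestAnimationFrame(render);
}
requestAnimationFrame(render);
```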
aspect: very short, lightweight experiences, whether that's a marketing campaign that a lot of people get asked to build, or even things like election results. You don't want to have to build a native application and deploy it through the stores; you just want someone to go to a URL, start interacting with the experience, and then, when something happens, be able to respond to it. I think that's a very powerful thing. And if you look at things like Physical Web beacons today, that's cool; quite a few people had them here. The Physical Web broadcasts a URL, your phone (or any device that can pick up the beacon signature) picks it up, understands what URL is being broadcast, presents you with some metadata and a user interface, and then you can click on it and start to interact with that experience. It's super lightweight. No one is ever going to build or install an application just to interact with, say, the TV at a conference, and the ephemeral nature of these experiences, especially through the Physical Web, is really powerful. But the really interesting thing for me is this: yes, we can discover a beacon, which is kind of cool, and it points to a web experience, but sometimes we want to take the web experience, the URL being presented, and actually connect it to a physical device. They were talking about the Internet of Things earlier, and this is where you can start to see the tie-in with Web Bluetooth. We talked about this last year, but it's at the point now where it's an origin trial (I think it's an origin trial... it's still an origin trial, it's an origin trial), so you have to enable it and tell us you're going to use it, which is fine. The API still might change, but you can start to build really compelling experiences. You can have a piece of hardware; this is the PlayBulb
Candle. Vincent has been walking around the venue with the actual PlayBulbs, and we've had a code lab as well. The idea is that you don't need a native application to interact with that experience: it literally links to a website, which then connects through to the Bluetooth device; you can interact with it, and then you can walk away. This is an added-to-home-screen progressive web app, and once you've interacted with it, you don't have to install anything to use it again. I think that's incredibly powerful: super lightweight experiences that we can do a lot with. It's worth understanding a little of the Bluetooth space here (if you're not going to build things with Bluetooth, you probably don't need to understand it too much). A BLE device broadcasts a whole bunch of attributes through its GATT server, and you have this idea of services: your device can have multiple capabilities. It could be a battery service; it could be, in this case, the candle service. You connect to a service, and then you can get different attributes off the back of it. With the battery service you probably only ever want to read the battery level, but you can get that and start reading the data, and you can also be notified of changes to the data. It's actually a really simple, or relatively simple, API once you understand how to interact with the device: once you know what data you need to send it and how you should connect to it, it's relatively straightforward. And it gets even simpler with the async/await syntax, because you're not having to chain promises together, then after then after then. The discovery phase is: you call navigator.bluetooth.requestDevice, tell it the type of service you want to connect to, and then you'll get
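The battery-level read described here takes only a few lines; `battery_service` and `battery_level` are standard GATT names that Web Bluetooth understands, and parsing the value is plain DataView work (this assumes a `device` already obtained via `requestDevice`):

```javascript
// The Battery Level characteristic is a single unsigned byte, 0-100.
function parseBatteryLevel(dataView) {
  return dataView.getUint8(0);
}

// Connect to the GATT server, find the battery service, and read the level.
// `device` is a BluetoothDevice obtained earlier via requestDevice().
async function readBatteryLevel(device) {
  const server = await device.gatt.connect();
  const service = await server.getPrimaryService('battery_service');
  const characteristic = await service.getCharacteristic('battery_level');
  const value = await characteristic.readValue(); // returns a DataView
  return parseBatteryLevel(value);
}
```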
prompted: we know there's a device here, do you want to access it? Once you have access, you can physically connect. Then you try to get access to the service; in this case, for a heart rate monitor, you say "I want the heart rate service". Then you say: I've got the service, I need regular data from it, so I'm going to get the heart rate measurement, and I want to be notified whenever the heart rate measurement changes. I think that's a relatively easy flow for getting some lightweight interactions with a device. It gets a little bit more complex when you think about things like WebUSB. WebUSB is an interesting API, and again this is a demo from Kenneth, but the idea is that a web page can connect to a USB device. So you type some data, send it through, and it appears on the device: you've connected to the device and sent it some data. The first thing people say is, "I don't want a web page accessing my USB devices", and there's a very good Medium post by Reilly Grant, the engineer on this project, describing the security model of WebUSB. The whole point is that not every site will be able to get access to any USB device; only sites whitelisted by the device itself. The device has to say "this site can connect to me", and only when the user has actually opted in and granted access will the connection be made. So the idea is that you get USB-based experiences: you plug the device in, and the owner of that piece of hardware can say, "yes, I'm going to build the web-based user interface for this experience", while random other sites won't be able to. I think that's quite a powerful security model for the web. And again, the API is very
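The heart-rate flow just described, sketched end to end. The service and characteristic names are the standard GATT ones; the parsing helper follows the Heart Rate Measurement format, where the first byte is a flags field:

```javascript
// Heart Rate Measurement parsing: bit 0 of the flags byte says whether the
// rate is an 8-bit value or a 16-bit little-endian value.
function parseHeartRate(value) {
  const is16Bit = value.getUint8(0) & 0x01;
  return is16Bit ? value.getUint16(1, /* littleEndian= */ true)
                 : value.getUint8(1);
}

// Discovery and subscription flow: request a device advertising the heart
// rate service, connect, find the characteristic, and ask to be notified.
async function watchHeartRate() {
  const device = await navigator.bluetooth.requestDevice({
    filters: [{ services: ['heart_rate'] }],
  });
  const server = await device.gatt.connect();
  const service = await server.getPrimaryService('heart_rate');
  const characteristic =
      await service.getCharacteristic('heart_rate_measurement');

  characteristic.addEventListener('characteristicvaluechanged', (event) => {
    console.log(`Heart rate: ${parseHeartRate(event.target.value)} bpm`);
  });
  await characteristic.startNotifications();
}
```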
similar to the Bluetooth API. Rather than Bluetooth's requestDevice, you do the same thing with USB: as the hardware vendor you know your vendor ID and all that type of stuff, so you can connect to it, the user grants access, and you get the callback through. And then it actually gets really complex. I remember a couple of years ago we tried to make an Xbox Kinect thing, and it's really complex what you have to deal with: the types of control methods and the data transfer. If you're into USB or hardware you'll probably understand this; I don't particularly, because I'm not building hardware interactions, but you do get to choose the control method and the data transport mechanisms, so you get a lot of control over the device. And then we also start to think about new types of experiences. Those two experiences, in theory, have been quite lightweight: you can have a device or a thing around you, start experiencing it, leave, and it's fine; you've not installed anything. The WebVR experience, I think, is an interesting space to be in, because it's quite... immature is the wrong word, but it's quite nascent at the moment. Things are changing; everyone's trying to explore what to do in the space of WebVR. (Who's got a PlayStation VR? One person. I bought one; they're pretty cool.) We don't know how to use these experiences properly, and we don't know how to build them properly yet either, but the Chrome team in particular have been working on making sure you can start to build web-based experiences that are powered by the VR subsystem, and it's in Chrome 56 at the moment, again behind an origin trial. The thing about WebVR for me is not that it's going to take over the world and everyone must use it, but that it's uniquely positioned to provide compelling experiences that are very web-like: you don't have to install a whole bunch of native applications just to experience some
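A sketch of the WebUSB flow described above. The vendor ID, configuration, interface, and endpoint numbers here are placeholders for whatever the real device exposes; the page only ever sees devices whose vendor has allowed it, and only after the user grants access in the chooser:

```javascript
// Sketch: connect to a user-approved USB device and send it some bytes.
async function sendToUsbDevice() {
  const device = await navigator.usb.requestDevice({
    filters: [{ vendorId: 0x1234 }], // hypothetical vendor ID
  });
  await device.open();
  await device.selectConfiguration(1); // placeholder configuration number
  await device.claimInterface(0);      // placeholder interface number

  // Control and bulk transfers give fine-grained access to the device;
  // here we just push some data out on endpoint 1.
  const data = new TextEncoder().encode('hello device');
  await device.transferOut(1, data);
}
```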
web-based VR content. And the interesting thing, if you've used the Chrome Dev Summit site, is that we believe progressive enhancement is key to this: you can build experiences that live on the web, there for people to interact with, irrespective of whether they have the piece of hardware needed for a VR system. The way we implemented it, and the way we're thinking about these early VR experiences, is not to say you should go out right now and build a whole bunch of AAA-class games to take advantage of WebVR; it's a much more incremental approach. On the Chrome Dev Summit site, first you had the plain old image, the picture of this venue. Then you had a 2D immersive view: if you had a device with WebGL, you could click on it and drag your mouse around to look around the scene. Then you had a sort of faux-AR view if you had an iPad or an iPhone or any device with a gyroscope. And then, if you have a headset (and I think the headset got launched today), you can pop your phone in and experience the WebVR experience first class. So this is the experience we've got on the Chrome Dev Summit site: we know this device doesn't have WebVR, but we can provide the immersive experience because we have WebGL. And I think that's pretty cool, because this is the model: the plain image view; the immersive 2D view, where you can move around (not every experience will be like this, but it's quite powerful that you can do it); and then full immersion, rendered in WebGL using the WebVR work from Boris on the Chrome team (or the Google team, at least). You can basically pop your phone into the hardware, it will know you've connected it, and then you move
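The progressive-enhancement ladder described here can be sketched as a single feature-detection function. This uses the WebVR 1.1 `navigator.getVRDisplays` API that was current at the time of the talk (it has since been superseded by WebXR), and the view names are made up for illustration:

```javascript
// Sketch: pick the richest view the current device supports,
// falling back step by step to a plain image.
async function chooseView() {
  // Headset connected and WebVR available: full immersion.
  if (navigator.getVRDisplays) {
    const displays = await navigator.getVRDisplays();
    if (displays.length > 0) return 'webvr';
  }
  // Gyroscope available: the faux-AR "magic window" view.
  if ('DeviceOrientationEvent' in window) return 'magic-window';
  // WebGL available: the drag-to-look immersive 2D view.
  const gl = document.createElement('canvas').getContext('webgl');
  if (gl) return 'immersive-2d';
  // Otherwise: the plain picture of the venue.
  return 'static-image';
}
```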
around and it automatically moves into this mode. I think that's actually pretty cool, because you get to the point where every single user can experience your site. You're not building an application for this experience, you're not having to get people to go and install it, and if they have the VR capabilities they can take advantage of them pretty quickly. You can do this for videos, you can do this for images; it's a really nice way of doing it. Now, we ultimately want to get to the point of building those AAA-class games; I personally don't know whether we're there on the web just yet, but I think we're getting to a good place. And the final thing I would say is that I want to get to a point where we have a common understanding (whatever model that ends up being) of how we want to deploy these experiences on the web. The web has properties that no other platform has, specifically around ephemerality, linkability, and indexability: you can give a link to anyone, and they can start to use that experience anywhere. The last thing I would say is: if you are interested in the progressive web app space and the future developments from Chrome and these new APIs, we have our developer portal on developers.google.com. If you go to /web/updates you will get all the new APIs as they come through Chrome. But our guidance and our focus is this: the new and shiny stuff is great, it gets people excited and inspires you to build the next generation of things, but developers.google.com is our place for building with all the technologies available today, so it's much more focused on responsive design, performance, service worker, and progressive web applications, and obviously developer feedback as well. So with that, I know I ran completely over time, but I would like to thank everyone
Is this the rehearsal, right? By the way, you're still talking. Did they say keep going? Are you trying to run straight into Chrome Dev Summit 2017? Ladies and gentlemen, Paul Kinlan! That was like the last Lord of the Rings film; it just had eight endings. Right. Oh, I should refresh the page, shouldn't I, because we just did a deploy, because that's a sensible thing to do in the last session. There we go. So this is where we find out, in five page loads, the winner, or the top three, of the Big Web Quiz. Give them a round of applause, and you can come and get your prizes. Are you all here? Do we have Ennicholas here, Philip here, Matasaka here? Yeah, come up here. It's not going to sort itself. And you've come for the souvenir selfie as well, haven't you? You win a terrible phone, there you go. Quick, it's not going to sort itself. The best part of the prize: we'll send you that. You should write a series of books or something. Pick up a mug; it's not going to pick itself up. The worst prizes ever. And don't forget the phone. Take the phone. You don't have to have it; it's terrible. Here we go; selfie faces are the best. Alright, thank you very much. Oh, the phone, yes, okay. And smile. Congratulations, a round of applause for them! Wow. Well, it's felt like a long final talk... no, it's been a long couple of days, and I've had a lot of fun. Have you had fun? Have you all had fun? So I think the last thing we do is thank everyone. We're not going to do that thing where we name everyone and do a separate round of applause for each, because we'd be here long enough. Absolutely. We'll do the speedy version, where you start clapping, we make our way through the list, and then we're good and we all go home. That's it; that's what happens. So a huge thanks to the organisers, the caterers, the security, the AV team, the video production team, the speakers, the panellists, so
everyone here. They make it look really easy; it's really hard. But also a huge thanks to everyone, all of you. Thank you so much for coming. Hopefully we'll see you next year at Chrome Dev Summit 2017. Thank you very much!