Today, we're going to talk about Chrome Status, which is the Chrome team's feature dashboard, and how we made it into a lightning-fast progressive web app using Polymer. That's the official title, but this is the actual title: it's everything I've learned about building apps in Polymer in the last four years, in 30 minutes or less. So there's a lot of stuff I've learned about web components and Polymer in those last four years, launching production apps, really getting involved with web components. I can't cover everything today, so feel free to hit me up on Twitter. This is me. My name is Eric Bidelman. I'm a developer advocate for the Chrome team. I work with the Polymer team. I'm really excited about web components. I'm a "digital Jedi" at Google, which just means I talk to developers around the world and try to help them build modern web apps. During my time at Google, I've spent time just building stuff. On DevRel, we build samples, we write tutorials, we help developers use new emerging technologies, but every once in a while, we'll actually build real things, which is fun for us, right? We get to do the things you all do out in the wild. We get to use open source technologies, things like GitHub, Gulp, all this really cool stuff. So it keeps us up to date. We stay sharp. We can talk to developers about this stuff. But we also open source all of these projects so you can learn from them. A couple of cool examples here: HTML5 Rocks, which is something my team put out a long time ago. I don't know if HTML5 is really a thing anymore, but it's still around. You can go check it out. We built Polymer's website, and that's gone through a couple of iterations as Polymer has changed and web components have changed. There's Santa Tracker. It's almost the holiday season, so we're starting up that project again. That's using Polymer.
You can follow Santa around the globe as he delivers his little presents. Very cool stuff. Very interactive. A good use of new modern tech. Google's codelab site is done in Polymer, and the Google I/O web application the last couple of years is something we do in-house; it's a progressive web app using Polymer. A lot of cool stuff. Today I'm going to talk about Chrome Status. How many people know what Chrome Status is or have ever used it before? Nice. Not enough people. Good thing I'm up here talking about it today. Chrome Status is the Chrome team's dashboard for what's happening in Chrome and in Blink, the rendering engine. You can go in here and check it out. It's basically the list of features that have landed in Chrome, that are going to land in Chrome, and things that we're thinking about proposing and putting in Chrome. If you ever have a question about what landed in Chrome 53, you can go to Chrome Status and find that information right away. We also have things like being able to easily file bugs for each of the features in the list. You can drill into features. You can see what other browsers think about a certain feature, whether they're going to adopt it or implement it or not. We have some other cool information on this site that I'll talk about, such as samples and some other things. This is using Polymer. It's using App Engine, and that's chromestatus.com. It's not just Chrome that's in this browser dashboard business anymore. Firefox has a platform status page where you can see what they're working on. WebKit has one of these on webkit.org; you can track what WebKit is landing and things that they're thinking about. And Microsoft has this, too. Edge has a really, really comprehensive dashboard with stats and other things that you can check out. As a developer, this is really, really awesome. You can go in.
You can see, for each of the different browsers now, exactly what they're working on, what they're thinking about implementing, what they're not going to implement. You can hedge your bets on whether you want to adopt a feature or not. So let's take a tour. Before we dive into Chrome Status, I want to talk a little bit about where this all started, all these dashboards that the browsers are working on. This all really started with the fork from WebKit, when Blink was created, when the Chrome team decided, hey, we think we can do a new rendering engine. And the web was like, oh my gosh, there's a new rendering engine. It's going to be the '90s again. Chrome's just going to launch all these features really quickly. We're going to fragment the web. Luckily that hasn't been the case, because Blink set out with a really, really awesome mission. And the Blink mission is basically to improve the web, to innovate on the web as quickly as we can, but also to be good citizens about it. We don't want to just introduce features into Chrome without considering the consequences, without justification, or without going through the standards process. We don't want to go back to that fragmented web we came from. So in 2013, we launched an initial version of the Chrome dashboard on Chrome Status. This is what it looked like. It was literally a single-page application in every sense of the word: it was one page. But it was a start. It was an iframe to a Google spreadsheet, and engineers would go in and update the spreadsheet, add new features, you know, file bugs. And as a developer, for the first time, you could kind of understand what Chrome was working on. You had complete transparency into what we were going to put in Chrome or what we had already put into Chrome. But it sucked as far as usability. Again, it was an iframe. You couldn't really navigate this thing. It was a massive spreadsheet with dozens and dozens of features. So that wasn't good enough, but it was a start, right?
And it was part of this effort to be transparent with Blink. So I spent the summer learning Polymer and web components. And I think Chrome Status is one of the first, if not the first, production application using web components and Polymer. That was kind of my goal: to adopt and try these new features and build this thing out. And so we did. We spent some time building it. That was hard, because the specifications were changing. If anybody's been around for a while and has done web components in the last couple of years, you know that the APIs have changed. Now we're at v1, where the browsers are actually implementing these newest standards. But being on the bleeding edge was really tough back in 2013. Just two examples here from custom elements: these names were changing literally by the month. Blink engineers would implement APIs, the standards bodies would bikeshed the heck out of them, and then they would come back, and go back and forth, back and forth. So it was really challenging early on, but we made it through. And in 2014, we launched Chrome Status again. We gave it a UI refresh. You have this nice infinite-scroll list of features. It used Polymer 0.5. It was a proper web application with database integration on App Engine. Chrome engineers could go in, add a feature, edit a feature. We had folks from Opera and Microsoft contributing, keeping this data up to date. It's a full CRUD system. So it's a great example of a server-side rendered application, kind of using old-school technology and new-school technology. If you want to check out the source and learn how we did it, feel free to do that. It's responsive, so it works on mobile. You can actually use the thing on your device. The really bad part about this early version was that it was slow. It was really slow. So we got feedback, and we were like, oh my gosh, are Web Components slow? Is Polymer slow? We built this thing.
We were really jazzed up about Web Components. It turns out, after some investigation by the Chrome team, we used Chrome Status as kind of a vehicle, a testbed, to benchmark and optimize Web Components within Blink and Chrome. There was a lot of low-hanging fruit, right? These were new APIs, new things that were added to the platform. They just hadn't been optimized yet. So there was a lot of cool stuff early on where they basically just looked at Chrome Status, figured out why it was slow, and made Blink and Chrome better. One example is Shadow DOM. The browser doesn't have to do all this style recalculation across the entire page anymore. It can actually do less work, since we have Shadow DOM and scoped styles. So we fixed that in this bug here. Another one was just being better at caching things: across thousands of Web Components, you might as well share the same style information across those components. Just doing that one sped up Chrome Status quite a bit. And some other really interesting ones: if you literally put an empty style tag on your page, it went through this crazy n-squared code path in Blink and just destroyed performance — just a blank style tag. That's fixed, so you can do that now, but it wasn't at the time. And this wasn't particular to Web Components, either. It was just kind of something that no one had ever done on the web before, I guess, or we didn't realize it at the time. So the Blink engineers made that better. And there are some other good ones on here. insertBefore, a really common DOM API, was identified as kind of slow. We made it 20% faster as part of this investigation. So Blink got faster, Chrome got faster, because Web Components got faster. Then Polymer 1.0 came out, so I spent another summer rewriting the app yet again. I wanted to use Polymer 1.0. It's faster. It's slimmer. It also had all the cool material design elements that the team had been working on.
So that's kind of where we are today, in 2015, 2016. Using Polymer 1.0, it got a little bit of a UI refresh. It's a little bit easier to use across mobile and desktop. It's a full progressive web app now, which means we're using service worker. It works offline. And we spent a great deal of time actually making it fast, too. Not just for folks that have service worker: our first render is really good. We're using things like HTTP/2 push and preload to get some of that critical stuff to the screen as quickly as we can. We've also been doing work on samples. The Chrome dashboard is all about features, but you probably want to know how to use those features. So my team, as part of Chrome releases, implements new little samples on GitHub. You can check them out. All the source code is up, so you can learn about these features and what's going on. Something that's kind of cool with Chrome Status that I want to highlight real quick, before we jump into the guts of it, is the metrics section. A lot of people don't know about this. They don't really understand where it's coming from. Essentially, we had the question of: what features are being adopted on the web? And can we expose this information to developers so they can understand that? So if you drill down into this usage metrics button here on Chrome Status, you get this list. There are stack ranks of CSS features and JavaScript features and HTML features. And we have timelines as well for the different features and how they've been adopted over time. This information actually comes from Chrome. If you opt into this setting, Chrome reports crashes — if your browser crashes, we want to know about that to make Chrome better. But it also reports anonymous usage of CSS features, HTML features, and JavaScript features on web pages. And so we can gain some insight into how these features are being used across the web.
What happens when you toggle the setting in Chrome is that Chrome starts logging information to this crazy page, chrome://histograms. And you can see this page is kind of a neat little ASCII graph with these buckets on the left here, 0 through 57. You can see it's obviously recording something. But what is it recording? Well, it turns out if you dive into the Chromium source, you can see the mapping between a bucket number and the actual feature. So number two here, in histograms.xml, is the CSS color property. And I can see, to no surprise, that the CSS color property is something that's used across the web a lot. So it makes sense that that bar is a little bit taller. We aggregate all this, and we expose it externally and publicly on Chrome Status. You can see here on Chrome Status that about 60% of the pages Chrome loads use the CSS color property. So that's really cool. You can kind of gain insight into how features are being adopted over time. And so we have metrics over time. You can track things like the fetch API. We're really up and to the right with this one. Developers are going crazy for it. It's easy to use. It's part of service worker. It's got a great polyfill story. And that's why I think the adoption is ticking up so well. CSS Flexbox has gone through a little bit of shaky ground with different changes in specs over the years. You can kind of see people get excited about it; it drops down and comes back up. Ultimately, people are adopting it. About 20% of the web uses CSS Flexbox. Shadow DOM v0 is an interesting one. This is the kind of Shadow DOM that's in Chrome today, not the new one that Apple and other folks are implementing. What happened in 2014 is that Polymer 0.5 came out, which used native Shadow DOM if the browser had it. So you see really good adoption around Google I/O time. People got super excited about it and started using Polymer.
Over time, as people have migrated from 0.5 to 1.0, and 1.0 doesn't use native Shadow DOM by default, you can see native Shadow DOM is kind of on the decline. A lot of people are using Polymer, and because they're not using the native stuff anymore, that's what's happening. Recently, the Chrome settings pages and some really large Google properties have started using Polymer 1.0 and have opted in to native Shadow DOM. So you see that usage go back up. One reason we expose this information is as part of this mission, which is to, of course, introduce new features as quickly as we can, but, again, be good citizens about doing so. This actually drives Chrome's deprecation policy. This information, the usage of these different features, drives our decision to deprecate features in Chrome. And you can see on Chrome Status that there are actually some features that have been deprecated because they've dropped below a certain threshold of usage. We know they're not used on the web anymore, so we can safely remove them from Chrome. So that's really awesome. All right. So let's talk about Chrome Status and how we built some of this stuff — a little bit about the architecture, how we kind of arranged things and built the site, but also performance. And I think you really have to start with performance, because it really dictates the architecture of the site. The most important thing for Chrome Status is how fast you go from this white screen to this list of infinitely scrolled features. I call that the "time to features." That's the first meaningful paint for Chrome Status. For that, we're using the PRPL pattern. And you're literally going to hear "PRPL" said like 1,000 times at this conference. This is my moment to say it. It's push, render, pre-cache, and lazy load. Too many Ps. But essentially it's: be lazy, do less, load fewer things upfront when your page first loads.
And so we're really taking advantage of that on Chrome Status, to do as little as possible when the user first visits the site. So let's take a look at that. Our first paint for a brand-new user — Nexus 5, Chrome, 3G connection — is about 1.9 seconds, which is pretty good for that scenario. And you can see what happens over time: the app shell loads very quickly, and then the features eventually come in at about 3.8 seconds. That's the time to features. That's when the feature list is actually there and users can actually use the site. So that's pretty good. But of course, it gets even better thanks to service worker and the ability to cache things offline. The repeat view is 1.9 seconds first paint again, but that time to features, the thing that actually matters for Chrome Status, dropped by about 800 milliseconds. We're pretty excited about that. And that's, again, because of service worker: the network is completely out of the picture at that point. We're just loading things directly from the cache. As far as the architecture is concerned, for Chrome Status we chose the app shell model. For us, it made sense to have the top nav and some of the UI load right away for our first paint, and then the dynamic content — that feature list — comes in from the database, and that's how we load it in. So we're leveraging things like HTTP/2 server push and preload, some of the new protocol features you can take advantage of to get resources to the user's machine very quickly. I decided to inline small amounts of JavaScript and CSS, because it really didn't make sense to make those extra requests. It's a very small payload, so we just put those directly in the page using server templates. And then it's async across the board: async loading polyfills, async loading imports so we don't block the rendering of the page, async loading that features list.
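As a rough illustration of coordinating all those async loads with promises — this is a hedged sketch, not the actual Chrome Status code, and the loader functions are injected so the logic stays testable outside a browser:

```javascript
// Hypothetical sketch: kick off the polyfill load, the element imports, and
// the features fetch in parallel, and resolve once everything is ready so the
// app shell can render the feature list. All three loader names are made up.
async function whenAppReady(loadPolyfills, loadImports, fetchFeatures) {
  const [, , features] = await Promise.all([
    loadPolyfills(),   // resolves immediately if feature detection says "no polyfill needed"
    loadImports(),     // the async-loaded HTML imports / element bundles
    fetchFeatures(),   // the features list JSON from the server
  ]);
  return features;     // caller stamps this into the list once it resolves
}
```

In the page, `fetchFeatures` might be `() => fetch('/features.json').then(r => r.json())`, and the resolved value is handed to whatever renders the infinite-scroll list.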
So there's a lot of stuff we have to manage asynchronously, and that's basically just using promises. I'm a big fan of promises to manage all that asynchronous behavior. And then things like service worker get us a fast first paint. The structure of the app, using this app shell model, is just custom elements. It's using some of Polymer's custom elements — the app layout elements are really awesome. For responsive design, we basically get that for free by using these elements. And we basically use server templating — good old-fashioned Django and Python, woo-hoo — that gets injected inside the light DOM of these elements, and you basically have a server-side rendered app. Nothing really fancy here. It's just kind of interesting that you're combining these two worlds now. We also hand-rolled a couple of our own elements for certain things, one-offs that we needed. Things like this list of Chrome versions: this Chrome metadata element just fetches the list of Chrome versions from the database and renders it in our app drawer. And the bread and butter, of course, is this infinite-scroll list of features, where you can drill in and discover more about what's landing in Chrome. For that, we're using the iron-list element. This is a really awesome performance tool that we found. We have a list of features, so it made sense to have an iron-list of features that was essentially an infinite-scroll list, right? It manages the DOM for us. We don't have to worry about features being added over time, and we're not just generating massive amounts of DOM, because it only renders what we need. That was really effective at reducing some of the load time on Chrome Status. I'll show you how much, too. Before we introduced iron-list into the features page, we were basically just server-side rendering a crap ton of features: 4,400 elements in the DOM on page load, which is way too much, way too much.
Modern apps use less than 1,000 — Gmail, Inbox, all of these. That was about 1.2 seconds, so way too slow for an experience like this. When we introduced iron-list, that dropped down to about 53 elements on page load. Super good. And of course, the load time dropped dramatically as well. So check out iron-list if you have anything like a grid or a list on your page. It works really well. This year, we spent a lot of time making it a progressive web app. All the usual suspects apply here. We have an add-to-home-screen experience, a splash screen — you can see what's happening in the video here. We spent a great deal of time making offline work really well. We're using the sw-precache and sw-toolbox libraries for that, so we don't have to deal with some of the craziness that service worker has. Lighthouse is an awesome tool for gauging how well your app does as a progressive web app. We get about a 95 out of 100 on Lighthouse. Check that out. It's a CLI tool, but it's also a Chrome extension, and it goes through the list of things you need to do. Offline is interesting for Chrome Status. We're doing something that's a little bit different for the UX. We do have a service worker, which I'm not really going to go through. But the cool thing we're doing is actually showing users how much data we're pre-caching in the service worker. Essentially, what we do is wait for this promise to resolve. This loads our imports — since we load our imports asynchronously, we want to know when they're resolved and loaded, and then we can use our custom elements at that point. After that, we get the list of things we've cached via the Cache API and essentially calculate how much we've cached. And then we present this little toast message, which is a custom element, that says: hey, this site's going to work offline. It's ready to use. It's using service worker. And we've cached about 800 kilobytes of data.
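The "how much did we cache" calculation might look roughly like the following — a hedged sketch, not the actual Chrome Status code. The Cache Storage object is passed in (anything shaped like `window.caches`, with `keys()`/`open()`, where each cache exposes `keys()`/`match()`), which also keeps it testable:

```javascript
// Sum the byte size of everything the service worker has pre-cached, so a
// toast can report it to the user.
async function cachedBytes(cacheStorage) {
  let total = 0;
  for (const name of await cacheStorage.keys()) {
    const cache = await cacheStorage.open(name);
    for (const request of await cache.keys()) {
      const response = await cache.match(request);
      const body = await response.arrayBuffer(); // read the cached body
      total += body.byteLength;
    }
  }
  return total;
}

// In the page, something like (toast() is a hypothetical helper):
// cachedBytes(caches).then(b =>
//   toast(`Cached ${(b / 1024).toFixed(0)} KB of data for offline use`));
```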
So oftentimes people ask: how much data should we cache in service worker? How much is too much? One thing we're experimenting with here is just showing that and being up front about it. We can track this over time. We can decide, hey, maybe if it gets to a megabyte later, that's too much stuff we're pre-caching; our app has grown too big; let's whittle it down. So: being transparent about what you're doing with offline. That's some stuff that's interesting in particular to Chrome Status. For the last couple of minutes here, I want to talk about generally good things you should be doing in all of your Polymer apps, and just generally good web stuff — things you might often forget to do. The first is to lazy load. This is part of the PRPL pattern: be lazy, load less upfront. That goes for the polyfills as well. Polyfills for life. Some day we won't have polyfills, because the browsers are now implementing these APIs. But until we do, we can lazy load them and use proper feature detection, right? Just as you would with any other polyfill. And this applies to the web components APIs as well. So in Chrome Status, I wrote a method called "lazy load web component polyfills if necessary." That's literally the method name. It's very descriptive, but you know exactly what it does. We feature-detect the v0 web component APIs using this little script here, and then just dynamically load the script tag depending on whether the browser needs it or not. So we can save network requests in browsers like Chrome, or any browsers that have these APIs. Number two is to opt in to some of Polymer's performance flags. So, FYI, this is Polymer 1.0. This is not 2.0, the new hotness. In 2.0, actually, you don't need any of this; it's on by default. But these are 1.0 flags that you can opt into. The first thing you do is set up this object before Polymer loads, okay?
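Going back to tip number one for a second: the feature-detect-then-load approach might look roughly like this. This is a sketch mirroring common v0-era detection, not the exact Chrome Status code, and the polyfill path is a placeholder; `doc` is injected so the check is testable outside a browser:

```javascript
// Detect the v0-era web component APIs; if any are missing, lazily inject
// the polyfill script. Browsers that have everything skip the request.
function needsWebComponentPolyfills(doc) {
  const hasCustomElements = 'registerElement' in doc;             // v0 custom elements
  const hasImports = 'import' in doc.createElement('link');       // HTML imports
  const hasTemplate = 'content' in doc.createElement('template'); // <template>
  return !(hasCustomElements && hasImports && hasTemplate);
}

function lazyLoadWCPolyfillsIfNecessary(doc) {
  if (!needsWebComponentPolyfills(doc)) {
    return Promise.resolve(); // nothing to fetch
  }
  return new Promise((resolve, reject) => {
    const script = doc.createElement('script');
    script.src = '/static/webcomponents-lite.js'; // hypothetical path
    script.onload = resolve;
    script.onerror = reject;
    doc.head.appendChild(script);
  });
}
```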
You construct an object, and you can specify certain performance flags. One of those is to use native Shadow DOM. If you're in a browser that has native Shadow DOM, you should opt into that. You get the benefits of style scoping, and you get the performance optimizations I talked about that Blink made to the native APIs. You can leverage that with the dom: 'shadow' flag. lazyRegister: 'max' — basically, this tells Polymer to delay some of the work it has to do when your page loads, until an instance of your component is created. So instead of doing all this fix-up work — shimming styles and mimicking Shadow DOM — at page load time, it waits to run that stuff until an actual instance of your component is stamped. It's another good optimization for a fast page load. The last one here is to use native CSS custom properties. I think all the modern browsers now, at least the latest versions, have CSS custom properties, so you might as well opt into those and get the benefits of the native CSS engine, instead of using JavaScript to parse stuff. Last pro tip: always, if you can, use asynchronous imports. This won't block the rendering of your page. It requires a little more work on behalf of the developer to know when things are loaded, but ultimately your users are going to be happy with you, because you're not showing that white screen of death for so long. Number three: avoid web fonts. I did a side-by-side comparison for Chrome Status. We were using Roboto, and then you do a side-by-side comparison with just the natively installed font on the system. You really can't tell the difference, right? So I said, well, let's just remove Roboto altogether. There's no point making requests for Roboto, wasting bandwidth, and also having to deal with things like a flash of unstyled content. Might as well just show the user the native stuff. So it got a lot faster just because of that. Use web fonts sparingly if you can.
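Pulling the three Polymer 1.x flags together, the settings object — which has to be defined before polymer.html is imported — looks something like this (in 2.0 this behavior is the default, so the block is unnecessary there):

```javascript
// Polymer 1.x performance settings, set before Polymer itself loads.
window.Polymer = {
  dom: 'shadow',                 // use native Shadow DOM where the browser has it
  lazyRegister: 'max',           // defer per-element registration work until first instance
  useNativeCSSProperties: true,  // native CSS custom properties instead of the JS shim
};
```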
Number four is to leverage things like HTTP/2 server push and link rel=preload. There are a couple of ways you can do this. The first is to be declarative. You can declare a link preload in your page, and that tells the browser: hey, I'm going to need this resource eventually, might as well just go fetch it. So that's a high-priority resource that the browser will download. It'll stick it in the cache, and then when the browser actually needs import.html, it'll use it from the cache. What I recommend, though, is using the HTTP header. That actually gets sent with the page itself, so you're kind of bootstrapping the browser's cache ahead of time, before the page ever gets downloaded. And so it's a little bit faster than telling it, hey, download the page, and then look for this preload link. And you can also create it in JavaScript, right? If you want to lazy load things after the fact, you can do that: just create a link tag, set the rel to preload, and then append it to the page. So there are a couple of different ways you can take advantage of this new stuff. I got so excited about push last fall that I wanted to make it really easy to do on App Engine, because ultimately Chrome Status is on App Engine. I wanted to make it easier not only for myself, because I'm selfish, but for you all, too. So I wrote a couple of little things, and hopefully you'll find them useful. The first is this manifest generator. It's a little Node script that you can run against your app, and what it does is find the static resources on your page. It outputs this little JSON blob — in this case, it found a CSS file, a JavaScript file, and my HTML import. Then you combine this with another library for App Engine. You just decorate your handlers and reference this JSON file, and it's going to server-push all the resources in that JSON blob along with your handler.
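Under the hood, what those tools generate is an HTTP Link header. A hedged sketch of the idea — the manifest shape and helper name here are illustrative, not the tool's exact format:

```javascript
// Turn a manifest of static resources into the Link header value that a
// server sends to trigger preload (and, on supporting servers, H2 push).
function buildLinkHeader(manifest) {
  return Object.entries(manifest)
    .map(([url, {type}]) => `<${url}>; rel=preload; as=${type}`)
    .join(', ');
}

const header = buildLinkHeader({
  '/static/css/main.css': {type: 'style'},
  '/static/js/app.js': {type: 'script'},
});
// The server then responds with:
//   Link: </static/css/main.css>; rel=preload; as=style, </static/js/app.js>; rel=preload; as=script
```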
And this is really awesome, because you basically don't have to know how to construct the Link header; you don't have to worry about that. You just run these two scripts, and you get HTTP/2 push for free on App Engine. Very cool. Number five is to make a custom icon set. Polymer has all of these material design icons — you're probably familiar with iron-icon and using them — and they're kind of grouped in different collections. What I did for Chrome Status was decide, hey, I don't need all of these icons, I just need a few, right? I need one from this icon set, one from this other one. So I created this tool — you can go to poly-icon.appspot.com — and essentially it just allows you to create a custom icon set for your app. You pick and choose the icons you need, it outputs the HTML, you cut and paste it into a file, and boom, you've got a custom icon set with only the icons that you need in your app. This is good: I saved about 100 or 200 kilobytes of SVG download in my app just by creating a custom icon set. So prune your icons — download less; again, be lazy, load less stuff. Number six is to lazy load non-critical components. Now, I'm not going to talk about the mechanics of how to do this — Steve, the next presenter, is going to — but I want to show you how we're leveraging it in a real app on Chrome Status. This usage metrics button here is actually a web component, and when the user clicks it for the first time, that's when we actually load the import for the web components inside of it. You can see what happens in the network tab: that's when the paper-menu vulcanized bundle and JavaScript file come down. The second time the user clicks it, it's basically a no-op, because the browser's already downloaded it; it's already in the cache. So at that point, we can just show the dropdown immediately. This is a great way to load less upfront in your app and lazy load things as users interact with it.
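The click-to-load pattern just described boils down to memoizing the load: the first interaction kicks off the import, and every later click reuses the same promise. A small sketch — the element and file names in the usage comment are made up, not the actual Chrome Status code:

```javascript
// Wrap a loader so it only ever runs once; repeat calls return the same
// promise, making later clicks effectively no-ops.
function loadOnce(loadFn) {
  let pending = null;
  return () => pending || (pending = loadFn());
}

// Usage sketch:
// const loadMetrics = loadOnce(() => importHref('/elements/metrics-bundle.html'));
// metricsButton.addEventListener('click', () => loadMetrics().then(showDropdown));
```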
The same is true for this little question mark here, which brings up this help screen. That's lazy loaded as well, using the same technique. Number seven, and the last one, is measure, measure, measure. I spent a great deal of time just iterating on Chrome Status, making it better, making it faster, but ultimately you push new versions, right? The app changes, and the profile of the app changes. You continuously have to know what's going on in your app. So I rolled a little library to make my life easier on Chrome Status, and hopefully you'll find it useful as well. It's called appmetrics.js. It's a small library that basically wraps the User Timing API. If you want to know how long, for example, a JSON file takes to load, you can create a new metric, call it "features loaded," call start() to start the recording, call end() and log(), and you can log that information to the console, so you can continuously see this stuff as you iterate on your app. What's cool about this, though, is that since it uses the User Timing API, it integrates directly with the DevTools. If you do a recording, you can actually see "features loaded" — my two metrics here in this app — and you can see how they stack up against the rest of the timing in my app. So this is really nice. The fact that you can visually see these marks amongst everything else in DevTools is really, really valuable. And this also works with WebPageTest. Anything that you mark on the timeline using the User Timing API, or using appmetrics.js, will get marked in WebPageTest. You can share that link and also track this stuff over time. We were already using Google Analytics on Chrome Status to know what users are doing, so I also just threw in this API call to track performance over time. If you use this library and make this call, and you're using Analytics, you can end up tracking performance in your app over time.
There's really no point in rolling your own dashboard or setting up anything like data queries. Analytics has all that for you, if you just use the User Timing API and call the raw Analytics protocol, or use appmetrics. And you can see here that the load event for the features list has kind of slipped over time. Definitely need to do some work after the show. So with that, that is my time. Again, my name is Eric Bidelman. I'm @ebidel on Twitter. And now I believe I'm going to turn it over to Steve. Here's the list of resources, if you want to take a screenshot of that before he comes up — all the libraries and stuff I talked about. And thanks. Thanks, London, and live stream. Thank you.