Good afternoon, everyone. Thanks for coming out and joining me in this awesome web developer igloo thing that we're in. It's pretty cool, right? So I've got a ton of stuff to go through. So you guys ready? OK. All right, my name is Kevin Schauff. I'm a software engineer on the Polymer project at Google. And today I'm going to be talking about how a whole series of new technologies shipping in the web platform all fit together really nicely to allow us to deliver progressive, performant mobile app experiences to our users. But before we start, I want to talk a little bit about the Polymer project and how we think about our mission. So Polymer is a team of front-end web developers, just like many of you. And we work inside the Chrome organization at Google. And we actually collaborate really closely with the engineers who build Chrome, the browser itself. And while we're most known for our work on web components and the Polymer library, our mission is actually a bit broader. It's to help the web platform evolve in a direction that makes life easier for us developers and the users that we serve. And a recurring thing that we see over and over is that wherever the web as a developer platform has failed us, wherever there are holes or annoying deficiencies, we see an amazing thing happen. We see this vibrant web developer ecosystem rise up and solve all these problems in user space. So these often take the form of new libraries and tools and JavaScript frameworks that all layer on top of the platform. And on the one hand, this is awesome because it allows us to get our jobs done without waiting for the web platform to catch up. We can get our app shipped to our users today. But it's also good from time to time to take a step back and realize that a lot of these layers that we pile onto the platform and start to take for granted actually have real costs. So some of the costs come in the form of developer complexity.
So we've drifted a long way away from where we could just edit some HTML and refresh it in the browser, to a world where starting a new web project usually means wading through a sea of complicated choices, setting up really complex tool chains, and cobbling together something that hopefully is going to result in something good at the end. But more than the developer experience, as we're layering and layering onto the platform, inevitably this takes a toll on the user experience as well. So this is a real trace from a typical experience that our modern tool chains and our modern frameworks push us toward today. This is the type of experience that we're delivering to our users on mobile. It actually really stinks. Before the user can interact with or even get that first impression of our site, they have to wait for this big JavaScript bundle to download and execute on the client. And while this worked on desktop, for mobile users, it's a horrible experience. And this is what we're doing to them. And it's not just academic. Mobile really matters. Not only is it where we're all spending a lot more of our time on the web, but it's the mobile web where the next users of the internet are coming online for the very first time. They're going to experience the internet for the first time on their mobile phones. And it's in these far parts of the globe that we can't count on fast devices or fast networks. So if we're really going to delight our users today and capture the next billion users coming online in the next few years, we really need to stop and think about how we can start working a lot more closely with the platform and undo a lot of those workarounds that we've had to layer on top. And this is really the mission of Polymer: to help identify and fix all those core deficiencies in the web platform so that we can really start to work a lot more closely with it to deliver great experiences to our users. So that's kind of how we think about our mission.
So in my talk today, I'm going to cover a few things. First, I want to talk about the ideal user experience. So if the trace I just showed is kind of the suboptimal case, let's talk about what would be ideal. Then I'm going to talk about how the platform has actually evolved, with a lot of new features, to help us achieve this experience. And then we're going to take all those platform features that I'm going to introduce and put them together into a really nice pattern that we can all start using today to start delivering much better experiences to our users. And then finally, we'll see it all in action. So we've built some demos, and we'll pull them up in the DevTools and see it all working. So if we're going to be talking about awesome user experiences, we need to decide what makes an awesome app. And it all comes down to user experience. So first, when a user first finds a link into our application, say they find a deep link in from maybe something on social media or from a search result, they want to be able to load that content and interact with it as quickly as possible. So they want a fast load from a deep link. But then once they're in our application and they navigate to other parts of the application, they also want fast responses to those actions. And then native applications really taught us what good mobile UX looks like. And just because users are accessing their content through the web doesn't mean they want some lesser experience. So users want that immersive app-like experience. And then finally, users want to be able to access their content and their applications when they're offline or when the network is failing. So it's really when we can check off all these boxes that we know we're really delivering an awesome experience to the user. We can hit peak awesome on my awesome-o-meter. You like that? OK. So how are we doing?
Let's use this scorecard and evaluate a few different approaches to building apps and see how they stack up. So back in the old days, we had Web 1.0. So in this architecture, when the user first enters our application through some deep URL, so here the user is saying, I want to see a list of men's t-shirts, that's what the URL says, the server just sent back HTML that gave them exactly what they asked for. They got the HTML that renders out the list of men's t-shirts. And then when they navigate from there, so they click on a product detail link there, and they might go to the detail route, then the server just sent down exactly what they asked for. So this is basic stuff. So let's take a look at how it looks on the scorecard here. So because the server was sending down exactly what the user asked for at any point, they got a relatively fast load from the deep link. But it was those full page reloads that really killed the interactivity. And when you access a Web 1.0 site today, it just really doesn't feel like the kind of native app experience that we've come to expect. And then applications built like this, of course, are tethered to their server and just really don't work offline, and are really a frustrating experience when you're in a subway tunnel or something. So judging by today's standards, Web 1.0 just isn't so awesome on our awesome-o-meter. So fast forward about 10 years, and we got XMLHttpRequest, the XHR. So this was the first time that we could communicate with the server from the client without doing a full page refresh. So gradually, we started to adopt AJAX-based approaches to building applications. We started shifting a lot of the application functionality from the server into the client, until eventually we settled on an architecture that we call single page apps today.
So in this architecture, when the user first enters a route into our application, the server, instead of sending down that HTML, sends a bundle of JavaScript that contains all of the code you need to handle all of the client-side interactions and render out all of the client-side views. And then once that application boots up, it looks at the route, decides what view to render, and renders it out. And then from there, when the user navigates to different parts of the application, they get a much better experience. So if they switch to the detail view, because we can handle that route change on the client, we can immediately activate and render that view without any sort of going back to the server. And if we do go back to the server, we might only be going back for data. So we get a much better user experience here. So because we're cutting out those full page reloads when the user is navigating around, we get a much faster response to user interactions. And because we're able to put all this functionality client-side, for the first time, we're able to really build a more immersive app-like experience. But we had to trade something off for this. So before the user can actually interact with anything in our application, they have to get this big bundle of JavaScript down. And because we're moving more and more functionality into the client, typically we'll enlist the help of a framework to help manage all of that complexity. And it's these big bundles of JavaScript that are blocking that first interaction, the first impression that the user can have with our site. And so a lot of modern frameworks today try to chip away at this problem through what's known as server-side rendering. So this is where we'll take the framework and try and run it on the server as well. So the first time the user accesses our site, we'll have the server output some placeholder HTML that we'll send down so we can get a fast first paint.
But then the user is just kind of sitting there, looking at a non-interactive site, until that big bundle of JavaScript downloads. They still have to wait for that big bundle of JavaScript before they can actually interact with the thing they asked for. They asked for the list of men's t-shirts, and they're blocked waiting for the whole application to come down. So this is what we traded off. We traded off that fast load from a deep link to get that fast interaction once it's on the client. And it's really these big bundles of JavaScript that just aren't compatible with those next users coming online that have to contend with poor networks. So here you can see that single-page apps were a big improvement. We really improved the user experience, but we're still kind of in the middle of the awesome-o-meter. So that's the question. How do we hit peak awesome on the awesome-o-meter? And if this sounds really hard, I can totally sympathize with you. Because for years, the web has been this bumpy platform. We've had to cobble together solutions and add lots of layers and abstractions on top to make anything go, really. But what I'm here today to convince you of is that the web as a platform has evolved. And we have some awesome new technologies that we can use today that all fit together to make this kind of ideal experience I just talked about not only possible but easy to develop. So I'm going to go through each one. Custom Elements is a new browser primitive that allows us to decompose our application into manageable, maintainable components without a lot of framework overhead. HTML Imports is a new dependency loader built into the platform for loading those custom element definitions. HTTP2 is new networking technology that's really well suited to loading granular dependencies, such as we'll have when we have a lot of components.
And then finally, ServiceWorker is a new capability we have in the platform to control the browser cache, and that allows our applications to work offline. So I'm going to go through each of these four new technologies in a little bit of detail, so we all have kind of a baseline understanding. And then we'll circle back to put them all together into a pattern that you can start using to build this awesome experience. OK, so Custom Elements. Custom Elements solves this need that we have as developers to manage UI complexity. So over and over, we see that developers need a component abstraction to help us manage our code. We want a component model for widgets like UI pickers and menus and date pickers, that sort of thing. But we also want components to help us manage our own code. We want to be able to break our application down into manageable chunks of code that give us more maintainability and reuse across our applications. And because the browser never helped us with this, it never gave us a primitive for this, we had to turn to user space. And so over the years, we developed lots and lots of libraries. You can't count the number of libraries that try to provide this sort of component abstraction for our development workflow. And while they work, they help us manage that complexity and get the job done, they have all the costs that we alluded to earlier. So they add to our JavaScript payload that we have to download. There's a lot of JavaScript runtime cost typically involved. And then once you've chosen a framework for something as basic as this component model, you're locked into it. And then you have to incur switching costs as the fads in the framework space change, which, as we all know, they change very rapidly. So custom elements aims to solve all of this. And the funny thing is that a component abstraction has been sitting in the browser for like 20 years, in all the browsers that we carry around. And it's the DOM.
It's the document object model. I like to say that the DOM is actually the original web framework. It has the notion of components. It has elements, which are well encapsulated. Elements have concepts of data flow through events and their property and attribute APIs. The only problem was that the DOM wasn't extensible. Browser vendors could extend it, like when they added the video tag or the date input type. But if we developers wanted to build a better date picker, we were on our own. We had no choice. We had to go bring some library or framework in to do that. And so that's what the web components family of technologies aims to solve: to unlock the power of the DOM so that we can use it as the framework and cut out a lot of the complexity and abstractions that we have to send down with every application. Now, we've talked a lot about web components and custom elements at past I/Os. So just to give you a really brief overview of how you might use custom elements to help manage the complexity of an application, here's a typical e-commerce application. And maybe I want to encapsulate one big chunk of the application, like this product detail view. And I can use a custom element to do that. So I might create a new custom element called detail view that encapsulates all the functionality inside of that one particular part of my application. Now, once I register the custom element with the DOM, it becomes a first class citizen of the browser. So the HTML parser knows what to do when it encounters a tag like this. It can instantiate the tag. And the custom element can do arbitrarily complex work. So in this case, we might have that detail view render out the kind of encapsulated implementation of its views. So it might be composed out of some more elements. And those could be custom elements as well.
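As a rough sketch, defining and registering an element like that detail view with the custom elements v1 API might look something like this (the tag name, template contents, and class names here are illustrative, not the demo app's actual code):

```js
// A hypothetical <detail-view> element that encapsulates the
// product detail part of the app behind a single tag.
class DetailView extends HTMLElement {
  constructor() {
    super();
    // Stamp the view's encapsulated implementation into a shadow root,
    // assuming a <template id="detail-view-template"> exists in the page.
    const tpl = document.querySelector('#detail-view-template');
    this.attachShadow({mode: 'open'})
        .appendChild(tpl.content.cloneNode(true));
  }
}

// Register with the DOM — from here on, the HTML parser knows what to do
// when it encounters a <detail-view> tag.
customElements.define('detail-view', DetailView);
```

After that, `<detail-view>` can be used in markup like any built-in tag, with normal property and event APIs.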
So we can continue to break down our application using this new browser primitive into a lot of fine-grained components that give us that maintainability that we're looking for. But then once we've kind of defined that big chunk of application view, the detail view, it's all well encapsulated. And we can just treat it like any other DOM node with a nice property and event API. And then finally, as we go and continue building our application, we might start building other parts of the application views and composing them together into something that eventually looks like kind of a modern application. So this is the basic idea behind custom elements. We have a new browser primitive that allows us to manage UI complexity without a lot of framework overhead. So before I move on, I want to touch on just one other detail of the custom element spec. And this is something that the Polymer team fought really hard for in the standards process. And it's the notion of progressively upgrading custom elements. So what does this mean? This means that basically you can use a custom element tag in your markup without regard to whether you've loaded the definition yet. So if the browser sees a custom element tag that hasn't been registered yet, it just treats it as a lightweight DOM node, a very cheap kind of placeholder for a node that can be upgraded later. And then at some later point, we can choose to pay the cost over the network to download the definition for that part of the application. And we can pay the cost of instantiating that part of the application lazily later. So this gives us a lot of fine-grained control over booting up different parts of the application to really improve performance. And this is all made really easy and declarative through custom elements. So we can actually define all of our application together in markup in a very idiomatic way. But then at runtime, decide what parts of the application we want to load. 
So if the user came into the home route, we might choose to only load and register the home view based on that route and keep the other parts of the application dormant. We haven't paid any cost over the network, and we haven't paid any runtime cost to bring that part of the application up. And then when the user navigates to a different route, say the detail view, we could do the same with that. So we have this really nice platform-centric way of progressively upgrading parts of our application that really allows us to achieve great performance. So again, custom elements is this new browser primitive. In one fell swoop, it gives us this platform-based way to break down the complexity of our user interfaces, and it gives us really nice controls over the performance of the application. OK, so that's custom elements. We've broken our application down into a lot of maintainable pieces. The next problem we have is how do we load those pieces into the application? And so this is where HTML imports comes in. So it solves this problem of needing to load components, which may depend on other components. When you think about it, the browser primitives for loading resources just really were not designed for this era of modular app design that we have today, where one module might depend on another module, and eventually you need to load a whole transitive dependency graph. The script tag just does not help us with that. And so for years, just like the component problem, we had to turn to user space. We had to design our own JavaScript loaders and module systems. And while these work, and they're kind of super finicky and hard to configure, the real problem can be performance. So when you take the whole dependency graph of your application and you hide it in JavaScript in some opaque loader that the browser doesn't understand, you're not letting the browser optimize the loading of that. And browsers do a lot of optimizations around loading resources.
And then when you do other hacks, like take HTML and CSS and encode that in the JavaScript, you're also hiding those resources from the browser as well. It's all blocked on JavaScript. And you kind of opt yourself out of optimizations that the browser would otherwise be able to do, like background parsing of HTML on a separate thread. Chrome totally does this. But when you hide everything in JavaScript, you're just bound by your own code. And so HTML imports aims to solve this. And it's really well suited for loading the transitive dependencies that you'll need for building custom elements. Because those dependencies typically will involve some HTML in the template, some style to style the component, and the JavaScript to register the element. And because HTML is a format that can contain all of those, you get all that for free in this one loader. So in a typical custom elements example, you might depend on two elements, element A and element B. So I can import those with the HTML import tag. Then I can use them. So I might use them in a template. And then finally, I can define a new custom element. So this is the new v1 spec that's coming out pretty soon. So I can just define a new HTML element class. In the implementation of that class, I might stamp the template that uses those dependencies. And then finally, I can register that new tag with the DOM. So HTML imports gives us this really nice import/use/export flow: the import tells the browser to load those dependencies and evaluate them, then we use them, and then we export a new definition. So it's very similar to the import/use/export style that ES6 modules, another platform-based loader that's coming down the line that hasn't quite shipped yet, will also give us. But HTML imports is really well suited for when you're carrying all these mixed types of resources: you can define them all in HTML files and get them all through the HTML imports loader.
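Put together, an element definition file following that import/use/export flow might look something like this sketch (the file and element names are placeholders):

```js
// element-c.html (hypothetical) — shown here as the markup plus its script.
//
// <!-- import: tell the browser to fetch and evaluate the dependencies -->
// <link rel="import" href="element-a.html">
// <link rel="import" href="element-b.html">
//
// <!-- use: compose the imported elements in this element's template -->
// <template id="element-c-template">
//   <element-a></element-a>
//   <element-b></element-b>
// </template>

// export: define and register the new element with the DOM (v1 API).
// Inside an import, scripts run against the main document, so we grab the
// import document itself to find our template.
const importDoc = document.currentScript.ownerDocument;

customElements.define('element-c', class extends HTMLElement {
  constructor() {
    super();
    const tpl = importDoc.querySelector('#element-c-template');
    this.attachShadow({mode: 'open'})
        .appendChild(tpl.content.cloneNode(true));
  }
});
```

A page then just writes `<link rel="import" href="element-c.html">` and uses `<element-c>`; the browser follows the link tags to discover the rest of the graph.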
So in a typical application, you'll import an element that you're going to use, and then you can simply use the element. And even though the definition for that element might depend on other elements, because it's all declarative, the browser knows what to go fetch. And basically, the browser can very quickly build up that dependency graph and optimally fetch and start parsing all of the resources that you'll need to build up your application. So HTML imports gives us this really nice dependency loader that's well suited for bringing in custom element definitions. So we've broken our application down into these maintainable custom elements. We now have a platform-based loader for loading those. So now we're going to face the next problem, which is that HTTP, the protocol that underlies all of the web, is actually really bad at loading granular resources. So this is actually one of the reasons why we don't go to production with those JavaScript loaders that we create, because they would fire off way too many network requests. And everyone knows that if you're going to build a good web application, you can't make too many network requests. So how do we get around that? We end up bundling our whole application together. And it's this bundling that can really kill the user experience. So not only are the tool chains to bundle it all together super complex and finicky and hard to get right, it's these bundles that can really impact the user experience. And that's because as your application grows, you might start off with three views, but then your requirements change. You add more views. You add more views. And your application bundle just keeps growing. And remember, this is blocking that first experience that your user is going to have of being able to interact with your application. So once this app bundle gets so big, it's obviously kind of the long pole in the tent for our application.
And so we'll add even more complexity to our tool chain. So a lot of bundling tool chains out there now support what's called code splitting. So we'll try and identify all the routes into our application and make optimal bundles for those routes. So if I go into the list route, and the list route requires these three components, list, button, and tabs, I'll make a bundle just for those and send it down, so I can get that route rendered more optimally over the network. Then if the user goes to a different route, say the detail route, then I'll create a separate bundle for that route as well. But inevitably, when we start code splitting our application up, we find that we have duplicated dependencies between routes, because we're using these reusable modules, these reusable components. And then we're faced with even harder choices. So do we break that out into a shared module and cause another round trip to the server? It gets us into this really hard trade-off space. So why do we twist ourselves into contortions like this? So it turns out that the root cause, the whole reason we even bundle our applications, comes back to the HTTP protocol. So this was defined 25 years ago. And it is a really simple request-response protocol for fetching documents. And the key thing is that it's a serial protocol. If you request one document, you have to wait until that response comes back before you can make another request. So if, on that initial page load, once you start parsing, you find that you have 20 resources you need to download, you would have to sequentially request those over the network, and it's those round trips that kill our network performance. And so the browser will try and do better. It will try and open a whole bunch of TCP connections to the server to try and parallelize the requests that it finds. But it's generally capped at around six. And it's this fundamental limitation of the protocol.
It's totally a de facto thing that led us to bundling our applications up. And so the awesome thing is that the platform has evolved. So HTTP2 is the next generation of the HTTP protocol. It really solves all of this. So the key innovation of HTTP2 is that it allows us to multiplex multiple requests over the same TCP connection. So we get rid of the incredible cost of spinning up new TLS connections to the server to try and parallelize our requests. And it avoids that limit that we hit. So now, as soon as we identify all the sub-resources that a page needs, we can just ask for them all over that one TCP connection and make sure that the network bandwidth is totally maximized for fetching those requests. So it's a dramatic improvement in page loading speed. But if we're trying to load these granular components individually as individual files, we still have a problem. And as we load one module and discover that it depends on another module, then we may still have a chain reaction of round trips to the server to get all those resources fetched. And so the awesome thing is HTTP2 has a solution for this as well. And it's called server push. So server push allows a server that has just a little bit of smarts that can understand the dependency graph of a given resource to start pushing all of the transitive dependencies that that resource is going to need in that initial request, in the initial response. If you look at it, it's very much like what we would have put in the bundle that we sent down. So HTTP2 with server push really allows us to do the bundling at the network layer. And it paves a path for us to just totally get rid of all these bundling tool chains, right? Wouldn't that be awesome? So it's not just eliminating the tool chain that's the benefit here. It's because when you start pushing down the resources by file name, you're allowing them to be optimally cached. 
So we're caching them by file name, rather than the actual contents being kind of opaque and hidden inside the bundle. And so in that way, when we go to request another route that might have duplicate components, the browser can actually see that those components are cached. So again, HTTP2 is this awesome new platform technology that really requires us to take a step back and rethink all these best practices we learned about how to serve applications. And the cool thing about server push is it's really easy to configure. So if you have an HTTP request that looks something like this, your application server, which could just crawl the dependency graph of the resource that was asked for, index.html in this case, all it has to do is add this new Link header. And that will instruct a push-compatible server to start pushing all those resources at once. And then when you're using a platform-based, really nice declarative dependency loader like HTML imports, it makes it trivial for the server to just crawl the dependency graph and know what to put in those headers. So you can see how all these things come together really nicely to allow us to start eliminating all those hacks and workarounds that we put on top of our applications. So we'll move on. The last piece of the puzzle is Service Worker. And Service Worker addresses this kind of fundamental, beginning-of-time problem we've had with the internet, which is that websites just don't work without a network. It kind of sounds silly to say. But even though a user may have gone to your website and bookmarked it, if they want to come back to it later when they're offline or happen to be in a subway tunnel or something, it can't load. And up until now, if you really wanted to make your content or your application available to users offline, you had no choice but to deliver that as a native application, right? Because native applications can be installed.
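As a toy sketch of that server-side crawl (the file names and dependency graph here are entirely made up), a small helper could walk a map of each file to the files it imports and build the value for the `Link` header:

```js
// Given a map of each file to the files it imports, compute the full
// transitive dependency set for a requested resource and build an
// HTTP/2 push-style Link header value for it.
function pushHeaderFor(entry, graph) {
  const deps = new Set();
  const visit = (file) => {
    for (const dep of graph[file] || []) {
      if (!deps.has(dep)) {
        deps.add(dep);
        visit(dep); // walk the transitive graph
      }
    }
  };
  visit(entry);
  return [...deps].map((d) => `<${d}>; rel=preload`).join(', ');
}

// Hypothetical graph: index.html imports list-view.html,
// which in turn imports two widget elements.
const graph = {
  'index.html': ['/src/list-view.html'],
  '/src/list-view.html': ['/src/x-button.html', '/src/x-tabs.html'],
};

console.log(pushHeaderFor('index.html', graph));
// → </src/list-view.html>; rel=preload, </src/x-button.html>; rel=preload, </src/x-tabs.html>; rel=preload
```

The server would emit that string as a `Link:` header on the response for index.html, and a push-capable front end would then push those files down the same connection.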
And so Service Worker changes all this. Service Worker is at the heart of a new paradigm shift that we're seeing on the web platform that you've probably heard about a lot today. And that's called Progressive Web Apps. So Progressive Web Apps are still websites. You still access them through a URL, and they're still sending back HTML and JavaScript. But the difference is that Progressive Web Apps work really well without any install the first time you access them. But then as the user keeps interacting with the site, it becomes progressively more useful to them. So it can then open up without a network connection, because the entire application can be cached. It can get a launch icon on the home screen and open in a full screen experience. And it can even receive push notifications, right? So it really allows us to progress from a website into a full-fledged application. Alex Russell, who wrote a kind of defining white paper on this shift to Progressive Web Apps, said Progressive Web Apps are really websites that took all the right vitamins, right? They do all the right things for the user, and it's all powered by Service Worker. So there's a lot to Service Worker, and there's a lot on the web you can read about it. I'm just going to touch on a few of the really key points. So Service Worker allows us to basically write code that acts as a proxy between any network requests being made by the browser and the actual network. And so there we can intercept and handle requests using our own code. And that allows us to do things like intelligently cache those resources using our own caching strategy, right? So we have complete control over the network cache. And the cool thing is that once you've cached something in Service Worker, those URLs become available offline. So this is the first time that bookmark can actually pull the website back up even when the user is offline.
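As a sketch of that proxying idea (the file name and cache name here are made up), a Service Worker applying a simple cache-first strategy might look like:

```js
// sw.js — a minimal cache-first sketch; 'app-cache-v1' is an arbitrary name.
self.addEventListener('fetch', (event) => {
  event.respondWith(
    caches.match(event.request).then((cached) => {
      // Serve from our cache if we have it — this is what makes the
      // URL work offline ...
      if (cached) return cached;
      // ... otherwise go to the network, and stash a copy for next time.
      return fetch(event.request).then((response) => {
        const copy = response.clone();
        caches.open('app-cache-v1')
            .then((cache) => cache.put(event.request, copy));
        return response;
      });
    })
  );
});
```

Real applications would pick different strategies per resource type, but the key point is that our own code sits between the page and the network.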
And then because we have complete control over the network and the caching process, we can do new types of things, like background pre-caching of other application components that the user hasn't required yet, but may in the future. And so through this process, we can progressively build up the loading of the application into something that can work offline. So we'll touch on Service Worker a little bit more. But for now, just know that everyone should be going and sprinkling Service Worker into their applications, because it's this awesome way to layer in this great user experience. OK, you guys still with me? I'm going really fast. OK, so we've got these four puzzle pieces. Custom Elements allows us to break down our application into maintainable pieces. HTML Imports gives us a new dependency loader for loading those. HTTP2 is new networking technology that allows us to get rid of all those hacks around bundling and actually load the components that we need at any given time. And Service Worker lets all those components work even when the user is offline. OK, so let's put all of these together into a pattern. So we have a four-step pattern that you can use to think about how all four of these technologies go together. So first, we've broken down our application into these kind of maintainable, decomposed custom elements. So now when the user first accesses our site, say they go to that list of men's t-shirts, so they want the list route, the server, which can understand that HTML Imports dependency graph, just pushes down the components that are needed for that route only. Only the minimum components that are required to get that part of the application booted up. Those are going to go into the network cache. So we've pushed them down. We've primed the cache with the components we know that page is going to need. Then when the application boots up, it looks at the route and says, oh, the user wants the list.
I know that the list part of the application is encapsulated in the list view element, so I'm going to load the definition for the list view. That's going to cause the list view to progressively upgrade; the custom element gets that deferred upgrade. And because we've already pushed all the components that we know the list view is going to need, those all come out of the network cache, and we get that really fast first load of our application. And it's not just a splash screen. This is actually the interactive content that the user asked for. OK, so that's the second part: we're going to render that initial content the user asked for as quickly as possible. Now, the third step: while the user is enjoying their list of men's t-shirts here, we let the service worker boot up in the background. This can happen async, after the initial page load. The service worker is going to go off and start pre-caching all of the other components that the user hasn't asked for yet, and get those all installed into the service worker cache. So at this point, our application is available for use fully offline. But more importantly, when the user navigates to a different part of the application, we can now lazy load the next parts of the application out of the service worker cache and allow those parts of the application to be upgraded. So if the user went to the detail route, we can now add the import for the detail view. The detail view will upgrade, that part of the application will come to life, and the components that are needed for the detail view get pulled out of the service worker cache. There's no loading required, because we've already pre-cached them. And then we get the next view rendered and interactive quickly as well. So let's take a look at how that pattern stacks up on the app scorecard.
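The pre-cache and lazy-load steps just described can be sketched like this. The route map, file names, and the injected precache/loadImport functions are all illustrative (in Polymer the import loading would be something like Polymer.importHref), factored this way so the flow is visible outside a browser:

```javascript
// Map each route to the HTML Import that defines its view element.
// (Hypothetical file names for a shop app.)
const routeImports = {
  list:   '/src/shop-list.html',
  detail: '/src/shop-detail.html',
  cart:   '/src/shop-cart.html',
};

// Step 3 (pre-cache): after the first render, ask the Service Worker to
// pre-cache every view the user hasn't visited yet, so later navigations
// can be served entirely from the cache.
function precacheRemaining(currentRoute, precache) {
  const rest = Object.entries(routeImports)
    .filter(([route]) => route !== currentRoute)
    .map(([, url]) => url);
  return precache(rest);
}

// Step 4 (lazy load): on navigation, load the import for the new route.
// The matching custom element upgrades once its definition arrives, and
// the bytes come out of the service worker cache with no network trip.
function lazyLoadRoute(route, loadImport) {
  return loadImport(routeImports[route]);
}
```

So a navigation to the detail route is just lazyLoadRoute('detail', …), and because the import was pre-cached in step 3, the upgrade is nearly instant.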
So because we've only sent down the components needed for the route that the user requested, we get that nice, fast initial load from a deep link into the application. Then, for any subsequent navigation, we've already pre-cached all the remaining components in the background, so by the time the user goes to interact with another part of our site, they're able to pull those up out of the cache, render that part of the application progressively, and get a fast response to those user interactions. Again, we're building up the entire application client side, and as we'll see in a moment, this totally gives us control over creating that app-like user experience. And because we're using the service worker to pre-cache all the components of the application, we also make the application work offline, so we get really reliable performance regardless of network conditions. So you see we hit peak awesome on the awesome meter, right? This is such an awesome pattern. It really improves the performance of delivering applications to our users on mobile, and it's all using these browser primitives. So we were like, we gotta give this thing a name, right? So people can start talking about it and using it and layering it into their applications. So we tried to whiteboard it, we tried to put the letters around, and we had like splas and splees and stuff. It just wasn't working. And then I just wrote down exactly what we were doing: we're pushing, rendering, pre-caching, and then lazy loading. I was like, oh my gosh, that thing totally spells something. It spells PRPL, right? It's the PRPL pattern, pronounced purple. And see what I did there? I kind of have a purple pattern on my shirt, right? Yeah, okay. Okay, so again, one more time through the PRPL pattern. We're gonna push down the components for the initial route that the user asked for, so we can get that fast first load and make that content interactive.
We're then gonna render that initial route as soon as possible and get it interactive. We pre-cache the remaining components using the service worker, and then we use lazy loading patterns to lazily instantiate parts of the app as the user moves through the application. So if progressive web apps are like websites that took all the right vitamins, then the PRPL pattern is really like multi-vitamins for progressive web apps. It just does all the right things for the user, and it gives us a really nice pattern. Multi-vitamins, or maybe they're even performance-enhancing drugs, right? Because it really improves the performance of our progressive web apps. So enough talk about the pattern. Let's see how the PRPL pattern works in practice. The Polymer team spent the last few weeks building this demo application that really showcases the PRPL pattern. It's a typical kind of e-commerce application, but it's very responsive, and it has a lot of the modern look and feel that we expect. It's an immersive app experience, all powered by these platform primitives: custom elements, HTML imports, service worker, and HTTP2. So I'm gonna pull that application up in DevTools, and as we go through an initial load of the application, I'm gonna point out where each of the four steps of the pattern comes into play. So the first time I accessed that URL, we get a really fast load. If I scroll up in the network panel here, you can see that the initial components that were required to get that initial view rendered were pushed from the server. This is the Canary DevTools; it'll actually tell you when you're receiving pushed resources from the server. So we get those pushed to the client really quickly. And you can see right here, because the server's proactively pushing those, normally you would see a bunch of stair steps as we discover transitive dependencies and need to load those and load those, all these round trips to the server.
But because we're pushing them, we get this really nice, clean waterfall where everything comes down at once, just as if we had bundled it, right? So the next step is to render those initial components and make them interactive as quickly as possible. So I switched to the timeline view here. Here, using the screenshots, we can see that we got that initial view rendered really quickly. And you can also see that in this application, we took some of the less important components and decided to lazy load those as well. So after we've sent down the initial payload for what the user asked for, the home view with the list of categories here, then we can spend the additional network and runtime cost of upgrading other parts of the application, like the menu bar that's gonna pull out the drawer on the left. So we can really shard up the delivery of our application and get a much faster, progressive experience. Okay, so the third step is pre-cache. If we go back to the network panel and scroll down, we're off to the right of the timeline now, so this is happening after that first initial load. We see that the service worker booted up and started pre-caching all of the remaining components. So we're loading those and getting those into the service worker cache, ready for the next step, which is lazy load. Okay, so if we switch to the elements panel, these are the five views here that make up the application. And you can see they don't have the expand arrow; that means they haven't rendered themselves. There's no DOM inside of them. And then as we move into different routes of the application, you can see that those parts of the application progressively upgrade: again, we're adding an HTML import that loads the definition for that element. And here, you can see that the definition for that element and its dependencies were loaded out of the service worker cache.
We didn't go to the network; they were already in the service worker cache, and so we got a really fast load for that next navigation in the application. So: push, render, pre-cache, lazy load. All right, so hopefully this sounds awesome, right? It's a really new way to work closely with the platform to deliver this awesome experience. So you may be wondering, can I use PRPL today? Are these way-out-there new platform things, or can I use them today? And the first thing I want to say is that the PRPL pattern is really just that: a pattern. It's not married to these particular technologies. As long as you can push down just the components you need for the application, you're using a fine-grained dependency loader, and you're able to modularize your application, you can basically implement the PRPL pattern with a lot of different technologies. But it's really when you start using the platform a lot more closely that all these pieces fit together nicely and allow us to cut out a lot of those bundling tool chains and a lot of those heavyweight frameworks, and get a much better experience for our users. Okay, so let's just go through the status of the four technologies real quick. Custom elements: as you probably know, the v0, the kind of draft version of the custom elements spec, shipped in Chrome about two years ago, and it's usable today. We just announced in the other keynote that there are a lot of companies going to production today just using the polyfills on other browsers; we ship a set of polyfills. But the really exciting news is that this year, all four major browser vendors reached consensus on the custom elements and shadow DOM specs, which means that all of them are really hard at work right now implementing version one of those specs, the standard version.
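For context, a v1 custom element definition looks roughly like the sketch below. The tag name and markup are made up for illustration, and the HTMLElement guard is only there so the sketch can run outside a browser; in a real app you would extend HTMLElement directly:

```javascript
// A custom element encapsulating one view of the app. In a browser you
// would write `class ShopList extends HTMLElement`; the guard lets this
// sketch load in non-browser environments too.
const Base = typeof HTMLElement !== 'undefined' ? HTMLElement : class {};

class ShopList extends Base {
  // Runs when the element is attached to the document. For a lazily
  // loaded view, this fires when the tag "upgrades", i.e. once its
  // definition has finally been imported.
  connectedCallback() {
    this.innerHTML = "<h2>Men's T-Shirts</h2>";
  }
}

// Register the tag name. Any <shop-list> elements already sitting in the
// DOM, waiting for this definition, upgrade at this point.
if (typeof customElements !== 'undefined') {
  customElements.define('shop-list', ShopList);
}
```

That deferred-upgrade behavior is exactly what makes the lazy-load step of PRPL work: the markup can ship early and come to life whenever its definition arrives.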
And within the next year or so, we expect to be able to use all of these web component technologies without polyfills. So that's really exciting. Next, HTML imports: also shipped in Chrome about two years ago, and usable elsewhere with a lightweight polyfill. HTTP2, this new networking protocol, the next generation of HTTP, has basically shipped across all major browsers, so you can depend on HTTP2 being on the client pretty much everywhere. You'll want to check with your hosting provider and make sure that they've migrated to HTTP2 as well, because it's a huge boost to your serving performance. Google App Engine and a lot of big hosting providers are already moving over to HTTP2. The server push side of it is probably the newest part of all of these technologies. Chrome and Firefox have both shipped server push support on the client. And on the server side, Apache and NGINX and a lot of the server stacks now have implementations of server push that you can roll out. And we're working really hard right now on the Polymer team to build a reference server so that people can actually start deploying with this. Last is Service Worker. Service Worker has also shipped on Chrome and Firefox. And the thing about Service Worker is that it's an awesome progressive enhancement that you can add to your application. So really, with all of these technologies, you can build a really performant application, and then as your clients gain more of these capabilities, like Service Worker or server push, you get even more benefit out of them. All right, and then we're making this all even more concrete today with the Polymer App Toolbox. We just announced this in the last talk. The Polymer App Toolbox is a set of custom elements that are all geared towards building applications, so they have things like routing, localization, and storage integration.
But in addition to custom elements, we're adding some application templates and a new Polymer CLI tool chain that allows you to really quickly get started with a template. You can install the Polymer CLI from npm today. So basically, the CLI is a one-stop shop for everything you're going to need to do while you're developing Polymer applications. I'm going to make a project folder here and initialize it with an app template that's all set up for the PRPL pattern that I described, with the lazy loading, and that builds to output appropriate for a push server. It comes with a server, so you can just start serving this template and see what it looks like. Again, it's designed to lazy load each of those views as you click through them, and the tool chain is geared towards building this. It has a responsive app layout using some of the other app layout components. So this is something you can grab, and with three commands, get started and start playing around with the PRPL pattern today. For more information on the PRPL pattern and Polymer, you can go to polymerproject.org. We've got a brand new section of the website up there with the Polymer App Toolbox and all the information for that. You can get the Polymer CLI from npm today: npm install -g polymer-cli. You can run polymer help there and see all the commands. You can check out the shop demo application, the demo application that I showed you today. It's all set up for server push, so it works really awesome on Chrome, which supports server push. And we have all the code for the shop application on GitHub, so you can see how we built it. It's actually built off of that template that I just showed. And then finally, I want to invite everyone to the second Polymer Summit, which we're holding in London this fall, in October. So there's a sign-up link here that you can get.
It's on the website as well if you go there. And that's it. So I really want to challenge everyone to go out and think about how to deliver awesome applications using the platform. All right? Well, thank you.