Thanks, Matt. I didn't come here to scare anyone today. Matt made me promise not to turn this into half an hour of telling you why your websites are too slow, so I'm going to try to restrain myself. I am a software engineer on the Blink side of the Chrome team, which is to say, the engine bit, and I help lead our standards efforts on Chrome. But once upon a time, I also helped start the team that built the project that ended up as Web Components and ES6 and a bunch of other improvements to the DOM and to JavaScript. We put together kind of a dream team, I think, that set out to solve problems that seemed really urgent to us back in the day. And that problem was: why were the web apps that we were making so bloated and slow? We were looking at this from the perspective of Google's own apps: Gmail, Sheets, Slides, Docs, all the rest. And after a couple of years, after we'd released Chrome and a year after we'd released Chrome Frame, what we saw was that the front ends that we were making still weren't using the new stuff that we had finally enabled through Chrome and, hopefully, in IE through Chrome Frame. We had this huge, high queries-per-second system that did nothing but generate rounded-corner GIFs. I don't know if you used to make rounded corners like this, but this was one of the highest-QPS systems that we had in a bunch of our back ends, because there was this nine-grid system that made a table. So every time you wanted a rounded corner to get it to work on IE6, and IE7, and IE8, all the browsers that mattered, you would put in some JavaScript that would generate a table and all the elements that go along with it, then make an image request and put that around the outside. You get the idea. It was super inefficient. And that really bugged me, and it bugged some of the other folks on the team. We used to have a lot of code hanging around like this. In fact, it's still there if you go to the Closure open source repository. Don't read this; it's a bit of an eye chart. But what you get in the comment there is a sense for the DOM that's generated to give you a rounded top, top-left/right, or left/right edge for your element. And that actually didn't do real rounded corners. That didn't even use the GIF hack. This is just sort of faking it with one-pixel offsets. This is how we used to make rounded corners. So perhaps you were lucky enough to start doing web development after that era, but what we were seeing at Google was that teams that were starting fresh in 2010 still carried around this kind of legacy baggage. Either their frameworks were baking this sort of thing in by default, and they just used it off the shelf, or they had assumptions about which browsers they had to support which may not have been questioned in a while. And so they always seemed to bring along this least-common-denominator approach. Even when most of their users came from modern browsers, they still used these slow paths. To this day, Closure has components like this. This isn't a dig at anyone, obviously. We're kind of performance-obsessed at Google, and this doesn't seem great. But we didn't really have hard data about what we were really losing to this sort of inefficiency. How much was this actually costing us? So I did a little project to find out. I stole a couple of weeks from my 80% project. I kind of just didn't answer any email, didn't go to any meetings, really. I may as well not have been on the planet as far as anyone was concerned. I basically slept under my desk and just started hacking up Gmail.
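Just to make that rounded-corner dance concrete, here's roughly what the old and new approaches look like; this is a sketch, with the markup and corner-image URLs invented for illustration, not Gmail's actual code.

```js
// Old approach (sketch): wrap the target in a 3x3 "nine-grid" table whose
// corner cells each fetch a tiny rounded-corner GIF from a backend service.
function wrapWithRoundedCorners(el) {
  const table = document.createElement('table');
  table.innerHTML =
    '<tr><td><img src="/corners/tl.gif"></td><td></td><td><img src="/corners/tr.gif"></td></tr>' +
    '<tr><td></td><td class="content"></td><td></td></tr>' +
    '<tr><td><img src="/corners/bl.gif"></td><td></td><td><img src="/corners/br.gif"></td></tr>';
  el.replaceWith(table);
  table.querySelector('td.content').appendChild(el);
}

// Modern approach: one CSS property, no extra elements, no image requests.
function roundCorners(el) {
  el.style.borderRadius = '4px';
}
```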
I got the servers running. That took half a week; don't ask. And my goal with this project was just to remove as many elements from Gmail's DOM as possible. I wanted to make sure the app was pixel-for-pixel the same, keystroke-for-keystroke the same, that it worked exactly the way that it did, but see what you could do if in 2010 you started fresh. Like, what would it take to iterate toward some of the more modern stuff? So there's a lot of stuff in Gmail. This is a screenshot of a recent Gmail. And a lot of things in the engine are linear in the number of elements. Some of the things are linear or super-linear in the number of nodes above you in the tree, the depth of the tree. So fewer nodes, the theory goes, and shallower hierarchies would be faster. So looking at where the elements were going, it seemed that modern platform features could help remove some of the inefficiency: rounded corners, for instance, or getting rid of a bunch of the extra elements for layout. But that was all conjecture. Turns out it's really difficult to measure the contributions of small elements in large systems. After about a week and a half of sleepless hacking, my prototype got to a 40% reduction in the total number of elements in the Gmail DOM. That 904 today used to be a lot larger. I was using CSS rounded corners instead of nine-grid tables and GIFs. I was using Flexbox instead of a custom dual-pass layout system written in JavaScript. We were dropping hugely nested layouts in favor of things that were much more modern, and in general opting for the things you would do if you could start over circa 2010. So, did it work? Testing locally on a fast machine, my development workstation, a 16-core i7 workstation-class box, was super inconclusive. It was kind of a bummer. I'd spent a couple of weeks on this, and I'd really hoped to find that something was gonna be a lot faster about this. I had thought intuitively, knowing something about how the engine worked and something about how websites worked, that this would turn into a faster experience. But for the life of me, I couldn't find a way to say that it was faster. The changes made the Soy templates a little bit smaller, but try as I might, I couldn't detect any large performance wins on that really fast device. But that didn't keep my friend Emil from the Gmail team from productionizing portions of my prototype patches. Those changes wound up dropping the total size of the Gmail DOM by something like 30%. Given the ambiguous data we'd seen, both in local testing and in the control groups at Google, we didn't really have high hopes for anything other than some code cleanups. We thought that this might lead to a nicer way to work on Gmail, but we didn't think that it was actually gonna be a big performance improvement. Boy, were we wrong. With the reduced-element-count version in the wild, it turned out that this had a huge impact on real-world latency. Many users saw their Gmail inboxes load half a second faster. To put that in context, this was the single largest latency win in Gmail in years, and there was an entire team staffed to do nothing but reduce Gmail latency. None of us had any idea that this was gonna work this well, not even the folks who decided to productionize it, like Emil. We'd gotten lucky. We'd gotten super lucky with this, but what lesson do you learn from that? That's a fascinating question.
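To give a concrete flavor of the kind of swap involved, here's a minimal sketch; the class names are invented for illustration. Instead of a JavaScript measure-and-assign pass on every resize, you declare the layout once with Flexbox and let the engine do the work.

```js
// Before (sketch): a hand-rolled dual-pass layout. Measure the container,
// then hand out pixel widths to each pane on every resize.
function layoutPanes(container) {
  const total = container.clientWidth;
  const nav = container.children[0];
  const main = container.children[1];
  nav.style.width = '200px';
  main.style.width = (total - 200) + 'px';
}
window.addEventListener('resize',
    () => layoutPanes(document.querySelector('.app')));

// After: declare the same layout once and let the engine keep it up to date,
// with no JavaScript running on resize at all.
const style = document.createElement('style');
style.textContent = `
  .app  { display: flex; }
  .nav  { flex: 0 0 200px; }
  .main { flex: 1; }
`;
document.head.appendChild(style);
```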
We started to ask ourselves: what should we do next as a result of figuring out that we could make things faster this way? One possible lesson might be that making JavaScript faster still leaves a lot of money on the table, because from the platform perspective there are cross-cutting optimizations that you can't easily make inside of your applications. We've proved that lesson to ourselves quite a few times since, in the wake of the Blink fork back in 2013, when we started to remove a lot of the O(n²) algorithms that were sitting around inside of the layout code, for instance. Another possible lesson that we could learn is that moving common stuff from user space into system space, into the platform itself, lets us reduce not just the cost of development time or the complexity of developing a particular feature; it also potentially lets us radically improve the user experience. It turns out that this only works if we can get developers to use those features, though. Compelling new features don't need a lot of help here. Developers eventually tend to realize that building and sending their own versions of a standardized feature down the wire is really expensive. This is a little like the way that, in the U.S., state laws preempt city laws, and federal laws preempt state laws. When the bigger, slower-moving entity finally acts, it doesn't just add up to an umbrella to keep you out of the rain; it changes the weather entirely. I call this the doctrine of standards preemption. We see this a lot in the platform in other areas, too: querySelector, Promises, you get the idea. So with that in mind, we started writing design documents about how we might upgrade the web platform. In time, this turned into a pitch deck for a project that we jokingly called Parkour. This is a slide from that internal pitch deck. The naming, of course, was very tongue-in-cheek. If you looked at us, we were the least likely set of humans to actually try real parkour, or, put the other way, the most likely set of humans to be grievously injured by trying it. So what did we do? Well, we sequestered ourselves every Thursday. We cleared our schedules, and for most of the day on Thursday we would meet in a conference room, this crazy U-shaped conference room in a building that used to be owned by The Gap, the clothing company. And so it was sort of this wood-paneled, grandiose thing where you expected a marketing pitch deck for the next line of jeans to be presented breathlessly. But we filled it with nerds and we just debated. Sort of halfway between San Francisco and Mountain View, near the airport, we just spent our Thursdays digging in, trying to figure out what was in there. What did we need to add? We spent so many hours in that room arguing, prototyping, researching, arguing, prototyping, lots of arguing. And we recruited from the ends of the earth. This is the Sydney team, and they eventually wound up owning a large portion of Blink's style engine. They still do. As a fun aside, some of these folks were just coming off of the Wave project. A lot of our initial design documents were written in Wave. Win some, lose some, I guess. And we made sure to have experts from nearly every aspect of the platform in the room. Folks like Annie and Eric on the right there knew basically everything there was to know about how to do web development in that era. And folks like Tab on the left and Chris Burroughs were language and standards experts.
And together we could figure out solutions that work not just in practice, but also in theory. Kind of valuable when you need to actually go write it down as a standard sometime in the future. We pulled in designers and toolkit engineers and folks from all over the company to help us hone our understanding and try to understand aspects of problems that weren't clear yet: both the problems themselves, but also the ambiguous design space around how we might potentially solve them. And did I mention the whiteboards? We had a lot of whiteboards. We spent a lot of time at whiteboards. So many whiteboards. All the whiteboards. And when it wasn't whiteboards, it was video conferencing from all over the world. At one point we had folks pitching in from Sydney, Tokyo, San Francisco, Mountain View, Seattle, London, Munich, and St. Petersburg, all at the same time. We didn't sleep a lot. It was sort of shocking how many designs we worked through in the space of a couple of years. It was exhausting. It was an incredible amount of work and discovery and prototyping. Turns out that inventing new platforms is relatively easy. Iterating on existing ones compatibly is much, much harder. One of the most fruitful veins of exploration for us was digging into the then-popular JavaScript frameworks to try to understand how they worked, what was in there, what was common between them. Members of our ad hoc team had built JavaScript frameworks that were, at the time, powering some of the largest products from Google, IBM, and Sun. Remember Sun? And we took it upon ourselves to learn the ones we didn't know already. We looked beyond the JavaScript tools too: not just jQuery, Closure, Dojo, and YUI, but also Flex and XAML and XUL and XBL and JavaFX and pretty much anything else we could get our hands on. What we discovered were striking similarities but wildly different levels of platform support. There was a huge difference between some of these environments in terms of batteries-included versus bring-your-own. The web, it turned out, was very much in the bring-your-own camp. We tried to boil a lot of what we learned down into these documents, these endless documents that sort of cataloged the ways that people were trying to plaster over what the platform did, or level it up, through common idioms and interfaces. Just to give you a sense, this was what we did; please don't read this. This is one of the investigations we did just to try to compare and contrast the event systems. This is copied directly out of a document I wrote, and it gives you a flavor for some of this research. We wanted to know what toolkits and frameworks needed to provide because the platform wasn't either nice enough to use or expressive enough to get the job done. And we tried to identify areas that were common. Something that really jumped out at us was how much infrastructure each library was bringing along to support creating widgets or components. This is jQuery UI's idea of something like a menu. Behind that $.widget call is a huge world of infrastructure being created to support stamping out instances of this thing. Here's something very similar from Closure, the JavaScript library that traditionally had powered almost all of Google's largest front-end products. That goog.inherits line at the bottom shows that this class really is trying to be a widget. It's trying to inherit from a control class. This is the Dojo version. It's very similar: that declare method at the top is the thing that does all the magic.
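Stripped of each library's particulars, the machinery behind a call like that boils down to roughly this hand-rolled sketch; the names are invented. It's a base widget constructor plus prototype wiring that stamps out DOM from a template and keeps a parallel component tree.

```js
// A generic version of what every toolkit rebuilt for itself.
function Widget(template) {
  this.el = document.createElement('div');
  this.el.innerHTML = template;
  this.children = [];                        // the parallel component tree
}
Widget.prototype.addChild = function (child) {
  this.children.push(child);
  this.el.appendChild(child.el);
};

function Menu(items) {
  Widget.call(this, '<ul class="menu"></ul>');
  const list = this.el.firstChild;
  items.forEach(label => {
    const li = document.createElement('li');
    li.textContent = label;
    list.appendChild(li);
  });
}
// The goog.inherits / dojo.declare step, spelled out by hand.
Menu.prototype = Object.create(Widget.prototype);
Menu.prototype.constructor = Menu;

// Usage: the component lives beside the DOM, not in it.
const menu = new Menu(['Inbox', 'Starred']);
document.body.appendChild(menu.el);
```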
That declare call sets up functions, wires up the prototypes, mixes in all the interfaces. It does all that stuff for you, but under the covers it's the same thing: there's a function, it's got its prototype wired up. It's a class, basically, that you can instantiate, and it creates some DOM from a template and manages its internals in a componentized way. jQuery's approach took all those custom details and tried to stuff them inside of the element itself, whereas almost all the other libraries that we investigated did the opposite. They created a parallel tree of components which happened to have elements hanging off of them. Dojo, Closure, and YUI created this parallel hierarchy that just wrapped the DOM. Here's YUI's version, for instance. It basically takes the same approach as Dojo and Closure. Staring at this landscape for a while, it became clear that what everyone was really trying to do was to create a logical hierarchy of components that didn't have all the implementation gunk in the way. Frameworks were spending tons of time and bytes on the wire to make this possible. A major challenge for companies as big as Google was that even when these tools were open source, they were also incompatible. You couldn't take a YUI widget and use it inside of a Closure application without pulling in the entire library that came with it. There was a huge sunk cost to importing any other thing into your system. You couldn't compose Closure widgets into GWT. At Google, that sucked. A lot of us were ex-framework authors, and at some point in the distant past we'd all basically had this question at the top of our minds: why in the heck can't we subclass HTMLElement? But when you're working on a framework, this isn't really a meaningful question. You can't do anything about it. It's not actionable. Browsers are browsers; them's the breaks. You just take what you're given and you move on. But now we worked on the browser. What if we could fix it? What if we could make subclassing HTMLElement possible? It seemed like we'd be able to finally build portable components. And if we were able to define our widgets in terms of HTML, let them integrate exactly into the DOM, we might be able to finally end some of this war between all the different frameworks that we had built or that we had discovered and studied. So to validate our assumptions, we started building prototypes. We built entire applications. And we threw away all the libraries that we had used before. We got rid of them and built as little infrastructure as we needed, using tip-of-tree WebKit and V8, circa mid-2010, to see what was still missing. If we could use the tip of what the web platform had, was it all just sitting there and we didn't know it, like the CSS rounded corners we just hadn't been using? It was fascinating to see what fell away and which parts we had to rebuild. To define menus in our apps, we still needed to contort the function keyword to mean class in some cases and do some nasty __proto__ wiring to convince Chrome that our instances were actually elements and that you could call methods on them. It kind of worked. It was definitely less messy than the other way, but it still didn't let you say what you meant. We built entire apps, including a cross between an RSS reader and Google Maps. You could see the current news if you zoomed in and search for things all across history, but if you zoomed out and scrolled down, we backfilled with Wikipedia data.
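By the way, that nasty __proto__ wiring looked roughly like this; a sketch of the style of hack, not our actual code.

```js
// Circa 2010 you couldn't subclass HTMLElement, so you made a real element
// and spliced your own methods into its prototype chain after the fact.
function makeMenu() {
  const el = document.createElement('div');
  const proto = Object.create(Object.getPrototypeOf(el));
  proto.open = function () { this.classList.add('open'); };
  proto.close = function () { this.classList.remove('open'); };
  el.__proto__ = proto;                 // the nasty __proto__ wiring
  return el;
}

const menu = makeMenu();
document.body.appendChild(menu);
menu.open();   // it works, but you never get to say "class Menu extends HTMLElement"
```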
That app was pretty cool, but I can't show you a screenshot of it. I tried to get it working again, but unfortunately we built it on the RSS APIs from Reader. I spent a couple hours trying to get it revived. Mihai, one of the engineers who worked on Reader, was on the team, and he plumbed a bunch of this cool stuff in, and I'd like to just take a moment and pour one out for Reader. Can we do that? Just a moment of silence for Reader? All right, I'm still sad. As we built these apps, we took note of every single time that we had to build a little bit of library to support ourselves. It turned out that this was a great to-do list for things that we might go and add to the platform, and over the next few years we attacked those gaps, and I'm proud to say that we were relatively ambitious. We were not shy. Between 2010 and 2012, we developed and attempted to advance work in HTML and DOM, JavaScript, and CSS. We'd seen what happened to other efforts which tried to solve all known problems in web development using a single tool, for example, unbounded extensions to the input element or dozens of one-off CSS properties, because when all you have is one hammer, everything looks like a nail. So we took out some wrenches, we got some saws, we built an entire workbench. We decided that we would try to target our improvements at the places in the platform where they made the most sense, and so we built a team that was broad enough to let us start to attack each of the problems where they lay, not necessarily where we could most easily turn the wrench. By taking a broad view, we were able to target minimal interventions in many areas of the system, each of which, I hope, is valuable on its own, but together they kind of add up to Voltron. Web Components aren't one addition. They're a bunch of different things that eventually end you up with something like Polymer. Peter Hallam, the chap on the right here, helped us design classes, traits and modules, and async/await syntax, for instance, and prototype it all in Traceur, the first JavaScript-next-to-JavaScript-now compiler. Taking them to TC39 turned out to be pretty slow going, but eventually we got a lot of the things that we really needed to make this better future possible done through standards. Working through standards is sort of the grinding tedium of business travel combined with high-stakes whiteboarding. It's just as much fun as it sounds. We persisted, though, and occasionally we even won a few. Incidentally, this is Dimitri Glazkov, my engineering partner in crime in all of this, and now the uber tech lead for the whole Blink project. When I originally posted this photo to Flickr back in 2011, I captioned it "he's never going to live this down," so I feel like adding it to this presentation is just sort of keeping a promise to myself. Anyhow, it wasn't all smooth sailing. After we opened up the design process and started making public proposals in 2011, we got a lot of mixed signals from other browser vendors. The standards process isn't necessarily something that you can game. You need to actually forge agreement, but most of what we got was indifference. Browser engineers tend not to viscerally feel the problems that we feel as web developers. The grounding explorations that we were doing to understand the state of the art and where the problems lay were actually not the norm. Most browser engine projects don't do this.
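For flavor, this is the shape of code those language efforts were chasing. It's ordinary JavaScript today, but circa 2011 you needed a compiler like Traceur to turn it into something shipping browsers could run; the class name and endpoint here are invented for illustration.

```js
// Classes plus async/await: plain JavaScript now, compiler output back then.
class InboxView {
  constructor(root) {
    this.root = root;
  }
  async load() {
    const response = await fetch('/api/inbox');   // hypothetical endpoint
    const messages = await response.json();
    this.root.textContent = `${messages.length} messages`;
  }
}

new InboxView(document.querySelector('#inbox')).load();
```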
A lot of people were surprised to hear that browser engineers don't understand web development very well, but having worked on both sides of this line, it makes sense to me. It's not like you beat the boss at the end of the web development level and suddenly get handed a Chromium checkout, an MSDN subscription, and a copy of Effective C++. Web development and web engine development are separate skills, and we hire separately for them. The problems we were trying to solve in 2010 and 2011 were partially about speed, but also about expressivity. The problems that the front-end community is trying to solve today have very much the same flavor, except that many of us are now so up to our necks in frameworks and CLIs and tools that we can't really even see how changing something in our apps would actually affect the eventual outcome for our users, especially in the context of the existential threat that we face in terms of mobile performance. Back in 2011, when we put a slide like this up, developers understood that this code didn't feel right. It was div, div, div, div, div, div. It was not saying what you meant. So the expressivity argument tended to work, but the performance argument has kind of always fallen flat. Desktop computers were getting faster, and Wi-Fi was everywhere. Was this really a problem that we needed to solve? A problem we needed to solve right now? We tried to sketch out a simpler future, one where you didn't have to have heavyweight frameworks and build steps to enable the edit-refresh cycle that we had grown up on, the one that we were so addicted to and really enjoyed. All this code came from a snippet in the slide deck we put together nearly seven years ago. And what we thought it showed was the value of being able to say what you mean: class SplitPane extends HTMLDivElement. We built ES.next-to-ES-now compilers to prototype all this and try to feel it for ourselves, and it actually worked out pretty well. And very much counter to today's ethos in a lot of the JavaScript ecosystem, we built all of this with the hope that we would be able to invest it all into the platform directly, and then evaporate those tools away. We built them with a sell-by date. Traceur was meant to go away. Transpilers were meant to be an ephemeral feature, not something that lives forever in front of the platform. And we didn't get it all, but this code works today. Things like the property initializers, for instance, that were so nice in that last example aren't here, and they've been stuck in committee for reasons that, frankly, just make me grumpy. But this code works in Chrome and Safari and Opera and Samsung Internet today, without any tools, and soon in every other browser, because the standards process allows us to forge consensus that's broader than just one platform, just one browser, just one runtime. So we started all this in the desktop era, right? And now we have Web Components; you've been hearing about them for at least one day, going on two. But we're not in the desktop era anymore. So, having finally solved all these problems, does it matter? For context, 2G connections still make up almost half of mobile internet connections. If you saw this stat in one of our slides last year, it was nearly 60%. So that's on the decline, which is good. But if you're trying to enter emerging markets, 2G is how your median next user is going to feel your experience. The situation is changing rapidly in emerging markets.
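Coming back to that split-pane snippet for a moment: under today's Custom Elements API, its spirit translates to something like this. This is a reconstruction rather than the 2011 slide code, and the tag name and layout details are invented.

```js
// Saying what you mean: the component *is* an element, not a wrapper around one.
class SplitPane extends HTMLElement {
  connectedCallback() {
    // Lay the child panes out side by side; no framework base class required.
    this.style.display = 'flex';
    for (const pane of this.children) {
      pane.style.flex = '1';
    }
  }
}
customElements.define('split-pane', SplitPane);

// Usage: <split-pane><div>left</div><div>right</div></split-pane>
```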
In India, for instance, Reliance Jio is having a massive impact this year, but the global reality for the next billion users is that 3G is how people experience your apps and sites. For those users, the experience isn't just about the network. It's also about the hardware that they have in their hands. I promised Matt that I wouldn't make this a performance rant, but we really do need to understand these devices if we're going to deliver the sorts of UI that we want to the users that we want to have experience them. Okay, quick digression. More cores is not faster. More cores is not faster. There's probably no better trade-off in mobile CPU design today than to take all of the extra cores, which are actually spun down most of the time, and turn them into caches for the cores that are spun up. If anyone from Qualcomm or MediaTek or Samsung is listening or sees this video later, just know that we see you. The endless marketing of "more cores is better" has run its course. The king is actually kind of naked. We need devices that have good thermals, which many of them don't. This is the Nexus 5, of course. The CPUs are super slow for lots of reasons, and it's not just the terrible thermals. I could go on and on and on and on about the 28-nanometer process that coincided with the switch to 64-bit and how voltage leakage kept everyone from being able to scale their CPUs up, but I'm not going to do that. What I will say, though, is that most of the web development community has a really long way to go in appreciating how far from good the current practice really is. A user waiting on a huge pile of JavaScript to arrive and execute just to see content or to start using it, to start tapping on it, doesn't give a whit about whether or not your developer experience was very good. Who were we doing this work for? It's a question worth answering. Kevin Salk yesterday emphasized WebPageTest.org and Time to Interactive, and I think that's absolutely the way forward. Accurate network simulation plus real hardware gets you pretty close to the ground truth of the experiences that we're building today and shipping to users over unreliable networks. Perhaps the most important thing for us to understand collectively, though, is that when hardware gets cheaper, it has a different effect here than it does in an emerging market. For wealthy users, more transistors every year, Moore's Law, right? Every 18 months, double the transistor count. Here, that turns into basically constant dollar spend or euro spend: if I spend the same number of euros next year on a phone, it's a faster phone. In an emerging market, where most people don't yet have devices, that turns into a broader market for a cheaper version of what you already have. That is to say, we don't trade transistors-per-dollar for faster phones in emerging markets. We trade it for a larger market for the same speed of phone. That creates a larger set of users carrying devices that we might charitably have thought of as mid-tier back in 2015. And we should expect that trend to continue for a couple of years, which sounds pretty bad. So for example, this is the Samsung Galaxy On5. You've probably never held one of these. It's just about $100 US, or 90 euros, more or less. It's got a gig and a half of RAM and about eight gigs of storage. It's pretty popular these days in India; it's, I think, the third most popular phone on Flipkart.
The CPU on this device is quad-core, but if you've seen my other talks or heard me rant just earlier, you'll know that more cores doesn't actually mean a lot. What's interesting to me about this device, though, is that it runs the latest Chrome. The hardware might be stuck in 2015, but the platform doesn't have to be. So how slow was a $100 phone? Well, here are some Geekbench 3 scores for the Android ecosystem's system-on-chip packages. The old Nexus 5 that used the Snapdragon 808 is down sort of toward the bottom of this, as you can see. So where's that Galaxy On5? You kinda gotta scroll a while. Oh yeah, there it is. But you know what I'm not bummed out about? Our thesis from 2010 was right. We can use the platform's evolution to preempt heavyweight user-space apps and frameworks and get some of this back. We can dispense with tools that only work for wealthy users. And thanks to the platform's evolution, we can level the playing field for everyone. You all know about the Shop app by now. It's a beautifully executed example of what's possible with Polymer, but what's interesting to me about it is that it demonstrates how this progress plays out and how we can cope with the next few years of evolution in the marketplace as we're waiting for phones to actually get faster. We don't have to drown everything in JavaScript. Here's Shop using Polymer 1.7, and I dug it out from an old checkout. It loads something like 340K of resources total. When served well, it can be super snappy. It can get to Time to Interactive on a Moto G4 and a 3G network in something like five seconds, which is, I think, the gold standard for a first load. But here's the unbundled version using Polymer 2 and ES6. It's exactly the same functionality, but it's 60K smaller. What changed? Well, it's able to use the Custom Elements v1 APIs, and we're able to serve just the things that this browser can support. For users on slow connections and flaky networks, this is huge. Shop was fast already, but every kilobyte we lean on the platform for, instead of sending it down the wire, really pays us back, and it pays us back in terms of reach over the next few years. Polymer is delivering on the promise we believed in back when we started the Parkour project, but in a context that we just didn't see coming. Web Components, it turned out, were the answer to a question we didn't know to ask. When Darren and Alex and Matt and the rest of us all bet on this effort back in 2010 and 2011, ES6 and Web Components were sort of things we just were gonna figure out, things that we kind of knew were gonna be important, but we didn't understand why. But we got lucky, again. When we started ripping elements out of Gmail, we didn't know that it was gonna make anything faster. Turned out it did. When we started upgrading the platform, we knew it would make it nicer to use, but we didn't know how much faster it could make things. But now we know. We're in a special moment right now. Smartphones are opening up computing to people who've never had access to it before. And the web, I believe, can be the single best way to deliver experiences to those users, if we don't drown it in JavaScript. The web's evolution is freeing us up to attack new and harder problems further up the stack. That's great news for us as developers, but it's mostly great news for our users, if we use the platform. Thanks for having me, and thanks for making your sites fast and small.