We want everyone to have the best experience possible at this year's Polymer Summit. This is an inclusive community. No matter your experience or background, you're welcome here. We encourage you to be excellent to each other by saying hi to new faces, building on one another's ideas, and reporting any uncomfortable experiences. We have a zero-tolerance policy for harassment of any kind. This policy is posted on large signs around the venue, and our full community guidelines are on the event website. Please share your positive and constructive feedback with staff and speakers. Staff and speakers can be identified by their staff or speaker badges or shirts. Let's make this the best developer event ever by creating an excellent experience at this year's Polymer Summit. Thanks. Hello, and welcome to day two of our Polymer Summit. So we've got a bunch of really, really awesome speakers here for day two. And to start off are two of my oldest friends in this industry. I've worked with Ben and Dion for about 10 years now across three different companies, the latest being Google. And I'm such old friends with them, you saw they let me write their abstract for this talk, which, if you noticed, says they're gonna do some interpretive dance. So definitely pay attention near the end when Dion lifts Ben over his head and twirls him around. It's really spectacular. So with that, I'd like to welcome onto the stage Ben and Dion. It's a pleasure to be here with you this morning to kick off day two of the Polymer Summit. He's Ben Galbraith. And he's Dion Almaer. So to start out with, I do have a bit of bad news. We have actually been practicing our dance number for quite a long time now. Yeah, and we were just informed that we didn't get the rights for the music. Oh, man. It's a shame, because we had tailored this number to this one particular song. And without it, we're just gonna have to move on to our backup talk, which we'd prepared just in case.
There's one little bit of good news though, which is Rob Dodson snuck in and did a little video of us practicing. So maybe we'll get something for next year. Yeah, maybe, maybe. So we've both been working on the web for a really long time. Initially as developers, building things on top of the platform back in the mid-90s. And we've worked in various capacities, places like Google, Palm, like Matt was just talking about, and Mozilla, pushing the web platform forward. And now we're at Google. This is Dion's second time, my first, where Dion leads developer relations. And Ben's product lead on Chrome. And in our new roles, we're in positions again to help steer the web. And so we've been thinking a lot about its future. And this morning, we wanna share some of our thoughts with you, just to sort of get a conversation started, both here and afterwards, and hear your feedback about where the web platform should go. And as we've gotten to riff on this again, and really kind of go deep, we can't wait to share just how excited we are about the future of the web. Which may not be obvious to some of you, because the web's actually been through some tough years lately. Which is kind of a shame, because the web in its early years was such an amazing wunderkind, sort of personified by Marc Andreessen on the slide. It's amazing when you sort of sit back and ponder all that the web was able to accomplish in such a short time. And I think one of the first accomplishments that comes to mind is the global disruption of basically every industry that the web kicked off, that's transformed the world we live in, and that disruption is still reverberating today. And before the web, do you remember this? Do you remember using things like Gopher, FTP, IRC? You know, I was at the University of Minnesota. Yeah, there we go, there we go. Wow.
I was at the University of Minnesota where we gave birth to Gopher, and it was really useful in the computer lab, but not so much for kind of the average Joe. And so really the web, to me, kind of packaged up the internet and made it the first consumer-friendly and really developer-friendly platform. I gotta tell you, I'm a little upset about all the applause, because I'm gonna have to listen to stories about Gopher now for the next two weeks. It was amazing. You know, the iPhone, we sort of forget about that i in iPhone sometimes, and that goes back to the iMac, where it explicitly stood for the Internet Mac, because back when Steve Jobs came to Apple a second time and sort of figured out how to bring the Mac to market, he felt like he needed to do something different to compete with the Windows app ecosystem. He had to change the game, and so he marketed the iMac as the best way to get to the Internet and positioned the Internet as really the thing that mattered, not apps. And he kind of had a point back then, because the Internet was where all the cool new content was that people wanted to engage with. And so the Internet, in addition to its other accomplishments, also opened up the marketplace again for a platform like the Mac to come in. Yeah, and it wasn't just the Mac. Can you imagine using Linux without being able to use a browser? So at this point in time, the web was a truly unprecedented success. Then what happened? The web had a little younger brother come along, we'll call him Mobile. And when your first younger sibling comes along, you've got this cute little baby, and who doesn't love a little baby, right? Look how cute it is. And it even had cute little toys that it could play with. These were so fun. You could do some fun, cute little things with them. And like a lot of younger brothers, you try and kind of emulate and act like the older brother. So he tried to do cute little web-like things back then in Mobile.
And then before you knew it, that little brother, Mobile, grew up and turned into something that, maybe, someone might think is a little bit stronger, a little bit more capable, a little bit more talented than his older sibling. And people suddenly noticed. And before you knew it, mobile apps became the new thing. And at this point, we started to question, were the web's best days actually behind us? And you know, the web's been through lots of difficult moments before. If you remember, things like Java applets, Flash, and Silverlight all tried to go over the top of the web and replace the web with a different UI layer. And the web's constantly been able to reinvent itself in pivotal moments, like Firefox coming to market and restoring browser diversity at a time when people openly questioned whether there was even a need for browser innovation. Or Ajax spurring a reinvention of web apps, resetting what was even possible to build on the web. Or Chrome bringing speed, stability, and security to a whole new level and enabling the landscape we have of really sophisticated, high-performance web applications today. So the web has this history of reinventing itself to adapt to the changing landscape and new opportunities. And then mobile certainly became another one of these opportunities, right? At this point in time, the web, as the older brother, kind of became maybe a little bit bloated, maybe a little bit out of shape compared to mobile apps. And it was time to kind of get back to the gym to get fit again. And we did that. And that resulted in things like AMP, PWAs, all of these capabilities we've been talking about the last few years, making the web truly awesome on mobile. And that continues as we come up with new technologies like WebGL, WebAssembly, Houdini, et cetera, et cetera. The web's working out hard to enable this new class of applications. And so now the web's kind of catching up to that younger brother, but... What about the future?
As we look forward to what's on the horizon, we actually think the future of the web is set up to be really exciting. And to explain that, we want to talk about three trends that are just around the corner and how the web is situated to meet these opportunities. First, the proliferation of even more devices. Then we want to talk about augmented reality for a minute. And finally, we want to talk about assistive interfaces. To start out with, let's consider today's smartphone and desktop world. There are so many devices. It's really hard as a native developer to keep up with compatibility across all these different devices in all the worldwide markets. And that explains why things like watches and tablets aren't universally supported, because of the effort required to extend to them. And then consider smart TVs, which are pretty ubiquitous, and other smart devices that are now hitting the market, and new ones that are just around the corner. How are we going to manage developing for all of these different devices? And it sort of helps you realize that native is really fun when you're targeting a relatively homogenous device surface, but it can be a nightmare when you're tackling all of this complexity. And so it really makes you appreciate that native makes the most sense when you're creating an experience that's tailored to a particular device or device profile. And that's really why you sort of use the word native, because it sort of indicates that you're targeting something that's really specific. And it also helps explain why one of the world's most popular, vertically integrated hardware and software companies has its own native SDK split into different flavors for watches and TVs and tablets, smartphones, and laptops. It's sort of like us coming to Copenhagen. For the best experience, we'd love to fully understand and learn Danish, but A, that's kind of hard, and B, we're only here for a week, and that doesn't really scale.
Like we're going to Poland in a couple of weeks to speak at a Google Developer Day. Fortunately for us, there's a cross-platform language called English that's kind of a fantastic alternative. Thanks very much, by the way. And so the web's always been about this trade-off of sacrificing sort of tight native coupling for the ability to adapt to a wide variety of devices and platforms. And the idea of the web adapting itself to these wildly different device types has been baked into the web from the very beginning. Even the original version of CSS contemplated features to enable a web page to adapt to text-to-speech devices, for example. And these mechanisms in the web were largely ahead of their time. You could use them for theming, and you can see this in sites like CSS Zen Garden, but now, in the multi-device world we're entering, we're finding new uses for these mechanisms, and things like media queries bring them to a new level of maturity and usefulness. So as we enter into this multi-device world, we think the web is actually ideally situated to help us cope with its challenges, and it's ready for this fundamental trade-off that will be required to scale to this world we're entering of new platforms, form factors, and modalities. Let's talk about augmented reality for a second. You know, with AR, we're seeing a lot of these different scenarios pop up that are pretty compelling. You know, we're merging digital and physical objects for shopping, gaming, and much, much more, and seeing it through a phone is neat. It's already useful. It's nice to be able to kind of zoom in and play around with things. But what's gonna happen when this explodes in the future and we get full field of vision through glasses or whatever technology is actually gonna bring this to you? We think this is actually gonna be pretty fantastic. And when you think about this future and you look at the heart of AR, to us, it looks kind of like a browser.
It's about mapping physical addresses to virtual content, and this is what the web does every day through URLs. And what do you want here? You wanna link to third-party content, to seamlessly browse around and explore different things that are out there in the world. You don't want to install apps for every experience as you go around. So to us, when we look at AR, it seems like a natural extension to the model of the web itself. The last thing we wanna talk about are assistive interfaces, because now that computers can understand you when you speak to them, and they can speak to you, it's a fantastic way to get things done. I mean, really, why use the linked list of native app launchers and bespoke app interfaces when you can use the voice hash table to get right to the functionality bucket that you're looking for? I mean, it's pretty fantastic if you can say, set a timer for 30 minutes, instead of having to open up the app and navigate through it. It's really effective. But not always. Does anyone remember calling into phone hotlines, something like a Moviefone listing, where you're sitting there on the phone while this voice comes back with, like, there are 10 movies close to you. First movie? Does anyone remember calling in? Was that a thing here? There's like a hand in the back. There's one hand. All right. Painful voice trees. Everyone's gone through that a little bit. It was a very frustrating experience. And it really shows that for assistive UIs to be effective, they have to span all of the screens in our life, because you wanna be able to start a session and have that session seamlessly go to whatever screen is around you. In the Moviefone example, you wanna be able to use your voice to ask what movies are playing, then you wanna see the listings on a tablet or a TV that's close by and communicate which one you wanna see.
And so this means that the assistive ecosystem that we're running into is also coupled to the device problem we mentioned earlier, but it introduces another problem, because now the assistant has to act as an agent for the user, and in order to do that, it has to understand the content of the experience. Where have we heard the term user agent before? So in order for this to work, we have to introduce a semantic layer into our content, which again takes us back to the future of the web. A recent article by Zach Bloom on the history of CSS sort of reminded us of this. It included this awesome quote from Marc Andreessen, where he talks about having to explain to developers that it's a feature of the web that you don't actually get to control the presentation of your content. That the web is just about content. That vision actually eventually failed, as browsers bowed to pressure to hack presentation tags into HTML. And the web tried again with CSS. The original vision of CSS was to enforce this separation, and there was also a failed pivot to XML and XSL. Do you guys remember that, anyone? Again, no, we got a few, more than Moviefone actually. So we've tried, and I think these attempts ultimately failed because they were just too far ahead of their time. Ultimately, everyone was just trying to display pixels on the screen, and this sort of multi-device, multimodality future was science fiction. And the one thing we're really good at as developers is finding the easiest path to solutions. But with the need for intelligent agents to broker these experiences, and for those agents to understand what's happening, we think it's time for us as a community to revisit these ideas and evolve them for the future that we're running into. And now we have web components, where we get to separate semantic markup from the actual implementation. We've barely scratched the surface on how we can separate those things at the moment.
And so in review, when we look at these trends, when we look at the devices that are coming, and augmented reality, and intelligent agents, we think they actually play to the web's strengths. And it's one of the reasons why we're so bullish and excited about the future of the web, and it also paints for us a potential roadmap of where the web needs to go from here. So coming back to the brothers again, they each had their kind of time in the sun. At this point, they've kind of matured. We've got mature platforms on both native and the web. And we believe both actually have a long future ahead of them. We've got interesting things to come. Unless, of course, another little brother or sister comes along and usurps them. Thank you. Thank you very much. So I know the lack of dancing was very disappointing for everyone. It was disappointing for me. But happily, since the codelab area isn't actually being live streamed, I'm pretty sure we have the rights to play their music over there, so we'll see if they're still paying attention. So there's a lot of things we do on this team that aren't actually Polymer. That's why it's called the Polymer Project. There's the polyfills that everyone thought were Polymer. There's the elements we build that everyone thinks are Polymer. But we do a lot of experiments as well. And one engineer in particular on the team really, really, really likes to experiment a lot. And here to talk about one of his latest experiments is Justin Fagnani. Hello, hello. So my name is Justin, and I work on Polymer tools. So normally, I'd be up here giving you a tools talk. But like Matt said, I like experiments. So right now, I'm going to do something a little bit different and give you a sneak peek at this library we've been working on that we're very, very excited about. And this is only a 15-minute talk, so I'm going to go really, really quick. Hold on. All right, so I'm going to talk about templates.
And obviously, we're going to talk about HTML templates. And we're very familiar with those here, because templates are pretty central to the Polymer developer experience. But HTML templates and other techniques for updating DOM have been around for a long time. And they're constantly evolving. So before I get into this library I'm so excited about, I wanted to give you an extremely brief history of HTML templates. So in the beginning, we had manual DOM manipulation. You had to create some nodes individually, wire them together into a tree, and then maybe hold on to some of those nodes so you could update them later. And this was a lot of code. It was verbose. It was not very fun to write or to look at. Alternatively, you could try to use innerHTML. You could build up a string of HTML, then inject it and let the browser take care of creating the nodes. But then if you wanted to update it, you had to traverse this tree or use querySelector or something like that. So it ended up not being a lot better. So then we had this Cambrian explosion of declarative template systems that embedded expressions inside of them, and control flow constructs like ifs and for loops. And Polymer belongs to this category, as well as systems like Angular or Ember. And then a few years ago, React introduced JSX and VDOM, virtual DOM, into the world, which take a different approach, both syntactically and functionally. So JSX isn't standard JavaScript syntax. It's an extension that embeds markup into JavaScript, which is then compiled out with a transformer. And virtual DOM took a new approach to updating the DOM. Instead of pinpointing exactly what data is embedded into the DOM, it just goes ahead and re-renders the entire tree, the entire template. It does this in a virtual tree, then compares that to the last tree it rendered, figures out what changed, and then applies those changes to the real DOM. So with all this evolution of templating solutions, what comes next?
Something does come next, right? There's always a next. Well, we have one idea. And since it's 2017, obviously that means that idea is going to come from ES2015. And that, we think, is JavaScript tagged template literals. Template literals are one of the most unsung features of ES6, or ES2015, because they're way more powerful than they appear at first. They're basically strings, but instead of quotes, they use backticks. And they can span multiple lines. And you can also embed JavaScript expressions inside of them, which is great for building up strings from data. And this is where people usually stop with template literals. But interestingly and very powerfully, they can be tagged. And a tag is a special function that processes the template literal, both the literal parts and the expression parts. And it can return whatever it wants. It doesn't have to return a string. So this is interesting. Template literals are actually designed to allow domain-specific languages to be embedded in JavaScript. And HTML is a domain-specific language. So you can see how these features together let you write something that, at least syntactically, looks like the HTML templates that we're already familiar with. So this seems like a promising direction to explore, especially because of our move to JavaScript modules that we announced yesterday. Polymer templates inside of modules are just strings. And when we worked with this a little bit, we wanted something a little bit better. And we really wanted to be able to take advantage of the fact that when we're writing templates in JavaScript, we can be right in the JavaScript scope that holds the values that we want to put into the templates in the first place. And we also wanted to take advantage of the fact that we could use real JavaScript expressions, which are going to be as fast as possible. The key, though, was figuring out how to keep updates fast and make minimal updates to the DOM.
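To make the tagging mechanism concrete, here is a minimal sketch in plain JavaScript of what a tag function receives; the function and variable names are illustrative, not lit-html internals:

```javascript
// A tag function receives the static string parts as an array, and
// each embedded expression's evaluated value as a rest argument.
function tag(strings, ...values) {
  // It can return anything -- here, an object pairing the two.
  return { strings, values };
}

const name = 'Summit';
const result = tag`Hello, ${name}!`;
// result.strings is ['Hello, ', '!'] and result.values is ['Summit']
```

Because the tag decides what to return, it can build a template object instead of a string, which is exactly the opening lit-html exploits.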
And we're starting to see a number of other libraries now that use a similar approach, including libraries like hyperHTML, hyperx, bel, and t7. And now I'd like to introduce you to the library that we're so excited about, which is called lit-html. Now, lit-html is still very, very experimental and under development. But we wanted to give you an idea of what templating in a module-only world might look like. So we call it lit because, well, it uses literals. And it's very little. And we needed a name, and this name was cool. So what is lit-html? lit-html is a combination of JavaScript template literals, the HTML template element, and a render function that lets you efficiently render and re-render these templates to the DOM. Let's see what this looks like. So lit-html templates are just JavaScript template literals. They contain embedded JavaScript expressions. And they're tagged with lit-html's html tag. And then we have a render function which lets you render a template to some place in the document. Now, on the surface, this looks like it could just be building up an HTML string and using innerHTML to set it on the document. But that's not what's happening at all. If we were using innerHTML, we could just replace the expressions with their values, join them together into a string, use innerHTML, and then we'd have our template. But this would be very difficult to update, because we've lost the locations of the expressions from our template. So instead, what lit-html does is replace the expressions with placeholders. And it creates a string out of that. And then it uses that to create your template. And after it's created the template, it goes through and it finds the placeholders again. And it remembers where they are. We call these places where the expressions are, parts. And then we remove the placeholders from the template. And now we have a template that we can use over and over again to create DOM.
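The placeholder step described above can be sketched roughly like this; the marker text and function names here are made up for illustration, and lit-html's real implementation differs:

```javascript
// Join the static strings with a marker so the positions of the
// dynamic expressions survive into the single HTML string that
// gets handed to the browser's parser.
const MARKER = '{{}}'; // hypothetical placeholder text

function templateHTML(strings) {
  return strings.join(MARKER);
}

// The prepared template would then be walked once to find the
// marker positions ("parts"), remember them, and remove them.
const htmlString = templateHTML(['<p>Hello, ', '!</p>']);
// htmlString is '<p>Hello, {{}}!</p>'
```

The key point is that the markers carry the expression locations through parsing, so the library never has to diff or search the finished DOM to find where values go.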
And because we remembered the locations of these dynamic parts, we can replace them with values after we've created that DOM. And because we have those locations, we can efficiently update them with new values when the values change. Now, here, we're probably going to use templates mostly to create shadow DOM for custom elements. So what would it look like to combine lit-html with Polymer? Well, here's a Polymer 3.0 element in a pure JavaScript module. And we have an inline template here that's returned from a static method, the template method. If we convert this to lit-html, you can see that it's very, very similar. Except that instead of a static method, we have an instance method, because we want access to that instance-level state. And even though this doesn't appear very different, the developer experience is actually a lot better, because the expressions are real, regular JavaScript. So now you get all the syntax highlighting, linting, formatting, type checking if you use TypeScript, all those features that you're used to from your tools, because it's just JavaScript. It's also more powerful, because it's just JavaScript. You can do arithmetic in expressions. You can call functions. You can build up templates from parts using if statements and for loops. And you don't have to hang everything that you want to access in expressions off the prototype anymore. You can just import it or declare it and use it. Next, I want to talk about the three main goals of lit-html. First, we want to make it efficient, because quickly rendering DOM is one of the primary tasks of a template system. Then we want to make it expressive, because you should be able to build very interesting templates with your template system. And then finally, we want to make it extensible, because no template system can build in all the features that you're ever going to need. So first, let's talk about what makes lit-html efficient. So one of the things is template literals themselves.
They have some really nice properties that we take advantage of. One is separating static and dynamic content. Let's look at why this is really important. So every HTML template system is responsible for creating a tree of DOM nodes. But only some of those nodes are ever updated after they're created. So templates have this structure. There's a separation between static and dynamic parts. And then you have the updates. Not every dynamic part changes at once. Sometimes you update just one value. And so when thinking about the cost of templating, it's useful to have these two questions in mind. How many nodes are updated? And what's the cost per node? And if we look at something like Polymer, we notice that Polymer has this association directly between the data and the nodes that depend on it. And so Polymer can scale its cost with the number of changes that you have. If you only change one thing, Polymer only has to do one thing to apply that change. And if you look at something like VDOM, it takes a very, very different approach. VDOM re-renders an entire virtual tree every time any of your data changes. So it actually scales with the total number of nodes that you have in a template. But they try to make up for this by driving the cost of each node down very, very low by using a virtual tree. Well, with lit-html, we try to be somewhere in the middle and get the best of both worlds here. We're only going to update the dynamic parts of your template. So we don't update every node in the template. But we also try to drive that cost down very, very low so we can be as fast as possible. And this all falls out quite naturally from using template literals. Because with template literals, we know exactly what parts are dynamic and might change, and what parts are static and will never change. And we don't even have to do any work to figure this out. The syntax of JavaScript just does it naturally for us.
So besides separating static and dynamic sections, template literals also have this property where the literals that are passed to a template tag are exactly the same for every call to that tag. This lets us do one-time setup work, like the HTML template preparation I was talking about. Let's take a look at an example of that. So let's say you have a function that takes some data and then puts it into a template. So we're going to call this function. We're going to say hi to Amy. And this function is not going to return DOM. This function is going to return a template result. It contains the template that we want to render and the values that we want to render into that template. And then if we call this function again, now we're going to say hi to Alex, we get a new template result, which contains the template we want to render and the values that we want to render into it. Well, it turns out that these template objects, they're not just the same, they're exactly identical. So we only have to create the template one time, the first time we ever see it. From then on, every time you call this function, all we're doing is passing along the values that came with it. And that's very, very, very cheap. So another reason why template literals are great is because parsing strings in JavaScript is extremely fast. Parsing strings is somewhere between three and four times faster than parsing generic JavaScript expressions, which is what most HTML-in-JS solutions use. So also, we're fast because we try to use the platform as much as possible. We use the built-in JavaScript and HTML parsers. We don't ship any parsing bits ourselves. And we use the template tag for fast, deep content cloning of our templates. This is the full minified source code of lit-html. I stole this slide idea from Jason Miller of Preact, where he did this one time, showing Preact on a slide. And I wanted to show that we can put it all in here in 18-point font with room to spare. It's less than 1.7K minified.
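That identity guarantee is observable JavaScript behavior, not anything special to lit-html, and it's easy to verify. Here's a minimal sketch of a cache keyed on the strings array; the cache structure is an assumption for illustration, not lit-html's actual code:

```javascript
// The strings array for a given template literal call site is the
// same object (===) on every invocation, so it can key a cache.
const templateCache = new Map();

function html(strings, ...values) {
  let template = templateCache.get(strings);
  if (template === undefined) {
    template = { strings };        // one-time preparation would go here
    templateCache.set(strings, template);
  }
  return { template, values };     // a "template result"
}

function greeting(name) {
  return html`<h1>Hello ${name}</h1>`;
}

const amy = greeting('Amy');
const alex = greeting('Alex');
// amy.template === alex.template: prepared once, reused thereafter
```

Every later call from the same call site pays only the cost of a Map lookup plus passing the new values along, which is why repeated renders are so cheap.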
I was conservative when I put less than 2K up there. So it's quite small, and this helps us be fast too, because you don't have to load much data over the network. So that's efficiency. Now, let's look at what makes it expressive. Let's take a look at what kinds of values you can hand off to lit. So you have simple string values. We can say, hello, Summit. You can also put expressions inside. Like, imagine you're showing a page number in a template. Your users are probably not computer scientists, so they're expecting a one-based index. And you have a zero-based index that represents your page number. You can just add one to that inside the expression. So things like this are extremely easy. You can also set attributes, obviously. Here, we're setting the value attribute of an input element. But you can do some more interesting things, like you can hand lit-html DOM nodes directly. So you might have some complicated structure that you want to create, and it's easier to do that imperatively by hand. Once you do that, you can just hand the result off to lit, and lit will put it in the right place in the template. You don't have to worry about using querySelector or some other method of finding that dynamic spot that you want to manage. We also support nested templates. So you can build up templates from parts, and you can share these parts around with different templates. Here, we're injecting a header into a container template. And because we support nested templates, you can build up templates dynamically with logic. So here, we're going to show a different message to the users based on whether they're logged in or not. We support arrays. So if you hand an array to lit, it will just render every item in the array. And this becomes extremely powerful when you combine arrays with nested templates.
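Using the same kind of toy tag function as before, the arrays-plus-nested-templates pattern looks like this; this is a simplified model of the shape of the data, not lit-html itself:

```javascript
// A toy tag that just captures its inputs, standing in for
// lit-html's html tag.
const html = (strings, ...values) => ({ strings, values });

const items = ['apples', 'pears', 'plums'];

// Mapping data over a template produces an array of template
// results, which a renderer can then walk item by item.
const list = html`<ul>${items.map((item) => html`<li>${item}</li>`)}</ul>`;
// list.values[0] is an array of three nested template results
```

Because each list item is itself a template result carrying its own strings and values, the renderer can treat the array as a sequence of small templates rather than one opaque string.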
So now, you can take an array of data, map it over a template, and you get an array of templates; you hand it to lit, and lit will efficiently render that to the DOM. There are some other interesting value types we handle, like promises. So let's say you do a fetch of some content over the network. You can just hand that promise to lit. And when that promise resolves, lit will render the result into your template. You don't have to do anything further for it. So if you put all this together, you can see that we can create some really complex and expressive templates. You can take a string and split it by new lines and generate a paragraph element for every paragraph in the string. You can dynamically choose what kind of template you want to render based on the data. And then at the end, you can put this together in a more declarative form, because we support that too. So we try to give you the spectrum, from doing things very dynamically and imperatively to being able to do things declaratively and very structured. All right, finally, let's look at extensibility. There are two ways that you can extend lit-html. One is with directives. So a directive is a function that lets you customize how a value is handled by lit-html. So I told you about promises, where you could give it a promise, and when it resolves, lit would render that content. Well, a lot of times you want to render some default content, a placeholder, that renders until that promise is ready. And that's what this until directive does. You give it both a promise and a placeholder. It renders the placeholder, then it renders the promise's value when it's ready. And I wanted to show you how easy this is to implement. So this is the entire implementation of the until directive. A directive is basically a function that gets a part. And a part has an API; setValue is the method it has. And so you can just call setValue. Here, we call setValue with the default content.
And then you can call setValue again when the promise resolves. And that's your entire directive right there. So it's very easy to extend and customize. Thanks. Another directive that we bundle is a repeat directive. This is very similar to dom-repeat in Polymer. It lets you do efficient lists where you can reorder the items in the list. So if you change the order of the list and then give it back to lit, it can track what data went to what DOM node and just rearrange the DOM nodes without having to re-render your entire list. OK, another way to customize lit is by using custom parts. Custom parts let you change how values are handled across an entire template. And we've bundled a couple of these together in a library we call lit-extended, which gives you Polymer-style sugar on top of text nodes and attributes. You get property bindings by default, and we have declarative event handlers. So for property bindings, let's say you have a custom element, and it has some property where it's expecting some rich data to be bound into it. With lit-extended, you can just go ahead and use the normal binding syntax, and you get a property binding. But notice something here. Notice that capital P up there. It turns out that because we're storing the attribute names in JavaScript, we can go back and get the case-preserved names of the attributes. So now we don't have to do this kebab-case to camelCase mapping anymore. We can just use the real property names. And that should eliminate a lot of errors that people accidentally run into sometimes. We can also fall back to attributes if we need to by using the dollar-sign suffix, just like Polymer. And we support declarative event handlers. This looks like Polymer because we have the on- prefix, but it's a little bit different, because we actually take a function value in the binding expression. And so you can write inline handlers like this, or you can just reference a method on your custom element.
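The until directive just described can be sketched in a few lines. This is a hedged illustration, not lit-html's actual code; the Part interface here is reduced to a single setValue method, and FakePart is a stand-in for illustration only.

```javascript
// A sketch of an until-style directive: render a placeholder now,
// swap in the resolved value later. Not lit-html's real implementation.
function until(promise, defaultContent) {
  return (part) => {
    // Render the placeholder immediately...
    part.setValue(defaultContent);
    // ...then overwrite it once the promise resolves.
    promise.then((value) => part.setValue(value));
  };
}

// A toy Part that just records whatever was last set on it.
class FakePart {
  setValue(value) { this.value = value; }
}

const part = new FakePart();
until(Promise.resolve('Loaded!'), 'Loading…')(part);
// Synchronously, the placeholder is in place: part.value is 'Loading…';
// after the promise resolves, it becomes 'Loaded!'.
```

The whole extension surface is just "a function that receives a part," which is why the talk can show the entire implementation on one slide.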
OK, and that's what makes it extensible. And that's basically all of lit-html. The next question is: where to now? Like Matt said, this is currently a bit of an experiment. So we have a lot of testing, optimization, and benchmarks to do to make it real. But from what we've already done, we know that lit is very fast and that it works great in modern browsers. Next, we want to write a mixin for Polymer so that you can use this together with your Polymer base classes. And finally, we want to add tooling support so you get syntax highlighting, code completion, hover-over documentation, all the features that we give you in our Polymer IDE plugins. So if you want to follow along, there's a GitHub repository. It's in PolymerLabs, lit-html. And you can already install this from npm. I'm going to push a new version this afternoon, and you can just npm install lit-html. And if you follow me on Twitter, I'll be talking about this as I develop it. All right, thank you very much. Thank you, Justin. So we've been doing this quite a while, and I wanted to have someone who was there from the very, very beginning come out and talk about what we've been doing all along. He happens to be another really old friend of mine. He can be a little scary on Twitter, but he's not quite as scary in the office. Still a little scary, though. Please welcome to the stage Alex Russell. Thanks. Thanks, Matt. I hope not to scare anyone today. Matt made me promise not to turn this into half an hour of telling you why your websites are too slow, so I'm going to try to restrain myself. I am a software engineer on the Blink side of the Chrome team, which is to say the engine bit, and I help lead our standards efforts on Chrome. But once upon a time, I also helped start the team that built the projects that ended up as web components and ES6 and a bunch of other improvements to the DOM and to JavaScript.
We put together kind of a dream team, I think, that set out to solve problems that seemed really urgent to us back in the day. And that problem was: why were the web apps that we were making so bloated and slow? We were looking at this from the perspective of Google's own apps: Gmail, Sheets, Slides, Docs, all the rest. And after a couple of years, after we'd released Chrome and a year after we'd released Chrome Frame, what we saw was that the front ends we were making still weren't using the new stuff that we'd finally enabled through Chrome and, hopefully, in IE through Chrome Frame. We had this huge, high queries-per-second system that did nothing but generate rounded-corner GIFs. I don't know if you used to make rounded corners like this, but this was one of the highest-QPS systems we had in a bunch of our backends, because there was this nine-grid system that made a table. So every time you wanted a rounded corner to work in IE6 and IE7 and IE8, in all the browsers that mattered, you would put in some JavaScript that would generate a table and all the elements that go along with it, then make an image request, and put that around the outside. You get the idea, right? It was super inefficient. And that really bugged me. And it bugged some of the other folks on the team. We used to have a lot of code hanging around like this. In fact, this is still in the Closure open-source repository. Don't read this; it's a bit of an eyesore. But what you get in the comment there is a sense of the DOM that's generated to give you rounded top, left, and right edges for your element. And that didn't even do real rounded corners. It didn't even use the GIF hack. This is just sort of faking it with one-pixel offsets. This is how we used to make rounded corners. So perhaps you were lucky enough to start doing web development after that era.
But what we were seeing at Google was that teams that were starting fresh in 2010 still carried around this kind of legacy baggage. Either their frameworks baked this sort of thing in by default and they just used it off the shelf, or they had assumptions about which browsers they had to support, which may not have been questioned in a while. And so they always seemed to bring along this least-common-denominator approach. Even when most of their users came from modern browsers, they still used these slow paths. To this day, Closure has components like this. This isn't aimed at anyone, obviously. We're kind of performance-obsessed at Google, and this didn't seem great. But we didn't really have hard data about what we were losing to this sort of inefficiency. How much was this actually costing us? So I did a little project to find out. I stole a couple of weeks from my 80% project. I just didn't answer any email or go to any meetings, really. I may not have been on the planet as far as anyone was concerned. I basically slept under my desk and just started hacking up Gmail. I got the servers running; that took half a week. Don't ask. And my goal with this project was just to remove as many elements from Gmail's DOM as possible. I wanted to make sure the app was pixel-for-pixel the same, keystroke-for-keystroke the same, that it worked exactly the way it did before, but see what you could do if in 2010 you started fresh. What would it take to iterate, to use some of the more modern stuff? So there's a lot of stuff in Gmail. This is a screenshot of a recent Gmail. And a lot of things in the engine are linear in the number of elements. Some things are linear or super-linear in the number of nodes above you in the tree, the depth of the tree. So fewer nodes, the theory goes, and shallower hierarchies would be faster.
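The two metrics the experiment targeted, total element count and tree depth, are easy to picture with a toy tree. This sketch uses plain objects with a `children` array rather than real DOM nodes so it runs anywhere; real DOM trees behave the same way.

```javascript
// Toy illustration of the two metrics: total node count and max depth.
// Plain objects stand in for DOM elements here.
function countNodes(node) {
  return 1 + node.children.reduce((sum, child) => sum + countNodes(child), 0);
}

function maxDepth(node) {
  if (node.children.length === 0) return 1;
  return 1 + Math.max(...node.children.map(maxDepth));
}

const tree = {
  children: [
    { children: [] },
    { children: [{ children: [] }] },
  ],
};
// countNodes(tree) -> 4, maxDepth(tree) -> 3
```

Work that is linear in count gets cheaper as you remove nodes, and work that is linear in depth gets cheaper as you flatten the hierarchy, which is exactly the theory the Gmail experiment set out to test.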
So looking at where the elements were going, it seemed that modern platform features could help remove some of the inefficiency: rounded corners, for instance, getting rid of a bunch of the extra elements for layout. But that was all conjecture. It turns out it's really difficult to measure the contributions of small elements in large systems. After about a week and a half of sleepless hacking, my prototype got to a 40% reduction in the total number of elements in the Gmail DOM. That 904 today used to be a lot larger. I was using CSS rounded corners instead of nine-grid tables and GIFs. I was using Flexbox instead of a custom dual-pass layout system written in JavaScript. We were dropping hugely nested layouts in favor of things that were much more modern, and in general opting for what you would do if you could start over circa 2010. So did it work? Testing locally on a fast machine, my development workstation, a 16-core i7 workstation-class box, was super inconclusive. It was kind of a bummer. I'd spent a couple of weeks on this, and I'd really hoped to find that something was going to be a lot faster. I had thought intuitively, knowing something about how the engine worked and something about how websites worked, that this would turn into a faster experience. But for the life of me, I couldn't find a way to say that it was faster. The changes made the Soy templates a little bit smaller. But try as I might, I couldn't detect any large performance wins on that really fast device. But that didn't keep my friend Emil Eklund from the Gmail team from productionizing portions of my prototype patches. Those changes wound up dropping the total size of the Gmail DOM by something like 30%. Given the ambiguous data we'd seen, both in local testing and in the population control groups at Google, we didn't really have high hopes for anything other than some code cleanups.
We thought that this might lead to a nicer way to work on Gmail, but we didn't think it was actually going to be a big performance improvement. Boy, were we wrong. With the reduced-element-count version in the wild, it turned out that this had a huge impact on real-world latency. Many users saw their Gmail inboxes load half a second faster. To put that in context, this was the single largest latency win in Gmail in years. And there was an entire team staffed to do nothing but reduce Gmail latency. None of us had any idea that this was going to work this well, even the folks who decided to productionize it, like Emil. We'd gotten lucky. We'd gotten super lucky with this. But what lesson do you learn from that? That's a fascinating question. We started to ask ourselves: what should we do next as a result of figuring out that we could make things faster this way? One possible lesson might be that making JavaScript faster leaves a lot of money on the table from the platform perspective, in terms of cross-cutting optimizations that you can make inside of your applications. We've proved that lesson to ourselves quite a few times since, in the wake of the Blink fork back in 2013, when we started to remove a lot of the O(n²) algorithms that were sitting around inside the layout code, for instance. Another possible lesson is that moving common stuff from user space into system space, into the platform itself, lets us not just reduce development time or complexity for a particular feature, but also potentially radically improve the user experience. It turns out that this only works if we can get developers to use those features, though. Compelling new features don't need a lot of help here. Developers eventually tend to realize that building and sending their own versions of a standardized feature down the wire is really expensive.
This is a little like the way that, in the US, state laws preempt city laws, and federal laws preempt state laws. When the bigger, slower-moving entity finally acts, it doesn't just add an umbrella to keep you out of the rain; it changes the weather entirely. I call this the doctrine of standards preemption. We see this a lot in the platform in other areas, too: querySelector, promises, you get the idea. So with that in mind, we started writing design documents about how we might upgrade the web platform. In time, this turned into a pitch deck for a project that we jokingly called Parkour. This is a slide from that internal pitch deck. The naming, of course, was very tongue-in-cheek. If you looked at us, we were the least likely set of humans to actually try real parkour, or, put the other way, the most likely set of humans to be grievously injured by trying it. So what did we do? Well, we sequestered ourselves every Thursday. We cleared our schedules, and for most of the day on Thursday, we would meet in this crazy U-shaped conference room in a building that used to be owned by The Gap, the clothing manufacturer. And so it was this wood-paneled, grandiose thing where you expected a marketing pitch deck for the next line of jeans to be presented breathlessly. But we filled it with nerds, and we just debated. Sort of halfway between San Francisco and Mountain View, near the airport, we just spent our Thursdays digging in, trying to figure out what was in there. What did we need to add? We spent so many hours in that room: arguing, prototyping, researching, arguing, prototyping, lots of arguing. And we recruited from the ends of the earth. This is the Sydney team. They eventually wound up owning a large portion of Blink's style engine. They still do. As a fun aside, some of these folks were just coming off of the Wave project.
A lot of our initial design documents were written in Wave. Win some, lose some, I guess. And we made sure to have experts from nearly every aspect of the platform in the room. Folks like Annie and Eric on the right there knew basically everything there is to know about how to do web development in that era. And folks like Tab on the left and Chris Burroughs were language and standards experts. Together we could figure out solutions that work not just in practice, but also in theory. That's kind of valuable when you need to actually go write it down as a standard sometime in the future. We pulled in designers and toolkit engineers and folks from all over the company to help us hone our understanding and try to understand aspects of problems that weren't clear yet: both the problems themselves, but also the ambiguous design space around how we might potentially solve them. And did I mention the whiteboards? We had a lot of whiteboards. We spent a lot of time at whiteboards. So many whiteboards. All the whiteboards. And when it wasn't whiteboards, it was video conferencing from all over the world. At one point we had folks pitching in from Sydney, Tokyo, San Francisco, Mountain View, Seattle, London, Munich, and St. Petersburg, all at the same time. We didn't sleep a lot. It was sort of shocking how many designs we worked through in the space of a couple of years. It was exhausting. It was an incredible amount of work and discovery and prototyping. It turns out that inventing new platforms is relatively easy. Iterating on existing ones compatibly is much, much harder. One of the most fruitful veins of exploration for us was digging into the then-popular JavaScript frameworks to try to understand how they worked, what was in there, what was common between them. Members of our ad hoc team had built JavaScript frameworks that were at the time powering some of the largest products from Google, IBM, and Sun. Remember Sun?
And we took it upon ourselves to learn the ones we didn't know already. We looked beyond the JavaScript tools, too: not just jQuery, Closure, Dojo, and YUI, but Flex and XAML and XUL and XBL and JavaFX and pretty much anything else we could get our hands on. What we discovered were striking similarities but wildly different levels of platform support. There was a huge difference between some of these environments in terms of batteries-included versus bring-your-own. The web, it turned out, was very much in the bring-your-own camp. We tried to boil a lot of what we learned down into these documents, these endless documents that cataloged the ways people were trying to plaster over what the platform did, or level it up through common idioms and interfaces. Just to give you a sense, this was what we did. Please don't read this. This is one of the investigations we did just to compare and contrast the event systems. It's copied directly out of a document I wrote, and it gives you a flavor for some of this research. We wanted to know what toolkits and frameworks needed to provide because the platform wasn't either nice enough to use or expressive enough to get the job done. And we tried to identify areas that were common. Something that really jumped out at us was how much infrastructure each library was carrying around to support creating widgets or components. This is jQuery UI's idea of something like a menu. Behind that $.widget is a huge world of infrastructure being created to support stamping out instances of this thing. Here's something very similar from Closure, the JavaScript library that traditionally powered almost all of Google's largest front-end products. That goog.inherits line at the bottom shows that this class really is trying to be a widget. It's trying to inherit from a control class. This is the Dojo version. It's very similar; that declare method at the top is the thing that does all the magic.
It sets up functions. It wires up the prototypes. It mixes in all the interfaces. It does all that stuff for you. But under the covers, it's the same thing. There's a function. It's got its prototype wired up. It's a class, basically, that you can instantiate, and it creates some DOM from a template and manages its internals in a componentized way. jQuery's approach hung all those custom details off the element itself, whereas almost all the other libraries we investigated did the opposite. They created a parallel tree of components which happened to have elements hanging off of them. Dojo, Closure, and YUI created this parallel hierarchy that just wrapped the DOM. Here's YUI's version, for instance. It basically takes the same approach as Dojo and Closure. Studying the landscape for a while, it became clear that what everyone was really trying to do was to create a logical hierarchy of components that didn't have all the implementation gunk in the way. Frameworks were spending tons of time and bytes on the wire to make this possible. A major challenge for companies as big as Google was that even when these tools were open source, they were also incompatible. You couldn't take a YUI widget and use it inside of a Closure application without pulling in the entire library that came with it. There was a huge sunk cost to importing any other thing into your system. You couldn't compose Closure widgets into GWT. As Google, that sucked. A lot of us were ex-framework authors, and at some point in the distant past, we'd all basically had this question at the top of our minds: why in the heck can't we subclass HTMLElement? But when you're working on a framework, this isn't really a meaningful question. You can't do anything about it. It's not actionable. Browsers are browsers; them's the breaks. You just take what you're given and you move on. But now we worked on the browser. What if we could fix it?
What if we could make subclassing HTMLElement possible? It seemed like we'd finally be able to build portable components. And if we were able to define our widgets in terms of HTML, let them integrate exactly into the DOM, we might be able to finally end some of this war between all the different frameworks that we had built or that we had discovered and studied. So to validate our assumptions, we started building prototypes. We built entire applications, and we threw away all the libraries we had used before. We got rid of them and built as little infrastructure as we needed, using tip-of-tree WebKit and V8 circa mid-2010, to see what was still missing. If we could use the tip of what the web platform had, was it all just sitting there and we didn't know it? Had we just not used our CSS rounded corners, in this case? It was fascinating to see what fell away and which parts we had to rebuild. To define menus in our apps, we still needed to contort the word function to mean class in some cases and do some nasty __proto__ wiring to convince Chrome that our instances were actually elements that you could call methods on. It kind of worked. It was definitely less messy than the other way, but it still didn't let you say what you meant. We built entire apps, including a cross between an RSS reader and Google Maps. You could see the current news if you zoomed in and search for things all across history, but if you zoomed out and scrolled down, we backfilled with Wikipedia data. It was pretty cool, but I can't get a screenshot of it, because I tried to get it working and, unfortunately, we built it on the RSS APIs from Reader. I spent a couple of hours trying to get it revived. Mihai, one of the engineers who worked on Reader, was on the team, and he plumbed a bunch of this cool stuff in. I'd like to just take a moment and pour one out for Reader. Can we do that? Just a moment of silence for Reader? All right, I'm still sad.
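The "__proto__ wiring" mentioned above can be sketched like this. This is an illustration of the general trick, not the prototypes' actual code: component methods get grafted onto an existing object by splicing a method-bearing prototype into its chain, the way pre-Custom-Elements code convinced the browser that a plain element had widget methods.

```javascript
// Sketch of the pre-Custom-Elements contortion: splice a prototype
// carrying widget methods into an existing object's prototype chain.
// (With Custom Elements today you would just extend HTMLElement.)
const widgetProto = {
  open() { this.opened = true; },
};

function upgrade(el) {
  // Insert widgetProto between the object and its original prototype.
  Object.setPrototypeOf(widgetProto, Object.getPrototypeOf(el));
  Object.setPrototypeOf(el, widgetProto);
  return el;
}

const el = upgrade({}); // a plain object stands in for a DOM element here
el.open();
// el.opened -> true, and el still keeps its original prototype chain
```

It works, but nothing about it says what you mean, which is exactly the complaint the talk is making.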
As we built these apps, we took note of every single time we had to build a little bit of library to support ourselves. It turned out that this was a great to-do list for things that we might go and add to the platform, and over the next few years we attacked those gaps. I'm proud to say that we were relatively ambitious. We were not shy. Between 2010 and 2012, we developed and attempted to advance work in HTML and DOM, JavaScript and CSS. We'd seen what happened to other efforts which tried to solve all known problems in web development using a single tool: for example, unbounded extensions to the input element, or dozens of one-off CSS properties, because when all you have is one hammer, everything looks like a nail. So we took out some wrenches. We got some saws. We built an entire workbench. We decided that we would try to target our improvements at the places in the platform where they made the most sense, and so we built a team that was broad enough to let us attack each of the problems where they lay, not necessarily where we could most easily turn the wrench. By taking a broad view, we were able to target minimal interventions in many areas of the system, each of which I hope is valuable, but which together kind of add up to Voltron. Web components aren't one addition. They're a bunch of different things that eventually end you up with something like Polymer. Peter Hall, in the hat on the right here, helped us design classes, traits and modules, and async/await syntax, for instance, and prototype it all in the first JS.next-to-JS.now compiler, called Traceur. Taking them to TC39 turned out to be pretty slow going, though, but eventually we got a lot of the things we really needed to make this better future possible done through standards. Working through standards is sort of the grinding tedium of business travel combined with high-stakes whiteboarding. It's just as much fun as it sounds.
We persisted, though, and occasionally we even won a few. Incidentally, this is Dimitri Glazkov, my engineering partner in crime in all of this and now the uber tech lead for the whole Blink project. When I originally posted this photo to Flickr back in 2011, I captioned it, "He's never going to live this down." So I feel like adding it to this presentation is just sort of keeping a promise to myself. Anyhow, it wasn't all smooth sailing. After we opened up the design process and started making public proposals in 2011, we got a lot of mixed signals from other browser vendors. The standards process isn't necessarily something that you can game. You need to actually forge agreement. But most of what we got was indifference. Browser engineers tend not to viscerally feel the problems that we feel as web developers. The grounding explorations that we were doing to understand the state of the art and where the problems lay were actually not the norm. Most browser engine projects don't do this. A lot of people are surprised to hear that browser engineers don't understand web development very well, but having worked on both sides of this line, it makes sense to me. It's not like you beat the boss at the end of the web development level and suddenly get handed a Chromium checkout, an MSDN subscription, and a copy of Effective C++. Web development and web engine development are separate skills, and we hire separately for them. The problems we were trying to solve in 2010 and 2011 were partially about speed, but also about expressivity. The problems that the front-end community is trying to solve today very much have the same flavor, except that many of us are now so up to our necks in frameworks and CLIs and tools that we can't really even see how changing something in our apps would affect the eventual outcome for our users, especially in the context of the existential threat that we face in terms of mobile performance.
Back in 2011, when we put a slide like this up, developers understood that this code didn't feel right. It was div, div, div, div, div, div, div. It was not saying what you meant. So the expressivity argument tended to work, but the performance argument has kind of always fallen flat. Desktop computers were getting faster; Wi-Fi was everywhere. Was this really a problem that we needed to solve? A problem we needed to solve right now? We tried to sketch out a simpler future, one where you didn't have to have heavyweight frameworks and build steps to enable the edit-refresh cycle that we had grown up on, the one that we were so addicted to and really enjoy. All this code came from a snippet in the slide deck we put together nearly seven years ago. And what we thought it showed was the value of being able to say what you mean: class SplitPane extends HTMLDivElement. We built ES.next-to-ES.now compilers to prototype all this and try it for ourselves, and it actually worked out pretty well. And, very much counter to today's ethos in a lot of the JavaScript ecosystem, we built all of this with the hope that we would be able to invest it all into the platform directly, and then we would be able to evaporate those tools away. We built them with a sell-by date. Traceur was meant to go away. Transpilers were meant to be an ephemeral feature, not something that lives forever in the platform. And we didn't get it all, but this code works today. Things like the property initializers, for instance, that were so nice in that last example, aren't here, and they've been stuck in committee for reasons that frankly just make me grumpy. But this code works in Chrome and Safari and Opera and Samsung Internet today, without any tools, and soon in every other browser, because the standards process allows us to forge consensus that's broader than just one platform, just one browser, just one runtime. So we started all this in the desktop era, right?
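The snippet the talk describes aloud, "class SplitPane extends HTMLDivElement," can be written out as modern Custom Elements code. A couple of hedges: the talk's slide extended HTMLDivElement (a customized built-in), while the autonomous HTMLElement form below is the variant that shipped most broadly; and since HTMLElement only exists in a browser, this sketch falls back to a plain class elsewhere. The element body here is purely illustrative.

```javascript
// HTMLElement only exists in a browser; fall back to a plain class
// so this sketch runs anywhere.
const Base = typeof HTMLElement !== 'undefined' ? HTMLElement : class {};

class SplitPane extends Base {
  // Called by the browser when the element is attached to the document.
  connectedCallback() {
    this.textContent = 'left pane | right pane';
  }
}

// In a browser you would then register it:
//   customElements.define('split-pane', SplitPane);
// and write <split-pane></split-pane> directly in your HTML.
```

That one `extends` clause is the whole point: the component says what it is, instead of burying that fact under framework wiring.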
And now we have web components. You've been hearing about them for at least one day, going on two. But we're not in the desktop era anymore. So, having finally solved all these problems, does it matter? For context, 2G connections still make up almost half of mobile internet connections. If you saw this stat in one of our slides last year, it was nearly 60%, so it's on the decline, which is good. But if you're trying to enter emerging markets, 2G is how your median next user is going to feel your experience. The situation is changing rapidly in emerging markets. Reliance Jio, for instance, is having a massive impact in India this year, but the global reality for the next billion users is that 3G is how people experience your apps and sites. For those users, the experience isn't just about the network. It's also about the hardware they have in their hands. I promised Matt that I wouldn't make this a performance rant, but we really do need to understand these devices if we're going to deliver the sorts of UI that we want to the users that we want to have experience them. Okay, quick digression. More cores is not faster. More cores is not faster. There's probably no better trade-off in mobile CPU design today than to take all the extra cores that are actually spun down most of the time and turn them into caches for the cores that are spun up. If anyone from Qualcomm or MediaTek or Samsung is listening or sees this video later, just know that we see you. The endless marketing of more-cores-is-better has run its course. The king is actually kind of naked. We need devices that have good thermals, which many of them don't. This is the Nexus 5, of course. Its CPUs are super slow for lots of reasons, and it's not just the terrible thermals.
I could go on and on about the 28-nanometer process that coincided with the switch to 64-bit and how voltage leakage kept everyone from scaling their CPUs up, but I'm not going to do that. What I will say, though, is that most of the web development community has a really long way to go in appreciating how far from good current practice really is. A user waiting on a huge pile of JavaScript to arrive and execute just to see content, or to start using it, to start tapping on it, doesn't give a whit about whether or not your developer experience was very good. Who were we doing this work for? It's a question worth answering. Kevin Schaaf yesterday emphasized webpagetest.org/easy and time to interactive, and I think that's absolutely the way forward. Accurate network simulation plus real hardware gets you pretty close to the ground truth of the experiences that we're building today and shipping to users over unreliable networks. Perhaps the most important thing for us to understand collectively, though, is that when hardware gets cheaper, it has a different effect here than it does in an emerging market. For wealthy users, more transistors every year, Moore's Law, right? Every 18 months, double the transistor count. Here, that turns into basically constant dollar or euro spend: if I spend the same number of euros next year on a phone, it's a faster phone. In an emerging market, where most people don't have devices yet, that turns into a broader market for a cheaper version of what you already have. That is to say, we don't trade transistors-per-dollar for faster phones in emerging markets. We trade it for a larger market for the same speed of phone. That creates a larger set of users carrying devices that we might charitably think of as mid-tier back in 2015. And we should expect that trend to continue for a couple of years, which sounds pretty bad.
So for example, this is the Samsung Galaxy On5. You've probably never held one of these. It's just about $100 US, or 90 euro, more or less. It has a gig and a half of RAM and about eight gigs of storage. It's pretty popular these days in India; it's, I think, the third most popular phone on Flipkart. The CPU on this device is quad-core, but if you've seen my other talks or heard me rant just earlier, you'll know that more cores doesn't actually mean a lot. What's interesting to me about this device, though, is that it runs the latest Chrome. The hardware might be stuck in 2015, but the platform doesn't have to be. So how slow is a $100 phone? Well, here are some Geekbench 3 scores for Android's ecosystem of system-on-chip packages. The old Nexus 5 that used the Snapdragon 808 is down sort of toward the bottom, as you can see. So where's that Galaxy On5? You kind of have to scroll a while. Oh, yeah, there it is. But you know what I'm not bummed out about? Our thesis from 2010 was right. We can use platform evolution to preempt heavyweight user-space apps and frameworks and get some of this back. We can dispense with tools that only work for wealthy users. And thanks to platform evolution, we can level the playing field for everyone. You all know about the Shop app by now. It's a beautifully executed example of what's possible with Polymer. But what's interesting to me about it is that it demonstrates how this progress plays out and how we can cope with the next few years of evolution in the marketplace as we're waiting for phones to actually get faster. We don't have to drown anything in JavaScript. Here's Shop using Polymer 1.7; I dug it out from an old checkout. It loads something like 340K of resources total. When served well, it can be super snappy. It can get to time to interactive on a Moto G4 on a 3G network in something like five seconds, which is, I think, the gold standard for first load.
But here's the unbundled version using Polymer 2 and ES6. It's exactly the same functionality, but it's 60K smaller. What changed? Well, it's able to use the Custom Elements v1 APIs, and we're able to serve just the things that this browser can support. For users on slow connections and flaky networks, this is huge. Shop was fast already, but every kilobyte we lean on the platform for instead of sending down the wire really pays us back, and it pays us back in terms of reach over the next few years. Polymer is delivering on the promise we believed in back when we started the project, but in a context that we just didn't see coming. Web components, it turned out, were the answer to a question we didn't know to ask. When Darin and Alex and Matt and the rest of us all bet on this effort back in 2010 and 2011, ES6 and web components were sort of something we just were gonna figure out, something that we kind of knew were gonna be important, but we didn't understand why. But we got lucky again. When we started ripping elements out of Gmail, we didn't know that it was gonna make anything faster. Turned out it did. When we started upgrading the platform, we knew it would make it nicer to use, but we didn't know how much faster it could make things. But now we know. We're in a special moment right now. Smartphones are opening up computing to people who've never had access to it before. And the web, I believe, can be the single best way to deliver experiences to those users if we don't drown it in JavaScript. The web's evolution is freeing us up to attack new and harder problems further up the stack. That's great news for us as developers, but it's mostly great news for our users, if we use the platform. Thanks for having me, and thanks for making your sites fast and small. Thank you, Alex. All right, so we're ready for our morning break. I just have a couple of quick announcements. So there will be food and refreshments across the way.
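To make the unbundled idea concrete, here's a rough sketch of capability-based loading: give browsers that already implement Custom Elements v1, Shadow DOM v1, and ES modules the lean build, and give everything else a compiled, polyfilled bundle. The bundle URLs and the exact capability checks are illustrative assumptions, not Shop's actual serving logic.

```javascript
// Decide which build to load based on what the browser already implements.
// The bundle paths and the capability list are hypothetical.
function pickBundle(caps) {
  const nativeSupport =
    caps.customElementsV1 && caps.shadowDomV1 && caps.es6Modules;
  // Browsers with the platform features built in get the small build;
  // everything else gets the compiled, polyfilled bundle.
  return nativeSupport ? '/app.unbundled.js' : '/app.compiled.js';
}

// Browser wiring (skipped outside a browser environment):
if (typeof window !== 'undefined') {
  const caps = {
    customElementsV1: 'customElements' in window,
    shadowDomV1: 'attachShadow' in Element.prototype,
    es6Modules: 'noModule' in document.createElement('script'),
  };
  const script = document.createElement('script');
  script.src = pickBundle(caps);
  document.head.appendChild(script);
}
```

The point is that the decision happens once, at load time, so capable browsers never pay for the polyfills at all.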
Just a reminder, there's the Ask Polymer Lounge where you can get your questions answered. It's staffed right now, and we'll be back here at 11:30. So one quick announcement: we're gonna be doing a panel later on. This is everyone's favorite chance to put us on the spot. Give extra hard questions this year, because I'm not gonna be on it or moderating it. To ask a question, we'll have microphones here, but also, if you have any questions you think of in the meantime, tweet them with the hashtag #AskPolymer. That way it doesn't get confused with the other summit stuff. So as you can tell, we're big into use the platform. A lot of times that gets misconstrued online. If you really wanna know what it means to us on the Polymer team, go to polymerproject.org/about; we've got a nice article there about it and what it means to us. To find out what the web platform means and how it's made, please welcome to the stage Mariko. Hi, I'm Mariko. This is my Twitter handle and GitHub name, and pretty much me everywhere on the internet. I'm from web developer relations at Google, but it's a relatively new job for me. I joined about five months ago. I was a web developer before that, so it's kind of like a new field for me right now. So events like this are great. Not only do I get to meet you all, who use the platform, but also I get to meet the team who's building it, and a lot of people who work on the Chrome stuff. So, you know, I go to people and ask, like, hi, I'm Mariko. I'm new here. I recently joined. What do you do? And then I get the reply, like, I'm on the platform team, and, you know, okay, cool. And then I go to my email and I get an invite, like, web platform meeting. Cool, I'm gonna go to that platform meeting. And then in that meeting, the conversation goes like this: like, well, so, like, X, some framework, uses web technology, but it's not the platform, or, like, from a platform point of view, blah, blah, blah.
And then also there's a whole hashtag, #UseThePlatform, which, you know, I'm sure you've seen, like, yeah, banners. So platform, platform, platform, platform, platform. So, like, do you know those souvenirs that you get at a tourist destination, where it has all the city names, on, like, tote bags and everything? So if I were to design swag for my five months of experience at Google, this would be the textile design. Um, so, web platform, right? To me, this keyword is kind of like, sort of, like, a programming technology that I use, like, you know, one of those things that I kind of know what it is and, like, copy and paste it and use it. But as soon as somebody challenges you, like, so explain to me, I don't know, JavaScript promises, I'm going, like, I use it, but I can't really explain what it is. Platform, to me, is like that. So I tried to find an example of, like, me kind of pretending that I know. So this was from March. Somebody asked me, like, I don't really understand what Node.js is in relation to V8. So I made this diagram, and the diagram itself is not important. But then, like, zoom in. I used the words "web platform," even though I had no idea what the platform was. I just, like, titled it "web platform." So it got me thinking, like, I don't really understand what the web platform is. And now I'm the kind of person, kind of like a toddler, who asks everything and questions everything in the universe, like, I must know everything. So I went to people, engineers on the team who work on the platform, and asked them, like, tell me, what exactly is the web platform? One answer I got was, like, things you build, like, websites and apps, out of the languages of the web. And I'm like, okay, it's like, I don't know, building blocks, Lego, I don't know. Another one I got was a set of interoperably implemented technologies available to developers across the web browsers. And I'm like, that's great, I have no idea what you mean. So, platform, right?
And then I got kind of, like, I must understand this, and dove into it. The thing is, in my past life, before becoming a developer even, I worked on a platform. I was in the software platform industry, and I would go to clients and tell them, like, you should join the platform, build a product on the platform. I was at a company that makes a video platform. And then I'd go back to the office and talk to engineers and product managers and other stakeholders about, like, what should be in the platform, what kind of APIs should we expose, and, most importantly, how are we gonna make money from selling this platform? Because in 2008 and '09, like, SaaS, software as a service, was the one thing, one wave, and then, like, platform as a service was the thing to do. So I know what a platform is in the software sense. But when I started web development, nobody called me or emailed me saying, like, you should use the web platform, and nobody charged me money to start writing HTML, right? I just started, like: opened up the text editor, wrote the code, FTP'd it to the server, and then, voila, there was a website. So what is the web platform? Because, and I don't even remember who told me or when I was told, but there was this, like, general understanding that the web is open. It's the open web platform. So I was like, okay, that's a keyword I can Google. There is a definition on the W3C wiki, which is: the open web platform is the collection of open, royalty-free technologies which enables the web. So let's dive into this sentence and what it means. So first of all, open, royalty-free. Almost anything that goes into the web platform is done and logged in the open. And what I really mean by open is that you can find a conversation from as far back as '91, logged on the internet, discussing what the World Wide Web should be. My favorite part of this log is that the first email in it is titled "Test Again," from Tim Berners-Lee, inventor of the World Wide Web.
And then he says, like, "Test again." Clearly he sent these messages before, because it's "again." And then, like, "if you got this, delete it, sorry." But this is the one that did not get deleted, and now it's logged forever. Well, joking aside, if you go through those threads, there's, like, really interesting history and artifacts of how the web was formed. So this thread, I think, is particularly interesting. It's Marc Andreessen, who was at the time working on the Mosaic browser, proposing the image tag. And it goes, like: I'd like to propose a new optional HTML tag, IMG. The required argument is SRC="url". And if you read through this thread, there is a conversation about: do we even need to embed an image in HTML? Should we just have a link to the external resource and not even display the image? Can we use other tags that we were considering instead of IMG? And it's, like, a whole bunch of discussion. And it ends with Marc Andreessen going, like, yeah, that solution doesn't really work for us. I don't think that works for us. By the way, we already wrote the code and we are shipping it. So that's the open part. The discussion is done in the open. Nobody is trying to say, like, whoa, my HTML is going to have embeddable images, and I'm going to charge money for it. No, like, the ideas and technologies and how the APIs should be are discussed in the open. Another part is that it is a collection of technologies that enables the web. We looked at the image tag, for example, but it's not only just defining HTML tags. So what kind of technologies do we have on the web platform? Well, the web is an evolving ecosystem. So one example is pointer events. We used to browse websites with a keyboard and mouse, but now we browse on these devices, and we use our fingers, and two fingers. So we needed a way to address those interactions on the web, and pointer events were added.
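As an illustration of the pointer events idea: one listener can cover mouse, touch, and pen input, with the event telling you which it was. This is a small sketch; the helper function is made up so the mapping is easy to see in isolation.

```javascript
// A single handler covers mouse, touch, and pen via Pointer Events.
// describePointer is a pure helper, so the mapping is testable without a DOM.
function describePointer({ pointerType, pressure }) {
  // pointerType is "mouse", "touch", or "pen" per the Pointer Events spec.
  if (pointerType === 'pen') return `pen (pressure ${pressure})`;
  return pointerType || 'unknown';
}

// Browser wiring (skipped outside a browser):
if (typeof document !== 'undefined') {
  document.addEventListener('pointerdown', (event) => {
    console.log('pointerdown from', describePointer(event));
  });
}
```

Before pointer events, covering the same inputs meant juggling separate mouse and touch event models.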
Another one: the web used to be just documents that we shared across the internet, but now we make applications. And a web application might need access to the local file system, maybe, so that you can upload an image and make something like Instagram. So the FileReader API was added. This is an API that is near and dear to my heart. It's the API I used for the first application I wrote on my job, not on the side as a hobby, but the first application I wrote on the job. I used this FileReader API because I had two weeks of off time. I wasn't a web developer yet, I was on the business side, and I had a week, two weeks of off time while the client wasn't sending me assets. I was like, well, I learned JavaScript recently, and I know how to console.log, and I know HTML. I'm going to try to make this little tool that's going to help me do this analytics thing. And what I used was FileReader, to load all of the data, the attributes that I needed, to automate my job. When I did this, my company did not buy me a license. I did not ask for money to start making the thing. I just opened up my text editor and did it, and that's great. So: a collection of technologies which enables the web. And this definition is a lot closer to the description of the platform I got from the engineer, a set of technologies for the web. But that's only half of what he described as the web platform. There's a whole other part here, which is interoperably implemented across web browsers. So I went back to the same engineer who said this and asked him, like, so how do browsers interoperably implement things? And his answer was, well, we used to just copy each other's bugs, but now we write specs for standards. And that's the way we make the platform. And Alex, the speaker before this, kind of talked about how that came about. So there's that.
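The FileReader workflow described above can be sketched in a few lines: read a user-selected text file locally and pull something out of it, no server and no license required. The input element's id and the line-counting logic are made up for this sketch; the original tool's analytics were surely more involved.

```javascript
// Read a user-selected text file with FileReader and count its non-empty
// lines, roughly the kind of local data automation described above.
// countLines is pure, so the logic works without a browser.
function countLines(text) {
  return text.split('\n').filter((line) => line.trim() !== '').length;
}

// Browser wiring (assumes an <input type="file" id="data-file">;
// the id is invented for this example):
if (typeof document !== 'undefined') {
  document.getElementById('data-file')?.addEventListener('change', (e) => {
    const reader = new FileReader();
    reader.onload = () => console.log('rows:', countLines(reader.result));
    reader.readAsText(e.target.files[0]);
  });
}
```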
But behind every web API and platform feature that we use as web developers, which we might encounter on MDN or in any tutorial that gets published, is a thing called a specification document. And it looks like this. And if you go in, here's the custom elements section, where it defines the element. It goes into detail: what to look for, when to look for it, what to check for errors, and what kind of error to throw when. It goes into step-by-step detail of how this API and this feature should work. And you might have ended up on this page when you were trying to search for some tutorials, and then you might have looked away, because I did. I certainly remember this design, with the blue bar, sidebar thing, but I don't ever remember getting any information from this document that I could use on a web development job to finish a PR. I was just like, ah, that's not for me. I'm just going to look away. While I was having these conversations, another colleague of mine asked me an interesting question. He works on standards at Google. He asked me, so, I have a question. Why don't you just contribute to standards? And my answer was: that's an interesting question. I have never thought that was possible, because somehow I thought standards were like a mystical creature, where W3C and Ecma hand me this golden document that we can use. And I never thought that there was somebody, some human close to me, writing these things. So I told him that I just never considered it as a thing that I could do. And he was like, oh, that's interesting feedback. Glad to know, because we never knew you thought of it that way. And he told me, by the way, here's the way to contribute to the spec. And it turns out it's a lot closer to how we make web applications and how we do software development. So let's look at how specs get worked on. Let's say, for example, you want to edit or add something to an existing spec. What does it take?
Here's the HTML spec, which is defined by the WHATWG. And here's a link to the document, the one that I felt was not for me. But all of this is hosted on GitHub. And there is an issue tracker that has all of the things that need to get updated. It even has a label for good first bugs. And it's interesting, because this spec is English text. So when they say bug, it means: fix this English text. So if you look at the PRs made against this HTML repo, you find things like this: "Explicitly mention possible values for imageSmoothingQuality." This was one of the good first bugs. It just means that somebody noticed that the spec text was a little ambiguous. Where it talks about the image smoothing quality values, it didn't say what kinds of quality are possible. So this edit just adds three more words saying what kinds of image quality there are. And now the person who made this PR is a contributor to HTML. Much like we write tests for software, the web platform and its specs have tests, called web platform tests. And if you dig into this repo, you can find how each of the web platform features should behave. And you can read through what each of them should do. And I'm kind of the person who, sometimes when you don't really understand a bug, you read the test and then see, like, OK, this is what it's supposed to do, and here's what I need to fix. I really enjoy looking at web platform tests and learning how things should work. But what if you have a new idea? What if you're working for a company, or on a project, that is completely outside of what we consider the web right now, and you need something standardized? How do we go about that? I'm going to show the example of how to do it at the W3C, but many of the standards bodies follow the same steps. There is a thing called the Web Platform Incubator Community Group, WICG, at the W3C. And if you read the announcement, which is a fairly recent thing, an announcement from 2015,
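As an aside on that imageSmoothingQuality example: the three values the spec enumerates for `CanvasRenderingContext2D.imageSmoothingQuality` are "low", "medium", and "high", and assigning anything else is simply ignored. A small sketch (the helper function is invented here to mirror that enum behavior for illustration):

```javascript
// The possible values the spec edit spelled out for
// CanvasRenderingContext2D.imageSmoothingQuality.
const SMOOTHING_QUALITIES = ['low', 'medium', 'high'];

// Invalid assignments to an IDL enum attribute are ignored by the browser;
// this helper mirrors that behavior.
function applySmoothingQuality(current, requested) {
  return SMOOTHING_QUALITIES.includes(requested) ? requested : current;
}

// Browser usage (skipped outside a browser):
if (typeof document !== 'undefined') {
  const ctx = document.createElement('canvas').getContext('2d');
  ctx.imageSmoothingQuality = 'high'; // one of the three enum values
}
```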
it specifically says: we will develop specs and use case documents like we develop any open source software. So how do we do that? WICG has a Discourse, and there's a variety of things and features being discussed there. So let's say you have an idea, a new API that you want to bring. You create a new thread. Here's one from Surma, who is speaking later today, making a proposal. Usually those threads come with a little bit of an explainer. So on GitHub, there's an explainer, like one markdown file, that says: here's a problem we have, and here's a potential solution that we want to try with this new API. If you read these threads, they go into people asking, like, well, what do you mean by that? Can you clarify your question? Is it really a valid problem? Or other things like, can't you just solve this with this other thing that already exists in the web platform? And those discussions happen. Once the discussion matures and the community agrees that this is actually a problem that we need to solve, and that it cannot be solved by existing technology, so we should make a new thing, then it moves on to the next step. So if you are curious about those specs and explainer docs, WICG, the Web Platform Incubator Community Group, has a GitHub org, and you can browse through all of the repos with all of the explainers. It's kind of a good place to learn what kinds of new things, like WebUSB, are being considered for the web platform. Once that discussion matures, they do a thing called an intent to migrate, which means: we've identified the problem, the community agrees that we should work on it, and we want to formalize the process. And it moves on to what is called a working group. A working group is kind of like a subset within the W3C that works on specific features and discussions, and it brings a lot of stakeholders together to discuss how we can actually make interoperable technology. So CSS has a working group, Service Worker has a working group, and working groups usually have GitHub repos.
And you might think, well, we already had a GitHub repo, so what's the difference? Well, the repo is now under the W3C org, and there is a whole bunch of process to move the spec along in order to become an official spec, or recommended spec. So there is a document that feels like a gazillion pages describing the process. If you're interested, you can scroll through, and you can find things like: OK, if you have a spec, it goes from working draft, WD, to CR, to PR, to recommendation, and you can read all the way through that. So once you've identified the problem, and agreed that we should actually work on it, there is a whole other set of process to bring more stakeholders together and make sure that the specs get done. So if you're curious, when you're browsing MDN, you can see what specification a platform feature belongs to, and what step in the spec process it is at. It's in a table on MDN, and it's kind of fun to look at. So as I said, I talked about the W3C case, but generally the other standards bodies follow the same thing. Somebody has an idea, and that somebody starts the conversation: hey, I have this problem. I want to solve this. What do you think? And people say, I also think that's a problem. That conversation happens over one and two and many cycles, and then, if they agree that, yeah, we should actually invest the time and bring the stakeholders together, it moves to slightly more organized steps, and then they do another set of discussions around it. And once that's done, it's ratified, and now everybody's happy and goes to party. So I asked the same engineer who told me about the web platform: how many standards go into making a web browser? And his answer was, all of them. There are a lot of standards bodies specifying many bits and parts of the platform. The W3C has CSS and a lot of web APIs. The WHATWG defines DOM and HTML.
Ecma, specifically Ecma Technical Committee 39, TC39, is the one that defines JavaScript. Unicode, most importantly to me, defines emojis. And you can propose new emojis to Unicode; there is a process for that. And the IETF has things like HTTP. So before going into this, I felt like spec writing was a process for some other person, somebody who is definitely not me, working on specs. But after this, I came out realizing that, well, actually, many parts of standardizing the web platform and defining what the platform should do look like something that I do day to day. It looks a lot like software development. But I warn you, it takes time. And sometimes it's a little frustrating. I'm sure, we're at a Polymer Summit, and we get frustrated: ah, is the spec ready? Is it recommended already? Like, why is it moving so slowly? And there is a reason for that. The web platform is a platform built on many different environments. Since joining Google, I got the opportunity to attend one of those standards meetings, and I witnessed an interesting conversation. So there was a conversation about an internationalization API. And one browser was saying, well, our browser is built on top of a specific architecture, a specific OS, that does not use this internationalization library, which the other browsers use. Two other browsers shared the same library. So their point was, well, you two might think this is easy to implement because it's in your library, but it doesn't work for our architecture. It doesn't work for our OS. And it's really hard for us to implement. And those discussions, making sure that it is interoperable, take time. Another thing is that the web is a really unique ecosystem. You have to support HTML from 20 years ago, and you have to think about supporting HTML 100 years from now. You know, we can't just say, like, oh, excuse me, sorry, in order to use Polymer 3, you need to install Web 2017. That's not how it works. It has to work backward-compatibly.
And it also has to work in the future. So, a little story time: when they were considering backward compatibility and future-proofing, what kind of conversations happened at the standards bodies? So in 2014, one of the engineers sitting next to me saw Rob's talk from Google I/O. And he was really excited and told me: you need to try this thing called Polymer and use web components. This is the new and shiny thing. You should do it. And I was like, OK, great. I can write my own HTML tags. I want my HTML to look like this. I was on the business side, so I was thinking about a data dashboard. And this looks pretty. I can define my elements and make my thing. Well, my first prototype didn't work, because I didn't hyphenate the tag names. That's part of the spec, and so it doesn't work that way. I had to make it my-dashboard and bar-chart and hyphenate the names. And I was like, oh, like, why? I had this specific markup in mind that I wanted to write. I was kind of frustrated that Polymer didn't let me do it. At the time, I thought it was Polymer that didn't let me do it, because I didn't know anything about the web platform. So that was my frustration. Two weeks ago, I was in a conversation with an engineer who was in the meeting when they decided that custom elements should be hyphenated. It was an enlightening experience for me. So when they were discussing, OK, let's have this thing called custom elements, and developers define their own tags, basically, how should the syntax be? And this engineer who was at the meeting told me they were like, OK, well, we need to define names in a way that doesn't conflict with existing HTML, or with HTML tags that might come in the future. So we need to namespace it. Well, what about namespacing with a colon? Turns out XML uses that syntax, and SVG uses XML syntax. So that's a no-go, because it conflicts with, what, SVG? So colon, no. Another idea was, what if we just prefix everything? So x-custom-element or something.
But, like, markup like that kind of looks ridiculous. So they were like, oh, we really don't want to do that. So in the end, they settled on simply requiring a hyphen in the custom element name. And everybody's happy, and that's how we code now. But this is so much more than just having neat-looking markup. When the specification decided that custom elements are going to be hyphenated, that means they also decided that in the future, when new native HTML elements are considered, those will not use hyphenated names. Native elements will always be non-hyphenated, because a future HTML element with a hyphen would conflict with custom elements. So these are kind of scary things to discuss, and you have to be careful about them, because HTML, I think, will outlive me by a long way. And you have to take care of what will happen in the future. So it takes time. So hopefully you kind of got the idea of what the platform is and what kind of discussion goes into it. And I admit, as a web developer myself, sometimes it's frustrating. I'm like, why doesn't this work this way? And I go on Twitter and I'm just like, I hate this thing. Why? But I hope you got the idea of where you should go if you have those frustrations. There are community discussion forums you can post to, and many of the engineers and people who work on the standards at this conference are more than happy to talk about why certain things that you're unhappy about happened that way. And I specifically want to thank the people who answered my most mundane questions about, I don't understand the platform, I don't understand what the web platform is. So thank you very much. And thank you very much to the Polymer team for letting me share what I discovered. And thank you very much for listening to me. I think between Alex's pictures from a long time ago and the fact that I was in that meeting about the hyphens, I'm feeling kind of old today.
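For reference, the hyphen rule described above can be approximated in a few lines. This is a simplified sketch of the spec's "valid custom element name" check, not the full rule; the real definition allows a wider range of characters, but it does reserve a handful of pre-existing hyphenated names from SVG and MathML, as shown here.

```javascript
// Hyphenated names that predate custom elements (from SVG/MathML) and are
// therefore excluded by the spec.
const RESERVED = [
  'annotation-xml', 'color-profile', 'font-face', 'font-face-src',
  'font-face-uri', 'font-face-format', 'font-face-name', 'missing-glyph',
];

// Simplified check: starts with a lowercase ASCII letter, contains a
// hyphen, and isn't one of the reserved names. The real spec rule admits
// more characters; this covers the common case.
function isValidCustomElementName(name) {
  return /^[a-z][a-z0-9]*-[a-z0-9-]*$/.test(name) && !RESERVED.includes(name);
}
```

So `customElements.define('my-dashboard', …)` is allowed, while defining plain `dashboard` throws, exactly the frustration described above.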
So when we were putting together the schedule for this summit, one of the things we really wanted to do was make sure we had a lot more members of the community speaking themselves. That's why we did an open call for topics. And we didn't just want people from the Polymer community. We wanted people from other frameworks and other libraries that are using web components now as they get more and more popular. So today I'm very excited to announce the next speakers, Max and Adam from Ionic. Thank you. So today we're going to be talking about Ionic moving to web components. We're super excited to be here, especially as kind of a late addition. So if you don't know what Ionic is, we'll talk about that first. But before we do that, a quick little intro about ourselves. I am Max Lynch. I'm the co-founder and CEO of Ionic. I'm also one of the co-creators of Ionic Framework, the open source project. And I'm Adam Bradley, lead developer of Ionic and also a co-creator of Ionic. So really quickly, if you're not familiar with Ionic, Ionic is basically a cross-platform UI component kit, initially kind of focused on building native-style apps for the app store. So people used to call this, and they still do sometimes, like Bootstrap for mobile. And if it's helpful to think about it like that, that's really what it is. It's an open source, MIT-licensed, cross-platform UI kit. So you can build iOS and Android apps. That's really kind of our bread and butter. You can build Universal Windows Platform apps. But increasingly, people are building progressive web apps, responsive web apps and Electron apps, all with the same code base. So the write once, run anywhere thing that's always been kind of elusive, that people still laugh at, a lot of our developers are actually doing today, because it's just based on the web, and the web runs pretty much everywhere. So we're 100% based on web technology: HTML5, JavaScript, CSS.
And if you're used to building mobile apps, one thing people really like about a web approach is that it's just really easy to customize. CSS and HTML are some of the easiest things to style. And so we're proud that we're based 100% on web technology. Ionic's one of the top open source projects on GitHub, and we've had millions of apps created. Some you might use, like Dow Jones MarketWatch and Untappd, which is a social network for people who like to drink beer. And then one thing we're seeing a lot more of that I'm really, really excited about: big enterprise companies are starting to build a lot of apps internally, and Ionic is powering a lot of these. So we're really, really excited about that. They're starting to do their own mobile development, and it's really great. Finally, there's a real business behind Ionic. We build services and tools for app development. If you wanna learn more about that, check us out at ionicframework.com. So real quick, we wanna go over a brief history of kind of how we got here today, and why Ionic, given the decisions that we made, is the way it is. So this was our original goal. It goes way back to when we first started: we really wanted to let developers focus on building their applications and not so much the details. So in this case, we want to enable developers to just add a toggle. Not worry about how a toggle works, how to add gestures, what happens when it gets all the way to 50 pixels to the right or left. Rather, we just want the developer to think, I wanna add a toggle there, because I'm focused on my own app. So if you look back, we started this in 2013, and Alex had a great talk earlier kind of covering the details of how difficult this was to do. And I will freely admit that in 2013, I had never heard of a thing called a web component. But where we were at was that we were largely using jQuery.
And jQuery kind of was great, but it also required a lot of kind of manual setup and things like that. And Alex's talk did a really great job of showing that. So it really wasn't the ideal solution. So then we came across Angular and Angular directives. And personally, I was blown away: it was exactly what we were looking for. So with Angular directives, we were able to make components and provide a large UI library to all of our users, so that they could have this cool UI library, pick and choose which components they wanna use, and build large applications with it. Again, meeting our first goal of letting developers build large applications. And so we fell in love with AngularJS from the start. But then also, if you look at our very first commits, we had a bindings directory. And in that bindings directory, we had AngularJS, Backbone, Ember, and jQuery. And a few others were in there. So we were kind of naive and kind of ambitious to think that, well, we'll just make a binding for all of them, because it's gotta be that easy, right? Within a week, we quickly realized that's not gonna happen. So we decided to stick with AngularJS, but it was kind of eye-opening for us: how difficult it is to build components for just one framework, let alone many of them. And if you were to fast forward to today, you'd have had to build bindings for another 50 frameworks. So fast forward to today: I still love Angular. I'm personally a huge fan of it. I have nothing but the utmost respect for the core team and what they've brought to the web development community. But the reality is that Angular is not the only awesome framework right now. And we've moved from a "which is the best framework" conversation to, now, "which framework do you prefer?" Because in reality, they're all great.
So it kind of comes down to the team, or the developer, and their choice of which framework they wanna use. So I think you can see where I'm going with this: as web components emerged, as we realized they're actually a viable solution, we started to question, maybe this is the way to meet our original goal of writing one component and having it work everywhere. So this is what we're here to talk about today. So that was kind of our journey to discovering web components and actually considering them a viable option. But around this same time, progressive web apps really emerged on the scene. And considering Ionic's goal was really write once, run anywhere, kind of a UI framework for building apps on all platforms, it was obvious that we were gonna jump right in and embrace progressive web apps. And our early experiments were actually really, really disheartening for our team. We initially focused, and still do to this day, on Cordova apps in the app store. And in a Cordova app, all your code is bundled next to your app in the same local domain. So there's no network latency, there's no 3G to worry about. And when we took that approach of building with a framework like Angular and bundling all our components together, we were seeing really, really bad results when it came to progressive web apps. We're talking scores of 73 on Lighthouse, and on 3G emerging-market tests, 13 seconds time to interactive. It's just not even usable. And we were shipping over a megabyte of code for the first view. And the worst part about this, which Alex has mentioned a lot, and it's totally true: we couldn't solve it. We hit an engineering wall. Our approach of bundling all these components up and then trying to pull them apart and do code splitting once everything was already bundled together was just not working.
We spent a lot of time, a lot of effort, and frankly a lot of money trying to figure it out, and we couldn't. So we took a step back. At the same time, we had bet the farm on Angular, and what had happened in the front-end space is that frameworks really proliferated. Angular had a lot of usage, but so did React, and so did Vue, and so did Ember. I'm sure if you asked individual developers of these frameworks, they'd claim theirs is the only game in town, but if you look at the data, that's not at all true. So Ionic was looking at this fragmentation and saying, well, we're having issues with framework performance in general, but if we want to support these other frameworks, now we have to go and build bindings for each individual framework. It just wasn't going to happen, and we felt like we were stuck. But then we started experimenting with web components. We started taking our Angular components, pulling them out, and porting them to vanilla web components, and it was actually really, really promising. They ported over really well from Angular, too, and I give the Angular API a lot of credit for being similar to web components. And our first results were really, really encouraging. Our 3G times to interactive were four to six times faster: 2.78 seconds, compared to 11 and 13 seconds before. We were shipping ten times less code, especially for the first view — we're talking 39 kilobytes compared to 422. That's a really big difference, and that 422 was a highly optimized Angular Ionic 2 app. And then we had a build step, because we use TypeScript, and it was considerably faster: about 3.87 seconds for production builds compared to 50 seconds. So not only were components loading faster, but the developer experience was better. But we still felt like something was missing from these vanilla web components.
We really, really love traditional frameworks, and we think they've given developers some really great tools: things like reactive data binding, virtual DOM, support for and embrace of TypeScript, and some key innovations like React Fiber, where the approach to performance moved from "how quickly can I synchronously render 10,000 items?" to "how can I help the browser not lock up its UI thread and do less work?". We felt those were key innovations that web components really needed to stay competitive. And finally, we wanted to use JSX — we're excited about some of the template literal stuff as well — and we knew that server-side rendering and precompilation had to work out of the box. So we took our promising results from going to vanilla web components, along with the desire to bring back these framework features that we really loved and missed, and decided to see if we could combine them. And so today we're really, really excited to announce a new project we've been working on — really an artifact of our own efforts — that helps developers build faster web components with the framework features they want, but in a 100% standards-compliant way. So today we're announcing a new project. We're calling it Stencil. Stencil is a compiler for web components, which Adam will talk about in a bit. It's still super alpha, but we are using it for a lot of stuff internally. We created it for ourselves, we're using it ourselves, and it's going to power the next version of Ionic. So we're really excited about it, but keep in mind it's still alpha, and we'll have links at the end. And this is really Adam's brainchild, so I should let him talk about it. Thanks, Max. So Stencil is largely a compiler for web components, and what it builds is optimized custom elements. The big thing I want to get across is that this is not yet another framework. There is no Stencil framework.
It's not window.Stencil adding a bunch of stuff and a Stencil API. Rather, it uses a build-time compiler to build just vanilla custom elements that work in all browsers. That was key; we really wanted to focus on that. With that compiler, we're also able to enable virtual DOM, server-side rendering, precompilation, asynchronous rendering, and reactive data binding. It's absolutely been inspired by the best parts of Angular, React, Vue, and Polymer. And again, as Max was saying, it's based on TypeScript — honestly, we use TypeScript because it's made us really productive in building applications and building Ionic itself. We use TypeScript because we enjoy it; the side effect is that everyone else gets the typed library that we have, but the real reason is that we prefer it. We also use JSX along with that. And I'd also like to add that it's MIT-licensed. With our goal of producing just vanilla custom elements, we really wanted to make sure there were no external dependencies required to use Stencil — Stencil is not a framework, it's just the compiler. And by removing every single external dependency and producing just a vanilla custom element, we're able to work in all frameworks, which was our original goal. So if we go back to what we originally wanted to do in 2013 — just make it easy for developers to build applications — fast forward to today, and there are many ways to do it in many different frameworks. So we feel like this meets our original goal, and we're really proud of that. On top of that, Stencil can be used to create standalone components, or it can be used to create entire applications. And it's been working great with all frameworks, with one caveat: we've had issues with React, in that React hasn't been working the best with web components.
So that's an area where we would love to work with the React community on improving things moving forward. The way I like to think about Stencil is that it pre-bakes custom elements — pre-compiles them, whatever you want to call it. It takes a lot of the concepts we've come to expect and like about frameworks — virtual DOM, lazy loading, change detection, all that stuff — and a simple compiler takes those concepts and builds custom elements with some of that baked in. We call it Stencil because it's just a tool to help you stamp out these components; it gets out of the way at runtime. All right, demo time. We talked a lot about asynchronous rendering, JSX, and all of that, so we wanted to show something off here. A few months ago at a React conference, they had an amazing demo where they showed this. This is React version 15. What's going on here is that there's a single node at the very top being updated with a new number every second, which passes all that information down to all these child elements, and so on and so forth. And every single frame, each one of those circles is being animated, and every single one of these circles has a mouse-over and mouse-out event on it. React 15 is no joke, but this is a very difficult app to pull off, because you've got over 700 nodes being updated every single second. The reason you see it jank a little bit is that the browser is freezing while trying to make sure all 700 of those nodes get updated. And if you've heard of React Fiber, it's solving exactly this problem. React Fiber is the next version of React; it's in beta right now.
And we read all about it, and we were really intrigued by how they're solving this: instead of trying to synchronously update every single node, it puts updates into an asynchronous queue, which lets the browser manage that work. So we were curious — that sounds awesome, React; we also have asynchronous rendering, so can we do that with Stencil? And we're really thrilled to show — we were blown away by the first refresh — that this thing is pretty darn smooth. It's updating just as many elements; it's the same thing, with all the properties being passed down. So it's proving that this tiny little web component we have here, which is just vanilla and not dependent on anything else, is just as performant as React Fiber, the next version of React. We're really proud of the performance we've gotten from this. So, back to the slides. That was a quick demo of the performance of the asynchronous rendering engine. But we've talked a lot and haven't really gotten to any code, which is the best part, right? So I want to talk through what a simple Stencil component looks like. The cool thing about Stencil is that this is pretty much the entire API. It's very small; there's not a lot of surface area to it. If you know ES6, if you know TypeScript, it's pretty easy to understand. So this is an example of a Stencil component: a simple ES6 TypeScript class. We use decorators to indicate that this is a component; we specify the tag name, and we can also link a stylesheet — we support Sass or vanilla CSS out of the box. Inside the class itself, you'll notice this @Prop decorator, which basically indicates that this class member is going to be filled in from a property on the actual component.
So someone above this component would write name="..." or age="...". We have typing information, so we know what the type is going to be — Stencil is based on TypeScript, and we use this type information to our benefit. Then, inside the render function, which you've seen a few times today, is where we feel things are moving. We compute the next view using JSX, though in theory we could use template literals or anything here, and we use the values of this.name and this.age. The cool thing is that if name or age changes, the component knows how to redraw itself automatically. There's also another decorator besides @Prop: @State, for internal state. It's a very explicit way to indicate that a member variable is going to change, and if it changes, we need to redraw. So that's what a component is, and underneath the hood we generate a simple custom element from this code. There's really not a lot of surface area: if you know this little API, you know how to use Stencil. Also, because everything is broken apart and asynchronous, one thing we wanted to make sure we pulled off from the very beginning is that all components are lazy loaded. A challenge we've had with existing versions of Ionic is that if you wanted to use a checkbox, you needed to ship everything. That works well in the Cordova world, but moving forward, it really doesn't work well in a PWA. So we wanted to make sure that if you only use a toggle, that's the only code you download. Out of the box we enabled this, while also meeting our goal of not having external dependencies. Webpack was our challenge: could we do this without Webpack, since that seems to be the standard right now? And sure enough, the way we do it is that we register just the base information about a component.
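The reactive, asynchronously batched rendering model described above can be sketched in plain TypeScript. This is not Stencil's actual implementation — the real library uses decorators like @Prop and @State plus JSX, and the names below are made up — but the sketch shows the core idea: setting a prop marks the component dirty and queues a single re-render, so several synchronous property writes produce one redraw.

```typescript
// Sketch: reactive props whose changes are coalesced into one re-render.
// Hypothetical names; real Stencil uses @Prop/@State decorators and JSX.

type Task = () => void;
const renderQueue: Task[] = [];

// Stand-in for the browser's asynchronous scheduling (microtasks / rAF).
function flushRenderQueue(): void {
  while (renderQueue.length > 0) renderQueue.shift()!();
}

class GreetingComponent {
  private _name = "";
  private _age = 0;
  private queued = false;
  renderCount = 0;
  html = "";

  get name() { return this._name; }
  set name(v: string) { this._name = v; this.scheduleRender(); }

  get age() { return this._age; }
  set age(v: number) { this._age = v; this.scheduleRender(); }

  // Many prop changes, but only one queued render.
  private scheduleRender(): void {
    if (this.queued) return;
    this.queued = true;
    renderQueue.push(() => { this.queued = false; this.render(); });
  }

  private render(): void {
    this.renderCount++;
    this.html = `<div>Hello, my name is ${this._name} and I am ${this._age}</div>`;
  }
}

const c = new GreetingComponent();
c.name = "Stencil";
c.age = 1;           // both writes land before the queue flushes
flushRenderQueue();  // one render for two prop writes
console.log(c.renderCount); // → 1
```

The explicit flushRenderQueue stands in for the scheduling a browser would do; the point is only that redraws are batched rather than performed synchronously per change.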
So we tell the browser: hey, these are all the components that are possible, but that's about it. We don't go into any further detail, and we don't ship any more code than saying ion-toggle is a thing, ion-checkbox is a thing — and hey, browser, if you happen to see one of those, let me know. That's about all we register up front. From there, we're essentially reversing how traditional lazy loading works. Traditionally, a developer runs the build step on their machine, and we take best guesses at which things should be bundled together, which pages go together, and which code we think the user is going to request, and we ship that as one big final package. But with this, we let the browser decide exactly what it needs. We don't even get involved; we just say, hey browser, let me know when this happens, and when you need ion-checkbox, we'll send it to you at that time. So users only download what they need. Another awesome benefit is that because we have a compiler, we can hash all the file names, and with that we can forever-cache all of these files — on your phone, on CDNs, on edge networks. So if you make the slightest update to your app — say you had a misspelling you want to fix — you don't need to rebuild the entire application and reship it to millions of users who then download the large bundle all over again. You can just say: this one file updated, let's update that file. Users only download the changed file, and everything else stays cached on their phone, as fast as it can possibly be. We're really proud of being able to enable this through how we're using a compiler. And on top of that, we wanted to make sure all of this was easy to share.
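The two ideas just described — register only tag names up front and load code on demand, then content-hash file names so everything can be cached indefinitely — can be sketched together. This is an illustration of the general technique, not Stencil's actual loader or build output; the tags and sources are made up.

```typescript
import { createHash } from "crypto";

// Illustrative sketch, not Stencil's real loader: register only tag names
// up front, load a component's code the first time its tag is "seen",
// and name each file after a hash of its contents for long-term caching.

type ComponentSource = string;

// Up-front registry: just the tags, no component code shipped yet.
const loaders = new Map<string, () => ComponentSource>([
  ["ion-toggle", () => "class IonToggle {} /* full implementation */"],
  ["ion-checkbox", () => "class IonCheckbox {} /* full implementation */"],
]);

const loaded = new Map<string, ComponentSource>();

// Called when the browser first encounters a tag (a stand-in for the real
// mechanism, which reacts to elements actually appearing in the DOM).
function onTagSeen(tag: string): ComponentSource | undefined {
  if (!loaders.has(tag)) return undefined;
  if (!loaded.has(tag)) loaded.set(tag, loaders.get(tag)!()); // load on demand
  return loaded.get(tag);
}

// Content-hashed file names: the name changes only when the code does,
// so every file can be served with "cache forever" headers.
function hashedFileName(tag: string, source: ComponentSource): string {
  const hash = createHash("sha256").update(source).digest("hex").slice(0, 8);
  return `${tag}.${hash}.js`;
}

onTagSeen("ion-toggle");                 // only ion-toggle's code is loaded
console.log(loaded.has("ion-toggle"));   // true
console.log(loaded.has("ion-checkbox")); // false — never requested
const a = hashedFileName("ion-toggle", "class IonToggle {}");
const b = hashedFileName("ion-toggle", "class IonToggle { fix() {} }");
console.log(a === b); // false — changed content gets a new, cache-busting name
```

Fixing a one-line typo in ion-toggle changes only that file's hash, so returning users re-download a single small file while everything else is served from cache.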
In large organizations — any company or organization, really — it's often the case that different teams use different frameworks, and different developers prefer different frameworks. This has come up a lot during this conference. So we want to make it easy for people to put together a collection of their components that they can share with all the teams in their company. And we want to do this through npm — it's awesome to hear that Polymer is doing the same thing. Another challenge we've had has been with sharing SVGs, like our Ionicons. Ionicons is a project of ours with over 900 SVG icons to choose from. Again, there's the problem of: well, I don't want to download 900 icons for this to work. We were able to solve that with the same concepts here. It's easy enough for a developer to build an app with their 900 icons in it, but as a UI library that wants to share those 900 files, it becomes a challenge — because now they're in node_modules, which needs to get copied to the correct folder in your application, and at the same time, they need to get requested from the right location by the browser. So there are two steps here, and that's always been a challenge with a lot of workarounds. With Stencil, we made sure this is seamless: you just import the Ionicons collection, and it's all handled for you, so you don't have to worry about that. Additionally, we're able to group certain components together, because in reality, certain components come together. For example, we have ion-card, and ion-card has ion-header and ion-content inside it. Chances are, if you use ion-card, you're going to use those other two as well.
So we want to reduce as many separate requests as possible, and we're able to easily bundle those together. Again, the biggest thing I want to get across is that the final output is not a Stencil thing — it is purely vanilla custom elements. And Ionic Core is really just another collection, so it can be reused in any framework, including Ionic Angular. That's another thing I want to make sure I get across: the next versions of Ionic Angular will continue to work exactly the same. Everything about how you use Ionic Angular is just like how version 3 works, except now, under the hood, we're using this to pull it off. And we, as the developers, get the added benefit of being able to enable it for React developers and Vue developers and so on — and whatever comes out next month. Additionally, we wanted to make sure server-side rendering was enabled. Traditionally, that's been difficult to do with web components, but because we're using our own asynchronous rendering engine, we were able to add it to our components fairly easily. What that means is that your first paint can be just HTML and CSS, so the perceived load is lightning fast — as fast as it could possibly be, because you're rendering only the bare minimum of what that first paint has to be. On top of that, another thing that keeps coming up is TTI, time to interactive. It was our core focus to make the actual time to interactive as fast as it possibly could be. So what we do is make sure that all of the existing nodes that came back in that first response, for that first paint, are the same ones that we then enhance with listeners. We're not repainting the whole thing like you see in other frameworks. And just like how, when you develop on desktop and on mobile, you have a single experience of what you're creating,
we feel it's the same whether you're developing for the client side — a browser — or developing for the server. So we want the experience of developing Stencil components to be the same whether you're doing server-side rendering, pre-rendering, or targeting the browser. To do that, we make window and document and everything you're used to available, so you don't have to do a mental shift between the different environments. So, some FAQs. Who is Stencil for? Well, first and foremost, it's for us. We built it because we needed faster apps. As we mentioned, we want to be a major UI framework and solution for progressive web apps, and frankly, we were dead in the water without a different approach. So we need faster apps, with components that run in all frameworks, because we can't possibly support every framework out there — plus the ones that will come out in the future — with custom bindings. Secondly, it's meant for UI framework authors. I strongly believe that the days of building a CSS-plus-jQuery framework — say, a Bootstrap, or even an Ionic that has its own CSS — and then trying to build custom bindings for every framework are coming to an end. I think that's a really positive thing both for UI framework authors, because they can do less work, and for users. We've all used a Bootstrap binding that had one set of features for one framework and a different set for another. Web components solve that problem for us, and Stencil can help these developers build those components. For large development teams using a lot of different frameworks and build chains, Stencil and web components let you share components — say your team has a branded button or a specific element that has to be in every single app. Instead of duplicating it every time, you can just share the component with everyone.
And finally, for app developers: we all want faster mobile and desktop apps, right? This can help us get there. So how is this different from current solutions? I think the biggest difference is that it's a compiler, not a runtime solution. We do all the work during the build step. It's just a simple npm install — there's no CLI, just an npm script for your app — and it generates a set of optimized custom elements. There's nothing Stencil in the final output. We keep saying that, but I really want to drive it home: there's no window.Stencil. It doesn't exist. It's just your components. And this complements all frameworks — we really don't want to discourage anyone from using React or Vue if you like them. That's great. We've taken a step back and said: we don't really care what you want to use anymore. We're done fighting that battle. We're just going to create custom elements, and you can use them if you want. But if you want to explore this brave new world where you don't use a traditional framework — you just use web components — we support that too, and we're actually really, really excited about that. Hashtag use the platform. And it's a small footprint: there's very little code here, and there's no CLI. And finally, going back to preferences: we built this on TypeScript. TypeScript gives us a lot of rich typing metadata that we actually use in the tool, and as a company, we've really embraced it. We feel like enterprise usage of TypeScript is really accelerating. So we based it on TypeScript. We love it — if you don't, then maybe this is a point of disagreement. So what's next with Stencil? Stencil-created components are going to go live in, hopefully, hundreds of thousands of Ionic apps. Who knows — this might be the biggest distribution of web components in the app stores in short order. So we're really excited about that.
We're going to begin seeking feedback on an experimental Ionic-with-Stencil starter, instead of just Ionic with Angular or a typical framework, and based on that feedback, we might integrate it more deeply into our tooling. This is still experimental; we're not really sure how people will like it. We need better state management — something like Redux — so we're working on that, on making it easy to import third-party modules, and on looking at template literals, like lit-html, beyond JSX. Because we're just a compiler, it's pretty easy to swap that out; we've simply decided to go with JSX today. And then we're going to submit to HNPWA, which we're really excited about. Yeah — again, this is something we built for ourselves, but we're really proud to show it off and get feedback on what you and the community think. It's up on our GitHub, and we have a couple of demos out there, including the Fiber demo and the Hacker News demo. We've blabbed on enough about this, but one thing I want to say is: take a look for yourself. Take a look at this Hacker News app, do the traces yourself, look at the file sizes, look at how fast the time to interactive is, and you'll see that it rivals the best PWAs out there right now. And we're really, really proud of it. On top of that, it's built from a component library. It's not just a bunch of divs with different background colors being switched out; it's an actual, legitimate, large-scale application with modals, alerts, popovers — all the different things you want in an application — and it loads just as fast as any of the fastest PWAs right now. So please, take a look. We're really proud of what we have so far. Thank you very much. Thank you.
So some of you may have noticed it took us a little while this summer to get the schedule out — one of the last pieces, on the last day, we were still trying to get it up. We had asked the Ionic guys if they wanted to do a presentation at the Polymer Summit, and they got really excited about it, and Dion sent them an email asking for the abstract and bios and all that so we could put it up. They kept tweeting about how very, very excited they were, but we weren't hearing back about the bios and the abstract. It turned out, on the very last day, that Dion had the email address wrong, so the entire time we weren't actually communicating — apparently ionic.com is some other thing, and they weren't even replying to us. Luckily, we got all that figured out at the very last second. So, it's lunchtime. Just a few announcements, like yesterday: Halal and Kosher meals are available at the station over there; if you can't find them, ask a staff member. There's a code lab starting in 15 minutes — a really cool one on PRPL with Firebase and custom elements, with Keanu, in the code lab area — and we'll be back here at 1:30, or 13:30 if you're not a silly American like me. See you in a little bit. Let's see — our next set of talks are going to shed some light on some mysterious black-box stuff. Much like Schrödinger's cat in the box, we're going to reveal the true state of things. Speaking of which, anyone know Schrödinger's favorite input type? Checkbox. Okay, so our next speaker is also my manager, so if you give him a big round of applause, I can use that in my performance review: Gray Norton. Thank you, Brendan, that was excellent. Welcome, everybody. My name is Gray Norton. I'm an engineering manager on the Polymer team, and today I'm here to kick off an hour's worth of content on server-side rendering, or SSR for short.
I'll get the ball rolling with a brief introduction, and then Sam Li and Trey Shugart will follow me with a couple of different approaches to server-side rendering web components. Sound good? All right, let's dive in. Over the next ten minutes, we'll ask and answer a series of key questions about SSR. The first one: why are we even talking about server-side rendering here at the Polymer Summit? In short, because you asked us to. SSR is a hot topic in web dev, and frankly, we've seen quite a bit of confusion about what exactly SSR is, why it's important, and how it relates to web components specifically. So our goal is to answer that call for information, clear up some of the confusion, and let you know about some recent developments specifically related to web components. To start, let's establish what we even mean by SSR. For the moment, I'd like to define it very broadly: we'll say you're doing SSR if your web server ever delivers meaningful, renderable, route-specific HTML to browsers or other user agents. So, for example, given a URL like this, if your server sends something like this to a browser or a bot, then you're doing SSR. On the other hand, if you only ever send an empty HTML shell, more like this, then that's not SSR. This may seem overly broad, but stick with me; we'll get more specific soon. With that definition in mind, the next question is: why would you want to use SSR? It turns out there are two major reasons. If you've been building apps for production or paying attention to the public conversation on the topic, you probably know these, but let's quickly spell them out. The single biggest reason is to maximize your reach. To have any kind of impact, your app of course needs to have users, and the best ways to reach users today are search and link sharing, both of which rely on non-browser user agents called bots.
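The distinction being drawn here — meaningful, route-specific HTML versus an empty shell — can be made concrete with a small sketch. The routes, markup, and function names below are made up purely for illustration:

```typescript
// Illustrative only: contrasting an SSR response with an empty app shell.

// SSR: route-specific, renderable content is present in the HTML itself.
function ssrResponse(route: string): string {
  const articles: Record<string, string> = {
    "/news/42": "<h1>Polymer Summit: Day Two</h1><p>Stencil announced...</p>",
  };
  const body = articles[route] ?? "<h1>Not found</h1>";
  return `<!doctype html><html><body>${body}<script src="/app.js"></script></body></html>`;
}

// Not SSR: every route gets the same empty shell; content only appears
// after client-side JavaScript runs — which many bots never do.
function shellResponse(_route: string): string {
  return `<!doctype html><html><body><div id="app"></div><script src="/app.js"></script></body></html>`;
}

console.log(ssrResponse("/news/42").includes("Polymer Summit")); // bot sees content
console.log(shellResponse("/news/42").includes("Polymer Summit")); // bot sees nothing
```

Both responses load the same client-side JavaScript; the difference is only in what a user agent sees before (or without ever) executing it.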
Search bots crawl the web and populate search indexes, while link bots visit URLs and produce appealing visual snippets to encourage sharing. Sadly, many of these bots don't run your client-side code. If you don't serve them any meaningful HTML, your site might as well be invisible. Okay, I need to pause here for a second before we move on to reason number two, because I'd like us to recognize how messed up it is that here in the year 2017, we're depending on services to help us find and share information that basically see the web like browsers from 1997. For anyone who embraces the idea of the web as a dynamic and capable platform — and I'm guessing there are probably a few of you in the room — I think this is an existential issue. So let's fix it, all right? Okay, with that off my chest, back to our regularly scheduled talk. The second reason you might use SSR is to maximize loading performance. We all know that performance is a critical component of user experience, and in the mobile era, loading is almost certainly the most important aspect of performance. The performance-based argument for SSR is pretty simple: serving renderable HTML is generally the fastest way to get something on screen, since the browser can paint right away, before loading, parsing, or executing any of your JavaScript. Makes sense, right? So now we've reviewed the two key benefits of SSR, but I'd like to look critically at each one. First: do you really need SSR for search and sharing? Honestly, this is a question you need to answer for yourself, based on the nature of your site and your business. But that answer feels like a bit of a cop-out, so we'll just say: most likely, yes. Let's note the exception first, though. If you don't rely heavily on sharing, and you're satisfied with the reach that Google Search gives you, you may not need SSR. That's because Googlebot uses a rendering service based on Chrome, and therefore sees your site very much like your users do.
There is a caveat: Googlebot's version of Chrome isn't entirely up to date, and it has a few features disabled, so its rendering may differ from your users' in subtle — or sometimes not-so-subtle — ways. The details are out of scope for this talk, but I encourage you to check out these resources on developers.google.com for more information. In general, though, you'll be happy to know that Googlebot runs your client-side code, fetches your data, and renders your app very much like a real browser. The problem is that Googlebot is not at all typical. As we've already mentioned, many bots don't attempt to run your code at all, and many that do, do so in very limited ways. So if you care about non-Google search engines, or you depend on link sharing for much of your traffic, you'll probably want to use some form of SSR. We'll see how you might do that in a minute, but first let's ask the same question about performance that we just did about search and sharing. The answer to this one is even more nuanced, and frankly starts creeping into some political territory. But what the heck, I'll just say it: no, you don't need SSR to achieve excellent loading performance. Using patterns like PRPL, you can deliver a great loading experience without SSR. We've shown this with our own example apps, and developers of real-world Polymer apps have done the same. The Polymer version of the Hacker News PWA is among the fastest to load, despite not using any SSR. But while it's nice to know that you can build a fast-loading app without SSR, what's more important is actually the converse: using SSR is by no means a guarantee that your loading performance will be good, or even adequate. Kevin Schaaf built this slide to demonstrate a problem known as the uncanny valley. SSR helps you get something on screen as quickly as possible, but if your app isn't interactive within a few seconds, your users probably aren't going to be happy.
And SSR doesn't do anything to get you interactive faster. So whether it's more important to paint quickly or to get interactive quickly may depend a lot on the nature of your app, but the bottom line is that SSR should in no way be viewed as a magic bullet for loading performance. With that said, can SSR help with performance? Absolutely. You should try hard to get interactive as quickly as possible, but for however long that takes, your users would rather be looking at meaningful content than at a blank screen. Ultimately, I think the best user experience will probably come from combining some form of SSR with a technique like PRPL. All right, now we can move on and talk about SSR and web components specifically. As I mentioned in my intro, we see quite a bit of confusion about web components and SSR, probably because certain features of web components do pose challenges for one very popular flavor of server-side rendering. We'll discuss what those challenges are, but the key takeaway is that web components are absolutely compatible with SSR. Let's see how, starting with a common pattern that works with virtually any form of server-side rendering. Specifically, reusable web components — like UI controls and embeddable widgets — can be used in any server-side rendering scheme. The same goes for a class of elements we'll call presentation components. As illustrated by this example from Electronic Arts, these are components you build yourself to standardize patterns for presentation and navigation. Each component has its own semantics and encapsulates styling and behavior, ensuring consistency across your site, or even a family of sites. And you can use presentation components and UI elements with any form of SSR.
So you might craft your HTML by hand, generate it offline, server render it at runtime, whatever; you just include custom elements in your markup, being sure to place meaningful content like links, text, and images in the light DOM where bots can see it. Now, one quick note is that this pattern does lead to rendering in two stages. Your document will paint immediately, but your custom elements won't render fully until they've registered and upgraded. So it's a subtle point, but something to keep in mind. But what if you're not just integrating components into your site, but building your entire app out of web components from top to bottom? What if you don't have an existing SSR solution to lean on? In this case, you need something more. Let's look at two possibilities. One option is to delegate bot rendering to a standalone service. So let's assume you're doing something like this. A user shows up, requests one of your app's routes, and following the PRPL pattern, you serve an empty HTML shell and just enough JavaScript to handle that first route and let the browser take care of the rest. But now suppose a bot shows up and requests that same route. Before responding, you send that same HTML shell and JavaScript to a rendering service. The service returns your rendered HTML, and you pass it along to the bot, and everybody's happy. So all things considered, this is probably the best way to implement SSR for a pure web components app today. The integration's light, you don't write any server-specific code, and bots and users each get what they need. Now, the approach isn't new, it's been used for multiple generations of AJAX and single-page apps, but there's always room for improvement, and Sam Li will be up on stage next to tell you about a modern, web-components-friendly rendering service that he's built using Headless Chrome. Now, you may have noticed that a rendering service doesn't address performance, though, since you're only sending HTML to bots.
For that reason, you might want to explore another approach called isomorphic JavaScript. This is a form of SSR in which your front-end code is explicitly designed to run both on the server and in the browser. So instead of returning an empty shell, you run just enough code on the server to render your initial HTML, which you then serve to both browsers and bots, along with your client-side JavaScript. Once the browser loads your code and the app boots up, it reclaims those pre-rendered DOM nodes, and then runs just as if it had rendered on the client to begin with. This is super elegant, and it can be a great solution, but as always, there are trade-offs. So your code gets a bit more complicated, and you end up doing more work on the server for every request. And as we mentioned, there are those couple of web-component-specific challenges. First, because web components rely on web platform APIs, you need to either include a full browser in your server environment, which sounds expensive, or use an abstraction layer that provides a lighter-weight implementation of those same platform APIs. And the thickness of that layer may vary depending on how many platform APIs you need to render your initial app state. Now, if your components use Shadow DOM, there's an additional wrinkle. After you build your DOM tree on the server, you need to serialize it into HTML markup. So without Shadow DOM, as in this case, it's pretty straightforward. But unfortunately, the platform doesn't currently give us any way to represent a shadow root in serialized HTML, so our simple task becomes a tricky problem. Now, there are ways to work around this limitation, but they require inserting placeholders and running a bit of client-side code during your initial render. So Trey Shugart will be on stage shortly to share his work on isomorphic JavaScript for web components and take a closer look at all of these considerations.
And with that, we've wrapped up our brief intro to SSR. I hope it's been useful to you, and I'm sure that you'll be excited to see what both Sam and Trey have in store for you next. So thanks, and enjoy the rest of Polymer Summit 2017. Thanks, Gray. I'm just gonna try and keep the momentum going. Our next speaker, Sam Li, is gonna give you a tool to try this out. And Sam, take it away. Hi, everyone. I'm Sam Li, and I'm an engineer on the Polymer team. If you managed to pick up on my accent in the last five words, I am indeed Australian, and I'm honored to be followed up by Trey, a fellow Aussie as well. Prior to joining this team, I worked on the beloved Chrome DevTools. One of my smallest, but maybe my greatest contributions was adding the ability to rearrange tabs in DevTools. They're probably the greatest five lines I've ever written. I did work on five other features, so if you find me afterwards, feel free to ask me about them, and I might share a DevTools trick or two. More recently, I've had the humbling experience of building webcomponents.org, and witnessing all the incredible components that all of you have built and published. For example, the one and only Pokemon selector. And if you're the person who says, but there's only 151 Pokemon in the original set, well, there's even an option that lets you set that too. So all kudos to Sam Lee for this. It was, however, the process of building webcomponents.org that brings us to what we're here to talk about today. So first, I'm gonna cover my story of how I came to encounter this SEO problem while building webcomponents.org. We'll then look at how I used Headless Chrome to solve this before diving into all the details of how that actually works and how you can use it. So I'm gonna take a step back for a moment and talk about what I learned in the process of building webcomponents.org.
The first thing I learned was how the platform supports encapsulation through the use of web components. With this encapsulation comes inherent code reuse, which leads to a specific architecture. I also learned about progressive web apps and how they can provide us with fast, engaging experiences. I learned how the platform provides APIs, such as service workers, to help enable those experiences. I also learned how to compose web components to build a progressive web app. And we heard from Kevin yesterday about the PRPL pattern, push, render, pre-cache, lazy load, as a method of optimizing delivery of this application to the user. And one of the architectures which enables us to utilize the PRPL pattern is the app shell model. It provides us with instant, reliable performance by using an aggressively cached app shell. You can see that for all the requests which hit our server, we serve the entry point file, which we serve regardless of the route. The client then requests the app shell, and because it's the same URL across the application, we can combine that with a service worker to achieve near instant loading on repeated visits. The shell is then responsible for looking at the actual route that was requested, and then requesting the necessary resources to render that route. So at this point, I'd learned how to build a progressive web app using client-side technologies, like web components and Polymer, and how to use patterns, such as PRPL, to deliver this application quickly to the user. Then there's the elephant in the room: SEO. Some of these bots are basically just running curl with that URL and stopping right there. No rendering, no JavaScript. So what are we left with? With this PWA that we built using the app shell model, we're left with just your entry point file, which has no information in it at all. And in fact, it's the same generic entry point file that you serve across your entire application.
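The server side of the app shell model described above is almost trivial: every application route maps to the same entry point file, and the client-side router does the rest. A minimal sketch of that resolution logic (the paths here are illustrative assumptions):

```javascript
// Sketch: in the app shell model the server has no route logic.
// Static assets are served as files; every other path gets the same
// entry point, and the client-side router takes over from there.
function resolveFile(path) {
  // Anything with a file extension is treated as a static asset.
  if (/\.[a-z0-9]+$/i.test(path)) return path;
  return '/index.html'; // the app shell entry point, same for every route
}
```

Because the entry point is identical for every route, a service worker can cache that one URL and serve repeat visits near-instantly.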
So this is particularly problematic for web components, which require JavaScript to be executed for them to be useful. This issue applies to all search engine indexes that don't render JavaScript. But it also applies to the plethora of link rendering bots out there. There's the social bots, like Facebook and Twitter, but don't forget the enormous number of link rendering bots, such as Slack, Hangouts, Gmail, you name it. So what is it about the app shell model that I would really like to keep? Well, for me, this approach pushes our application complexity out to the client. You can see that the server has no understanding of routes. It just serves the entry point file, and it has no real understanding of what the user is actually trying to achieve. This allows our server to be significantly decoupled from the front-end application, since it now only needs to expose a simple API to read and manipulate data. The application that we pushed out to the client is then responsible for surfacing this data to the user and mediating the user interactions that manipulate it. So I asked, can we keep this simple architecture that we know and love and also solve this SEO use case with zero performance cost? So then we thought, what if we just use Headless Chrome to render on our behalf? So here's a breakdown of how that would work. We have our regular users who are making a request, and they would like a cat picture, because who wouldn't? And as part of this approach, we ask, are you a robot? And to answer this, we look at the user agent string and check if it's a known bot that doesn't render. In this case, the user can render, so we serve the page as we normally would. The server responds with the fetch cat picture function, and then the client can go and execute that function to get the rendered result. By the way, this is one of my kittens which I fostered recently, and it's super adorable.
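The user agent check described above can be sketched in a few lines. This is a minimal illustration, not Rendertron's actual code, and the bot list is a small assumed subset rather than an exhaustive one:

```javascript
// Minimal sketch of user agent splitting: decide whether a request
// should be proxied through a rendering service. The bot list is a
// small illustrative subset, not an exhaustive one.
const BOT_USER_AGENTS = [
  'bingbot',
  'facebookexternalhit',
  'twitterbot',
  'slackbot',
  'linkedinbot',
];

function shouldProxyToRenderer(userAgent) {
  const ua = (userAgent || '').toLowerCase();
  return BOT_USER_AGENTS.some((bot) => ua.includes(bot));
}
```

Note that which bots belong on the list is a policy decision: Googlebot, for example, does render JavaScript, so you might choose to leave it out.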
Now, when we encounter a bot, we again look at the user agent string and determine that they don't render. And instead of serving that fetch cat picture function, we fire a request to Headless Chrome to render this page on our behalf. And then we send the serialized rendered response back to the bot so they can see the full contents of the page. So I built a proof of concept of this approach for webcomponents.org, and it worked. I wrote a Medium post about it, and people were really interested in this approach and wanted to see more of it. So based on this response, I eventually decided that instead of my hacky solution, I would build it properly. But then came the most challenging part of any project, and I know you've all experienced it as well: naming. So I asked on our team chat for some suggestions and I got a ton. These are some of the top ones, and there are some great ones in there. However, today, I'm very pleased to introduce Rendertron. Let me render that for you. Rendertron is a Dockerized Headless Chrome rendering solution. That's a mouthful, so let's break it down. First off, what is Docker and why did I use it? Well, no one knows what it means, but it's provocative. In all seriousness, Docker containers allow you to create lightweight images, as standalone executable packages, which isolate software from its surrounding environment. In Rendertron, we have Headless Chrome packaged up in this container, so that you can easily clone and deploy this to wherever you like. So what about Headless Chrome? It was introduced in Chrome 59 for Linux and Mac, and Chrome 60 for Windows, and it allows Chrome to run in environments which don't have a UI, such as a server. This means that you can now use Chrome in any part of your tool chain. You can use it for automated testing, you can use it for measuring the performance of your application, or for generating PDFs, amongst many other things.
Headless Chrome itself exposes a really basic JSON API for managing tabs, with most of the power coming from the DevTools protocol. All of DevTools is built on top of this protocol, so it's a pretty powerful API. And one of the key reasons that Headless Chrome is great is that we're now bringing the latest and greatest from Chrome, ensuring that all the latest web platform features are supported. With Rendertron, this means that the environment your SEO bots see is now first-class, no different from the one the rest of your users get. So just a quick shout-out: if this all sounds really interesting to you, and you would like to include Headless Chrome in some other way in your tool chain, there's a brand new Node library that was published just last week that exposes a high-level API to control Chrome, while also bundling Chrome inside that Node package. So you can check it out on GitHub at GoogleChrome slash puppeteer. So I've looked at a high level at how Headless Chrome can fit into your application to fulfill your SEO needs. Now it's time to dive into how it works, but I've been talking a lot. So who wants to see Rendertron in action? Yeah. All right, so this is the Hacker News PWA created by some of my awesome colleagues, and it's built using Polymer and web components. It loads really fast and all-round performs pretty well. We can see that there's a separate network request which loads the main content that we see. And we can guess that it's affected by this SEO problem, since it uses web components, which require JavaScript, and it pulls in data asynchronously. So one quick way to verify this is by disabling JavaScript and refreshing the page. And once we do that, we can see that we still get the app header, since that was in the initial request, but we lose the main content of the page, which isn't good. So we jump over to Rendertron, the Headless Chrome service that's going to render and serialize this for us.
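Under the hood, asking the service to render a page is just an HTTP GET. A sketch of building the request URL; the /render/<url> endpoint shape follows the public Rendertron demo service, and should be treated as an assumption to check against the docs:

```javascript
// Sketch: building the request URL for a rendering service.
// The /render/<url> endpoint shape matches the Rendertron demo
// service; check the project docs for the exact contract.
function buildRenderUrl(endpoint, pageUrl) {
  // Encode the page URL so its scheme, slashes, and query string
  // survive being embedded in the path of another URL.
  return `${endpoint.replace(/\/$/, '')}/render/${encodeURIComponent(pageUrl)}`;
}
```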
So I wrote this UI as a quick way to put in a URL and test the output from Rendertron. So first off, what are we hoping to see? Because these bots only perform one request, we want to see that whole page come back in that one network request. We also want to see that it doesn't need any JavaScript to do this. So take a look. I'm going to put in the Hacker News URL and tell Rendertron to render and serialize this, and that I'm also using web components. And it renders correctly. I'm going to disable JavaScript and verify that it still works. So you can see it's still there, and it all comes back in that single network request. Rendertron automatically detects when your PWA has completed loading. It looks at the page load event and ensures that it has fired, but we know that's a really poor indication of when the page has actually completed loading. So Rendertron also ensures that any async work has been completed, and it also looks at your network requests to make sure they're finished as well. In total, you have a 10-second rendering budget. This doesn't mean that it waits 10 seconds, though. It'll finish as soon as your rendering is complete. If this is insufficient for you, you can also fire a custom event which signals to Rendertron that your PWA has completed loading. Serializing web components is tricky because of Shadow DOM, which abstracts away part of the DOM tree. So to keep things simple, Rendertron uses Shady DOM, which polyfills Shadow DOM. This allows Rendertron to effectively serialize the DOM tree so that it can be preserved in the output. So let's take a look at the News PWA, which you've all seen, and which was also built by some of my other colleagues. And we'll plug that into Rendertron. We'll then ask Rendertron to render this as well, and tell it that I'm also using web components. And there we have it. So what do you need to do to enable this behavior? With Polymer 1, this is super easy, and Rendertron doesn't actually need to do anything.
Simply append dom=shady to the URLs that you pass to Rendertron, and Polymer 1 will ensure that Shady DOM is used. With Polymer 2 and with Web Components v1, it's recommended you use webcomponents-loader.js, which pulls in all the right polyfills on different browsers. You then pass a flag to Rendertron, telling it that you're using web components, and it will ensure that the necessary polyfills it needs for serialization get enabled. So another feature of Rendertron is that it lets you set HTTP status codes. These status codes are used by indexes as important signals. For example, if an index comes across a 404, it's not going to link to that page, because that would be a really poor search result. Our server, though, is still returning that entry point file with a status code of 200 OK. So it looks like every URL exists. Rendertron lets you configure that status code from within your PWA, which understands when a page is invalid. Simply add meta tags, dynamically is fine, to signal to Rendertron what the status code should be. Rendertron will then pick these up and return that status code to the bot. So this approach isn't specific to Polymer or even web components. Let's plug in fonts.google.com and see what happens when we serialize it. So that looks pretty good. Who can guess what JavaScript library was used to build Google Fonts? Angular. Rendertron works with any and all client-side technologies that work in Chrome and whose DOM tree can be serialized. The Rendertron endpoint also features screenshot capabilities, so that you can check that Headless Chrome and the load detection function are performing as you expect. Unfortunately, this service is not fast. For each URL that we render, we spin up Headless Chrome to render that entire page. So performance is strictly tied to the performance of your PWA. Rendertron does, however, implement a cache.
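That cache can be sketched as a simple freshness check. This is an illustration of the idea, not Rendertron's implementation, and the names are assumptions:

```javascript
// Sketch of a render cache with a freshness threshold: re-render only
// when the cached entry is older than maxAgeMs. Names are illustrative.
class RenderCache {
  constructor(maxAgeMs) {
    this.maxAgeMs = maxAgeMs;
    this.entries = new Map(); // url -> { html, renderedAt }
  }

  get(url, now = Date.now()) {
    const entry = this.entries.get(url);
    if (entry && now - entry.renderedAt < this.maxAgeMs) {
      return entry.html; // still fresh: serve the cached response
    }
    return null; // stale or missing: the caller should re-render
  }

  set(url, html, now = Date.now()) {
    this.entries.set(url, { html, renderedAt: now });
  }
}
```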
This means that if we have rendered the same page within a certain cache freshness threshold, we'll serve the cached response instead of re-rendering it again. So how can you get your hands on this today, and how do you use it? Well, first, you'll need to deploy the Rendertron service to an endpoint. You'll need to clone the GitHub repo at GoogleChrome slash rendertron. It's built primarily for Google Cloud, so it's easiest to deploy there. But if you remember, this is a Docker container, so you can deploy it anywhere which supports a Docker image. To make things simple for you to test out, we have the demo service endpoint, which you can hit at render-tron.appspot.com. And that's the one with the UI that we saw earlier. It is not intended to be used as a production endpoint; however, you are welcome to use it, but we make no guarantees on uptime. Having this as a ready-to-use service is something we might consider based on the interest we receive. So just in case you're wondering, my boss's Twitter handle is MattSMcBelty. Just in case you wanted to tell him how awesome I am. So once we have that endpoint up, you're gonna need to install some middleware in your application to do the user agent splitting that I was talking about earlier. So this middleware needs to look at the user agent, figure out whether or not they can render, and if not, proxy the request through the Rendertron endpoint. If you're using prpl-server, which is a Node server designed to serve production applications using PRPL, you simply need to specify the bot proxy option and provide it with your Rendertron endpoint. If you're using Express, there's a middleware that you can include directly by saying app.use rendertron.makeMiddleware with the proxy endpoint and whether or not you're using web components. If you're not using either of these, check the docs for a list of community maintained middleware.
There's a Firebase function there, as well as a list of existing middleware that Rendertron is compatible with. If it's not listed, it's also fairly simple to roll your own middleware by simply proxying based on the user agent string. And that's it. That's all the changes you need to make to use Rendertron today, and all these bots can now be happy. Rendertron is available to use today, compatible with any client-side technologies, including both Polymer 1 and Polymer 2. Thank you. That is so awesome, Sam. I was so excited for him to reveal that, I had to run off the stage and I forgot to tell you something important. Your brains are working, you're hearing new stuff, and you probably have questions. If you have any questions, go ahead and tweet them with the hashtag AskPolymer. The panel that's happening later today will see those and hopefully answer them. Our next guest is kind of skating to the SSR and SEO puck from a different angle: Trey Shugart. Hello, how's it going? I'm Trey. I work on the front end at Atlassian by day, but by night, I'm a hopeless romantic for web components. And it's almost shameful. So you may or may not have heard of a library that I created called Skate.js. It's like Polymer, but it approaches things from a different angle. I'm not here to talk about that, but I do make mention of it in the slides, so it's worth mentioning. I'm here to talk to you about some techniques that we're using to server-side render web components and why we need to do so. But before we dive head first into that, I really wanna briefly cover what pushed me to actually try and do this. So I tend to straddle the line between the web components community and other communities. And it puts me in a somewhat unique position to empathize with viewpoints from those other communities, as long as I'm able to keep my bias in check, of course. Of those criticisms, there are really three main ones that keep popping up.
The first one is that you can't represent complex data structures with attributes. It's always been an issue with HTML, so it's not really specific to web components. And Polymer and Skate kind of help you get around this, because they set properties and manage that property-to-attribute reflection for you. So it's not really a must-have. The second one is that there's no opinionated templating model. And Polymer and Skate kind of do this for you, too. And there are also libraries like lit-html and Preact to weave into your custom elements. So it's not really a must-have. And lastly, the one that I hear the most, really, is that you can't server-side render Shadow DOM. And it's proven to be something that we do need in order to solve a couple of problems. But why can't we do this out of the box? So rendering Shadow DOM on the server is not possible because there's no way to attach a shadow root and represent its content without executing imperative JavaScript on the client. And while you can declaratively render your element without a shadow root, with some light DOM, you must then imperatively create and attach the shadow root and then put some content inside of it. There's no way to express this within the bounds of HTML. So there are two main reasons compelling us to do server-side rendering. The first one is SEO and bots. And the other one is to help with user experience. And this can be kind of contentious, and I'll explain why shortly. Server-side rendering isn't a solution that's looking for a problem. There's prior art here. Many JavaScript communities, such as Ember, React, Vue, and Svelte, are all turning to server-side rendering to help solve these problems. Search engines and other bots need to be able to read, scrape, and index content and then do something with it. These bots may or may not execute JavaScript. And of the ones that do, they're going to do so to different extents.
If we wanted to do something like render an app shell with a single custom element, not all search engines and bots are going to be able to read the content of that. If they execute JavaScript, they might, but if they don't execute JavaScript, they're definitely not going to see it. Some have suggested not putting content that you want crawled in the shadow root, and it works, but I don't think it's really fair. This precludes using custom elements as templating hooks. For example, the about page here. It might be useful to embed or share this in another part of the app or the site. Twitter is a great example of an app that would probably use an app shell, and possibly use a router to then render a custom element for the page, and it needs crawlability. Googlebot is probably the biggest and most notorious search engine out there. And it was recently announced that it's based on Chrome 41. Bing comes in second with a fairly sizeable market share, but I couldn't find any documentation about what it supports. It turns out they both execute JavaScript, but they don't execute modern ES2015. This means that you have to use the polyfills, and then you either have to write your custom elements in ES5 or you have to use a transpiler. And during my testing with Bingbot, I didn't get consistent results, because depending on which webmaster tool I was using, they all behaved a little bit differently. And what about other bots? Things like social shares and simple crawlers in language-specific libraries. In my limited testing on social media sharing, the server-side rendered pages behaved better. So the moral of this story is that there's a vast world of bots designed to massage content. Some of them don't execute JavaScript, but of the ones that do, your mileage is definitely gonna vary. Testing SEO has proven to be tedious and time-consuming, and the results also sound a little hand-wavy, which decreases my confidence that what I've tested is actually going to work.
And something I'm not really sure of is whether relying on JavaScript is even a good thing, because on top of targeting a browser matrix, you now have to target different bots with different capabilities. And as a developer, I really just want everything to just work without having to jump through hoops. The second major reason to use SSR is for user experience. Many argue that server-side rendering can hinder user experience, and this can be true, because you have a larger HTML payload. It looks interactive, but it might not be. This is called the uncanny valley. Anchors and other built-ins will probably be interactive, of course, but what about complex UI components that require JavaScript to boot up? And others argue that it can improve user experience, because it's a faster time to first paint. And users can start consuming that content and plan their initial action while the JavaScript boots up. They're both right. The key here is context. Does your target audience mainly consume or interact with your content? Do you use a lot of built-in elements, or do you have a lot of custom components that require JavaScript? How long does it take for your JavaScript to download, parse, and execute? So the moral of this story is actually very similar to the last one, in that your mileage is going to vary. Server-side rendering is a tool that can help you improve your user experience. Use it if you need it, but you need to measure and you need to care, and this is something that we should be doing regardless in our jobs every day. Using server-side rendering to solve your problems depends a lot on context. If you're only targeting Googlebot or Bing, you might be okay. That said, you have to know the limitations of what you're targeting. The same goes for UX. You have to know your audience. Measure your app's performance, and if you think it will help, try it. Measure it. If not, you don't have to use it. Okay.
So now that we've defined the problems we face, let's look at how we can solve them. At this point, I'd like to reiterate that our goal here is declarative Shadow DOM. And I wish I had a dollar for every time I used the word declarative, because I'd be rich, but it's very important. It underpins this talk because it's a core tenet of HTML. Therefore, it becomes a fundamental principle for being able to express Shadow DOM. I'd like to start off by defining some terms real quickly. The first one is web components. This is just custom elements and Shadow DOM; that's all I'm talking about here, not HTML imports or templates or anything. The composed DOM is the state of the DOM when the light DOM is distributed through the slots in the shadow root. And serialization is when you transform a DOM tree into the composed HTML. Rehydration is the reverse: reverse engineering that string, taking the composed tree and turning it back into light and shadow DOM. At a high level, we planned this as a three-step process. First, we wanted to be able to run web components in Node, because we really wanted the simplicity of universal JavaScript and being able to co-locate client and server code. Next, we wanted to take a DOM tree containing both shadow and light DOM, and then serialize that down and transform it into a string. And finally, we needed to then run JavaScript on the client to rehydrate that. There's a couple of different approaches we could have taken here. The first approach is running it in something like Headless Chrome or Electron. I haven't tested Headless Chrome, but my first foray into all of this was trying Electron to do this. The DOM API support is top-notch, but there's a bit of friction, because you can't co-locate your client and server code. Everything has to be proxied through a separate tool. And because of that, performance is questionable. So the other method of doing this is running your code in something like JSDOM or Domino or Undom.
Depending on the implementation, this can actually be very fast. The API ends up being a lot simpler, because there's no context shift between tools. And it opens up a lot of doors within Node, and I'll talk about this a bit more in a little bit. Unfortunately, none of them support web components yet. There is work happening to fix that. JSDOM has an ongoing PR for custom elements and Shadow DOM. And Domino has a PR open for just custom elements. However, we decided to use Undom. It's written by the author of Preact, Jason Miller. And it's a subset of the DOM APIs. And we did that so that we could build on top of it and build our own subset of custom elements and Shadow DOM. We really wanted the ergonomics of co-locating client and server code. And it's only 1K, so the upfront overhead is pretty minimal. It's also fast, because it's basically just array manipulation hidden behind a DOM-like API. And it's easy to extend, because you don't really have to worry about implementing the standards in full. Being a minimal implementation allowed us to move really quickly. We needed that, because there was a lot more work to do in order to prove out this concept. The first pass of all this work is a new library under the SkateJS org, excitingly titled SSR. The only thing you have to do to start writing web components in Node is to make sure the DOM APIs exist in your execution context. To do this, you simply require the module that registers the APIs. Once this has been done, it's as if the server and the client have joined forces. So now you can write your code as if you were running on the client. And here we've defined a simple hello world component that we're going to be using in the next example. So now that we can run our DOM code in Node, the next step is taking a DOM tree and turning it into a string. Our requirements for the string are pretty stringent. First, it must represent the composed tree, because it must be legible to bots without executing JavaScript.
And the content must appear in order and retain semantic meaning. Second, it must contain all the information required to transform it back into the state before it was serialized, so the state that was on the server. And third, it should be reminiscent of what a standards-based approach might look like, because we do want to take this to the W3C as a future proposal. So assuming we've already defined the hello component in the previous example, we don't need to do that much to serialize it. The library's default export is a render function, so we load it up to serialize the hello element. And then we create an instance of the hello element that we're going to serialize. And we want to project some content into the light DOM, so we set the textContent property. And when we call render on our element, it returns a promise that resolves with the serialized result. Using a promise helps account for components that might render asynchronously. For example, Skate does this to batch property updates into a single render, which happens on the next microtask. You can also pass a custom resolver, for example, if you have a fetch request somewhere in your code that you need to account for. So the previous example would output something like this. And this is the composed tree with a few extra bits. So now that we've serialized our component, let's look at the rehydration process. This is the previous result that we had. Without built-in declarative primitives, we have to execute JavaScript in order to rehydrate this. So we dedupe the rehydration code into a single function. That way we can call that function whenever we need to rehydrate a shadow root. And it's called rehydrate here, but the library is going to make sure it doesn't have any collisions in the global namespace; this is just for the example. So upon rehydration, the script tag that invoked rehydrate gets removed. The placeholder shadow root element is also then removed.
And the real shadow root is then attached to the host in its place. And the content from the placeholder shadow root is then transferred to the real one. Now we have to take all the top-level slots, take their assigned nodes, or direct children, and move those back into the light DOM on the host. We have to be careful, though, so that we don't move any default slot content. The best way we've come up with so far is by flagging the slot with an attribute called default, but for the sake of simplicity, I'm not going to do that here. And this is what the full rehydration ends up looking like. This is similar to what you'd see in DevTools. It's also a really, really great example of why we need to serialize out the composed tree. Look at the order in which the text appears here: you have world appearing before hello. Even if world came after hello, it would still be appearing after that exclamation point. And if you had anything placed around the slot, for example to place emphasis on the text inside the slot, you're going to lose that semantic meaning. Using a shadow root custom element, though, does have its drawbacks. It's not perfect. It's really good because we could deliver a custom element in the future instead of using script tags. However, we didn't do this initially because delivering a custom element opens up questions around requiring polyfills and shims, and we wanted the usage of this to be as simple as possible. A possible alternative to a custom element would be using something like a composed attribute, or a shadow root attribute, on the host. Doing this means that there are fewer DOM elements that need to be thrashed, or mutations happening, and it seems to match the imperative API a little bit closer. Either way, declarative shadow roots are currently only designed to work for the initial parsing of an HTML page.
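The serialize-then-rehydrate round trip described above can be sketched on plain objects rather than real DOM nodes. This is not the actual library code; the `<shadow-root>` placeholder markup and the helper names are illustrative. It only shows the composed-tree idea: shadow content is emitted inside a placeholder element, slots carry their assigned light-DOM nodes so the text reads in composed order, and rehydration reverses the process.

```javascript
// Tiny tree helpers: nodes are plain objects, shadow content is optional.
const text = (s) => ({ text: s });
const el = (name, children = [], shadow = null) => ({ name, children, shadow });

// Serialize the COMPOSED tree: shadow content wrapped in a placeholder
// <shadow-root> element, with each slot containing the host's light DOM.
function serialize(node, host = null) {
  if (node.text !== undefined) return node.text;
  if (node.shadow) {
    const inner = node.shadow.map((c) => serialize(c, node)).join('');
    return `<${node.name}><shadow-root>${inner}</shadow-root></${node.name}>`;
  }
  if (node.name === 'slot' && host) {
    // Distribute the host's light-DOM children into the slot.
    const assigned = host.children.map((c) => serialize(c)).join('');
    return `<slot>${assigned}</slot>`;
  }
  const inner = node.children.map((c) => serialize(c, host)).join('');
  return `<${node.name}>${inner}</${node.name}>`;
}

// Rehydration: move the placeholder's content into a "real" shadow root,
// then return each top-level slot's children to the host's light DOM.
function rehydrate(host) {
  const i = host.children.findIndex((c) => c.name === 'shadow-root');
  const placeholder = host.children.splice(i, 1)[0]; // remove placeholder
  host.shadowRoot = { children: placeholder.children };
  for (const node of host.shadowRoot.children) {
    if (node.name === 'slot') {
      host.children.push(...node.children); // back into the light DOM
      node.children = [];
    }
  }
  return host;
}

// <x-hello>World</x-hello> whose shadow DOM is "Hello, <slot></slot>!"
const hello = el('x-hello', [text('World')], [text('Hello, '), el('slot'), text('!')]);
const html = serialize(hello);
```

Note how `World` (light DOM) ends up inside the slot, after `Hello, `, which is exactly the composed-order property the talk says bots need without running JavaScript.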
Using other declarative abstractions like React and JSX hasn't really been prototyped yet, but we hope to be doing that soon. Between the time that I wrote this talk and now, we actually implemented experimental encapsulation for CSS class names. This means that you potentially don't have to execute any JavaScript at all to server-side render, serve bots, and actually have scoped content on the page. I got about halfway through getting all this stuff prototyped, and it kind of just hit me: do you realize what doors this opens up? And all of a sudden, I got pretty excited. Behind door number one is the ability to run your tests in Jest, a testing framework by Facebook. Jest runs in Node and normally uses JSDOM as its default environment. So we wrote a custom environment that uses our Undom-based one plus the extensions that we put on top of it. All you have to do to configure Jest is to specify the environment in the package.json, and then you can start writing tests as if you're in the browser. Skate's entire test suite is now run through Jest using this. Similarly to Jest, you could use Mocha to run your tests directly in Node. It looks a bit different, though, because Mocha doesn't have a concept of environments. You're just running it directly in Node, so instead of configuring it to use an environment, you would do as you normally would and just require the APIs at the top. And then you can just write your tests as normal. If you find yourself rendering your content on the server statically to a single file with these APIs, you can actually quite easily generate an entire site statically. So we built a little CLI tool that will take a glob of JavaScript files that have custom element constructors as their default exports, and it will render each one to a static HTML file. It also accepts a props argument, so you can do pseudo-dynamic renders with it.
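A hedged sketch of what such a static-site step might do internally: take a set of module paths whose default exports are component constructors, render each one, and emit an HTML file per module. The render step is stubbed out here, and all the names are illustrative, not the real CLI's internals.

```javascript
// Stub for "render this component instance to a string" — the real tool
// would serialize the component's composed tree as shown earlier.
function renderToString(Component, props = {}) {
  const instance = new Component(props);
  return instance.render();
}

// Map each globbed module path to a rendered HTML page. A real tool
// would write these to disk; here we just return them.
function staticSite(modules, props = {}) {
  const pages = {};
  for (const [path, Component] of Object.entries(modules)) {
    const htmlPath = path.replace(/\.js$/, '.html');
    pages[htmlPath] = renderToString(Component, props);
  }
  return pages;
}

// Stand-in for a globbed module whose default export is a constructor.
class Hello {
  constructor(props) { this.props = props; }
  render() { return `<x-hello>Hello, ${this.props.name || 'World'}!</x-hello>`; }
}

const pages = staticSite({ 'pages/index.js': Hello }, { name: 'Summit' });
```

The `props` argument here mirrors the pseudo-dynamic renders mentioned above: the same pages can be regenerated with different inputs.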
And this could be useful for anything that doesn't have much dynamic content or isn't very interactive, like documentation sites. However, if you want to pre-render your pages and deliver your JavaScript separately, so that it upgrades some or even all of those rendered elements, you can do that, too. So dynamic SSR is a lot like static SSR. You're just running it in a Node server, like hapi.js or Express. You can make your components render dynamically via props by assigning the request parameters to the component. Doing this allows your component to then dynamically re-render according to those props, and then the output is serialized and sent to the client. Writing libraries can be fun, and it can be really useful to the community. It can also be kind of glamorous. However, the definition of success here is that this library can be made mostly, if not completely, redundant. We've built all this stuff out on top of Undom, and Undom supports plugins, so it would be pretty logical for us to then just extract this stuff out into a plugin and into a separate repository. And once JSDOM has support, then developers can choose depending on which one meets their needs. Serialization and rehydration are pretty tightly coupled. It would be really great to get a standardized way to represent this composed tree as a string. Maybe something like element.composedHTML, I don't know. That way, we can be confident that what we send to the client is going to be properly rehydrated. And finally, to the main point of this talk: we absolutely need a standardized, declarative way to represent Shadow DOM, so that we can serve bots and clients scoped content quickly without requiring JavaScript, and so that we can use the power of Shadow DOM declaratively in other libraries and frameworks. Before I go, I just want to say thanks to Bede Overend, he's somewhere in here, he spoke yesterday, for all of his help. Thank you. Awesome. I love these deep technical explorations.
Our next guest is going to do something more application developer-focused. Here is... oh, I forget again. AskPolymer, hashtag AskPolymer, if you have any questions. Be sure to tweet those out for our panel. And our next guest, Maria Husmann, is going to show you some cool application development stuff. And you probably know her from answering your questions on the Polymer Stack Overflow. Cheers.

Hello, everyone. I'm Maria, and I'm working towards my PhD at ETH Zurich in Switzerland. For my research, I'm experimenting with UIs that are distributed across multiple devices, and in this talk, I'm going to tell you how I use Polymer for that. So we have more and more devices, including phones, tablets, watches, and TVs. And I saw many people around here that have brought their laptops, and I assume that most, or maybe even all of you, have also brought your smartphones. So you already have at least two devices with you. However, we still mostly use these devices in isolation, except maybe for some cloud-based synchronization for things like files and email. However, there could be benefits if we could use multiple devices in combination, for example to have more screen real estate. I took this photo here of someone who's shopping for wine on a train. They're using their phone to access the shop, and at the same time, they're using the tablet to look at reviews for a specific wine, the same wine that they're looking at in the shop. However, in this case, there is no coordination between the two devices. So the user has to manually select the same wine first on the phone, in the shop, and then go to the tablet and look it up there as well. So for the user, it might be nicer if they could just select the wine on the phone, and the tablet would automatically show the review for that same wine. Another benefit of combining devices is that we get a richer set of input and output modalities.
So in the scenario that you see here, it's easier to scan a document with your phone, because it has a camera and it's more mobile. But once you have done that, you may want to continue your work on the laptop, because it has a larger screen and a keyboard, so that might be more convenient. Multi-device UIs can also be used in collaborative scenarios. Here, for example, you can see a group of people using both personal devices, like a phone or a laptop, and shared devices, like this tabletop display or a tablet. So researchers call applications that make use of multiple devices at the same time cross-device applications. As part of my research, I'm trying to make it easier for developers to build such applications, and for that, we have created a library that uses Polymer. So if you wanted to build such a cross-device application, you would need to address at least the following three things on top of the usual single-device development tasks that you will need to do anyway. First, state needs to be synchronized across devices. So if the user clicks a button on the phone, the other connected devices need to be informed of that change. Second, the UI needs to be distributed across multiple devices, and you need to decide what part of the UI should be shown on what device. For that, you also need some information on the kinds of devices that are involved and working together. And third, the devices need to be connected somehow. So there needs to be some way to tell the system that these two devices are working together, and that these other three devices are also forming a group. Our library offers support for all these tasks using web components. And we are going to look at how you can use the library to build cross-device applications by looking at this sample application of a webcam viewer. It's a pretty simple application that shows a large image from a webcam on a large screen, and you can use the phone to control the camera angle and select the camera to show.
The application was built using less than 300 lines of code, client and server combined, and mostly in the client. And you can find the code if you want to have a look; there's a link to the GitHub down there. So here's a screencast of the application. To pair the phone, I can just scan the QR code that you can see there with this emulated phone. Then I can use the buttons to control the camera angle, I can go back and forward in time, and when I flip the phone, I can choose a different location to display. Let me show you the most important steps now for building this application, and we'll start with state synchronization. So if a user interacts with the device on the left, they trigger an update to a shared model. And the model is shared over the network with the other devices and will then trigger updates to the views of these connected devices. This is somewhat similar to something you might already be doing with Firebase. However, we use WebRTC peer-to-peer connections if the devices support it. WebRTC is a newer protocol that allows devices to communicate directly, so they don't have to go via a server, which you would do with WebSockets. This lowers latency significantly, especially if the devices are in the same room and the server is remote, and this is kind of our expected scenario in cross-device applications. And here's how you can do this in Polymer using our library. In our application we define what should be synchronized by creating a property with our state. Here I've named the property synced, but you can choose any name, essentially. At the moment, you can synchronize arrays and objects, and you need to provide a label for each. In this case, we use time, camera and position as our labels, and with this, we synchronize the time of the photo to show, which camera to select, and the angle of the camera.
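The shared-model idea just described can be sketched with plain callbacks standing in for the transport (WebRTC or WebSockets in the real library). None of this is the library's actual API; it only shows the flow: a local change updates the shared state, notifies listeners, and propagates to the other device, while remote changes are applied without echoing back.

```javascript
// A minimal shared model: set() updates local state, notifies listeners,
// and forwards local (not remote) changes over the provided channel.
function createSharedModel(send) {
  const state = {};
  const listeners = [];
  return {
    state,
    set(key, value, fromRemote = false) {
      state[key] = value;
      listeners.forEach((fn) => fn(key, value));
      if (!fromRemote) send(key, value); // avoid infinite echo loops
    },
    onChange(fn) { listeners.push(fn); },
  };
}

// Wire two "devices" together, using direct function calls as the channel.
let phone, tv;
phone = createSharedModel((k, v) => tv.set(k, v, true));
tv = createSharedModel((k, v) => phone.set(k, v, true));

phone.set('camera', 'matterhorn'); // user taps a button on the phone
```

With two-way data binding on top of this, component code can read and write `state` as if everything ran on one device, which is the ergonomics the library is after.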
Then you can use this XD-MVC synchronized web component and use two-way data binding to specify that the synced property should be synchronized. Now you can write code as if all of your code was running on the same device, and the library takes care of the synchronization for you. For example, I could have a bit of markup up there with the iron-selector where we select which camera to show, and then use this information to display the correct image in the code. As I said, you can write code as if it was all running on a single device, but you still might want to specify what part of the UI should be shown on what device. And for that, we provide a couple of templates for what we consider common distribution patterns. One of these is the remote control pattern, where we use phones to control a larger screen. We call the phones controllers, and the large screen we call the viewer. And again, here's the code. So you add this controller layout component to your application, and then inside this layout component you add a controller class to everything that should go to the controller devices and a viewer class to everything that should be shown on the viewer devices. And that's it. Finally, the devices should be associated somehow, and we chose an approach with identifiers in the URLs. So when you load the application on the large screen, it will modify the URL and add this connect parameter with a random ID. When another device loads the new URL, the devices will be paired. The URL can be encoded into a QR code and scanned by another device. Or, in another scenario, you could also send it over instant messaging or transmit it using NFC. We call the device that displays the code the connectee device, and the device that scans the code the connector device. And again, this is how you add the functionality in your application; there's again a custom element for that.
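The URL scheme itself is simple enough to sketch in a few lines. The `connect` parameter name comes from the talk; the helper names and the random-ID format are my assumptions, not the library's.

```javascript
// Append a random "connect" session ID to the app's URL. A second
// device that loads this URL (e.g. by scanning it as a QR code) joins
// the same session.
function addConnectParam(url, id = Math.random().toString(36).slice(2, 10)) {
  const u = new URL(url);
  u.searchParams.set('connect', id);
  return u.toString();
}

// On load, read the session ID back out (null if we're the first device).
function getConnectParam(url) {
  return new URL(url).searchParams.get('connect');
}

const paired = addConnectParam('https://example.com/app', 'abc123');
```

Because the pairing token is just a URL, it can travel over any channel: QR code, instant message, or NFC, exactly as described above.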
So we add this URL pairing component, and the component will take care of modifying the URL for you, and it will handle the pairing. For the webcam application, we specify that the connector device should be assigned the controller role and the connectee device should have the viewer role. We then use this information for the UI distribution that I showed you in the previous step. But if you don't have any use in your application for this information, you don't have to use it. So I've shown you how to build the application, and next I'm going to speak about why we chose to work with Polymer. I think custom elements are great. Declarative code is very readable; I can get a quick overview of an application by just looking at the HTML. For us, it was also important to have code that works across platforms and on all kinds of devices. So if you look again at this picture that I showed you in the beginning, you can see there are a lot of different devices, and we wanted code that we could just write once and run everywhere, and Polymer meets that requirement. Then the Polymer concepts seem to be a good fit for what we are trying to do. Two-way data binding integrates well with our idea of state synchronization: you just hook everything up and it works. And under the hood, we use conditional templates for the UI distribution. So we wrap things in a dom-if and decide whether they should be rendered or not. Modularity is also important. As a developer, you can choose the level of support you need. If you want your own way of distributing the UI, fine, then you just don't use the pattern templates. The same goes for pairing: you don't have to use the QR codes. You could implement your own pairing component if you wanted to. As a PhD student, I don't only do research, but I'm also involved in education. We have used our library and Polymer in various lab projects as well as Bachelor's and Master's theses.
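The role-based distribution behind those dom-if conditionals can be sketched as a pure decision function: each piece of UI carries a class (controller or viewer), and the current device's role decides whether it renders. This is only an illustration of the mechanism; the real templates' internals and names may differ.

```javascript
// Untagged elements render everywhere; tagged ones only on matching roles.
function shouldRender(elementClasses, deviceRole) {
  const roles = elementClasses.filter((c) => c === 'controller' || c === 'viewer');
  return roles.length === 0 || roles.includes(deviceRole);
}

// What a distribution template would compute for one device: the subset
// of the UI it should actually render (here, just the element IDs).
function distribute(elements, deviceRole) {
  return elements
    .filter((e) => shouldRender(e.classes, deviceRole))
    .map((e) => e.id);
}

// The webcam app's UI, tagged the way the talk describes.
const ui = [
  { id: 'camera-buttons', classes: ['controller'] },
  { id: 'webcam-image', classes: ['viewer'] },
  { id: 'app-header', classes: [] },
];
```

Wrapping each element in a conditional keyed on this predicate is what lets the same single code base serve both the phone and the large screen.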
The students had varying levels of experience with web technologies, some even had no experience at all, and none of the students had any experience using Polymer. But the students usually become productive quite quickly, and I think the declarative style is helpful for that. I have to admit that the documentation of our library is not in a perfect state, but the students learn how to use it quite quickly by looking at sample applications. They can use the DevTools, click on a component, and inspect the application. And for this it's important that Polymer uses the platform, and that a component is a custom element that they can inspect, not just a bunch of divs. Now I'd like to give you a quick tour of an application that was built with our library and Polymer. It was built by three computer science master's students at ETH, and they did it in one semester, which is around 14 weeks. They were expected to work around 300 hours each on this project. So Switzerland has this stunning landscape, and it's the perfect country for mountain biking. As their project, the students built a mountain bike trip planner. There are other trip planners out there, but they tend to not work too well on mobile devices, because there's so much information, like charts and maps, that it's really hard to squeeze onto a small screen. So our idea for this application was that when you're out with your friends or your partner, you could just pair your devices and then you'd have a bigger screen. But we did not want to limit ourselves to just using smartphones, so for example if you had a TV in a hotel room, you should also be able to use that. So we wanted an application that adapts to whatever devices you throw at it. And here's a quick video of the result. The two phones on the left are already paired. One shows a map and the other shows a summary of selected routes. And they are synchronized, so when a new route is selected on one device, the other device updates.
And when we add a third, larger device, the UI redistributes and there is room for more information, like the elevation chart in the left corner. So what were our experiences in these projects? As you may have noticed, the students used material design and the paper elements to build these applications. Our students usually don't have that much experience building UIs when they arrive in our courses, but I think the paper elements were really helpful for creating a nice-looking application. And because the students could use our cross-device library, they did not have to spend much time on implementing the cross-device functionality and could instead focus on the main features of the app. They enjoyed working with Polymer and also found the concept of web components very intuitive. One student told me, this is how we should build web applications, and on that note I will end my presentation. Thank you.

Thanks, Maria. "This is how we should build web applications." That's right up there with "use the platform." Thank you. Our next speaker is going to take you to virtual reality using custom elements. We have Martin Splitt.

All right. Hey y'all, how are you doing? All good? Can you get that louder? Heck yes, awesome. I have 15 minutes to talk to you about a topic that usually takes a lot of time. I don't have a lot of time to talk about it, but hey, virtual reality is cool. Let's talk about it real quick. And the clicker is not clicking. That's not a problem. I have a keyboard. You said you're not going to be on stage with me. Anyway, so yes, let's talk about web components. Let's talk about AV setups. Mine is not very chill, because it's going to blow up big time, and so I have to present from my laptop, and that makes everything complicated, and here we are. Web components with VR and blah and all that kind of stuff, and the clicker is still not doing the thing. Now it's doing the thing? Awesome.
When it comes to creating web content in 3D, the thing is that you have to deal with very different things than you're usually dealing with. There's a bit of JavaScript, a bit of CSS, there's your web application; however, in 3D and graphics, things are slightly different. So one of the biggest problems of getting started with that is there's a lot of lingo, a lot of terminology that you have to get behind. And this is an excerpt from an actual tutorial for beginners, and it reads, obviously it's not the start of it, but still, it reads: this is passed to the shader and used to calculate lighting for the object. Okay, I think I get that, possibly. I would have to research what a shader is, but okay, lighting for the object, I get lighting, okay, I understand. It is the transpose of the inverse of the upper left 3x3 submatrix of this object's model-view matrix. Right? Obviously. So I'd rather not click. Man, you're doing a good job, don't worry, don't worry. I'll just do it. The thing is, it's not working. So I'm saying we don't do that, thank you very much. Instead, we're doing the following thing. Click. It's good that it paces me. Should I, yeah, click, yeah. Oh, right, okay, you're watching that. Now I understand it. We're basically doing Morse code, because if I click here, it blinks there. Future! Anyway, so instead of doing that, we don't have time for that. Serious stuff. Okay, so if we want to do graphics, we have to start with points. And because mathematicians are very pedantic people, they're like, no, no, no, no, those are not generic points. They are vertices, because that's where two line segments meet. And so we call them vertices, but they are the points of our shape. But points really don't make shapes until we connect them. So we connect them up into triangles. Why triangles? Because, of all the shapes, it turns out that triangles are the brute force method of graphics. If you use enough triangles, you can approximate pretty much every shape.
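That "enough triangles approximate any shape" point can be made concrete with the simplest possible example: approximating a circle as a fan of triangles from its center, the same idea behind the sphere-segment parameter that comes up in the demo later. This is just illustrative geometry, not any library's API.

```javascript
// Build N triangles fanned around the center of a circle of the given
// radius. Each triangle is three [x, y] vertices. More segments means
// more triangles and a smoother-looking "circle".
function circleTriangles(radius, segments) {
  const triangles = [];
  for (let i = 0; i < segments; i++) {
    const a0 = (i / segments) * 2 * Math.PI;
    const a1 = ((i + 1) / segments) * 2 * Math.PI;
    triangles.push([
      [0, 0], // center vertex shared by every triangle in the fan
      [radius * Math.cos(a0), radius * Math.sin(a0)],
      [radius * Math.cos(a1), radius * Math.sin(a1)],
    ]);
  }
  return triangles;
}

// Total area of the fan: segments * (1/2) r^2 sin(2*PI/segments).
// As segments grows, this approaches the true circle area, PI * r^2.
function fanArea(radius, segments) {
  return segments * 0.5 * radius * radius * Math.sin((2 * Math.PI) / segments);
}
```

With 4 segments you get a square-ish blob; with 16 it already reads as a circle, which is exactly what happens with the sphere in the live demo.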
So the convenient thing is that we only teach a computer to draw a triangle, and then we can make every shape appear. And then some clever people built hardware that actually figures out what kind of pixels we have to color in to make that shape look like more than a 90s laser show kind of thing. So now we have a shape that we can color in. And then we write a program that's called a shader that actually determines the colors for each of these points. And how it does that is absolutely up to us writing that program. We can use a plain color, we can make everything green, for instance, or we can say make it a gradient, or we can read pixels and colors from an image and put that onto the triangle. We can do that, and then we get a triangle that looks something like this, for instance. Great, so that's 3D graphics. But if I look at that abstraction level, I'm like, I don't really know how to build applications with that. Right? So I'd like to introduce a higher level application or abstraction, sorry, a higher level abstraction to work with. So, for instance, think about making a movie. One thing that you need to make a movie, or a theater play for that matter, is you need a stage, you need a scene where things happen. Okay, sure. Then you need some props and actors to actually go onto that stage, because these stages are not very fun to watch empty, right? So here we are putting some meshes, that's the lingo that the 3D people use. We are putting some meshes, which are 3D objects made from triangles connected into faces and then shaded. We put those onto the scene, and then we have something that we can possibly film using a camera. The camera is only a conceptual thing: where am I, from where am I looking onto the scene? I see the scene differently than you guys see the scene, with different angles from which we are coming, and the camera basically encapsulates that. And once we have the camera, the camera would film onto a film or an SD card or something.
We are not doing that. We are putting it on the web. And the web means there's a canvas element that is really good for pixel access, and it gives us access to a technology called WebGL. So there's a WebGL API that is really good at putting pixels onto a rectangular area in your website. And that's the renderer. We could theoretically render with something else, but we are using WebGL because it's really fast and really optimized for 3D graphics. And then we have our content on every device that has a browser that supports it. And that's pretty much every browser out there, because this technology has been around since 2011. It works on IE 11, works on Edge, works on Samsung Internet, works on, what's the thing, Servo. So yeah, all these things basically support it, iOS, all good, nice. Sweet. This is the code that you would have to write to do that in vanilla WebGL. Don't read that code. We don't have time for that. It's terrible. Don't do it. Luckily, there are libraries like three.js that abstract this away into the application abstraction layer that I've just been talking about. It's like making a movie. And here we have a scene, we have a camera, we have a renderer, and we have a box, which is a mesh. We put the box onto the scene, and then we have a render loop that runs over and over again and basically just films the scene via the camera and puts it on the screen. That's better, I think. But if you look into it, we're not only putting one thing on screen most of the time. So let's do a little thought experiment here. What if I have a ship that has a captain, and I want to move around the ship? Then I have to move around the captain as well. That's a bit tedious. So how about we make that captain 3D model a child of the ship model, and if I move around the ship, the captain kind of moves with it. Maybe there's something else on the ship as well.
Maybe there's a table, and on the table there is a coffee mug and maybe some binoculars and a map, and if I move the table on the ship, the ship doesn't move, the captain doesn't move, but the table moves, and with it, hopefully, the coffee mug and the binoculars and the map and all that kind of stuff. So we have to build a tree. But that's cool, because we as web people are really good at building trees. So we're like, yeah, this is some stuff that I know. Who here knows how to build a tree, huh? And I now know who's like, what is he talking about? I'll just check my emails. That's perfectly fine. It's going to be more visual in a second. So I thought, well, actually not only I thought, we at Archilogic, which is the company that I work at, thought we want to make 3D content on the web and VR content on the web easier for everyone. How can we make that easier? Because right now you have to write a lot of JavaScript, and it's tedious and boring. So how about we have a way of creating 3D content like this. To me as a web developer, that looks much, much nicer to work with. I have a scene, and in the scene I nest my elements. I have my camera that has attributes like position and rotation, and I can just rotate things around, and I don't have to deal with radians, I just use degrees like every human being does. And then here we have a mesh. Okay, there's a bit of implementation detail. I want to be able to specify the geometry, which is like the points and vertices and all that stuff, you know, and the material, like the color and how reflective it is and all that kind of stuff. I want to be able to manipulate that manually. I also want to load 3D models and all that kind of stuff. So this is the API that we dreamt of. And when that came up in our work, I said, Polymer is the thing. So I built it with Polymer, and now comes a live demo, and I hope that you're all with me and have all your fingers and toes and all that kind of stuff crossed. Right, here we go.
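Nesting elements inside a scene works precisely because of the ship-and-captain tree described above: each object stores a position relative to its parent, and the world position is accumulated up the chain of ancestors. A minimal sketch of that scene-graph idea, with positions simplified to plain translation (real engines compose full transform matrices):

```javascript
// Build a scene-graph node: a local position plus children, with each
// child getting a back-reference to its parent.
function makeNode(position, children = []) {
  const n = { position, children };
  children.forEach((c) => { c.parent = n; });
  return n;
}

// World position = this node's local position plus all ancestors'.
function worldPosition(n) {
  const [x, y, z] = n.position;
  if (!n.parent) return [x, y, z];
  const [px, py, pz] = worldPosition(n.parent);
  return [px + x, py + y, pz + z];
}

const mug = makeNode([0, 1, 0]);               // sitting on the table
const table = makeNode([2, 0, 0], [mug]);      // on the ship's deck
const captain = makeNode([-1, 0, 0]);          // also on the deck
const ship = makeNode([10, 0, 5], [table, captain]);

ship.position = [20, 0, 5]; // move only the ship...
```

...and the captain, the table, and the mug all come along for free, because their world positions are derived, not stored.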
So here we have a scene that hopefully is large enough to read; if not, tough luck. So here we are seeing, like, we have a three-scene, and in the scene we have a camera and a mesh, and in the mesh we have the geometry and material. So what happens if I say, for instance, I want to change the material, I want to change the color from green to red? And I do that, and here we are, my box is now red. To be honest, I don't have time for that. So that's cool and nice and fine and all, and I can make, like, you know, I'm a kid of the 90s, so I go, like, ooh, wireframes. And I make that green again, because, you know, green is awesome for wireframes. So, yay, green box. And what about I change this to, let's say, a sphere? Oh, that doesn't look like a sphere. That's a shit sphere. Well, that is because of the parameters here saying how many triangles there are. And the first one is the radius, so I can make that larger, for instance. But then I use one triangle for each of the sides, and it's not very good. So how about we use, like, 16 triangles, and hey, a sphere appears. So we're not there yet. Jesus, you are easy to impress. And I'll just remove that and pull out the most dangerous thing ever: I load shit from the network. Yeah, that's a good idea. So there's a 3D format called glTF, from the WebGL people. It should be WTF, right? Anyway, no, it's glTF. That's what we've got. I have a few really cool people that I work with, and one of them is Madlaina Kalunder, who's probably watching the live stream right now. And she made me a model. I was like, Madlaina, I know that you can do Blender and 3D modeling, and I can't, so please. And she said yes. So we have a 3D model. And I'm going to give it an ID as well, just so I can access it later on. And I position it into the screen so that we see it, minus 5 meters into the screen. And that should pretty much be it. And there we go, a 3D model.
And because this is a DOM API, I can totally go and say, hey, 3D scene, can you give me an update whenever you are rendering something? So whenever you render a frame, please call this function and do something. And I'll take a var, I know, vars are old and boring, but screw you all. I want an angle. And what I can do is, I want to rotate it around the Y axis. So if this is my Y axis, I want to rotate it like that. And I use angle++, so on each frame, I increase the angle by one degree. And there we go. That was easy. Now you can clap. Okay. Okay. Ta-da. This is kind of like my two-sided slide. If it had failed, I wouldn't have been able to do that. Anyway, I didn't know. That was a picture where I was young and pretty, and now I'm just old and the opposite of pretty. Anyway, so moving swiftly on, let's talk about VR. All right. So the native VR experience, if you want to call it an experience, is you start by, like, Googling for something, downloading and installing it, and then you go into VR. And I find that bizarre, because that feels like back in the 90s. I'm like, yeah, I remember that, sitting in front of my computer waiting until, like, a CD-ROM has basically vomited everything onto my hard disk, and then I could finally install it and then run it and have fun. But this is 2017. So I would like an experience where I open my browser and type in a URL, and then I put on my headset, and then we are done, right? Because it makes so much sense, and we are living in this future thanks to a standard called WebVR. And WebVR basically gives us three things. First things first, it detects if there is VR hardware available, and that might as well be, if you're on a phone and you have a Daydream or Cardboard device, then it goes, like, yeah, this thing can possibly be put in a cardboard device or something like that.
And then, second, it gives you information that you really need, because we are putting the user front and center in our application. Whenever the user is turning their head, we have to take that into account for our camera; our camera has to move the same. If I move around in the room, I have to update my camera to move around in the room. So it gives us this kind of information as well. Third, there is additional hardware, additional controllers, like the Daydream controller, the Samsung Gear VR controller; HTC Vive has them, Oculus Rift has them. So you have controllers in VR space, and you want to have access to them as well. So the WebVR standard augments the gamepad API to actually provide you access to this information as well. And when you have all that, then you write a bunch of code. So here, for instance, we are asking for a list of displays. Great. You want to grab and hold onto a reference to one of the displays, if you have one, because then later on, when there's a user gesture, so the user has to explicitly say, I want to enter the VR mode now, for instance by clicking a button or something, then we can ask this display, hey, I would like to present what I have on this canvas over here. So you can have multiple, like, scenes or canvases, and you can select a lot of different things. It makes sense to use WebGL, because you want to render a lot of stuff really fast, but you don't necessarily have to use that. You can't render DOM content, unfortunately, yet, but let's see what the future brings. Once we have that, in our render loop, what we do is we are basically asking for frame data, which means we are getting back the position and orientation and all that kind of stuff. Then we are rendering two times, once for each eye of the device that you have in front of your eyes. So we are sending this information over so that the user actually sees it there.
Once we have done that, we are using the VR display's requestAnimationFrame, because for VR, if we are doing proper VR, then we want at least 100 frames per second. Now, you know that the normal requestAnimationFrame loop is usually locked at 60 frames per second at best, even though your hardware might do better, so here we get that. So we basically deliver this information, the graphics, as fast as the VR device can do it, not as fast as our browser can do it. That's an important difference. Even in three.js, that's a little more complicated, so we thought, how about we make this really easy? And all I change in the code that I showed you earlier on from our demo is I say allow VR mode, automatically, which means we are putting a button somewhere for you without you having to do anything else. And then we have VR controls, which give us where the user is looking, and then we put our camera into it, which means that whenever the VR controls are updating, they are updating the position and rotation of the camera automatically for us. And that, I think, is amazing, because it's a tiny change to go from 3D content to VR content if you are doing it declaratively. You don't have to deal with any of the hairy API things beforehand. And this is something that, as it turns out, GIFs are large, surprise. So, kind of like, you see that it automatically does the distortion and all that kind of stuff, so you don't have to worry about that at all. That's the WebVR API at play for you. However, the actual reality, coming back from virtual reality, is that it's not like that. And you're not going to find this project online because there is, as it turns out, prior art. There was VRML in 1995 and it was standardized in 1997. And it was heavyweight and never caught on that much. You needed a plug-in in the browser and, oh, my God.
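The raw flow just walked through, getting a display, presenting a canvas on a user gesture, then rendering at the headset's rate, looks roughly like this. This is a minimal sketch using the WebVR 1.1 API names (`getVRDisplays`, `requestPresent`, `VRFrameData`, `submitFrame`); the element IDs and the omitted WebGL rendering are my own simplification.

```html
<!-- Sketch of the raw WebVR 1.1 flow described above. -->
<canvas id="scene"></canvas>
<button id="enter-vr">Enter VR</button>
<script>
  var display;
  navigator.getVRDisplays().then(function (displays) {
    display = displays[0]; // hold on to a reference, if one exists
  });

  document.getElementById('enter-vr').onclick = function () {
    // Presenting requires an explicit user gesture, e.g. this click.
    display.requestPresent([{ source: document.getElementById('scene') }]);
  };

  var frameData = new VRFrameData();
  function onFrame() {
    display.getFrameData(frameData); // pose: position + orientation
    // ...render the scene twice, once per eye, using frameData...
    display.submitFrame();
    // Use the display's rAF, not window's: it runs at the headset's
    // refresh rate, not the browser's usual 60fps cap.
    display.requestAnimationFrame(onFrame);
  }
</script>
```

The declarative approach described next hides all of this behind one attribute and a controls element.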
But then a group got together in 2001 and standardized X3D, and there's X3DOM, yes, it's pronounced "X-freedom", an XML extension for the browser that you don't have to install a plug-in for. It still works in your browsers today and gives you the possibility to use it. But they're not very good. But then we found that in 2015, Mozilla put out A-Frame. It has a huge and vibrant and amazing community and, like, beautiful, wonderful DevTools support. It's really, really cool. It's really vibrant. It's the thing. And it's basically doing what we're doing, just better. So use that. But the good news is, this is a Polymer conference, so don't worry. Polymer and A-Frame are both platform-based. They're using the platform rather than fighting the platform. Here I'm having a color picker and binding its color attribute straight to the color attribute of my box. So that's all it takes to do that, really. So, yeah, I learned a few lessons and I'd like to share them with you at the end of my presentation. First things first, just because you can build a thing doesn't mean that you should really do it. Because if I had known about the prior art, I would have saved my three days of work. On the other hand, working with Polymer is pretty amazing because you get really good results really, really fast. Also, you know, you don't really have to reinvent the wheel all the time. I was like, okay, even if A-Frame exists, we still have to wrap it, right? And it turns out, no, that's not true, because actually Polymer is really friendly to integrate with things like this. They are all using platform APIs. They're using DOM events. They're using the DOM to kind of put them together and have fun. You don't have to wrap it into something. It's fine. And last but not least, use the platform. Because we have it and it's awesome. And with that, I'd like to say thank you very much. Thank you, man. Thanks. Very nice. All right, ladies and gentlemen, we have one more talk before our next break.
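The color-picker-to-box binding mentioned above is done on stage with a Polymer binding; the same idea can be sketched with nothing but plain DOM events and attributes, which is exactly why A-Frame needs no wrapper layer. Element IDs here are made up.

```html
<!-- Sketch: wiring a color input to an A-Frame box using only platform
     APIs (a DOM event and an attribute), no wrapper layer needed. -->
<input type="color" id="picker" value="#ff0000">
<a-scene>
  <a-box id="box" color="#ff0000" position="0 1 -3"></a-box>
</a-scene>
<script>
  document.getElementById('picker').addEventListener('input', function (e) {
    // A-Frame components react to attribute changes, so setting the
    // attribute is the whole "binding".
    document.getElementById('box').setAttribute('color', e.target.value);
  });
</script>
```

With Polymer, the `addEventListener` plumbing collapses further into a declarative `color="[[value]]"` style binding, which is the point being made.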
We've got a best practices talk coming up. So, you think you know how to use ES6 modules, but we've got someone that's been working on telling you how to use them. And his name is Sam Thorogood. He's a Google developer relations guy. Come on, Sam. Hello. That's quite loud. So, confusingly, I'm the second Australian Sam you're seeing today. But we do look a bit different, so you'll be able to tell us apart later on. Today, I'm here to talk to you about ES6 modules: how they work, and what the upsides and downsides of ES6 modules are. So, ES6 modules are also known as JavaScript modules or ECMAScript modules, if you've heard of those names before. They're not CommonJS modules, which are probably the most common non-standard approach to modules we have today, and which is what we get if we download nearly any module from npm or Yarn. There is an interop story here, though. ES6 modules don't have to exist in a vacuum, and I will get to that later on. So, this talk is pretty big. It's comprised of three parts. The first part is I'm going to give a brief intro to ES6 modules, what they do, and how they work. And at the end of part one, you could walk out. Please don't. But you'd have the basics. Modules are really great. I think they're quite cool. It's given me a whole new lease on writing JavaScript. The second part: modules also change a lot of subtle things about the way we write JavaScript, which is why I split that part out. And finally, I'll talk a little bit about integration with existing packages and code splitting. So, maybe most importantly, why do we care? Well, obviously we care because Polymer is moving to ES6 modules and that's super important for us using Polymer. But there are other reasons too.
And the language really needs modules. Anything more than a trivial program in any language needs some sort of include system. And of course, in languages like Python or C++, we have obviously had that since their inception. But for JavaScript, we have not had this. You know, you include a script tag in your page and just hope it works. And we all know this. There have been loose standards, I mentioned CommonJS, which I'll come back to. But they were never picked up as a standard and never really adopted by browsers. And this is for a couple of reasons. But what I think is really interesting is that the ECMAScript standard for modules is a really intentionally small specification. It's actually really simple when you get down to it. But it has obviously a few nuances, which I'll be talking about in a little bit. So, this is technically an example. This is how you include a script file to run as a module, which is the code snippet at the top. Or I can write a module script inline, which is what you see at the bottom. You know, okay, this is great. I've specified type equals module. But what does this actually get me? Well, it gets you a couple of things. Firstly, it gives you superpowers. Don't worry, that's the worst slide transition we'll have the whole time. But it gives you superpowers, JavaScript superpowers, to import and export code. Sadly, modules don't let you fly. So, here, I'm going to export from my script, a bit like you would in Python. Okay, no Python programmers here. So now, you can import the code. And here, I'm declaring my script tag as a module. So, I can request this symbol, answer, which comes from the question source file. And I can request that it show up in my index HTML. And now, what about errors? You can't use these superpowers outside of modules. You can't just include these files as regular JS and mix and match that code.
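The answer/question example being described on the slide looks roughly like this. The file names come from the talk; the actual value exported and the `console.log` are my stand-ins.

```html
<!-- index.html: a sketch of the slide's example. type=module grants the
     import/export "superpowers"; a plain script tag would not. -->
<script type="module">
  import { answer } from './question.js';
  console.log('The answer is', answer);
</script>

<!-- question.js (a separate file) would contain something like:
     export const answer = 42;
-->
```

Drop the `type="module"` and the `import` line becomes a syntax error, which is the mix-and-match restriction mentioned above.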
So here's a slightly more detailed example. Unlike the last slide, we're now including the JS file via the source attribute. The point here is that it doesn't really matter whether the script is inline in your HTML page or you're including it; they still work the same way. And because we started with type equals module, both these files are running in what we call module mode. Even though the bottom one purely exports something, it has the superpower to include additional modules if it wanted to. So what we've basically built is sort of a graph of imports. This is our little example. We've got an HTML page. It pulls in code and it pulls in question. But you could add another dependency here. Or I could add something else that's dependent on multiple files. I can also have other HTML pages which maybe enter into this graph of dependencies that I have here. Which is super interesting, and I'll refer to this a little bit later on. This slide details some of the options. So we saw the really simple mode of exporting and importing before. I find there's actually a lot of resources about the format of this kind of import and export syntax online. If you Google for things like ES6 modules, most of the articles you'll find will talk about the formats of these things. So it's really well covered. I'm going to mention a few things. You can see on the right here that we have exports for cat and dog. And on the left, we import those things as sort of named things in this other file. There's a few nuances. You can also have a default export. While it looks different from the other types of exports, functionally it actually is just an export with the name default. We can't use that name ourselves because it's a keyword. So we can't export something literally named default; it has to be special syntax. And you can see this on the right.
If you actually use the format on the very bottom right, which imports the whole module as an object that looks like a dictionary, you'll actually see that default just ends up being a key of that dictionary. So that's kind of interesting. The most important part of these statements is that they have to appear at the top level of a file. And I'll revisit that in a second. So let's talk about what I'm allowed to export. So I can export something that's generated. Since apples are in season versus bananas, I'm going to return a different function. And so this file will export a method that's not known at sort of static build time as the fruit property. But I can't optionally export something. This doesn't appear at the top level, so this is basically a syntax error. And on the import side, there's kind of similar problems. The rules around this basically say that I can't import from an unqualified path. It has to be a relative path with a dot at the start, or a full URL. You can't import not at the top level. So even though this try catch block isn't actually a control structure, it doesn't actually optionally run, this is an error because it's not appearing at the very top level of a file. And at the very bottom, you see that you can't import from a dynamically generated path either. So think of these restrictions as: what can a very dumb static parser parse without running any code? And this matches traditional languages like Java or Python or C or C++. And it comes from that small specification that I kind of mentioned before. So this has been the very basic intro to ES modules, but of course, what's the browser support? What I've shown you so far isn't theoretical, right? You can use this in all major browsers, although I'll admit, as I give this talk, I think Firefox and Edge are still behind flags. We saw that in the keynote. But if you're watching this sometime in the future, well, as an Australian would say, she'll be right.
I'll also note that when I say Chrome, of course, I really mean Chromium and browsers derived from that. And modules are implemented at the underlying V8 level, so it's possible for support to flow from there. So okay, we're through sort of part one. You're convinced, right? Modules are great, the syntax looks good, and you're gonna migrate all your code to this format. Well, we're web developers, so we know that nothing comes for free. So let's talk about the support story. And I'll start with our favorite thing, old browsers. Browsers will basically ignore script tags with a type they don't understand. So you can safely tell them, hey, here's this module script, and your IE11s or whatever will actually just ignore this completely. It's how people have shipped CoffeeScript or Dart down to the browser for years now. But of course, if we write stuff in a module format, we still need to transpile or compile for browsers that don't support ES modules. You might already be doing this, right? Polymer's build and serve commands in the CLI already run a transpile step for you for these browsers. And so, yes, this does mean we need to ship two sets of code to pretty much all our users. And when we do the transpile, we obviously need to include both those sets of output. So the way we do this is with what's called the nomodule attribute. So we add this script tag that browsers that support ES6 modules will actually ignore, with a small caveat. So putting it together, on your eventual HTML page, you'll have two script tags, one for module browsers and one without. And this is great, right? Because you don't have to run a script to generate the right imports; new browsers get new and old browsers will get old. But that asterisk from before: Safari 10.1 actually shipped without nomodule. So there's a small bug to work around. We've actually got a solution.
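The two-script-tag pattern being described is simply this (file names are my own placeholders):

```html
<!-- Module browsers run the first tag and skip the second (nomodule);
     old browsers skip the first (unknown type) and run the second. -->
<script type="module" src="app.js"></script>
<script nomodule src="app-legacy.js"></script>
```

Each browser downloads and runs exactly one bundle, apart from the Safari 10.1 caveat discussed next.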
There's a small JS snippet you can include that will stop Safari from loading this nomodule code. This was actually fixed in future versions, and everyone else is shipping with this feature. It will eventually go away, right? All browser versions will eventually disappear, like tears in the rain. And the biggest cost, maybe, in the very long term, is that users on this very specific version might end up downloading both your bundles of code. So I mentioned a bunch of stuff. I said, well, you need to transpile, you need to ship two code paths, and you need to deal with a fun bug. So why should I even bother with ES6 modules at all? And there's basically two really huge compelling reasons. The first is this immediate dev cycle. When you're building modular code and you don't have a native browser that supports it, you've got to build or run or whatever to actually see your output. And even if that happens automatically when you save, it's still a cost; it's still gonna run something. Instead, with ES6 modules, you can literally save a file, go to your browser, refresh it, and your code is just done, right? It reads all the stuff in a modern way. The vastly more important thing, in my mind, actually, is that it's a high watermark. What this basically means is that ECMAScript modules are a 2017 feature. Literally any browser that supports ECMAScript modules supports a ton of features that you may have been afraid to use, or afraid you had to polyfill, or afraid you had to work around. So if you have a separate serving path, you can quite literally only serve code to these browsers that you know you can be confident is gonna work. So you get to restart JavaScript development from this point. And here's a table I sort of worked on earlier. This is kind of a laundry list of features. And you might have been afraid of shipping or using these.
And of course, if you have an ES5 client, you've still got to transpile, you've still got to ship that old code to support those old browsers. But if you're shipping ES6 code in parallel, you can basically drop the polyfills for all these features. You reward users on up-to-date browsers. And to pick on a certain feature: async/await. To transpile this is a bunch of bloat; the transpiler has got to rewrite the code to be a weird generator thing. With ES6 modules, you can actually safely ship this down to those modern browsers while still supporting that very long tail of browsers that obviously will never get support for this feature. And let me stress one point, right? We don't penalize old browsers by doing this. You're probably already transpiling these features away and including the right runtimes. It's just the idea that for modern browsers, you can ship an entirely new, fast code path that doesn't have to deal with any of that legacy cruft. And okay, so what's the last part of the process? I've mentioned transpiling, but I want to revisit this a little bit. You want to convert ES6 code to ES5 code, and you might already have build tools and infrastructure that does this for you, right? They'll adapt nicely to ES6 modules if you just include them. These include Closure Compiler, which is by Google, of course, and Rollup is a good example of a starting point for something that will get rid of ES6 modules and ship something down that's a bit like ES5 for those older clients. And Rollup is basically what I recommend. It's an open-source tool that's built entirely for merging ES6 modules together to give you one output file. And actually, to finish that whole process and compile and transpile, you would use Rollup to merge your code, and then, at least for me, I would include something like the Babel plugin to transpile it to the version you're targeting. So that's how you can build your ES6 modules down to ES5. And the gulp file looks a bit like this.
We've all seen something like this probably, but I want to focus on two main things. We include the Babel plugin to actually do the transpilation for us, and we tell Rollup to give us an IIFE as output. And I'll come back to this later. But I have an opinion for simple apps. And for production, I actually suggest using Rollup with this flag and no plugins to ship a single file even to browsers that support ES6 modules. So this is interesting, right? Like, I'm telling you to use modules, then I'm telling you to roll them up when you actually finally ship this code. So firstly, it's smaller. You don't need to include polyfills, but I covered that before. But there's a few other benefits, right? So when you use Rollup, or tools like it, to ship ES6 modules as a single file, you get a bunch of benefits. And one of the big things is called tree shaking. When you merge modules together, you basically get rid of code that you don't need or that isn't being used. And this isn't a talk about compilation; there's plenty of other resources out there on that. But the key is, these tools will really save you a lot of space. And if you're including a lot of dependencies, which ES6 modules let you do, you can have hundreds of tiny dependencies, maybe written by you or written by someone else, and you can be very confident in saying that the stuff you're not gonna use will get thrown away. And having a lot of module files is also gonna cause what's known as request chains. So if your browser fetches an index page, it's gonna then see it needs this file, main.js. And then it's gonna see that it needs dep.js, and so on and so forth. And before you know it, you might have done a whole four or five round trips to the server. So of course, you might have heard of HTTP/2 push, and this can solve this for you. Tell your clients, here's all the JS files they need before they even ask, right? I can tell the client, you need all these five files. Have fun with that.
But a lot of servers won't do this for you, right? And in my experience, while it's a great feature, it's not widely adopted. So you can improve your users' experiences by having just a single file. And there's also the header overhead, right? If you have a lot of 100-byte files, maybe the headers are 200 bytes each. So rolling up your output could be really important to save space. Okay, so that's a lot. But I wanna go into some of the nuances of the way JavaScript changes when you write module code. So module code is always executed in strict mode. Basically, this turns more mistakes into errors and prevents some JavaScript anti-patterns that have been possible for a long time. Mozilla has the best documentation on this, so go look it up on MDN. But basically, you can pretend that every file has this 'use strict' pragma at the start of it. And module code is always effectively invoked in what we call an IIFE, an immediately invoked function expression. So what that means is that symbols you define in your module won't automatically appear in global scope on window. And this is what it would look like in code, right? Your module stuff would basically appear in the middle. This is informational, right? Your browser doesn't actually do this. But it's something to keep in mind if you write any tooling around ES6 modules. And when we add code in here, you can imagine the sort of errors you would get. You know, the with statement, for instance, is sort of considered harmful, and it's disallowed. And using this variable q without defining it is also an error; we haven't got a var statement or a let or anything like that. On the other side, you get some benefits, right? This variable x won't pollute the global scope. It won't implicitly create a property on window, which is usually not what you want.
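The "strict mode inside an IIFE" behavior being described can be demonstrated in plain script; this is a sketch of the wrapper the browser effectively applies, not anything it literally generates.

```javascript
// Module code behaves roughly as if wrapped like this: an IIFE whose
// body runs in strict mode. In sloppy mode the undeclared assignment
// below would silently create a global; in strict mode it's a
// ReferenceError, and nothing inside the function leaks out.
var threw = false;
(function () {
  'use strict';
  try {
    x = 5; // no var/let/const declaration anywhere
  } catch (e) {
    threw = e instanceof ReferenceError;
  }
})();
```

The same assignment inside a real `<script type="module">` fails identically, with no wrapper needed.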
But we can still access things, like, let's say we had this HTML element; obviously we can still access things in global scope as we are normally used to. So there's something else I want to talk about. In the early examples, we saw that a module is usually either an entry point, which runs code, like your application, or it's a dependency, you know, that was the example of a file that just exported something. But we can kind of have a mix of both. I can import a file just for its side effects, just to run it. And what's interesting is that this fits the model of many single-file JS libraries. This is talking about integrating with existing code. It's a really simplistic way to pull in dependencies that maybe aren't written as ES6 modules, which, it turns out, is most code right now. And so here's a real-world example that I was working on recently. I wanted to use a base64 library, which isn't available as an ES6 module, but it was available as a single minified JavaScript file, which, again, is pretty common for small libraries or feature polyfills. So I could target it directly for import, even though it has no exported symbols, and its side effects will still take place. And again, this isn't the be-all and end-all of integrating existing code, but it's an interesting step. Modules are also deferred. They actually aren't allowed to run synchronously. The reason for this is because if you have a lot of dependencies, your browser could be sitting there for a long time going to fetch all those files before your page can continue running. You can, however, specify async. And what this basically means is that fast-loading or cached modules will run in any order, as soon as they're downloaded. And this is the same as normal script tags and the way that they would work. Modules are imported only once. This is good, although it's not a huge deal; it avoids the problem of accidentally including multiple versions of the same code, which no one really does anyway.
But what's more important is, because modules are kind of a graph or a tree structure, they're kind of like singletons, right? Two files that include the same dependency will actually get the same module back, and it will only run once. So let's take a look at what that means. So let's say we bring in this default import from foo.js, even using different paths. These all point to the same file. And no matter how the browser imports that file, because it resolves the URL, the module object will always be equal. In this last case, this is where it would differ. This is a bit of a red herring, but it might be something to watch out for. We know these files are the same, right? We know the query string doesn't matter. But to the browser, it's a different URL, so this is gonna be a different file, and this is kind of almost a hack to make a module run twice if you want to. Module imports are like functions in that they're hoisted. So this code here, which you see on the right, will actually run fine, right? The import is brought to the top of the file, just like this. And so we have another example, which has an interesting trade-off with that, right? So let's say we have a new emoji polyfill. We need this for our emoji library to run properly. What we've learned, though, is since the import is hoisted, this actually won't work, right? The polyfill won't be available for the emoji library. We can basically solve this, if you have code like this, by moving our inline JS into an import, because these import statements will still run in order; they'll just run at the top of the file. So we can use the side-effect benefit to make sure that this window thing is gonna be available for our next file that runs. And lastly, which is kind of cool, circular dependencies are actually allowed in JS modules. And this is kind of interesting because it's something that we haven't had in other module systems.
They work mostly because of some of the properties I've talked about on the last few slides. The basic rule of thumb for modules that have circular dependencies is to not do any work at the top level. You know, don't cause side effects; just import and export things. So let's see an example. So I'm gonna give you a classical CS exercise. You know, I have a superclass. Let's say it's a vehicle. We wanna subclass this, and we wanna give each vehicle an ID. Makes sense so far, right? I have a new file, which has a class Car. You know, it has an engine, a driver, whatever. It doesn't really matter. But because it inherits from Vehicle, it's also gonna get an ID. So let's say we wanna have a static method on Vehicle that returns a Car. You know, this is kind of a Javaism, or kind of a basic polymorphism exercise, but you have a builder on a superclass that returns a subclass. And you know, maybe as a user I treat that as a Vehicle. I don't care that it's a Car. So we now have a circular dependency, right? A Car descends from Vehicle, and Vehicle itself can return us a new instance of Car. And this is actually more or less fine. If I import either of these files, this will work as intended. So this is kind of a good example of where circular references will really help you. But there's one case that would break it. So you might have noticed the static ID property. This is kind of a property of the Vehicle singleton, if you can think of it that way. So if we end up using that property inside the Car code, just to maybe, say, create a test or a default car, well, this is actually gonna fail. And you might be able to see why, but let's look at it anyway. Because the Vehicle is effectively hoisted while we're in this kind of crazy scope, and because it's exported inline, ID is not really available yet. So what ends up happening is that this constructor will fail with a reference error.
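The same failure can be reproduced in a single file, because class bindings are hoisted into a temporal dead zone just like the circularly imported binding. The Vehicle/Car names come from the talk; the file layout is collapsed into one script as a sketch.

```javascript
// Single-file analogue of the circular-import failure: Car's constructor
// reads Vehicle.ID, but the Vehicle binding below is hoisted yet not
// initialized (temporal dead zone) at the time of the first call.
class Car {
  constructor() {
    this.id = Vehicle.ID; // fails if Vehicle isn't initialized yet
  }
}

let threw = false;
try {
  new Car(); // Vehicle is still in its dead zone here
} catch (e) {
  threw = e instanceof ReferenceError;
}

class Vehicle {
  static ID = 'vehicle';
}

const ok = new Car(); // now Vehicle is initialized, so this works
```

In the two-file circular version, "before the `class Vehicle` line" corresponds to "while the Vehicle module's top level is still running", which is why doing work at the top level is the thing to avoid.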
ID, you know, effectively is not available to this class yet. There are a few crazy hacks around this, right? And what may actually happen is that people will work out some interesting patterns that leverage circular dependencies in a very interesting way to do some cool stuff. But for now, doing this sort of thing, actually using the imported code inline in this file, is the sort of thing that will bite you. Okay, so I've talked about a lot of nuances. Let's talk a little bit about integration. As I mentioned before, ES6 modules don't have to live in a vacuum. So let's talk about the elephant in the room, and this is a bit forced: using the existing ecosystem of Node code. Firstly, there is a simple solution. We've talked about building modules using Rollup and how modules obviously are very different. And while modules can help you get rid of the massive script tags, there are some cases when you can just include them anyway, right? And a really good example here is when we just want to include some dependencies for tests. Here I include Mocha and Chai just as things to run my tests, and these just have side effects. And remember that modules are always deferred, right? So I can also defer my script tags, because I know that they don't need to run until my module is ready either. But I want to talk about a more concrete solution, you know, what you can actually take away and use in your own projects. What is the story for code you've installed? You know, this code that's inside node_modules. This is stuff, yet again, that's likely in CommonJS format. It uses module.exports. So firstly, I want to focus on this sort of stuff. What's the solution, right? It's actually really simple. We want to build a special ES6 module that actually uses the require keyword and then re-exports those symbols again in ES6 style. You know, this is obviously magic, right? Because we know that require isn't supported by browsers.
So how do we actually do this? And just to go through it a little bit more, I want to make this very clear. We have our app on the right here. This is our ES6 module thing. All our modules are in pink. And on the left, we have our node modules that we're really keen on using in our projects. So firstly, we create what's called the support.js file. This file will use require to pull in the node modules, just like we're used to. So we add these links here, right? We then use Rollup to generate a rolled-up .js file in ES6 module format. And we'll see how in just a second. This file bridges our CommonJS code to be module code. And because we've now kind of transformed this old stuff into module code, we can now depend on it wherever, right? In this case, I'm pulling in left-pad, which I'm sure is very important, and I'm not judging, right? I really want you to remember this left-right split. It's really the best way to split up our dependencies, you know, especially with this wealth of code we've had over the years. Yes, we're introducing a compile step, but it's really just for the stuff on the left, right? It's not really a huge burden. It only happens when we change our dependencies, which we presume is not going to be that often. So I've talked about the support file, but what does it actually look like? You know, we can just require code in this file as normal. Of course, a reminder, we can't really ship this; it doesn't really work. But I immediately then re-export those symbols as ES6 exports. So this indicates to Rollup that these are the things we're interested in. And, as you see in the example, we might only care about one sub-symbol as well, and that's fine. So this is a sample gulp file, but there's really two important parts here. What we want to do is we want to tell Rollup about the CommonJS plugin.
And this basically allows Rollup to go and read those old-style formats, and in its build process, it will output you a file that understands them. We also want to re-export as an ES6 module. So this is the example I was kind of talking about before, right? We want to use Rollup with its config and say, well, I don't want an IIFE. I don't want the final output. I want something in the middle. So what it generates is something that looks like this, right? It looks like a normal ES6 module that we can depend on, with all the require statements, you know, eaten up and taken away. So this is now just another dependency in our graph of modules that I can depend on, either in prod or dev. There's one other case I want to cover, which is kind of like an anti-pattern. ES6 modules don't allow dynamic imports and exports. And so, unsurprisingly, most of the tooling around that doesn't allow it either. So even though we're using the CommonJS plugin for Rollup, right now Rollup will complain horribly at this, right? There's a conditional it has to evaluate to decide what to include in its output. So it basically returns a file that just has this error in it. And so I'll discourage you from doing anything like this; if you're going to integrate with old code, avoid those patterns, because we want to maintain the static style that ES6 modules prescribe. Another option, of course, is if you are lucky enough to see ES6 modules actually in node code, and obviously we're going to see that at some point with Polymer, you know, you can actually use Yarn's flat option to install them. And this is for a pretty simple reason, right? Flat was covered in some earlier talks, but unlike Node's require statement, which has a few paths that it looks at, import requires very fixed relative paths. So that means that the code you download, which will be ES6 modules, kind of chooses the approach to finding files that it likes, right?
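The support.js bridge and the Rollup settings just described can be sketched like this. This is an illustrative fragment, not shippable code: the plugin name reflects the era's `rollup-plugin-commonjs` package, left-pad is the talk's example dependency, and the file names are assumptions.

```javascript
// support.js -- bridges CommonJS packages into the module graph.
// This file can't ship as-is (browsers have no require); it exists
// only as input to Rollup, which consumes the requires.
const leftPad = require('left-pad');
export { leftPad };

// rollup.config.js -- note the output format is 'es', not 'iife':
// we want an ES6 module out of this step, not a final bundle.
// (May also need a node-resolve plugin so Rollup can find packages
// inside node_modules.)
//
//   import commonjs from 'rollup-plugin-commonjs';
//   export default {
//     input: 'support.js',
//     output: { file: 'rolledup.js', format: 'es' },
//     plugins: [commonjs()],
//   };
```

The generated rolledup.js is then just another node in the module graph, importable from app code in dev or prod.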
And in Polymer's and other cases, you're going to see code that looks like this, right? It's always going to assume that its dependencies are going to be essentially up and one sibling across. So we've covered that option and I want to go on a bit of a detour. So one thing I was thinking about a lot with ES6 modules is this kind of code splitting story. You know, you've got a lot of code, we want to make it one file, but that's not always what you want. So, yeah, of course, with ES6 modules, isn't that code already split, right? We've already got lots of dependencies. Let's say I've got an application with two entry points, right? These are basically analogous to our two HTML pages that exist on our site. They import, you know, they import this graph of dependencies to achieve their goals. This is a bit contrived. I just drew some lines and some boxes just to make a cute example, but you get the idea, right? These are all things you might depend on and other libraries that they further will depend on. If we were to do my approach from before and basically say, let's roll up everything into a single file, we get two giant bundles, right? This is kind of a common problem, I think, with bundling any kind of code. You know, I don't know what my files really need. We end up just shoving all the JS in the output file and letting the client deal with larger download size. You know, there's a ton of code duplication here. But what we can do is we can basically identify what's needed at each entry point, and we can basically walk the tree to do that. So the reason I'm telling you about this is because the module spec is so simple that it's actually really trivial for you to do this yourself in your own build steps. You know, we mark these with colors, and now we can see almost implicitly what are the different kind of bundles we need, right? Initially, we basically want to include the code for entry one and entry two. Those parts are really obvious, right? 
We know that those files are only needed by those two entry points, so we can roll them up together. But what do we do with the rest, right? I mean, if these were HTML imports, we could actually just group them all together and send them down as one big chunk, and that's great, right? It's a block of shared stuff that both sides can import. But with ES6 modules — and I am admittedly simplifying a little bit — we can't really do that, because of this. Because two modules might export the same property name, we can't, at least naively, roll them up together, right? One module can't export the same symbol name twice. And our entry points might use these methods in different ways. And you know, you can rewrite them — I'm sort of alluding to the art of it. This is a simple option, and something like webpack will actually do this. But to think about a simple process, what we want to do is we actually want to create different entry points into these two bundles, right? The left is D, and the right is effectively E — the F is a byproduct of E. So this is the minimum amount of code splitting you need to do to basically serve these two entry points. And you can imagine, as you get more and more code and more and more dependencies, this graph might get more complicated. And like I said, the reason I've mentioned this is, in building this talk, I went and built this tool, right? It actually goes and finds the minimum spanning requirement to basically split up your code into modules that don't duplicate code. But in doing it, it actually wasn't overly hard, right? The module spec is incredibly simple, and it's really just a simple parser that goes over and looks at this code and runs some really basic graph theory on it. And this is one of the huge benefits of switching to this approach. You know, CommonJS and require statements can support a lot of things. They can support, you know, variable imports. They can be not at the top level.
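The entry-point "coloring" walk described here can be sketched in a few lines: mark each module with the set of entry points that reach it, then bundle modules that share the exact same set. The graph shape and module names below are made up for illustration, not the tool from the talk.

```javascript
// Walk the dependency graph from each entry point and record, for every
// module, which entry points reach it. Modules with identical entry sets
// can safely live in the same bundle with no duplication.
function colorGraph(graph, entries) {
  const colors = new Map(); // module -> Set of entry points that reach it
  for (const entry of entries) {
    const stack = [entry];
    while (stack.length) {
      const mod = stack.pop();
      if (!colors.has(mod)) colors.set(mod, new Set());
      if (colors.get(mod).has(entry)) continue; // already visited from this entry
      colors.get(mod).add(entry);
      for (const dep of graph[mod] || []) stack.push(dep);
    }
  }
  // Group modules by their entry set: each group is one minimal bundle.
  const bundles = new Map();
  for (const [mod, reached] of colors) {
    const key = [...reached].sort().join('+');
    if (!bundles.has(key)) bundles.set(key, []);
    bundles.get(key).push(mod);
  }
  return bundles;
}
```

With two entries "one" and "two" that share a subtree, this produces three bundles: one per entry plus one shared chunk, which is the minimal split the talk describes.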
And actually, that really open spec makes static analysis really hard. So check this out if you are curious. So this is basically the last technical slide. It's also worth mentioning dynamic import. And I know this was mentioned for webpack as well, because they use it as their boundary for splitting code. But I want to talk about the function. It's not really available yet — browsers are still building it now — but it is coming soon. You know, it looks a bit like this. You pass it a path and it returns a promise. No real surprises there, right? Which is unlike our regular import statement, which happens synchronously. Secondly, it's sort of polyfillable. You can actually use this today if you want to. There is one catch, though. So we can actually do this, but the browser has no way of knowing what my current file is. So I can actually import foo.js, but I'd have to tell this polyfill function what my current path is. Because essentially, there's no way for the browser to say what the currently executing ES6 module file is. This is actually subtly different from the way we do regular JavaScript. In regular, non-module JavaScript, you can actually use the document.currentScript property to find out who I am. But inside ES6 modules, that's not allowed. We're not allowed to know who we are. So you have to include some information about where the file is relative to me. So, thanks for listening. We've covered lots of stuff, and admittedly this is a bit of a grab bag of technical stuff, so I appreciate everyone paying attention. So what are some takeaways, right? Browsers love modules; they get along fine. There's even the nomodule attribute for older browsers. The way we have to write JS will change a little bit. It's pretty subtle, and it's mostly to do with the way we interact with files, but there are things that might bite you as you go along.
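Because a polyfilled import() can't discover the calling module's URL (no document.currentScript inside a module), the caller has to resolve relative specifiers itself. The standard URL constructor does exactly that resolution; importModule below is a hypothetical polyfill name, not a real API.

```javascript
// Resolve a relative module specifier against the module we're "in".
// The current module's URL has to be threaded through by hand, since
// an ES6 module has no way to ask "which file am I?".
function resolveModulePath(relative, currentModuleUrl) {
  return new URL(relative, currentModuleUrl).href;
}

// Hypothetical usage with a dynamic-import polyfill:
// importModule(resolveModulePath('./foo.js', MY_MODULE_URL)).then(mod => ...);
```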
And secondly, modules play nice with NPM modules via rollup, but we obviously recommend Yarn for the flat package approach. And I'll leave you with some further reading. My colleague Jake, who managed to avoid giving a talk, is hiding somewhere, and you can go grab him or myself after the talk and find out more about modules. And that's it. Thank you, Sam. All right, guys, we made it to the break. We've got about half an hour before our next big event, which you are not gonna wanna miss, so please go fill yourselves with caffeine and carbohydrates, get that in your bloodstream, and come back here at four o'clock for Supercharged Live, Live, Live! Also, tweet to hashtag AskPolymer with any questions. Cheers. How's it going? So, starting my comedy bit. So, what is Forrest Gump's favorite password? One, Forrest, one. What's a computer's favorite rhythm? An algorithm. Tell me a programming joke. What's a computer's favorite beat? An algorithm. Well, up next — I don't know if you've seen Supercharged before, but you may remember this duo from the last Polymer Summit and their hit YouTube series, Supercharged — we have Surma and Paul Lewis. Though Paul Lewis is now in the cloud doing machine learning. Pour one out for Paul Lewis. So, here I am. Hello, Copenhagen. So, I saw this bit recently, and I really wanted to try it on stage. So, Bill Bailey did it, and what he did is, he made the audience clap — but not clap like in applause, but only exactly once. So, like, put your hands together, exactly once, in sync with this. Are you ready? What an amazing sound. Nailed it. Enough time wasted. So, hey, welcome. This is Supercharged, which we usually do. Surmacharged. Surmacharged. I'm rebranding, because as you know, this person is not the bald Paul Lewis. I'm wearing a hat, though. This is the baldest cap that I could find. Close enough, maybe. So, Polymer colors.
Monica luckily agreed to assist me today, and we're gonna do Supercharged Live, Live, Live, as is now, I guess, tradition. But Paul did move on to DeepMind, and is now working, basically, on Google Home, and has been working on the next edition, which is gonna be the Google Home Developer Edition — or, as it's code-named, the Chrome Home. And so we have a developer preview that we can hopefully show off a little bit today while we do our code bits. So it should work pretty similarly to how it usually does, and it, yeah, just... Do you want me to do it, because your accent's kind of funny? Yes, please. Hey, Chrome Home, what's the weather in Copenhagen? Bleep. Wait, really? I'm programmed with the knowledge of 50 Chrome developers. I know all about web APIs and standards, and you're gonna ask me the same stupid shit that you'd ask a regular Google Home? The weather in Copenhagen is kinda nice. For further details, please consult your nearest window. We're gonna maybe make use of that later. We'll see. It doesn't seem that helpful. Might as well leave it for later. I guess Lewis has been spending his time very well. Let's do the thing we're actually here for. What is it? We're gonna build something. Sweet, what is it? We are gonna build custom elements, because after all, this is the Polymer Summit, so I thought we should at least be using custom elements — although I'm actually not using Polymer. Well, can't win them all, can we? Because we're focusing on the other thing that you said, which was use the platform, so we're gonna show you the low-level stuff. And I thought — something that I have been encountering or noticing lately is that a lot of sites, or blogs especially, have images like these.
I have some images of Copenhagen on this test site here, and they load the images right away — like, you load the page and all the images are loaded — and that is actually pretty hurtful, because when you look at it, you have like 1.4 megabytes for opening a web page, and maybe you just wanna read the first paragraph and then leave, and that's not cool. So I thought we could build like a lazy-loading kind of custom element. I'm lazy, I'm into it, right? So. I'm gonna stop you right there, Surma. Okay. So I would like this. Paul Lewis told me not to ruin his legacy, so we have to do some housekeeping here. We would love questions from you, so that as Surma is banging on the keyboard, we can actually answer your questions, because who knows what he's going to do. And because we don't have comments on the YouTube stream, please tweet with the hashtag supercharged. I will also be looking at PolymerSummit, but I assume you're just gonna be tweeting about how awesome we are on PolymerSummit, so: hashtag supercharged, ask all of your questions, and I will answer them — or the Google Home will. Or forward them to me to distract me. You know the usual deal. If you've seen us before, you know how this works, but we don't have the YouTube chat this time, so we are using the Twitters — that's DasSurma, by the way. You should totally follow us. It's totally worth your time. But don't tweet at us, because I'm not reading my Twitter. Just Polymer Summit and supercharged. That's all I got. All right, Surms. All right, let's go. Do it. All right, so what I have here is a pretty empty website, but we have four images that I totally took myself in Copenhagen, and we are gonna try to make them lazy-load. And just to show you what's going on, it is super vanilla. It's literally four image tags with four spacer divs in between, and the styles are just — the spacer is just a very tall div so that there's some space in between. That is literally all we got, and everything else...
We're gonna write it right here, live, so you can actually watch and ask questions and stop me if I'm being stupid. So instead of using the image tag, we are now gonna turn this into our sc-img, because branding is important. It's our supercharged image tag. And to use those, we actually have to, of course, define the elements. We're gonna include a new thing, which is called SCImage.js. And that is the— Make your text bigger. Yeah? Like this? Mm-hmm. All right. Thank you. SCImage.js. And now the usual dance where we go, okay, SCImage extends HTMLElement, constructor, super. There we go. Wait, but your constructor's not doing anything. It's just, you know, what you have in an element at first. So now the images are gone, because the sc-img element is now in use and we haven't done anything. So if we look at our thing, it says HTMLElement — this should be saying SCImage, and that is because I didn't actually call customElements.define. So my class will just exist as if it wasn't used. So I'm gonna go customElements.define with 'sc-img' and SCImage — I think, is that how it works? Yeah. You still shouldn't see anything, but now it says SCImage. Now we actually have our own custom element in place, and now we can start working with it. So the first thing I wanna do: custom elements by default are inline, display: inline. And for images, we really want them to be blocks — nice and wide, filling the space. So we're gonna use Shadow DOM to give this element some inherent styles. You should make the text even bigger. Also, somebody asked — he is using VS Code, I believe. It is VS Code. So let's create — great, oh, this is gonna be tough. I'm gonna close the sidebar. Let's create a template, and the template gets some innerHTML — I'm using template tags. And in here we have the style. And the reason I'm using a template is because otherwise, in every constructor call, I would be setting innerHTML, and that always starts the parser, which I don't really want.
So I'm gonna use a template, which is much quicker to instantiate. So in here I'm gonna say display: block, because our element is supposed to be display: block. In our constructor, I'm gonna say attachShadow, mode open. Can I answer your question from the live stream? So he didn't actually need to write the constructor — it wasn't gonna add anything — but I think he knew ahead of time he was gonna add other things. If you're only calling super, you don't need to define the constructor there. That is true. I do it anyway, it's just like muscle memory. Because sometimes you don't wanna add stuff and then... So now we can do template.content.cloneNode, deep. This is a family show. So let's close the console and hopefully — we said display: block, so our elements are now display: block. They're still... Cool, that was our demo. Thank you for coming, we're done here. That's it — not quite. So let's see how we can load the image. Again, what I wanted to do is to load them when they come into view. And there's a new, kinda new primitive on the web which is called IntersectionObserver, which allows you to — why don't you explain it? IntersectionObserver is a thingamajig so that when you scroll a thingamajig into the view, the IntersectionObserver says, hey, the thingamajig is in the view, you should do something about it. That was super concise. And it's really useful if you have like a giant block of text and an image at the bottom, and you really don't wanna load that image until that image is actually in the view, because maybe it's never gonna get in the view. So, when it comes into the view, you're like, bam, show that thing. All right. Hey, I got a question for you. All right. Why is your template outside of the class? Because I don't wanna recreate the template for every instance; it's just there for me to reuse. So it's gonna be parsed once, and I can re-instantiate it every time a new element is being created, which is super fast.
I mean, unless you're creating like a million images, this is actually gonna make a difference. Most of the time it won't, but this is just a good pattern to adopt so you don't run into these kinds of problems. Okay, IntersectionObserver. Let's talk about these. You create an IntersectionObserver and you give the IntersectionObserver a callback, and this callback is gonna be called every time some of your elements change their state — the state meaning being inside the viewport or outside the viewport. When they intersect with the viewport. Yeah, exactly. Hey, Chrome Home, how do you say intersect with the viewport in German? What? Sorry, the SURMA module has not yet been installed. Intersektiert das ein Viewporten, s'il vous plaît? Did you mean: Hilfe, meine Badewanne brennt? Yeah, yeah, that's what I meant. Well, that means help, my bathtub is on fire. Sorry, Google Play Services has stopped. God, it's not working really well. I'm not impressed with Paul's work recently. What's it been doing for like two months? Downhill. So the callback is a callback that gets as parameter a number of entries, and each entry is for a different element and how its state changed. So we're gonna go through all these entries, and if that entry is intersecting — meaning it is currently inside the viewport — on the element, which is the entry.target, we are gonna set an attribute which is gonna be called full, which means, like, it should now show the full version of the image. And that is pretty much all we're gonna use IntersectionObserver for. Whenever an element scrolls into the viewport, the attribute full is gonna get set, and then we can react to that change with the standard observed attributes that we know from custom elements. So for that, we need our static observedAttributes getter, which is gonna be the full attribute only.
And since we only have that one observed attribute, our attributeChangedCallback doesn't need any parameters, because we know it's just gonna be the full attribute. And what we're gonna do is, if it's already full, we're gonna return, because we're not gonna load the image twice — once is enough. And otherwise we're gonna create a new element, createElement image tag. The image source is gonna be this.src. And now — really, I forgot that I should add some getters. So we are using this.full. While you're doing the getters, lovely question from the audience: which browsers support IntersectionObserver? And since I'm too lazy to Google it — hey Chrome Home, which browsers support IntersectionObserver? It's Edge 15, Firefox 55, Chrome 58, Opera 46 and Samsung Internet 5. That's actually pretty decent. So it's something — there is a polyfill, I think, there is a good polyfill for it. That you wrote. What? Didn't you write it? Somebody else wrote it. I did the first version and then I passed it on to other people who are much smarter than me. Nice. But you don't need it, apparently, because that was actually a pretty decent support list. So most of the time you'll be running without it. So you're doing two getters and setters, for source and for full? I'm not doing setters because I'm not gonna— Don't set anything. Yeah, pretty much. All right, so we're creating a new image. We're copying the source from our element to the actual image tag. And then— And the reason we're doing that is because the moment you set a source on an image, it's gonna start loading. You can't stop it from loading on that image, no matter how hard you try. Exactly, and now we're gonna wait for it to finish loading. Train's not stopping at that platform. And once it's loaded, we are gonna shadowRoot, attach it to the image. That is actually not— That's not right. That's appendChild, right? Next. All right, this looks pretty okay, I think. Let's give it a try. I thought I did something wrong.
Where's my console? Nothing is happening, which makes sense, because I just created the IntersectionObserver. I didn't use it anywhere, which, you know, might be helpful. So in connectedCallback — this is the second part of the API. So for the IntersectionObserver you create, you pass a callback in to know which code to run when something changes, and then you have to — tell the IntersectionObserver what to actually observe. So I'm gonna call observe(this), because we're gonna observe the element itself. And because we are good citizens of the web, we are also gonna do our disconnectedCallback and call unobserve. Nice. So now we're getting errors on line 13, which is totally what I expected. It's, oh, setAttribute. I always dislike this. I only wanna set the attribute. And yet I still have to say, set it to an empty value. Or true, yeah. Or true, I guess. Whatever you want. But it just seems unnecessary. So let's do this. Backwards compatibility for you. I guess nothing is happening. Oh — because we are setting full and then we're checking full, which we just set, and therefore this is gonna — this is not smart. So I'm gonna just call it loaded. So we're gonna have two attributes, two properties now: the full attribute and the loaded one. Full is when it's supposed to be on screen, and loaded when it actually is loaded and on screen. So in this callback, when it's loaded, we're actually gonna set this.loaded to true. I'm not gonna even define an attribute, because this is live and it's gonna work anyway, right? So let's see. Cool. I mean, it's a little bit big, so that's not — but the images are in here in the shadow DOM. So that's good. To be fair, you didn't set any styles, so it's gonna be. Yeah, let's change that, shall we? So, our image in here — and this is what I love about the shadow DOM: I don't have to do complex selectors, because it is scoped by the shadow DOM anyway. So I can just go img and say width: 100% and be done with it. Boom.
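Pulling the pieces from the last few minutes together, the element sketched on stage looks roughly like this. It's a reconstruction from the transcript, not the exact demo code; the tag name sc-img and the full/loaded attribute names come from the talk.

```javascript
// SCImage.js — lazy-loading <sc-img> sketch.
// Parsed once at module scope, cloned per instance (fast, no re-parsing).
const template = document.createElement('template');
template.innerHTML = `
  <style>
    :host { display: block; }
    img { width: 100%; }
  </style>`;

// One shared observer: set the `full` attribute when an element scrolls into view.
const observer = new IntersectionObserver((entries) => {
  for (const entry of entries) {
    if (entry.isIntersecting) entry.target.setAttribute('full', '');
  }
});

class SCImage extends HTMLElement {
  static get observedAttributes() { return ['full']; }

  constructor() {
    super();
    this.attachShadow({ mode: 'open' })
        .appendChild(template.content.cloneNode(true));
  }

  connectedCallback() { observer.observe(this); }
  disconnectedCallback() { observer.unobserve(this); }

  get src() { return this.getAttribute('src'); }

  attributeChangedCallback() {
    if (this.loaded) return; // once is enough — don't load the image twice
    const img = document.createElement('img');
    img.src = this.src; // setting src starts the download immediately
    img.onload = () => {
      this.loaded = true;
      this.shadowRoot.appendChild(img);
    };
  }
}

customElements.define('sc-img', SCImage);
```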
And now the thing is, if you go to the network panel, only image A has been loaded, with 265 kilobytes. Once I scroll down, yes, the second one loads. Scroll down, that one. And this is, you know, in terms of data conservation for the user, this is much better, because now they only actually download the data they actually have on screen. So does the IntersectionObserver run on every pixel scroll? How does it actually work? Is it performant, at least? It is super performant. So I think, as far as I know, if you don't use the polyfill but have a native implementation of IntersectionObserver, it hooks into the actual layout engine. The browser is obviously the only entity that knows if something is on screen or not. And once it is on screen, it queues up one of these callback invocations. And those are dispatched in idle time. And that means you will only get to process these entries if the browser has time to do so. So if you're busy encoding a GIF, or whatever people do these days on the main thread, your IntersectionObserver callback will be delayed until there is actually headroom to do these kinds of... So you're not blocking layout, you're not blocking paint, you're not blocking your animations. Or scroll — it's great. So it can basically only get better, because the most important thing really should be to be interactive for your user. Cool. All right. I'm gonna answer some more questions. Is that okay? Is that how we do it? Is that... You're doing pretty good, actually. So one of the questions is, if the image content is in the Shadow DOM, can robots access it? So, like these search bots — they can if they run JavaScript. So this is the same question as for SEO. Yeah, pretty much. And that you saw — what does SEO stand for? Search engine optimization. Surma engine optimization. Surma engine optimization, I like that. Perfect. Yeah, so as you saw, exactly the same answer as in the SEO talk earlier applies here.
If you're running JavaScript, it will be accessible in there. And also, you should probably set some alt. We're not doing this because it's not live — it's not production. I should be setting alt. But that image doesn't have an alt. Rob Dodson is probably in the audience and he's not impressed. He's gonna punch me. Yeah, please don't punch us. Because I'm on the HowTo Components team and I didn't do the accessible thing, so I'm actually bad. Okay. I mean, that is pretty cool. This is working. We could say we're done, but there's something else. If you scroll down, reload the page, and scroll up, the images kind of pop into existence, because at the start our image has no height. And then once the zero-height div comes into view, the image suddenly has a height, so they kind of appear, which is really sad. So what I would like to do — and this is something where this image is gonna be better than the native image element — it's gonna have support for aspect ratios. So we're gonna — I mean, you can do it with a native element. You're, like, preemptively answering three questions from Twitter. You're nailing it. Bam. Impact. All right, so what we're gonna do is we're gonna write like a tiny bit of a back end. So I'm gonna bring the sidebar back real quick to create a new file, which is gonna be — no, not in here, down here, there. I'm gonna write a little back end, and we're gonna be using some Express, because whenever I do node, I just use Express, because it's easy. And we're gonna kill our Python web server and instead start our new server. Are you gonna do some server-side rendering, you would say? So, we're gonna be server-side rendering. Okay, so we create a new app, and that app — hang on — that app uses the express.static middleware, because mostly we're just gonna do static page delivery, and app.listen on 8080. And so now everything should be working the same. Cool, it's still loading. Now we're using our new back end.
And now we're gonna do something new, because we're gonna define our own route for HTML files. Request, response. And what I wanna do in here is basically inspect our images to figure out what their aspect ratio is, and do some CSS hackery to give the elements an aspect ratio, so that they retain their aspect ratio even if the image data has not been loaded yet. And to do that, we have to first figure out which file is actually being loaded. So let's do the file path, which is request.url. And if the file path ends with a slash, that usually means that we have to add index.html, right? Because if you have a slash, it's a folder, and that kind of deal. And now we have to read the file, basically. And to do that, you have to use the fs module. I'm on node 8, so I can use all the new shiny stuff. So I can use the new promisify function from the util module to turn the old callback version of the file system API into a promise version. And we all like promises, so I'm gonna do that. Hey Google Home — do you like promises? Oh, Chrome Home, sorry — do you like promises? Yeah, I prefer streams, though. Nerd. All right, so it's pretty simple. You just pass in a function that has the standard node callback layout, where the last parameter is a callback with error and result, and it turns it into a function that returns a promise. And that means that we can turn this whole thing into an async function and can now do const — so if we're gonna read the buffer, we can do await readFile(filePath). This is gonna work because we have to add static. Then we can turn it into a string so we can send it back. And then we can send these contents back. Let's hope that works. Still working, cool. So now we can readFile with await, which I think is much nicer to read than having either promises or callbacks, honestly. Async/await really makes this code much easier. All right, we have the buffer, we have the content.
And now we're gonna do some post-processing on it, because we need to figure out which images are being loaded, load all these images, and then figure out what the aspect ratio is. So we are gonna do — and I know you're gonna love this — we're gonna do some regex magic. Oh, God. So, what some people don't know: split— For the record, I haven't seen this code before. So I'm getting like anxiety every time he says these words. Like, we're gonna do some regex magic. Do you wanna write the regex? No. No, okay. Can I answer some questions, though, while you type your regex? Go for it. Because nobody needs to know what you're doing. I'm gonna explain the regex, so go for the questions and then I'm gonna do the regex. One of the questions is, why aren't we extending the image element with is equals? Because that's not a thing. That is not a thing, unfortunately. So is="" is one of the battles we lost a little bit for custom elements. I mean, it's in the spec. It's in the spec, but it's not actually implemented everywhere. Not even Chrome, I think, has it for the v1 spec. So it doesn't actually work anywhere. The polyfills don't have it. So you can use is="" all you want — it's not gonna do anything right now. Some browsers have expressed very strong dislike of the is pattern. And if one browser doesn't do it, there's no point, so far at least, in just doing it in some browsers, because it's also very hard to polyfill. I don't even think it's polyfillable at all. Not sure about that, but for now we have to live without subtyping native elements. We can only do HTMLElement and nothing else. Mm-hmm. Carry on. So what I'm gonna do — we're gonna write a regex and we're gonna find all the sc-img elements. And we're gonna do this. So we wanna have everything until the closing tag. Come on — let's see, image. It's beautiful, isn't it? And let's keep it this way and let's call join down here. And let's work on the source for a bit so we can see what is going on.
This means it is not working. So that's good. We're splitting this. That is correct. Thank you very much. And my images disappeared — which is actually true, because I need to put parentheses around this. So now this should look the same. The good part is that now newContent is an array, and each element is either gonna be remainder code or just one isolated sc-img element. Just to show what I'm talking about, I'm gonna console.log newContent for a bit, gonna refresh, and then in the console — it's an array, and every second element now is an sc-img element, because that is the part that matches the regex, and everything that doesn't match is gonna be put in another element. So we have now just split apart our entire document into what is an sc-img and what is not an sc-img. And now we can do post-processing on that. So what we're gonna do next is we're gonna remove the console.log and we're gonna map. And for each of these items: if the item starts with sc-img — actually, when it doesn't start with sc-img, we don't wanna do anything, because we don't care about it, so we're just gonna return it. And otherwise, we want to figure out what the actual source attribute is. So again, we're gonna do regex, because — oh my goodness — that's how I roll. And there's parentheses around this and that, and then we're gonna do an exec on the item. And then we're gonna return something. Let me think for a bit. Let me just do a test. I'm gonna return the source, just to see it works. So now we know the source actually has only the value of the source attribute. So that's good. It's probably gonna read that file. And now we're gonna do the item — actually no, the item can actually stay. What we wanna do next is we wanna actually figure out what the aspect ratio of the image is. And this is where you get into a lot of the weeds of the node ecosystem, because now we have to look into image processing libraries.
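The split-with-a-capture-group trick can be sketched like this — the regexes themselves are my reconstruction, not the ones typed on stage:

```javascript
// A capture group in the pattern makes String.split() keep the matched
// tags in the result array, so every <sc-img> lands in its own slot,
// alternating with the surrounding "remainder" HTML.
const scImageRe = /(<sc-img[^>]*>[\s\S]*?<\/sc-img>)/;

function splitOnImages(html) {
  return html.split(scImageRe);
}

// Pull the src attribute value out of one isolated <sc-img> tag.
function extractSrc(tag) {
  const match = /src="([^"]*)"/.exec(tag);
  return match ? match[1] : null;
}
```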
I just Googled a bit and took the first one I found, which is called GraphicsMagick — GM for short — and we can subclass it to use ImageMagick, because that's the only one I've installed on my system. Don't worry too much about it. Basically, all the image processing libraries can do what I want: resize images and figure out what the size of an image is. And what I'm gonna do down here, I'm gonna load the image, and that's fairly easy with this library. So I'm just gonna do static plus source. And let's just do a console.log to see if that worked. I hope it will. So this should all look the same. Looking good. We have loaded an image — size function. Now, this is where things are a little bit weird, because the library, as all node libraries are, is a little bit old and has callbacks. But the cool thing is, the promisify function I loaded from node 8 actually works on libraries as well. As long as they conform to the standard callback pattern that node has, this is gonna work. So I'm gonna call image.size.bind(image), so I pass in the function, it's gonna turn that function into a promise version, and then I can do const width, height equals await sizeFunc. Why are you doing all this awaiting? Why don't you do it sync? Because the library is not sync. That's how callbacks work. So this is gonna — ooh. If you would have done it sync, it would have been fine. No, I'm kidding. I'm using await inside the map callback, which is not an async function. So I'm just gonna make it an async function, and for that to work, I also have to do a Promise.all, because now all the array elements are gonna be promise values. There we go. This should work again. Cool. Cool. So we see we have four images on our page and we have four widths and heights. Magic. That's why it's called ImageMagick: you just do some code invocations and at some point you get what you actually want. Now we have to — now we talk about something that I really like, the— Animations? Are you gonna do some animations? No, not yet.
I'm gonna talk about the aspect ratio hack in CSS. Ooh, we can ask Google. We can ask, we can ask. Go for it. I don't understand you. Accent. Down. Hey Chrome Home. Bleep. How does the aspect ratio hack in CSS work? So, Mariah, I hate you. So you have an element, and it has another element or pseudo-element inside it that has a padding-top that is the aspect ratio that you want, and then inside it you can use absolute positioning to keep something the same size. Everybody totally got that. That was well explained. I'm hoping the code will now actually show how this works. So the weird thing is, when you define a padding-top in percentages, so padding-top: 50%, that 50% is relative not to the height but to the width of the parent element. Don't ask me why, but that's how it works. And the cool thing about that is that we can abuse this, basically, to define an aspect ratio, because what we're gonna do is height divided by width times 100, in percent. And that means the wider the image is, it will grow in height as well, because padding-top is proportional to the width, not to the height. This hack is also really good for iframes, whenever you're loading like a YouTube video that you're importing and it's always a weird aspect ratio. Do this, do this for everything. So just to show that this is actually working, I'm replacing the closing characters of the elements to inject some styles, which I actually- And the closing. Yes, like this. Let's take a look at this. Okay, so you can see we have injected percentages successfully. So let's look at the actual visual version. Not quite what I was going for, but we can probably fix that with some styles. So we're gonna do position: relative. And so the problem right now is that we have a padding on these elements, and the contents of the shadow DOM are being pushed down by the padding, which is not what we want.
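The value being injected is simple enough to sketch; the CSS in the comment shows the shape of the hack itself (selector names are assumptions, not from the talk):

```javascript
// The aspect-ratio hack: padding-top in % resolves against the
// parent's *width*, so height/width*100 reserves a box with the
// image's proportions before the image has loaded.
function aspectPadding(width, height) {
  return `padding-top: ${(height / width) * 100}%;`;
}

// Injected style, roughly:
//   :host { position: relative; display: block; padding-top: 56.25%; }
//   img   { position: absolute; top: 0; left: 0; width: 100%; }
```

For a 16:9 image, `aspectPadding(1600, 900)` yields `padding-top: 56.25%;`, the familiar number from responsive-iframe tricks.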
So we're gonna just absolutely position the shadow DOM image inside, at the top and at the left, so it doesn't really care. What? Oh, I think it loaded so fast. There we go. And because we can't really see it, I'm gonna slow down the network, which, where is it? Is it down here somewhere? Oh, there it is, network conditions. Let's do it on slow, fast 3G, I think, should be good enough. So you can see the rectangle is there even though the image is not loaded, and once it's there, it just replaces the red rectangle, which is still underneath there technically, but now we have images that consume the space the image will need once it's loaded. And that's something the native image tag doesn't do. It does not. I mean, you can use the same hack on the native image element, you can just define a padding-top, but I thought that this was a really neat trick to show off. And this is how you don't have like your stuff just jumping around whenever your images come in. That is the best part. If someone loads the page at the bottom for some reason and scrolls upwards, stuff is not gonna jump around, because the images already allocate the space that they need. The only thing I dislike about this is that they kinda, like, the red squares. They're very annoying, yeah. Yeah, so. I'm gonna steal a question though first. Go for it. The question was, what does the :host selector do? Oh, that's a good question. And that is, I'm gonna take that one. Go for it. But basically it's just styling the custom element itself. So if you think of a custom element, it has basically like its shell, and :host is that element itself, not the things inside of it. And something you can do, fair warning, is if I'd written it like, oh, actually I should have put it down here. Something like this wouldn't work, I think, because you cannot really, would this have worked? That definitely works, yes. So it's a function, I guess. Yeah. It's like a pseudo. It's not a matcher in the classical sense.
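A quick sketch of what `:host` does inside a shadow root, since the exchange above is hard to follow without code (the selectors here are illustrative, not from the talk):

```css
/* Inside the element's shadow root: */
:host {                /* the custom element's own box */
  display: block;
  position: relative;
}

:host(.wide) {         /* functional form: matches only when the host
                          element itself has class="wide" */
  max-width: 100%;
}

/* :host behaves like a pseudo-class with an optional argument, not a
   plain element matcher, which is why it reads like "a function". */
```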
So that's something to look out for. It also can't go down multiple children, I think, only top-level children. It's a little bit iffy, but we have good documentation on developers.google.com/web, which you should totally go to and read up on this. And yeah, as I said, these red squares are a little bit sad. So what I'm gonna do instead, I thought, we could also do. An animation. No, not yet. But maybe later, maybe I will humor you. You're not a one-trick pony. I thought I would do the Medium bit, where they have the blurred version of the image. Like a low-res, base64 image background, from a Twitter suggestion. Do you think we should do this Twitter suggestion? Someone has been thinking along, I like it. That is exactly what I'm gonna do. So we are gonna generate a thumb. And because, as I said, the library is async, we have to create a new promisified function again. So I'm gonna call resize. .toBuffer, .bind to the image, because otherwise it doesn't work. And then our thumb will be await thumbFunc. How do you say thumb in German? Chrome Home. Hey Chrome Home, how do you say thumb in German? Ein thumb. It's actually Daumen, but that's all right. So I thought I would do a thumb size, because we can probably play around with the resolution a little bit, so I'm just gonna put it here. Thumb size is gonna be eight, because that seems reasonable. Thumb is now the image buffer. Actually, that's not true. Or is it? That is something that needs to go here, if I remember correctly. And here we just say PNG. There we go. Do you have all your brackets? Do you need an extra bracket? I think so, I think I'm good. It's not complaining. And so our thumb URL, what we're gonna do is we're gonna encode the thumb version, as you already hinted at, as a base64 inline URL, because we don't wanna wait for the network to load a low-res version so we can then show the high-res version.
So we're just gonna put it inline into the document right away, so when the HTML arrives, we have something on screen, which I think is a much better experience. So the thumb... Somebody on Twitter wants me to do the animation stuff. You guys, I don't know how to animate anything. Like, a transform is too hard for me. I really should, it is. It's too much transform today, I think. Yeah, I know. toString, and luckily Node, in contrast to the web, just has base64 built in, which is really convenient. And what we're gonna do here is we're gonna say our background, background-image is a URL, and in here we're gonna do thumb URL. Boom. This looks about right, I think. I'm still on the slow network, so we can actually see the loading pattern, or I actually made a mistake. I probably made a mistake. thumbURL is not defined, why not? But it's right here. Oh, that is, it should just be thumb. Thank you. Ein Zehennagel? Zehennagel? (That's a toenail.) That wasn't quite what I was hoping for. I think it's actually correct, because it's just an eight-by-eight image tiled all over the place, but that's not the visual we were looking for. Gotta stretch that out. So what I'm gonna do in the injected styles, I'm gonna say background-size is 100% 100%. So we're on fast view, and this is pretty good. But now, your moment, what could we do next? I'm gonna answer a question from Twitter. Okay. One of the questions was, why didn't you just distribute an image as a slot in here? And that would be annoying, wouldn't it, for every image that you wanted on your screen? Yeah, also that wouldn't be framework compatible, because whenever you have something like VDOM, it will just eliminate the image out of my tree. In general, it is rarely advisable to just sprout new children into a custom element dynamically. No, but you could have had the sc-image, and then you would also put, you as the page author, put your image in there. That's just annoying. You're writing the image twice.
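The inline-thumbnail step in Node, sketched with a fake buffer (in the talk the buffer comes from the image library's resize-to-PNG call):

```javascript
// Node Buffers encode to base64 directly, no btoa() needed as on the
// web. The tiny thumbnail is inlined as a data URL so a blurred
// placeholder renders as soon as the HTML arrives, with no extra
// network round trip.
const thumb = Buffer.from([0x89, 0x50, 0x4e, 0x47]); // fake PNG bytes

const thumbURL = `data:image/png;base64,${thumb.toString('base64')}`;

// Injected into the element's styles, roughly:
//   background-image: url(data:image/png;base64,...);
//   background-size: 100% 100%;   /* stretch the 8x8 thumb, no tiling */
```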
Yeah, no, I like this better. Yeah, just do it as a child. To be fair, that would work as well. The shadow child. I feel like it would be more work for me. I would copy and paste it twice. I could see an sc-image, and then an image, and then an sc-image, and then an image. Okay, so I'm gonna ask you again, what could we do next? Put on your hat and get a clap out of people? Yes. I genuinely don't know the answer to this question. Because I think people might not be quite awake. It still works. Okay, so I'm gonna ask you, what are we gonna do next? But this is your moment. We can do some animation. We can do some animations. And actually, it's gonna be super, super easy, because the thing that annoys me a little bit is you have this nice blurred version, and it's gonna be like, pew. That could be a little bit nicer. Are you gonna do a transform over there? No. Dammit. You're wrong. So what I'm gonna do instead is, we are just gonna- You guys, there's a fight on Twitter about how to properly translate thumb and thumbnail, for the record. Ooh. The- I'm gonna get in on that later. Yeah, yeah. So I'm gonna define a keyframe animation, which goes from opacity zero, and that's it. Oh, what does it go to? Nobody knows. So I'm gonna put this on the image. And the nice thing about this is that this way the browser will know it's an animation, it will do the whole promotion to its own layer and make it fast, and then, and then, and then. And if you're writing production code, you'll put all the other vendor prefixes for keyframes and all that jazz. Well, you have tools for that, right? Like, I don't write those by hand, but- I do. Like an animal. We have forgotten the most important thing about Supercharged. This is not production-ready code. Don't copy it. We never do production-ready code at Supercharged. This is about concepts and, you know, like things you can do on the web, but you shouldn't be copy-pasting this. There's, we didn't do accessibility.
We didn't, we did not do accessibility. I didn't, like, reflect on my properties correctly. I only have getters and no setters, which is also not nice. We're not even handling if you change the source on an element. Or if you're, like, scrolling really fast and you're creating children a lot. Yeah, let's, so I wanted to show you how it can be acceptably easy to lazily load images dynamically and have a nice transition on them, not to have an element that you can use everywhere. But still, I think the concept is pretty interesting. So the last thing I'm gonna do is I'm gonna put an animation-duration of five seconds on it, mostly because I wanna see the thing happening, not because five seconds is a good value here. But this should be enough to actually have the image just fade in, because by default, the opacity default value is one. So that's why I could leave out the "to", because it's gonna transition to the default value. And secondly, animations don't loop by default. So I don't have to worry about the fade going over; it's gonna fade to the end position and stop there. So hopefully, we're gonna see a blurred image. It's gonna load and then it's gonna fade in, right? And we scroll down. We can actually do the throttling now because we don't wanna wait that long. We see the next image and it's gonna fade in. And that, I think, makes a much nicer experience. That's wonderful. And just because we have a couple of minutes left, I'm just gonna show you one thing. If you're more into the pixelated look, one property is all it takes. And I'm wrong. Sir, I'm gonna get on. What is it? There you go. And that is something when you increase the resolution a little bit on the thumbnails. Let's go to 16 by 16, because why not? You can actually see the patterns already emerging a little bit, which also can be a really nice look. And I think I'm gonna stop here. I'm gonna push this code up on GitHub as we always do. It's on github.com/GoogleChrome/ui-element-samples.
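The fade-in described above, in CSS. The `from`-only keyframe works because the animation runs toward the property's default value (`opacity: 1`), and keyframe animations run once unless told to loop. The "one property" for the pixelated look isn't named on stage; `image-rendering: pixelated` is presumably what's meant. (Selectors are illustrative.)

```css
img {
  animation: fade-in 5s ease-in;  /* 5s on stage, only to make it visible */
}

@keyframes fade-in {
  from { opacity: 0; }
  /* no `to` block: the animation runs to the default value,
     opacity: 1, and stops at the end position (no looping) */
}

/* For the pixelated look instead of the blur: */
img.pixel-look {
  image-rendering: pixelated;
}
```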
But we're also gonna put it in the description on the video. Thanks everyone so much for watching this, for bearing with me through all the weird phases of this. It's beautiful. Thank you for handling the Twitter and confusing the heck out of me and making me use the hat twice. Thank you for clapping along. Could have been a third time. And if you have any questions, ask me on Twitter later, or I'll be around a little bit more. Thank you. All right, one session left. So hi everybody, my name is Taylor Savage. I'm a product manager here at Google and I work on all the different open source web developer products that we build as part of the Chrome team. And one thing that I really love about working on open source products as a product manager is, when you work on a typical consumer product, you're working with an engineering team, you're building a product and you're shipping it directly to users, and it's kind of a straight path from you as a team, as a product manager, to your users. On a web developer product, or on a developer product in general, we're working on this product and then we're shipping it kind of out into the wild, into this crazy, messy, insane, fun, creative ecosystem of developers, all of you, who then ship products to users. And so the coolest thing about that is the products that we build on a developer product team are just kind of the beginning, just the seeds. And it's really you, it's the community, it's the ecosystem, that's actually building the product and the end result that gets shipped to users. And you make our product better. You make our product better through using it in creative new ways, through extending it, through taking ideas and building new things, through building completely separate things and making our ideas better.
And so one of the things that I really love about this Polymer Summit and these developer community events that we do is this opportunity, this amazing opportunity, a totally unique opportunity for our team to meet all of you, to hear how you're using our product, to hear the problems that you run into, the problems your users are running into, how we can make our things better, how we can help you do your jobs more efficiently and more effectively. And that's really, really, really special to us. And so we love being here. We also love that the events team that puts this on is the best in the world at what they do. They do such an incredible job organizing this summit. So a huge thank you to the events team who put this on. And so we like to, at all of these summits, make sure that we kind of return the favor to all of you for all the feedback that we're getting from you, and open up our team, since we're here and since we get to be right here with all of you in the same space, to the questions that you have: a peek inside some of the things we're thinking about, some of the concerns you might have, some of the issues you're running into, some of the things that you wanna know about how our team is thinking about building the next set of products that we're working on. So I always like to do this panel at the end to kind of open up that opportunity and end with a big conversation among all of us. This is really our engineering team, all of us here working on building products that we ship to those end users. So I'd like to welcome on the stage a bunch of people from the Polymer engineering team to answer all your questions. So first, Justin Fagnani is an engineer on the Polymer Tools team. Monica is an engineer who you've met many times today and yesterday on the Polymer team. Rob Dodson, developer advocate extraordinaire, expert in all things web and general internet celebrity.
Wendy Ginsburg, the product manager on the Polymer team itself, who gave the keynote yesterday. And Steve Orvell, the mastermind, one of the masterminds behind the Polymer core library and an engineer on the Polymer core team as well. All right, so we have two microphones up here. So please come up, ask your questions. This is an open space, this is a safe space. We love being here with you. There is no such thing as a dumb question. Please come up, ask us anything. You can also tweet at us with the hashtag AskPolymer, and we're looking through those tweets live, and so I'll be looking through a machine up here to try to get some of those questions delivered to our panel as well. All right, great. So to kick us off, I have a question in the barrel, which is that there's lots of exciting stuff coming out yesterday and today with Polymer and NPM and ES6 modules, lots of excitement about how to develop components. Also a little hesitancy for a lot of folks who really enjoyed being able to author HTML in HTML. So is there anything that you're thinking about in terms of how we can kind of better support that use case that has been so special to so many Polymer developers, of authoring HTML in HTML? Please. Yeah, so I mean, I first want to say that we hear you. We spent a lot of time yesterday and today talking to people who were kind of lamenting the loss of authoring in HTML. And it kind of makes sense, because that's something that's been a little bit unique about Polymer, and you people have chosen to use Polymer, and that's why you're here, because you like it a lot. And so it's definitely got us thinking about ways we can help support that kind of style and that kind of workflow even as we move to JavaScript modules. It's important to note, though, that also our philosophy of using the platform kind of means that, like, when the platform gives us a tool, and in this case kind of denies us another one that we were hoping to use.
Our choice is in a sense made for us a little bit. We have to go to JavaScript modules because that's the native loading solution on the web. But we can look at ways to kind of layer on top of the project and provide ways to have some tooling or something like that. The important thing is that Polymer 3.0 is very, very early in preview stage. And so there's a lot of time for feedback and a lot of time for experimentation with solutions to give you the ability to write HTML in HTML. So the actual Polymer part is the same. Yeah, I mean, the actual string that you write is the same template. In fact, you could draw a line from every line in an HTML import to a line in a JavaScript file, and there would be a one-to-one correspondence between every line in there. We're just following semver, and that's why you have to bump the major version. Yeah. So we hear you, we have some interesting ideas on how we can serve the best of both worlds. Yeah, I'd just add quickly that, you know, again, based on feedback, the reality is that JavaScript modules are here today and we wanna support them, you know, but there's some spec work potentially to develop an HTML extension to those modules. And that's something that we'll be thinking about and evaluating and seeing kind of, you know, if our users are really clamoring for HTML. On our own team, I will say that there was definitely a bit of, like, difficulty ripping that band-aid off, but we're also excited about some of the new things that modules are gonna provide. I mean, Lit is an excellent, interesting direction, and we can't do that in HTML. So, you know, we're gonna be experimenting and listening for feedback. So a closely related question that should hopefully be quick. If I'm working on a new project right now, should I target 2.0 or 3.0? 2.0, definitely. 3.0 is very, very early.
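The "one-to-one correspondence" claim is easiest to see side by side. A rough sketch, using the final Polymer 3 names (`PolymerElement`, `html`), which may differ slightly from the preview being discussed on stage:

```javascript
// Polymer 2, in an HTML import:
//   <dom-module id="my-el">
//     <template><h1>[[greeting]]</h1></template>
//     <script> class MyEl extends Polymer.Element { ... } </script>
//   </dom-module>
//
// Polymer 3, the same element as a JavaScript module; the template
// string itself is unchanged, only the packaging moves:
import { PolymerElement, html } from '@polymer/polymer/polymer-element.js';

class MyEl extends PolymerElement {
  static get template() {
    return html`<h1>[[greeting]]</h1>`;
  }
  static get properties() {
    return { greeting: { type: String, value: 'Hello' } };
  }
}
customElements.define('my-el', MyEl);
```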
And the other thing to note is that when we published Core and all the elements, those were literally converted automatically by a tool minutes before we published them up. It was like one process. So if you're on 2.0 and you keep pushing forward and you do that, we're gonna take you to 3.0 automatically. Fun fact, you can tell that they were published while we were here, because we got rate limited by GitHub on the conference Wi-Fi IP, so that if anybody was trying to use the GitHub API, you couldn't do it, because Justin did all of it. No, we used a GitHub token. That wasn't us. There were just so many people in the code lab. God damn you. Yeah, I think a little bit of straight talk, to steal Gray's term, is we're really aiming, as Justin said earlier, to change the API between 2.0 and 3.0 just as little as we have to, absolutely have to. And actually Kevin's app that he built, he transferred the whole thing using the tool that Justin's team has built. And it had some bugs for a while, but the latest version worked pretty seamlessly. And the kinds of, I mean, I think Justin sort of obtusely mentioned we're changing as little as we can, or maybe it was Fred, we're changing as little as we can. Honestly, let me give you just one example. There's Polymer.importHref. What did that do? It created an HTML import. Well, there are no more HTML imports, so you have to do that differently, right? This is the kind of thing we're gonna change. That's the only change, really. Yeah. And in fact, this brings up a good point. If you're vending open source elements out there, we actually recommend that when you use the converter, you do not change the API of your elements at all. Treat them as having the same major version, even if you bump the major version because it's now modules, because that means when your users run the converter, they will automatically be compatible with your element.
So we look at this transition as not really an API-breaking change, it's just a repackaging. All right, let's go to a live question. I have a question regarding tooling and improving developer experience. Is there any opportunity you're developing to provide hot module replacement? Like, you are coding and you'll immediately see the changes in the browser. Oh, hot module reloading, yeah. That's a tools question, isn't it? Yeah, I mean. We don't have any work planned on that. There are several ideas on how you can do that. One of the things with hot module reloading is you often get into a problem where, if you're replacing some JavaScript, it's hard to, huh? It's hard to replace the state that might have gotten into a certain shape, you know, through some path of code that you took. So I was just talking with some people today about ways we could make templates auto-reload. And Kevin's been looking at using Redux, which, you know, has some facilities for that kind of stuff that can replay your state. We don't have any immediate plans, but it's an interesting and open question. Yeah, I mean, I will say that Kevin kind of alluded to in his talk that we're considering ways, you know, we're not really prepared to say exactly what we're ready to build right now, but we'd like to improve the overall developer sort of end-to-end experience. And, you know, Kevin demonstrated using Redux as a good way to do that. And a tool like that works well with something like hot module reloading. And, you know, this is something that I think we can explore without really committing to anything. I mean, frankly, I think we'd love to be able to build something that the community can also contribute to, and you know, if that's a useful feature, that's great, thanks. All right, so we have another question about tools, which is, what are the plans for all the various tools that the Polymer Tools team builds when it comes to Polymer 3?
So with Polymer CLI, what's gonna happen with Polymer Bundler? Will it be replaced by a Webpack plugin, et cetera? Yeah, so there's a little bit to be figured out there, but I'm kind of of the mind that we have two different paths to explore here, probably at the same time. One is we have existing customers who are using the CLI and who are successful and happy, and we wanna keep them moving forward into this world without them having to ditch our tool chain. So we have been adding support for NPM. We now have an NPM flag on the CLI and PolyServe and WCT, and we'll probably try to add support for modules as an incremental choice that you can add into your project as you go. But we also want to enable people to not have to use our tools, because we're so different from everyone else, to be able to use Rollup and Webpack and all these other options that everybody else has out there. So we're definitely gonna take care of you and keep our tools moving forward. It could be we're looking at opportunities to replace some of our custom stuff with something that the community already has, so we have to do less work. But we're also going to explore the idea of, if you have an existing project and you wanna sprinkle Polymer in, you can do that without having to buy into our tool chain. Make Rob happy. With Polymer 3 and the transition to NPM, what was the kind of reasoning for choosing to go with Yarn and a flat dependency tree as opposed to using NPM and peer dependencies? Um... Sorry. Getting all the questions here. Happy to do it, though. It's really the platform again. A lot of people kind of think they've been using JavaScript modules, because there's been support in all the compilers that will turn ES6 syntax into CommonJS or something. But it turns out that the web-compatible modules are a little more restrictive than what people have been using, say, in the Node or Webpack ecosystems, and they require that you import by path, just like HTML imports.
And so HTML imports and Bower worked well together, because Bower installed flat, and then you can import Polymer by importing `../polymer`, right? So we have that same situation on NPM. JavaScript modules require importing by paths. If you wanna import from another package, you do import `../` and that package name. And that kind of forces us to have packages in a flat layout. And Yarn does that, NPM currently doesn't. I hope they consider adding that feature, and then we'll be able to use the NPM client as well. Yeah, I think there's really two reasons. Number one is, custom elements have a, when you define a custom element name, that name is used. You can't use it again. So that doesn't really work well if you have to be careful about that. So loading a dependency twice is bad for that. The other reason is loading a dependency twice is horrible for performance. So relying on a tool to make your page completely not suck and not load the same code over and over and over again is not ideal, I think. When we install flat, we ensure we don't do that. I think actually it is worth pointing this out a little bit. So it's also important to understand that your client-side dependencies, you definitely want those to be flattened, as Steve was saying. You don't wanna ship 10 versions of the same thing to the client, that sucks, right? Your dev dependencies, all your build tools and things like that, those can actually still have the nested node install. And so there may be a little fine-tuning that we need to do here to make sure that it's very easy and ergonomic for developers to sort of be like, okay, cool, I flattened my client-side stuff and I. One of the things you tried out was different folders. Was a subfolder. So these are my client-side dependencies. These are my normal Node or build or whatever dependencies. Flat in one, not flat in the other.
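The first reason is worth seeing concretely: `customElements.define` throws if a tag name is registered twice, which is exactly what happens when two nested copies of the same package both load. A stand-in registry sketches the rule (the real API needs a browser):

```javascript
// Mimics the one-definition-per-name rule of customElements.define.
const registry = new Map();

function define(name, klass) {
  if (registry.has(name)) {
    // In a browser this is a DOMException: the name has already been
    // used with this registry.
    throw new Error(`'${name}' has already been defined`);
  }
  registry.set(name, klass);
}

define('my-element', class {});      // first copy of the package: fine

let duplicateRejected = false;
try {
  define('my-element', class {});    // second copy: throws
} catch (e) {
  duplicateRejected = true;
}
```

A flat install guarantees only one copy of each package exists, so the second `define` never happens, and you also never ship the same code to the client twice.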
And we still have a lot of work to give everyone templates that they can start from that are set up to serve and build in the proper way. So we'll be coming out with that in the coming weeks. Yeah, but just personally, even though there is a little bit of work and overhead involved sometimes in flattening your client-side dependencies, the flip side of that is, I think with Webpack you have the, what is it, the CommonsChunkPlugin or something like that. I mean, you can either de-dupe this at install time and handle it yourself that way, and at least then you've got a GUI walking you through it, or you've got to put this whole thing into your build process to de-dupe your packages. So you're doing it one way or the other usually, and personally it seems like doing it as early as possible, in the install process, is a good thing. Let's jump to a question from Twitter, which is, what about accessibility? So how does Polymer work with VoiceOver and screen readers in general? Yeah. Okay, hi. So how does Polymer work with accessibility? I mean, generally, like, fine, I can't think of anything that is painfully broken. The biggest issue is going to be around Shadow DOM. So most of the stuff that we do in accessibility to build relationships, so you have, for instance, attributes like aria-labelledby and things like that, where you actually say, this element is labeled by that other element, and you use an ID reference to that other element. So literally like a CSS ID reference, right? The challenge there is, with Shadow DOM, right, you're creating this little scoping bubble, your IDs are scoped to that bubble, and so you can easily end up in situations where you've got something in the Shadow DOM that you would really like to label, or you'd really like to refer to with, like, aria-controls or something like that, and you're unable to do it. You just, like, can't get down there.
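A small markup sketch of the scoping problem just described (the element name is hypothetical):

```html
<!-- aria-labelledby resolves its ID reference only within the same
     tree scope, so this relationship silently fails to form: -->
<span id="amount-label">Amount</span>

<x-field>
  <!-- #shadow-root -->
    <!-- "amount-label" is looked up inside this shadow root, not in
         the main document, so the input ends up unlabeled -->
    <input aria-labelledby="amount-label">
</x-field>
```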
So there are a few things that we're working on to fix this that aren't actually, like, Polymer specific, but are just, like, web platform stuff. So probably the most important is the new Accessibility Object Model, which you can find on github.com/WICG/aom. And this is basically, like, adding a programmatic API to accessibility, and making it a lot easier for you to imperatively, like, set up relationships in JavaScript between elements and say, like, hey, I've got this element here, and I have access to its accessibleNode property, and I can actually then give it an actual node reference in JavaScript, and I can, you know, hop through shadow boundaries and things like that. That's probably the most important thing that I think we're doing to fix this, and that'll help web components obviously, but it'll also just make accessibility more broadly across the platform easier for developers, I think. Let's go to a live question. So when I started using Polymer, it was because I could just serve up data flat with just nginx or a static server, but most of the examples I've seen now, or over the past year, have always been through a dynamic Node server or something like that. So I just wanna make sure that, you know, that's gonna stay in Polymer, that you can just serve from a static server. All of the things that I build are always from a static server. That's like a moniker rule. So they always work. The only time I do extra things is to bundle, but there's no requirement. And that's one of the reasons why modules require paths: because the browser needs to figure out the exact URL to load, and then with Yarn flat, it'll be able to find it and load it.
Yeah, I mean, the one thing I would add is, because of the way that Custom Elements version one is defined, because of the current existing browser support, and because of our desire always to ship really optimized sets of code to the browser, we do sometimes what's called differential serving, to serve just the right bundle. You know, like for IE 11, we can't serve ES6. This is why we will sort of provide tools to let you serve the optimized thing. Hopefully that's gonna go away soon. The next year or two. Got a lot of questions. I'm really happy with all the tooling and documentation you guys have provided to help move users from Polymer One then to Polymer Two and now to Polymer Three. It's like, you know, really made it as turnkey as possible for folks like myself. But my question is, you know, there's still obviously a lot of projects and code out there on Polymer One still and using the 0.x web components polyfill. What do you guys have in terms of a long-term support strategy or end-of-life plan for Polymer One, and obviously eventually Polymer Two, and those 0.x polyfills as well? Can you repeat the question for anyone watching on the video or live stream? Sorry. Sure, I mean, Taylor. Yes, so the question was around end-of-life support and just general kind of long-term support for Polymer One and polyfills. Yeah, I mean, so basically there's the native features that we rely on, that's one part of the question, and you know, this is sort of web standard practice, right? Those will not be removed for, probably, centuries, until we actually do crazy stuff like, you know, looking at how many people use these things. But so those won't go away, really. And you know, Polymer, we are supporting hybrid mode. So I think, I can't say exactly, but we intend essentially to kind of adopt a similar policy where, as long as there's need, we'll continue to support it. Unless Taylor tells me differently. Does that also include the v0 shadow DOM implementation in Chrome?
That's a good question. Yeah, I mean, this is really what I was speaking to. The policy — basically, one of the reasons the web is great is because it really doesn't break. And, you know, Chrome kind of leapt ahead of everybody with Shadow DOM v0, and this is gonna be available as long as people are using it. Cool, thank you. The way features are removed from Chrome is that we keep tracking them. We have a percentage of pages that are using each feature. You can see that as well — that's actually public information. Chrome Status. chromestatus.com, yeah. And they have to drop below a very small threshold, like 0.03% of all pages using the feature. If it drops below that, then you're confident enough that it's basically just old pages or unused pages, or just: we're sorry, we're gonna break you. So I don't think we're nearly close enough, and there's no chance we're gonna go there in the next few years. One thing that is happening soon that's gonna make Monica excited, though: if anybody is still on 0.5 or anything and you're using the /deep/ selector in CSS — or the ::shadow selector — yeah, that too. Those are going away very soon in Chrome, and that will change the styling of your page. We've been trying to urge people to get up to Polymer 1.0 and stop using those for the last couple of years, so hopefully you have. They're basically getting replaced by the descendant selector. So it's not gonna make your style not get applied, but /deep/ just becomes a descendant selector; it's not gonna go into shadow roots or anything like that. Yeah, there's been a little warning in the console for quite a while, but it's weird — I have this innate reaction now. I see the warnings and I'm just like, clear. So do it. It's gonna break you. If you're like me — so you're still triggering the warnings? No, no. Rob. But if you're like me, you gotta fix your stuff. Yeah.
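The "/deep/ just becomes a descendant selector" behavior can be sketched as a selector rewrite: after removal, something like x-foo /deep/ .bar matches as the ordinary descendant selector x-foo .bar, which no longer pierces shadow roots. A hypothetical helper illustrating that mapping (not a Chrome or Polymer API):

```javascript
// Sketch: show how removing /deep/ and ::shadow degrades shadow-piercing
// selectors to the plain descendant combinator. Purely illustrative.
function degradeShadowPiercing(selector) {
  return selector
    .replace(/\s*\/deep\/\s*/g, ' ')  // 'a /deep/ b'  -> 'a b'
    .replace(/::shadow/g, ' ')        // 'a::shadow b' -> 'a  b'
    .replace(/\s+/g, ' ')             // collapse any doubled spaces
    .trim();
}

console.log(degradeShadowPiercing('x-foo /deep/ .bar'));  // 'x-foo .bar'
console.log(degradeShadowPiercing('x-foo::shadow .bar')); // 'x-foo .bar'
```

This is why the change doesn't throw your styles away entirely: the rules still parse and still apply, just only to light-DOM descendants, which is what quietly changes the look of a page that relied on piercing.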
Definitely going to break. It's definitely getting removed. Luckily, for some sites I've seen — hopefully they've moved — it doesn't break them that badly. Little tweaks. All right, let's jump to another Twitter one. How optimistic are you, and what are your feelings, about other browsers agreeing to native element extension? And what can we as the developer community do to help? Are we gonna do is=? is=, yeah. I will take this one. My cross to bear. is= is a great idea — the idea of extending a native element, of adding your own set of functions and behaviors and prototypes to a native element. This idea, to be very clear here, is agreed on by all browsers. The implementation of this idea is not agreed on by all browsers. So the current is= implementation in particular is very contentious and is not agreed on by all browsers. And this means that we will continue fighting for something like is=. It will probably not be is= in the state that it is; hilariously enough, as= is a likely contender, one letter off. But basically there is a need for extending native elements. There are things that you cannot do unless you are a native element — for example, submitting forms. So in parallel with us fighting for this API to actually work, we're also fighting for browsers to, you know, just fix the form element. Form elements should look at custom elements. There is no reason why the form, developed 20 years ago, should not look at something developed now. So it's a hard question. We're working on it. We're trying very hard to write specs; it's an ongoing battle. I think one of the things — I don't know if this is entirely accurate, but I'm gonna say it on stage as if it is — if you add a shadow root to an element that you've type-extended... like, one of the things is, oh, select, right? Select is amazing.
And it has all this built-in accessibility and presentation to it. And my understanding is, if I extend that and then add my own shadow root, all of that goes away. And so it's — It doesn't even work. It doesn't let you do it. Yeah, okay. It just dies. So there's a lot of things that you might think you wanna do with is= that actually just aren't gonna quite work. And so then it does become a bit of a weird API, where you get some stuff, but not all the stuff. And the browser's not very good at pinky swears. Like, I pinky swear I'm not gonna do bad things, because I've read that there's these 20 things I can add a shadow root to, but not these other four, or something like that. Yeah, so personally, I just want them to expose the primitives or fix the elements. I want the label element and the form element and all these things to either just work with custom elements, or let me sort of imbue my custom element with form-ness and label-ness and labelability and things like that. And one day, one day this will happen, I promise. I'll just add — personally, I'm not very optimistic, speaking for myself only. Again, because I think this concern that Rob brought up around shadow roots is a really good example of the larger problem. In v0 custom elements, you could extend input, but you couldn't add a shadow root to it. So what you could actually do was kind of limited. Why? Because it had this native legacy implementation. And this is really the larger concern: these native elements have this long history of C++ implementation and crazy specs from before custom elements was even a thing. So the long-term fix is to expose those APIs and make them available to any custom element that wants them. And maybe there's a shorter-term fix. It probably isn't gonna be is; it may be some crazy thing.
And we know that people really want this, so we've definitely got our fingers on the pulse of that, and we'll be pushing for something. Because I think exposing the guts of input so that you could do it all yourself is probably a 25-year project. Yes, so we probably need something in the meantime, and if is is objectionable, we'll be pushing people toward some better solution. And I'll give a little shout-out to Marika's talk earlier today: all of these discussions are not happening in some secret back room. These are all happening out in the open on public mailing lists. So if you have good use cases — and on GitHub, yeah, on GitHub. So if you're really passionate about it, please go and tell everybody you're very upset that is= is not a thing. Because — I was gonna say, specifically github.com/w3c/webcomponents. There you go. Voice your complaints. It's the only way we get things done: by proving that people actually want something like is=. They don't believe me. All right, let's go to a live question. Thanks. So my question is in fact related to the previous one, but first I'd like to thank you for letting us use the bleeding edge of this spec back before it was recommended. Given the is attribute is gone, what's the migration path that you would propose for currently implemented elements that are using the is attribute, especially those using a specific parsing context, like template, script, table, tr, and so on and so forth? Do you wanna repeat the question? I think we're getting it through the bigger mic. Oh, okay. So, the thing that I did with form, for example — our iron-form was an extension of form — you convert it into a wrapper and then you put content in it. So basically all of the extra is= things that you were doing, you put on a wrapper element. But that, for tr, is kind of... Yeah, tables are tricky. Tables are tricky.
But remember that you can style things as tables with CSS, so you could move to a different element. Can you style something like a template that way, though? Template — yeah, you cannot. So yeah, what Polymer did — it's not super pretty, we're not gonna lie — is this sort of decoration pattern. In Polymer 1.0 we had a template with is="dom-repeat". For compatibility, we did some magic to sort of make that work inside Polymer templates in 2.0, but the way we actually make it work is we have a dom-repeat element; it expects a template inside of it, and it works that way. This is a general pattern that's gonna basically work. Again, it's not super pretty, but it's what you can do. Great, thanks. Thank you for the Summit, it was great. It was great, sure. Yeah, actually you need to bring them back. Yeah, why not? That was a gesture. On the Polymer Project website, there's still a mention of an upgrade tool from 1.x to 2.0. Any update on this? On the upgrade tool? Yeah, there was a mention of the upgrade tool from 1.0 to 2.0. Yeah, so, you know, last year we said we were gonna make this upgrade tool, and then — this is kinda my bad a little bit — it seemed like most of the feedback was, oh, it's really easy to upgrade, and the things that were tricky to upgrade were actually gonna be impossible for a tool to automate. Things like converting a content tag to a slot tag where the content tag actually had a selector: there's no way we can just automatically determine what the slot name should be everywhere and apply it. So it seemed to us for a while that everybody we were hearing from was kind of satisfied with how easy it was to upgrade, and then more recently we heard an uptick in the number of people who were asking about the tool and wanted it.
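The content-to-slot problem described above is worth making concrete: a plain content tag maps mechanically onto a slot tag, but content with a select attribute needs a human to invent a slot name and add slot attributes to every distributed child, so a tool can at best flag it. A hypothetical sketch of what such an upgrade pass might look like; the FIXME placeholder convention is invented for illustration.

```javascript
// Sketch: Polymer 1 -> 2 template upgrade for <content> elements.
// Plain <content> converts cleanly to <slot>; <content select="..."> is
// flagged, because no tool can choose the right slot name automatically.
function upgradeContentTags(templateHtml) {
  const warnings = [];
  const html = templateHtml
    .replace(/<content\s+select="([^"]*)"[^>]*>/g, (_match, selector) => {
      warnings.push(`choose a slot name for <content select="${selector}">`);
      return `<slot name="FIXME(${selector})">`; // placeholder for a human
    })
    .replace(/<content>/g, '<slot>')
    .replace(/<\/content>/g, '</slot>');
  return { html, warnings };
}

const out = upgradeContentTags('<content></content><content select=".title"></content>');
console.log(out.html);     // '<slot></slot><slot name="FIXME(.title)"></slot>'
console.log(out.warnings); // ['choose a slot name for <content select=".title">']
```

This mirrors the answer given below about lint-style output: where the tool can't determine the right answer, it emits a message telling you that this spot needs a manual decision.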
So, also recently, we started converting all the internal elements inside Google from 1.0 to 2.0, and Peter, an engineer on the tools team, has been working on the tool quite recently. It's not very clean code — we've just been hacking at it to do whatever upgrades we could — so we're planning on releasing the version that we have internally to the outside pretty soon. Okay, then do you have a suggestion for linting old elements, to quickly find what has to be changed? Rather than going through each element and searching for specific things — a way to highlight in your code what should be changed. Yeah, we can certainly, in cases where we can't determine what it should be, output a message that says: this needs to be updated by you. Okay, thanks. All right, we're running long on time. Let's do one more live question. Hello. My question is: do we have any tutorials or videos for making something like neon-animation? Because we really loved the neon animations in Polymer 1. I think the question is about neon-animation and its future, right? If it is, we have a blog post that Elliot wrote — you probably saw him on stage before — on the Polymer Summit site that explains what happened to neon-animation and what's going to happen to it. Yeah, so my question is, are there any new videos or tutorials for something like neon-animation? Because in version 2 it is deprecated, right? Will there be replacements for neon-animation as well? Yeah, exactly. I don't think so — the problem with neon-animation is that it's a fundamentally flawed concept that we thought was going to work, and it doesn't. In its current state, it doesn't actually add anything amazing on top of the Web Animations API and its polyfill. And we had the team that actually works on the Web Animations API look at making a new and better neon-animation, and what it ended up being was the Web Animations API.
So it turns out — a declarative wrapper would be amazing, but it just doesn't really work well, and if you want it, it's just a jar of spiders. Yeah, I'll also plug: Valder, another engineer on the team, made a code lab for performant expand-and-collapse animations, and just by going through that, you can see something pretty typical that anybody would want to build and how he does it, and then you can learn from it and maybe apply that to some other projects. Thank you. Let's close out with a question that I love, which is: what would each of you see as your number one request, or a current hole in the web platform — a feature or change that you would like to see made to the web platform to make it better? There's two; I'll go with one and hope somebody else does the other one. I think right now for web components, we're looking pretty good with native implementations, at least in Chrome and Safari. The one piece that seems to be left to make everything much better is theming support in CSS for Shadow DOM. So that's the ::part and ::theme CSS spec. It just got promoted to a working draft, or editor's draft. That was a good one; I wanted to have that one. Oh, you want to take the other one instead? No, it's fine. Okay. So yeah, that would just help everybody theme across their shadow roots. That was mine. For better or for worse, for some reason I care about inputs and forms, despite never actually using a form on anything that I've ever made. So what I would like to see in the web platform is input being less opinionated, and forms being less opinionated about their tag names. So: form caring about more than just input, and input type="color" actually working in all of the browsers, because it's not yet. Yeah, form and input. I would like form and input to succeed at their job. Yeah, so mine is kind of related to that. There's a lot of things that native elements can do. There's a lot of built-in magic.
I've been working on accessibility a lot — the HowTo components that we all worked on. Accessibility was the primary driver behind that work. We were like, how do we build tabs and trees as custom elements and make sure they're really robust and accessible, right? Fully keyboard accessible, doing all the right things with ARIA, everything we can possibly try. And then there are still places where we come up short, because there's some stuff that we just can't do in the browser today — the label element and things like that. You click on a label and it focuses your control. That's a cool magical behavior that there's no easy way to get. You're gonna have to make your own custom label element now, and then you end up reinventing all of HTML just to get all these little features. So I'm really excited about AOM, and other stuff like being able to hook into forms. I think these are probably some of the most common things folks have brought up, and we've been pushing for them for years. I actually think people are listening a little bit more right now. I think there's actually good hope for forms — I think we're finally getting through to some folks there. So yeah, that's really what I want. I wanna make sure the future of HTML is accessible, and that it's easy for developers to do the right things — that it's not some arcane art to make something accessible. I think the biggest frustration I have is when there's a really cool feature and not every browser has it, and of course you have to bring down polyfills. And I think a way that could change is, oddly enough, through payments and commerce on the web, because the more we allow users to buy things and spend money there, the more companies pay attention — large companies that have tons and tons of sites and tons and tons of developers. And so we can start doing more stuff with payments. I know that Chrome payments just came to desktop.
We'll start attracting a lot more attention, and then hopefully companies will start putting a lot more attention into their browsers, or paying attention in standards meetings and stuff. Yeah, people on the team know that the thing I care about the most is performance, and I think I've said this before, but with custom elements we have an opportunity to add things to the browser that make it faster — that let us do things in custom ways, faster. I care about this for a couple of reasons. Obviously I want things to be faster so that we can all make stuff that's awesome and better for users, but there's another reason. You heard yesterday in Alex Russell's talk — he talked a lot about mobile CPUs. Now, he only talked about that for like five minutes, but he works in the same office as us, and we have to hear him talk about it for hours and hours and hours. And I would like things to be faster so I don't have to hear that venting as much anymore. Alex — he's joking, we love when you talk to us. All right, great. So Matt's gonna come on in a second with some closing remarks and some very important announcements, so please be sure to stick around. But thank you so much to the panel. Thank you. All right, so we have an odd tradition, apparently, at these Polymer Summits. If any of you were here last time in London, you may recall that a few people joined us with just about an hour to go, after going through a pretty crazy, hellish airport experience for a couple of days. So this year we've managed to do something even better: a few people showed up — two of them, in fact — just about an hour ago, on a sailboat from Germany, just to join the last hour. So I wanna give them a big hand. And, not to repeat Taylor too much, but first I wanna just say thank you very much. This means very, very much to us as a team. This is the highlight of our year. It's turned into that.
When we did the first Polymer Summit — three... two years ago, actually — we had never put on a developer event before. We've got an experienced team that helps us and makes us look all slick and professional, but we had never done it. And now it's really the highlight of our year; it's what we plan around, and it really means a lot to us. So it means a lot that you're here. It means a lot that more people were at this one than ever before, despite the fact that it's summer and Copenhagen's a little further away for some people. So that's really, really, really awesome. So, just a few things. One: I wanna give a huge thank-you to the speakers and the staff. We're gonna do a little bit of camera work with this one, so if we can do an extended cheer, that would be great. So let's give everyone a big hand. Thank you very much. So we have two different events we need to tell you about. In just two weeks, on the 5th and 6th of September in Krakow, there's Google Developer Days. Look that up on your favorite search engine — hopefully it's Google; tickets are still available. And then, starting exactly two months from today, on October 23rd and 24th, we're gonna have the fifth annual Chrome Dev Summit. We'll be there. It's back home in San Francisco. Tickets are available on August 30th — you are the first to hear that. So definitely show up and see us there. We'll be there; we'll be giving some talks, and a lot of the other people, including our Polymer team, will be there as well in person. So, just two last things. One is the survey that you got in your email just a few minutes ago — it's really, really important to us. We take it incredibly seriously: hence the wider chairs, hence the more bathrooms, but even the content. I personally read every single response we get, and so do a number of other people. So if you can fill that out, that's great for us. It helps us improve these events going forward.
And then the last is one of the other odd traditions we have. At the very first Polymer Summit, we ended up giving away a lot of the bean bags and pillows and everything else, and people started tweeting pictures of them going home — someone tweeted a picture of a Polymer Summit bean bag going around a luggage carousel at an airport. So we give these away each time. If you want a pillow or anything — I think there are only a couple of bean bags left already, because people have figured this out — they're at the swag table. And that's it. Thank you very much. Hope to see you next time.