Hello, everybody. We have a packed house here. We had no idea what to expect. Welcome to the third annual Polymer Summit. I'm Matthew McNulty, the engineering director for Polymer, Chrome DevTools, Lighthouse, and a bunch of stuff like that. So welcome to the third summit. This is going to be our biggest summit ever. As you can tell, it's our biggest venue ever. So we have more of everything. We have more code lab space. We have more space to talk to Polymer team members. We have more space to get food. We're actually really, really responsive to the comments that people leave every single year. So the biggest number one comment from last year, I don't know if anyone recalls, was wider chairs. Unfortunately, fire code says we have to strap them together. We tried to skirt that, but we weren't really successful. Still, there should be a little bit more room for everyone. So we have two full days, 24 talks, so just a few quick notes to get out of the way and then we'll get started. Bathrooms are on the wall behind you. There are additional unisex bathrooms by the front entrance where you came in. And just in case, we actually brought more bathrooms in out in the courtyard as well. That was the second most commented thing last year. Meals and breaks will take place directly across the hall. There's a mother's room and prayer room in the back corner next to the restrooms. But most important of all is our code of conduct. It's incredibly important to us that this is an inclusive community and event. And in order to really get that across, we decided to make a neat video about it this year. So check it out. We want everyone to have the best experience possible at this year's Polymer Summit. This is an inclusive community. No matter your experience or background, you're welcome here. 
We encourage you to be excellent to each other by saying hi to new faces, building on one another's ideas, and reporting any uncomfortable experiences. We have a zero tolerance policy for harassment of any kind. This policy is posted on large signs around the venue, and our full community guidelines are on the event website. Please share your positive and constructive feedback with staff and speakers. Staff and speakers can be identified by their staff or speaker badges or shirts. Let's make this the best developer event ever by creating an excellent experience at this year's Polymer Summit. Thanks. All right, so let's get this party started. I'd like to welcome onto the stage Wendy Ginsburg, Product Manager for Polymer. Good morning. Hey, everyone, I'm Wendy. I'm a Product Manager on the Polymer Project at Google. I'd like to welcome all of you to the third annual Polymer Summit. We're absolutely stoked to be here in Copenhagen for a couple of days celebrating the web platform. It's always awesome to see so many different people at these events, from all over the world, representing all kinds of different companies with all different roles, skills, and passions. And this year is no different. Folks have come from far and wide representing almost 100 different countries. There are people from industry, open source contributors, weekend warriors, and even people who are brand new to web components and Polymer, many of whom we met this morning at the Code Lab. But no matter who you are, where you're from, if you're here in the audience at one of the numerous GDG watch events, or at your own computer on the live stream, we are so glad to have you here. I think I speak for everyone on the Polymer team and on Chrome when I say how much we love these Polymer summits. It's a huge opportunity to get a group of smart, bold, future thinkers in one room to talk about awesome stuff happening in the web platform. 
We also love being able to meet real developers using our products face-to-face to learn more about them and get feedback. Sometimes we don't even find out a company is using Polymer until they reach out at an event like this. If you were at those previous summits or watched the videos on YouTube, you'll look around and notice that they've evolved a bunch over the years. So let's get a show of hands and see where we are. How many of you were with us last year in London? Wow, that's a lot. And what about the first year in Amsterdam? Anyone? Awesome. Great. So to all those who have been to past events, welcome back. And to all the new faces, thank you so much for coming out. The first summit was just one day. And frankly, we didn't even really know if people would show up. But luckily, they did: over 700 of them. At that time, Polymer 1.0 had just recently been released, and all of the summit talks were from Polymer team members themselves. It was the first big long-form opportunity to tell the Polymer story directly to you: what we were doing, what we were trying to accomplish. And we had such a great time that day that we knew we had to do it again. So last year in London, we doubled everything: two full days, more talks, more space, more code labs, more rain, more everything. Polymer 1.0 was in the rearview mirror at that point, and lots of real companies were using Polymer. We wanted to showcase how much our community had grown. So for the first time, we invited some of those companies on stage to tell you how they used Polymer and how important web components and Polymer were to their companies and their workflows. And that brings us to today, to 2017. And things feel a lot different now, even more than they did just last year. You can tangibly feel the web components community coming into its own. So for the first time, we opened up talks to the wider community. And we were blown away by all of the suggestions. 
It took us much longer than anticipated. It took us weeks to go through them all. And that, as you know, even delayed registration a little bit. And just from that call for topics, we found out about amazing community projects and several big companies that were using Polymer. And a lot of them are here today. We, of course, couldn't fit everyone, but we did everything we could to give a voice to as many different parts of the community as possible. These are the community speakers you'll see over the next two days. They're from companies large and small, universities and the research world, and the open source community. Some will be talking about Polymer, but many will be talking about different web components libraries or different web components tools. We're hitting that fun part of the J-curve in the web components ecosystem, thanks to both those longtime web components champions who have done such amazing cutting edge work for so long and the newcomers looking to give a new technology a shot, adding your voice to the community. Our first summit was just two years ago next month, and it blows my mind when I think about how much things have changed since then. The Polymer project itself has gone through a long journey through many distinct phases to get to where it is today. The Polymer project was conceived a few years ago by some engineers on the Chrome team. They wanted a team of web developers who live in the future, who can look at web development not as it is today, but as it will be, and report back to the present to help inform the platform. The first official phase, or the infancy of the project, was an experiment to help prove out the idea of web components itself. It was to work side by side with spec authors and browser implementers to find the most natural way to extend the platform, to hand greater power and flexibility over to developers by giving them access to the browser's component model. 
And they wanted to do this without introducing concepts that would be foreign to the way the web platform itself worked. So we built Polymer primarily as our means to tinker with these new specs, to wrap them together in various ways, to invent new patterns, to anticipate how these specs might be used in the future. And at that point, the only people using Polymer were us, a handful of Google teams, and a little sprinkling of some crazy, brave souls. Oh, also Comcast. Is the Comcast team here? Well, we'll just clap it up for them for taking an early risk. The real promise of web components, though, the real value of baking these new capabilities directly into the platform, couldn't be realized until the majority of users were on browsers that natively supported web components. So this meant trying to help convince other browser vendors of their value and working with them to ship compatible implementations. We couldn't make this case on our own, though. This required developers to broadly make the case through real-world web component usage and actual, tangible business results. So that was our cue, and we decided to transition the Polymer project from a small experiment to a full-blown production grade library. And this was to unlock the power of web components in the wild for everyone. So Polymer 1.0 attracted a lot of new friends in a much larger community. Blue-chip companies like General Electric, startups like Simpla, and platforms and web component libraries like Vaadin. Developers in communities all around the world, from Nigeria to Spain to India and Indonesia, were diving into web components. Fortunately, our friends over at Safari, Firefox, and Edge and across web standards bodies worked hard to hash out a new take on the web components APIs, ones that we could all agree upon, and more importantly, ones that we could all ship. 
So let's take a look at where we were at around this time last year, before V1 of the web component specs. Native web component support was varied and scattered. Template was supported across the board, but other critical features like Shadow DOM and custom elements were either behind a flag or entirely unsupported. But now, the web component specs have finally gelled with custom elements and Shadow DOM V1. The journey wasn't always smooth and it wasn't always as fast as we hoped, but we've arrived at our destination. I mean, look at that, it's all green. Yeah, web components are here. They're native on over one billion mobile devices and everywhere else with polyfills. The web reaches farther to more extremes of geography and network conditions than any other technology. And now web components do too. So, we were able to transition the Polymer project yet again to its next phase: Polymer 2.0. And Polymer 2.0 paid down some technical debt from the specs upgrading and unlocked that really awesome future-forward technology today. A lot of what you're going to hear about at the summit is capabilities being unlocked with the advancements of 2.0. Major companies from all around the world have continued to adopt web components: Nigeria's e-commerce giants, Konga and Jumia Travel; General Electric's Industrial Internet of Things platform, Predix; over 700 projects at Google; India's Ola Cabs; as well as a few you'll hear from today, like Netflix, YouTube, Electronic Arts, USA Today, and Simpla. As a community, we've grown webcomponents.org to have over 1,000 high-quality, open-source components in just over six months. I know many of the people who helped us get there are here in the audience, so thank you so much. Because webcomponents.org is huge. It sees over 60,000 monthly active users. It has well over one million page views a month. 
And over the last six months, we've also seen work on supporting web components happen across the web developer ecosystem, from libraries to frameworks. Major frameworks like Angular and Ember have made public commitments to web components, whether by giving talks on interop or by releasing Glimmer.js with support for web components. Preact has done some amazing work on making it super easy to support web components. And tomorrow, on this stage, Ionic will be here to share how they've been embracing web components as well. So the first goal of the Polymer project was to get web components up the mountain. And well, here we are at the summit. So what's next? Well, we have this concept of a weirdness budget on the team. It's a very scientific measurement system, but basically it states that we can only be a certain amount of weird. As people, we can be as weird as we want. Like, I could bring my pet lizard Julie into the office and teach him how to use the Polymer CLI and no one bats an eye at that, but as a project, we have a budget. And every project has a weirdness budget, because you can only stretch things so far outside the mainstream while still being relatable. And with web components support growing in leaps and bounds, we're finally less weird when it comes to the library itself, you know, super native ES6 classes. But now we have the opportunity to be less weird on another big front. In this next phase of the project, we can embrace a lot of the amazing work that's being done in the broader web development ecosystem as the platform grows in capability, and we can also expand beyond just a singular view of web components to a much more practical place of building apps that users will love. And all of this is thanks to Polymer 2.0, web components V1, the growing support for other new platform features, and the expansion of our community. 
With this solid foundation to grow upon, we can more easily take advantage of some major pieces of the web platform and web community. So I'm extremely excited to announce that as of today, we will be joining the massive, sprawling JavaScript ecosystem of NPM. And we will be embracing the power of ES modules. This is a critical step in bringing web components to the mainstream. There is so, so much to say on this topic, as you can imagine, and I want all of you to hear it directly from one of the folks working on the project himself. So in just a few minutes, Fred Schott from our awesome tools team will share more information on how we're approaching this, why we decided to do it now, and how you can check out a super early sneak preview yourself. And that'll be the first of many awesome talks. Over the next few days, you'll hear about using Polymer with other frameworks, about building Polymer projects with Webpack, using Polymer with Redux, about not using Polymer at all and using other approaches for building web components. You'll hear about how companies, universities, and teams are using web components. You'll learn about how specs even come to be in the first place, not just how to use the platform, but what the platform is. You'll hear from Polymer tech leads on what they've been researching since launching 2.0, about all the collaboration that's happening in the broader web components ecosystem. We'll cover VR, SEO, and server-side rendering for web components. And of course, we'll actually get our hands dirty at the very end with some live coding. So I'll say it again, we have a packed schedule. We're covering a lot of bases and venturing into territory we've never had the luxury of exploring before. And this summit represents a major transformation for the Polymer project. But as the Polymer project grows, we're always going to be sure to maintain our core values. 
Of course, to use the platform, to take advantage of powerful new features being shipped in the web platform. To minimize abstraction, to keep overhead down while maintaining those developer ergonomics that we all care so much about. And lastly, to inform the platform, to work hand-in-hand with browser vendors to drive changes that push the web platform fundamentally forward. I'm extremely proud of the work we've been doing with Polymer. I'm ecstatic to see how far we've all come together as part of the web components community. And I couldn't be more excited for what's to come next. Thank you. So on that note, to kick things off with the first window into our future, welcome one of our amazing tools team engineers, Fred Schott. All right. How's everyone feeling? Exciting, right? All right, well, thank you, Wendy. Thank you everyone for being here. On behalf of the entire Polymer team, I'm thrilled to be here to help answer the question: what's next for Polymer? We have this motto on the team. We're pretty subtle about it, so maybe you haven't seen it. Maybe you've seen it on a poster or a podium, but in case you haven't, our motto has been to use the platform. And four years ago, Polymer launched with a simple mission to invest in the web platform and to make web components fast, accessible, and easy to use. In 2015, we launched our first official version of Polymer, Polymer 1.0, a production-ready library for building with web components. And then at last year's summit, we previewed an early look of Polymer 2.0, which had evolved to match the final web component specs that browsers were shipping with. And so today, native web components are becoming a reality on all major browsers. So that's it, we did it. Nothing left to do, right? Thank you all for coming, get home safe. No, no, of course not. Even as native support continues to grow, there's still plenty to do, plenty of opportunities to make web components easier to use than ever before. 
And so that's why I'm excited to share three big changes that are coming to Polymer. The first is that Polymer will finally be moving off of Bower and joining the NPM ecosystem. NPM includes millions of developers and over half a million packages. And soon, Polymer developers will have access to all of them. But that's not all. Polymer will also be fully embracing JavaScript and ES modules to give us a fully native loading experience across the browser and much better tooling support for developers. And finally, to help you with all this and make your upgrade and your move as easy as possible, we'll be releasing an auto-upgrade tool to make this super, super easy. And so I'm excited to announce today that together, these three changes will become the next major version of Polymer: Polymer 3.0. And even though this is just a very early preview, we wanted to share our first look at what you can expect from our next release. So we're really excited, but before we get into it, I wanna take a quick look back at how we got here. When the project kicked off in 2013, there were plenty of challenges and problems we had to solve. And there were two problems in particular that I wanna talk about today: component loading and package management. For web components to work, we needed a fast native loader in the browser. It had to support deep dependencies that could load other people's code and build off of each other. And while inlining and bundling have always been options on the web, we really didn't wanna force an extra build step just for development. And so these were our requirements. And there wasn't much to choose from that could do all this natively. The browser had always been able to load different scripts, but those scripts could never then load their own dependencies. And there were whispers of a JavaScript module system that was in development, but the project was still years away from any consensus about what it might look like or how it might behave. 
And so instead of waiting, we proposed a new, simple, native HTML loading system called HTML imports. And this would give the browser the ability to load HTML on demand. And HTML can load scripts and styles. So essentially with one loader, you could load JavaScript, CSS, HTML, all of it. We liked this because it was incredibly simple and straightforward, which allowed us to move quickly from prototyping to spec writing and onto implementation in the browser. So loading was handled thanks to HTML imports. But we also needed a way to package web components and to make them easy to share across projects and teams. And remember, this is 2013, so a package manager for the web was kind of a crazy idea at the time. Handling JavaScript dependencies and version updates was still a pretty manual process for most teams. But we wanted to build an ecosystem, which meant that we needed a way to manage dependencies. And because two versions of the same web component can't exist on the page at the same time, we knew that we needed a way to resolve version conflicts and install a flat dependency tree. And then lastly, we wanted web components to work anywhere with any framework. So it was really important to us that we chose an active community that was available to the entire web. And so we chose Bower. Bower was new on the scene, but it was already growing really fast. It was able to manage dependencies, resolve version conflicts, install flat dependency trees. And best of all, its goal was to be a package manager for the web. So this aligned really nicely with our mission and our goal to be accessible to any web developer and any framework. So we had our native loader, we had our package manager. With these two pieces settled, we launched our first version of Polymer, and we continued to rely on them to this day. But a lot can change in four years. 
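The "two versions can't exist on the page" constraint mentioned above can be sketched in plain JavaScript. This is a hedged illustration, not the real browser API: `RegistryStub` is a hypothetical stand-in that only mimics the real `CustomElementRegistry` rule that a tag name can be defined once.

```javascript
// Minimal sketch of why flat dependency trees matter for web components.
// RegistryStub is a hypothetical stand-in for the browser's
// CustomElementRegistry, which rejects duplicate tag-name definitions.
class RegistryStub {
  constructor() {
    this.definitions = new Map();
  }
  define(name, ctor) {
    if (this.definitions.has(name)) {
      // Mirrors the browser's error for a duplicate tag name.
      throw new Error(`'${name}' has already been defined`);
    }
    this.definitions.set(name, ctor);
  }
}

const registry = new RegistryStub();
registry.define('pretty-button', class {}); // direct dependency, one version

let conflict = false;
try {
  // A nested copy of the same element (a second version) tries to register.
  registry.define('pretty-button', class {});
} catch (e) {
  conflict = true;
}
console.log(conflict); // true: the duplicate definition is rejected
```

With a flat install, the package manager forces every consumer to agree on a single version up front, so only one `define` call for a given tag name ever runs.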
And a lot of those core ideas that we had in 2013 have finally gone mainstream: thinking with components and encapsulating styles, native shadow DOM, native custom elements. And so the last year has been a really good opportunity to look back and reflect, to look back at HTML imports and Bower specifically and ask: are these still right for Polymer? Do they still meet our needs? And are these still the best choices for web components going forward? And so I already gave away the ending. I'm sorry, spoiler alert. But in addition to exploring these two things and what they'll look like, I also want to show you how they do an even better job at solving those same requirements we've had since the beginning for this year's web development. So let's start with packaging, because front-end development has come a long way since 2013. And probably the biggest change has been the huge growth of JavaScript. Node has exploded onto the scene, and NPM, its package manager, came with it to build this one community for everything JavaScript. So that includes Node, that includes the web, that includes tooling, even space. I mean, NASA is using NPM to develop spacesuits. That's the final frontier of JavaScript. And so NPM has brought everyone using JavaScript together into a single shared ecosystem. Bower hasn't been doing as well. Most of you are probably aware that when you go to install Bower today, you'll see this message, along with a note explaining that Bower is deprecated and that you should move to NPM instead. So while the Bower community was growing fast back in the day, today almost everyone has moved over to NPM, and the Bower project is winding itself down. But you know what? Bower's still a good package manager. Its version conflict resolution is still best in class. Its flat dependency trees are guaranteed to be as small as possible. 
So I don't know, maybe this makes me some sort of JavaScript hipster to say, but Bower's still a really great technical choice for web components, which is why we've stayed with it for so long. But the world has moved to NPM, and NPM has become the world's largest community of packages and developers. I mean, look at those numbers, that's absurd. Those are some crazy numbers. And so by moving to NPM, each of these packages becomes available for your project with no extra hassle or setup. But the package manager itself has never really supported version conflict resolution with flat dependency trees. And so with NPM, you can end up with multiple nested versions of the exact same package. And while this can be fine for Node, on the front end it bloats and slows your application. And with web components, it can completely break them, because web components expect to be unique on the page and can't just be overwritten by conflicting versions. So luckily, Yarn came onto the scene last year as an alternative client for NPM. And our team actually worked with them early on to add the support we needed for web components. So with Yarn, you get that same great NPM ecosystem, but now with a package manager that supports everything we need for web components, just like Bower. Getting started is super easy. All you need to do is install Yarn and run yarn init. And if you end up running our automated tool, which I'll get to in a little bit, you get to skip all this because it'll basically do all this for you. But it's still good to understand what this will look like, so I'm gonna go through it anyway. So yarn init will generate a manifest for your project that will look a lot like the bower.json you're used to. And some dependencies have the exact same name on Bower and NPM, which is great because that means your code actually doesn't need to change very much. So with Moment, for example, one path works on Bower, and the same path works on NPM. No need to change the code. 
But some packages do change their name. All Polymer packages, for example, are nested under the Polymer namespace, and that wasn't the case for Bower. So when that happens, you need to remember to update any import paths as well. So polymer on Bower becomes @polymer/polymer on NPM. Another thing to keep in mind is that if you're building an application and you reference bower_components directly, that directory is no longer gonna exist. Yarn installs everything into node_modules. So just be conscious of that and make sure you update any paths. And that's it. Then you can run npm... sorry, not npm, yarn install to install your dependencies from NPM. But remember that Yarn doesn't use that flat dependency tree by default. So you actually need to tell it to install flat. Just add this option to your package.json, and then when you run yarn, it'll install all your dependencies for you. If anything comes up, any version conflicts, it'll help you resolve them just like Bower did. But Yarn also has a ton more features on top of Bower. So for example, really smart caching, really fast installs, lock files to freeze your dependencies, workspaces for cross-package development. So there's a ton of cool stuff here that I know I'm really looking forward to using. And I'm looking forward to the move to NPM in general, because it really helps us complete that original promise of web components: to create elements that work on any framework, in any browser, on any project. And asking developers on NPM to set up Bower just for a single Polymer element or package, that's been a hard sell. So this is what I'm most excited about, honestly. As a developer, the platform should be able to meet you where you are. If you're on NPM, then Polymer should make it really easy to work on NPM. And with Polymer 3, that's exactly what we're gonna do. So that's packaging, thanks to NPM and Yarn. 
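Putting those steps together, the resulting manifest might look something like this. This is a sketch: the package name and version range are placeholders, and only the `flat` field is the Yarn-specific option the talk describes.

```json
{
  "name": "my-app",
  "version": "1.0.0",
  "flat": true,
  "dependencies": {
    "@polymer/polymer": "^3.0.0-pre.1"
  }
}
```

With `"flat": true` in place, `yarn install` resolves every package to a single version in `node_modules`, prompting you to pick one whenever two dependencies disagree, much like Bower did.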
Now let's talk about module loading, because with all the excitement around JavaScript, the plan for a native module system has continued to move forward. Different ideas were considered, different features debated. And finally, the spec was finalized last year for ES modules. Meanwhile, HTML imports haven't progressed as much. Chrome and Opera added support really early on, but other browsers haven't moved forward. And the reality is most other developers aren't really asking for them. We'd really hoped that they'd catch on, but that just hasn't happened. And that means that polyfills are still required in most browsers and will probably continue to be required for a long time. And so we revisited ES modules, now that they're finalized for the browser, to see if maybe they could meet our needs. Essentially, ES modules unlock two new features for JavaScript: import and export. Export lets you tag things as module exports, like this element variable in polymer-element. And then the import keyword lets you import it from anywhere else in your application. So really straightforward and really explicit about what you're using. And so we revisited this, and it turns out that they actually meet every one of those original requirements. Not to mention the JavaScript community is really enthusiastic about them. And not only that, browsers are excited about them too. Safari has already shipped native support, Chrome and Opera are in beta, and Firefox and Edge aren't far behind in active development. And so back to this chart: ES modules have finally become a viable alternative to HTML imports. And because we decided to go with them, well, you guys see this, right? Look at that chart, it's completely green. Do you guys know what that means? That means that absolutely no polyfills are required on those browsers for the first time ever. How awesome is that? Yeah, so cool, so exciting. So pumped for that. 
This is the result of years of work from developers across all these different browsers, and we're just so happy to see it all coming together. Okay, so what does this actually look like for Polymer? Well, let me show you with an example. Let's convert this basic Polymer 2.0 element I have called PrettyButton, and let's move it to Polymer 3.0 on NPM and JavaScript modules. So there are actually only three things you really need to do: update your exports, update your imports, and move your template into the class definition. Really straightforward, pretty simple. Let me show you what I mean. So for exports, instead of attaching things to the global shared window, you get to be really explicit and tag it as an export for that file. So now we can change this to export PrettyButton directly from the file, and that's it. That's all you really have to do to change it. It's really easy, right? Super easy. Even this baby knows how easy that is, look how excited he is. I mean, I'm not sure if this baby knows what JavaScript is, but I'm 90% sure that this baby is excited about HTML, sorry, JavaScript exports. Okay, let's keep going. Imports. We're gonna switch to JavaScript imports, and again, it's super easy and straightforward to do. It's gonna look a lot like what we're already used to. So you can call imports with the exact same path, the exact same type of path, and you're gonna import the element from it. And this is important, because if you've already worked with NPM, then you might be used to importing packages by name, but modules in the browser expect to import a file by path. And so we're gonna keep the same path-style imports we've always used in Polymer, so they can work natively in the browser, no bundling required. You can still bundle if you'd like to, totally up to you, but it's not a requirement. You can't really see that path, but it's almost exactly the same. And finally, you're gonna need to move your template. 
So traditionally, Polymer templates have always lived alongside the class definition, and then behind the scenes, Polymer would connect the two for you. But now that we're moving entirely to JavaScript, this needs to change. The template is gonna have to move. And so we can move it ourselves directly into the class definition. The class is already where we describe properties and observers and mixins, and so it makes a lot of sense that this is where we would describe the look and feel of the element as well. And so all we need to do is move the template in, and we are now officially completely off of HTML. And don't worry, I'm just using a simple string here for an example, because I kind of know what you're thinking, that's a little gross, right? I totally get it, you're not wrong, I actually agree. Writing HTML in its own file format is much nicer than this. Editors understand it better, syntax highlighting is easier. So we definitely understand we're losing something here. But bringing our templates into JavaScript also gives us a lot of new options, including the option to go beyond just simple template strings. So I won't say much more now, but if you're interested in learning more, I'd highly recommend you check out Justin's talk tomorrow on expressive templates. That's all I'm gonna say. Okay, so exports, imports, templates, all really simple, straightforward changes. And now we can bring it all together. So there's the example we were just looking at. All we now need to do is add our import and make sure we reference it properly. Add our export, and that's it. There's your new Polymer 3.0 element. JavaScript module, it's got everything. It looks exactly the same just with a few small changes. And if you're using hybrid mode, don't worry, Polymer 3.0 is gonna work with that too. And so by moving to JavaScript modules, we finally get a true native loader for the web. 
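The three changes described above (export, import, template in the class) can be sketched together. This is a hedged stand-in so the pattern can run outside a browser: the module statements and the real Polymer base class (which would come from the @polymer/polymer package on NPM) are shown only in comments, and `PolymerElementStub` is a hypothetical plain class that replaces it.

```javascript
// In the real pretty-button.js these would be module statements:
//   import {Element as PolymerElement} from './node_modules/@polymer/polymer/polymer-element.js';
//   export class PrettyButton extends PolymerElement { ... }
//   customElements.define('pretty-button', PrettyButton);
// Below, a plain class stands in for the Polymer base class so the
// template-in-class pattern itself can be shown on its own.
class PolymerElementStub {}

class PrettyButton extends PolymerElementStub {
  static get is() { return 'pretty-button'; }
  static get template() {
    // A simple string template, as in the talk's example.
    return '<style>button { border-radius: 4px; }</style>' +
           '<button><slot></slot></button>';
  }
}

console.log(PrettyButton.is); // prints pretty-button
```

The template now lives next to the properties and observers it belongs with, which is what makes tooling like bundlers and linters able to see the whole element as one JavaScript module.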
And Polymer does a lot in JavaScript already, so just a few small changes are all you need to get there. And just as importantly, this unlocks the entire world of JavaScript tooling for Polymer 3.0. So you can now use webpack to bundle. You can set up Babel to compile for older browsers. And tools like ESLint and Prettier and all those JavaScript tools will work just out of the box, without any extra plugins or configuration needed. So just like with Bower, HTML imports were also limiting how well we could work with other projects. And asking JavaScript developers to use HTML imports and to download special polyfills just for a single Polymer element, that's also always been a really hard sell. So JavaScript modules are just another way we're growing to meet developers where they are. And this is great for everyone. Polymer developers, you can now easily access all that JavaScript tooling on npm. Non-Polymer developers, you can now use that really cool Polymer element you just found. And Polymer package authors specifically, you now have this chance to reach millions of new users on npm. So that's just a brief summary of the changes that are coming to Polymer 3.0. And the last thing I wanted to share is something new, something we've created to handle all this for you. Because no one likes painful upgrades. And if we decided to change a lot of things at once, it'd be impossible to make this move safely. So npm, modules, and then a bunch of changes to Polymer on top of that, now that wouldn't work. So intentionally, the core Polymer library actually isn't changing much at all for this 3.0 move. How we use Polymer is changing, definitely, as we move to npm and JavaScript modules. But the Polymer library interface, its behavior, hybrid mode, how it all works, that's all been changed as little as possible.
And that's allowed us to focus on just the things we need to change and create a new automated upgrade tool for Polymer 3 called Polymer Modulizer. This is really exciting. Polymer Modulizer is still under active development, but as you can see, it does a lot of things already. It can generate your package.json, and it can map from Bower to npm. And I don't have time for a demo, but the most important thing to take away is that it's awesome. It's available to try out today. And the reason we know it's awesome is because for the last few months, we've been eating our own dog food. Does that phrase translate in Danish? I just want to be clear, I don't actually eat dog food, but the point is, just the same, we use our own stuff on the Polymer team. We're already running Modulizer on our own suite of Polymer elements. And at this point, it's starting to look pretty good. And so I have one last exciting announcement to share. As of, hopefully, about five minutes ago, all of the official Polymer elements have been automatically modulized and published to npm, available today as a part of our early preview of Polymer 3. Super exciting. So please go check them out, play around with them. I'm sure you can tell, we're all really excited about this change. This has been years of work and we're just really excited about where it's going. npm is the package manager for the web, and so we're really excited to join that community. ES modules finally provide a native loading experience that's going to work across all browsers. And we have an automated tool, so you don't even have to worry, this is going to be a really easy change. So I'll end on the most important part, right? How can you try this out? Well, the first thing you need to do is install Yarn. Yarn is how you're going to work with Polymer on npm, but it's also just a really good client as well. And then here's what's available today.
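To make the Bower-to-npm mapping just mentioned concrete, here is a rough sketch of the kind of rename Modulizer automates. The `@polymer` scope is real for the official elements on npm, but the mapping rule below is an assumption about the general shape, not Modulizer's actual implementation.

```javascript
// Map a Bower dependency name to a plausible npm package name.
// Official Polymer elements moved under the @polymer scope on npm;
// the prefix list here is illustrative, not exhaustive.
function bowerToNpm(bowerName) {
  const officialPrefixes = ['polymer', 'paper-', 'iron-', 'app-'];
  return officialPrefixes.some((p) => bowerName.startsWith(p))
    ? `@polymer/${bowerName}`
    : bowerName;
}

bowerToNpm('paper-button'); // → '@polymer/paper-button'
bowerToNpm('some-community-element'); // → 'some-community-element'
```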
So we have an early preview of Polymer available on npm, and an early preview of all those elements, automatically converted. And package authors, if you have any packages out there that you've published, please check out our Modulizer on GitHub. It's also available on npm, but please check it out and give us some feedback. We're really excited to see how that works. And just remember, this is an early preview. We're doing this because we really want your feedback, but work is still in progress. So some bugs, or maybe some missing features, are still to be expected at this point. And don't worry about whether you should use Polymer 2 or the Polymer 3 early preview. Keep using Polymer 2 for anything in production. Remember, there's going to be this automated tool, so whenever you do make the change, it's going to be super easy, so no rush. And this is just a rough timeline, so don't hold us to it, but this is a pretty rough look at where we're going to be going. So browsers are going to continue to move forward. We're going to be really focusing on documentation and tutorials going forward, improving the Modulizer and improving our overall tooling support. So a ton of exciting stuff coming down the pipe. So check out the Polymer blog for any news and updates on Polymer 3. We're going to have some great blog posts coming out during the summit and going forward this year. And if you have any questions, and I'm sure you do, totally great, we have a lunchtime brown bag Q&A. So grab your lunch, come to the main stage, and we'll have a few people up here answering any questions you might have. And so that's it for me. I'm so excited to get these things out there, to get people playing with them, and to join npm and the JavaScript community. Thank you all. So just a quick correction, the brown bag is actually going to be in the code lab room. So you'll have spots to put your lunch and everything like that.
So at lunchtime, about 15 minutes after lunchtime starts, we're going to have it in the code lab room. You can ask all your questions there. Additionally, there will be Polymer team members in the Ask Polymer lounge the entire time as well. But if you really want to get deep into modules, you can go over there at lunchtime. So one of the interesting things about Polymer is that a lot of times we have no idea who's using it until they reach out. The call for topics that we put out a few months ago actually led us not only to a bunch of the speakers that we have today, but to finding out about companies that were using us that we didn't even know about. Eric Bidelman wrote a Chrome extension that lights up if a site uses Polymer, so we use that sometimes too. And using that about a year ago, we found out that Electronic Arts, the big game publishing company, had started using Polymer. And so about five months ago, some of us went up there. Actually, a lot of us went up there. We rented a really big van to go from the airport. Taylor rented it, and he also got us a very terrible hotel with a casino. But we met the Electronic Arts folks and saw just how much they were using Polymer, and it was really, really awesome. So even though they promised us that we would be able to play video games and didn't deliver, we still invited them to speak today. So I'd like to welcome to the stage Alex Zagheb from Electronic Arts. Hey, how's it going, folks? My name's Alex Zagheb, and for the last eight years, I've been working with EA as a technical director. And I'm here today to share a story of how web components and Polymer have really facilitated a monumental transformation of web at EA, from the dark ages of building one-off pages and transient sites to the blissful world of reusable components and rich, engaging player experiences. So before we dig in, I'd like to set the stage briefly about EA's place in the world.
We are a gaming company that is driven by our core purpose and values: to inspire the world to play and to be the world's greatest games company. We deliver on this purpose by being organized into this concept of studios, such as BioWare, DICE, and EA Sports, to name a few. These 22 studios comprise approximately 10,000 people in 30 locations worldwide. Each studio has one or more franchises, and a franchise is kind of like a movie series. Each game in a franchise usually has a bit of a different story, characters, et cetera, but generally follows some consistent thematic elements or an overall base fiction. Many of you have heard of franchises such as Dragon Age, Mass Effect, Battlefield, and FIFA. EA's player-facing web is managed and delivered, for the most part, in a centralized model by the roughly 100-person team that I work with, Pulse. Through this centralization model, we can achieve efficiency through economy of scale in our efforts, and overall just ensure an engaging and consistent experience across that diverse EA ecosystem. This means that our group must meet the needs of those 22 studios, totaling over 50 franchises that release 20-plus games to our players every year. So today I'd like to go over three topics. First, our journey and decision-making process of how we ended up with web components and Polymer. Second, an overview of the design system and component library we've created over the last two years. And last, some highlights of how we've been applying this design system and how it's been a huge win for EA and our players. So let's start with the story of how we landed on web components. So we need to go back about two and a half years, basically a decade in front-end world, to February 2015, when we really started our journey with web components and Polymer. EA's approach to web was very different back then. We embodied Conway's law to the fullest in how we delivered our disjointed web ecosystem.
And for the most part, we designed and developed EA's web more like an internal agency, one project, one site at a time. This siloed and disjointed web was really a sad place for our players, and equally a sad place for our designers and developers too. We were spending 80% of our time on commodity web, such as navigation, footers, news articles, media galleries, the video player, and pre-order and buy pages, leaving only a fraction of our time to innovate, excite, and engage our players with custom game-specific experiences. So naturally, in the world we wanted to live in, we would flip this allocation of time and focus around and spend the majority of our time on the fun, engaging stuff instead. So we started really thinking and talking about, okay, how do we get from this 80-20 bad to this 80-20 good? And voila, the Network Design System project was born. Project name aside, this was really about applying well-known concepts of a design system, a pattern library or a UI framework to how we did web. The idea was that by having all of our team members, from product managers, to designers, to developers, to QA, embracing the complexity of the web with a component-based approach, we could drive efficiency and make our commodity core web features turnkey. But before we started to hit the keyboard, we took an introspective look at ourselves. What other requirements, needs and wants did we have beyond those of a standard component-based design system? What special challenges did our context bring to the table? In the end, our special needs and wants for the NDS distilled down to these four items: number one, to facilitate deep theming capabilities; two, to work with any language and any framework; three, to support our micro-site architecture; and four, to deliver user interface as a service. So let's look at each of these in more detail, starting with facilitating deep theming capabilities.
Now, ideally we would have a solution that worked with our entire spectrum of branding and theming needs, from one extreme being the broad, out-of-the-box kind of EA corporate brand, to the opposing extreme, being one that is heavily themed in a very focused and specific manner for a single game. And in some cases, meeting the very stringent style guide requirements and approvals from a licensor, such as the case with Lucasfilm for the Star Wars franchise. Okay, number two, let's talk about our desire for this solution to be language and framework agnostic. Recall, this is early 2015 still. And our group at that time was the result of several internal team mergers and reshuffles. And as an outcome, we had a plethora of technologies, languages, and paradigms in play, from frameworks and libraries such as Drupal, Symfony, Grails, Spring, Angular, Dropwizard, and even jQuery, to languages including PHP, Java, and Groovy. And content management systems, such as Adobe Experience Manager, Even Yet, and Alfresco. We had technology divergence. So unsure where we would end up with our stack in the future, we didn't want to get locked in with any one particular back-end or front-end framework and language. This must work for the unknown future us, as well as the rest of EA. So all right, number three, our micro-site architecture. We had, and to some extent still have, many discrete web applications that have their own lifecycle, teams, budgets, et cetera, but are assembled together to be perceived as kind of one single cohesive player experience. What this means is that some design elements, such as global navigation, responsive UI breakpoints and grid, login flow, and the pre-order page, need to be centrally controlled and consistent across all of our sites, whilst other design elements can be more customized for each site. Okay, last up, number four, user interface as a service.
So given this micro-site architecture, how do we avoid projects that aren't being actively developed falling behind and getting out of sync with the rest of the EA ecosystem? How do we keep all the projects pointing to the latest version of this design system with minimal effort? By delivering our UI conceptually as a live service, we hoped to address these issues and really embody the mindset of leave no site behind. Okay, so given our goals to build this component-based Network Design System, coupled with our four special needs, where do we start? I mean, what's the approach? The architecture, the systems, the tech, the tools, I mean, there are a lot of questions. So, like any developer should be, I am lazy. And knowing that our general problem space of building a component-based design system was not a novel concept, it made no sense to start from scratch. There's so much out there and there are so many people to learn from and leverage. So we began reading, listening, experimenting with and consuming anything and everything we could possibly get our eyeballs, ears and hands on in regards to the topic of style guides, design systems and UI frameworks. Serendipitously, a new podcast from styleguides.io was out and posting fresh episodes on this topic right around early 2015. What a bonus. So I'd like to talk about three particular paradigms and systems that were quite inspirational and key to our path to web components: Bootstrap and friends, Lonely Planet, and the government of the UK. Starting with the ever-popular Bootstrap and, more generally, UI frameworks that follow the same approach. So here's a simplified example of what integration of a Bootstrap-like component library might look like. You include a pre-bundled CSS and JavaScript file from some CDN location and off you go, using the components the system provides. Seems easy, seems awesome, seems good, right?
So let's use this pagination component as an example. It's a good one because it has some reasonably complex UI, different states, some brains inside and it really needs to interact with other components and code to actually do something that's meaningful. So let's see how we integrate this component. We start by going to the documentation for the paginator, find the sample HTML snippet, then we copy and paste this large chunk of code into our application, okay? You know, it's obviously not good from a reusability and upgradeability standpoint as we have essentially forked this chunk of code and will now need to manually merge those updates as they come along. Additionally, there's little to no encapsulation. All of the internal structure and all the style are exposed to me as an integrator. I mean, what really is the public API? Is it all of it? So we're not done. We still need to hook up some behavior now to this pager by means of JavaScript. And this is generally completed by grabbing a reference to the root element of the component and passing it to some factory or constructor as shown here. Okay, so the Bootstrap model shows us one way to do it. Let's move on to Lonely Planet and see their approach. Much like us, Lonely Planet had a microsite architecture comprised of many sites, each with their own stack and team around them that need to work together to provide a single cohesive user experience. Their initial approach was to create a shared UI layer following the Bootstrap-like model we just explored. And in this diagram, taken out of a talk by Ian Feather of Lonely Planet, you can see the two core problems around risk and reuse depicted. The shared layer has the most reuse, but as changes here can affect all the applications that use it, risk is high when making modifications, especially without a clear API boundary. Due to this lack of API, you really don't know what people are doing with your component. 
So over time, engineers naturally will shy away from making changes and doing things in the shared layer because they're afraid of breaking stuff. So the shared layer shrinks and atrophies while the site-specific layers grow, basically the anti-vision of any shared component UI system. So what Lonely Planet did with their design system, dubbed Rizzo, was move away from the copy-paste approach by encapsulating each component into a Ruby on Rails helper that took two parameters: one being the name of the component, and the second being the clean and minimal set of data that the component needed to render. This is much, much better. So by implementing this component layer and API between the shared layer and the specific sites, they were able to reduce or remove the risk of making changes to those components, as well as ease the overall integration effort significantly. So the Lonely Planet folks really plus-one'd the Bootstrap model with their Rizzo system. Naturally, this is my face after coming across Rizzo, so excited and essentially ready to lock in and go with this approach. But not so fast. Sometime after Rizzo had been humming along in production, Ian Feather put up a blog post titled, What We Would Change About Rizzo. First, the solution was tied very tightly to Ruby on Rails and ERB templates, and hence was not portable or usable by applications that employed different languages or frameworks. He hypothesized about using Mustache templates or similar to mitigate that issue. Second, all of that CSS, HTML, JavaScript and Ruby that made up Rizzo was bundled and integrated into each site via a Ruby gem. So each update to Rizzo meant you had to go through a dependency update, build, test and release cycle for each of their 10-plus integrating sites. Okay, so Lonely Planet's Rizzo was an excellent overall approach: a clean API and data-driven components.
But before we move on, being Canadian and all, we figured it would be a good idea to check in with the Queen over at the government of the UK. Inspired by the Lonely Planet approach, Edd Sowden, working with the government of the UK, really wanted to resolve that issue of the effort and cost of propagating changes from the component UI layer into the integrating sites. So building on the core approach of Rizzo, they added the concept of a template resolver, whereby all those ERB templates were moved out of the gem and onto a shared network location, and are then lazy-loaded from that location and cached locally for a short period of time. This, combined with pre-bundling the CSS and JavaScript onto that CDN location, allowed changes to be propagated with minimal effort to all of their apps. As you can see, the UK folks took the already awesome Lonely Planet approach and plus-one'd it big time with the addition of that template resolver. Now, we got quite excited about this approach. It really felt like a viable option for us to start heading down. But before we dove in too deep, we went back to our four special needs just to make sure and see how things measured up. So number one, deep theming capabilities. Now, reminder, this is still early 2015. So CSS custom properties were only available in about 10% of users' browsers. Okay, so I'm not sure how we can do this exactly. I guess we can figure something out. Perhaps each site can build their own CSS override file that just has the CSS rule sets that need to be modified to theme those components. Hmm, okay, but what happens when we refactor some CSS or we introduce a new component? I guess we'll need to rebuild those override files. Okay, well, I mean, we've got some tools to build, a process to document, and people to train. All right, number two, let's see. Working with any language, any framework.
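The template resolver idea just described, fetch from a shared location, cache locally for a short time, can be sketched in a few lines. Everything here is illustrative: the URL, the TTL, and the function names are assumptions about the general shape, not the GOV.UK implementation.

```javascript
// Resolve a template by name from a shared network location,
// caching the body locally for a short TTL so updates still
// propagate to every integrating site within about a minute.
const templateCache = new Map();

function getTemplate(name, fetchFn, ttlMs = 60_000, now = Date.now) {
  const hit = templateCache.get(name);
  if (hit && now() - hit.fetchedAt < ttlMs) return hit.body; // cache hit
  const body = fetchFn(`https://shared.example/templates/${name}.erb`);
  templateCache.set(name, { body, fetchedAt: now() });
  return body;
}
```

A second request for the same template inside the TTL is served from the cache, so sites get fresh templates without a dependency update, build, test and release cycle.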
Well, I mean, we could just use Mustache templates coupled with the same UK template resolver approach. That would do the trick. We'd just need to build client libraries now for every language that we need to support. Okay, so that's PHP, Java, JavaScript, I mean, at minimum. And then ideally, we'd have first-class integration into the front-end and back-end frameworks that we might use. So let's see, that's at least four frameworks. Oh boy, there are already a lot of open questions, plumbing and miscellaneous pieces to worry about here. Now remember, I'm a lazy developer. I don't want to write and maintain code if I don't have to. I really wish there was just an off-the-shelf solution that I could grab, read some docs, and have everything I need in one nice cohesive package. Okay, so back to the drawing board. What else? What else? What else? I mean, we have designs for all our components now. We need to start getting on some code really soon. So I remember this hipster tech that had been brewing for what feels like years now. Sounds cool on paper, web components, and Google Polymer, I think it was. Oh yeah, that's right. I remember there was a talk at Google I/O in 2013. Oh yes, now I remember. A fun-looking toy, terrible performance due to some nasty invasive DOM polyfills. Basically, Chrome only. I mean, way too hipster for this job. I mean, this is all of EA's web hinging on the decision. I need a solution that is ready for production at EA's scale, like today. But wait, what is this? Polymer 1.0, just released, like now. Production-ready web components. Really? All right. Sounds good at first glance, backed by Google. Focused around components and not full-blown applications. All right, chill out, Alex. Back to our special needs first and let's see how things measure up. So number one, deep theming capabilities. Well, we get theming right out of the box using CSS custom properties and mixins, with a shim that works everywhere. Cool.
We can theme these components at runtime too. No build step necessary. That's pretty awesome. All right, number two, working with any language, any framework. Well, I mean, it's really just an HTML element at the end of the day, with an almost standard DOM interface. So hypothetically, anybody or anything that can put an HTML tag into a document could technically use our components built with Polymer. Oh, and all the CSS, HTML and JavaScript is bundled up together into an HTML import for client-side inclusion, so no tie-in to any specific back-end or front-end language or framework needed. Awesome. This is sounding really good. All right, number three, support our micro-site architecture. Well, it seems that we can control the API however we want on a per-component basis, so that should work well. And last up, user interface as a service. Well, it's just client-side integration. HTML imports seem to have some of the basic working pieces to make this happen. So yeah, this is looking really good. So my mind is blown at this point. An off-the-shelf solution that meets all of our special needs and lots more, tooling, docs, et cetera. Less code for my team to write makes me very happy. And a bonus is that most of the gnarly parts of Polymer are promised to become native web platform primitives sometime soon too. I mean, a framework whose roadmap states that it wants to get smaller, do less over time, and eventually may not even need to exist. I mean, that's amazing. So with Polymer, this hipster web components thing just became our top contender. So compared with the already awesome approaches of Lonely Planet and the government of the UK, web components and Polymer 1.0 was like a plus 100. So in September of 2015, we decided to officially lock in with web components and Polymer as the core technology for the Network Design System. Now, let's fast forward two years down our journey with web components, Polymer and the NDS. You know, what did we end up building?
How did our four special needs hold up in real life? Did web components and Polymer meet my mind-blown expectations? I'm pleased to say that we have approximately 75 components that are arranged into one of six component families, which are just logical groupings of components based on their purpose. Let's look at a few examples. Starting with our most commonly used component, our call-to-action or button component. You know, it's quite polymorphic and it supports different types for text, text with icons, images, and so on. And this component also comes with Google Analytics tracking out of the box, ensuring that every button on every site sends consistent and meaningful data. Next up is our pagination component, something with quite a bit more logic and state than a button. To me, this paginator really showcases the power of web components. All that complex structure, style and behavior is nicely encapsulated away behind a minimal, clean, declarative and imperative API. In this example, we simply listen to that page-change event and do what we need to do with the data. We also have some very high-level components that are almost like encapsulated mini applications all on their own. This newsletter signup has various states for anonymous users, authenticated users, and already-signed-up users, and it provides a multi-step flow complete with error handling, localization, and integration with a few external service APIs. All right, so now with that sample of components in mind, let's review our special needs again and see how we've applied web components and Polymer to bring these to fruition. Starting with facilitating that broad spectrum of brand specificity and fidelity that we needed. We now have about 300 CSS custom properties that are set at the system-wide level and are there mostly to control color and typography.
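The paginator's narrow, event-based API described above can be sketched like this. The element, event, and property names are assumptions about the general shape; a tiny emitter stands in for the real custom element so the sketch runs outside a browser.

```javascript
// Stand-in for a paginator element with a minimal event-based API.
class PaginatorStub {
  constructor(totalPages) {
    this.totalPages = totalPages;
    this.page = 1;
    this.listeners = [];
  }
  addEventListener(type, fn) {
    if (type === 'page-change') this.listeners.push(fn);
  }
  // A real paginator element would fire this when the user clicks a page.
  select(page) {
    if (page < 1 || page > this.totalPages) return; // ignore out of range
    this.page = page;
    this.listeners.forEach((fn) => fn({ detail: { page } }));
  }
}

// Integration is just: listen to the event, use the data.
const pager = new PaginatorStub(10);
let currentPage = 1;
pager.addEventListener('page-change', (e) => { currentPage = e.detail.page; });
pager.select(3);
```

All the complex structure and style stays behind the element boundary; the integrator only sees the event and its detail payload.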
We love that even back in 2015, when CSS custom properties were not widely available, the shim that came with Polymer 1.0 allowed us to use this powerful tool to theme dynamically at runtime with no build necessary. In fact, we now control these primarily through our content management system. So a designer and a product manager can spin up a site and theme it without any developers involved. We also have many CSS custom properties that are used as internal design tokens and are really there just to keep things DRY from a development standpoint, but they should not ever be exposed to or altered by our integrators in any way. These are rules such as our responsive grid, gutter and column widths. So what we do is, at build time using PostCSS, we substitute and compile in static values for these design tokens, making them essentially immutable and leaving only the themeable CSS custom properties exposed. So this is our tile, or card, component. And it's used in most places where we have a list of items to display, such as news articles, videos, characters or weapons. And although CSS custom properties are great for granular CSS-level control, using HTML attributes as another way to theme and control a component has proved to be very powerful. An example that illustrates this well is the type attribute on this tile, which has two possible values, horizontal and vertical. Using the :host pseudo-class selector, we can have that single HTML attribute impact the entire presentation of the component, from placement and size of the media asset, to margins and padding, to completely different layouts at mobile versus desktop. All by letting CSS and the browser do the hard work, no imperative JavaScript needed. So just by using those system-wide themeable CSS custom properties and the per-component HTML attributes, we can deliver all four of these tiles using the exact same component, all themed at runtime to boot.
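The build-time token substitution described above can be sketched with a simple string replace standing in for the real PostCSS pass. The token names are illustrative, not EA's actual tokens: internal design tokens get compiled to static values, while themeable custom properties fall through untouched.

```javascript
// Internal design tokens that should be compiled away at build time.
const designTokens = {
  '--nds-grid-gutter': '16px',
  '--nds-grid-columns': '12',
};

// Replace var(--token) with its static value when it is a known
// design token; unknown (themeable) properties are left untouched.
function inlineTokens(css, tokens) {
  return css.replace(/var\((--[\w-]+)\)/g, (match, name) => tokens[name] ?? match);
}

inlineTokens(
  'margin: var(--nds-grid-gutter); color: var(--nds-color-primary);',
  designTokens
);
// → 'margin: 16px; color: var(--nds-color-primary);'
```

The result is that the grid geometry is immutable in the shipped CSS, while the color remains a runtime-themeable custom property.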
Okay, number two, let's talk about how the need to be language and framework agnostic has gone. Here's a very simplified view of our overall stack today. We have Adobe Experience Manager as our CMS in the back end, Lightbend's Play Framework as our middleware, and of course, Polymer in the front end. Now, we have taken the approach that when we think about a component, we think about how it manifests across this entire stack, not just in the front end, ensuring that we can drag and drop a component onto a page in the CMS, pull that component into the middleware, and serve out the custom element for Polymer to then render. So for our stack, each component has first-class integration, top to bottom, the button shown here being one example. But this does not mean that integrators must use our full stack to get the benefit of the Network Design System. As each layer of the stack is loosely coupled and has clear API boundaries, it's really easy to just use the front-end Polymer component all on its own. No gem, no template service, no client library, just a simple HTML import. Okay, so number three, our micro-site architecture. So recall that some components need to be centrally controlled and consistent, whilst others can be more customizable. Web components and Polymer make this really easy, as we get to control how narrow or how broad the API is on a per-component basis. For components that really need to be nearly 100% consistent across all sites, we simply, severely restrict the API to that component. And others that are meant to be super flexible have all sorts of knobs and switches in the form of CSS custom properties, HTML attributes, and slot-based composition. So our pre-order buy widget, newsletter sign-up, footer and network navigation are all examples of where we really keep that API minimal. Okay, last up, number four, user interface as a service.
Now, we apply semver very, very diligently, being super careful to keep the NDS backwards and forwards compatible, with a strong focus on not bumping our major version number for as long as possible, sticking to backwards-compatible changes resulting in minor version bumps only. This mindset has worked really well, as we are currently at version 1.31.0, and in practice we could update a site from 1.2.0 to 1.31.0 without any code changes or issues. Currently, we deploy a pre-built version of the NDS to a CDN location using a naming convention as shown here. This gives the URL meaning as to the version, the artifact type, and the artifact that's being referenced. Supporting fuzzy version masks on the major release version means that a site can ask for 1.*, any 1.x release, and get back a 302 that redirects it to the latest 1.x release, which, as a tagged and unchanging artifact, is sent back with a very aggressive caching policy. So as you can see, our special needs have come to fruition thanks to web components and Polymer. Now, I'd like to briefly cover a few more wins and show off some of our sites. So as mentioned, we have 75 components in production. They're now powering about 30 sites in total, and that number is growing rapidly every day. On the left is EA.com, the corporate site, using the out-of-the-box default theme. And on the right is the more heavily styled Titanfall 2 site, both using the same NDS version and the same HTML import. And here are a couple more, with Mass Effect on the left and, on the right, an upcoming new franchise from BioWare, Anthem. And we have gone from a previously tedious and lengthy process to bring up a site to now only taking about one and a half weeks of content work. That means no development effort. So component-type thinking is really just how we operate across all functions now. We don't design and build pages anymore. We are always looking to decompose and then compose with component-based building blocks.
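The fuzzy version resolution just described, asking for 1.* and being redirected to the latest tagged 1.x, can be sketched as a small resolver. The sorting logic below is illustrative of what the CDN's 302 redirect accomplishes, not the production implementation.

```javascript
// Resolve a fuzzy mask like "1.*" to the latest matching release.
function resolveLatest(mask, releases) {
  const major = Number(mask.split('.')[0]);
  const matching = releases
    .map((v) => v.split('.').map(Number))        // "1.31.0" → [1, 31, 0]
    .filter(([maj]) => maj === major)            // keep the masked major
    .sort((a, b) => a[1] - b[1] || a[2] - b[2]); // numeric, not lexical
  return matching.length ? matching[matching.length - 1].join('.') : null;
}

resolveLatest('1.*', ['1.2.0', '1.5.3', '1.31.0', '2.0.0']); // → '1.31.0'
```

Note the numeric sort: a lexical sort would put 1.5.3 after 1.31.0, which is exactly why semver comparisons have to compare components as numbers.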
And as a bonus, to the delight of our digital intelligence team, we have very consistent and complete analytics tracking across all of our components, allowing us to compare apples to apples between different sites. And of course, efficiency and economy of scale. We've reduced duplication. We have better engineering mobility between projects and teams. And with a predictable, proven, and streamlined system, we have much lower project risk and a far more dependable schedule. So going back to this original goal of living in a world where our commodity web is turnkey, low effort, and maintainable, how have we done? I would say a resounding success, thanks to web components and Polymer. Thank you very much. All right, it is time for our first break. We're running a little bit late, which is Polymer Summit tradition. There is coffee and tea and light snacks available over in the catering area, and we'll start up again at 11:35. I've got some music to play during this time. All right, so our next two talks before lunch are actually from the two core engineers of Polymer, Steve Orvell and Kevin Schaaf. They do everything together, so they're even sharing the same talk block. Monica likes to call them Stevan. So up first, we've got Steve Orvell. He's the one who always complains that our elements are not fast enough, not good enough, so we put him in charge of it, and now he's here to talk about it. Hey, everyone, my name's Steve Orvell. I'm an engineer on the Polymer team. And we're here to talk about the next generation of Polymer elements, like Matt said. We're still pretty early in this process, but we're iterating pretty quickly, as I hope you'll see. So it's this kind of reusable element that we're talking about. They're used in almost every app. And we have a pretty large catalog of them that we've published on the Polymer project. They've been out for a while, a couple of years, since Polymer 1. So we've been asking ourselves what needs improvement.
Matt gave you a preview of that. And luckily, we've gotten a ton of feedback from our users both inside and outside of Google. And YouTube's built a whole application with Polymer. They've asked us to make elements faster. People come to YouTube to watch videos. So we make Polymer faster. That will make YouTube better. The Chrome Settings UI is built with Polymer. And they've actually asked us to make Polymer elements smaller, which is kind of crazy, since you don't actually have to download them to load the Settings page in Chrome. But they do have to download with the Chrome app. So they told us, if you make Polymer elements smaller, it'll make Chrome download faster, and users like that. And our own team has asked us to make elements easier to maintain. They've specifically voiced concerns about our behavior system that we had in Polymer 1. It was a little bit cumbersome to use and a little bit hard to make maintainable code with. So they asked us to work on that. And of course, all of our users always want a lot more features. We've gotten a ton of GitHub issues on that. And we actually know that all of our users actually want all of these things. We want elements to be faster, smaller, easier to maintain, and have more features too. Size is especially critical if you're making PWAs that need to load on slow 3G networks. Of course, you need things to be as small as possible. So we thought about all of this feedback when we were designing Polymer 2. And we realized that the needs of these reusable elements, like inputs and checkboxes and buttons, are really not the same as the kinds of elements you're making when you're making big applications. When you're making these application views, you're concerned really with getting data from a server and transforming it to the UI, managing those complex interactions. And you really want the ergonomics of that experience to be good. 
When you're making reusable elements, you're concerned more with size, and speed, and look and feel. So we realized that one size does not fit all. In Polymer 2, we decided to throw out our behavior system from Polymer 1 and really embrace the new features in JavaScript: classes and mixins. And this would allow us to make Polymer 2 modular, pay-for-play, so that features are there only if you need them. And that would allow us to hit more of these use cases more optimally. So we did this for Polymer. But for the existing set of elements, we've made what we call hybrid elements. And these are really a bridge to the future. We really haven't addressed the feedback we've gotten in these elements. They're there really to let you transition from Polymer 1 to Polymer 2 seamlessly. So with that in mind, we're now starting to think about what it means to make that next generation of elements where we can start addressing that feedback. So we're just going to take a look at some of the topics that we're thinking about, and you're going to get an early peek at that. First, we're going to look at addressing that feedback we've gotten to make things smaller and faster really directly. Then we're going to look at using extension, which is a new feature in Polymer 2, and what that means for elements. And finally, we're going to dive into some improvements that are coming in the platform around styling. All right, so let's start, and we'll look at making elements smaller and faster. And to do so, we're going to go ahead and remake an old friend of ours, paper-input. Inputs are used in almost every web application. So if we can make a small, fast one, that'll be great. This one has this material design look and feel, has that animation, that validation effect with a customizable message. And of course, we know we need all of the native features from the input element: accessibility, all of the types, and all of that.
So as we want to remake this to make it smaller and faster, we need to ask ourselves, what's the minimum that we need? And we'll start by making a base class. And we can use this for this input element, and then maybe for some other elements too. So let's go back to the modular design of Polymer 2. We build on top of HTMLElement, and we have a number of mixins here. PropertyAccessors helps us manage getters and setters and react to property changes. TemplateStamp helps you stamp a template. PropertyEffects gives you data binding. And we wrap it all together in PolymerElement, which we think is actually pretty good for building these application view elements. It's just about 12K minified and gzipped, and that's actually a pretty good trade-off here. But when we're making a really small and fast element, it's a lot more than we need. So we're going to actually go down to PropertyAccessors and use that. And that's actually just 2K out of the box, so it's going to really help us with making the element really small. To make it fast, we'll go ahead and just use this mantra: do less and be lazy. We want to do as little work as possible to render the element, and then do work lazily only as the user needs it. So the last two slides were actually from previous talks that have been given. At Google I/O this year, Monica gave a talk, Polymer: Billions Served, Lessons Learned, all about the modular design of Polymer 2. And I gave a talk last year (you can see my standard uniform) which was all about practical performance patterns you can use with Polymer. All right, so let's dive in a little bit to PropertyAccessors and see what it gives us out of the box. So it helps us make accessors, which are getters and setters, where we can react to changes when properties are set. It also helps synchronize with attributes, which is important when we are using HTML. It has an explicit API for turning on the system, called enableProperties.
And that's there to better integrate with the boot-up process as a custom element comes in. We take those attributes and reflect them to properties, and we're able to process that as one set of changes, which is more efficient. It also triggers the ready method, which is a lifecycle method that we add to allow us to do one-time initialization work. And finally, we have an entry point called properties changed, which allows us to react to this batched set of properties changing. In PolymerElement, this is implemented for us, and this is where we get data binding and property observers and things like that. But in this element, we'll go ahead and implement this ourselves and do the work that we need when properties change to update our rendering. All right, so now that we understand PropertyAccessors, let's go ahead and build a little base class on top of it that we can use to make our paper-input replacement. We'll just call it SimpleElement. We're just kind of experimenting, so that sounds good. And the code is just going to take HTMLElement and apply the PropertyAccessors mixin. It's going to call enableProperties. The connected callback is a good time to do it, to properly turn the system on. And then it'll define a template getter. And we'll use a string, which is going to work well for modules. This is something that is actually currently optional in PolymerElement, and it's all that we'll support here. And then we'll implement the ready method and stamp the template into the shadow root, which is great for custom elements. We'll steal a feature from PolymerElement, the this.$ node map for IDs in our template. And we'll do this because we're going to be working with those elements directly, and this is just so we don't have to find them. We'll call super.ready(). The easiest way to have a broken element is to forget to call super, so definitely remember to do that.
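The accessors-plus-batching idea behind PropertyAccessors can be sketched outside the DOM as a plain class mixin: accessors are defined for declared properties, and changes funnel through a single callback. This is a deliberately simplified, synchronous sketch of the pattern, not Polymer's actual implementation (the real mixin batches changes asynchronously and also reflects attributes); all names here are illustrative.

```javascript
// Simplified sketch of the PropertyAccessors pattern: define getters and
// setters for declared properties and route changes through one callback.
const PropertyAccessors = (superClass) => class extends superClass {
  static get properties() { return []; }

  _enableProperties() {
    this.__data = {};
    for (const name of this.constructor.properties) {
      Object.defineProperty(this, name, {
        get() { return this.__data[name]; },
        set(value) {
          const old = this.__data[name];
          this.__data[name] = value;
          // Real Polymer batches asynchronously; we flush synchronously here.
          if (old !== value) this._propertiesChanged({[name]: value});
        },
      });
    }
    this.ready();
  }

  ready() {}                      // one-time initialization hook
  _propertiesChanged(changed) {}  // subclasses react to changes here
};

// A tiny element-like class using the mixin to track focus changes.
class FocusTracker extends PropertyAccessors(class {}) {
  static get properties() { return ['focused']; }
  _propertiesChanged(changed) {
    if ('focused' in changed) this.lastChange = changed.focused;
  }
}
```

In a real custom element, `_enableProperties` would be called from `connectedCallback`, as Steve describes, so initial attribute values can be processed as one batch.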
OK, so that's basically all we need to put into our base class on top of PropertyAccessors, and with that we've made SimpleElement. And then we'll make a thing called SimpleInput, and this is just going to replace our paper-input. So let's take a look at a few things about this. We'll look at what we call a decorator pattern, which I'll explain in a second. Then we'll look at the template. And finally, we'll look at some of the code, specifically the ready method and the properties changed method. So first, we're going to use an approach we call the decorator pattern. We're going to ask our user to actually put the input and the label in the light DOM of the element. And this is actually a change. We didn't do this in paper-input, and it was a big pain. And the reason is because if we put the input in the shadow DOM and we want to customize or expose all the power of the input element, we have to forward all that information. It's a lot of complexity to manage, and it's not a good trade-off. So if we use what we call the decorator pattern here, it makes the element a lot simpler. And we get all of that native accessibility and the type capability of the input out of the box. All right, so let's define the template. And we have some styling. This is kind of the minimum that we could do to get that material design look and feel. I won't bother you with that, since you're all CSS gurus. We have a slot for the input, which allows us to project from the light DOM into our shadow root, a slot for the label, and then some non-semantic nodes that we keep out of the way. And here we have an underline node that helps us manage that animation. And that's really all there is to it. All right, so let's look at the ready method. Again, this is sort of our boot-up work. And since we're using SimpleElement, we call super.ready(), which is where we stamp the template. This is going to happen before the element gets rendered.
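Stepping back, the decorator pattern and slotted template just described would look roughly like this in markup. The tag name comes from the talk, but the slot names and underline id are my own illustrative choices, not the actual element's API.

```html
<!-- Usage: the real <input> and <label> stay in the light DOM, so all
     native input behavior, types, and accessibility come for free. -->
<simple-input>
  <input slot="input" type="email" required>
  <label slot="label">Email address</label>
</simple-input>

<!-- Inside the element's shadow-root template: project the light DOM
     children through slots, plus a non-semantic underline node that
     drives the focus animation. -->
<template>
  <slot name="input"></slot>
  <slot name="label"></slot>
  <div id="underline"></div>
</template>
```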
So that's fine, but we have to be aware of that work, since we want to make our element fast. And so anything else we're going to do, we're going to do after the first render of the element. We set up that CSS so the initial render is free. We don't have to do anything to get it to show right. And then after the render, we're going to immediately find the input in the light DOM, just using some shadow DOM API to do that, and then just add a couple of event listeners and use a private property here to track the focus state. And we do a little bit more for the label too. And that's really all we need to do in ready for that setup work. And then we're going to go ahead and implement properties changed, where we'll react to those properties being set, like focused, and a couple of others. And basically, we'll be manipulating the DOM directly. Here, I'm just going to add a class to that underline element to tell whether or not the element is focused, and that'll manage the animation. We do a little bit more of this for the label and the error message. And we get this, which hopefully looks almost exactly the same as paper-input. Material design look and feel, has the little validation guy. And it's looking pretty good. So how did we do on size and speed? Well, pretty good. It's just a little over 3K using PropertyAccessors, the little base class that we used, and then the code that we had for SimpleInput. Now compare that to paper-input, which is a hybrid element, which means it really is not leveraging Polymer 2's modular design, and it has all of the legacy API from Polymer 1. Our element is actually 10 times smaller, so that's a huge improvement. And by using the decorator pattern and cutting out some of the features, we made it five times faster to render, which is, again, a pretty humongous improvement. But of course, we know from our feedback that users are going to want the features that we eliminated. So what are we going to do about that?
Well, let's move to our next topic, which is extension, and see if this will help us at all. So extending elements, as I mentioned, is sort of a new feature for Polymer 2, and we just get to rely on JavaScript for doing this. And here are some things to keep in mind as you're using extension and making elements. The first is to keep our base classes simple. We kind of did that with SimpleElement and SimpleInput, so that's good. Using extension to add features is kind of an answer to how we're going to deal with those missing features that we didn't support from paper-input yet. And then, importantly, if we can design our base classes with extensibility in mind, at least in the sort of obvious ways that we can anticipate, this is going to make it much easier for us as we extend them. So as I said, JavaScript classes really give us a lot of help with extending elements, because it's native now, and that's great. But when we were designing this, we were thinking about, what about the templates? Especially when we're defining those templates in HTML, we were kind of concerned that we might need a system to help us do the kinds of things you want to do when you're extending elements. But if we're using modules, specifically if we're using those JavaScript string template literals to define a template, then that might actually change things. So we want to investigate that. And specifically, we want to look at two use cases. The first is you're making a subclass of an element, and you want to wrap some content around the superclass template. That's a common use case. And the other common use case is you're making a subclass of an element, and it wants to insert some content somewhere in the middle of the superclass template. And hopefully, we want to do this in such a way that we don't have to copy and paste the entire superclass template. So let's take a look at both of those use cases really quickly.
This is just a contrived example, so we can kind of see what's going on. Just a simple element that asks, how do you feel? And then has a little input to respond. Now, if we make a subclass of this, we'll define the template here and just use a string template literal and the native syntax here to refer to the super template. We get that kind of for free. And then if we want to do the wrapping, we can just go ahead and insert the content like that, adding a header there and then adding another question. We kind of get that for free, so that seems pretty good. But now let's go ahead and say, what if we wanted to subclass this? We realize that the way that we've organized this here, well, there's a header and there's some sort of list of questions. Can we design this element to make it better to subclass, easier to subclass? So if instead we have a template for the header and we expose that, then now we've created an override point that a subclass can customize. If we do the same thing for the questions, then you can see that we've added very little to our base class here, and the rendering is exactly the same. But if we want to subclass the element and make it look like this, with a different header and maybe a couple of other questions, then all we need to do is override the header template like that and the questions template like that. It's just that easy. So we can kind of see that when we're using those string template literals, it's possible to do a lot of template extension without a system, especially, as I said, using those string template literals and using classes and mixins for polymorphism, just like we saw. So how can we apply any of this to the SimpleInput element? Let's go back to the template. And we know from our experience with paper-input that people want to do a lot of customization of how the input looks in the material design look and feel there. They want to add icons to the beginning of it, maybe icons to the end of it.
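The override-point idea from the contrived "how do you feel" example can be sketched with nothing but string template literals and static getters; no templating system is needed. Class and getter names here are my own illustrative choices, and I've reduced the templates to plain strings to keep the sketch self-contained.

```javascript
// Base element exposes override points for its header and its questions,
// and composes the full template from them.
class QuestionForm {
  static get headerTemplate() {
    return `<h2>How do you feel?</h2>`;
  }
  static get questionsTemplate() {
    return `<input placeholder="Tell us...">`;
  }
  // `this` is the class the getter is accessed on, so subclass overrides
  // are picked up automatically (polymorphism via static getters).
  static get template() {
    return `${this.headerTemplate}${this.questionsTemplate}`;
  }
}

// Subclass overrides just the pieces it wants. `super` works in static
// getters too, so we can wrap the inherited content instead of copying it.
class LongQuestionForm extends QuestionForm {
  static get headerTemplate() {
    return `<h2>Tell us more</h2>`;
  }
  static get questionsTemplate() {
    return `${super.questionsTemplate}<input placeholder="And why?">`;
  }
}
```

`LongQuestionForm.template` now renders the new header plus both questions, without copying and pasting the superclass template.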
And we could add a lot of complexity to our base class to support all of those needs. Or we could just expose an override point for the input template. And that adds a very minimal footprint to our base class, but it's going to expose a lot of power as we subclass the element. For example, if we make something like this credit card input, to show a credit card after the input, we could customize that input template: add some styling, call super, add a little icon for the credit card, and then it would look like that. Obviously, we'd want to do a little bit more to make a full credit card input, maybe restrict it to numbers and validate the credit card. But our subclass gave us a nice helping hand, and it was really useful. So we're really excited about leveraging extension to build new elements. And we know this because it's going to help us factor our code better. If we factor our code better, it's going to be easier to maintain. If it's easier to maintain, we're going to have more time to add features. And we're going to have a good helping hand there to address some of that core feedback that we've gotten. All right, so we'll move on to our last topic, which is some changes that are coming to styling. So you might ask why we're talking about styling. Isn't this a solved problem with web components? We have shadow DOM, which helps us encapsulate styling, preventing styles from leaking into or out of our elements. And we know that that encapsulation is in tension with theming, which is a more global concern. But we have an answer for that, which is CSS custom properties, natively available in all major browsers now. They naturally flow down through the shadow DOM boundary, where we can use them in our custom elements if we want to. And let's sketch this out a little bit so we understand it. So I didn't show this, but here's a little custom property that we can set here in a content view that might have one of those SimpleInput elements in it.
And then in that CSS that I didn't show in the SimpleInput, we might use a custom property like this, where we say, OK, we're going to default that underline color to navy, but we'll make it settable. In this case, that's going to be orange. And that's all we need to do to expose that color to be themeable to the outside world of our element. And that works great. And we also had in the styling here in the SimpleInput a little bit of opacity, and that was just there because the designer said it made it look good. But then we might have a user that says, oh, that opacity, I want to be able to set that too. So now we actually have a little bit of a problem. And that is because we could expose a property for the opacity, but then the user might want to make the padding different, or any of the other hundreds of properties in CSS. We have a scaling problem with custom properties like this. So there's something missing here. And on the Polymer team, we helped propose @apply, which a lot of you may be familiar with, as an extension to custom properties. And we worked with Tab Atkins, who is a CSS spec guru that works at Google. And he told us you can actually put whatever information you want in a custom property. And here we're putting the opacity for the underline. But we need a way to make that go at the right spot in the custom element. And that's where @apply came in. If we change our variable name to match here, then all of those properties can be applied at this spot in the custom element. And this would totally solve our scaling problem. So we see we have a pretty good story with styling, with shadow DOM for encapsulation and custom properties for theming. But @apply is a really important piece of that puzzle to solve that scaling problem. So I have some good news and a little bit of bad news, and then some more good news. So first the good news.
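Collecting the custom-property and @apply pattern Steve just walked through into one CSS sketch (selectors and property names are illustrative, and the @apply syntax shown is the proposal as it stood at the time):

```css
/* In the page, theming the input from outside its shadow DOM: */
content-view {
  --underline-color: orange;
  --simple-input-underline: {   /* an @apply "mixin": any properties at all */
    opacity: 0.8;
    padding: 2px 0;
  };
}

/* Inside simple-input's shadow styles: */
#underline {
  /* themeable single property, defaulting to navy */
  background-color: var(--underline-color, navy);
  /* proposed application point for the whole property set */
  @apply --simple-input-underline;
}
```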
Although @apply is not standard in any browsers today, we have a shim in Polymer that's been around since Polymer 1 and carried forward to Polymer 2, and it's used out of the box in all of those hybrid elements. And in the spirit of the modular design of Polymer 2, it's an opt-in feature that you can load with the apply shim. But now the bad news: it's looking like @apply is probably not going to make it into native implementations in browsers. And the reasons are complicated. There are some issues with nesting. There are some issues with how it integrates with what are called pseudo-classes, like :focus and :hover. They didn't really play nicely together. And this is a link to Tab Atkins' blog, where he has all the great details and a lot more information about that, if you're interested. But back to some good news. Tab and crew in the CSS Working Group have a better alternative, and that's part and theme. So why is it better? It's actually kind of how the platform itself accomplishes this type of theming, as we'll see in a second. And it's also a revival of a previous proposal from a couple of years ago, but the issues with that earlier proposal have been addressed. So let's take a look at that really quickly. So this is a native input element, and it has an attribute called placeholder, which probably a lot of you are familiar with. It controls what's going to be shown in the input if there's no value there. And when this came out in the platform, of course, users immediately said, well, I've got to style that. I want to style what that looks like. And the platform answer is what's called pseudo-elements, and they have this double-colon syntax here. And you can see it's sort of similar to @apply. You can specify a whole set of properties here that are just going to customize how the placeholder looks. In this case, it'll be orange and centered. So that's how the platform does it, with pseudo-elements. And a custom element with this new proposal would do it like this.
Here's a slider element, and in its shadow root, I can expose these pseudo-elements by specifying part attributes. So I make the slider have a track and a thumb that are styleable, themeable. And then users would be able to target these elements, these pseudo-elements, like this, with that same double-colon syntax, now with part and then the name of the part in parentheses. And so this kind of solves some of the same problems as @apply. So that's looking good. It works better with pseudo-classes, so that's great. But notice one issue, which is we had to be able to target the slider element. That element had to be in our shadow root in order to style its part. We can't do it from outside. So it's not great for theming, where theming is more of a global question. This is actually the fundamental problem with the earlier proposal for part. It didn't support this kind of composition. So the new proposal has an answer, and that is forwarding. So here is kind of a crazy syntax that may change. It's a little early still. But using this fat arrow, I can say users of my content view element here can style this slider's thumb by this new name here, slider1-thumb. And it's comma-separated, so I could do the same thing for the track. But we also have a catch-all that's proposed here, and that would expose all the parts with a prefix or a suffix. And then from, say, some my-app element that the content view element was in, I can target that exact slider's part there with slider1-thumb, and it'll go and make that one blue. So that's actually pretty powerful. Looking good for theming. Of course, probably the more savvy of you are figuring, OK, well, that looks a little cumbersome. I've got to forward all that information. And that's good. That's explicit. That's probably what you want to do most of the time. But there's an answer for when maybe you can't do that all the time. And that's ::theme.
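Collecting the part syntax from this section into one sketch. This is the early proposal exactly as described in the talk, so the syntax (especially the fat-arrow forwarding and ::theme) should be treated as subject to change; element and part names are illustrative.

```css
/* Inside my-slider's shadow root, parts are exposed with attributes:
     <div part="track"></div>  <div part="thumb"></div>          */

/* An element whose shadow root contains the slider can style a part: */
my-slider::part(thumb) { background: navy; }

/* Forwarding (early proposal syntax): the containing element re-exposes
   a part under a new name, e.g.
     <my-slider part="thumb => slider1-thumb, track => slider1-track">
   so the next scope out can target it: */
content-view::part(slider1-thumb) { background: blue; }

/* ::theme skips the forwarding requirement: it matches the named part
   anywhere in the shadow roots below, for global theming. */
my-app::theme(thumb) { background: blue; }
```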
It avoids this need to be required to forward everything. So if we use ::theme, then that part name will be targetable anywhere in the shadow root of the my-app element, even in shadow roots of elements inside of it. So how might we use this in our SimpleInput example that we've been going through? Well, let's go back to the template and look at that div that we had here for that underline animation, the one users wanted to apply that styling to, that we used @apply for. We can just add a part, and that's all you need to do. So that's actually looking pretty good and promising. So part and theme were proposed earlier this year at the CSS Working Group meeting. There's a lot of enthusiasm for them because, again, as I said, it's kind of how the platform itself accomplishes this same goal. And Tab is currently fleshing out the spec. You can have input on it if you want. We're actually right now in the process of thinking about what this means and when this is actually going to come natively to the platform, which we think is at least a year or two out, and how we might make a shim for this in the meantime. And so then if we zoom out again and go back to the feedback that we had at the beginning here, making elements faster, smaller, easier to maintain, and adding more features, we think that if we make things along the lines of that SimpleInput, using extension, adapting to the platform as we need, and you squint a little bit, we actually might have made a lot of progress towards that. So let me now go back to the demo that I showed at the beginning here. And this actually is not it. This is a new version that we've been prototyping, with some elements made along the same lines as that SimpleInput element. You can see that there's some stuff missing yet. It's not all there. But the old version was built with hybrid elements, and it's actually a lot of code.
It's bringing along all that Polymer 1 legacy API. It's more than 100K. The new version so far, again, not done, is just 6K. So that's actually a ton better, and we're really happy with that. So there's still a ton of work to do, still early, but we're iterating pretty quickly. And that's all I have. So thank you. All right, so now we've got the second half, which is Kevin Schaaf. So the thing to know about Kevin is he's actually a Game of Thrones die-hard fan. So feel free, if you see him anywhere around or at the Ask Polymer Lounge after his talk, to ask him all about it. Normally, the code of conduct we have and being kind to each other means no spoilers. So make sure you only talk to him about it. But he's seen the latest episode, so you can talk to him about that, but make sure no one else is around. But he may protest, he may say, no, I never watched that damn show, why did Matt say that? It's just an act. So Steve was here talking about elements and how to make our elements better. We get a lot of questions about how to make apps with Polymer and how to deal with state management and how to do the big things you need to know to build a big app with Polymer. So with that, we've got Kevin Schaaf. Thank you, Matt. Such a jerk. All right, my name is Kevin. I'm a developer on the core Polymer library, and today we're gonna talk about apps. So as you probably all know by now, web components are great for building highly reusable user interface components, like the ones on the screen here. And that's not really surprising, because that's kind of a core use case that the web components specs were designed to solve. But on the Polymer team, we've also shown that web components are great for building applications as well, and in a lot of cases can be the only component model that you need to build an application. However, that's not to say that web components are all that you need to build really robust, awesome apps using the platform.
And we get a lot of questions on the team about what that actually looks like. What does the end-to-end experience look like, building applications with Polymer and web components? So that's what I'm gonna focus on today. So imagine this scenario, right? It's a Monday morning. You're sitting in your office, minding your own business, and your boss comes barging in, a boss, kind of a jerky boss, maybe like Matt. He comes running in and he's like, hey, I've got a great idea. I know how we're all gonna get rich. I need you to build us a real estate listing app. Okay? Now stay with me. It's gonna be awesome. It's gonna have a really slick, cool user interface, really modern. Users can search for whatever they want. It's gonna come up in a nice list. You'll have sorting and filtering in the list. You can switch to a map view with pins flying around showing you where all the houses that meet your criteria are. You can go to a detail view, maybe a mortgage calculator up in the corner. All that data is gonna be pulled off a server and kept in sync as it's changing. The users can log in, bookmark their favorite homes. Of course, we're gonna have a responsive layout so that this one app works on all the different form factors we need to target. It's gotta load super fast on 3G. It's gotta have amazing SEO, so users can find all these houses using our site. Of course, it's gotta be accessible and internationalized to reach the most users we can. And you have two weeks to get it done. Right? So this is the question. Do you feel like you have all of the tools and patterns that you need to get up and started quickly? To build out features quickly and iteratively, to deliver a great user experience with maintainable code that's gonna scale well into the future, because that jerky boss is probably gonna want you to keep adding features after you get this first proof of concept done.
So if we walk through the scenario from start to finish, there are actually a lot of things that we as web developers have to learn and master, and we have to know where to start. So we need to make sure that we know how we're gonna structure our application to deliver great performance to our users. We wanna scaffold out our app from a good starting point so we don't have to reinvent the wheel with every app that we're doing. We wanna make sure we're following good patterns for factoring the UI so it's maintainable and reusable. We wanna make sure it's internationalized and accessible. We wanna make sure that we're following good patterns for responsive layout. We wanna make sure that we have really solid and repeatable patterns for managing state in our application, making sure the application is easy to reason about. We want good developer workflows for serving and debugging and linting and testing during development. We wanna optimize our build for deployment. We wanna make sure that we're efficiently serving the application in deployment. It's a ton of stuff that we have to master to get an awesome user experience out. And it can get kinda blurry, right? Trying to figure out where to start. But the good news is there's help. So the Polymer App Toolbox, which we launched at I/O last year, actually provides answers for a lot of these topics. The Polymer CLI's init command will scaffold out our application from a good template, a good starting point. We have AppLocalizeBehavior, which helps us format strings in our Polymer templates, and browsers actually now have good cross-platform support for formatting numbers and dates through the Internationalization API. The paper elements that our elements team produces have always had a really strong focus on accessibility. Our app layout elements provide components for common responsive layout idioms.
The serve command in the CLI gives us a nice development server to use while we're developing. Chrome DevTools actually has built-in support for web components, which is really awesome, so shout-out to them there, and we're committed to helping the Chrome team keep improving the development experience for web components. Polymer lint provides static analysis during development. Web Component Tester gives us a framework for continuous integration. And the Polymer build performs a lot of the common optimizations that we need: minification, transpilation, that sort of thing. But I actually sat down and built the application I just described for this talk, and there are a few places in the story that I realized probably deserve a little more focus than we've given them in the past. So these are what I'm gonna talk about today: how we're gonna structure the application to hit the best performance, how we're gonna factor the UI, how we're gonna manage state in a complex application like this, and how we're gonna serve it for deployment. So there's a reason I put performance first on this list. And that's because the number one way to make sure that your application's performance is gonna be horrible is to think about it at the very end of development. Get it all done and then go, hmm, I wonder how fast this thing loads. So you owe it to your users to make performance the number one concern in how you structure your application. And a lot of the front end web development world is enamored with the concept of server-side rendering as a means to good performance, where you send down server-rendered static HTML first to get something painted to the screen while waiting on the rest of the JavaScript to load.
But when you think about it, unless your application is just a shell around passive static content, what the user actually wants to do is interact with your application, right? They wanna select a departure date, or sign up for a newsletter, or buy a product, or bookmark a house in our real estate application. And server-side rendering really helps with none of this. It just gives them something to look at until the code loads so they can actually do what it is they came to your app to do. So if you still have to send a huge bundle of JavaScript down to your users to transform that initial rendering into an interactive application, your users are still going to hate you. And if you don't believe me, what I'm gonna show up here are two actual, real-life server-side rendered applications. So let me get this going. As you can see, they actually render to the screen really fast, and they give you this impression that you can start using the application, it's there. I've got this throttled down to 3G, so this is running in Chrome DevTools on mobile-type settings. And you can see it results in this horrible experience for the user. They have to sit there wondering when the thing is actually going to become interactive. And this is not to say that server-side rendering is wrong, or a bad technique to have in your arsenal for good performance. In fact, Trey Shugart is here to give a talk tomorrow about some really awesome work he's been doing on rendering web components on the server. Rather, it just says that there are really no shortcuts to delivering a good user experience. We have to be focusing on the right metrics. And for a lot of applications, the right metric is probably not the time to first paint, but the time to interactive, the time until the user can actually do something on your site. And the best way to ensure a good time to interactive is this: don't make the user wait for anything they didn't ask for.
So that means only send down what the route requires, in as few round trips as possible, sending as little duplicate information as possible between routes, right? That sounds easy enough, but historically it's been fairly difficult to achieve. And this is why, on the Polymer team, we've been working really hard to promote and get people thinking about the PRPL pattern. It gives us a straightforward pattern for factoring applications for optimal delivery. In short, we start by factoring our application around decoupled routes that can all fit back together into an interactive experience. Then we push down only the code that's needed for the initial route, with as few round trips as possible. So only what the user asked for in their first URL, that's the only thing we should be burdening them with first, right? From there, we render and get that initial route interactive as quickly as possible. And then, while the user's enjoying the route they asked for, the service worker can boot up and, in the background, pre-cache the rest of the application, the code needed for all the other routes, so that when the user is ready to move on to other parts of the application, we can lazily import the code that's needed for the remaining routes right out of the cache and get those rendered quickly as well. So we summarize this as PRPL: push, render, pre-cache, and lazy-load. And it gives us a nice pattern to ensure that we're giving the user exactly what they want and no more, right? But it's not good enough just to follow the pattern. We actually have to be measuring. We have to measure the metric that makes sense for our app. And we recommend using WebPageTest for this. They've recently added an easy mode, so you can just go to webpagetest.org/easy and it'll open up the site pre-configured for testing on mobile devices.
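The loading order described above can be sketched in a few lines. This is a hypothetical illustration, not the Polymer app's actual code: each route is represented here by an async loader function, standing in for a lazy import of that route's component, and a real deployment would hand the pre-caching step to a service worker.

```javascript
// Sketch of the PRPL loading order (all names hypothetical).
const loaded = [];

const routes = {
  '/': async () => { loaded.push('home'); },
  '/explore': async () => { loaded.push('explore'); },
  '/detail': async () => { loaded.push('detail'); },
};

async function boot(initialRoute) {
  // 1. Load and render only what the requested URL needs first.
  await routes[initialRoute]();
  // 2. Then pre-cache the remaining routes in the background, so later
  //    navigations can lazily import right out of the cache.
  for (const [path, load] of Object.entries(routes)) {
    if (path !== initialRoute) await load();
  }
}

boot('/explore').then(() => console.log(loaded)); // initial route first, then the rest
```

The key property is the ordering: the user's route becomes interactive before any other route's code is even requested.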
So you just want to check the Lighthouse checkbox there, drop in your URL, hit start test, and then it will run a performance test of your application using real mobile devices and real throttled networks. Once it's done testing, you'll get a result page, something like this. If you click into the Lighthouse score, you can scroll down just a little bit, and in there will be the time to interactive. This is a really good metric to focus on when you're building your app, to judge how good your experience is going to be. And then, to ensure a good experience, we recommend setting a budget for yourself. We like to target around three and a half seconds on these mobile 3G networks. On these settings, the first byte from the server actually doesn't get down to the client until two seconds in. That's because SSL negotiation requires a lot of round trips to the server, and that eats up a huge amount of your budget, right? So that only leaves you a second and a half to get something interactive, the thing the user wanted to do on your app, onto the screen. And we found that this one and a half seconds of budget translates into about 50 kilobytes of code. That doesn't sound like a ton, and with new Polymer 2.0, that starts at around 12K, so that leaves you about 40K of your budget. As you heard in our last talk, we're doing a lot of work to make sure our element sets are highly optimized so that they can help you fit into this budget. And we found that the best thing to do, anytime we're putting out a recommendation we want people to follow, is to give it a name, right? Because names are powerful. As soon as you have a name, you can talk about it with other developers, that sort of thing. So we're calling this one PRPL-50. This means apply the PRPL pattern and budget yourself to 50 kilobytes.
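A 50-kilobyte budget like this is easy to enforce mechanically, for example as a check in a build script. The asset names and sizes below are made up for illustration; only the 50 KB budget figure comes from the talk.

```javascript
// Sketch of a PRPL-50-style budget check for the entry route
// (hypothetical asset names and gzipped sizes, in kilobytes).
const BUDGET_KB = 50;

const entryRouteAssets = {
  'polymer.html': 12,    // roughly what Polymer 2.0 itself costs
  'app-shell.html': 9,
  'home-page.html': 14,
};

function totalKb(assets) {
  return Object.values(assets).reduce((sum, kb) => sum + kb, 0);
}

const used = totalKb(entryRouteAssets);
console.log(`entry route: ${used} KB of ${BUDGET_KB} KB budget`);
if (used > BUDGET_KB) {
  throw new Error('PRPL-50 budget exceeded');
}
```

Running a check like this in CI means a route can't silently grow past the budget as features get added.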
That's a good rule of thumb to ensure that you're hitting good performance. Okay, so let's see what applying PRPL-50 looks like in our real estate app. We're gonna start with an application shell that's responsible for handling the route on the client and loading the top-level components for the route. Then we'll think about breaking the application down into meaningful routes. For this application, we might have a home route, an explore route, and a detail route. So we'll start with those features, and then we can add more later on as the boss comes in and asks us to. And then for each of those routes, we're gonna build a custom element that encapsulates the view needed for that route. So here we'd have a home page, explore page, detail page, something like that. And then, following PRPL-50, we're just going to keep an eye on the code that we're putting into each route and try to stay within our 50 kilobyte budget. So that's the general idea of how we're gonna approach meeting all of our performance requirements and ensuring we have a good structure for the app. Okay, so we've got the structure down. Now let's move into how we're gonna start building out our user interface. This usually involves taking UI mocks that a designer might have sketched up and factoring those, drawing boxes around them, breaking them down into individual components. Some of those we can get off the shelf, and others are gonna be our application-specific components. And then we'll use composition to build those all back up into our final application. So we're gonna wanna leverage reusable components wherever possible, right? Because the best line of code is the one that you didn't have to write. And just like npm is our go-to source for JavaScript libraries, webcomponents.org is our go-to source for reusable web components.
So we can go there and see what in the catalog might fit our needs in any given application we're trying to build. For a lot of our app UI, we can just stand on the shoulders of others in the community and not reinvent the wheel where we don't have to. So if we look at our mocks, we can get elements out of the catalog: maybe tabs for the navigation here, maybe some icon buttons, even the Google map, right? There are components for things like maps in the catalog as well. But custom elements aren't just for our reusable components. We can use them to build our application-specific components too. And this has a lot of benefits over using a non-standard component model, some other library, for your application. Among these: we can achieve a smaller payload, because we're using the component model that's built into the browser, so we don't have to download code to get it. And when we're scaling an app up to a large team, encapsulation is really important to help with the coordination problem when developers aren't all working closely together, and we get encapsulation for free with web components, built into the browser. Like I mentioned before, Chrome has really awesome DevTools support for custom elements and Shadow DOM. And most importantly, our components aren't locked into any given framework silo. I want to pause here, because that's a really important point. I don't mean that you're just locked into a different silo instead. That's not the point, right? The point is that as long as you're using custom elements as your component model, properties and events as the interface to the component, and Shadow DOM to encapsulate the rendering, then how you transform the inputs to the component into whatever the component does is purely an implementation detail of the component. We have a really nice abstraction between the interface and the actual implementation.
So we can actually extend from whatever base class we want without losing interoperability with other elements in our application. If we were building an application, we could build the whole thing by extending PolymerElement, the base class we showed with Polymer 2.0. But that's just one choice. Down the line, we could switch some of our components over to using the simple element base class that Steve just talked about building, and those can totally work side by side in your application with Polymer elements. We could try Skate.js. Trey is here, and he might sell you on Skate.js. You could try that out in your application without switching the whole application over. And someone is bound to make some new, better web component base class in the future that we all might want to shift over to. And we can do that incrementally over time, without throwing our application away each time, right? So think about that. If you wanted to change from popular framework one to popular framework two today, that's a total rewrite. You throw out your application. When we start building our applications out of web components, it gives us this ability to change incrementally over time, without having to throw everything away and without losing the ability to innovate. Okay, so back to our real estate app. Let's take a look at how we'd factor our application views down into components. Let's just focus on this one route, this explore page element. We can use reusable elements from the catalog: the Google map, the paper icon button. We can even use native selects here, styled up a little specially, right? And then we might want to factor some of the rest of the view down into other application-specific components that we're just going to compose together. So we just keep breaking down and composing back up until we've got the final view for our route. And then at the top level of our application will be our app shell.
And this component is responsible for the top-level layout of the application. So the real estate app element is our app shell. Inside of there we might have an app toolbar and paper tabs, so it's doing the layout and the navigational components. The app shell is also going to have the code for the router, and the knowledge of how to lazily import the components needed for each route. So if I go to the explore route, the app shell is responsible for loading that explore page element and getting it rendered and interactive. If the user changes to a detail route later, it loads and renders that element. To get started with an app shell template set up for this sort of factoring, you can check out the Polymer Starter Kit template in the CLI. Okay, so we've talked about structuring for performance and factoring the UI. Next we need to bring our application to life by loading it with data and managing state changes in the application. Application state management is one of those areas where the web platform has really had the least to say. And so it's an area we get a lot of questions about; it's probably our biggest source of confusion and requests for guidance. Two years ago, at our first Polymer Summit in Amsterdam, I gave a talk called Thinking in Polymer, and it put forth the concept of a mediator pattern for composing and coordinating state changes between components. So, just a really quick recap. In short, a mediator owns a scope of other components. It owns them and is responsible for propagating data down to the components, and listening for events from the components to handle user interactions, that sort of thing. And then, based on those events, perhaps mutating some data and propagating those changes back down to the components it owns, as well as up and out via events.
So the mediator pattern is really useful for creating reusable, standalone components that can handle their own complex state changes internally while still communicating the changes up and out. And that's because it encapsulates the state management, so it's portable. These are the kinds of things you're gonna find on webcomponents.org, things that you get off the shelf. They don't have to tell you how to do your state management; it comes along with the component. And using this pattern, we've shown that you can actually just keep composing mediators into mediators. As you keep building that up, you end up with a final top-level mediator that's mediating other components, and you can build your application out of that. So it's kind of the turtles-all-the-way-down mediator concept. However, the community has also shown that there can be benefits to having less granular, and even global, mediators of state. So there's a trade-off space, a spectrum, here. And particularly as components become more and more app-specific, and are largely used to compose more generic components together with application-specific logic, having one mediator for all application data can make the application easier to reason about. And it opens up nice developer workflows that we'll see in a minute. So these global mediator patterns, and Flux is one really popular example, popular in the React community, formalize the concept of having one central place to put all the application state, which flows down to components, and one place to dispatch events that cause application data to be mutated and passed back down. So we can just think of these global mediator patterns as instances of the mediator pattern operating at a global level, right? Now, there are lots of choices out there to implement the global mediator pattern, too many to go into in this talk.
But a lot of times developers just come to us and say, just tell me one to use, just tell me anything that you know is good and I'll do it. And if you're looking for that kind of answer, we actually think Redux is a really good choice. It maps really well to web components, and we know that people have had success with it. Redux is really simple; there's very little magic in it. It implements a really simple mediator pattern. And there's a really strong ecosystem that has built up around Redux. The library starts really simple, with very easy to understand concepts, and as the needs of your application become more complex, there tend to be ecosystem-based answers to handle that complexity. A lot of times these take the form of add-ons that help you manage async flows of data in and out of a server, for example, or complex flows that the user is going to take through the application, that sort of thing. Okay, so let's take a look at what applying Redux to this application would look like. First, let's introduce some terms. In Redux, the global mediator is called the store. For passing data from the store down into the application, there's a subscribe method that elements can call with a callback, to be notified of any changes to the global state in the store. And then, in place of events, actions are dispatched to the store to tell the store to change the current state of the application. Those changes are made using what they call reducers. These are functions that we write and give to the store that describe how to change the state based on the actions that happen. So, like I mentioned, there's a bit of a trade-off going from localized state management, where it's all self-contained, to global state management, where you put it all in one spot. But one of the key benefits is that it opens up really nice developer workflows, like the DevTools that ship with Redux.
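To make the store / subscribe / dispatch / reducer vocabulary concrete, here is a stripped-down sketch of what a Redux-style store does. This is not the real library (real Redux's createStore does more, e.g. unsubscribe and enhancers), and the reducer and action shapes are made up for the real estate example.

```javascript
// Minimal Redux-style store: one state object, reducers compute the
// next state from (state, action), subscribers are notified on change.
function createStore(reducer, initialState) {
  let state = initialState;
  const listeners = [];
  return {
    getState: () => state,
    subscribe(listener) { listeners.push(listener); },
    dispatch(action) {
      state = reducer(state, action);   // reducer returns the new state
      listeners.forEach((l) => l());    // notify all subscribers
    },
  };
}

// A reducer: a pure function describing how actions change state.
function favorites(state = [], action) {
  switch (action.type) {
    case 'ADD_FAVORITE':
      return [...state, action.listingId];
    default:
      return state;
  }
}

const store = createStore(favorites, []);
store.subscribe(() => console.log(store.getState()));
store.dispatch({ type: 'ADD_FAVORITE', listingId: 'house-123' });
```

Everything else in the Redux ecosystem layers on top of this small contract, which is why it pairs so cleanly with elements that take properties in and send events out.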
So this is a screenshot of the Redux DevTools that you can install into Chrome, and they just sit there in your DevTools. On the next screen, I'll show what this looks like. Basically, because we're centralizing where all the actions that can possibly mutate the state go, it's able to log everything happening in the app that led to a state change. And because all the state is sitting in one spot as well, it lets us see all of the state together. So that helps make the application a lot easier to reason about as you're building it. Here's an example of using the application with the DevTools open. As you can see, for every user interaction that happens in the application, we get an action logged in the action log there. So we have a log of every event that's leading to a state change, and we can actually see the state changing over time in the application. For each user action, you get the action and the resulting state change. And then, what's really neat is that it introduces this concept of time travel debugging, which you might have heard about. It lets you scroll back up in the action log, and because the DevTools are snapshotting that global state object at every action, you can actually play it back. You can go back in time and look for, say, a bug, like maybe this little pop-up got screwed up there. I can jump back in time, find the action that caused the state to change in a way that caused the bug, and that makes it a lot easier to debug your app and find out what's going on. Okay, so there are lots of ways to connect custom elements to Redux, and it's fairly simple to do. I'm just going to show one way. One approach is to build your elements as generic views that just accept properties and fire change events out, and don't mutate any state themselves.
And then what you can do is subclass that element, so that the generic element could be used in any application in your lineup, that sort of thing, but the subclass connects it to the store of a given application. In the subclass, to connect it to the store, we just have the element call Redux's subscribe method to get any state changes and set those into properties. And then we add event listeners, just normal DOM event listeners, that listen for events and dispatch actions to the store that are going to change the data. One other pro tip I want to point out: if you're moving toward global state management techniques, most of them don't come out of the box with a way to separate out the state management code. They kind of lead you toward having one big blob of all of your application's global state management, all in one spot. And this runs afoul of our PRPL concept, right? We only want to load the code that the user actually needs for the route they asked for. So that's something we want to pay attention to, as we'll see here. So, just very briefly, pretend ExplorePage is the element we created, the generic view that just takes properties in and sends events out, and we're going to subclass it. In the constructor, we call Redux's subscribe method. When we get a new state object, we simply dereference any state that's needed in this particular component out of the store and set it into properties. And here I'm using Polymer 2.0's setProperties API, which allows us to set a batch of properties on an element at once, really efficiently. And then, on the other side, we want to dispatch actions from event listeners. So for any events that happen that should cause state to change, we just add an event listener and then call Redux's dispatch method to send an action to the store.
And here I'm using what Redux calls action creators, which are just functions that we write that take parameters and return an action object. So I'm creating an action and dispatching it to the store based on the event. We're going to do that for any events our application fires that need to mutate state. And then, last, we want to make sure we're lazily loading any of the state management code along with our element, and not putting it all centrally in the application. The way I've done this here: Redux has a lot of extensibility hooks, so I wrote a very simple enhancer for Redux that adds an API to the store to lazily install reducers, the things that manage state changes, into the store as more and more elements come in. They're lazily adding their state management code to the application. So here, the listings reducer is responsible for handling any changes to the listings. I load it along with this component, which, again, via the PRPL pattern, is being lazily loaded just for that route, and install it into the store. And then, likewise, for any action creators, any action-based logic, you want to make sure that you're importing that along with the component as well. So we will continue experimenting with patterns for state management and how they can fit into web component-based apps. And I wanted to also call out a community library called polymer-redux, which takes a declarative approach to binding Polymer elements to Redux. So that's something you could also check out. And like I said before, there's a ton of innovation happening in the community, and Redux is just one example. Most of those can also be happily paired with web components. So we encourage you to share ideas, share how you're building applications, share them with us and with other people at the conference today. So, we're done managing state. Finally, we have to actually serve our application to our users. So we'll need to host it and serve it to our clients.
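The lazy reducer idea above can be sketched as follows. This is a hypothetical, hand-rolled illustration, not the speaker's actual enhancer: real Redux would do this with `combineReducers`, `store.replaceReducer`, and a store enhancer, and the `addReducers` API name and the `listings` reducer are made up here.

```javascript
// Sketch: reducers get installed into the store lazily, as each
// route's component loads, instead of all up front.
function combineReducers(reducers) {
  return (state = {}, action) => {
    const next = {};
    for (const [key, reduce] of Object.entries(reducers)) {
      next[key] = reduce(state[key], action);
    }
    return next;
  };
}

function createLazyStore() {
  const reducers = {};
  let state = {};
  const store = {
    getState: () => state,
    dispatch(action) { state = combineReducers(reducers)(state, action); },
    // Routes call this as they load, bringing their state logic along.
    addReducers(newReducers) {
      Object.assign(reducers, newReducers);
      store.dispatch({ type: '@@lazy/INIT' });  // populate the new slices
    },
  };
  return store;
}

// An action creator: a plain function that builds an action object.
const addFavorite = (listingId) => ({ type: 'ADD_FAVORITE', listingId });

const store = createLazyStore();
// Imagine this runs when the explore route's component is lazy-loaded:
store.addReducers({
  listings(state = [], action) {
    return action.type === 'ADD_FAVORITE' ? [...state, action.listingId] : state;
  },
});
store.dispatch(addFavorite('house-123'));
console.log(store.getState().listings); // [ 'house-123' ]
```

The point is that the `listings` slice, its reducer, and its action creators all travel with the lazily loaded route, which keeps the state management code inside the PRPL budget for each route.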
And although a lot can be accomplished by statically serving our client application, we feel there are a few key minimal features that need to be implemented on your server to get the best user experience. These include serving the app shell for all routes, so for any route in your application, making sure that you're serving the app shell, which is then responsible for lazily loading the components you need; using HTTP/2 push, when possible, to reduce round trips, and serving granular components when that makes sense, to improve the efficiency of your caching; and when HTTP/2 is not available, alternatively serving bundled assets, but bundled per route. We want to make sure we're sending the optimal code for the capabilities of the browser. So for browsers that implement ES6, for example, we might want to take advantage of native custom element subclassing by sending non-transpiled code. And then finally, we want to make sure we're serving SEO-compatible output for any bots that might visit our site. So we've been doing a lot of work on a reference server that does all of these good things, which we're calling prpl-server-node. It's designed to work hand in hand with the outputs that come out of the Polymer build system. So it's a Node-based server that's set up for client-side routing, and it has a lot of extra features to do these kinds of smart things. When I was doing a run-through a couple of days ago, Chris, an awesome developer on our team, was like, you're kind of short-selling prpl-server-node by using that kind of lame database clip art there. Can't you jazz it up a little bit? Because the server is actually pretty awesome. So I put some sparkles on there. I don't know if that made Chris happy. But basically, it has some presets that can automatically detect the capabilities of the browser and serve the optimal code for each.
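The capability-based serving idea reduces to a small decision function. This is a sketch under stated assumptions: the capability flags and build-variant names below are made up, and the real prpl-server-node derives capabilities from the request's User-Agent rather than taking flags directly.

```javascript
// Sketch: pick a build variant per request based on browser capabilities
// (hypothetical flags and variant names).
function chooseBuild({ es6, push }) {
  if (es6 && push) return 'es6-unbundled'; // raw ES6, granular files + HTTP/2 push
  if (es6) return 'es6-bundled';           // raw ES6, bundled per route
  if (push) return 'es5-unbundled';        // transpiled, but can still push
  return 'es5-bundled';                    // transpiled + bundled fallback
}

console.log(chooseBuild({ es6: true, push: true }));   // es6-unbundled
console.log(chooseBuild({ es6: false, push: false })); // es5-bundled
```

The build step produces all the permutations once; the server's only per-request job is this cheap lookup.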
So for browsers that have ES6, for example, it can serve raw ES6 code, so we can take advantage of native ES6 subclassing of custom elements in the browser; and for browsers that don't have ES6, it can serve the transpiled code. It can also differentiate between browsers that have push, so we can serve granular components to reduce round trips when possible, and serve bundles for non-push-compatible browsers. And then it will do the permutation of these things and serve the right set for the capabilities of the browser. And then last, under this heading I'm calling bots, so these are things like Google and Bing search crawlers, as well as social snippet generators that create the cards that show up in social networks. We're actually integrating prpl-server-node with another project we're working on on the Polymer team aimed at tackling this SEO problem with web components. And that will serve fully rendered HTML that's compatible with SEO and social snippet generators. I'm just going to tease that today, because we have a whole talk on it tomorrow that you should stick around for and check out. All right, so you can check out the beta of prpl-server-node here. Give us feedback. This is set up for you to deploy onto a hosting site like App Engine, sit behind a CDN, and get really good serving efficiency. Okay, so with prpl-server-node, we fill in the last big gap in our story about how we're actually going to deliver these awesome experiences using the platform. And hopefully you feel more confident now that you have all the tools you need to build robust, real-world apps like this real estate app. And at the beginning, I said we had two weeks, right? That was our setup. We had two weeks to build this. So, to be honest, I didn't have a whole lot of time to put this talk together, but I really wanted to build an app.
And I actually was able to build this much out in two days. It's a proof of concept; I still need to do a little more work to get the demo ready. But of those two days, a full half, a whole day, was just generating a bunch of fake real estate JSON so that I didn't get sued for showing people's houses in my talk. So from this proof of concept, I feel way more confident that I've got a really well-factored UI, with components I can reuse across other applications. I've got easy-to-debug state management that will scale well as we add more routes and more features. I know I'm set up well for performance; I've got a structure that scales well with the PRPL-50 pattern. And I've got a great serving environment with prpl-server-node. So hopefully you feel like you can be this productive with the web platform too. We're going to continue working to take all of these best practices that we come across, as we tackle more and more challenges on the web platform, and provide guidance and features and products to help move this along. Thank you very much. All right. So it's lunchtime, and I just wanted to do a couple quick announcements first. Food will be over across the way. There is a station over there for kosher and halal meals; if you can't find them, just ask any staff member. At the same time, in about 15 minutes, in the code lab space, we're going to start a casual Q&A session about the modules change. You can bring your lunch in there and ask questions. Steve and Kevin will be talking about Game of Thrones, and everything they just talked about, over in the Ask Polymer lounge. And there's also a Polymer quick start, or quick tour, area over there, where you can preview our new Getting Started guide, give us feedback, and help us really hone that. So we'll be back here at 1:30. Matt, again, just got a lot shorter this time around.
Hi, I'm Elliot. I'm a software engineer on the Polymer team. Welcome again to the Polymer Summit 2017 here in Copenhagen. Hope you all had a great lunch. We have a lot more talks on the way; it's going to be a lot of fun. So, up next we have Zilling and Mikhail. They're part of the YouTube team. Actually, Mikhail almost didn't get here because of visa issues, so I guess you might say some of our YouTube content wasn't available in this country. But, well, give a round of applause for Zilling and Mikhail, coming to you live, and not YouTube Live. Hi, everybody. My name is Mike. And I'm Zilling. I'm a software engineer at YouTube; I work on features and cool new tech. And I also work at YouTube, on web architecture and infrastructure. So we're going to talk about how we use Polymer at YouTube. But before that, let's talk about just YouTube, for a second. So I know it's kind of an obvious question, what is YouTube? You can ask anyone, and they will say, well, it's where you go and you watch videos. It's a popular website, and everybody is using it. That makes sense. So we know that YouTube is popular, but how popular? We went to Alexa to check, and apparently we're number two. And unsurprisingly, that means we have a lot of users; we actually have 1.5 billion logged-in users using our website every month. And not all of these users are desktop users; some of these users never use desktop at all. But still, desktop is one of the largest and one of the fastest-growing platforms that we have. Unsurprisingly, people come and watch a lot of videos. How much? One billion hours a day, and that's across all platforms. But what is a billion hours? Well, it's about 114,000 years. That's enough time, in a single day of watch time, to go back long before the ice cream cone was invented, all the way to the Stone Age.
So every time we talk about anything that we do, we have to add that we are one of the largest at doing that, be it just pushing bytes through the tubes, or image hosting, or a social network, or even a search engine. So hopefully you get the idea: we're a really big website. Our motto is broadcast yourself, and that means being inclusive. So we right now offer ourselves in 80 different languages, and this continues to grow. Additionally, we're accessible, and we consider accessibility a first-class citizen. That means even if we're building a website from scratch, it has to be accessible from day one. Also, we have to deal with all kinds of weird browsers. We have a lot of extensions that are specifically targeting YouTube. And while we're dealing with this crazy world that's happening outside of us, we do the main thing that we want to do on YouTube: we serve our videos. And we do it pretty efficiently, thanks to the amazing work of many engineers who spent a lot of their time making sure that we do it really fast. Actually, it takes us less than two seconds worldwide to start video playback. In Denmark, that's 1.5 seconds at the median. Way to go, Denmark. Anyway, YouTube is a very huge and complex project. And by no means is it a monolithic website. We have dozens of internal and external mini websites that fulfill different roles within the YouTube ecosystem. Also, YouTube is, I think, 12 years old right now, which means that we have over a decade of engineering decisions. We have over a decade of code. I mean, we rewrite the code from time to time, but still, we're talking about more than a decade. So we found this post on Reddit that we want to share with you: "I'm fairly certain that in the eight years since Google bought YouTube, very little of the original code remains." This guy has a lot of faith in us. The thing is, when you spend 12 years working on something, you end up optimizing that product.
And despite having this very large code base, despite working on it for so long, it is a very highly efficient platform, because a lot of clever engineers spent their time working on it. Yet we realized that this platform has limitations. And at some point, these limitations became too obvious for us. We were hitting performance limitations that we were unable to overcome with the stacks that we had. And we were hitting engineering problems that meant we were unable to move as fast as we wanted to. So we started to think: how can we make things better? How can we move forward? And what will be the next thing that powers YouTube? So what did we have? What was our starting point? We have a server-side rendered application. And we always render on the server side. This happens even when you navigate between pages: when we do an Ajax navigation, all the magic still happens on the back end. We built a lot of homebrew frameworks. They're not real frameworks. It's more like tools that we built for our products. We have homebrew frameworks for JavaScript. We have them for styling. We have them for layout. So basically, we built things that fit our needs at a specific time frame. And the thing is, we were looking for something a little bit more modern than what we had before. We had a couple of goals, a couple of restrictions on what we wanted, a couple of requirements for whatever framework or library we would use. We wanted a lightweight framework, both in terms of the size it would add to what we have and in terms of the footprint on our ecosystem. We wanted something flexible. We didn't want a framework that would force us to do things. We have a lot of unique challenges. We have a lot of unique business reasons to do things a specific way. So we didn't want the framework to be in our way. We knew that a component model was what we really, really wanted.
There are a lot of benefits to using components, and you can look at these benefits from different angles. There are benefits to organizing your code as components, just because everything is very localized. There are benefits to building and deploying an application as a set of components, because by doing that, you create clear boundaries. And then if the framework can provide you with encapsulation that happens on the client, that's even better. So we wanted components. We wanted to iterate fast. And at some point, we realized that while what we had was good, the way our engineers worked was not efficient. We want to be faster. We want to build new cool features. And when new engineers join our team, we want them to be able to get up to speed faster. We don't want them to spend a lot of time learning. Also, we didn't want to stay with a stagnant technology. And we wanted to do as little infrastructure work as possible. It's always very beneficial when you have a team of specialists working on a platform that you build your application on top of. So we wanted to be able to work with the platform and build relationships and feedback loops that are beneficial to everyone. After all, YouTube.com runs on the web platform, and the health and growth of that platform is good for YouTube. Also, this is kind of a big surprise, but we had a very direct and simple order from our president to make YouTube great again. And that is a real tweet. We don't know why, but that was pretty obvious. And we did. Anyway, back to reality: we had a lot of requirements, a lot of features, a lot of processes at our website. We do things a specific way because YouTube is an existing product. It's been live for a long time. We had to make sure that whatever we did next would help us transition into a new world, not break it. We looked at Polymer. We actually looked at many different frameworks.
And now, because we are here at the Polymer conference and we're talking about YouTube on Polymer, it kind of makes sense: we picked Polymer. It wasn't that obvious when we started, because Polymer was actually not the first choice on the list. But as we went through this list, as we checked the checkboxes, we realized that the other frameworks, while great and often ticking a lot of the boxes we had, didn't necessarily give us the flexibility and the full picture that we wanted. And also, Polymer at the time was just maturing. It was going from 0.5 to 0.8, I believe. So it took some time for us to get used to the idea that we were potentially going to use Polymer. And just going and rewriting YouTube in Polymer, that would be too crazy, even for us. We didn't want to do something that big. So we wanted to try something smaller. We started with some tiny experiments, some tiny internal tools. And then after that, we launched YouTube Gaming. This happened about a year ago. It's a variation of YouTube that's focused on videos and live streams of games. Afterwards, we launched YouTube TV. This is recent, in the last few months. YouTube TV focuses on TV streaming and DVR. These two are both complex projects with a large user base, but they're not on the scale of YouTube.com. They're also known as some of the largest Polymer deployments in the world, because again, at YouTube, we do everything at this crazy scale. And building all these websites gave us an opportunity to learn. It gave us an opportunity to polish our infrastructure and develop best practices. And at some point, we finally realized that maybe now is the time to do what we planned all along, right from the beginning. We announced that we were building YouTube on Polymer during the last Polymer Summit. And we're actually launching YouTube on Polymer. I don't have an exact date, but we're this close to doing it. We're excited about that.
And this is going to be the largest Polymer deployment in the world, obviously, right? Because this is YouTube and we do everything at scale. I mean, we're pretty confident it's going to be the largest one. I really hope so. Some of you may have already seen articles or have even tried it. The important thing is, YouTube is available for opt-in right now. Yep. You just go to youtube.com. There is a big button right in the middle of the screen. You press it. You get the new experience. And also, please: there are a lot of articles on the internet talking about how you can edit cookies and everything will be great. Everything will not be great. Don't do this. It's not the right way. Just go to youtube.com slash new. So what is it made of? Well, the site is 100% Polymer. And by that, I mean from the app tag down. You can inspect the site, and you can see all the components right there. We have about 400 components that are just YouTube specific, and more than 1,000 components across all the code bases. And we are happy that we can share a lot of our components across different projects. So let's talk a little bit more about how we use Polymer at YouTube. But before I go into that, let's take a small step back, and let me explain that we have this thing called our universal data API. It kind of shapes all the applications that we build. Keep in mind that YouTube runs on everything. Also, following a very sophisticated naming process, we internally called it InnerTube. So here's a small set of apps that run on this universal data API. And we're not just talking about web. We also have apps on iOS, Android, game consoles, TVs, and of course, different verticals. We have apps for kids, apps for creators, apps for TV, apps that replace your TV, that run on TVs, and everything. And all of these run on this universal data API. So what does it look like?
Well, what we ended up with is a presentational API, which is an API whose JSON response kind of defines what your page structure will be. And we found that this structure maps really well to web components. Each web component receives data, and that data defines the children that the web component will render underneath it. Then that component takes a subset of the data, some subtree, and passes it down to each child. This continues until you hit a leaf node. And while doing this, we realized that what we do is render a lot of lists. In fact, YouTube is a bunch of lists. You may think that we serve video, but from a web framework point of view, what we do is draw lists over and over. And these lists are super dynamic. They're machine-learned, they're experimented on, they're ranked and massaged. And every single time you come to YouTube, these lists are re-ranked, re-experimented on, and they're going to change. Also, if you're a Lisp fan, this is one of those "I told you so" moments. So this is a standard YouTube channel page. There are some obvious lists here if you think about it. But I'm going to highlight just a few to make it a little bit more obvious. There are menus, we've got navigation bars, search results, subscriptions, horizontal shelf video lists, lists of horizontal shelf video lists, and again, these change. They're dynamic, and every time you come to the site, we've got to redo it. Also, we had to stop at some point; this is not all the lists we have, but the screenshot became too crowded to actually show all of them. So because this is so important, we spent some time and optimized our list rendering. We do some of the basic stuff that's pretty well known now, DOM diffing, efficient reuse of the DOM, but we also do other things like signal-based deferral.
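To make the data-to-components mapping concrete, here's a minimal sketch of the idea: each node in the JSON response names the component to render and carries the subtree its children will receive. The response shape and renderer names here are hypothetical illustrations, not YouTube's actual InnerTube schema.

```javascript
// Hypothetical presentational API response: each node names a renderer
// (component) and holds the data subtree for its children.
const response = {
  renderer: 'channel-page',
  data: { title: 'My Channel' },
  children: [
    {
      renderer: 'video-shelf',
      data: { title: 'Uploads' },
      children: [
        { renderer: 'video-cell', data: { title: 'Intro' }, children: [] },
        { renderer: 'video-cell', data: { title: 'Q&A' }, children: [] },
      ],
    },
  ],
};

// Walk the response: each component passes the relevant subtree down to
// its children until a leaf node is reached.
function renderTree(node) {
  return {
    tag: node.renderer,
    props: node.data,
    children: (node.children || []).map(renderTree),
  };
}

const tree = renderTree(response);
```

In a real Polymer app each `tag` would correspond to a custom element, and the server could reshape a page simply by returning a different tree.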
That means that we can block some of the content rendering until we get specific signals. To name a few, we can wait for the content to appear in the viewport, or we can wait for the video to start playback. As well as doing that, we also do lazy and budgeted rendering. To explain that, I have kind of a simplified view of how the render thread works. People think you play video, you go fetch some video bytes, you give them to the browser, and video plays. But YouTube's not that simple. We do adaptive bit rate, we use DASH, and that means you have to use the Media Source Extensions, which means the render thread looks a little bit more like this. This is still a simplified view, but you've got to fetch the audio bytes and the video bytes, you've got to initialize the video API, you've got to append those bytes, and then at some point, video starts playing. So you can see that the render thread is actually fairly busy handling asynchronous events, and then you introduce UI code. UI code can take a lot of render thread time, and so in this case, you get these squiggly lines, and what those curved lines mean is that we're not able to receive an event on the render thread as fast as we'd like. So how do we solve this? Well, with the scheduling system that we have, with lazy rendering and budgeting, we try to break up our tasks, our UI code, into small chunks and fit them in between the times that these events come back. Also, we're cheating a little bit with this screenshot, because there are no user interactions here, and our users tend to do all kinds of weird stuff like typing or scrolling or sometimes resizing the page. And while that obviously can affect the performance, splitting our UI into smaller tasks allows us to keep the frame rate as high as possible in these cases. Also, I mentioned scheduling. We have a global scheduler on the YouTube website, and it also deals with priorities.
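The lazy, budgeted, priority-aware scheduling described above can be sketched roughly like this. This is an illustrative toy, not YouTube's actual scheduler: queued UI tasks run highest-priority-first, but a flush stops once its time budget is spent so the render thread stays free for media events. The injectable clock is just to make the sketch deterministic.

```javascript
// Toy priority scheduler with a per-flush time budget (ms).
// Lower priority number = more important.
function createScheduler(now = Date.now) {
  const queue = []; // [{priority, task}]
  return {
    schedule(priority, task) {
      queue.push({ priority, task });
      queue.sort((a, b) => a.priority - b.priority);
    },
    // Run tasks until the budget is exhausted; return how many are left.
    flush(budgetMs) {
      const start = now();
      while (queue.length && now() - start < budgetMs) {
        queue.shift().task();
      }
      return queue.length;
    },
  };
}
```

In a browser, each `flush` call would typically be driven by something like `requestAnimationFrame` or `requestIdleCallback`, leaving leftover low-priority work for the next idle slice.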
This is very important because not every single piece of code, and not every single piece of UI, is of the same importance. So let's say you come to the YouTube watch page. The most important thing for us is that video. This is the thing that we want to prioritize over all other elements on the page. But following that, we have things like the watch-next list. This generates a lot of watch time on YouTube, so we want to render this next, and we do it in a lazy, budgeted fashion. Some things are then actually extremely deferrable, like the comments here. Also, we know that YouTube comments don't have the best reputation, but they're still kind of important. And they've gotten a lot better. Basically, list rendering is a foundation of performance at YouTube. So let's talk about the development process. What does it look like from the inside, for an engineer working at Google? Well, the cool thing about having components is that as long as you can render a component, as long as you can create it separately from the rest of your application, you can bypass a lot of issues that we had before. And again, YouTube is very large; bringing up the development environment to work with YouTube takes time and resources, and being able to just take one component and test it separately, or create a demo, or feed it some data to put it into a specific state, is critical for the development process. So this is an example of a component, and the image on the left is what the engineer created. This is a component that consists of some other components. But the pictures in the center and on the right are two special flavors that are created automatically, unless the engineer wants to change them in some way. The middle screenshot is the dark mode. I highly recommend trying it on the new YouTube. And the right image is the RTL, the right-to-left version of the website. All of these versions of the component are tested and screenshotted separately, as you can see here.
There are some changes in this component. It's not very obvious just looking at it, but once we add screenshot diffing, we can tell that things have shifted down and some padding has changed. This is a level of testing that has never been possible at YouTube before, but we've really embraced it, and it's a core part of how we develop now. In addition to that, we took this granular component and testing approach a little bit further, and we created something called Storybook. Storybook is a way for you to interact with a component during development. We have here a Storybook for something called a video list cell. And on the left of that Storybook, or the component, is a list of stories that have been generated. Each is either a fixture that we saved before or some data that was automatically generated. This allows you to kick up a Storybook, click on any of these stories, and see your component rendered in that state. This is pretty much one of the best ways to interact with a component during development. But moving further, we integrated Storybook into our internal Polymer catalog, which is another way for you to bring up a component without needing to type anything on the command line. You can search for a component and bring it up at any point in its commit history, because we've integrated this with our version control system. You can browse the documentation, you can bring up the Storybook, run unit tests, and look at the API and all that stuff. Also, all of these tools are Polymer projects. So welcome to component inception, because the Polymer dashboard is a component, or a set of components, rendering a Storybook, which is a set of components, rendering a component which renders a lot of lists. So what's the state of Polymer at YouTube? We've mentioned that we have over 400 components on YouTube.com and over 1,000 components across all properties. So we have a data model that maps really well to web components.
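The core of the "story" idea above is small enough to sketch: each story is a named fixture that puts a component into one specific state, and showing a story just means rendering the component with that fixture. The names, fixture shapes, and the stand-in renderer here are hypothetical, not YouTube's actual Storybook implementation.

```javascript
// Minimal sketch of a storybook: a registry of named fixtures plus a
// render function that puts the component into the fixture's state.
function createStorybook(render) {
  const stories = new Map();
  return {
    addStory(name, fixture) {
      stories.set(name, fixture);
    },
    // Render the component in the state described by one saved fixture.
    show(name) {
      return render(stories.get(name));
    },
    list() {
      return [...stories.keys()];
    },
  };
}

// A trivial string renderer standing in for a real "video list cell".
const book = createStorybook(
  (data) => `<video-cell title="${data.title}" live="${data.live}">`
);
book.addStory('default', { title: 'My video', live: false });
book.addStory('live stream', { title: 'Live now', live: true });
```

A real implementation would render into the DOM instead of returning a string, but the fixture-per-state registry is the essential pattern.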
We have a highly optimized list rendering system, and we have a suite of component-level testing and debug tools. But there are a few things that we're still trying to bring up to date. Which is probably not a big surprise from a website that just dropped IE9 support. So we're still on Polymer 1.x. We've got a lot of stuff to migrate, and we're moving to hybrid elements, which give us a very easy migration path. We're really excited about Polymer 2. It has a lot of increased flexibility and extensibility that we're looking forward to. And we're still on Shady DOM. YouTube is a little bit more conservative when using these really cutting-edge APIs, so we're still reliant on polyfills, but we're really excited about Shadow DOM, and we're moving our tools and our infrastructure to support it in the future. So to wrap it up, we feel like we're building for the future. We finally have modern tools, we're working with the web platform, and we're helping push the web forward. The site is now faster to iterate on, and we have much better test coverage than we ever had before. And while this is partially thanks to the major rewrite, Polymer played a major role in helping us organize our internal workflow. Overall, the site is faster: up to 15% faster, depending on the page. And developers are happier. We finally share components across our projects, and we're using a standard stack instead of developing everything by ourselves. And we can't wait to start shipping all these cool new features after we launch. Thank you. Thank you, everybody. So up next, we have Kunal. He is from Netflix, and Netflix, as you guys know, has been going through a lot of modernization over the last few years. If you think about it, they used to actually mail out physical DVDs and not use web components. But also, I don't know if you guys have seen Netflix's recent financial report, but they're actually spending $16 billion on web components.
So to tell you more about their new Netflix original series on web components, we have Kunal Kundaje. Hello, everyone. My name is Kunal Kundaje, and I'm a software engineer on the cloud platform team at Netflix. Today, I'd like to talk to you about our journey to reboot the cloud platform user experience within Netflix and how we're using Polymer as we go about doing that. So let's get started. Here's a quick overview of the topics we'll be covering today. I'll start with a brief introduction to the cloud platform engineering organization at Netflix and what we do. We'll then go on a quick whirlwind tour of a few different types of apps we've been building using Polymer: five apps in about five minutes. Next, I'll talk about Reboot, our internal component library, and some tooling that we built for a better developer experience. Then we'll look at the road ahead with Polymer 2, and I'll share an incremental approach that we're taking to migrate with minimal pain. And finally, we'll circle back to apps, specifically state management in larger apps and some experimenting we're doing in that space. So, cloud platform engineering: who are we and what do we do at Netflix? Well, here's our team's charter at a high level. We provide a common set of foundational building blocks to Netflix engineers so that they can focus on core business value rather than recreating these infrastructure layers. Some examples of these building blocks include data stores like Cassandra, Dynomite, and Elasticsearch as a service; Kafka messaging and stream processing as a service; Atlas, our internal system for aggregating operational metrics across all of our services; and so on. So all of this sounds like a lot of server-side stuff, right? But all of these services that we provide as an organization become much more usable when we offer self-service apps that empower our engineers to get onboarded quickly and leverage the capabilities of our systems without roadblocks.
Insight tools like dashboards and visualizations, together with control planes, also help our engineers detect issues and fix them quickly and effectively without having to switch between multiple tools and command line scripts when they get paged in the middle of the night. And building all of these apps is where we come in. So let's go on a tour of some examples of apps that have been built for these various systems. They're all quite different from each other. So this segment should give you a good sense of the variety of app types that can be built using Polymer and Web Components. We'll start with Lumen. Lumen is our dashboard builder for operational insights. And it's widely used by engineers across the company to plot time series data, collected and aggregated by Atlas, our metrics data store, so that they can observe meaningful changes in these metrics over periods of time. So here's an example of a time series dashboard that's used by our cloud databases team to track various Cassandra cluster metrics, like IO operations and latencies over time. So Web Components are a great fit for this use case because we can have a simple public API for these graph components and encapsulate all of the logic to fetch these metrics and render them and then enable anyone to drop these components into any other internal app that we have, including those that are built in other frameworks like React, Angular and Ember, which we have within Netflix as well. But Lumen also supports more than just time series data. It also supports the concept of cell types, which are components that visualize the data that's passed into them in different ways. So for example, there are cell types for pie charts, histograms, bubble charts, data tables, what have you. And each of these is written as a custom element using Polymer. So here we can see a single real-time data source that's been connected to six different cell types to visualize the data in different ways. 
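Dropping one of these encapsulated chart components into any host page, regardless of framework, might look something like this. The element name and attributes are hypothetical illustrations, not Lumen's actual API:

```html
<!-- All fetching and rendering logic lives inside the element; the host
     app only supplies a query and a time window via attributes. -->
<atlas-graph query="name,ops.read,:eq" window="6h"></atlas-graph>
```

Because it's just a custom element, the same tag works whether the surrounding app is built with Polymer, React, Angular, or Ember.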
Users can also write custom data mappers, which are just JavaScript functions that take the data that's passed into them and convert it to a format that can be understood by any visualization type. So here's an example that shows data from Elasticsearch being mapped to different cell types and visualized in different ways. Next, let's take a look at Casper, a custom health check dashboard that we built for our fleet of Cassandra database clusters. At a high level, Casper visualizes the health of all of our Cassandra clusters in the fleet as a tree map showing simple red, yellow, or green states. The sizes of the boxes are proportional to the footprint of each cluster. Obviously, the cluster names have been anonymized here. You'll often find this page projected onto a wall-mounted TV right next to where our cloud database team sits in the office. What makes this dashboard quite unique from a front-end perspective is that the data source is a high-volume firehose of data being streamed in via WebSockets from a job that's aggregating all of this health check data from over 10,000 different Cassandra cluster nodes. So in order to not completely lock up our main rendering thread, we process this incoming data in a web worker and postMessage back when we need to re-render changes. When a specific cluster goes red, an on-call engineer can look at Casper's cluster detail page to quickly get oriented with the current state of the cluster without having to switch between multiple apps or command line scripts. This page describes the topology of the cluster, how many instances it has, what regions it's in, and any running maintenance jobs that may be impacting cluster performance. It even includes charts of some of the most relevant metrics, like read and write operations and so on. And in fact, the Atlas charts that you see at the bottom here are the same ones that you saw in Lumen earlier. That's the power of web components there.
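The worker pattern described above boils down to doing the expensive aggregation off the main thread and posting back only a small summary. Here's a sketch of such an aggregation step as a pure function; the cluster and status field names are hypothetical, not Casper's actual message format.

```javascript
// Pure aggregation step that would run inside a Web Worker: collapse a
// batch of per-node health updates into one worst-case state per
// cluster for the treemap.
function summarizeHealth(updates) {
  const byCluster = {};
  for (const u of updates) {
    const c = byCluster[u.cluster] ||
      (byCluster[u.cluster] = { red: 0, yellow: 0, green: 0 });
    c[u.status] += 1;
  }
  const states = {};
  for (const [cluster, c] of Object.entries(byCluster)) {
    states[cluster] = c.red ? 'red' : c.yellow ? 'yellow' : 'green';
  }
  return states;
}

// In the worker: self.onmessage = (e) => postMessage(summarizeHealth(e.data));
// On the main thread, only the small summary object triggers a re-render.
```

Keeping the function pure makes it easy to unit-test outside the worker, while the worker wrapper stays a one-liner.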
Now, you'll notice that this dashboard also sports a dark theme, which is unlike most of our other apps. So this was a good test for the themability of some of our components. Our Polymer elements expose CSS custom properties that Casper could override with its own color scheme to get them all to blend in with its look and feel. So now that you've seen a couple of examples of dashboard-style apps, let's continue our tour with something a little different. Keystone is our near real-time data pipeline that allows engineers to send events and logs to Hive tables, Elasticsearch clusters, and Kafka topics. This is the collection pipeline that's used to gather data for hundreds of the A/B tests that we run, as well as some of our personalization algorithms. And this is Keystone self-service, a web app for engineers to manage their data streams. We use D3 to build out a visual representation of the data stream as a directed graph, with colors, motion, and tooltips along the edges reflecting actual data flow metrics. Engineers can also use this view to make config updates for any of their outputs, or to change the topology of the data stream by simply dragging and dropping nodes to rearrange them. The encapsulation provided by custom elements and Shadow DOM was really beneficial here, because we could build this graph component as a self-contained element with a simple public API and all of its complex implementation details neatly hidden away. Since all of the SVG nodes are within the element's shadow root, they're protected from the rest of the app's code and CSS styles. Moving on to the Netflix Data Explorer; as the name suggests, this is a tool that allows our engineers to explore and update their data in our cloud data stores. So here we're looking specifically at the Data Explorer for Dynomite, our key-value store that's built on top of Redis.
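The custom-property theming mentioned above can be sketched roughly like this; the property names are hypothetical illustrations, not the ones our elements actually expose:

```css
/* Inside a shared chart element's styles: light defaults via var(). */
:host {
  background: var(--chart-background, #ffffff);
  color: var(--chart-text-color, #222222);
}

/* In Casper's app-level styles: override the same properties app-wide
   to get the dark theme, without touching the element's internals. */
html {
  --chart-background: #1f262d;
  --chart-text-color: #e0e6eb;
}
```

The element stays a black box; the host app only controls the knobs the element chooses to expose.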
Although the app's architecture allows us to layer in support for other data stores as well. So using the Redis API, Dynomite Explorer allows users to search for data by keys. Now, we needed to scale to handle millions of keys per cluster, so we used a virtualized list, like the awesome iron-list element, and infinite-scroll-style pagination to keep the UI running smoothly without skipping a beat. You can see that it's handling about 145 million keys in this case. In addition to simple data types like strings, Dynomite Explorer also supports complex data types like JSON values, hashes, lists, sets, and sorted sets. And with Data Explorer, our engineers no longer need to figure out which boxes to hop onto and what data-store-specific commands they need to run when they simply need to look up or update some data. You may have noticed that this app heavily makes use of the paper and iron elements, along with the material design styling. Having this rich palette of well-built, well-tested components available to us allowed us to rapidly prototype and build out this UI in a very short period of time, so we could spend time building out and optimizing our Node.js API layer, as well as adding additional features like single sign-on, auditing, and so on. And last but not least is Winston Studio, our app for operational runbook automation. So let's say you're an on-call engineer, and whenever you get paged in the middle of the night for a particular issue, you perform a series of steps from a runbook to diagnose and fix the issue. Winston is our internal platform that lets you automate those steps in response to alerts, and Winston Studio is the web app that engineers use to author, wire up, and test these automations. So here's an example of a simple automation that collects logs from a service and emails them to me when a health check failure alert fires.
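The reason a virtualized list can shrug off 145 million keys comes down to a small piece of math: only the rows intersecting the viewport get DOM nodes. Here's a sketch of that core window calculation (iron-list's real implementation is considerably more sophisticated, with node recycling and variable heights):

```javascript
// Given the scroll position, viewport size, and a fixed row height,
// compute the range of row indices that actually need to be rendered.
function visibleWindow(scrollTop, viewportHeight, rowHeight, totalRows) {
  const first = Math.floor(scrollTop / rowHeight);
  const count = Math.ceil(viewportHeight / rowHeight) + 1; // +1 partial row
  return {
    first,
    last: Math.min(first + count - 1, totalRows - 1),
  };
}
```

However large `totalRows` gets, only a few dozen rows exist in the DOM at once; scrolling near the end of the fetched data is what triggers requesting the next page from the paginated API.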
You can edit the Python code for this automation right in the browser itself, with syntax highlighting and basic syntax checks. And before promoting it to test or prod, you can actually test it incrementally with a fixed set of input parameters. So this is another unique app compared to some of our other examples, because it's almost like an IDE in the browser itself. We're using CodeMirror for the editor component here. And the nice thing about a component-based app architecture is that we can lazy-load some of these larger third-party dependencies only when a user actually gets to this page in the app. So those were just some of the many apps that we've been busy building using Polymer within Netflix. To recap, here are some of the interesting takeaways that emerge from these app examples. Custom elements and Shadow DOM give us a great encapsulation model, allowing us to build complex components with simple public APIs and shadow roots that shield them from global styles. Themability can be achieved using CSS custom properties and the @apply mixin, and we're also looking forward to the ::part and ::theme specs that we talked about earlier. Web workers are a great way to do background work off of the main UI thread. Virtualized lists, like the iron-list, for example, combined with paginated APIs, can give us fast, fluid performance even when we're dealing with huge amounts of data. And a component-based app architecture allows us to lazy-load just the dependencies we need for the views that a user has requested. All right, so now, if you're looking closely, you may have noticed that many of these apps have a lot of things in common, right? Modern web development across most libraries and frameworks these days is centered around apps composed of components. In the case of Polymer, these components are web components, or more specifically, custom elements.
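The lazy-loading takeaway above is mostly a caching pattern around a dynamic import. Here's a sketch where the `importer` argument stands in for a real `import()` call, so the memoization logic itself stays visible and testable; the module name in the comment is illustrative.

```javascript
// Wrap a loader so the heavy dependency is fetched at most once, and
// only when a view that needs it is actually visited.
function lazyOnce(importer) {
  let cached;
  return () => {
    if (!cached) cached = importer();
    return cached;
  };
}

// In the app, something like:
//   const loadEditor = lazyOnce(() => import('codemirror'));
//   route('/editor', async () => { const cm = await loadEditor(); /* ... */ });
```

With a real `import()`, the cached value is a promise, so every caller can `await` it and later visits to the page resolve instantly.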
In this section, I'll talk about our approach to building component libraries and some tooling that we built for a better developer experience. Being a small team with lots of apps to build and improve and maintain, we take a more pragmatic approach to building new components. When we can find high-quality, styleable components in the vibrant open-source ecosystem, and there are many, we use them and build on top of them rather than reinventing our own. We then augment these with our own elements that are specific to our internal use cases. And we use CSS custom properties and @apply mixins from our internal style guide to apply consistent styling across all of these components in terms of colors, typography, spacing, and so on. Now, when it comes to building our own components, we also wanted to have a consistent and streamlined developer experience. This includes things like scaffolding out new elements and iterating on them, generating API docs and demo pages, and making it really easy to do semantic versioning when you need to release new versions of your elements. So a couple of years ago, before the awesome Polymer CLI as we know it today existed, we built the Reboot CLI internally. This is a command-line tool that a developer can install using npm, and then use to scaffold out new components with a consistent structure, a common ESLint profile, and a set of common npm tasks to perform various actions. The npm run dev task, as you'd probably expect, just fires up a polyserve dev server so you can begin building and iterating on your new component. But there are a couple of other interesting things that the Reboot CLI can do. The first of these is auto-generating API docs for elements, properties, and events.
We do this by piping the element definition through a custom Babel plugin that traverses the syntax tree and generates a markdown file containing the list of property names, their types and descriptions, the list of custom events the element emits, and so on. And this also includes links to the specific line numbers in the source code for all of these things. So having this in a markdown file makes it really easy to look up the API docs for an element either directly in the Git or Stash repo where it lives, or in our element catalog where we convert that to HTML. The second is easy and consistent semantic versioning of elements. So when you're ready to release a new version of your element, you simply run npm run release and you tell it if this is a major, minor, or patch version bump. Reboot then automatically figures out what the current version number is, auto-increments the appropriate position in the version string, updates package.json, creates the Git tag, and pushes to origin, all in one quick step. So as you can see, we've been using Polymer 1 for a little while now, and we've built a bunch of components and a bunch of apps, and now it's time to start migrating all of those over to Polymer 2. So we just started going down that path fairly recently. So I thought I'd share the approach we're taking depending on what the app type is and so on. So to start with, we're making the latest stable version of Polymer 1 the baseline across all of our shared elements and the apps that use them. This basically ensures that we can safely use hybrid-style elements everywhere. It's a good practice to regularly keep up with the latest Polymer versions anyway, so this step shouldn't involve any big breaking changes. Now we recently started working on a couple of brand new apps. These were perfect candidates for actually starting out with the new Polymer 2 library and ES6 classes for the app-specific elements.
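The version-bump step described above amounts to simple semver arithmetic. Here's a hedged sketch of just that piece; the real tool also rewrites package.json and pushes a Git tag, and `bumpVersion` is an illustrative name, not the reboot CLI's actual API.

```javascript
// Hedged sketch of the version-bump logic: take the current semver
// string and increment the position named by the release type.
function bumpVersion(current, release) {
  const [major, minor, patch] = current.split('.').map(Number);
  switch (release) {
    case 'major': return `${major + 1}.0.0`; // breaking change
    case 'minor': return `${major}.${minor + 1}.0`; // new feature
    case 'patch': return `${major}.${minor}.${patch + 1}`; // bug fix
    default: throw new Error(`unknown release type: ${release}`);
  }
}
```

Note that a minor or major bump resets the lower positions to zero, which is what makes the increments consistent across releases.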
But these new apps also depend on some shared elements from our component catalog, right? So we took this opportunity to convert those shared legacy elements to the hybrid style so that they can work in both Polymer 1 apps and the new Polymer 2 apps that we're building. When we eventually have ES6 class versions of those shared components, we can simply swap those in, and now we have a fully migrated Polymer 2 app. But what about existing apps, right? For existing apps, I think we can take two different approaches depending on the size, complexity, and rate of change. So for small apps, and apps that aren't being updated much anymore, we start using Polymer 2 and upgrade their app-specific elements directly to ES6 classes. Because these apps are small and not changing very frequently, we can skip the intermediate step of converting these to hybrid elements. And again, once we have the shared elements upgraded to ES6 classes as well, we can simply swap those in and we have a fully migrated app. For larger apps, and ones that are still in active development, things get a little trickier, right? It's a bit more challenging to do a big-bang migration all at once. So for such apps, we start by converting their app elements to the hybrid style first while still running the Polymer 1.9 version of the library. Once that's done, we can just replace Polymer 1 with Polymer 2 and continue running in hybrid mode. Any new app-specific elements that we're writing now will then be written as ES6 classes, and we can retroactively start converting some of the older app elements to the ES6 style as well. And again, like the previous two cases, once the shared elements are upgraded as well, we simply swap those in and we have a fully migrated app. So those are just some of the different strategies we can take depending on what type of app we're talking about. And that brings us to our last topic, which is state management.
It also happens to be my favorite one, probably just because of the Stranger Things reference. So as we were working on a number of apps, especially larger ones composed of lots of nested components, we started running into some pain points around state, and shared state specifically. So let's briefly dive into what the problem was using a couple of examples. So say you've got a parent component and its child that share some common bit of state. A common pattern in such a situation is to make the parent component the source of truth for that state and pass it down to the child as a property. If the child then needs to change the value of that state, it simply emits an event that the parent listens for. The parent then makes a change to that property and passes it back down to the child, and everything just works. But consider this example. Here's a deeply nested child that cares about the same bit of state as a parent a few levels up. In this case, the property has to be passed down through every intermediate component in the hierarchy, even though these intermediaries don't really care about that state besides just passing it down. So they're just acting as pass-throughs. Same thing with the events bubbling up from the child. And here's another situation. In this one, two sibling components need to share a bit of common state. So now you'd have to find a common ancestor and store the state in that, even though it doesn't actually do anything with that state besides passing it down to the children as properties. This example also illustrates a potential problem when a refactor or design change causes one of these child components to move elsewhere in the visual hierarchy. You now have to move this component, find a new common parent, and then play the property-passing game all over again. So it turns out that there's a very popular project, Redux, that aims to ease some of this pain. It's described as a predictable state container for JavaScript apps.
And you heard Kevin talk about this a little earlier today as well. So we've had several teams within Netflix using Redux quite successfully with React. But there's nothing really React-specific about Redux, so we've started experimenting with it in some of our larger Polymer apps as well. So the central concept in Redux is a store, which is an object tree containing all of the shared state for an application. Properties on individual components can be mapped to specific slices of the store so that they're automatically updated when the state changes. Components can also dispatch actions to the store when they need to update some part of the state. So this is great because our shared state now all lives in one place and is only passed to components that actually care about it. So an open-source library called Polymer Redux by Christopher Turner makes using Redux with Polymer quite simple. Once your component class extends a mixin created by this library, you can add a statePath to any of your property definitions, and they'll basically change whenever the store updates the value at that path in the state tree. Components also inherit a dispatch method from the mixin that they can use to dispatch actions instructing the store to make a state change. So what's next for our experiment with Redux? Well, firstly, we're still evaluating. Redux may or may not be the right solution for every app type or every use case, so we're still trying to figure out where it makes sense to use it versus not. Second, in apps that are using Redux, most of our shared app state already lives in the Redux store. But there's also this additional state that lives in the URL in the form of path parameters and query parameters. So could we have the Redux store capture some of that as well, so the app can now handle all of that state in the same way? Can we keep the URL in the Redux store and sync bidirectionally?
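The store concept just described can be shown with a dependency-free sketch. This is not the Redux library itself, just a minimal illustration of the idea: one store, a reducer computing the next state, subscribers notified on every dispatch. The `SET_TITLE` action and reducer are made up for the example.

```javascript
// Hedged, dependency-free sketch of the Redux idea: one store holds
// shared state; components subscribe to slices of it and dispatch
// actions to change it, instead of passing props through intermediaries.
function createStore(reducer, initialState) {
  let state = initialState;
  const listeners = [];
  return {
    getState: () => state,
    dispatch(action) {
      state = reducer(state, action);     // reducer computes the next state
      listeners.forEach(fn => fn(state)); // notify subscribed components
    },
    subscribe(fn) { listeners.push(fn); }
  };
}

// Illustrative reducer for a bit of state two distant components share.
const store = createStore(
  (state, action) =>
    action.type === 'SET_TITLE' ? { ...state, title: action.title } : state,
  { title: '' }
);
store.dispatch({ type: 'SET_TITLE', title: 'Stranger Things' });
```

In an app, a component's property observer would play the role of the subscriber, which is roughly what the Polymer Redux mixin wires up for you.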
That's something that we've just started exploring recently, just a few weeks ago, in fact. It looks something like this. So similar to how the Polymer Redux library has the statePath concept, we can declare this query parameter here. And then our mixin is now responsible for keeping the URL always in sync with the Redux store. So our app can basically read that state exactly the same way. So diving into the depths of Redux could be a whole talk in and of itself, and we don't really have much time for that today. Props down and events up works really great in many, many cases, and you should stick to that pattern if it's working for you. But if you'd like to learn more about Redux and how it can be used with Polymer, here are some resources that you can check out. So there's the Redux site itself, of course. There's the Polymer Redux library that I mentioned. And then there are a couple of awesome Polycast videos on YouTube by Rob Dodson that actually show you how to use the Polymer Redux library as well as do some of the async stuff with it using thunks. Yeah. And that covers everything I wanted to talk to you about today. Again, my name is Kunal Kundaj, and you can find me on Twitter as @kunal. Thank you for being a great audience, and have an awesome time at the rest of the summit. Cool. All right, so coming up next is one of four, yeah, I counted, four Australians giving a talk here. So many Australians, in fact, that we're actually going to rename it the Polymer Australia Summit. So get your passports out. Australian Border Security's going to come by and check them. I'm just kidding, though. But Bede's going to come up here and give a great talk about how Web Components make CMSs simpler with Simplr. So give a big Aussie welcome to Bede. Hi, everyone. So my name's Bede. I'm a developer from Melbourne in Australia. And I work on a content management platform called Simplr.
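The URL-to-store sync being explored above can be sketched as a pair of pure helpers. These are hedged, illustrative functions, not the polymer-redux library's API: they just show the round trip between a state slice and a query string that a sync mixin would have to perform.

```javascript
// Hedged sketch of URL <-> store syncing: serialize the chosen keys of
// app state into a query string, and hydrate state back from one.
// Helper names are illustrative, not from the polymer-redux library.
function stateToQuery(state, keys) {
  const params = new URLSearchParams();
  keys.forEach(k => {
    if (state[k] != null) params.set(k, state[k]); // skip unset values
  });
  return params.toString();
}

function queryToState(queryString, keys) {
  const params = new URLSearchParams(queryString);
  const state = {};
  keys.forEach(k => {
    if (params.has(k)) state[k] = params.get(k); // only declared keys
  });
  return state;
}
```

A sync mixin would call `stateToQuery` on store updates (pushing to history) and `queryToState` on navigation, dispatching the result back into the store.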
So I'm here to talk to you a bit about Web Components for content management systems. So I've been working with CMSs for a while, and honestly, I've been a bit frustrated with them, both from a developer's point of view and as a content editor. A couple of years ago, though, I started using Web Components, and I realized these could potentially really change the way we look at content management, and they could help resolve some of these frustrations. So that's what I'm going to talk about today. I'm going to talk a bit about content management systems, where they have been in the past and where they are now, and how Web Components could potentially change them. I'm also then going to go over some of the patterns you might use to build out your own custom element for content management. So let's first look at a monolithic CMS. So a monolithic CMS is essentially your traditional CMS, such as a WordPress or a Drupal. It essentially gives you all of the functionality that you need in one single app. So on the one hand, this is great. It means it's really easy to set up and get going with. But like any monolithic system, it means it's quite rigid. So from a developer's perspective, it can be quite difficult, for example, to change the way you're displaying that content. For example, if you want to use a different framework or a different templating system, it's going to be really hard to wangle that in with the monolithic system. Also, for a content editor, you're going to have a bit of frustration there. So with most of these systems, you're going to have some kind of a dashboard where you log in and you edit that content in a form. The problem with this is that that content there is in a completely different context to what your user is going to see. It's a very different environment.
So your content editors are going to get a real disconnect between what they're seeing and what the user is going to be seeing. Over the last few years, though, there's been a huge, huge rise in popularity for something called the headless CMS. So the headless CMS essentially takes this monolithic system and gets rid of the view layer. Instead, it replaces it with, generally, a really consumable JSON API or similar. Essentially, for developers, this is fantastic, because at this point, they don't actually have to deal with a CMS. Instead, what they're dealing with is just an API, so they can really use whatever framework, library, or backend they want to talk to that content. Generally, though, these systems are going to end up with a dashboard just like the monolithic systems, where you have to go into some form and edit that content in a different environment to the one your users view the content in. So this is where components come in. What if we had a componentized model for content management? The idea here is that you essentially build upon a headless system with that API to consume content, except you take that view layer from the monolithic system and you break it down into really small chunks of data, such as an image or text, or something a bit higher level like a blog post. So this is already being done on the dashboard of these other systems. If you go in, you'll edit a content type in a kind of componentized manner. But the idea here is that that view layer is embedded into the component itself when it's delivered to the user in the browser. On top of this, we're also adding in an editing layer that's embedded inside the component itself. So our content editors are actually able to go in and edit that content in the exact same place and same environment that users are going to be viewing that content in.
Also, because this is a small, modular, and componentized system, we're trying to retain a level of flexibility for the developer so that they can still have control over this content and how it's displayed. So why am I talking about this now in the context of web components? So first of all, custom elements are the first time that we truly have interoperability. Before, any componentized model was going to be restricted to the framework it was working in. This way, we can build components that can actually be distributed to any HTML environment. More than this, we also have encapsulation through shadow DOM, so that we can make sure that editing UI isn't having any side effects on the rest of the DOM. So what would this look like? So here, I've got a dynamic image custom element with a path property that maps to some URL somewhere; essentially, it's just pointing to some data. I want it at some point to fetch from an API, and I want to render out that content. In this scenario, that means an image tag. Later on, I then want it to save back to that same endpoint, so we'll do a PUT request there. So for a developer, this is generally what we'll see. But what about that editing experience? So for example here, we want the editor to be able to come in, interact with that image in an isolated environment, maybe upload a new one, manipulate it in some way, and then be able to escape, all in the context of that one image, and see what the user's going to see. So how would we build that? So I'm going to go through a basic primitive. I'm going to go through that dynamic image, and I'm going to focus on the fundamentals of the content part of that component. So I won't touch on the server, and I won't touch on the internals of that editing UI. So this means I'm going to look at: how do we store that content? How do we render it onto the page for the user? How do we toggle that editing UI to manipulate it? And obviously, how do we get and set that content over a network?
So first of all, we need to create a basic dynamic image element that inherits from the base Polymer element. And straight away, as early as possible, we want to set up an image on our instance that's going to act as a rendering point for all the data we're going to have. Next up, we want to append this into the light DOM in the connected callback. So we want to do this in the connected callback because only once the element is inserted into the DOM do we know that the user actually wants to start consuming this content. We also need to make sure that we're doing this into the light DOM, not the shadow DOM, at this point. And that's because content should fundamentally still be accessible to the user. For example, if they want to use any third-party style sheet or any third-party library that expects an image tag to be on the screen, you need to be able to make sure that's accessible. This also opens up the door for really easy server-side rendering. So as long as your content is in the light DOM, anything that can spit out an HTML string can server-side render. So let's look at some properties here. Essentially, what we want is some properties that are going to be able to hold all of the content to display that image. So for us, that's pretty simple. That's just a source and an alt property. You could add some more meta information if you need, but this is fundamentally what we need. We also have a render function. So this is an observer for those two properties. So this is going to get called every time one of them changes, and it just passes those props down to our image. This might be more complex in other scenarios, say if you're building an article element that's based on markdown. You might store the content in markdown, so your render function is actually going to have to take that markdown and convert it into HTML. So at this point, we're storing some content. We're rendering it out to the DOM, which is great, but it's pretty basic.
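The render step just described can be shown as a pure function. This is a hedged sketch, not the speaker's actual code: `render` is an illustrative standalone function, and the `image` argument stands in for the light-DOM `<img>` element the observer would write to.

```javascript
// Hedged sketch of the observer body: pass the source and alt
// properties down to the image element held on the instance.
// `image` stands in for the <img> created in the constructor.
function render(image, src, alt) {
  image.src = src; // reflect content properties onto the light-DOM <img>
  image.alt = alt;
  return image;
}

// A markdown-backed article element's render function would instead
// convert the stored markdown to HTML before writing it into the DOM.
```

Keeping render as a simple props-to-DOM mapping is what makes the element easy to server-side render: the same properties produce the same markup.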
It's essentially just a wrapper for an image. It's not doing much. So let's look at adding in some editing controls. So on the right, I've got my template, which is how I want my shadow DOM to look for this dynamic image element. And essentially, we're encapsulating all the editing functionality into it. Now you'll note my editor controls are surrounding the content, but really, that's just depending on the structure of your editing UI. It is beneficial, though, for all of your editing functionality to be packaged into one element, just because this means there's a clean separation of concerns, and it's also going to give you a performance benefit, which I'll talk about later. So first off, we need a way to make sure that we can toggle those controls open and closed. So we've just added an editing property to the host and an open property on our editor controls. And all this is doing is just making sure our host can control when they're open and closed. We also obviously need to pass down those properties and that data to our editor controls. And at this point, we don't really know what the editor controls are doing under the hood. They might prompt the user for a file. They might bring up a canvas to manipulate it. But essentially, we're just giving them the data, and at some point in the future, we know we're going to want that changed data back. Lastly, I want to look at a load-controls method. So most of the time with the dynamic image, most people that use it, most people that come to your site with a dynamic image on it, are just going to be viewing the content. They don't actually want to edit it. So we don't want to burden them with the functionality that comes with the editor controls. So what we want to do is have an observer function so that when editing goes to true, we can then import an HTML file that has the definition of the editor controls inside of it.
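That load-controls observer can be sketched as a once-only guard. This is a hedged, illustrative version: `makeEditingObserver` and `importControls` are made-up names, with `importControls` standing in for an HTML import (or dynamic import) of the editor-controls definition.

```javascript
// Hedged sketch of the `editing` observer: import the editor-controls
// definition the first time editing flips to true, and never again.
// `importControls` stands in for the actual HTML import of the file.
function makeEditingObserver(importControls) {
  let loaded = false;
  return function onEditingChanged(editing) {
    if (editing && !loaded) {
      loaded = true;          // guard: only fetch the definition once
      return importControls();
    }
    return Promise.resolve(); // already loaded, or not editing yet
  };
}
```

Viewers who never edit never pay for the editing code, which is the performance benefit mentioned above.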
And only at that point, once we know that the user definitely wants to edit, do we load in, parse, and boot up the JavaScript needed for editing. You can actually do more than this. You'll notice this entire shadow DOM is only for editing. So you could actually defer all of your work for dealing with a shadow DOM until the user is actually editing content. So now we have a dynamic image. It can display some content for the user, and you can add an editing prop, which will open up some controls so that you can start manipulating it. This is pretty simple, and it's a nice base to work on. We have an interactive and dynamic image element. But it's not there yet. We really need to have some kind of networking to be able to load and persist that data. So first up, you need some kind of property which is going to let you uniquely identify the data it's talking to. So I've chosen a path here to map to some kind of URL. But you could choose a unique ID, essentially whatever is going to map you back to a unique URL. We're also going to need a deserialize function, essentially just a way to take whatever the server is giving to us and hydrate our own properties from that information. Again, this API is pretty simple. It's giving us those nice source and alt properties straight up. But in other scenarios, you might not have control over that API, so your deserialize function might have to do a little bit more work. Conversely, we're also going to need a serialize function, which is just going to do that same process in reverse. We just want to take the content stored (for us, that's our properties) and package it into an object literal that can be sent over the wire later on. And obviously, we need some methods to actually perform these requests. So our load method is fetching a URL based on that path property. We're taking that JSON, parsing it, and handing it over to our deserialize function. The save method is, again, pretty similar, just doing it in reverse.
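The networking pieces above can be sketched as four small functions. This is a hedged sketch: the API shape (`{src, alt}` JSON at a path) and the injected `fetchFn` are assumptions for the example, not the speaker's actual endpoint.

```javascript
// Hedged sketch of the networking layer: deserialize/serialize map
// between the API's JSON and the element's properties; load/save take
// an injected fetch function so the endpoint shape stays an assumption.
function deserialize(data) {
  return { src: data.src, alt: data.alt };   // hydrate properties from JSON
}

function serialize(props) {
  return { src: props.src, alt: props.alt }; // package properties for the wire
}

function load(fetchFn, path) {
  return fetchFn(path)
    .then(res => res.json())                 // parse the response body
    .then(deserialize);                      // then hydrate properties
}

function save(fetchFn, path, props) {
  return fetchFn(path, {
    method: 'PUT',                           // persist back to the same URL
    body: JSON.stringify(serialize(props))
  });
}
```

With an API you don't control, only `deserialize` and `serialize` need to change; load and save stay the same.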
So we're getting that serialize function, calling it, turning the result into a string, and sending it over to our API via a PUT request. Now this save function, you're probably going to want to wire up to some external UI, for example, a save button that you might have on a page, or perhaps to an internal event, such as the editor controls automatically saving when they close. Lastly, we want to make sure that we're loading in that content at the appropriate time. So I'm doing this in the connected callback because, basically, you want to make sure that you're not making a network request too early, but you want to make sure users are getting the content at the right time. So once the element is in the DOM, we're making that network request. But you could be a bit smarter than this. For example, you could use something like an Intersection Observer so that you load in that content only once the element has been scrolled onto the page itself. So now we have a pretty simple image, but it can display content. It can provide an editing UI so that you can edit it in place and get feedback on that editing. And it can also dynamically fetch that content from a network endpoint and persist it back to that same endpoint. But this is just one component. Ultimately, to get to a CMS, you need more than this. You need a whole library of components. And that's what we've been doing with Simplr. So we've been developing a bunch of primitive elements, such as video, image, and text, and bringing them all together using other mechanisms, such as authentication systems and global management systems that are able to synchronize save events amongst all of the elements and make sure they're all editable at the same time. But more than just a single CMS, wouldn't it be great if we had an ecosystem of these dynamic elements, where people could mix and match and use the ones they need based on their website or app?
And what if we could have a plugin system for those APIs so that you could choose the content source that you're using for a specific element? You could have multiple content sources on the one page. Ultimately, I think using all of these together and utilizing web components and the features that it gives us, I think we could really see a different approach and a better way to manipulate and use content on the web. So thanks very much. I hope this has been interesting. Please, if you're interested at all, come and chat to me afterwards. Thanks. Awesome. So last year, we had Gannett up here. They gave a talk, and they loved the Polymer Summit so much that they had to just come right back and bring a friend on top of that. Fun fact, one of the speakers up ahead is a blacksmith. He also brews his own beer and lives on a mountain, and his name is not Ron Swanson. So also Gannett is the parent company of USA Today, and most of these people are from USA Today. And so to bring you the USA of tomorrow, today we have Josiah, Josh, and Marianne. Well, hello. Good afternoon, everybody. Welcome to the talk with a very long name. Designing a design system for modular modules and building a team to build it. My name is Josiah McCann. Joining me on stage are Marianne Epstein and Josh Trout. Josh and I are representing the core web development team, and Marianne is here representing the UX design team at USA Today. First, I want to start off by saying thank you, Polymer team, for having us out to speak again. Last year was a really, really fun year, very, very practical talks, and I'm really excited about the talks tomorrow and the rest of the talks today. It's been a great conference so far. So I want to share a little bit about the USA Today network and what we're all about. And we're all about making communities stronger. 
And to do that, we have to inform them, equip them, and empower them, fostering deep and vital connections between members of our community and the world around them. And we connect these communities all together through our national brand, USA Today, and our 109 local media organizations, merging our national voice with the local communities. As an award-winning news organization and a modern media company, our 500-plus digital products reach 110 million readers every single month. We reach 43% of the internet population with our content, resulting in 1.5 billion page views every month. And as you can imagine, this level of scale and fragmentation between so many websites has its challenges. And today, we want to share our success as a dev and a design team, focusing on very practical points you can immediately walk away with and apply to any size team or project. So to give you a little bit of context, here's what we've been up to since we last spoke. A year ago, we launched our new Polymer-based web framework. We test drove it with a few different microsites. We talked about one at last summit: our data-driven Olympics coverage. And after that, we launched our continuous coverage of the election, all converging on election night, the biggest news night of the year, where the framework made its big stand, taking on heavy amounts of traffic. And the entire process was a great way to see how much faster and more efficiently we could build on this new framework, while also identifying areas that needed to be improved. And at the beginning of this year, we began replatforming our current sites onto this framework and doing a complete redesign at the same time. Right now, USAToday.com on mobile devices is completely powered by this new Polymer web framework. Part of this new framework was figuring out: can we build quickly, can we build more efficiently? And it really, really worked for us.
But this new approach that we took, this modules-everywhere approach, is key to our success in building for a large news organization with many developers scattered all over the nation. So this approach is adopting a modular way of thinking, and web components on the client are front and center in our Polymer-based approach. But it's not just our client: on our servers, we have server-side modularization through a microservice ecosystem. This modules-everywhere approach is very decoupled, allowing for maximum component reuse across not just our team, but through any of our web developers spread across the entire network, reducing the cost of experimentation and maximizing shared code use across each and every property. And to support this development philosophy, design had to be on board and think modularly as well. And here to talk about our new design system that supports this framework is Marianne. Hi, everyone. As Josiah said, our design team has spent the past year working on a new modular design system. And today I'm going to talk about what a big change this was for us, as well as what worked well for our design and dev teams, in case your teams might be approaching similar challenges. Heading into our redesign last spring, we had separate desktop and mobile sites, and we'd been adding new things to them for a couple of years. And over time, our designs had veered off in many different directions. Here you can see a story coming into our desktop site, and that same story also coming into our mobile site, and they look very different from one another. And if you were to hide the logo at the top, you might even think that these came from two different publications. And these differences were causing problems for our business. Our journalists couldn't predict how their stories were going to look, which was very frustrating for them. And from analytics, we knew that our readers weren't as happy as they could be either.
And because these same sites were being sent to 100 different newsrooms, our designs were frustrating a lot of people every day, which on the UX team is the opposite of what you want. So there were some good reasons behind everything looking so different at this point. First, a lot of things had changed since these sites were built: reader habits, our storytelling in response to those, and our scale. We had grown a lot as a news network in the past few years. Meanwhile, we were not set up well for all of this change. We didn't have a style guide. So whenever we needed something new, which was pretty much all of the time, we would do our best to make it match, but more often than not, we would have to design it from scratch. So we wanted to really understand our problems before diving into the redesign. So we shadowed our journalists to find out what they really needed from our sites. And then we took inventory. So we screen-capped every page of our websites to understand how we were currently meeting the journalists' needs. And what we found were just hundreds and hundreds of single-use, one-off experiences. So here's a specific example to show you what was happening. We are a news organization, so one of the most important things we do is promote stories to help readers find things that they're interested in. And we call these story promos. And at the time of our inventory, we found this. So this is 12 versions of a very similar-looking story promo. But each one of these could only be used in a specific place. So this one only ever showed up on home pages. And this one only ever on blog pages. This one only ever appeared on mobile article pages. And this one only in desktop search results. And on and on for all of these. And if you look at these more closely, almost every style here is unique. So every headline has a slightly different font treatment. Even the hex values of all of these grays are all different.
So not only had this same thing been designed 12 times, but it had also been developed 12 times. And this was just one example. So we saw this same sort of duplication and variation happening for our video promos, our share tools, pretty much everything else on our site. So to step back for a second, as a designer, unearthing all of this was actually very, very exciting. So seeing the same thing done so many times at such scale, and knowing how it was really causing problems for our journalists and our users, that meant we had a really good problem to solve. And it was actually two problems. We had a lot of design sprawl and a lot of inefficiency. And the design team thought a lot about how to solve these problems in such a way that we wouldn't have the same ones again in another year. And we realized our focus had to be reusability. So we needed to take anything we were doing over and over and do one thing instead. And that meant we needed smarter modules. So for us, that meant modules that would either do the same job in different places, for example, a promo that can live on a homepage or an article, or do the same job across use cases, so one promo that could support a video story as well as just a regular story. And we needed smarter styles, so that we could reuse them across these modules to keep everything cohesive and help fight design sprawl. So here's a very short version of how we got there. Based on our inventory, we distilled the needs of our site into module categories. So for us, as a news site, ours were promo, story, media, ads, and a couple of others. Then we distilled all of our style needs into style ramps. So anything we used again and again, like type, color, spacers, we abstracted those into variables. And finally, we designed some documentation to help us stay organized.
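The style ramps just described might look something like this in a Polymer custom-style theme. This is a hedged sketch: the variable names, values, and mixin are illustrative, not the actual USA Today tokens.

```css
/* Hedged sketch of style ramps as CSS custom properties; every
   name and value here is illustrative, not a real USA Today token. */
html {
  /* color ramp */
  --color-gray-1: #222222;
  --color-gray-2: #626262;

  /* spacer ramp */
  --spacer-sm: 8px;
  --spacer-md: 16px;

  /* type ramp entry, expressed as an @apply mixin */
  --type-promo-title: {
    font-size: 14px;
    line-height: 1.15;
  };
}

/* Mixin mirrored into a class so server-rendered HTML can use it too. */
.type-promo-title {
  @apply --type-promo-title;
}
```

Modules then point at ramp variables instead of hard-coding values, which is what keeps 12 promos from drifting into 12 shades of gray again.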
And for us, documentation was absolutely the key part of getting all of this to work, because if there's one thing we learned from our inventory, it is that reusability does not happen by accident. So even if dev moved to a component-based approach, design still had a role in making sure those components actually met the right needs and actually fit together visually on the page. And we found that reusability only happens when we pay extremely careful attention to details and then write them down. So this was quite an adjustment for our team, because not everyone loves writing things down. But we have come to love what it does for us, which is to help us design things that are, in fact, reusable. And I'll share our version of design documentation in a moment, but first, I want to show you where we ended up. So here's our new story promo, which now has a real name: Promo StoryThumbSmall, or P1 for short. And this was our single reusable module answer to the 12 different versions we had before. And this module can live on a small screen or a large screen. It can live in the main content well, or in the side rail. It can promote a video story, or a 360 video story. It can promote a story without an image, which is often the case for breaking news. Or a story without a timestamp, which helps us showcase our best evergreen content. And it can live on a homepage, or an article page, or search results, or any other page we build in the future. And it is made entirely of reusable styles. So here you can see that everything in this module is pointing to one of our style variables, even the space between elements. And Josh is going to talk a little bit about that for a second. So we implemented this through the usual ways of styling Polymer applications and elements: a custom-style element for our theme, with CSS custom properties, mix-ins, and some classes. This sample shows the colors and typography used in that promo. 
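That sample slide isn't reproduced in the transcript, but a minimal sketch of this kind of theme might look like the following. All the variable, mix-in, and class names here are invented for illustration; the real style ramp is much larger.

```html
<!-- Hypothetical theme sketch: custom properties for color,
     a mix-in for headline typography, and a class that mirrors
     the mix-in so server-rendered HTML can share the same styles. -->
<custom-style>
  <style>
    html {
      --color-text-primary: #1a1a1a;
      --color-text-secondary: #616161;

      --font-headline: {
        font-family: 'Helvetica Neue', Helvetica, sans-serif;
        font-size: 18px;
        font-weight: 700;
        line-height: 1.2;
      };
    }

    /* Mirror the mix-in into a class for server-side HTML. */
    .font-headline {
      @apply --font-headline;
    }
  </style>
</custom-style>
```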
And also shows how we mirror some of the mix-ins into classes so we can use that in server-side HTML. We also built custom elements for some of the more complex design elements, like icons and buttons. And this example on the screen, which is like a little label header that goes above a lot of our list of promos. And now Marianne's going to keep talking to us more about documentation. Thanks, Josh. OK, back to our friend P1 here. A lot of attention to detail went into making this one module smart enough to be reused in so many ways. And that's where our documentation came in. So we wanted to keep documentation as minimal and lightweight as possible, while also making sure we weren't missing anything. And one exciting thing we learned as we troubleshot that is that dev and design actually needed to know the same things. So we found that to make a module reusable, we all had to agree on the answers to five basic questions. What is it called? What is it made out of? What variants do we need? How does it scale? And what styles is it using? And here is a peek under the hood at our version of design documentation for that P1 module. So on the left, we have our functional spec and on the right, our design spec, which we call our matrix because it's a visual matrix of how this module can look. And together, these documents answer those five questions. So question one, what is it called? We never used to pay attention to this, but now anything we build gets a specific name, so we can keep track of it and reuse it. And we collaborated with dev on a naming system, which includes three parts, a shorthand ID. So in this case, it's P1, and that just makes it easier to talk about things. Category ID, here it's promo, and a descriptive ID. So this is StoryThumbSmall, which tells us this has a small thumbnail image. Next question, what is it made out of? 
Here we established that this module has a headline plus some optional things, like labels and image and other nested modules, like a timestamp and an icon. Question three, what variants do we need? This is where we capture our module's use cases. So for the P1, we have our default display, then variants for no image, different media types, advertiser content, and a couple of others. Question four, how does it scale? So here we see we have a narrow version and a wide version, and the matrix tells us how these sit on the grid, as well as what the two sizes look like. And finally, our styles. We call out our style variables over here, and this helps us avoid those one-off styles we used to have so many of. So I hope you can see that for us, documentation is not an end in itself. It has really turned into a thinking tool for our teams to help all of us check our work for reusability. And over the past few months, we've used this method to build out an MVP set of modules for our new site, which launched to 100% last week. And here it is. Here is our new story page design. And we think it looks much cleaner and more on-brand and much more trustworthy than the old one, and is going to be much more predictable for our journalists and our readers. But what's even more beautiful to me about this new design is that we now have this kind of X-ray vision into it. So anyone on our team can look at this page and know exactly the modules that came together to build it. And this X-ray vision makes us so much more efficient than before. When we need to change something, which we know we will, as we test things and get feedback and as our needs change, we can change it in one place instead of 12. And we can quickly build new pages by repurposing things we've already built. And the last thing I want to share with you all is that we've been beta testing this for a few months. And we can see that our readers are now spending significantly more time with us per visit on the new site. 
So while we love the new design system, our readers loving it is what we care the most about. So we're really happy with that result. And we're excited to keep evolving all of this because it's brand new, it's a work in progress, and there's still a lot to learn. Thank you so much, and Josiah is going to take it from here. Thank you, Marianne. Very good stuff. So, building a team to build stuff. We've unified around a module-based, decoupled web framework. We've established a shared design system that organizes our vision behind every component we build. But we need to think about how to structure a web development team around component-driven web development. Because I believe we're in the post-abstraction era of coding for the web. And as we unify as a team around a standards-based approach, element encapsulation, and heavy reusability, we need to structure our team for maximum efficiency and effectiveness. We've all been building websites and web things the same way for a very long time now. But web components change this work dynamic entirely. And because of that, it was time to think about a new kind of team organization. In our internal observation, we identified three coding styles that make up our team: the innovative artists, the disciplined scientists, and the very reliable craftsmen. And it's really important for us to understand how these different coding styles work together in order to build an effective team. And every style has its strengths and weaknesses. No one is greater than the other. And some of us don't really fall cleanly into one column or the next. But some projects may benefit from one style being more dominant than another style. For example, a banking application is very focused on being accurate rather than on time to market, while if you're working for an innovative startup pushing out code that's changing the world, you want to do that very, very quickly for your investors. 
A balanced team can cover the weaknesses of any single type, but only when good communication and empathy-driven teamwork is applied. So let's learn a little bit more about these different coding styles. Let's take the artist: the adventurer, the innovator, the fast mover, the problem solver, always figuring out the problem, always finding a better way, while cutting a few corners in there. I can identify with the artists the most. It's like, tests. What are tests? I don't know. I don't know what a test is. Our weakness as artists: unique solutions are great for pushing innovation and doing things better, but unique means it's harder for someone else to pick up and maintain the code. How would an artist approach building the P1 module? They might say, oh, I'm going to use the new CSS grid framework, and that's how I'm going to make this happen. But we already have a grid framework for the company, and we've just fragmented company standards. All of a sudden, someone else comes around to reuse it, and they're like, what did you do with the CSS grid framework? So instead, artists need to innovate the right way by focusing on things that improve everyone's workflow, not just the current vertical that they're working in. The scientist pursues code as a discipline to be mastered. They're focused on best practices and have very well-tested code. And this is really, really good, but it can all come at the cost of over-engineering solutions and slower time to market. And the scientists, how would they approach that P1 promo module that Marianne showed us? They might look at it and say, oh, we need to use the image resizer for this image. Oh, this image resizer, it's really clunky to work with. It needs some refactoring. Or, oh, how we're fetching data over here for this P1, it's not very elegant. I think I'm going to rewrite it. And so all of a sudden, we've taken a very small-scope module and really, really extended the scope. 
But the positives are that they're continuing to improve on a framework, plugging holes in the framework and pieces of code. Carefully tested, disciplined, always seeking the best industry-standard practices. Slower time to market, over-engineering. You don't know anyone like that, do you? Maybe. The craftsmen, very, very important. This is an often-overlooked coding style, and these are people that you work with every day. They are steady and dependable, delivering consistent, on-time code that's very, very reliable. Sometimes they can lack innovation and deeper technical expertise. They might approach that P1 module like this. They're using two keyboards in Emacs. No, not really. Who uses Emacs? They might approach it like, oh, I wrote 60% of this code last week, or code like it. I'm just going to copy my code. I'm going to bring it over here. And I'm going to reuse all this code. Great. And that's awesome because we're keeping to the company's standards. We're doing a lot of reusing. But not so good, because no one has stopped to think, hey, is there a better way to do what I'm doing? Is there a better way to solve this problem? And maybe copying the code that I used last week is great. But if I'm copying code from a month ago, a lot of things can change in a month. Overall, the craftsman is a very, very important addition to the team, often overlooked by the other coding personalities. Now, adding web components into the mix on top of these styles, web component development really resonates with each of these styles in different ways. The artists, they get to forge ahead on these new best practices. They get to blaze a new trail, because we've been building things the same way for a very long time. They get to go back to the drawing board. The scientists, well, they get their standards-based web development. Even though this is a very free solution, they get that encapsulation. They get clean, organized code and the ability to test things logically. 
So the scientists, they love this. And the craftsman, well, they get to use familiar tools and technologies that they already know: HTML, CSS, and JavaScript. And Polymer has such a straightforward, simplified API, it's not like throwing a whole new abstracted web framework at a craftsman expecting them to learn a completely new API and framework. They get to use the tools that they already have. That's how web components resonate with each of these styles. So it's important that we know and understand these different styles a little better. So how do each of these styles work together? How do you balance a team with these styles? And we have to remember, when you're working on a team with different-style coders, we're all in this together. And we can either be building each other up or tearing each other down. And balance is really struck by the scientist bringing that structure, bringing the, hey, we need to harden this and test this, to the artist. And the artist saying, hey, let's solve this problem that no one's been able to solve yet. Let's do it with innovation. And bringing that to the craftsman and the scientist. And the craftsman is like, hey, reality check, everybody. We need to be on schedule. We have something to deliver. And we just got to go, go, go. So the other interesting thing is that this can also be applied to entire dev teams, as entire development teams sort of lean one way or the other. And how do you solve for team-to-team interaction? That's even harder than just managing a team and figuring that out amongst a team. And it's through empathy. It's through clear communication. And it's through cooperation. So empathy, I keep throwing out that word, but what does that really look like practically? Well, it's like a bunch of scientists on a team saying, I can't believe they don't have 100% code coverage in that project. And we do this all the time. So stop, and let's empathize. What does empathy look like here? 
Well, empathy and understanding says, well, maybe the team is focused on delivering something fast with imperfect code. Maybe that's what this particular team needs to be focused on right now. And then the artists, why aren't they using the absolute newest way to build things? Well, let's empathize. Maybe it's safer and easier to build on a proven industry standard for their project than going off the rails and building something else. Because what we want to do, we want to fight against extremes. And both the artists and the scientists, they can look at the craftsman and say, they're not real coders. They're not up at 3 o'clock in the morning contributing to open source repositories every night. But they're the bread and butter. They're the ones that are churning out all this work. And we have to fight against extremes. And we have to remember to empathize, or else craftsmen will get impostor syndrome, and the walls of hubris and ego will be built up with the scientists and the artists. So now Josh is going to talk about how we've structured our web component developer experience specifically. Josh? So we put this into practice on our dev team in a few ways. The first is focusing on the API over the element implementation. Because there will always be times when you must compromise on code quality, where speed to market is just more valuable to the business than having the absolute most rock-solid code ever. So what that means is focusing on reviewing the API, which is the name, the properties, and the public methods of an element. The implementation of all those things can be refactored later very safely without having to worry about breaking anybody else's code. The way we make sure that refactoring actually happens on our team is we have a program we call "adopt a module." And that means we just bake some time into every sprint where we allow developers to go back and review modules that have maybe been sitting around for a while and that we haven't touched. 
And clean up documentation, clean up some JavaScript that might be a little messy. Maybe some styles aren't implemented as cleanly as possible. And that lets us get code out to market quickly, but still come back and make sure we have really good code that will be maintainable and long-lasting. So what that looks like in practice: this is a really simple sample element showing a good API with some bad code. So you see there's this horrible little string function that's filtering out spaces for some reason. But it's got a nice name for the element. It's got a good property, title. And all the bad stuff can be refactored out later. On the flip side, you've got a bad-API, good-code module, which has got a really nice implementation for the filtering. It's just much cleaner. It's got some error checking. But there's a problem with it. There's a misspelling in the title change handler. And you can't go back and fix that, because if somebody else is already using that, you can't fix that spelling. And the property is just called t. So now everyone that's setting t equals whatever, they're going to have to change that. And then the element name's not even that great. So this is an even bigger problem when you're dealing with other teams using your code. And it's the biggest problem when you're actually open sourcing your code and the rest of the world is using your stuff. You don't want to be breaking their applications because you got a little sloppy at the start. So to get this focus, we kind of build things backwards. We call it demo driven development. We start by building examples of how the code will be used, how the element will be used. And this is pretty easy because of the spec documents we have. They list out all the different variants that design has told us we should account for, so we can show what each of those looks like and think about how the element will be used to build out each of those things. 
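A demo file in that spirit might look like this sketch, with one section per variant from the design spec. The tag name and attributes are made up for illustration; they're not the team's actual API.

```html
<!-- Hypothetical demo page, served next to the element by the
     dev server, so every design variant can be eyeballed as the
     implementation comes to life. -->
<h2>Default</h2>
<story-promo headline="An example story"></story-promo>

<h2>No image</h2>
<story-promo no-image headline="Breaking news, no art yet"></story-promo>

<h2>Video</h2>
<story-promo media-type="video" headline="A video story"></story-promo>

<h2>Gallery</h2>
<story-promo media-type="gallery" headline="A photo gallery"></story-promo>
```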
Once we have that solid and we like it, then we actually build out the element's implementation. And then finally, we'll come through and add tests and make it production-ready. So what this looks like for our promo module is listing out stuff like the normal variants, the version with no image, a version for video, a version for galleries. And all of this lives in a demo file that's right next to the element. And it gets served through our custom dev server. So as you're developing your element, you can be testing on this demo page. And as you build out the implementation, things start to come to life. And when the whole page is working, you know you're done, and you're ready to start building those tests. So the last thing that we get from web components is division of labor. And what's nice is you can break out who works on which element based on their skill set. So if you've got an element that you say, well, we're really not sure how to build this thing out very well, then let's give that to our artists, because they're gonna be able to come up with an interesting solution for it. If you've got an element that's really complicated and you need someone who's gonna really put a lot of tests behind it, then give that one to the scientist. And then if you've got an element that you know exactly how you wanna build, but you just need it to be built on time and get it out at the right moment, give that to your craftsman. And then the great thing is you can always come back and have another style do the refactoring later. And so the scientist can come in and clean up the artist's code and add some more tests, or the artist could come in and say, oh, craftsman, you could have built this a little bit better. And so if you get that kind of interaction, it's great. So we hope this glimpse into how we build things will help you build great things with Polymer as well. Thanks for having us here, and have a great rest of the conference. So howdy, everybody. 
So before we go for a break, I have just a few announcements. So in 10 minutes, the code lab's gonna start with Valdrin Koshy. He has a really interesting code lab about animations and making them performant. Light snacks and coffee are right across the hall. And if there's anybody that you heard earlier today and you have any questions for them, they should be hanging out in the Ask Polymer lounge. And we will be back at 4 p.m. with Monica, Rob Dodson, and some more webpack and stuff like that. It's gonna be a lot of fun, guys. Have a great break. Hello, Polymer afternoon audience. Is everyone still awake? My name is Brendan. I'm on the tools team of Polymer, working on some fabulous tools such as the CLI and Bundler and Build and Polyserve and many, many tools. Our next guest is a developer advocate. Developers, developers, developers. No, you know him from Polycasts and the many good videos that he does. I had a special introduction to Rob when I was recovering from surgery. I watched about 9,000 videos of his as I was learning Polymer. So he's sort of the angel that ushered me into the future of web components. Here he is, Rob Dodson. How y'all doing? Good. So this talk is gonna be about my journey to try to get the whole world using custom elements and some of the interesting things that I've learned along the way. So I've been at Google for maybe a little over three years at this point, and during pretty much that whole time, I've worked on custom elements and web components. And it's my belief that if you're building a UI library and you're working at a mid to large size company, so you wanna build a whole bunch of components and share them with a bunch of teams that are all on different stacks, that custom elements and web components are really the way to go about doing that. And this story is basically what I've been advocating for all these years. And being the point person for custom elements and web components means I get a lot of feedback on that idea. 
And feedback that kind of looks like that. So this is a tweet from a fellow Googler who was trying to use web components with another framework. And he said, every few days I see someone like Rob say that web components work in all frameworks, all of our problems are solved and it's really clear that no one has tried any of the above. And you will notice that this is tweet 6B of a very long list of grievances and subordinate sub grievances that rolled up into the larger tweet storm. So very organized and very emblematic of a lot of the feedback that I've gotten over the years. And it kind of boils down to this. We say that custom elements should work everywhere, that they are based on web standards, that they are the future of the platform. So we say that and then like why don't they? And the more I thought about this, the more I started to wonder if like, perhaps we put the cart before the horse. And did we maybe get so caught up in the fact that we could build custom elements that we didn't stop and spend time to think about how we should build them in the first place. And if custom elements and the things that we build, if they are unpredictable or they are inconsistent, does that then make it harder for frameworks to work with them? And so that's really what I set out to find out. So I split this talk into two parts. The first is my journey just to identify what a quote unquote good custom element would look like. And then the second part is looking at how frameworks should work with those elements. And my hope is that by the end of this, you all will have a better understanding of how to author your components and also what to expect when you try to use those components with other frameworks or libraries. So let's dive into that first point. What is a good custom element? And to start this journey, I actually went to the Chrome engineer who was in charge of implementing custom elements APIs in the browser. 
And I said to him, I was like, hey, do you know if there are any reference custom elements that I could look at? Stuff which you think really lives up to the standard. My thinking there was that the people that write the specs and the people that actually implement the APIs in Chrome, surely they've gone and built a bunch of custom elements and they're like, these are our ideal components and all components should live up to these. His response was a bit surprising because he was like, nope, none that I've seen. And that really got me thinking. I was like, hmm, all right. So we need to identify the best practices for custom elements. And I'm not really sure how to do that, but maybe what we could do is assemble a crack team of engineers. And then together we could probe the depths of the HTML spec and uncover all the treasures that lay within. So that is exactly what we did. Working with my teammates, Surma, Monica, and Ewa, we began writing a set of vanilla custom elements. So not using Polymer or anything like that, just vanilla JavaScript custom elements. And we wanted to figure out how the heck these things should actually be authored. And along the way we learned a lot. And in fact, we're still learning a lot. But what I wanted to do is go through some of the stuff that we've tried to capture so far. And we've been doing this work inside of an element set which we call the how-to components. So the how-to components, these are a collection of educational custom elements. We're saying that this is sort of like literate code. So we want folks to actually look at the implementations, read them, look at the comments, and understand why we did the things that we did. Now I want to be really clear. These elements that I'm talking about here, these are not things that you should use in production. They're not even styled, really. These won't be going on webcomponents.org or anything like that. 
Instead, we wanted to create a resource where developers could read the source code kind of side by side with the comments and then learn from the elements and understand why we made certain decisions. And then take that knowledge and actually go apply it to the custom elements that they are building within their own company. I want to be also really clear that this is a work in progress project. Don't be surprised if you're looking at the repo, we start changing things around because basically as we learn new ideas and it happens all the time, we learn new ideas and we bring them to all the elements and we update them all at once. So again, it's not a production thing. It's not something you should actually use the code or anything. We just want you to read it, interpret it, and think about it. Because what we want to do is create a resource where we can all learn in the open and have discussions on GitHub about like, okay, well, why should we do this? Why should we not do this? So if you want to check these out, you actually can. The repo is available up on GitHub. Here's a direct link to it. And I'd like to walk through some of the best practices we learned as we were doing this. So I split this up into kind of three topics. We're going to try and cover. How do you deal with Shadow DOM? How do you handle attributes and properties? And how do you manage events in your elements? And I want to be clear that what I'm going to talk about today are guidelines. So these are not meant to be like rules or laws that you like have to follow. Because the web platform is not really consistent all the time, not by a long shot. And so you should always feel free when you're building your own element to color outside of the lines if you need to. If your element calls for it, right, you get to make that decision. But having said that, I do want to dive into some of these best practices and we'll start with Shadow DOM and some of the things that we learned. 
Now, the first question that comes to mind when you think about Shadow DOM and you're building a custom element, especially if you're doing it as a vanilla custom element and you're writing all the JavaScript yourself, you're not using a library or anything, is like, does my element even need Shadow DOM? And I have to admit, personally, over the last, maybe a year and a half, I kind of got a little cranky with Shadow DOM. I got a little frustrated with it at times. And I've got the team, the Polymer team, they're like, dude, you got to use Shadow DOM. You got to use Shadow DOM. When you're building custom elements, you always got to use it. And I'm like, hey, you know what? Like Shadow DOM, it can be kind of annoying to work with at times. It can be hard to re-theme. The polyfill can be a little wonky. And you know what? Maybe you just don't tell me what to do and I'll build my own elements, okay? And so, yeah, deal with it, boom. And so that is actually how we started this process. We were like, we're just going to build vanilla custom elements, we're not going to use Shadow DOM, we're just going to do our own thing. So the first element we created was a checkbox. Here it is, very simple, how to checkbox, right? Seems straightforward enough. It's a single tag. It doesn't have any children. I didn't see why something like this would need Shadow DOM. So instead, we were like, okay, cool. We have a JavaScript file for our element definition and then we have a CSS file for all of the styles, okay? But very quickly, we discovered that if someone takes your element and you're using a global style sheet like that, so they take your element and they put it inside of another element that uses Shadow DOM. Okay, well now all of your styles are broken because you don't have a way of getting your style sheet into that other element's scope. 
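To make that concrete, here's a sketch of the failure mode, using made-up element names. The document-level stylesheet reaches the top-level checkbox, but not the copy living inside another element's shadow tree:

```html
<!-- Global stylesheet approach: styles only apply at document level. -->
<link rel="stylesheet" href="howto-checkbox.css">

<!-- Styled: document-level CSS applies here. -->
<howto-checkbox></howto-checkbox>

<!-- Unstyled: if <fancy-form> puts a <howto-checkbox> inside its
     shadow root, the document-level stylesheet never reaches it,
     because styles don't cross the shadow boundary. -->
<fancy-form></fancy-form>
```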
I mean, maybe they can link out to your style sheet, or you could have a build tool or some build process inject your styles into their element or something. But it basically means that it's up to you and that developer to figure out how you're going to get your styles into their scope. So that was a little, I felt kind of not awesome about that, but whatever. And oh, I should point this out because this is actually really interesting. There is actually a discussion to see if we can streamline this a little bit. So there's a thread on the W3C GitHub where Steve Orvell, a member of the Polymer team, has proposed this notion: if you have a very simple custom element, like a single tag, something that doesn't have children or anything like that, could we maybe, as you register the element, also give it a style sheet object, and that would work almost like a user agent style sheet just for that tag. So this is a really cool proposal and something that I actually hope we land at some point. But today, this is not real. It's not shipping anywhere. And it made me realize that you probably need to create a shadow root if your element is going to self-apply any styles. And you might have a shadow root that only contains a style tag, and that's totally okay. The benefit is that your element starts to become a lot easier for people to reuse. They can use it outside of shadow DOM. They can use it inside of shadow DOM. Right, they can put it inside of more complex elements and it just works. Now as we were working through this first element, a conversation kind of popped up on my Twitter feed. And someone said, I'm kind of paraphrasing the original tweet here, but someone said, you know, since we have things like CSS-in-JS for scoping our styles, do we actually still need Shadow DOM? I mean, can I just use these fancy build tools? 
And it was actually Trey Shugart, who'll be speaking tomorrow, who responded on Twitter, and he said, you know, style scoping is a really cool benefit of Shadow DOM, but aside from style scoping, one of the other major benefits is DOM encapsulation. So if you're building an element that creates its own children as part of its implementation, you can actually hide those children, you can hide that implementation inside of that little DOM scoping bubble. And this is really crucial for framework interoperability. And so he shared an example that I wanna walk you all through because I think it's really interesting. So let's say we're building an element and we're building a counter. So it's just gonna say the word count and then have a number next to it, like count one, two, three, and so on, right? So I've got a little div here in my template with the word count, and then I'm gonna add a slot, because I'm gonna assume that the framework or library that I'm using my element with is gonna put the number in there for me. I'm gonna create two versions of this element. So the first one is count-with-shadow. So here I'm creating a shadow root, and I'm just appending my template content inside of the shadow root when my element is constructed. The other version that I'm gonna create is gonna be called count-without-shadow. So this time, instead of using shadow DOM at all, we're just going to stamp that template content into the light DOM. Now I'm cheating a little bit, because slot elements don't work outside of shadow roots, but whatever, go with it, go with it, pretend it does. So let's look at what happens when we try and use this inside of something like React. So in React, I've got my little render function here, and I'm going to try and stamp out both of those elements, and I'm just gonna have React pass in the count. So every tick it's just gonna pass a number as a child of each of these elements. One, two, three, and so on. So what do we get when this renders? 
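Reconstructed from that description (a sketch of the idea, not the actual slide code), the two versions might look like this:

```html
<template id="count-template">
  <div>Count: <slot></slot></div>
</template>

<script>
  const tmpl = document.querySelector('#count-template');

  // Version 1: the template is stamped into a shadow root, so the
  // element's generated children are hidden from the outside page.
  class CountWithShadow extends HTMLElement {
    constructor() {
      super();
      this.attachShadow({mode: 'open'})
          .appendChild(tmpl.content.cloneNode(true));
    }
  }
  customElements.define('count-with-shadow', CountWithShadow);

  // Version 2: the same content is stamped straight into the light
  // DOM, next to whatever children the framework manages. (As the
  // talk says, pretend <slot> works here; outside a shadow root it
  // really doesn't.)
  class CountWithoutShadow extends HTMLElement {
    connectedCallback() {
      this.appendChild(tmpl.content.cloneNode(true));
    }
  }
  customElements.define('count-without-shadow', CountWithoutShadow);
</script>
```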
Well, you might be a little surprised. The first element looks like we would expect it to, right? So you've got count one. The second element, though, looks totally broken, and I wanted to dive into that a little bit. So why is that? In this first implementation, all React can see inside of this element is just the number one which it put inside of there. Now we know there's a shadow root, and inside of that shadow root is the word count and our slot tag, but React isn't able to see any of that, because it's not piercing shadow boundaries or anything, and this is good. It means our implementation is hidden. If we look at this second element, though, it's kind of more interesting, because it seems like React created an element, and then put some text inside of it, and then attached it to the document, and then our connectedCallback ran and all our stuff stamped out after the content. Okay, so already that looks broken. Probably not what we want, but it gets a little bit worse too. So the next tick, React is gonna look at this and it's gonna be like, in my render function, you told me about the number, but you didn't tell me about this div thing here. Really, what is that? I don't know, you didn't tell me how to render any of that. So what's it gonna do? It's just gonna delete it, and so on the next pass it just throws away all the light DOM children, and now we just have the number two inside of that element. Now I did some tests and I found that other frameworks may leave your light DOM children in place, or they may not. There's really nothing to enforce that. It's just kind of a convention that they may or may not adopt, and so what we learned from this is: if you're authoring an element that creates children, you probably wanna put those children inside of a shadow root. That's because those children are part of your implementation, and the rest of the page should not need to know about them.
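To make the two versions concrete, here's a rough sketch of the counter pair as I understand it from the slides. The class and tag names are my guesses at what was shown, and the browser-only parts are guarded so the snippet also parses outside a browser; in a real page you'd drop the guard and likely use a real `<template>` element.

```javascript
// Sketch of the two counters from Trey's example. A string stands in
// for the <template> content to keep things short.
const counterMarkup = '<div>count <slot></slot></div>';

// Guard: HTMLElement/customElements only exist in a browser.
if (typeof HTMLElement !== 'undefined' && typeof customElements !== 'undefined') {
  class CountWithShadow extends HTMLElement {
    constructor() {
      super();
      // Implementation children live in the shadow root, invisible to
      // frameworks walking the light DOM: React only sees the "1" it rendered.
      this.attachShadow({ mode: 'open' }).innerHTML = counterMarkup;
    }
  }

  class CountWithoutShadow extends HTMLElement {
    connectedCallback() {
      // Stamping into light DOM: the framework now sees children it never
      // rendered, and may delete them on its next pass.
      this.innerHTML = counterMarkup.replace('<slot></slot>', this.innerHTML);
    }
  }

  customElements.define('count-with-shadow', CountWithShadow);
  customElements.define('count-without-shadow', CountWithoutShadow);
}
```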
So after all of this thinking, going back to the original question: does your element need Shadow DOM? I think it actually probably does. And even though it can be a little frustrating, or it can make you kind of cranky to work with it at times, Shadow DOM is really important, because without Shadow DOM you lose all guarantees of safety. You know, if you wanna put your element inside of another element, or you wanna put your element inside of another framework where there are other JavaScript actors at play, Shadow DOM kind of protects your implementation from the outside world. It's the little engine that makes the framework interoperability story go. Okay, so that takes care of Shadow DOM; quick recap there: if you're self-applying styles, you probably want a shadow root, and any children that you create should go inside of your shadow root. Let's talk about attributes and properties. Basically, how does your element handle data and reflect state to the outside world? This might seem like minutiae, this might seem like a very boring topic to talk about, but when it comes to framework interoperability bugs and issues, this area is actually where I found the most problems and the most differences across the board. And this is a really contentious topic, actually. So even the Chrome team and the people that work on specs and the people that work on Polymer have different ideas about how elements should handle attributes and properties. And I wanted to try and do my best to get to the bottom of this and just put out some semblance of best practices. So when we think about attributes and properties, what would we consider canonical behavior? And where would I even look to figure that out? Well, there's HTML, right? HTML has a spec, and if you read the HTML spec, it explains how attributes and properties should work on elements. And so it's like, okay, cool.
Well, I guess we'll read the HTML spec and we'll just do everything it says. I don't know how many of y'all spend much time reading the HTML spec and really dive in there. It's actually pretty gnarly the deeper you go. If you dig down deep enough, you'll find inconsistencies, you'll find anachronisms, you'll find stuff that seems to predate the current era of standardization; older elements like input are just super weird. So it can make it kind of hard to distill out what one would consider the best practices. But for all of its faults, it's also kind of the only model that we really have to follow. So if you can make a custom element that's mostly indistinguishable from the behavior that built-in native tags have, I think there's a really good chance that frameworks will be able to work well with your component. So here are some of the best practices that we came up with. The first is that for primitive data, so strings, numbers, booleans, you wanna accept that as either attributes or properties, ideally both. So someone should be able to just walk up to your element and call setAttribute for a particular thing, or set the corresponding property, and both of those should just work. And ideally, you want to reflect back and forth between your attribute and your property. So let me give you an example of that from a native element. If we look at the native video tag, I want you to see this behavior because it's actually pretty interesting. This element has a preload property. It also has a preload attribute, right, they correspond. I can go up to this element and I can actually query it and I can say, all right, what is your preload property value? And it'll say by default it's auto. And I can set that property to none, and you'll see it sprouts an attribute when I do that. And I can then read that property value again, and it's actually reading it off of the attribute.
And I was like, okay, cool, that's interesting. Can we mimic the behavior of built-in elements with our own elements? My understanding from talking to spec authors is that the way this behavior is implemented in native HTML is that there are getters and setters for all the properties, and the getters and setters are actually really dumb. The only thing they really do is just reflect back and forth to the attribute. So I'm gonna make a fake element here called custom-video, and in my getter, I'm just going to try and get the value from the attribute. If there is one, I'll return it. If there's not, I will return a default value, right? So this is that default auto string that we saw. This is kind of interesting because you're co-locating the default value with the getter and setter. In the setter, all we do is take the value that was passed in and reflect it to the attribute. And because the getter is leveraging the attribute, we have now synced our properties and attributes. So you change one, it changes the other, right? And that's pretty interesting. We don't have to write any additional code. You don't have to have a little underscored property that you're managing under the hood, and all this private state. Instead, these two worlds are just in harmony now. So I thought that was really interesting. And you can see this in action with the custom-video element that I created here. So you can go and look at it, and it's basically the same behavior. We can read the preload property, and we get the default value out of that getter. We set it, we sprout an attribute, and then we read the property again, and it's reading it off of that attribute. So for primitive data, I feel like this behavior actually works quite well. And it makes things very consistent. So someone can fiddle around with an element and still call getAttribute on it and get the right data.
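Here's the gist of that getter/setter pattern. This is a sketch modeled outside the browser: the little attribute map stands in for the DOM attribute store that a real HTMLElement subclass gets for free, and the element name is the fake custom-video from the slides.

```javascript
// Sketch of "dumb" reflecting accessors, as described in the talk.
// In a real custom element, get/set/hasAttribute come from HTMLElement;
// the tiny Map here just makes the sketch runnable anywhere.
class CustomVideo {
  constructor() { this._attrs = new Map(); } // stand-in for the DOM attribute store
  getAttribute(name) { return this._attrs.has(name) ? this._attrs.get(name) : null; }
  setAttribute(name, value) { this._attrs.set(name, String(value)); }
  hasAttribute(name) { return this._attrs.has(name); }

  // Getter reads straight off the attribute; the default is co-located here.
  get preload() {
    return this.hasAttribute('preload') ? this.getAttribute('preload') : 'auto';
  }
  // Setter just reflects to the attribute: no private _preload field needed.
  set preload(value) {
    this.setAttribute('preload', value);
  }
}

const video = new CustomVideo();
video.preload;                 // 'auto': the default from the getter
video.preload = 'none';        // sprouts the attribute
video.getAttribute('preload'); // 'none'
video.preload;                 // 'none': read back off the attribute
```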
There are exceptions to this rule. So you might not want to reflect properties that change at really high frequency. For instance, the native video tag also has a currentTime property, and basically every millisecond it's updating the current time for the video. Reflecting something like that, you probably don't wanna do. Again, in the HTML spec there are inconsistencies and quirks, and there are reasons to color outside the lines. But generally speaking, reflecting primitive attributes and properties seems like a good thing. And it can simplify your mental model as well, right? It can help you understand how to write your component if you just do this consistently. So that's good for primitive data. What about rich data, like objects and arrays? I think this is really interesting, because for rich data, you probably only wanna accept that as properties on your element. And there are reasons for this. Oh, also, you probably don't wanna reflect rich data from properties back to attributes. And the reason is because, one, it's expensive to reflect, right? Stringifying a big object that someone passed in as a property and reflecting that just doesn't seem very useful. Polymer actually used to do this; back in Polymer 0.5, we reflected everything. And we stopped, because people were sending these massive JSON objects down, and then we would stringify it all and reflect it for very little gain. We were just spending a bunch of time parsing and unparsing JavaScript. The other reason I think you should avoid this, though, is because a serialized object loses all of its identity. This is something that my teammate Justin pointed out, and it's pretty interesting. So if you take an object, and that object has references to other objects, and then you call JSON.stringify on that, well, you've just broken all of those references; they don't work anymore.
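You can see that identity loss in a few lines:

```javascript
// Two keys pointing at the same object share identity in memory...
const shared = { count: 1 };
const data = { a: shared, b: shared };
console.log(data.a === data.b); // true: one object, two references

// ...but a round trip through a string forks them.
const revived = JSON.parse(JSON.stringify(data));
console.log(revived.a === revived.b); // false: identity is gone
revived.a.count = 99;                 // no longer touches revived.b
console.log(revived.b.count);         // 1
```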
So you pass that to some element, and maybe the person is expecting mutating that object to mutate some of the original sub-properties, and that just doesn't work, right? So if you just stick to properties, you kind of avoid all of this weird string bizarroness. So definitely recommend that. All right, so for attributes and properties, a quick recap: for primitive data, you want it as either attribute or property, ideally both, and you want them to reflect. And then for rich data, objects and arrays, I think your element should really just accept those as properties, and that'll make your life a lot simpler. Last thing, and I'm gonna be quick about this, is events. When should an element dispatch events? Now this seems like kind of a weird question to ask, but when you really think about it, it's kind of interesting, because if you look at native built-in elements, they do not seem to dispatch events in response to a host setting a property or anything like that. And I think there's a good reason for this. It's kind of superfluous, right? You set a property and suddenly your element dispatches an event, like, well, you don't need to tell me the value, I just set it, right? So it's kind of weird to dispatch an event then. And if you're not careful, you might end up causing an infinite loop. So you tell the host, hey, this property changed, and if you've got a unidirectional app set up, like you're using React and Redux or something like that, the host is like, I don't care that the property changed, my model says it's this, and it sets the property, and the element says, hey, the property changed, and the host is like, no, and then you end up in an infinite loop, right? So you gotta be careful about that. So for these reasons, I recommend not dispatching events in response to downward data flow, so when the host sets a property. And this actually differs from how Polymer does things.
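A quick sketch of that recommendation, using a hypothetical loader element. It's built on a bare EventTarget and a plain Event so it runs outside the browser; a real component would extend HTMLElement and likely use CustomEvent.

```javascript
class LazyLoader extends EventTarget {
  // Downward data flow: the host set this, so it already knows the value.
  // No event is dispatched here.
  set src(value) {
    this._src = value;
  }

  // Internal activity: work the host can't see finished, so we clue it in.
  finishLoad() {
    const event = new Event('load-complete');
    event.detail = `loaded:${this._src}`; // CustomEvent's detail, attached by hand
    this.dispatchEvent(event);
  }
}

const loader = new LazyLoader();
loader.addEventListener('load-complete', (e) => console.log(e.detail));
loader.src = 'data.json'; // silence: setting a property is not an event
loader.finishLoad();      // logs 'loaded:data.json'
```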
So Polymer is gonna fire events whenever you change properties, and this is actually what powers Polymer's data binding system. But Polymer does the additional work to guard against firing an event if you set the property to the same value twice, so it's okay; you don't end up in an infinite loop. But if you're writing just a vanilla custom element, then I don't think there's a ton of value in firing property change events. So that leaves the next question, which is: when should I dispatch events from a custom element? And here I think you only wanna dispatch events in response to internal element activity. So what is internal element activity? Well, this could be things like a user interacting with your control, an asynchronous task finishing, like something loading, or an animation completing. Basically, anytime the component knows something changed and the host does not, and we need to clue the host in, we need to tell the rest of the app, hey, something new has happened: this is a good time to dispatch an event. And this kind of mirrors what native built-in elements do. So last recap, really quick, on the events. Don't dispatch them in response to downward data flow. Do dispatch them when your component has special private knowledge the rest of the app does not have. I realize I went through all of these super, super fast. And that's because we're short on time, and this is by no means an exhaustive list. Many of these are things that Polymer actually does for you already. So if you're using Polymer, it's awesome, right? You don't even have to worry about this. But I still wanted to go through the process of documenting all of these things and just trying to figure out how stuff should work. So we've been collecting all of these best practices that we've learned while building the how-to components, and we've actually created a new section on developers.google.com/web all about building components.
So this is available today. It basically takes all of our existing material around web components and custom elements and organizes it better, so we've got API primers, a checklist of custom element best practices, and a few example how-to components are up there as well, demonstrating how to actually implement those best practices. So you can check this out. Here's a direct link to it. And now that we've looked at custom elements, I wanna switch gears, and this is kind of where I think the talk gets a little bit more fun, because I wanna look at the other side of things and talk about how frameworks should work with custom elements. And this is a really tricky problem to solve, because as you all know, there are as many JavaScript frameworks as there are stars in the Milky Way galaxy. This is not anything I have to tell you, this is science, it's a known fact, right? But I thought what we could do is take just a subset of the most popular frameworks, write some automated tests for them, and then publish those results to the web, so we could track what does work and what doesn't work, and hopefully learn from one another. And so I'm really excited to announce another project that I've been working on. This is a new site called Custom Elements Everywhere: making sure frameworks and custom elements can be best friends forever. The URL for this is customelementseverywhere.com, so you can play around with this and check it out. But you gotta keep paying attention to this talk too. And I'm gonna give you a quick run-through of how this works. So every framework on the site has a little section, and in that section I indicate how many tests they're passing. There's a little write-up to explain any quirks or gotchas or anything weird like that.
I've also gone through and tried to track down every web-components-related GitHub issue for all these different libraries, so you can keep tabs on everything in one place. And of course you can go down and click this little button and view all of the tests for the framework. You can see, all right, what passes, what doesn't pass, so you can have a better expectation of: if I'm gonna use this custom element in Preact or Angular or wherever, this is what is and is not going to work. Oh yeah, this is where I wave my arms around and apologize. All right, if a framework is not represented here, it is not because I think that framework is unimportant or not awesome or anything. I think all of them are awesome. It's just really hard to write a webpack config for every framework. I don't know, have you all tried that? It is not fun, okay. So I have done my best here, but I would love help from the community to add more libraries to this site. I really want everyone to feel like the tools that they use and that they enjoy are fairly represented on here. The other thing that I wanna point out is there are some known unknowns to this process. Like, have I rigorously tested every feature of every framework and every permutation of custom elements? No, not at all, okay. I don't even know all the features of all the frameworks, but I did wanna have a starting point, just to help some of these things shake out. So why don't we check out how we did? I'm gonna go through these scores in alphabetical order, starting with A for Angular, and Angular actually got a 30 out of 30 on the tests that I wrote. I was not able to write any tests that Angular failed. I'm talking about new Angular, not old Angular, new Angular, okay. And by the way, like I said, known unknowns, okay.
So if some of you are using new Angular and you're running into issues, please open a PR, open an issue on GitHub, and help me track those down, write some tests. But of the tests that I've written so far, Angular passes all of them. Preact comes in at 24 out of 30, so pretty good, about 80%. React, 16 out of 30, so 53%, and we'll talk about why that is. And then finally, Vue, with a 30 out of 30 again. So they're getting 100%. One thing that I think is important to point out is that so far I have not encountered any major issues related to Shadow DOM, so long as the custom element is following the best practices that I talked about before. In the past, this has been an area where things totally did not work, in particular with the polyfills. But these tests run in native Shadow DOM and against the polyfill. And so far, I haven't had any Shadow DOM issues. So fingers crossed, maybe we're in a better place today. That would be really exciting. Let me talk about the two areas where we did actually encounter issues, though. And that was mainly around handling data, so attributes and properties, and dealing with events. The tests for attributes and properties mainly check that you can pass data to an element declaratively, so using a framework's binding syntax, I can get the data into my element like I need to. And I'm kind of generalizing here, but I think roughly speaking there are two ways you can think of doing this in framework land. There is what I'm calling the manual approach, where the framework has explicit syntax that you can use to tell it, hey, either set a property on this element or set an attribute on this element, but it's up to the developer to make that decision. And then there's the more automated approach, where there's really only one binding syntax, and the framework has a heuristic that it uses at runtime to try and figure out how it should pass data to an element.
So I'm gonna go through each of these and indicate which framework is which. Again, we'll start with Angular. This is the binding syntax for Angular; this is how you pass data to any element or component. It doesn't matter if it's an Angular component or a custom element or whatever, this is what you use. And basically what these square brackets mean is they're telling Angular, set the foo property equal to bar, where bar is some value inside of your Angular component, okay? In other words, myElement.foo = bar. So not only can you pass primitive data this way, but you can also pass rich data this way, so objects and arrays, right? And that's great. Like we talked about before in the best practices, that's how we want to pass rich data. So this works really well. If you need to explicitly set an attribute for any reason, you can do that as well. You can add the attr modifier to your binding, and this tells Angular to explicitly call setAttribute on the element, right? So you've got total options there. Angular falls into the manual bucket. You can kind of do whatever you want using their syntax. Vue, on the other hand, is similar, but they sort of do the inverse of what Angular does. So for a component that Vue creates itself, like a component just written in Vue.js, I believe by default it will pass data as properties, but when it encounters a custom element, it will pass data as attributes. So that's like calling setAttribute. But because Vue also falls into the manual bucket along with Angular, they have a .prop modifier you can use in their syntax, and you can tell it to explicitly pass a property to an element, which is good. Again, now we can pass objects and arrays to elements. Cool. React, on the other hand, falls into what I'm calling the automated bucket. So it has a heuristic that it uses to try to decide how to pass data to an element.
Currently, when it encounters a custom element, it will always pass data as attributes. For primitive data, like a string or a number or a boolean, maybe this is fine. But when we get to rich data, this becomes a problem, because when you call setAttribute with an object in React, you end up with something like [object Object] in your markup. And that's not very useful. I can't do anything with that. So in React, there is not a good declarative way to pass rich data to a custom element. You can work around this: you can grab a reference to the element, and in your render function you can manually set that property on it. But because there's not a declarative way to do it, React fails a few tests there. Again, this is more of a hiccup than a total showstopper. You still can use custom elements with React; you've just got to know about this gotcha. What's interesting is the React team is actually discussing switching this behavior to always setting properties when they encounter a custom element. So this is an RFC for React 16. I think it would be awesome if they did this, and you can check out this GitHub issue to follow the discussion. Finally, Preact. Now, Preact is interesting because Preact, like React, uses JSX, but it uses a different heuristic. So when it encounters a custom element, it will use the property on the custom element if it is defined, if it is available. Otherwise, it'll fall back to using an attribute. So the way this works in Preact goes like this: it just does an in check. So it'll say, if foo in myElement, use the property, and if it's not, okay, well, we'll fall back to the attribute then. We'll treat it like configuration. This is pretty cool, and it actually works pretty well. As a result, Preact passes all the properties and attributes tests that I've written so far.
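That heuristic boils down to something like this. The helper and the stand-in element are mine for illustration, not Preact's actual internals:

```javascript
// The "in" check described above, in miniature: use the property if the
// element defines one, otherwise fall back to an attribute.
function applyBinding(el, name, value) {
  if (name in el) {
    el[name] = value; // rich data (objects, arrays) arrives intact
  } else {
    el.setAttribute(name, String(value)); // unknown names fall back to attributes
  }
}

// A stand-in element: it has an `items` property but no `label` property.
const fakeElement = {
  items: null,
  attrs: {},
  setAttribute(name, value) { this.attrs[name] = value; },
};

applyBinding(fakeElement, 'items', [1, 2, 3]); // takes the property path
applyBinding(fakeElement, 'label', 'hello');   // takes the attribute path
```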
They also have a pull request open so that if the element it's working with is not upgraded yet and it's passing rich data, like an object or an array, it'll use a property then as well. Lastly, let's talk about events. So the tests here check that you can declaratively add an event listener for the DOM events dispatched by custom elements. The way this works in Angular is, anything that you put in the parentheses there is the event name that you are telling it to listen to. The value then is the handler to run. So this is like saying, myElement, addEventListener, foo-changed, onFoo, okay? Now the cool thing is we can use basically any event name that we want inside of those parentheses. So we can use lowercase; we can use caps case, which is good for things like URLChanged or DOMReady, right? Think of acronyms and things like that. That'll work. Camel case, kebab case, Pascal case, my personal favorite, asshole case. Do not actually, no, no, no, no, no, no, no, no, no. Okay. Vue, on the other hand, has basically the same behavior as Angular. So basically anything that you put after the v-on directive, it doesn't matter what case it's in, it'll work in Vue. So Vue passes all the tests here. React is a little tricky, because React does not have declarative syntax for listening to DOM events. React implements its own synthetic event system that sits on top of the DOM event system. For native elements, they have a whitelist of events that they know to listen for, but that doesn't really work for custom elements, because we could dispatch infinitely many different event names, right? So unfortunately, this looks really tempting, but it does not work. Bummer. Again, like the attributes and properties thing before, we can work around this. You can just grab a reference to the element inside of your render callback, and then using something like componentDidMount, you can manually add an event listener.
So again, a hiccup, not a showstopper, but just something to be aware of. And they do have a GitHub issue to bypass their synthetic event system for custom elements. I think it would be really, really awesome if they did this, but it's still being discussed. Finally, we get to Preact. So the exciting thing about Preact is that it uses native DOM events. So this right here totally works. Mostly. It mostly works. There are some gotchas. The only problem here is Preact will take everything after the word on and call toLowerCase on it. So if your event was actually named FooChanged, with those capital letters, it would lowercase it to foochanged, all lowercase, and maybe never hear your event, right? So I realize you're all now thinking the same thing, which is that it means it doesn't support asshole case. And I know, major WTF. Anyway, I showed this to Jason Miller, who's the creator of Preact, and he created a pull request to add support for asshole case. I think Jason prefers to call it mixed case events, because he's Canadian, very polite, but we all know what he means. So with that, I think we're nearing the end, and I just wanna recap some of the things that we learned along the way. So regarding custom elements, I don't think there are really any rules which dictate how you must write a custom element, but I do think there are best practices that we can start to follow, which will make it easier for us to write elements consistently, and easier for other developers to then consume those elements that we create. And when it comes to frameworks, the good news is that most of what seems broken today is actually very easy to fix, and there are already issues open for most of it.
And if we can generally agree on how custom elements should behave, and then in turn how frameworks should communicate with them based on those assumptions, then I think we're actually pretty close to this cool framework interop utopia land, right? Seeing that we now have libraries which get basically perfect scores on the tests that I've been able to write is really, really encouraging. I wanna thank all of these awesome folks: the folks who worked on the how-to components with me, everybody who took the time to review the Custom Elements Everywhere site and the tests, and also just hash out these best practices with me in docs and threads and everything. These are all very, very nice people who were kind enough to share their time with me, and I really, really appreciate that. I also wanna thank all of you for spending the time to listen to me today. I'm really, really excited to see what you all build, and I hope you enjoy the rest of Polymer Summit. Thanks. Awesome job. I wanna see a bumper sticker that says coexist, but with the JavaScript framework logos. I think Rob's on a good path there. Our next guest I met through a bunch of GitHub issues he was filing on our tools, and I've spent the last year trying to get our stuff good enough for him to use. But I think this next talk is gonna show how some of that work is irrelevant, and I'm really looking forward to it. This is Chad Killingsworth. Thanks. So I work for a company called Jack Henry and Associates, which probably none of you have ever heard of. But we write US banking software. So why am I talking to you today? Well, in that role, I get to contribute to a lot of open source projects, Polymer being one, another being Google's Closure Compiler. But today, I wanna talk to you about a new project, and that's using Polymer with Webpack. Now I can tell from the amount of questions I've already had that there's a lot of interest in this subject.
So hopefully I can give you all the detail you need to actually use this today. Before we dive deep into the internals of how Polymer and Webpack can operate together, I wanna talk a little bit about why this is important and how we got here. So my team has been building JavaScript applications for a long time, a number of years. But lately we've become pretty disenchanted with modern monolithic JavaScript frameworks. What we found is, over time, even though we tried to guard against it, almost every file in our app became tightly bound to the framework. So if we started up a new project, we didn't have a good set of choices. We were either making the same framework choice we had already made, which kills innovation, or we were rewriting and duplicating code we had already written, which is never fun. The other thing we found is modern JavaScript frameworks tend to have a much shorter lifespan than the products we were developing. So once they became deprecated, we were either forced to upgrade or basically maintain old code, and no one likes that. So the idea with Polymer and the web component standards is that these are baked into the platform. And as we all know, there's a rule out there: you can't break old code in the platform. So if we're using platform-based code, we're guaranteed backwards compatibility. Great, this is perfect. This gives us our long lifespan. In addition, Polymer as a library was just really light syntactic sugar over the standards. And in fact, they went out of their way to not reimplement, or even make easier in some cases, native APIs, which we loved. While it might require us to write more code locally, that code was closer to the platform, thus producing our longer lifespan. So with the promise of Polymer 1.0, I dove into it fully with my team and helped develop this rich component library of UI elements that we could share among all our projects.
The idea being that as each project spun up, we could reuse the elements we had already built, just like Rob so aptly demonstrated and talked about, but not be locked into this huge framework. So we did it. And in most cases, things worked great at the element layer. When we put a Polymer element into an old Angular project, it worked. Now there were caveats. Angular couldn't see into Shadow DOM, but that was intentional; we expected that. But once we got beyond, hey, this actually works, to how do we build it, life was not nearly as fun. The really interesting thing came down to Polymer being mixed content of HTML, CSS, and JavaScript. Now, like most teams, we'd been through the full gamut of front-end build systems. We started with Grunt, we moved on to Gulp, and we even experimented with just using npm scripts and rewriting Gulp ourselves. And truth be told, today we have projects that use each of these. It didn't matter which of those build systems we chose, integrating Polymer was not fun. One of the problems we encountered is that Polymer used HTML imports. Everything else out there uses modules of some sort: CommonJS, ES modules, AMD modules, it didn't matter, they used a module-based syntax. So those dependency trees, there was just no good way to mesh them. What we ended up having to do felt pretty icky, actually. We had to create a synthetic Polymer import element, add any component we might want to use anywhere in our app to that import, and then reference it over in the JavaScript. But when we switched things around or refactored, we had to remember: oh, now I no longer use this, I need to remove it from my HTML import over here. And be careful, because I need to make sure I really am not using it anymore. What happens is, on larger projects, we ended up writing custom tooling to add checks that those two didn't get out of sync. Again, not a lot of fun.
What we found ourselves struggling with was just the sheer weight of the build system code. Over and over in my team's retrospectives, what kept coming up was: there has got to be a better way. I don't like writing all this build system code. The other clue that there was a problem that needed fixing was that the build system itself started requiring substantial documentation. But the code I set out to write was not a build system. I set out to write a product for my company. Enter Webpack. So let me first be clear: Webpack is a build system, and I'm talking about solving build system problems with a new build system. It's okay, it's totally worth it. I know you've heard that a hundred times. Give me a chance to prove that this is the case. So Webpack is a little bit opinionated, but it tends to be opinionated in ways that help you as a developer instead of getting in your way. For instance, there's a lot of tedious tasks that we work on in build systems that Webpack just bakes in. Calculating the hash of a built file so that you can add it to the name for cache busting? There are 14 bajillion plugins that do that in Grunt and Gulp and name-your-build-system-here. Webpack has it included. Don't even mess with it. Just use the naming convention and away you go. One of the other big benefits of Webpack is that it comes with the Webpack dev server, which is a local development server with an extremely stable file watcher. And I mention that because if you've tried to roll your own dev server, you know how easy it is for an exception in your build process to crash your dev server. That doesn't happen with Webpack. One of the other defining points about Webpack is the dependency graph. This is nothing new. Lots of build systems have a dependency graph where they crawl your site and find all the used code.
The difference with Webpack is that it's all your code, not just your JavaScript: your CSS, your fonts, your images, all your static assets also get added to the build graph and copied to the distribution folder. That means when you stop using them in your code, they don't get copied to the distribution folder. This is magic, it just works, and it's something I never wanna deal with again. There's one caveat with Webpack, though. It only handles JavaScript modules. So how can we add static resources to our dependency graph when they're not JavaScript modules? That's the job of a Webpack loader. A Webpack loader takes one input file, and its sole goal is to transform it into a format the rest of Webpack can handle: a JavaScript module of some sort. This makes the most sense when we start out with ECMAScript modules. So maybe we're writing the latest and greatest stage-two-proposal ECMAScript and we just wanna transpile it down to ES5 for delivery. Hey, the Babel loader does that. JavaScript to JavaScript, no problem. The next part comes with TypeScript. That's a little bit different; in a lot of ways I'm in a different language. Well, the TypeScript loader simply does what the command-line TypeScript compiler does and converts it down to a JavaScript module. Still okay. Now it starts getting interesting. What about SCSS files? There I might need two loaders, and you can chain loaders in Webpack. So we're gonna pipe the file through a Sass loader, which is gonna create a CSS file, and then we're gonna pipe it through the CSS loader, which will make a style module: JavaScript that can dynamically add styles at runtime. Pretty cool. Webpack just handles that. After we've got our loaders set up, the next thing we deal with in Webpack is plugins. And plugins can deal with any part of the rest of the compilation lifecycle. So the easy thing, the natural thing everyone thinks about, is something like minification: UglifyJS or whatever other minifier you wanna use.
I tend to prefer Closure Compiler. It's not for everyone. But in that case, that's only a small piece of the puzzle. Webpack has plugins to do framework-specific things. Webpack has plugins to handle licensing. Webpack has plugins to do internationalization, to optimize modules so they don't get duplicated or bundled in incorrect ways, on and on. Keep in mind, a plugin works a little differently than a loader, because it works across multiple files, where a loader is concerned with a single file. One of the other strong benefits of Webpack is how easy it makes code splitting and lazy loading. Again, these are not new concepts, but they've been historically painful in most build systems. You as a developer have to maintain one set of files describing how your inputs map to your output files, and your code has to stay in sync with that so the imports actually work. Webpack does away with all of that. Instead, it looks at how you wrote your JavaScript and builds the modules for you. It also adds the loader for the runtime work, so you don't have to do that either. You don't need a separate module loader; Webpack adds it as part of your build. Let's take an example. In this case, we have a main JavaScript bundle where the whole app is a single bundle. But let's say that last section, aptly called huge, is rarely used. We want to split that off and lazy load it to make our initial payload smaller. All we have to do is change how we import the module. So instead of using a standard import statement, which is static, we change to an asynchronous import statement, and don't worry, we'll talk about a couple of different formats for this. This is the new dynamic import statement for ECMAScript. It's promise-based. So we're importing the same module, just in a different way. Webpack recognizes this, automatically splits the huge.js code out to its own file, and adds the necessary runtime code to wire it up for me.
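The change he describes can be sketched like this (the path and function names are hypothetical stand-ins for the "huge" section):

```javascript
// Before: huge.js is statically bundled into the main chunk.
//   import { renderHuge } from './huge.js';
//   renderHuge();

// After: a promise-based dynamic import() marks a split point, so
// webpack emits huge.js as its own chunk, fetched only when this
// function actually runs.
function lazyLoadHuge() {
  return import('./huge.js').then((mod) => mod.renderHuge());
}
```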
By the way, I keep saying Webpack injects runtime code. That's typically pretty scary to anyone who really likes to know what's going on in their JavaScript. The runtime code for this, uncompressed, is about 150 lines. So don't feel like there's a lot going on here. It pretty much stays out of your way, does the bare minimum, and does it well. So with Polymer, we're obviously gonna need a loader. Again, the tricky part here is that a Polymer element is mixed content. That's the problem with build tooling in the first place. So the Polymer Webpack loader recognizes the types of content in your file and processes each one a little bit differently. The first thing it does is go through and look for any HTML imports. Instead of relying on HTML imports, it simply changes these to JavaScript import statements. Great, that one's easy. The next one's a bit trickier: what to do with the dom-module and the template? In this case, we just make a big string out of it and call a custom runtime function which registers it with Polymer's DOM module loader at runtime. So it just adds it in. And the last part is the script tags. External script references also become module import statements, and inline script tags just become the module body. So by splitting up and handling all three kinds of content, what we end up with is an all-JavaScript bundle. One of the side effects of doing this, wait for it: you're no longer using HTML imports at all. So now you're not relying on a spec that's not gonna be implemented outside Chrome, and you've got a JavaScript bundle which the rest of your tooling is free to optimize in the way it sees fit. One sticky point for all Polymer developers has been: I just wanna import from node modules and use it in my element. With Webpack, you can. You don't have to do anything, it just works. You're already in a JavaScript module.
One little note here, and I'm gonna talk about this multiple times: the script element has type="module". This is a clue to everyone who looks at your code that you're opting into module syntax, and that's a little bit different than normal. You may have code that's not all using NPM yet if you're using Polymer. So you can also configure Webpack to do the same type of lookup from your Bower components. All right, so we talked about adding static files to the graph. What about images? Well, typically with HTML files in Webpack you would use the HTML loader. It would scan your file, look for any image sources, and add them to the dependency graph. Now, adding an image to a JavaScript module dependency graph may seem a little bit odd, but here Webpack again really shines. You typically use a file loader for images, and that loader's job is simply to copy the file, using the naming conventions I specify, to the distribution folder; the JavaScript module part of it is just the path name it copied to. So then I have a valid JavaScript module which in this case just returns the image path, and everything just works. The trick is, we've already done all this manipulation on our Polymer elements, so we can't use the normal Webpack loaders to handle this. The Polymer Webpack loader does it for you. It internally passes your HTML content off to the HTML loader and lets it do the same thing, and it does a similar set of steps with your styles. Both of these can also minify as they go, giving you more bang for the buck. So there's a few differences in how you write your code when you're using Webpack. One thing to note is that in Polymer, to lazy load an element, we typically use Polymer.importHref. That uses HTML imports, so that's not going to work. Instead, we need to use JavaScript asynchronous imports. And like I referenced, there's a couple of different varieties.
The ECMAScript dynamic import statement, which you can hear more about tomorrow in Sam's talk, is the easiest and recommended way to do this. It's promise-based, so you put in the path to your component and then say .then and use the results. But perhaps you're not there yet and you'd like to use CommonJS. This has actually been supported for a long time; people just don't know about it. There's a CommonJS call, require.ensure, which takes a callback and does a very similar thing. Its job is to asynchronously load a module and then call the callback when it's ready. So we can use that in Webpack today, and then we've got our dynamic import. Both of these import statements work in Webpack and will be treated as a split point for code splitting. Now, you'll notice in my paths that I'm importing HTML files. Don't let that trip you up. At this point, those files have been packaged as a JavaScript bundle, so it's totally appropriate to import them using a JavaScript function. The Webpack config itself is kind of interesting to look at. I think of it as the best of both worlds between Grunt's configuration file and Gulp's programmability. You don't have to do nearly as much work and it doesn't get nearly as out of control as Grunt does, but it's a lot more declarative than Gulp ever was. So like most systems, Webpack is gonna start from an entry point or a set of entry points. With Polymer, your entry point is probably going to be an HTML element, and that's just fine. The loader will do its job and everything will just work. But like I said earlier, we probably also wanna tell Webpack where to resolve named modules from. By default, it's already gonna look in node_modules, but we might want Bower components too. So we just override the module lookup algorithm and specify two folders. It will now look in both, and I don't even have to worry about it. Now we need to configure our loaders. Loaders are just a set of rules.
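Side by side, the two lazy-loading forms he mentions (the path is hypothetical; both are treated as split points by webpack, and `require.ensure` only exists when webpack provides it):

```javascript
// ECMAScript dynamic import — promise-based (recommended):
function loadViewDynamic() {
  return import('./my-view.html');
}

// CommonJS require.ensure — callback-based, supported by webpack
// for a long time:
function loadViewEnsure(done) {
  require.ensure(['./my-view.html'], (require) => {
    done(require('./my-view.html'));
  });
}
```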
Each rule has a test that helps filter down which files it needs to look at. Normally the test is just a regular expression on the file extension. Following that is a use block with the list of loaders I wanna run through. One note on this: it runs last to first. So in this case, we're running the Polymer Webpack loader to create a JavaScript bundle and then running Babel to transpile that bundle to ES5. And last, you can use include and exclude definitions to further restrict where it looks. In this case, I'm restricting it to my source components folder so that the rest of the HTML in my project doesn't get treated as a Polymer component. Now, remember: everything in Webpack is a JavaScript module. Modules do not have the same semantics as scripts. Probably the biggest difference is that as soon as you use a module, you're no longer in the global scope. The easiest way to address this with Webpack for your own code is simply to declare your elements on the window object, thus forcing them to be global. Now everything works again. You only need to do this if you need to reference the class constructor or the class itself somewhere else. If you just need to define the element, you don't have to do this. But if you do need to reference it somewhere else, you can declare it as a property on the window object. Or perhaps you'd like to use a more modern method: you can also use module importing and exporting to do this. Again, notice the script type="module". The import and export keywords are not valid in a browser unless you're already in a module, so they won't work in a plain script tag. By adding type="module", I'm telling the browser that I'm in a module. Now, one note: Webpack doesn't care one way or the other, you're going to be in a module regardless. This is just an indication to anyone else looking at your element of what's going on.
So now that I have script type="module", I can just say export default class, whatever, and that's now exported. Other things can now import it. One little note here: you can't actually import from an inline script tag. So while this is valid syntax, it doesn't really make much sense unless you're using Webpack, where that block of code becomes my module body; in a browser, I most certainly can't import from it. So if you're trying to live in two worlds at once, this might be a good way to handle it. But a lot of the code we deal with isn't our own. It's library code, and in that case we can't go change how it was declared. Webpack has a whole set of shimming options where you can, at build time, make minor adjustments to how code is declared to make it work. So at the top, I've got a script tag, and I do mean script at this point, not module. It's in the global scope and it's called addSomeMixin. I need to use it in another module. Well, since it's no longer going to be in the global scope, Webpack's exports-loader helps. With that, I can declare exactly which file I'm talking about and say I need to export the addSomeMixin symbol. Webpack will automatically add the export statement to the bottom of my file, which doesn't interfere with source maps, and everything goes from there. On the reverse side is the ProvidePlugin. Let's say a different piece of library code expected that addSomeMixin function to be global. Now it's not. The ProvidePlugin says: if any part of my compilation tries to reference addSomeMixin globally, add an import statement for it so that it's defined locally. So you can shim back and forth in that way. Now, if you have a large existing code base that uses a lot of script semantics, you're gonna be doing a lot of shimming. So keep that in mind. Don't expect this to be a plugin that just works for all scenarios; there's a little bit of playing around here.
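Those two shims might be sketched roughly like this (a sketch only: `addSomeMixin` and the file paths are stand-ins from the talk, and the `exports-loader` query shown is the older inline-query form — check the current exports-loader and ProvidePlugin docs for the exact syntax of your webpack version):

```javascript
// webpack.config.js (sketch)
const webpack = require('webpack');

module.exports = {
  module: {
    rules: [
      {
        // Forward direction: export a symbol from a classic script
        // that never declared an export of its own.
        test: /legacy[\\/]mixins\.js$/,
        use: 'exports-loader?addSomeMixin',
      },
    ],
  },
  plugins: [
    // Reverse direction: any module that references addSomeMixin as
    // a global gets an import of it injected at build time instead.
    new webpack.ProvidePlugin({
      addSomeMixin: ['./legacy/mixins.js', 'addSomeMixin'],
    }),
  ],
};
```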
You can do it, but there'll be a lot of configuration to make that work. One of the other really cool things about Webpack is that it natively normalizes modules. By that I mean it understands an ECMAScript static import, an ESNext dynamic import statement, the CommonJS require call, and the asynchronous CommonJS require.ensure call. If all the rest of your code is using language semantics that are already supported in your development browser, you don't need a transpiler for modules. Why is this a big deal? Well, one, it speeds up your builds not to have to run them through a transpiler like Babel. But two, debugging is a whole lot nicer when you've got the native, untranspiled code right there to look at. One of the gotchas here, though, is that as you start using Webpack and module syntax, especially with Polymer 2 and earlier, you're gonna lock yourself very easily into Webpack-specific syntax. That's okay for your own elements. Just be aware you've limited the ability to share your components with others. Here's an example. We're importing from node modules. Awesome, this works. We're using the Node module resolution format, except that anybody else who uses this element also has to be able to use the Node module resolution format. So we've just limited our ability to share this on webcomponents.org; that's just not allowed there. So keep that in mind. Another thing to keep in mind is that how a browser resolves a URL, how Webpack resolves modules, and how Node resolves modules all have different semantics. For instance, the top link in the HTML import section is mycomponent.html. You look at that and you know it means: find mycomponent.html in the same directory I'm currently in. Except that if you're translating that to a JavaScript import, it could mean import from node modules or from a sibling folder. So to get around that ambiguity, we require that the import statement has a dot-slash.
So the Polymer Webpack loader is going to add that for you. But what happens if you wanted to use a named import for a component? Well, Webpack adds specific syntax for this, and it comes up in any place a URL is used, like CSS background images. By adding a tilde character in front of the URL, you tell Webpack: I really want this to be a module resolution and not a URL lookup. But again, that's very Webpack-specific, so be careful where you use it. One of the really tricky things that happens when you get away from HTML imports is that the polyfills start feeling very fragile to bootstrap. You have to do this in precisely the right order. So a couple of things. One, if you're transpiling, you'll need the custom elements ES5 adapter in any browser that has native custom element support. But you can't transpile that folder, and you shouldn't bundle it with any other polyfills, because by design it can throw an exception. So watch out for that. The next thing is that the rest of the web components polyfills, if you're using the web components loader, load asynchronously. So you have to delay your main bundle load until after the WebComponentsReady event fires. Now, if this all seems a bit tricky, it is, but there's a demo folder in the Polymer Webpack loader that shows exactly how to do this, and I recommend you just reference and follow that and try not to overthink it. A lot of Polymer developers ask: how do I use a CSS preprocessor like Sass with my Polymer elements? The standard answer is you really don't need one, which is true, but not what they asked. And sometimes when you're working with other frameworks, you just really wanna reference those global color variables. The problem is that the shady CSS polyfill features can't see an external style sheet. Adding an external style sheet reference, like I'm showing here, is completely supported by the Shadow DOM spec.
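The loading order he describes might be sketched like this (the entry path is hypothetical; the `WebComponents.ready` flag and `WebComponentsReady` event come from the webcomponentsjs loader, so double-check against the loader's demo folder he mentions):

```javascript
// Sketch: webcomponents-loader.js loads polyfills asynchronously and
// fires WebComponentsReady when done, so hold the main bundle back
// until either that event fires or the polyfills are already in.
function whenPolyfillsReady(callback) {
  if (window.WebComponents && window.WebComponents.ready) {
    callback(); // polyfills already loaded, or native support
  } else {
    window.addEventListener('WebComponentsReady', callback);
  }
}

// Usage: whenPolyfillsReady(() => import('./my-app.html'));
```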
The imported styles will automatically be added to Shadow DOM and properly scoped. But again, the shady CSS polyfill doesn't see them, so in older browsers you can't use custom CSS properties, and in no browser can you use the @apply mixin syntax. The Polymer Webpack loader has an opt-in option so that, in cases like this, it will inline your CSS into the element. So you can run your preprocessor on it, and then the inlined styles will let the shady CSS polyfill match them, and you can have both worlds at that point. All right, what about Polymer 3? So all of the functionality that the Polymer team is moving towards with Polymer 3 can already be used with this loader. In fact, it's doing almost exactly the same thing. You can bundle and use ECMAScript modules as you need right now. Where are we going with Polymer 3? Well, maybe you're not thrilled with the idea of authoring your HTML inside a template literal in ECMAScript modules. Don't worry, Webpack should be able to do that for you. We'll just run the same process we run today and stick it in the template property, once Polymer 3 is far enough along in development that that makes sense. My team wrote this to reduce a lot of the build friction we were seeing; in fact, I didn't actually write most of the code, and the developer who did is here with me. We're using this in production today with AngularJS applications. I also know of developers who are here who are using it in TypeScript projects. My team is gonna continue to collaborate with both the Polymer team and the Webpack core team to improve the experience of Webpack and Polymer together, to give you as a developer the best of both worlds. We really tried to design this to leverage the strengths of both systems rather than letting them fight each other. A couple of takeaways for you. In the Polymer Webpack loader project, you're gonna find that demo project I referenced earlier. Rob Dodson actually wrote it.
It's a great reference point for just the bare-bones examples of what to use. In addition, my team maintains a copy of the Polymer Starter Kit which shows the changes you need to make to a Polymer app to build it with Webpack. That link is also referenced. You're free to reference both. Thanks, and I'll be around later for Q&A. Thanks, Chad. All right, we've got one more. Our next speaker is somewhat unclassifiable. You're gonna see some live coding. When I asked her what I should do for an intro, she actually suggested: meow, meow, meow, meow, meow, meow, meow, meow. Here's Monica. It's not wrong. It's what's about to happen. 25 minutes. Hi, y'all. I have to make some banter with you. So I have the nightmare setup where I have slides for 12 minutes and then a live demo for 12 minutes. That's two computers. So everybody loves me right now. Cool. I'm just gonna go. Awesome. Hi, everyone. I am Monica. I am not clicking. Can I have a clicker, please? I'm sorry. Hi, I'm Monica. I'm that person with the cat. I'm notwaldorf on Twitter and GitHub and internet places. And I've been on the Polymer team for about two years. You may have seen me meowing about web components a lot. I started right before Polymer 1 shipped. This was exciting because back then, I thought web components were really cool, but nobody really knew about them. I would go to conferences and be like, I work on web components, and people would be like, I don't know what that is. Is it an iframe? No, it isn't. And I really liked doing that, because I sort of became this traveling saleswoman going door to door selling web components: let me tell you how web components are right for you. And I'm glad I did that, because talking to people, I formed a story about the kind of teams that web components were good for and the kind of people that enjoyed using them. And that story starts with the world around us.
The world around us is built by people like you and me, and sometimes we're crafters and sometimes we're assemblers. What I mean by that is: if you think about your kitchen, somebody made you a table and a chair and a bunch of cabinets, and maybe you, or maybe somebody else, assembled them together into a nice-looking kitchen. You didn't go into the forest, cut down a tree, sand it down, cut it into slabs, take some screws, and make a table out of it, like, I crafted the shit out of this chair. You didn't do that. Even if you use something like IKEA, you didn't really craft that table. You assembled it. Somebody produced a whole bunch of instructions and pieces for you, and you put them together and made a thing out of it. The same thing happens if you cook. When you make a meal, you take all these ingredients and you mix them together. You don't grow your own tomatoes and corn and churn your own butter and grind your own spices. And it's very good that you don't do this, because assembling is faster, more efficient, more performant, and more accessible. I have no idea how to farm my own tomatoes. I know that because I have a plant on my patio, and it makes one tomato every month. I would die of starvation if that were my only tomato. And as a society, assembling is really fast for us. That was the whole point of the industrial revolution: to move from everybody crafting and doing a lot of manual work to assembling more and making things faster and more efficient. And the thing is, the exact same thing is true for the web. The web is kind of like a society, but on the web we tend to do more crafting than assembling, and that's a little bit weird, because it means we're not gonna progress as a society if we don't learn from the societies we already have.
And the reason I like this crafters-and-assemblers metaphor is that web components sit really nicely as a bridge between these two. If you think about somebody who makes components or elements, they have a lot of institutional knowledge about what makes that element good and all the heavy lifting that you need for that element. Something like paper-dialog requires a lot of really weird knowledge about animations in the CSS spec, and what a stacking context is, and how you can't just produce one from thin air every time you want one. But on the other hand, once somebody has made you a web component, this enables people to just pick it up and use it and form a really nice experience with it. I can use anybody's custom element. I don't need to know how it works, and that's really great, because if you take my metaphor from before, this is advancing the web society as a whole. It's faster to make things, and it's more efficient. The CDO of ING said this at last year's Polymer Summit, and I really loved it and it stuck with me. He said that what he wants to see in the world is less crafting and more assembling, fewer people doing things by hand. You shouldn't have to know how to build a table from scratch to have a really nice living room. You shouldn't have to know what a CSS stacking context is to have a modal dialog that tells somebody there's a sale on your website. And this is true of every other industry, but it's not exactly true for the web. In the real world, I can be a crafter if I want to. I can be an assembler if I want to. I get to choose. I don't have to be both at once. But a lot of the time on the web we have to both craft and assemble at the same time, and that isn't really great. And all of this comes together in design systems.
So when I say a design system, I mean the elements library and the things that define what your brand should look like: the colors and the padding and the margins and the visual elements that an app can use. And before web components, having a design system was very craft-oriented. Look at something like Bootstrap, which is amazing, but with Bootstrap, somebody crafted you this enormous CSS file that had all these styles, and it was great, but even putting the elements in your page was a little bit hands-on. For example, this is a branded navbar that you could have in your application; I took this from the Bootstrap site. But imagine that at some point your brand changes. Maybe that container-fluid class needs to go away. What do you do then? You have to literally go into all of this DOM you produced and start deleting code or adding code. And that doesn't feel a lot like assembling to me. And of course we know this; web components came and fixed it, because now we have a custom element that abstracts all of this garbage that we don't actually care about, that somebody else built for us, because the only thing you care about in this element is just that image. So that's good. But the thing is, this story only works if we have tools around it. The story only works if, like Bootstrap, we have a catalog of elements, so that as a developer I know all of the things that are available to me and don't have to reinvent from scratch what somebody already made: you know, the perfect kind of blue button that I'm allowed to use. So we made webcomponents.org, which you know very well. It's a public catalog of web components where everybody can upload their components, and it's got demos and it's got docs, and the world was good and we were happy. Only we weren't super happy, because it turns out that didn't work for everyone.
There's a lot of companies, like EA and Comcast, that are really sold on web components and use them a lot, but their web components aren't public. They can't use webcomponents.org because they can't publish their elements; they develop them internally. So as a result, every one of these companies basically built their own catalog. They could have forked webcomponents.org, but webcomponents.org solves a different problem, which is: how do we let people upload arbitrary components? And that isn't the problem that companies with private elements have. They just want to display the fairly limited set of elements they have. And the problem here is that there's no such thing as one-size-fits-all. Steve told you this this morning when he was talking about elements, and it's true for apps and solutions too. At best, you have something that works in the 80% case. 80% of developers probably want to upload a public web component, but the 20% case is all edgy and weird, and you have to do something different. And it's usually a simpler and assemblier solution. So in this case, I built an elements catalog. It's fairly ugly, it's not amazing, but it also solves a very specific problem. You have a JSON file, and in it you specify what kind of elements you want to load docs and demos for. And these can be Polymer 1 elements or Polymer 2 elements or just vanilla custom elements. They can be anything you want, and the code is there for you to do this. And if this doesn't work for you, which is fine, you can fork it, you can clone it, you can fix it to make it work with your workflow, and then it's good. And the world was good again. Well, it really wasn't. We again solved another 80% of a bigger problem and left 20% off. It's really great that now your developers know what elements are available to them, but they also need to know how those elements fit together.
Because in an organization with real workflows and real people, you need to be able to validate your prototypes against code that you actually have, not code that you think you might have. Most of my examples when I give talks are from Chrome, the browser, because I used to work on it and I know the developers and the workflows. This is a page from Chrome Settings. And the way the Chrome Settings page gets designed is: first, we give it to a designer, Alan Betz. He's a fantastic human being. He designs all of these workflows in Sketch. He calls these sticker sheets, because he basically has stickers of all the material elements that he would like to use, or is allowed to use according to material design rules. Then he assembles them together and figures out the screens. And for each particular screen, he does a whole bunch of iterations. Should the button be on the left? Should it be on the right? Should the text be centered? Then he produces a whole bunch of screenshots and gives them to developers. But the thing is, real elements have bugs and problems. They have implementation limitations. And I know this because I wrote those elements. So a lot of the time Alan would be like, this toggle button needs to be 16 pixels, and the developer would be like, it only works at 18 pixels because Monica didn't make it resizable. Which is annoying, because it means that the prototype somebody made cannot actually be implemented. This is what I call a what-you-see-is-what-you-hope-you-get. You hope that the prototype you get from the designer is what you actually get to see in real life. There's no guarantee about it. And it's nobody's fault that there's no guarantee; it's just that not everything works like it does on paper. What you need instead is a tool that is a common ground between developers and designers, so they can look at the same elements and improve that workflow.
Which is "what you see is what you get", a WYSIWYG. And something like this has been a long-time dream of many of the Polymer team members. We had this in 0.5 and it was called Designer. It was really awesome. And Justin is the visionary on our team, so he had this big dream of Designer 2, which was gonna be an app that was gonna basically build an entire app for you. It was gonna let you do the UI, and it was gonna let you do the JavaScript and all the data bindings, and, you know, optimize shit, like throw a service worker in there. And literally, from using that app, what you would see is what you would get. But he's always dreaming up really big ideas, like the Polymer CLI and lit-html, which you'll see tomorrow, so he never actually finished this. But lucky for you, I have incredibly small ideas and very little patience. So I had two free weeks, basically stole his milkshake, and built this other thing, which is more like a getting-started tool. It gives you the ability to create something like a prototype, but not a full-blown app. It doesn't work super well all the time, but it helps you get started. Because what you deserve is Justin's tool, but we don't have it yet, so instead you get a WYSIWYD, which is like a shittier version of that. And I'm gonna show it to you. Maybe, okay. So I really, really, really hate live demos and I'm really, really nervous about this, so hold all of your fingers crossed, and toes and shit. Okay, so this is WYSIWYD. You can, oh, I just got a screensaver. It's going well. Okay, it's at polymerlabs.github.io slash WYSIWYD. The code should be open sourced very soon. It's kind of terrible, but you'll live. So here's what the app does. It has a couple of panes, and one of the things it lets you do is select some elements and drag them in. So I can do something like a div, which is a native element. I can resize it, I can move it, I can change some of its styles, so I can set a background color.
And if you see over here, it has properties. I'm gonna zoom in. So it's got things like a class list, hidden, and title. There are also custom elements. So for example, I can find, God, a good one, paper input. Paper input is my favorite. And when I add it, paper input has different kinds of properties, because we actually crawl the prototype chain and look at what properties or attributes your custom element has. So in this case, you can see that I have things like value and label and disabled and the pattern, and all that goodness that comes with paper input. And you can change them. You can change styles. Flex is just the collection of the flex attributes that you might use. And once you're satisfied with the code that you've been producing, there's a code tab where you actually get the code for this element. It's got two imports, the Polymer element and paper input, which makes sense. Let me zoom in. And then as you scroll, it basically gives you an index.html that has both the element definition and the declaration at once. So we have our app, and then the DOM module for that app. And you can see that it's got all the styles: I added a div, I added a paper input, and here they are in my template. And then it has the element definition at the bottom. You can also see a live preview of it, which is where we actually render the code, because I don't trust that designer pane, so I give you this. And this code isn't editable, because again, that would be a very complicated app and then I would have to re-render everything. There's also an option to pop this out into a JSFiddle, where, the conference Wi-Fi is amazing, there it is, and it's your paper input and it does all the things. And you can, from here on, start adding logic or JavaScript or whatever you want. Cool. So let's build a thing. Let's delete all the things first and then build a thing.
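That prototype-chain crawl can be sketched in plain JavaScript. This is a minimal illustration under my own assumptions, not the tool's actual code: `FakeBase` and `FakeInput` stand in for a real element class (in a browser you would start from something like `customElements.get('paper-input').prototype`), and we collect accessor names up the chain:

```javascript
// Stand-ins for a custom element class hierarchy. Polymer exposes
// declared properties as get/set accessors on the prototype, which is
// why walking the prototype chain surfaces them.
class FakeBase {
  get disabled() { return this._disabled; }
  set disabled(v) { this._disabled = v; }
}

class FakeInput extends FakeBase {
  get value() { return this._value; }
  set value(v) { this._value = v; }
  get label() { return this._label; }
  set label(v) { this._label = v; }
}

function collectProperties(proto) {
  const props = new Set();
  // Stop before Object.prototype so we only report element-specific names.
  while (proto && proto !== Object.prototype) {
    for (const name of Object.getOwnPropertyNames(proto)) {
      const desc = Object.getOwnPropertyDescriptor(proto, name);
      // Keep accessors (get/set pairs) and skip the constructor.
      if (name !== 'constructor' && (desc.get || desc.set)) {
        props.add(name);
      }
    }
    proto = Object.getPrototypeOf(proto);
  }
  return [...props].sort();
}

console.log(collectProperties(FakeInput.prototype));
// logs: [ 'disabled', 'label', 'value' ]
```

Properties inherited from base classes (here, `disabled`) show up alongside the element's own, which matches what the properties pane displays.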
What I'm going to try to build, again, thoughts and prayers, is a YouTube app where basically I'm gonna bang on a paper input and then get a whole bunch of YouTube videos related to that input. And in particular, they're gonna be related to cats, because why not? Let me see how I'm doing for time. Okay. So one of the tabs that I didn't show you over here is the Samples tab. These are basically combinations of elements that I've added, because I really don't like rebuilding the same thing over and over again. And again, this is the kind of app that you would be able to clone and fork and take home, and you could add your own samples for whatever team you have or your particular workflow. So in this case, I'm going to start with a header sample, which is basically an app header with a toolbar in it, and it has a div. And over here, I'm gonna rename my div to meow tube and put some emoji in there. You can tell that I've practiced this. Let's put a television too. Sweet. So that's it. And now when I save it, it's there. If I preview it, it's there. If I look at the code, I actually have an app header with all of its attributes, a toolbar, and everything else. Cool. So I have my thing and now I wanna render my results. So in here, I'm going to add a DOM repeat, because I'm probably going to get a whole bunch of results. Actually, I need an iron-ajax first, because we're gonna need to ask YouTube for this. So in my custom elements, iron-ajax, here it is. And iron-ajax has a bunch of things we need to populate. A URL, for example, which I'm going to paste, because nobody can remember this. A bunch of parameters: things like your API key; your query, which in this case I'm hard-coding to cats; the part of the response you care about, which is the snippet; and handle-as, set to json. This basically tells iron-ajax that whatever it gets back from the server should be treated as JSON.
We wanna set auto to true so that the moment we refresh the page, it actually fetches this. These are just attributes that you can get from the iron-ajax documentation; they're not made up or anything. And last-response, I wanna populate an object with this, so here I'm gonna put videos. What this means is that iron-ajax is going to get a JSON object from the server and then put it in this videos object. This is just a regular Polymer data binding, if you're familiar with data bindings; it's exactly how you would write it. Cool. So now I have my iron-ajax, it's set up. And then I'm gonna create my DOM repeat. And this is where everything can go south. So, DOM repeat, here we go. I have it. I'm gonna drag it in. It's hanging out. It doesn't look like anything. I also have a paper card sample, which basically has an image and a link in it. And I'm gonna move it around so that it goes into my DOM repeat. There are these arrows here, because sometimes dragging and dropping is infuriating, so if you don't wanna do that, you can just traverse the tree. Speaking of the tree, here on the left-hand side, I haven't shown you this yet: it's basically all of the elements in your app, so that you can see the hierarchy, and you can click on them and select them, because again, dragging and dropping is infuriating. And I know this because I built this and it was just rage. Okay, we have iron-ajax, it's getting data for us. We have a DOM repeat, which repeats, stamps the same thing over and over again. So it's gonna stamp one of these paper cards for every result that we have. And we need to tell the DOM repeat to take the elements from iron-ajax. So, iron-ajax was populating an object called videos, and because I've memorized this demo, I know that videos actually has a sub-object called items; that's the one I care about. So here, videos.items is what I want in my DOM repeat. Now the DOM repeat is gonna take every single thing in this object and create an item object.
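In Polymer template markup, the fetch step described here looks roughly like the sketch below. The attribute names (auto, url, params, handle-as, last-response) are real iron-ajax API; the YouTube Data API search endpoint and the YOUR_API_KEY placeholder are my fill-ins for the values she pastes in:

```html
<!-- iron-ajax fetches on load (auto), parses the response as JSON
     (handle-as), and binds the result into the `videos` object. -->
<iron-ajax
    auto
    url="https://www.googleapis.com/youtube/v3/search"
    params='{"part": "snippet", "q": "cats", "type": "video", "key": "YOUR_API_KEY"}'
    handle-as="json"
    last-response="{{videos}}">
</iron-ajax>

<!-- The dom-repeat stamps its contents once per entry in videos.items. -->
<template is="dom-repeat" items="[[videos.items]]">
  <!-- the paper-card sample gets dropped in here -->
</template>
```

Note the curly-brace binding on last-response: iron-ajax notifies when the response arrives, which is what pushes the data up into `videos`.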
So to this paper card, I can now give my item. So, I have an image. The image should be this thing that I'm gonna paste, called item.snippet.thumbnails.high.url, very memorable. And then for my link, I'm going to change the text content to be, let me zoom in again, item, ooh, hello, item.snippet.title. And then for my href, every snippet has a video ID, so it's gonna be item.id.videoId. And now if everything goes well, if I go to the preview, I get nothing. Amazing. Something didn't go well. So let's figure it out. We have our image and we, oh, maybe we don't have data. Let's wait for it for a little bit. Nope. Okay. Pardon me? Oh, way louder, and thank you. See, dragging and dropping. All right, so let's move this one back and move it into the DOM repeat. Nailed it. Thank you. But I'm honestly kind of surprised that worked, I'll be honest. So it looks a little bit terrible, because links are, you know, display inline, and we don't have an input. So let's do that, now that we know our thing works. So first of all, we're going to change our link to be display inline-block so that it's sized correctly. And then, because I don't want it to grow, so that all my things are still aligned correctly, I can also add extra styles. So the thing that I did here is that, as a lazy developer, I didn't add every single CSS style to this designer, because I'd be crazy. Instead, I give you the option to just add whatever you want. If you add garbage, nothing's gonna happen, because the CSS parser is smart. So in this case, I want an overflow auto, and then that'll be great. So I have my DOM repeat, I need my input. So let's add a paper input again in here. And then I'll drag it into here and move it before the DOM repeat. My paper input is going to have a label, and it's gonna say find your meows. And whenever you type into an input, it has a value property, so I'm going to save that into a query property. Cool, and now all I have to do is give this query to the iron-ajax.
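The paper-card sample inside the DOM repeat, with those per-item bindings, would look something like this sketch. The card's internal markup and the watch URL are my guesses at the sample's contents; the field paths follow the YouTube Data API search result shape (snippet, thumbnails.high, id.videoId):

```html
<template is="dom-repeat" items="[[videos.items]]">
  <paper-card image="[[item.snippet.thumbnails.high.url]]">
    <!-- display: inline-block and overflow: auto are the style tweaks
         she adds so the links size and align correctly -->
    <a style="display: inline-block; overflow: auto;"
       href$="https://www.youtube.com/watch?v=[[item.id.videoId]]">[[item.snippet.title]]</a>
  </paper-card>
</template>
```

The `href$=` form binds to the attribute rather than the property, which is the idiomatic way to bind URLs on native elements in Polymer.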
So if we move back to the iron-ajax, it used to have this parameters object that you know about. And one of the things in here was cats, the query parameter, q. So I'm just gonna add my query here and still keep the cats, because it's a meow tube. We only look at cats. Why would you ever want to look at anything else? Now let's just check the code for a second before we run it. I have a whole bunch of styles, a whole bunch of inputs, a whole bunch of DOM that I didn't have to create. I just dragged and dropped it and typed into some text fields, and that was awesome. And now if I look at the preview and I look for a burrito, I get burrito cats. Here is a wet cat, and if I open that video, it should work, and it does. Here are your favorite cats, just getting wet and miserable. Ah, that's it. That was my demo. Cool, so this is live. It's polymerlabs.github.io slash WYSIWYD. If Elliot didn't forget about me, he'll probably switch it to open source now, so you can actually check the code and judge me and complain about it later at the party. And I think I have like one more slide to tell you about, if we can switch back to the slides. Oh my God, it worked. Never doing a live demo again, till tomorrow. So because we had that 80-20 problem, I know that this is not going to fit all of your problems, and it's not gonna fix every single workflow out there, and it's probably gonna enrage you, because dragging and dropping is kind of annoying. But the best thing about it is, because it's open source and because of how I tried to build it, you can take it, you can fork it, you can clone it, you can change it and make it work for you. Maybe you need a marketing site assembler, where you just build things so that your marketing team can put together, I don't know, pamphlets, whatever marketing does, something awesome that sells shit and gives you money. Fork this tool and build it into that. Maybe you need a designer where your designers can use their own UI elements to build things.
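Wiring the search box into the request, the whole thing could be condensed into one small Polymer 2 element like the sketch below. This is my reconstruction, not the tool's generated code: the element name meow-tube, the helper _searchParams, and YOUR_API_KEY are stand-ins. iron-ajax expects params to be an object, so the query is folded in through a computed binding rather than string interpolation:

```html
<dom-module id="meow-tube">
  <template>
    <!-- Typing into the input updates `query` via two-way binding. -->
    <paper-input label="find your meows" value="{{query}}"></paper-input>

    <!-- Changing `query` recomputes params, which re-fires the request. -->
    <iron-ajax
        auto
        url="https://www.googleapis.com/youtube/v3/search"
        params="[[_searchParams(query)]]"
        handle-as="json"
        last-response="{{videos}}">
    </iron-ajax>
  </template>
  <script>
    class MeowTube extends Polymer.Element {
      static get is() { return 'meow-tube'; }
      static get properties() {
        // Default to '' so the computed binding runs (and fetches cats)
        // before the user types anything.
        return { query: { type: String, value: '' } };
      }
      _searchParams(query) {
        return {
          part: 'snippet',
          type: 'video',
          key: 'YOUR_API_KEY',       // placeholder, not a real key
          q: 'cats ' + query          // still keep the cats
        };
      }
    }
    customElements.define(MeowTube.is, MeowTube);
  </script>
</dom-module>
```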
Please fork it, please build it into that and give it to your team. Maybe you want designers inside of designers inside of designers, because why wouldn't you want that? In particular, if your name is Dan Friedman and you get a bigger screen, you can do like nine of them inside of each other. Best use of Designer ever. Wendy said this morning that her dream was to bring web components to every browser and every developer, and I think the way we're gonna really do this is to start thinking more about visual tools. It's not all about your CLIs and your webpacks and your gulps and whatever other tools you use. Sometimes assembling is really visual, and we haven't been really good at building visual tools for visual people. So I hope that this at least proves that there's value in doing this, that it's kind of exciting to build this and to use this, and that it takes us a little bit closer to just assembling more things and crafting less. Thank you. Awesome job. So, happy end of day one, Polymer. Thanks, everyone, for coming. Thanks to our live stream attendees for joining us. We've got a full day tomorrow, starting at 10 a.m. I've got some announcements for our in-person attendees now. The bullet list was getting longer and longer; I memorized the points until there were 20 of them. Okay, so we've got a party going on in a little bit, so stick around. We have a fun after party with dinner, drinks, and a DJ. There are both alcoholic and non-alcoholic drinks, so drink responsibly. And don't forget the community guidelines that are posted around the venue. On a similar note, before you get started on the party, have a plan for how you're getting home, or wherever. If you need help figuring out how to get a taxi, you can talk to any of the staff members wearing the pink shirts and they'll help you with that stuff. Let's see, the coat check will be open until the end of the night. Tomorrow morning, we've got breakfast starting at 8 a.m.
Also, starting at 9 a.m., you may attend the Polymer Women's Breakfast, which will feature a panel discussion with Wendy Ginsburg, Mariko Kosaka, and Monica Dinculescu. Sessions tomorrow start at 10 a.m. sharp. And finally, oh, anyone that attended the code labs: some of you may have noticed that we might be rate limited on the GitHub API with all those polymer init calls. I think that has to do with our IP locality. Let me just read this verbatim instead of paraphrasing: if you are running into this issue when running polymer init, you can instead just press the Download ZIP button in the Polymer Starter Kit repository in the Polymer Elements org. Okay, thank you so much, see you at the party. Party!