All right, hello. Good morning, everyone. Thanks, Monica. And thanks, Dion, for getting the day started. And thanks, all of you, for being here. I hope everyone's really excited for day two. My name is Justin Fagnani. I lead the Polymer Tools team. And I'm really, really excited to tell you this morning what we've been up to recently.

So yesterday, you got to hear all about the future of Polymer and web components. You heard about the new web component standards, which are now starting to ship natively in browsers like Chrome and Safari. And you heard about Polymer 2.0 and how it builds on these new standards and is even smaller, easier to use, and more interoperable with other libraries. And just like the Polymer core library, we're also working on some major upgrades to the Polymer tools, so that they work with those new standards and Polymer 2.0, and so that they're useful for the entire web components ecosystem. In the next 45 minutes or so, we're going to take a tour of our tools: cover some thoughts about why we build tools, updates on a few of the core tools in our toolbox, and finally a look at how we actually deliver these tools to you so you can use them.

Let's start by looking at why we build tools and why we even have a tools team on the Polymer team. The Polymer team hasn't always built a lot of tools. In fact, the core library team is famously averse to tooling. What they, and really probably most of us, love about the web platform is the immediacy of working on it, that instant edit-refresh cycle. So their tools consisted of basically an editor and a browser, and that's pretty much it. Components and apps and tests would load in the browser without any build step required at all. By not adding too many layers of abstraction, the hope is that there's not as much need for a complex toolchain. But you almost always need at least some tools.
And because we're using these new platform features like custom elements and HTML imports, many of the existing tools didn't quite work how we wanted. So first we needed a dev server that could easily serve up components and their dependencies. And then we needed a bundler that worked with HTML imports. And then we really wanted a test runner that worked with our component-first workflow and could load multiple HTML files. And this need for tools went on and on and on, until we ended up with this virtual kitchen drawer full of tools. And this is very, very hard for our team to maintain. But much worse than that, we ended up with many, many pages of tools in our documentation. And this is hard for our users. And it's especially hard for users who are new to the Polymer and web components world.

So at the beginning of this year, we knew that we needed a dedicated tools team. And we knew that we needed to go back and re-envision our tools as a cohesive and organized suite that's easy to approach and understand. And this was also a good time to stop and ask a really, really important question. Why? Why is the Polymer team building tools in the first place? Well, the tools we had already created didn't come out of nowhere. Like most tools, we built them because they solved real problems. But what kind of problems? Well, web development in general, as we all know, has many, many problems to address. And as we all know, web development has many, many, many tools to address them. But with Polymer and web components, we have new and unique problems to deal with that the existing tools don't address.

For one, HTML and CSS are now extensible. Understanding HTML used to be pretty easy. There was a fixed set of tags determined by a committee, and tools like text editors and linters could just build in a hard-coded database of elements and their attributes. With custom elements, this all changes, and now HTML is an open-ended set of tags.
So how do you know what's valid HTML anymore? Same thing with CSS. CSS used to be global and have a fixed set of properties. But now, with Shadow DOM and custom properties, CSS has scopes and an open-ended set of properties. Knowing what's available to use is a lot harder now. There are also many different ways to write an element. Already at the summit, you've seen a few. We have Polymer 1.0, Polymer 2.0's class syntax, and its legacy syntax. We also have vanilla custom elements that don't use any library at all. And we have other web component libraries like SkateJS, Bosonic, X-Tag, and reactive web components, and I'm sure a lot more that I'm not even aware of. And then with HTML imports, HTML files can now import each other, like libraries and modules in other programming languages. This ability for HTML to form a dependency graph is completely new to web development tools. Existing bundlers and linters and minifiers just don't know about this new structure of HTML.

And so we can notice something about all of these problems: they're not Polymer specific. They apply to web components in general. And this brings us to a really, really important principle that we have on the tools team. The Polymer tools are not just tools for the Polymer core library. They're tools for web components in general. Our goal is to help all web components developers and users. I think this is something really, really unique to the web components ecosystem: because of the inherent interoperability of web components, the success of any one web components user or library is a success for us all, because together we grow this entire interoperable ecosystem. So this is one major motivation for tools: to solve problems that are unique to web components. But at the same time that we're in the midst of this web components sea change, other parts of web development are evolving as rapidly as ever.
There are a whole slew of new technologies and expectations that are drastically changing the way we build web apps, and especially mobile web apps. To offer a truly great mobile experience, apps now have to have what was until recently a fairly sophisticated structure, supported by a sophisticated toolchain. On the Polymer team, we've been trying to codify these best practices into a pattern that helps you craft an optimal loading experience. And we came up with this acronym that you've heard about: PRPL, pronounced "purple," which stands for push, render, pre-cache, and lazy-load. PRPL involves things like using HTTP/2 push to send exactly the resources needed for a given route, service workers to pre-cache an app's resources, and per-route bundling when push isn't available. These techniques are critical for engagement and for competing with native apps. And tools can help make this manageable. The goal of even defining a pattern like PRPL is to make it easily repeatable, and repeatability is something that tools are really, really good at. So this is another motivation for Polymer tools: we wanna take these cutting-edge techniques and make them standard practice. We wanna make things like optimal loading and offline support the default when starting any new app.

So that's why we're building tools. This is what gets us to work in the morning. Let's look at what we actually build. Our tools are organized around a set of core single-purpose libraries, like the project initializer, the test runner, and the build system. These tools each address a problem along the life cycle of a project, from getting started all the way through building for production. And then we take these tools and we integrate them all and deliver them to you in the Polymer CLI. And then powering most of these tools is a common analysis engine that allows our tools to really understand your project. Now let's take a look at that engine that powers our tools.
A few months ago, we embarked on a project to build a new and flexible static analyzer for the web, which I'm really happy to talk about for the first time. It's called the Polymer analyzer. The analyzer helps other tools understand applications and components in this new, extensible world. Applications made up of many different types of files, using new elements and written in different libraries. Its job is to dig into this complex graph of resources that make up an app, find and extract important features, like element definitions and HTML imports, and then provide a convenient API for our other tools to operate on this structure. And it does this all with plugins so that the analyzer can be extended without changing its core implementation. The analyzer comes with several parser plugins so it understands HTML, JavaScript, CSS, and even JSON. And in the future, we can add support for additional languages like TypeScript. It also understands many different ways of importing files into each other, like HTML imports, inline and external styles and scripts, and soon it will support native JavaScript imports. And it has a large set of plugins to support different ways of writing custom elements, like Polymer and vanilla custom elements. So when the analyzer processes a file, it parses it with one of its parser plugins, here HTML. And then it scans the document, looking for important features with its set of scanner plugins. For instance, we have an HTML import scanner and a DOM module scanner that can find a DOM module and parse the templates in them. And then we have a Polymer element scanner, which can find a Polymer element declaration and find important features of it like its properties. Next, the analyzer parses and scans any imported and inline documents until it's analyzed everything in your application. So here we have an import that's importing another file, and the analyzer finds that. 
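To make that parse-then-scan flow concrete, here's a toy, regex-based sketch of what two scanner plugins look for: an HTML import scanner and a dom-module scanner. The real analyzer uses full parsers and a plugin API, not regexes, and these function names are made up for illustration.

```javascript
// Toy illustration of the "scanner" idea: extract two kinds of features
// from an HTML document's text, roughly what an HTML import scanner and
// a dom-module scanner would find (the real analyzer parses properly).
function scanImports(html) {
  const re = /<link[^>]*rel=["']import["'][^>]*href=["']([^"']+)["']/g;
  const imports = [];
  let m;
  while ((m = re.exec(html)) !== null) imports.push(m[1]);
  return imports;
}

function scanDomModules(html) {
  const re = /<dom-module[^>]*id=["']([^"']+)["']/g;
  const modules = [];
  let m;
  while ((m = re.exec(html)) !== null) modules.push(m[1]);
  return modules;
}

const doc = `
  <link rel="import" href="../polymer/polymer.html">
  <link rel="import" href="my-icon.html">
  <dom-module id="my-card"><template></template></dom-module>`;

console.log(scanImports(doc));    // the document's HTML import dependencies
console.log(scanDomModules(doc)); // the element templates it defines
```

From features like these, the analyzer can keep walking: each discovered import is itself parsed and scanned, which is how the dependency graph gets built.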
And then finally, the analyzer can resolve references between features and really understand what's going on. And here, we know that this tag is an instance of a definition in another file. The analyzer extracts a lot of information about elements. It understands an element's basic API: its attributes, properties, and events. It knows how to style an element. It knows about its prototype chain, its superclass, and its mixins. And it knows how to use the element too, how to import it, and it can get its documentation. And then this information is available to all the other tools that build on the analyzer.

As part of the analyzer work, we're also adding a new feature to help our tools better understand app-shell-style apps that use lazy importing. We call this declarative lazy HTML imports. The analyzer and its plugins work well because of the declarative nature of HTML and Polymer. The more that your application is declarative, the more we can tell what it's doing without having to run it. But lazy imports today use an imperative API that trips up static analysis. If we take a look at a typical app-shell-style app, we usually have a shell that's loaded before rendering. It then dynamically loads the code needed for a particular route. To actually load a route, you almost always see a snippet of code like this: a method that takes a page name and then loads the associated HTML import for that page with importHref. But our tools don't understand this imperative code, and so you get problems like linter warnings, or maybe the bundler might miss some files. Declarative lazy imports allow you to specify the imports you will load right in your markup. Lazy imports are link tags, like HTML imports, but they use a different rel attribute and they have a group name. When it's time to actually load an import, you do it by group, not by URL. And we provide a behavior to help with this.
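Here's roughly what that looks like in markup. This sketch is based on the preview release described here, so treat the exact attribute and method names (`lazy-import`, `group`, `importLazyGroup`) as preview-era assumptions that may change:

```html
<!-- Preview syntax, may change before the real release: each link
     declares an import that WILL be loaded later, tagged with a group. -->
<link rel="lazy-import" group="detail" href="views/my-detail-view.html">
<link rel="lazy-import" group="list" href="views/my-list-view.html">

<script>
  // With the provided behavior mixed in, a route change loads a whole
  // group by name rather than by URL (method name from the preview):
  //   this.importLazyGroup('detail').then(function () {
  //     // the detail view's definitions are now available
  //   });
</script>
```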
And by using a group rather than a URL, tools like the bundler can change URLs around without breaking your code. And by being in markup, tools like the analyzer and the linter can see these lazy imports and know that you're gonna import this URL in the future. So you can try out a preview of lazy imports today. This works with Polymer 1.0. We'll be adding Polymer 2.0 support pretty soon and doing a real release.

And before I finish with the analyzer, let's talk really quick about linting, because it's such an important consumer of analysis information. We're rewriting the Polymer linter right now on top of the new analyzer, and this is gonna bring some nice improvements to you very soon. The existing lint command in the CLI is already very useful. It checks things like that you've declared the properties you use in data bindings, or that the elements you use are actually defined and imported. But the current linter has a hard-coded set of rules that only works with Polymer 1.0 elements. And like I just talked about, it doesn't work very well with lazy imports. So because the new analyzer powers the linter, it understands all the different types of elements that the analyzer does. Not only that, but we're making the linter itself extensible so that new linting rules can be added by users. This is especially important for internal use at Google and for our enterprise customers, who often have stricter policies they wanna add and enforce. We're also adding rule sets, which we're gonna use to offer different linting modes, like a Polymer 1.0 mode, a Polymer 2.0 mode, and a hybrid mode. Choosing the right rule set will make sure that you're only using the features that are available in the specific versions of Polymer that you're targeting.

So the main insight of the analyzer is that web applications are not just HTML and they're not just JavaScript. They're built from many types of files and many ways of importing them. We might have HTML, JavaScript, CSS, images, and more.
And so we can't rely only on JavaScript tooling or only on HTML tooling. We need something that brings them all together, which is exactly what the analyzer does. And then we can do some pretty awesome things with the results. The new analyzer is key to our goal of supporting many different tools and many different web components libraries. If you use or write a web components library, we encourage you to get in touch with us so we can help you write plugins for the analyzer. And if you write tools, or are interested in using the analyzer, please get in touch with us so we can help out as well. It's still very early, and the APIs aren't completely finished, but we wanna launch with support for as many web components libraries as we can.

All right, so let's next talk about package management. Yay. All right, as you all know, we use Bower to distribute all of our Polymer packages. And Bower works really well for us, actually. But the ecosystem has been coalescing behind npm, and many of you have asked for npm support so that you can use a single package manager. And there's even an infamous issue on our GitHub repo, issue 326: "Publish sub-projects on npm and add them to package.json." If you can see the small text there, this issue was opened almost exactly three years ago today, and it has 202 lively comments on it. Now, npm support, it turns out, is much harder than it seems at first. And while we haven't solved this issue yet, and I don't wanna get your hopes up, we're not gonna announce npm support right now. Yeah, I'm sorry. But we did come up with a plan, at least. And I wanna go over that plan, and I wanna talk about some really, really awesome progress that's been made recently.

So the plan looked like this. The first thing we wanted to do was publish raw packages to npm, so, not make any changes to them. We generated package.json files from our bower.json files and just threw them up on npm. And we said, if you can figure out how to use them, let us know.
We knew there would be some problems, especially because npm installs things, possibly, in a nested structure, and that's difficult for us. So the next step was to find or build a flat package installer, something that might take an npm installation and flatten out all the packages. And then, step three, we needed to build a Bower-plus-npm release tool, something that smooths out some differences between Bower and npm packages and then actually tries to install and test our packages from both Bower and npm, so that we can make sure that we push working packages to both repositories. And finally, once we have all that, and we know that you can trust that when you install something from npm or Bower it's gonna work, we're gonna publish all of our working packages to npm.

So step one we did a while back: we published things as-is to npm, and like we suspected, people weren't able to get them working that easily. So we started working on the design for a flat package installer, along with everything else we were up to at the time, when Facebook got in touch with us and gave us some really, really, really awesome news. And that awesome news was Yarn, which you might have heard about. They were working on this new package manager, which uses the npm registry, and they asked us what we would need to use it. And so we started working with them, giving them requirements and filing issues. We were a little busy to submit code, but they were very happy to add the features that we needed. And we got Yarn to do the one thing we really, really needed, which is flat installation with proper version conflict resolution. This is huge for us. And not only that: not only can a user say that they need to install things flat, but a package can say that it requires a flat installation. And this is what all our packages are gonna need to do, because they won't work if they're nested. So this finally makes Yarn a viable replacement for Bower, and it really unlocks our npm support plan.
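To see why flat installation needs version-conflict resolution, here's a miniature model of the problem. Custom elements are registered in a global registry, so two copies of the same element can't coexist the way nested npm duplicates can; a flat install must pick one version that satisfies everyone. This is only a toy, not Yarn's actual solver, and all names here are made up:

```javascript
// Toy flat-install resolver: each package lists the versions of a shared
// dependency it can accept; pick ONE version acceptable to all, or report
// a conflict. (Real resolvers work with semver ranges, not version lists.)
function resolveFlat(requirements) {
  // requirements: { packageName: [acceptable versions...] }
  const candidateSets = Object.values(requirements);
  const shared = candidateSets.reduce((acc, versions) =>
    acc.filter(v => versions.includes(v)));
  if (shared.length === 0) {
    return { ok: false, conflict: Object.keys(requirements) };
  }
  // Pick the highest acceptable version (lexicographic is fine for this toy).
  return { ok: true, version: shared.sort().pop() };
}

console.log(resolveFlat({
  'app':        ['2.0.0', '1.9.0'],
  'paper-card': ['2.0.0', '1.8.0'],
})); // → { ok: true, version: '2.0.0' }
```

When no single version works, a flat installer has to surface the conflict to the user instead of silently nesting duplicates, which is exactly the behavior web components packages need.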
Step two there was by far the most difficult thing, and the Yarn team basically did it for us, so now we can move on. This is where we're at now: we can build this release tool, which is going to smooth over some small differences between Bower and npm, and then we're gonna be publishing everything to npm, although you'll have to use Yarn to install it. Yarn is a really, really big deal for us, for npm, for the entire front-end development community. We're super excited. I think some of the Yarn team is here today. We're very excited to continue working with them and make sure that it works great for Polymer. So this is our plan for npm. Stay tuned, we'll have news on this very soon.

All right, next let's talk about bundling and code splitting. Today we're announcing a new version of our Vulcanize bundling library, and we've named it, simply, the Polymer Bundler. Bundling your code is extremely important to reduce load times of your application. Before HTTP/2, serving many small asset files would just cripple your app's loading performance. But even with HTTP/2, bundling can be important to squeeze out every last bit of bandwidth. And sharded bundling, sometimes called code splitting, which is creating more than one bundle for your app, is increasingly important to make sure that bundling doesn't actually work against you by increasing your load times because you include too much in your bundle. But sharding can be a bit complicated. There are lots of different ways to break your code into shards, and different options are suitable for different applications and different environments. The Polymer CLI supports sharding, and it makes it really easy to do bundling with a good general sharding strategy. This is powered by Vulcanize today, but Vulcanize itself doesn't do sharding. So we had to add logic on top of Vulcanize that called Vulcanize over and over again for each bundle. And this was slower than it needed to be, and it wasn't very flexible.
So now we're taking that bundling logic of Vulcanize and the sharding logic of the CLI, and we're combining them into the Polymer Bundler. And along the way, we're teaching the bundler some new tricks so that it can produce much better shards in many different scenarios. To understand what the bundler does and how it works, let's imagine a simple app-shell-style app with a shell and three lazy-loaded routes. The shell always loads first, and then, based on the URL, it dynamically loads the appropriate entry point. And these entry points all have dependencies, some of them shared between entry points. And then usually there are some dependencies, like Polymer, that are shared by just about everything. And so the trick in, say, a PRPL app is to only load what's needed for a given route. This ensures that your route loads as fast as possible. But the challenge when bundling is to not bundle way more than is needed, which would accidentally slow down loading.

So what we've done with the new bundler is make the strategy for bundling and splitting your app pluggable. And we've made the default strategy configurable with a single parameter that controls the number of shards that you get. The way that we calculate bundles is to build up a table that maps files to the unique set of entry points that use them. And this is already one way that you could bundle your app, where each row is a bundle. But this is usually far too many small bundles to be useful; you might end up with a bundle for every possible combination of entry points. So this is where strategies come in. A bundling strategy takes this fine-grained bundle plan and modifies it. One strategy could simply be to merge everything together into one huge bundle, and that's what Vulcanize does. And the strategy the CLI currently uses is to bundle everything that's used by more than one entry point into one shared bundle, and then bundle everything else into a bundle with the one entry point that uses it.
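The table-plus-strategy approach can be sketched in a few lines. This toy is not the polymer-bundler API (all names here are made up); it builds the file-to-entry-points table and then applies a shared-bundle strategy with a threshold parameter, where files used by more entry points than the threshold go into one shared bundle:

```javascript
// Toy bundle planner. deps maps each entry point to the files it uses.
function planBundles(deps, threshold) {
  // 1. Build the table: file -> set of entry points that use it.
  const usedBy = new Map();
  for (const [entry, files] of Object.entries(deps)) {
    for (const f of files) {
      if (!usedBy.has(f)) usedBy.set(f, new Set());
      usedBy.get(f).add(entry);
    }
  }
  // 2. Apply the strategy: files used by more entry points than the
  //    threshold go into one shared bundle; everything else is grouped
  //    by the exact set of entry points that uses it.
  const bundles = new Map();
  for (const [file, entries] of usedBy) {
    const key = entries.size > threshold
      ? 'shared'
      : [...entries].sort().join('+');
    if (!bundles.has(key)) bundles.set(key, []);
    bundles.get(key).push(file);
  }
  return bundles;
}

const deps = {
  'view-a': ['polymer.html', 'card.html', 'a.html'],
  'view-b': ['polymer.html', 'card.html', 'b.html'],
  'view-c': ['polymer.html', 'c.html'],
};
// threshold 1: anything used by two or more entry points is shared.
console.log(planBundles(deps, 1));
```

With threshold 1 this reproduces the CLI's current strategy: polymer.html and card.html land in the shared bundle, and each view gets its own bundle. Raise the threshold to 2 and card.html (used by only two views) moves into a `view-a+view-b` bundle instead of bloating the shared one.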
This is what it looks like in the CLI. You end up with one shared bundle for the shell and the common dependencies, and then you end up with a bundle per route. This is usually a pretty good approach, but in a really big app, it can result in a really big shared bundle, which basically defeats the purpose of sharding. So we made this strategy configurable by allowing you to set a threshold. Only if a file is used by more entry points than that threshold is it put into the single shared bundle; otherwise it's put into a bundle just for its unique set of entry points. So we think this is a way that lets you easily go from one bundle, to one shared bundle, to many bundles, depending on the needs of your application. And then if you have even more custom needs, you can write your own bundling strategy. So we're gonna be rolling out the new bundler very soon in an upcoming release of the Polymer CLI, and we'll keep you posted through the usual channels.

Okay, speaking of the CLI, it's up next on our tour. The CLI is the gateway to our tool suite. It's the first and main tool that most Polymer developers are going to directly experience. We announced the CLI five months ago at Google I/O as part of the Polymer App Toolbox. And our main goal was to help you build progressive web apps and PRPL-pattern apps. We wanted to offer a turnkey, out-of-the-box experience that automatically produced extremely fast loading and rendering apps that took advantage of things like client-side routing, lazy loading, push, and pre-caching, and worked offline. And the CLI helps you build apps like that by default. We also wanted to use the CLI to solve two very big problems we saw with our existing tools: discoverability and ease of use. Because we had built up a set of many, many separate tools over time, it was difficult for our users to find everything they needed. And it was often hard to use these separate tools together.
So the CLI solves this by including everything in one install, and by integrating the tools and making sure that they all work together. So to talk about that, let me bring up my teammate Fred Schott. He's gonna talk about the CLI.

Hey everyone. All right, I'm gonna start by stealing a line from Peter's talk yesterday. Is everyone feeling excited? All right, let's try that again. Is everyone feeling excited? Yeah. Is anyone feeling overwhelmed? Is anyone feeling a little dread? Well, the CLI is gonna help make that all better, because the CLI is our gateway to the tool suite. It makes using all these tools and developing with web components faster and easier. So I'm gonna give a quick overview of its five different commands, and then we're gonna deep dive into one of those to show how the CLI fits with our greater tooling ecosystem.

So, five commands. The first one is init. Init brings custom templates right into your project to help you get started. So let's say you have a great idea for a new element. This thing is gonna be huge. It's gonna get thousands of stars on GitHub. Rob's gonna invite you to do a Polycast. This thing's gonna be awesome. Well, you can get stuck creating your environment, setting up Bower, setting up npm, setting up your tests. Or you can just run init element, and it brings a custom element starter template right into your directory, all automatically. We also have a starter application template. Same thing, really bare-bones, to help you get started. Or, if you'd like something a little more feature-complete, we have Polymer Starter Kit, which is a progressive web app already configured with navigation using app-drawer and a lot of other goodies. So these are all really cool templates that help you get started quick. Four of them come bundled with the CLI in total. But my absolute favorite part of this command is that you're not just limited to those four.
You can actually create your own templates, publish them to npm, and share them with the community. And we're already seeing a ton of community templates pop up for working with internationalization, ES6, Google App Engine. All of these are custom templates built by the community and usable by anyone. We've only really been talking about this for a few months now, and already we're seeing templates pop up. So thanks to those of you who've created them. I can't wait to see what we do in the months and years to come. So that's init.

Serve: the serve command will create and start a dev server for you automatically, so that you can see your code in the browser while you work on it. No configuration necessary. It's built for web components and does a lot of good stuff to help you develop quickly. Lint lints your project and helps you catch errors fast. Test helps you test your code by setting up a dev server to automatically run your tests in the browser, right from the command line. And build helps you build a web app for production. These five commands together represent an entire developer workflow: init helps you get started; serve, lint, and test help you develop your code quickly; and build finally helps you get a web application out to production users. So the analogy I like to use for the CLI is that it's like your Swiss Army knife for web components. It's simple, it's easy to use, and it's always available to you in your project.

Each of these commands is worthy of its own talk, but I'm gonna focus on build, so I can show how the CLI relates to the rest of our tooling ecosystem. So let's talk about it. Build takes your project and processes it to make an optimized, web-ready version of your site, ready to deploy. So what do I mean? Well, oh yeah, this is it making your website. It starts with your project, and the first thing it does is pass it through the new analyzer that Justin mentioned earlier, and this lets us do a ton of cool stuff in the build process.
For example, it can analyze exactly what files and what dependencies you're using and filter out all the ones that you're not. And this results in builds that we saw were about 95 or 96 percent smaller than your actual development directory, by filtering out all the unnecessary stuff in your components dependencies directory. It does a ton of other cool stuff in the build pipeline, which you'll see in a second. The next thing we do is optimize your code: we minify your code, we can run your JavaScript through Babel, and we optimize it for production use in older browsers. Next, we bundle your code: we combine files together, based on the analysis that we did earlier, to reduce the number of requests that your users need to make. And then we generate a service worker for you. Again, because of the analyzer, we know exactly what files you need to pre-cache in the browser to get an offline experience for your users. And that's it. So with the CLI you get this full build pipeline, from start to finish, already configured and ready to use. And so we launched this, and it was great. People were using it, they were using it on their projects, but we started to get a lot of feature requests, and it turns out people like to customize their build process. Who knew?
And so we could have kept going, okay, adding new features, and new options, and new options for new features. But we had to take a step back and remember that we were working on something that was simple and easy to use. We were doing that, but we were kind of leaving our advanced use cases, more complex apps, out in the rain. And if we kept adding feature after feature to try and help them as well, we wouldn't have a simple Swiss Army knife; we'd have something like this. And this is probably the best representation of software that's hard to work with I've ever seen. That was until I realized that that's a secret compartment in the handle, filled with more tools, and now that is the most just amazing photo I've ever seen. I think we've all worked with something like this, right? Some software that was trying to do way too much and just ends up hurting you. So we knew this wasn't what we wanted to build. We knew we wanted something easy to use and simple, and the CLI was never meant to be the one tool to do everything. It was meant to bring together the best parts of our ecosystem in one easy-to-use way. And so we knew that this build logic shouldn't be living in the CLI. It should live in its own library that anyone could use, so that if you wanted to create your own build pipeline, you could do it, and we could give you the tools to help.

So I'm gonna show you a quick example of what this looks like. We only need two things from the Polymer Build library. The first is a PolymerProject, which is gonna power our build stream, and the second is addServiceWorker, which is a helper function. Now let's recreate this entire build pipeline using some JavaScript. The first thing we need is your application, your code, and we're gonna get that with two different streams from the project: sources and dependencies. Now, we split it this way so that you could handle those differently if you'd like. You could maybe process your sources in a different way, minify your dependencies, totally up to you. Flexibility and power are the name of the game here. So we're gonna combine them into one merged stream, and then we're gonna analyze them, and that's gonna power the rest of the build stream. Then we're gonna optimize them, and we're just gonna run it through some minification here, different tools for different code. One thing that's really cool: splitHtml and rejoinHtml pull out your inline code into their own files in the build pipeline, and that lets these tools see them and minify them before combining them back into inline JavaScript and CSS once again. Next we run it through the bundler, which Justin talked about earlier. And finally, we write your build to disk and then create a service worker. The order on those is flipped, but the result is the same. You now have a complete build pipeline written in only a few lines of JavaScript, and now you can do anything. With JavaScript you can hook into one of these places to run something else, you can remove things, you can add new code, add new minifiers. You have complete control over what you wanna do here.

And so that's really what we're talking about when we talk about the CLI and the rest of these powerful tools. We wanna give you one way to bring them all together that's easy to use and doesn't take a lot of work to get working, but we also wanna give everyone the tools to go out and explore and create their own implementations as they need them. So this provides a really nice off-ramp from beginner to intermediate usage, and into the more complex use cases as your project grows and matures. So that's just a quick overview. Both of these tools are available on npm. The CLI helps you work with web components really easily; Polymer Build helps you build if you'd like something with a little more control. And we're really excited to keep developing these. I'm now gonna hand it back to Justin to talk a little more. Thank you.
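For reference, the pipeline Fred walks through looks roughly like this in code. The PolymerProject, sources, dependencies, and addServiceWorker names come from the talk; the stream helpers (mergeStream, gulp.dest), the exact option shapes, and the minifier step are assumptions sketched in from that era of the tooling, not a pinned API:

```javascript
// Sketch of a custom build pipeline with polymer-build (illustrative,
// not a definitive implementation -- check the library's docs).
const gulp = require('gulp');
const mergeStream = require('merge-stream');
const {PolymerProject, addServiceWorker} = require('polymer-build');

const project = new PolymerProject(require('./polymer.json'));

// 1. Your application, as two streams you can treat differently.
const sources = project.sources();
const dependencies = project.dependencies();

// 2. Merge them; the project's analysis powers the rest of the stream.
const buildStream = mergeStream(sources, dependencies)
  // 3. Optimize: splitHtml pulls inline <script>/<style> out so JS/CSS
  //    minifiers can see them; rejoinHtml puts them back inline.
  .pipe(project.splitHtml())
  /* .pipe(yourFavoriteMinifier()) -- different tools for different code */
  .pipe(project.rejoinHtml())
  // 4. Bundle based on the analysis done earlier.
  .pipe(project.bundler)
  // 5. Write the build to disk.
  .pipe(gulp.dest('build/'));

// 6. Finally, generate a service worker that pre-caches the build.
buildStream.on('end', () => {
  addServiceWorker({project, buildRoot: 'build/', bundled: true});
});
```

The point isn't the exact plugin names; it's that every stage is an ordinary stream you can hook into, replace, or remove.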
All right. I hope most of you are able to use the CLI; it's a very convenient tool. Give us some feedback, because we're going to try to do a final release at some point coming up here in the future. The CLI is just one way that we expose our core tools to users, but your terminal isn't the only place where you work. You certainly spend much more time in your text editor than you do at the terminal, unless you're one of those vi or Emacs users. And there are some really important problems to solve while you're editing code. First, runtime crashes are a really, really bad way to find errors; it's much better to see errors as you make them, if you can. Next, APIs and documentation are hard to remember, and jumping between your editor and the documentation just slows you down. And large code bases are hard to navigate; text search isn't really advanced enough for programming, and you want something better than that. Luckily, these are all solvable problems, and they've been solved in a lot of other programming environments. The declarative nature of HTML and web components gives us the opportunity to deliver a world-class editing experience for web apps, so it only makes sense to bring our tools right to where you work: your editor. Oh, I forgot to advance through my slides. Here are some bullet points. So I'm happy to announce today that we're releasing a set of text editor plugins for Polymer. And I'm not going to tell you about it; I'm going to bring on our teammate Peter, who's going to talk more about it. Welcome, Peter.

Thanks a lot, Justin. I'm really excited to be up here to finally unveil the work we've been doing on editor plugins. Providing a fantastic developer experience has always been part of the plan for the Polymer team, but it's only very recently that our tooling infrastructure has caught up to our ambitions. So I'll start by talking about linting. Last year we released polylint, our first pass at linting.
It's very powerful; it gives you a lot of great warnings and great information about problems that come up as you're writing web components. But we learned something else from using it and from getting feedback from people: it's too slow. For linting to be a really productive part of your developer experience, it has to be instantaneous. Anything less, and you start to lose faith in it. You start to lose trust in it, you fall out of your flow, and you start questioning: okay, what's the linter doing? Is it helping me here? Worst of all, it can sometimes make your text editor stutter. All of that is totally unacceptable. So we've been working with the new analyzer, which does incredibly fast incremental updates as it analyzes your code, and that lets us do much faster linting. The analyzer also gives us very precise information about what's going on in your source code, so we can draw very precise underlines beneath problems. So let's take a quick look. Here is a text editor called Visual Studio Code. It's open source, released by Microsoft, and it's very fast, smooth, and extensible. Here it's running our new editor plugin. You can see we've got a red squiggle underneath that HTML import in this index.html file, and it's underlining right where the problem is; I can hover to get an error message. Also notice how fast it is: as I type each single character, the moment the import is correct, the squiggle goes away. It also understands inline JavaScript, so it flags a syntax error the moment one appears: "let" is fine, "foo" is fine, "equals" is a syntax error; finish the string, and everything's fine again. Okay, so that's linting. Linting is nice. Linting is good. What else can we do? What about intelligent, contextual, as-you-type auto-completion? Here we've auto-completed the paper-spinner element. I can hit Ctrl+Space and I get all of the docs, extracted right out of the source code. I can scroll through them, push Enter, and auto-complete that element.
Paper-spinner is a really awesome element, but it's got a very simple API. So let's take a look at another element that also looks very simple, but has a surprisingly rich and detailed API that I think is underappreciated: paper-button. Okay, let's take a moment and go through what happened there. With every single edit to this file, every single key press, we're reanalyzing this file. The editor plugin notices there's a new import, so it tracks down that file, parses it, analyzes it, scans it, and extracts the metadata for those elements, including documentation from JavaScript and HTML comments. It puts all of that into a cache, and that cache is available as you're typing, to power these auto-completions. So once I added the paper-button import, I now get paper-button and paper-ripple as custom element auto-completions too. Okay, so I'll select paper-button and move my cursor over here. I push Ctrl+Space and I get auto-completion of the properties and attributes on that element. We're able to extract a ton of structured information about these attributes: I know the type, I know the docs for each of them, and I also know where they were defined. If an attribute was defined in another file, or as part of a behavior, I can see that too. So, for example, toggles comes in as part of Polymer.IronButtonState. How many people here knew that you could use paper-button as a toggle element, like for a checkbox or a radio button? I had no idea. Yeah, I see about 10 hands, maybe. It's a really powerful element, and I think it's underappreciated because it's kind of hard to track down all the documentation, and it takes you out of your flow to do it. Okay, but actually what I wanted here is just a simple paper-button with a little shadow, so I'll select raised. I'll keep typing because I want to customize it a little more, and it narrows down the auto-completion, as you'd expect.
I'll select elevation. To read the docs, I can mouse over it and I get a pop-up with the documentation for that attribute. Okay, so it's a number from zero to five, and bigger means a deeper shadow. That's nice. So I'll select one; I might tweak it later. But what if I wanted to go a step further? Maybe I want to understand how that shadow effect is implemented, how that raised material design aesthetic comes in. I can move my cursor over the raised attribute, push a single button, F12, and jump straight to the definition of that attribute. Notice that this is a totally different file: paper-button.html. And not only that, it's not even part of my project. My project is index.html; this is in my components directory. That's not a problem for the analyzer, and it's not a problem for the editor service. But I didn't see elevation there, so let's jump to that definition. Oh, right, this is in yet another file, because it's defined in a behavior. So now I'm in paper-button-behavior.html, which is actually in a totally different package, paper-behaviors. Elevation comes in from a behavior mixed into the paper-button element, and it's made available right there in your editor. Now, you might have noticed that I've been saying custom elements and web components, not just Polymer elements. We mean that. As Justin was saying earlier, we built the analyzer as a pluggable, extensible system that can recognize multiple different ways of writing and declaring custom elements. So let's take a look at another example. This is a 100% vanilla custom element declaration. The beauty of this is that you can copy and paste this code directly into Chrome Canary or Safari Technology Preview and it just works, with zero dependencies. This is not a Polymer element. This is an element of nothing but the web platform.
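A vanilla element like the one in this demo can be written in a handful of lines. This is a sketch for illustration, not the demo's actual vanilla.js: the tag name, attribute, and doc comment here are made up. Paste it into a browser with native custom elements support and it works with no library at all.

```javascript
/**
 * A plain web-platform custom element. The JSDoc comment is the kind of
 * documentation the analyzer can extract for hovers and auto-completion.
 */
class HelloWorld extends HTMLElement {
  // Attributes listed here trigger attributeChangedCallback when they change.
  static get observedAttributes() {
    return ['name'];
  }
  connectedCallback() {
    this.render();
  }
  attributeChangedCallback() {
    this.render();
  }
  render() {
    // Greet whoever the `name` attribute says, or the world by default.
    this.textContent = `Hello, ${this.getAttribute('name') || 'world'}!`;
  }
}

// Register the tag so <hello-world> works anywhere in the page.
customElements.define('hello-world', HelloWorld);
```

With that registered, `<hello-world name="Polymer"></hello-world>` renders its greeting with no framework involved.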
I can jump into index.html and import vanilla.js, and immediately I get auto-completions for this vanilla custom element. We've extracted its documentation, so I can auto-complete it and move over to its attributes. I get auto-completions of its attributes as well, with documentation, and with the type if it's annotated. I can mouse over for the pop-up, select it, and jump to its definition. That's right: every single feature that I demonstrated today for Polymer elements works exactly as well for vanilla custom elements. And we don't want to stop there. If you're in the audience and you use another custom elements framework, or you're the author of one, and I know there are a couple of you out there, we really want to work with you. Come join us on GitHub or in the Slack channel; we have a tools channel, and I'll be opening up the editor channel just after this talk. Now, we had one more dilemma while we were implementing this. As I started to think about editors, I went around the Polymer team and asked people what editors they use, and I discovered a kind of shocking fact: there are maybe 20 people on the team, and there are at least six editors in active daily use. It's kind of crazy, right? But it's also really good. It means that the editor ecosystem is really competitive, and there are lots of great editors out there with lots of features, all vying for your attention. But what do we do as plugin authors? Well, we're taking inspiration from a number of other projects, including Rust, Go, and TypeScript, by implementing a standalone editor service. This is a simple Node.js binary that, well, it's a complex Node.js binary that exposes a very simple JSON API. And this JSON API is expressed exactly in the terms that every text editor understands: file names, line numbers, column numbers.
So your editor can say: hey, my user is on line 15, column 3 of index.html, and they just pushed my jump-to-definition button. What do I do? The text editor doesn't know about HTML, doesn't know about JavaScript, doesn't know about any of this stuff. It just asks: what do I do? Where's the definition? And the editor service says: vanilla.js, line 2, column 13. That's exactly the information your text editor needs to pop open a new tab and put your cursor exactly where that definition is. And we've proven this out. We've implemented plugins for Visual Studio Code, as you've seen, for Sublime Text, and of course for Atom. All three of these plugins support instantaneous, as-you-type linting and contextual, intelligent auto-completion of custom element tags and attributes. And we're just getting started. We have an alpha release out today: the Polymer editor service. It contains full documentation of everything you need to know about the protocol and how to add it to your text environment of choice. And we have one-step installs for Atom and VS Code today: apm install polymer-ide for Atom, and the equivalent ext install command in VS Code. So to recap, the editor plugins make it way easier to edit and maintain your custom elements and web components applications, with as-you-type linting, as-you-type documentation and auto-completion, and jump-to-definition to make it really easy to navigate a large code base. With that, I'll hand it back to Justin. Thank you very much.

Yay. That was really, really awesome. I'm super excited for all of you to use the editor service and the plugins. We've been testing them on the team, and it's the kind of improvement in development workflow that makes you not just more productive but happier, because it removes so much frustration from day-to-day development. I personally want feedback as soon as I can get it, and I want to see my mistakes as soon as I make them.
And that's exactly what this delivers. So we're really, really excited about this, and we hope you are too. That concludes our tour of the tools. We have a lot of changes coming up that we hope will make you happier and more productive Polymer users. And with projects like the extensible analyzer and linter, we hope to serve the entire web components ecosystem. This is something that's really important to us: we think interoperability is a huge strength of web components, and we hope to push it forward as much as we can. You might be wondering when you can use all of this. As you saw, some of it is available today. The CLI is in beta now, the IDE plugins are in preview now, and the new linter and bundler will be in preview soon, available through the CLI. After that, we're going to be working on npm and Yarn support very, very soon. All right, that does it for us. Thank you so much for joining us. Have a great day. Thank you.