Hello. How's it going? I'm Trey. I work on the front end at Atlassian by day, but by night I'm a hopeless romantic for web components, and it's almost shameful. So you may or may not have heard of a library that I created called SkateJS. It's like Polymer, but it approaches things from a different angle. I'm not here to talk about that, though I do make mention of it in the slides, so it's worth mentioning. I'm here to talk to you about some techniques that we're using to server-side render web components and why we need to do so.

But before we dive into that, I really want to briefly cover what pushed me to actually try and do this. I tend to straddle the line between the web component community and other communities, and it puts me in a somewhat unique position to empathize with viewpoints from those other communities, as long as I'm able to keep my bias in check, of course. Of those criticisms, there are really three main ones that keep popping up.

The first one is that you can't represent complex data structures with attributes. That's always been an issue with HTML, so it's not really specific to web components. Polymer and Skate help you get around this because they set properties and manage the property-to-attribute reflection for you. So it's not really a must-have.

The second one is that there's no opinionated templating model. Polymer and Skate kind of do this for you too, and there are also libraries like lit-html and Preact that you can weave into your custom elements. So it's not really a must-have either.

And lastly, the one that I hear the most is that you can't server-side render shadow DOM. This has proven to be something that we do need in order to solve a couple of problems. But why can't we do this out of the box? Rendering shadow DOM on the server is not possible because there's no way to attach a shadow root and represent its content without executing imperative JavaScript on the client.
While you can declaratively render your element without a shadow root, as light DOM, you must then imperatively create the shadow root, attach it, and put content inside it. There's no way to express this within the bounds of HTML.

So there are two main reasons compelling us to do server-side rendering. The first one is SEO and bots. The other one is to help with user experience, and this can be kind of contentious; I'll explain why shortly. Server-side rendering isn't a solution that's looking for a problem. There's prior art here: many JavaScript communities, such as Ember, React, Vue, and Svelte, are all turning to server-side rendering to help solve these problems.

Search engines and other bots need to be able to read, scrape, and index content and then do something with it. These bots may or may not execute JavaScript, and the ones that do will do so to different extents. If we wanted to do something like render an app shell with a single custom element, not all search engines and bots are going to be able to read the content of that. If they execute JavaScript, they might; if they don't, they're definitely not going to see it.

Some have suggested not putting content that you want crawled in the shadow DOM. That works, but I don't think it's really fair, because it precludes using custom elements as templating hooks. For example, the about page here: it might be useful to embed or share it in another part of the app or the site. Twitter is a great example of an app that would probably use an app shell, and possibly a router, to then render a custom element for the page. And it needs crawlability.

Googlebot is probably the biggest and most notorious search engine out there, and it was recently announced that it's based on Chrome 41. Bing comes in second with a fairly sizable market share, but I couldn't find any documentation about what it supports.
It turns out they both execute JavaScript, but they don't execute modern ES2015. This means that you have to use the polyfills and write your custom elements in ES5, or you have to use a transpiler. During my testing with Bingbot, I didn't get consistent results, because depending on which webmaster tool I was using, they all behaved a little bit differently. And what about other bots: social shares, simple crawlers, and language-specific libraries? In my limited testing of social media sharing, the server-side rendered pages behaved better.

So the moral of this story is that there's a vast world of bots designed to massage content. Some of them don't execute JavaScript, and of the ones that do, your mileage is definitely going to vary. Testing SEO has proven to be tedious and time-consuming, and the results sound a little hand-wavy, which decreases my confidence that what I've tested is actually going to work. Something I'm not really sure of is whether relying on JavaScript is even a good thing, because on top of targeting a browser matrix, you now have to target different bots with different capabilities. As a developer, I really just want everything to work without having to jump through hoops.

The second major reason to use SSR is for user experience. Many argue that server-side rendering can hinder user experience, and this can be true: you have a larger HTML payload, and the page looks interactive but might not be. This is called the uncanny valley. Anchors and other built-ins will probably be interactive, of course, but what about complex UI components that require JavaScript to boot up? Others argue that it can improve user experience, because you get a faster time to first paint, and users can start consuming that content and planning their initial action while the JavaScript boots up. They're both right. The key here is context.
Does your target audience mainly consume or interact with your content? Do you use a lot of built-in elements, or do you have a lot of custom components that require JavaScript? How long does it take for your JavaScript to download, parse, and execute? The moral of this story is very similar to the last one, in that your mileage is going to vary. Server-side rendering is a tool that can help you improve your user experience. Use it if you need it, but you need to measure and you need to care, and that's something we should be doing in our jobs every day regardless. Using server-side rendering to solve your problems depends a lot on context. If you're only targeting Googlebot or Bing, you might be OK. That said, you have to know the limitations of what you're targeting. The same goes for UX: you have to know your audience. Measure your app's performance, and if you think server-side rendering will help, try it and measure again. If not, you don't have to use it.

So now that we've defined the problem space, let's look at how we can solve it. At this point, I'd like to reiterate that our goal here is declarative shadow DOM. I wish I had a dollar for every time I used the word declarative, because I'd be rich. But it's very important. It underpins this talk, because being declarative is a core tenet of HTML, and therefore it becomes a fundamental principle for being able to express shadow DOM.

I'd like to start off by quickly defining some terms. The first one is web components: here that just means custom elements and shadow DOM, not HTML imports or templates or anything else. The composed DOM is the state of the DOM when the light DOM is distributed through the slots in the shadow root. Serialization is when you transform a DOM tree into the composed HTML. Rehydration is reverse engineering that string: taking the composed tree and turning it back into light and shadow DOM. At a high level, we planned this as a three-step process.
First, we wanted to be able to run web components in Node, because we really wanted the simplicity of universal JavaScript and being able to co-locate client and server code. Next, we wanted to take a DOM tree containing both shadow and light DOM, serialize it down, and transform it into a string. And finally, we needed to run JavaScript on the client to rehydrate that.

There are a couple of different approaches we could have taken here. The first approach is running in something like Headless Chrome or Electron. I haven't tested Headless Chrome, but my first foray into all of this was trying to do it with Electron. The DOM API support is top-notch, but there's a bit of friction because you can't co-locate your client and server code; everything has to be proxied through a separate tool, and because of that, performance is questionable.

The other method is running your code in something like JSDOM, Domino, or Undom. Depending on the implementation, this can actually be very fast. The API ends up being a lot simpler because there's no context shift between tools, and it opens up a lot of doors within Node, which I'll talk about a bit more in a little bit. Unfortunately, none of them support web components yet. There is work happening to fix that: JSDOM has an ongoing PR for custom elements and shadow DOM, and Domino has a PR open for just custom elements.

We decided to use Undom. It's written by Jason Miller, the author of Preact, and it's a subset of the DOM APIs. We chose it so that we could build our own subset of custom elements and shadow DOM on top of it. We really wanted the ergonomics of co-locating client and server code. And it's only 1K, so the upfront overhead is pretty minimal. It's also fast, because it's basically just array manipulation hidden behind a DOM-like API, and it's easy to extend because you don't really have to worry about implementing the standards in full.
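To make that "array manipulation hidden behind a DOM-like API" idea concrete, here's a toy sketch of my own, not Undom's actual code: a tree of plain objects and arrays that exposes just enough of the DOM surface to build and serialize an element.

```javascript
// Toy sketch of a DOM-like API backed by plain arrays and objects.
// This illustrates the idea only; it is not Undom's real implementation.
class Element {
  constructor(nodeName) {
    this.nodeName = nodeName;
    this.childNodes = []; // the "tree" is just nested arrays
    this.attributes = {};
  }
  appendChild(node) {
    this.childNodes.push(node);
    return node;
  }
  setAttribute(name, value) {
    this.attributes[name] = value;
  }
  get outerHTML() {
    const attrs = Object.entries(this.attributes)
      .map(([k, v]) => ` ${k}="${v}"`)
      .join('');
    const children = this.childNodes
      .map(c => (typeof c === 'string' ? c : c.outerHTML))
      .join('');
    return `<${this.nodeName}${attrs}>${children}</${this.nodeName}>`;
  }
}

const el = new Element('my-hello');
el.setAttribute('id', 'hi');
el.appendChild('Hello, World!');
console.log(el.outerHTML); // <my-hello id="hi">Hello, World!</my-hello>
```

Because there's no layout, styling, or event plumbing behind it, operations like appendChild are literally just array pushes, which is where the speed comes from.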
Being a minimal implementation allowed us to move really quickly. We needed that, because there was a lot more work to do in order to prove out this idea. The first pass at all this work is a new library under the SkateJS org, excitingly titled SSR.

The only thing you have to do to start writing web components in Node is to make sure the DOM APIs exist in your execution context. To do this, you simply require the module that registers the APIs. Once this has been done, it's as if the server and the client have joined forces: you can write your code as if you were running on the client. Here we've defined a simple hello world component that we're going to be using in the next example.

So now that we can run our DOM code in Node, the next step is taking a DOM tree and turning it into a string. Our requirements for the string are pretty stringent. First, it must represent the composed tree, because it must be legible to bots without executing JavaScript, and the content must appear in order and retain its semantic meaning. Second, it must contain all the information required to transform it back into the state it was in before it was serialized, the state that was on the server. And third, it should be reminiscent of what a standards-based approach might look like, because we do want to take this to the W3C as a future proposal.

Assuming we've already defined the hello component from the previous example, we don't need to do much to serialize it. The library's default export is a render function, so we load that up to serialize the hello element. Then we create an instance of the hello element, and because we want to project some content into the light DOM, we set the textContent property. When we call render on our element, it returns a promise that resolves with the serialized result. Using a promise helps account for components that might render asynchronously. For example, Skate does this to batch property updates into a single render.
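Putting those steps together, the end-to-end flow looks roughly like this sketch. The module paths follow the library's README at the time; treat the exact names, and the x-hello element, as assumptions rather than a stable API:

```javascript
// Register the DOM globals (document, window, HTMLElement, ...) in Node.
require('@skatejs/ssr/register');

// With the globals in place, client-style code runs on the server.
class Hello extends HTMLElement {
  connectedCallback() {
    const root = this.attachShadow({ mode: 'open' });
    root.innerHTML = 'Hello, <slot></slot>!';
  }
}
customElements.define('x-hello', Hello);

// The library's default export is the render function.
const render = require('@skatejs/ssr');

const hello = new Hello();
hello.textContent = 'World'; // projected into the slot as light DOM

// render() returns a promise, so components that render
// asynchronously (e.g. batched onto a microtask) are awaited.
render(hello).then(html => {
  console.log(html);
});
```

The promise-based contract is the important part: the serializer can't assume rendering is synchronous, because libraries like Skate deliberately defer it.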
That render then happens on the next microtask. You can also pass a custom resolver, for example if you have a fetch request somewhere in your code that you need to account for. The previous example would output something like this: the composed tree with a few extra bits.

So now that we've serialized our component, let's look at the rehydration process, starting from that previous result. Without built-in declarative primitives, we have to execute JavaScript in order to rehydrate this. So we dedupe the rehydration code into a single function; that way, we can call it whenever we need to rehydrate a shadow root. It's called rehydrate here, and the library is going to make sure it doesn't have any collisions in the global namespace; this is just for the example.

Upon rehydration, the script tag that invoked rehydrate gets removed. The placeholder shadow root element is also removed, and the real shadow root is attached to the host in its place. The content from the placeholder shadow root is then transferred to the real one. Now we have to take all the top-level slots, take their assigned nodes, or direct children, and move them back to the light DOM on the host. We have to be careful, though, not to move any default-slotted content. The best way we've come up with so far is flagging the slot with an attribute called default, but for the sake of simplicity I'm not going to do that here.

This is what the full rehydration ends up looking like, similar to what you'd see in DevTools. It's also a really great example of why we need to serialize out the composed tree. Looking at the order in which the text appears here: you'd have "world" appearing before "hello", and even if "world" came after "hello", it would still appear after that exclamation point.
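To make the serialized shape concrete, the hello example's output would look something like the sketch below. The placeholder element name and the rehydrate call follow the example described above; they are illustrative, not a stable format:

```html
<x-hello>
  <shadow-root>
    Hello, <slot>World</slot>!
    <script>rehydrate()</script>
  </shadow-root>
</x-hello>
```

Note that "World" sits inside the slot, in composed order, so a bot reading the raw HTML sees "Hello, World!" exactly as a user would.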
And if you had anything placed around the slot, for example to place emphasis on the text inside the slot, you'd lose that semantic meaning.

Using a shadow root custom element does have its drawbacks, though; it's not perfect. It's really good because we could deliver a custom element in the future instead of using script tags. However, we didn't do this initially because delivering a custom element opens up questions around requiring polyfills and shims, and we wanted the usage of this to be as simple as possible. A possible alternative to a custom element would be using something like a composed attribute or a shadowroot attribute on the host. Doing this means there are fewer DOM elements to thrash and fewer mutations happening, and it seems to match the imperative API a little more closely. Either way, declarative shadow roots are currently only designed to work for the initial parsing of an HTML page. Using this with other declarative abstractions like React and JSX hasn't really been prototyped yet, but we hope to be doing that soon.

Between the time that I wrote this talk and now, we actually implemented experimental encapsulation for CSS class names. This means that you potentially don't have to execute any JavaScript at all to server-side render, service bots, and actually have scoped content on the page.

I got about halfway through getting all this stuff prototyped and it kind of just hit me: do you realize what doors this opens up? All of a sudden, I got pretty excited. Behind door number one is the ability to run your tests in Jest, the testing framework by Facebook. Jest runs in Node and normally uses JSDOM as its default environment, so we wrote a custom environment that uses our DOM implementation plus the extensions we put on top of it. All you have to do to configure Jest is specify the environment in the package.json, and then you can start writing tests as if you were in the browser.
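Assuming the custom environment ships with the library (the exact module path here is an assumption), the Jest wiring is just a small package.json fragment:

```json
{
  "jest": {
    "testEnvironment": "@skatejs/ssr/jest"
  }
}
```

Everything else about the test files stays browser-style; only the environment changes.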
Skate's entire test suite is now run through Jest using this. Similarly to Jest, you can use Mocha to run your tests directly in Node. It looks a bit different, though, because Mocha doesn't have a concept of environments; you're just running it directly in Node. So instead of configuring an environment, you do as you normally would and just require the APIs at the top, and then you can write your tests as normal.

If you find yourself rendering your content on the server statically to a single file with these APIs, you can actually quite easily generate an entire site statically. So we built a little CLI tool that takes a glob of JavaScript files that have custom element constructors as their default exports, and it renders each one to a static HTML file. It also accepts a props argument, so you can do pseudo-dynamic renders with it. This could be useful for anything that doesn't have much dynamic content or isn't very interactive, like documentation sites.

However, if you want to pre-render your pages and deliver your JavaScript separately to upgrade some or even all of those rendered elements, you can do that too. Dynamic SSR is a lot like static SSR; you're just running it in a Node server like hapi or Express. You can make your components render dynamically via props by assigning the request parameters to the component. Doing this allows your component to dynamically re-render according to those props, and then the output is serialized and sent to the client.

Writing libraries can be fun, and it can be really useful to the community. It can also be kind of glamorous. However, the definition of success here is whether this library can be made mostly, if not completely, redundant. We've built all this stuff on top of Undom, and Undom supports plugins, so it would be pretty logical for us to extract this stuff out into a plugin in a separate repository.
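The dynamic SSR flow described above can be sketched with Express as follows. This is a minimal sketch under stated assumptions: the x-hello element and the @skatejs/ssr imports are carried over from the earlier examples, and the route and port are made up for illustration:

```javascript
// Register DOM globals first so components can run server-side.
require('@skatejs/ssr/register');

const express = require('express');
const render = require('@skatejs/ssr');

// Assumes an x-hello custom element has already been defined,
// as in the earlier hello world example.
const app = express();

app.get('/hello/:name', async (req, res) => {
  const hello = document.createElement('x-hello');
  // Assign request parameters to the component so it
  // re-renders according to this particular request.
  hello.textContent = req.params.name;
  // Serialize the composed tree and send it to the client.
  res.send(await render(hello));
});

app.listen(3000);
```

The pre-rendered page is then upgraded on the client by the separately delivered JavaScript bundle, exactly as in the rehydration step earlier.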
Once JSDOM has support, developers can choose whichever one meets their needs. Since serialization and rehydration are pretty tightly coupled, it would be really great to get a standardized way to represent this composed tree as a string: maybe something like element.composedHTML, I don't know. That way, we can be confident that what we send to the client is going to be properly rehydrated. And finally, to the main point of this talk: we absolutely need a standardized, declarative way to represent shadow DOM, so that we can service bots, present scoped content to users quickly without requiring JavaScript, and use the power of shadow DOM declaratively in other libraries and frameworks.

Before I go, I just want to say thanks to Bede Overund, who's somewhere in here and spoke yesterday, for all of his help. Thank you.