Thank you very much. This talk is in English, but I wanted to scare the people who speak English. I work on Ember. If you don't know Ember, it's a Spanish verb that means to build really cool apps. For example, I embo a social media app. We emvemos an application for sharing photos. Sorry for the bad joke. Hey, everyone, I'm Tom. The talk's not in Spanish. I just wanted to freak all the English speakers out. So when I was putting together this talk proposal, one of the paragraphs said this: along the way, he will share decisions and practical considerations for tackling this massive change in a backwards compatible way, an undertaking one industry thought leader has described as, in some sense, like trying to change the engine on a 747 mid-flight. Now, the thing about this is no one actually questioned me or fact-checked this at all. I was just quoting myself as an industry thought leader in my own talk abstract. So if you ever submit an abstract for a talk, I would really recommend you just quote yourself and describe yourself as an industry thought leader. So I'm here to talk to you today about Glimmer. And if you're not familiar with Glimmer, Glimmer is the brand new next generation rendering engine for Ember. But before I really get into the details of what Glimmer is, I just want to talk a little bit about what Ember is and what I think makes it different from the other tools that are available. And the thing I'll say here is that, in general, web application development moves very fast. Many of us are used to picking a framework because it's really hot. It's got all the buzz on Hacker News, or Reddit, or Friendster, or whatever the popular social media website for programmers is. And you go off, and you build your application. And it's great, it starts off really well, and then you run into a few problems.
And the next thing you know, you tell your friend, oh, yeah, we built our app using Hot Framework X. And they're like, oh, you're using Hot Framework X? That is so 2012. Actually, Hot Framework Y is the thing that you should be using right now. And that's really frustrating, right? Because now you feel like, well, now I have to figure out how to migrate away from this technology that's not hot anymore to the new way of doing things. And I think a lot of programmers that I talk to in the front-end space deal with something that I like to call hype fatigue. There's this sense that, oh my god, everything is changing all the time. Things that are a good idea one year are a bad idea the next year. And my boss is getting really frustrated with me because instead of building features, I'm constantly telling him why we need to rewrite our app in the next framework. And my friend Godfrey Chan wrote a really great blog post about hype fatigue that I'd like to quote from here. He says, when a new framework comes around, it probably solves an existing problem in a new and interesting way, enabling you to take on some old challenges much more easily, or even enables you to solve some new problems that you didn't think you could solve before. From there, you begin to extrapolate your experience. Six months later, you begin to realize that you are running into other problems. Perhaps you are spending a lot of time reinventing the tools you had from your previous life in the other framework, or perhaps sometimes hard problems are just hard, regardless of your tools of choice. Whatever the reasons, this new tool did not magically transform you into the 10xer that you had hoped to become. And so for me, the idea of Ember is basically this idea that when you start a new Ember application, you are opting in to not just a framework, but an entire community that cares about the entire stack of how to build front-end applications.
So out of the box, you run our npm package called Ember CLI, you initialize a new application, and you get all this stuff out of the box. You don't have to spend a day or a week or two weeks configuring it and making 1,000 different decisions to customize this bespoke framework just for you. We say, hey, there's a common set of good ways of doing things, a common set of best practices, and by default we're just gonna opt you into those. Then if you wanna change them, you can, but you can be productive right away. And more importantly, not only do you get all these features, you get a commitment from the core team and the community that we are going to continue maintaining these. As best practices in each of these areas evolve, we're gonna keep upgrading and maintaining them. And most important, we're not gonna let the framework stagnate. We want Ember to be as competitive as possible with whatever great new ideas people come out with. And so we call this principle stability without stagnation, which is that we are always open to adopting great new ideas, but before we do it, we're gonna commit to a migration path. We're not gonna move to something unless the community moves with us. So we're aggressive about adopting the ideas, but not at the expense of breaking backwards compatibility. So Glimmer was our effort to basically completely overhaul and rewrite from the ground up the rendering engine in Ember. And that's obviously a big undertaking and you need a good reason to do it. So for us, I would say that the initiative to do Glimmer stems from three different issues. The first is just raw rendering performance. When we started building Ember, we supported IE7 and IE8, and older pre-2.0 Ember would even run in IE6, if you can believe that. Things that made apps fast in IE could make them slower in newer browsers.
So the ecosystem had moved on and it was time to rethink how we can get better performance in modern browsers like Chrome, Safari, etc. The other thing is we wanted to adopt an idea we call data down, actions up, which is essentially a more explicit way of modeling the data flow throughout your application. We're gonna dive into that a little bit more. And then lastly, and definitely for me the biggest incentive for redoing the rendering engine, is something that we call FastBoot. Now FastBoot, you can think of it as server-side rendering for JavaScript applications, but it's actually a lot more than that. It's a lot cooler than just server-side rendering. And again, I'm gonna get into that more in the talk. So talking about rendering performance first, I don't know if you saw, but Ryan Florence gave a talk at ReactConf where he showed an application that basically really embarrassed Ember. Ryan was not trolling us: he had tried to build this app in Ember and it was really slow, and then he built it in React and it was really fast. This app is called DBmon, or you might have heard it called DBmonster. So this is Ember pre-Glimmer on the left, React in the middle, and then Glimmer on the right. And what's really interesting is that those two Ember apps on either side, Ember on the left, Glimmer on the right, are the exact same application, and all that changed is that it's running this new rendering engine. So we could bring really quite amazing, looking at this video, really quite amazing performance improvements to apps that were even a few years old, which I think is really incredible. So like I said, rewriting the entire rendering engine for a framework like Ember, which is quite an ambitious framework, is a little bit like trying to change the engine on a 747 mid-flight.
It's a huge undertaking, especially because we're planning on shipping this to all of these users who have production apps out there being used by real users, and we really cannot just break those apps and have them get stuck on an older version. So how do you do this? How do you do a wholesale rewrite of a rendering engine? How do you deliver performance improvements that dramatic even to older apps? Ember's been around since 2011, so how do we deliver performance improvements to apps that could be like four years old at this point? And I just wanna take a brief aside to talk about something called the rule of least power, or we could also call this section "in defense of templates", because I think right now it's in vogue to hate on templates, and things like JSX are very popular right now. It's like, oh, just use JavaScript because now you don't have to learn two languages. And I think that's fine, but the rule of least power is something that guides my thinking a lot, and it's actually based on a document published by the W3C and written by Sir Tim Berners-Lee. You can go online, you can read it. I put the URL here if you'd like to check it out. And they basically adopted this paper as a position policy, and it says: there is an important trade-off between the computational power of a language and the ability to determine what a program in that language is doing. Expressing constraints, relationships and processing instructions in less powerful languages increases the flexibility with which information can be reused. The less powerful the language, the more you can do with the data stored in that language. And so if you read this document, they say this is a good practice. This is from the W3C TAG, the Technical Architecture Group, and they set basically the broad direction and vision of the web and the W3C.
And they say in general, when you're designing something for the web, use the least powerful language suitable for expressing information, constraints or programs on the World Wide Web. And that's why we use handlebars. We really like handlebars because its syntax and semantics are constrained to, in our opinion, the minimum set of tools that you need to express what a dynamic UI written in HTML looks like. So you have expressions and you have block expressions. You can write your own helpers, you can have primitives, you can have string values, you can have booleans, you can have numbers. But really we try to limit this as much as possible. And this simplicity allows developers to express their intent very declaratively. They're not telling us how to render the UI. They're not writing JavaScript to render the UI. They're saying, this is what I want the UI to look like. And that allows us to build a totally different rendering engine, using brand new optimizations and brand new techniques that didn't even exist when we designed this initially, so we can better take advantage of new browser technologies. And contrast that with the alternative: if you want to write your templating in JavaScript, that's fine, but now, as a maintainer, you have a 600 page document you have to read in order to understand and analyze what a program is doing. This is just a screenshot of the ES6 specification. So I want to defend this idea that templates are very powerful from the perspective of a framework author, because what they let us do is continually improve the performance of your application, without you having to change a line of code, through that declarative markup. So fundamentally, what is Glimmer? You know, I called it a rendering engine before. But I think maybe the best way of thinking about Glimmer is that it's actually a very low-level engine for dataflow.
And so I want to talk a little bit about how different frameworks model things like change detection and dataflow and bindings and so on, and then show you how you can essentially represent all of those different kinds of things using this low-level Glimmer engine. So at a high level, there are two types of dataflow in most front-end applications. The first one is two-way binding, popularized by things like Knockout, Angular, SproutCore, Ember, and so on. And then there's this newer paradigm, popularized by Flux and React, which is this notion of data down, actions up. Now I was trying to think exactly what terms to use on the slide, and I think what some people would suggest for the bottom one, and one that people at Facebook use a lot when describing React, is unidirectional dataflow. But I think that's a little bit misleading, because unless your app is very boring, of course it's not unidirectional dataflow. You have data going down to the components, and then they make changes that they send back up to the top. So unidirectional dataflow is technically correct, but I feel it's a little bit misleading. I feel a better term to describe the fact that there's this two-way communication is data down, actions up. So let's look at how two-way binding works. In two-way binding, the model is the source of truth that all of these components plug into. So if on the left you have a form that the user is filling out, like they've got this text field and they start typing in their name, on every keystroke that data is going to be synchronized up to the model, and then that other component on the right-hand side is going to see, oh hey, this property changed, and it's immediately going to synchronize. Now that's nice in a lot of situations, but in a lot of situations it's kind of annoying as well.
Like what if you want to have an editing dialogue where the user can click cancel or save, and only once they click save does it show up on the other side? Two-way bindings create this problem of liveness, and we'll talk a little bit more about this in a second. The approach that things like React and Flux take is a little bit different, which is that in order for data to move throughout the component hierarchy, what you do instead is introduce a root component. So on the left we have a component hierarchy and on the right we have another component hierarchy, and encapsulating all of those is this root component that both of those are children of. So what happens now is that when the user goes to type in the form, they're going to type into this form but nothing actually happens yet. So they type in the form, and now this component is going to generate what we call an action, and that action is going to go to its parent component, and it's going to say, hey, by the way, we have this new first name property, Jedediah, so do with that what you will. And now it's the parent component's responsibility to reflect that onto the model. So it's going to fill in the model, but that's not even enough. The other component hierarchy doesn't update yet. Now what the parent has to do, in React this would be setState, is notify this component hierarchy: hey, not only have I changed the model, I am letting you know that you should re-render with this new state, and now it fills it in. So that's how data flows through the system in these two models. You have two-way binding, where everything's connected and live all the time, and then you have something like React or Flux, where you have what we call data down, actions up.
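The flow I just described can be sketched in a few lines of plain JavaScript. All of the names here (RootComponent, handleAction, userTyped) are invented for illustration; this is not Ember's or React's actual API, just the shape of the pattern: data goes down on render, and changes come back up as actions.

```javascript
// The root component owns the model and passes data *down* to children.
class RootComponent {
  constructor(model) {
    this.model = model;
    this.children = [];
  }
  register(child) {
    this.children.push(child);
    child.parent = this;
  }
  // Actions come *up* from children; only the root mutates the model,
  // then it explicitly re-renders the component hierarchies below it.
  handleAction(name, value) {
    this.model[name] = value;
    for (const child of this.children) {
      child.render(this.model);
    }
  }
}

class ChildComponent {
  render(model) {
    this.lastRendered = { ...model };
  }
  // A child never writes to the model directly; it sends an action up.
  userTyped(field, value) {
    this.parent.handleAction(field, value);
  }
}

const root = new RootComponent({ firstName: 'Tom' });
const form = new ChildComponent();
const display = new ChildComponent();
root.register(form);
root.register(display);

// Typing in the form produces an action; the display component only
// updates because the root re-rendered it with the new data.
form.userTyped('firstName', 'Jedediah');
```

Notice that nothing is "live": the display on the other side updates only because the root chose to push new data down after handling the action.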
So that's how the data flows, but there's a second piece to this question, which is how do you actually detect that a piece of data has changed so you can synchronize it across the system. So there are really three types of change detection that we're gonna talk about. The first one is what I'll call mutation tracking. You may also hear this referred to as KVO; it's a very Cocoa-like paradigm if you've programmed in Cocoa before. There's dirty checking, which was popularized by Angular, and then there's the more manual notification style that was popularized by React. But fundamentally, change detection is about figuring out what exactly changed, like what is the smallest set of changes needed to reflect into the DOM and bring the UI up to date. So all of these strategies are about getting the best performance out of you as a developer making changes to the models that underlie your application. So the first style is mutation tracking, which is what Ember uses. You've probably also seen this in Backbone if you've used Backbone models; it's somewhat similar to that. Again, like Cocoa and SproutCore. And the way that mutation tracking works is we say, okay, well, we can't peer under the hood of the JavaScript runtime, we can't see when you change a property on an object, at least until we get ECMAScript proxies, but those aren't in most browsers yet. So instead what we have to do is say: instead of using dot notation to change a property on an object, you use this set method that we provide. So if you wanna change the first name property of a person, you just say person.set('firstName', ...) and pass the new value. And here's what's cool about this. This method of changing properties on an object is so precise that it can be extremely fast, because we don't need to do any kind of diffing, we don't need to figure out what change happened. You just told us: here's a very atomic, small change.
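Here's a toy sketch of the idea with invented helper names (observe, set); Ember's real implementation is far more involved, but the precision is the whole point: the write itself tells the framework exactly which property changed, so no diffing is ever needed.

```javascript
// Toy KVO-style mutation tracking: every write goes through set(),
// so the framework learns about the exact property that changed.
const observers = new WeakMap();

function observe(obj, key, callback) {
  if (!observers.has(obj)) observers.set(obj, {});
  const byKey = observers.get(obj);
  (byKey[key] = byKey[key] || []).push(callback);
}

function set(obj, key, value) {
  obj[key] = value;
  const byKey = observers.get(obj);
  if (byKey && byKey[key]) {
    // No scanning, no diffing: we know precisely which property changed.
    for (const cb of byKey[key]) cb(value);
  }
  return value;
}

const person = { firstName: 'Tom' };
const updates = [];
observe(person, 'firstName', (v) => updates.push(v));

set(person, 'firstName', 'Yehuda'); // observer fires immediately
person.firstName = 'Oops';          // plain assignment: nobody is notified
```

The last line is also the downside in miniature: any code that forgets to go through set() makes a silent, invisible change.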
And so in Ember we can reflect that one atomic, very small change into the DOM very quickly, and we don't have to do a lot of work to do that. It's also really explicit: when you're doing person.set('firstName', ...) instead of saying person.firstName =, it's obvious that there are side effects happening here, right? Because you're invoking a method. Whereas I think a lot of JavaScript developers can be surprised; maybe you've worked with someone at your company who has put a ton of behavior in a JavaScript setter, and you write object.whatever = blah and it's like, holy crap, my app just exploded. Debugging that is kind of annoying, right? So this makes it very explicit that something other than just raw property setting is happening. The cons with this are of course poor interoperability. The fact that every library you interact with has to know to call these methods, otherwise you don't get any updates happening, is extremely annoying. It's also very slow in situations where the mutations happen somewhere else. So how many apps work by polling a JSON endpoint and just dumping that raw data in on top, right? Because some other computer somewhere is mutating the data in the database and you're just periodically getting a new copy of that record. Well, how do you know what changed between different versions of that record? You have to manually figure that out, and that's extremely annoying. And that's the thing that made that DBmon demo very slow: every 20 milliseconds you're just dumping in this raw payload, and we have to throw everything away and start over, because we don't know exactly what small thing changed. So we just throw our hands in the air and throw it out. And probably worst of all, it looks really weird compared to vanilla JavaScript. So people have criticized this before, because they'll come look at it and be like, oh my God, it looks like you've written Java in JavaScript.
It also interacts poorly with things like TypeScript. The fact that every change goes through this set method makes it really hard for tools like TypeScript to analyze the types in your system. Then there's dirty checking, and dirty checking is really cool; I think Angular became quite popular right away because dirty checking feels like magic, right? With dirty checking, what happens is you just write your JavaScript like you're totally used to doing, you put some property on this $scope object, and somehow, through magic, Angular figures out, oh hey, you changed this thing, and it goes and reflects that into the DOM. And that is really easy to learn. You never forget and do the wrong thing. It's really intuitive, and again, it feels like magic: you can just dump raw JSON into that scope, and it doesn't matter how complex it is, Angular is able to automatically work out those changes. But there are a few cons to this. One is it's really hard to debug when it doesn't work. How many of you have written Angular apps? Can you raise your hand? Any of you? Okay, awesome. So how many of you who have written Angular apps have had to figure out what $scope.$apply does? Okay, basically the same set of hands. So this can be very hard to debug when the magic doesn't happen as you think it might. And I'd say even more fatally, it's really easy to quickly hit performance limitations, because your instinct is to keep all of your models in the scope, and remember that every time any event happens in an Angular app, whether it's a click, a key press, anything, the scope has to be scanned: you have to go through every array, you have to go through every property on every hash. Now yeah, JavaScript runtimes are fast, but they're not infinitely fast, and as we build more and more ambitious applications, the amount of work that the computer has to do just to figure out that the person's first name changed keeps growing.
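A toy digest loop in the spirit of Angular 1.x dirty checking makes the cost concrete. The names here (createScope, watch, digest) are invented and this is nothing like Angular's real implementation, but the essential move is the same: after every event, compare every watched value against its last known copy and fire listeners for anything that moved.

```javascript
// Toy dirty checking: plain property assignment works, but the framework
// must scan *every* watcher on every digest to discover what changed.
function createScope() {
  const watchers = [];
  return {
    data: {},
    watch(getter, listener) {
      watchers.push({ getter, listener, last: undefined });
    },
    digest() {
      let dirty = true;
      while (dirty) {          // keep looping until the scope stabilizes
        dirty = false;
        for (const w of watchers) {
          const value = w.getter(this.data);
          if (value !== w.last) {
            w.listener(value, w.last);
            w.last = value;
            dirty = true;
          }
        }
      }
    },
  };
}

const scope = createScope();
const seen = [];
scope.watch((d) => d.firstName, (v) => seen.push(v));

scope.data.firstName = 'Tom';  // plain assignment, like writing to $scope
scope.digest();                // the framework runs this after every event
```

The magic and the performance problem are the same line of code: digest has no idea what changed, so it has to check everything, every time.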
And then there's the manual notification style of React, and in this style we basically say: it's up to you to let us know that something's changed. You just say, hey, I have this new model, something in here has changed, and then it goes through the rendering cycle. And the cool thing about this is that unlike something like Angular dirty checking, where you have to go through the entire scope and diff the entire scope to figure out what to update in the DOM, what you do instead is go through the component hierarchy, and only the things that the components need do you actually need to diff. So the way that I think about how React does dataflow is that it's a strict subset. Yeah, it's doing diffing, but it's by definition a subset, and it's by definition limited to just the components that you have in the UI at that moment, meaning that it's much more efficient; it's gonna be a lot faster for most use cases out of the box. I think the most important thing about this manual notification, though, is that one of the things I've heard most from people who are writing these web applications is that two-way bindings create these really hard-to-understand graphs throughout the whole system. So what happens is you hire an intern who goes to town wiring up two-way bindings across every piece of the application, because in a demo it's super cool, right? Like the first time you saw a two-way binding demo, how stoked were you about that? But then as your application gets bigger, what's happening is you're setting some property on your component or your directive, and now all this other crap in the application is changing, and because they're so decoupled and they're across so many areas of responsibility, actually figuring out, like, where the heck did this value change from?
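The "strict subset" point can be shown with a tiny sketch, loosely in the spirit of React's setState (the Component class and its methods here are invented, not React's API): notifying one component re-renders only that component and its children, never the rest of the tree.

```javascript
// Toy manual notification: re-rendering is scoped to the component you
// notified, so the work is a subset of the UI by construction.
class Component {
  constructor(name) {
    this.name = name;
    this.children = [];
    this.renderCount = 0;
    this.state = {};
  }
  add(child) {
    this.children.push(child);
    return child;
  }
  setState(partial) {
    Object.assign(this.state, partial);
    this.render(); // you told us something changed; no model scanning needed
  }
  render() {
    this.renderCount++;
    for (const child of this.children) child.render();
  }
}

const root = new Component('root');
const left = root.add(new Component('left'));
const right = root.add(new Component('right'));

// Only the left subtree re-renders; root and right are untouched.
left.setState({ firstName: 'Jedediah' });
```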
Debugging becomes very painful, and I think that's why things like React's setState API are very nice for people who have been through that hell of debugging tangled bindings. The one downside with this, in my opinion, is that it turns out two-way binding is actually really nice in some situations. The manual style can get really tedious if you're having to set state by hand, listen for changes, wire up actions, and send the actions up. React has something called ReactLink that lets you add two-way bindings back in on top. But I think in general there are cases where you want to use two-way binding and there are cases where you want to be explicit, and a lot of that depends on the type of data or where the data is coming from. Because if you don't do that, then the DOM can get out of sync, and if the DOM gets out of sync, you can have these really annoying user experiences where the model changed on the server but only a part of your UI updated, because you didn't do a good enough job of propagating that change event through the system. So when we set out to build Glimmer, what we said is, okay, is there some primitive, is there some engine we can build, that can accommodate modeling all five of these things? So along the middle there you have ways of detecting that a change has happened, and along the top you have different ways of getting that change to propagate throughout the UI of your app. And that's really what Glimmer is. So what I'd like to do now is take you through the internals. It's gonna get a little heavy. There's gonna be some code here. We're basically gonna learn about compilers today. So I hope you're not all too hungover and you've got your coffee, because we're gonna go deep. The interesting thing about Glimmer is that there's no project called Glimmer. You cannot download Glimmer on GitHub. Glimmer is just kind of a name that we gave.
Though HTMLbars is quite a shit name, isn't it? I don't know why we called it that. But more importantly, Glimmer was a name for an initiative that spanned many projects. We believe in pulling out small, reusable packages that do one thing well and sharing them generally, right? Parts of the Ember router, for example, we pulled out into microlibraries that Angular 2 is using. We believe in pulling out these different microlibraries because we think it leads to better collaboration across the ecosystem. So Glimmer just involves changes to four different projects. One is handlebars: we use the handlebars lexer and parser. Well, I have a slide for this. So handlebars parses that handlebars syntax that you saw, with the double curlies. And I think what's really cool about the way Glimmer works is we actually wrote an HTML5 parser in JavaScript, and that has some cool benefits that we'll talk about. And then HTMLbars is kind of the runtime. It's this very highly tuned and optimized piece of software that provides most of the magic in Glimmer, in the sense that it's where all the interesting ideas live. And then Ember just takes HTMLbars and basically patches it into the runtime. So this is the stack that powers the entire Ember rendering engine, and for the most part, it's just small microlibraries all the way down. And then we integrate all of those and package them up into a really nice, easy to use API in Ember itself. So like I was mentioning, what's really cool about having this simple HTML tokenizer, which is just an HTML5 parser, is that it makes it really easy to validate your HTML as a developer. HTML was really designed to consume documents that could be malformed. It's designed for amateurs to be able to author them and have them still show up and work correctly in the browser.
So the fact that you can write the most malformed, terrible, awful HTML and it still works in browsers is actually a cool feature, right? I think we can all agree that's a really cool feature of the web. However, everyone here is at the top of their game, right? I can see that from all of Europe and the US, this is just the finest set of developers that has ever been assembled in one room. So for you, if you make an error in your HTML, you probably care, right? Because malformed HTML can lead to errors. But historically what happens is you throw your HTML into the browser, it accepts the malformed HTML no problem, and it leads to really hard to debug bugs. So now, as you're authoring your template, the second there's a mismatch between opening and closing tags, Ember CLI will actually tell you: hey, I noticed that you forgot your closing p tag. When I was working on a big production application and we moved over to Glimmer, this feature actually caught eight different templates that had malformed HTML in them. And there were at least two customer-reported bugs that I could not reproduce that ended up being due to these malformed tags. So it's actually a really big benefit. Another thing that's really cool about how handlebars and HTMLbars work is that, I mean, handlebars is a programming language, right? So we built this rendering engine essentially like a real programming language. If you're familiar with how this works: when you take your handlebars file, the first thing that we do is run it through a lexer and a parser and convert your source code into an abstract syntax tree. So there are no regular expressions here. And then, what's really cool is that part of that next step is almost like a handlebars CPU.
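To see why parsing templates as real HTML catches these bugs, here's a toy tag-balance check. It's vastly simpler than Glimmer's actual HTML5 parser (which tokenizes properly rather than using a regex), but it already catches a forgotten closing tag at build time instead of at runtime.

```javascript
// Toy build-time HTML validation: report the first tag whose opening and
// closing don't agree. A real HTML5 tokenizer does far more than this.
function checkBalanced(html) {
  const voidTags = new Set(['br', 'img', 'input', 'hr', 'meta', 'link']);
  const stack = [];
  const tagRe = /<\/?([a-zA-Z][a-zA-Z0-9]*)[^>]*>/g;
  let m;
  while ((m = tagRe.exec(html)) !== null) {
    const name = m[1].toLowerCase();
    if (voidTags.has(name)) continue; // void elements never close
    if (m[0][1] === '/') {
      // A closing tag must match the most recently opened tag.
      if (stack.pop() !== name) return { ok: false, tag: name };
    } else {
      stack.push(name);
    }
  }
  // Anything left on the stack was opened but never closed.
  return stack.length === 0 ? { ok: true } : { ok: false, tag: stack.pop() };
}

checkBalanced('<div><p>hello</p></div>'); // balanced
checkBalanced('<div><p>hello</div>');     // forgot the closing </p>
```

The browser would silently "fix" the second input; a build-time check surfaces it while you can still do something about it.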
So what we'll do, kind of like how a virtual machine works, is basically have a handlebars virtual machine. What it will do is go through all of your handlebars syntax, as represented by the abstract syntax tree, and generate a series of opcodes. These are basically like CPU instructions: the abstract instructions that I want you to perform. And then we have what's called a JavaScript compiler, which actually takes those opcodes and does some code gen to generate the JavaScript that actually ends up running in the browser. And this is nice because it means the parsing doesn't actually happen in the user's browser. By the time the user is running your templates, they're just pure JavaScript that the browser is already optimized for parsing and can very quickly JIT. So you get awesome performance. So this works just like a compiler, which is awesome. And I want to show you how you can get deep into the guts of the handlebars compiler and understand what it's doing, because it's a really fun way in; this is how I was first introduced to compilers and parsers and lexers and so on. So if you go into the HTMLbars repo, there's this really beautifully designed debugging tool that we created. What's cool about this is you can enter a handlebars template at the top, and down below it will actually show you what the compiled code that gets sent to the browser looks like. So you enter the template, you click the compile and render button, and it's going to show you what this compiled code looks like. And I think if you've ever developed an Ember app and you've looked at the output code, if you've ever stepped through this, it probably just felt like black magic. You probably just treated it like compiled code.
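The parse-then-codegen idea can be demonstrated in miniature. This is a drastically simplified sketch, not the real HTMLbars compiler (which lexes real HTML, builds a proper AST, and emits opcodes first): it splits a template into static and dynamic parts, then generates a plain JavaScript render function, so at runtime there is no parsing at all.

```javascript
// Toy template compiler: static text and {{expression}} nodes in,
// a plain JavaScript function out.
function parse(template) {
  // A minimal "AST": an array of text and expression nodes.
  return template
    .split(/(\{\{[^}]+\}\})/)
    .filter(Boolean)
    .map((part) =>
      part.startsWith('{{')
        ? { type: 'expr', path: part.slice(2, -2).trim() }
        : { type: 'text', value: part }
    );
}

function compile(template) {
  const ast = parse(template);
  // Code generation: emit a string of JavaScript and let the engine JIT it.
  const body = ast
    .map((node) =>
      node.type === 'text'
        ? JSON.stringify(node.value)
        : `String(model[${JSON.stringify(node.path)}])`
    )
    .join(' + ');
  return new Function('model', `return ${body};`);
}

const render = compile('<p>Hello, {{firstName}}!</p>');
render({ firstName: 'Tom' }); // "<p>Hello, Tom!</p>"
```

All the parsing happens once, at build time; the function the browser actually runs is ordinary concatenation it can optimize like any other JavaScript.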
But actually if you look closely at it, it's doing some stuff that's pretty cool, and once you understand how to read it, it'll help you debug these programs and understand at a very low level what's going on. So in HTMLbars, there are three basic phases to how we build these templates. Phase one: because you're writing a handlebars template, we don't need to evaluate any code in order to understand which parts of your template are static and which parts are dynamic, because by definition all the dynamic parts are wrapped in those curly braces. So we create the static DOM. Then what we do is create pointers, objects that point to where the dynamic content should go. We call these things render nodes. And then when we run your application, you give us a model, or you give us some JSON data from the server, and we populate those render nodes at runtime. So if you look at what the compiled output looks like, you see this function here called buildFragment. Step one, again, is to build the static pieces of the DOM. And if you just read this code line by line, you can probably figure out what's going on here, right? It's saying: create a new document fragment, create a new p element, create a new text node, and then wire up the hierarchy of how they're related to one another. So you can see that what we're doing is basically what the browser does under the hood. When a browser parses HTML, it reads that HTML in and then builds a DOM. We're just taking care of that step on the client. Now once we've built that fragment, you can see that this template has a property on it called cachedFragment. And here's what's really cool: if you render the same template multiple times, let's say inside of a loop, we only have to build this fragment, this DOM structure, one time. We build it once and then we cache it, we save it into this cachedFragment property.
And so every time you re-render it, we can just use the browser's cloneNode API, which is extremely fast. And then once we've got the DOM element, you can see we have this method called buildRenderNodes. Or rather, there's no buildRenderNodes here, or we have one but it's empty. And the reason buildRenderNodes is empty is that if you look at this template, it's very simple; there's no dynamic content. It's literally just static HTML, so all we're doing is building that static content. But now you can see, with this new template where we've put this firstName property inside the p tag, we still have a buildFragment, and it's still building the p tag. But now it's also building these render nodes. So you can see it says createMorph; a morph is another name for a render node. And again, we use a cloned version of the cachedFragment each time the template is rendered. Note that here, when we're building these pointers into that DOM, they're all specified as offsets. These numbers, the zero here, these are all offsets, which means that we can create pointers into this cached fragment just by cloning it and saying, okay, it's the first child of the second element, or whatever. And then down below, we have this thing called statements. Statements are kind of like the opcodes I was describing earlier. These are the pieces of code that Ember will evaluate at runtime; they basically tell Ember how to fill in the dynamic pieces. So this example means: fill in the first dynamic section with some content named firstName. And then the stuff you see at the end is just source information. So if we have to print a deprecation warning or some kind of error, we'll show you the location in the source code, the line number it came from. And there's some additional stuff we won't get into; it's kind of in the weeds.
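A hypothetical sketch of the shape of that compiled output for a template like `<p>{{firstName}}</p>`. The `createMorphAt` call, the childNodes offsets, and the statements format are simplified stand-ins for the real HTMLbars output; only the ideas (offset-based pointers plus opcode-like statements) come from the talk.

```javascript
const compiled = {
  // Pointers into a cloned fragment, specified purely as offsets, so
  // they work on any clone of the cached fragment.
  buildRenderNodes(dom, fragment) {
    const p = fragment.childNodes[0];            // offset 0: the <p>
    const morph0 = dom.createMorphAt(p, 0, 0);   // first dynamic region inside <p>
    return [morph0];
  },
  // Opcode-like instructions evaluated at runtime: "fill morph 0 with
  // the content named firstName", plus source-location info for
  // deprecation warnings and errors.
  statements: [
    ['content', 'firstName', 0 /* morph index */, { line: 1, column: 3 }]
  ]
};

// A toy dom helper and fragment, just to exercise the shape above.
const dom = { createMorphAt: (parent, start, end) => ({ parent, start, end }) };
const fragment = { childNodes: [{ tag: 'p', childNodes: [] }] };
const morphs = compiled.buildRenderNodes(dom, fragment);
```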
But one thing I wanna point out here that's pretty cool, and maybe you've already figured this one out, is that in buildFragment we're not actually calling the DOM API. We're not calling document.createElement. We're not calling document.createDocumentFragment. We're going through this thing that we're passing in called dom. And the thing we're passing in called dom is something we call a DOM helper. This is going to be key for the server-side rendering stuff I'm gonna talk about in a second. But the bottom line you should understand is that the compiled Handlebars templates have no dependency on the DOM or on the browser. They can run just fine in a Node environment. You can probably figure out why. Okay, so now that we've gone through the step of getting the parser, getting the abstract syntax tree, you might be wondering, well, how does that JavaScript actually get generated? If you're familiar with compilers at all, you probably know that an abstract syntax tree, just like it says on the tin, is a tree, a nested hierarchy. And the first thing we have to do is somehow linearize it. We linearize it into an internal format that's just an array of instructions; we basically do a depth-first scan of the tree. And then these instructions get converted into what we call opcodes. So, like I was saying, separating the compiler just helps us better reason about what the compiler is actually doing. What is the template, what we call the program, actually doing? Let's think about that as a logical unit. And once we figure out what it's doing, we do the code generation to turn "here's a sequence of steps that you need to follow" into actual JavaScript code that can be run in the browser. So we create a list of opcodes that look like this, and then this is just an example from inside the JavaScript compiler.
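The DOM-helper idea can be sketched like this: compiled templates call a delegate instead of the global `document`, so the same compiled code runs in a browser or in Node. `makeDOMHelper` and the fake document below are assumptions for illustration, not Ember's real SimpleDOM API.

```javascript
// A delegate that forwards to whatever "document" it is given.
function makeDOMHelper(doc) {
  return {
    createElement: tag => doc.createElement(tag),
    createTextNode: text => doc.createTextNode(text),
    createDocumentFragment: () => doc.createDocumentFragment()
  };
}

// In the browser you'd pass the real `document`. In Node, any object
// implementing the same three methods works, e.g. this toy fake:
const fakeDocument = {
  createElement: tag => ({ tag, children: [] }),
  createTextNode: text => ({ tag: '#text', text }),
  createDocumentFragment: () => ({ tag: '#fragment', children: [] })
};
const dom = makeDOMHelper(fakeDocument);
const el = dom.createElement('p');
```

Because buildFragment only ever talks to `dom`, swapping the environment means swapping one delegate object.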
What this is doing is saying: okay, consume this opcode called createText, get some information about it, and then actually generate, at this point, the raw JavaScript that gets evaluated by the browser. So once this gets evaluated, this line, the second line here, will turn into something like this. And you remember seeing that in buildFragment. So we actually have a quite sophisticated compiler pipeline with many discrete steps for transforming raw string source code into the final JavaScript that you see running in the browser. Now I wanna talk about render nodes a little more, because render nodes are, I think, the most critical, cool thing about Glimmer; this idea of render nodes is what unlocks a lot of the performance benefits that we see. So the thing about a Handlebars template, just looking at it as a human, is that in the Handlebars syntax you can see which parts are static and which are dynamic. I've highlighted them here for you, but the TL;DR is that anything with double curlies or triple curlies is dynamic, and anything that's not, is not. And if you look at these, you can see that the curlies are used in two different contexts: one is the text content of a node, and the other is describing an attribute of a node. So what that buildRenderNodes step you saw in the template is essentially doing is creating these two objects that are pointers into this document fragment. And so when your application runs, we get the model data or the JSON data from you, and these render nodes are responsible for pulling that data from the model and putting it into the DOM. So if you think about what your application looks like once it's booted up and running... oh Christ, not again. Did it go down? Oh, it's still there. I have to watch my mouth; there's a written record.
Okay, so when your application is running, you can think of your UI in a Glimmer app as being essentially just a tree of render nodes. Every piece of dynamic content is represented by a render node. And what's cool about this is that, using these render nodes, we can tell a component: hey, I want you to re-render yourself. So here's what happens when you call re-render on a component. Each render node has a notion of what we call dirtiness. Dirtiness is just a boolean flag on a render node, and it means this render node could be backed by a value that has changed. And this is how we implement React-style setState updates. You say, okay, re-render this component, and what that does is extremely quick and extremely cheap: it simply sets a flag on each render node that says, hey, this render node could have a different value backing it. So it marks them as dirty, and as many different components can re-render themselves within a particular run loop as they want; we wait to actually flush all of these changes to the DOM until the end of it. Now, in the rendering process, we go node by node and first ask: is it dirty? If it's not dirty, that means you have not told us that there could be a change here. In React terms, it means you haven't done a setState on this component. And here's what's novel about this. Because each render node simply represents a value that goes into the DOM, it also caches the last value that it's seen. So for every dynamic piece of content in that component's template, we can go through and very, very quickly diff and say: has this value changed since the last time I saw it? And as you can imagine, that's an extremely cheap operation in JavaScript, just checking identity.
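That dirty flag plus cached last value can be sketched as a toy model (assumed names and API, not Glimmer's actual classes): re-rendering only marks nodes dirty, and the flush does the cheap identity diff, writing to the DOM only when a value actually changed.

```javascript
class RenderNode {
  constructor(getValue, write) {
    this.getValue = getValue;  // pulls the current value from the model
    this.write = write;        // reflects a value into the DOM
    this.lastValue = undefined;
    this.dirty = true;         // initial render must write
    this.writes = 0;           // instrumentation for this sketch
  }
  markDirty() { this.dirty = true; }   // extremely cheap "re-render" step
  flush() {
    if (!this.dirty) return;           // not marked: skip entirely
    const value = this.getValue();
    if (value !== this.lastValue) {    // cheap identity check
      this.write(value);
      this.writes++;
      this.lastValue = value;
    }
    this.dirty = false;                // clear dirtiness either way
  }
}

// Usage: a node backed by model.firstName.
const model = { firstName: 'Tom' };
let domText = '';
const node = new RenderNode(() => model.firstName, v => { domText = v; });
node.flush();            // initial render writes 'Tom'
node.markDirty();
node.flush();            // value unchanged: no DOM write happens
model.firstName = 'Yehuda';
node.markDirty();
node.flush();            // value changed: writes 'Yehuda'
```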
And if it hasn't changed, then we don't do anything; we just clear the dirtiness and say, okay, actually this render node didn't end up being dirty. Only if a render node is dirty and its value is different do we reflect it into the DOM. And so what this allows us to do, at the last minute during rendering, is reduce the work we have to do to only the changes that are actually needed. So again, it's about observing mutations and backing out the smallest set of changes that we need to actually reflect into the DOM, because that's the thing that ends up being slow. Now, there's one cool thing we can do on top of this. This is, like I said, the React-style setState, where you say everything inside this component could have changed, so please go diff it for me and figure out what's changed. And that's really great for the case where you're just grabbing a big JSON payload, dumping it onto the component, and saying go render it. It gives you very cheap, fast re-renders when most of the content ends up being the same. You may not know that ahead of time, but we'll figure it out for you. But because we still retain the mutation-observing, KVO-style system from original Ember, we can add an additional optimization on top. If you are using those mutation APIs, if you're using .set to change a property, then, say you have a component and you set the first name to Tyler: that's an observable object, and we know that if you change any properties on that object, we're going to be notified by the system, because you used .set. And in that case, we only have to dirty that one particular render node. So I think what's cool here is that we can create a blend of these two models, the Ember or Angular model and the React model, based on what's most useful for you as a developer.
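The .set optimization can be sketched like this: if the app uses an observable setter, we know exactly which property changed, so we can dirty just the one render node backed by it instead of marking the whole component's tree. The names here (`makeObservable`, `observe`) are hypothetical stand-ins, not Ember's real observer internals.

```javascript
function makeObservable(obj) {
  const observers = {};                  // property name -> callbacks
  return {
    get: key => obj[key],
    set(key, value) {
      obj[key] = value;
      (observers[key] || []).forEach(cb => cb(value));  // notify on .set
    },
    observe(key, cb) { (observers[key] = observers[key] || []).push(cb); }
  };
}

// Each render node observes only the property that backs it.
const person = makeObservable({ firstName: 'Tom', lastName: 'Dale' });
const dirtied = [];
person.observe('firstName', () => dirtied.push('firstName-node'));
person.observe('lastName', () => dirtied.push('lastName-node'));

person.set('firstName', 'Tyler');  // dirties only the firstName node
```

A plain property assignment would have required the React-style "dirty everything, then diff" path; `.set` narrows the work to a single node up front.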
And we'll talk about this in a bit, but I think having the ability to change the semantics based on where the data is coming from actually gives you the best of both worlds. Some data you want to model one way; other data, especially model data, you want to model a different way. So the thing I want you to keep in mind is that change detection and syncing can happen in a Glimmer app per value, not per component. The level of granularity is the value. We can apply different optimizations, different diffing semantics, different dirtying semantics to different values passed into the component. And one thing I think is really promising is the idea of automatic two-way binding for models, which helps with the fact that sometimes you have the same model in two different parts of the UI, and you don't want it to get out of sync, because then the user doesn't know what they're sending back to the server when they save. But for component state, the thing people most complain about when building an Angular or Ember app is that with these two-way bindings it's hard to reason about: you change a property of your component, a whole other part of the app changes, and it's impossible to debug. So being able to opt in to different observation semantics, and making it manual for component state, solves a lot of the problems people have, and it lets them opt into whichever system works best for the problem. And again, it also means that optimizations happen per value, not per component. So one way to think about it is that a virtual DOM implementation basically diffs: it goes through your component hierarchy, asks for a virtual DOM representation of the UI, and then takes the old virtual DOM and the new virtual DOM, diffs them, and applies the changes to the DOM.
But that actually ends up being more work in a lot of cases, because you have to evaluate the entire component. With Glimmer, what you're diffing instead is a tree of values rather than an entire DOM structure. In the same way that React's virtual DOM is a subset of the Angular scopes you'd have to dirty-check, Glimmer's render nodes are a subset of the values you would have to diff with a virtual DOM implementation. And I think the other thing that's cool about this level of granularity: some of you may be familiar with the idea that in React, if you're using immutable data, you can implement a shouldComponentUpdate hook that says, hey, this data is immutable, so just do an identity check, and if nothing has changed, there's no work to be done, which can make your apps extremely fast. The problem is that all of the attributes being passed to that component have to be immutable, and if you want to add a second attribute that's not immutable, well, now you've busted that optimization. Because we're doing it at the per-value level, you still get the speed for the immutable data, and you're only paying a cost for the values that aren't immutable. So, just wrapping up here, I wanna talk a little bit about what Glimmer unlocks for the end developer. The first thing is that it gives you extremely powerful helper primitives. If you wanna write your own helpers like if and each, it's extremely easy to do, and you get great performance basically out of the box. To contrast this, here's a screenshot of just a selection of the each helper in older versions of Ember. The code I've screenshotted, if you can see it, is doing all this mutation observing. It's saying: okay, let's detect when the array has changed, let's figure out exactly what changed in the array, and then if something's been removed, remove it from the DOM, and if something's been added, reflect that onto the DOM. It's quite complex.
And worst of all, anything that wasn't using our mutation API would incur a significant performance penalty. And of course, oftentimes you're throwing away the old array, creating a new array, and putting it in, and that would be a full re-render in Ember every time. So that's terrible. So here is what the new each helper looks like in Ember. Seventeen lines of code, compared to 800 or so, is quite an improvement. And not only is it significantly faster, the implementation, as you can see, is incredibly simple. And this is all using public API. So if you want to write your own each helper, and maybe customize a little bit how it works, you can; this is all public API. Note here, this yieldItem is the core of the API. It basically says render: if you use the each helper, you can pass in a block, and that's what blocks.template refers to, that inner template. So you can yield that template, render it with a new model each time. So you can write your own each helper in like 20 lines of code, which is quite nice. And this is the if helper. You can see it's extremely easy to reason about, and extremely easy to debug. You just tell us what you want to render, and all the diffing happens under the hood in HTMLbars. So not only did this make things a lot faster, it also dramatically simplified all the helpers we had to write in Ember to get these live templates. It also gives us some cool tools in terms of backwards compatibility, via compiler plugins. So if you're using a backwards-compatible, legacy syntax that we've deprecated, you only pay the cost of that if you need it. And those costs happen at compile time, not at runtime. Here's a good example of this. Way, way, way back in the day, the way you would bind things in a Handlebars template was to add this magic name suffix, capital-B Binding.
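Here are hypothetical sketches of each- and if-style helpers against a simplified version of the yieldItem / blocks API mentioned above. The exact signatures are assumptions, not Ember's real ones; the point is that the helper only decides what to yield, and the engine does all the diffing.

```javascript
function eachHelper(params, hash, blocks) {
  const list = params[0];
  list.forEach((item, index) => {
    // Yield the inner block once per item, keyed so the engine can
    // reuse DOM across re-renders; diffing happens under the hood.
    blocks.template.yieldItem(index, [item]);
  });
}

function ifHelper(params, hash, blocks) {
  // Just pick which block to render; no manual DOM bookkeeping at all.
  if (params[0]) {
    blocks.template.yield();
  } else if (blocks.inverse) {
    blocks.inverse.yield();
  }
}

// Exercise the sketches with a recording fake for `blocks`.
const yielded = [];
const blocks = {
  template: {
    yieldItem: (key, blockParams) => yielded.push([key, blockParams[0]]),
    yield: () => yielded.push('then')
  },
  inverse: { yield: () => yielded.push('else') }
};
eachHelper([['a', 'b']], {}, blocks);
ifHelper([false], {}, blocks);
```

Compare this with the old 800-line version: all the array-observation and DOM-patching machinery has moved below the helper API.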
But it hasn't been that way in like two or three years. Still, there are apps out there that are three or four years old, and we don't want to just break them for no good reason. Before, we would have to handle this case at runtime in the code. But now that we have a proper programming language, we can write an AST rewriter that, at compile time, walks the entire AST and asks: are there any bindings with this magic name? And if so, right at that point, rewrite it to the new syntax, which is quite nice. So it's: if the key ends in Binding, then we're going to slice that Binding piece off, and we're actually going to rewrite the raw AST that goes into the compiler, before the compiler ever even sees it. So that's nice. And the last thing I'll mention is server-side rendering. I was gonna give you a demo of this, but it took me more than two hours to do an npm install on the hotel Wi-Fi, so you can imagine what it looks like. But FastBoot is more than just server-side rendering, because server-side rendering is great for SEO, but really what you want is for the HTML to come to the client and then the JavaScript to kind of catch up and do what we call rehydration: you don't want to re-render once the JavaScript loads, you want to use the HTML that's already there. So FastBoot is really two things. The first is to run the Ember app in Node, get it to render, get it to generate the HTML. That's the demo I was gonna give you, because that's working now, today. You can run these apps, and everything goes through that DOM helper abstraction: every time the app makes an element, it just goes to this delegate object, and in Node we have a fake DOM that we use. And then once we've got that HTML generated, the next step is all of those render nodes I talked about. Render nodes are the thing that makes the magic happen.
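A sketch of that compile-time transform: walk a (simplified) AST and rewrite legacy `fooBinding=...` hash pairs before the compiler ever sees them. The node shapes and names here are simplified assumptions, not the real HTMLbars plugin API.

```javascript
function rewriteLegacyBindings(ast) {
  walk(ast, node => {
    if (node.hash) {
      node.hash.pairs.forEach(pair => {
        if (pair.key.endsWith('Binding')) {
          // Slice the 'Binding' suffix off and treat the string value
          // as a bound path instead of a literal.
          pair.key = pair.key.slice(0, -'Binding'.length);
          pair.value = { type: 'PathExpression', original: pair.value.value };
        }
      });
    }
  });
  return ast;
}

function walk(node, visit) {
  visit(node);
  (node.body || []).forEach(child => walk(child, visit));
}

// {{my-component nameBinding="user.name"}} becomes {{my-component name=user.name}}
const ast = {
  type: 'Program',
  body: [{
    type: 'MustacheStatement',
    hash: {
      pairs: [{ key: 'nameBinding', value: { type: 'StringLiteral', value: 'user.name' } }]
    }
  }]
};
rewriteLegacyBindings(ast);
```

Apps that never use the legacy syntax pay nothing at runtime; the rewrite only costs something during compilation.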
The second is to serialize those render nodes on the server, ship them over to the client, and have the client use them as pointers. Remember, when we were looking at the template, we saw a method that built a fragment and then created render nodes that pointed into it. But given that it's caching that fragment and reusing it between re-renders, it should be obvious that we don't actually need to do that build step if we have the raw HTML sitting in the DOM already. We can just take it and use it, and because the render nodes are specified as offsets, we can basically go through the same process; it's as if we skip the build step and just use what the server gave us. And lastly, a big thanks to both Bustle and LinkedIn, who have extremely generously sponsored all of the work over the last six to eight months that Yehuda and I have been doing, essentially rewriting this rendering engine. So a big thank you to them. Thank you.