Thank you all for being here. Before I get started, if you all look behind you, there's two folks from the AV team back here running cameras, running audio, making sure I don't look bad. So big round of applause for them. Thanks for the help. Thank you to all of you for choosing to spend part of your afternoon here to hear about ECMAScript modules: a tale of two loaders. So bonjour. If you've seen me talk before, you've seen this bit; maybe I should kill it at some point, but I just love it. Let's look at this cat. I just do this. I'm tired enough right now that I could just do this for 20 minutes. It started as an exercise for myself to just relax during a talk and get more present. But over time, it actually became a little test for the audience, because I'm a little silly, and if you all don't laugh at that, I'm in big trouble. But you all seem pretty cool, so this should probably go well. So my name is Myles and I work at Google. I'm a developer advocate focused on Node.js, the JavaScript ecosystem, and Google Cloud Platform. Then this thing at the bottom (I've got a laser so I can show it to you): the opinions expressed in this talk are my own. I don't know, I put that in all the time and I've never gotten in trouble, so maybe it works. Before we get started here, I started doing this in some talks: a bit of a glossary. The topic of ECMAScript modules has a whole bunch of buzzwords that we use, and a handful of them are made up, like most words. So I thought I could spend some time going through them. The first one is ESM, which is the short form for ECMAScript modules, ECMA being the European Computer Manufacturers Association, which is where the JavaScript language is specified. You may not be able to read my handwriting, so I'll walk you through it, but at the top we've got an import, at the bottom we've got an export. 
So this is a little bit different from CJS, also known as CommonJS, where you require a module and assign to module.exports to export what will be imported by someone else. These are two fundamentally different loaders. CJS is the loader that Node has been using for, gosh, over a decade. That's wild. ESM was specified in 2015 and is only now becoming a real thing. Interoperability is something that we'll talk about; interoperability is the ability to access ES modules from CJS and vice versa. This is an extremely important thing for our ecosystem, because we have a giant repository online of many, many modules, and moving to ESM should not be a moment where we say, okay, well, like, smell you later, guys. We want to keep those modules usable, and on the flip side, when we have a whole new ecosystem of new modules, we don't want to shut folks out from being able to use them. We can't necessarily make it perfectly ergonomic, but we'll do our best. Transparent interoperability is the ability to require ESM and CJS, or import CJS and ESM, without having to know the module type of the dependency. We'll talk a little bit more about transparent interoperability later, but spoiler alert: it doesn't work. Sorry, we tried really hard. There is a thing called the goal, which is a pairing of a top-level grammar and a top-level execution model. Don't worry if that doesn't make sense to you; I have to reread what that means every time I give this talk. We'll dig into it a bit more later. A specifier is the string that a runtime will use to locate a module. So right there, you can maybe see at the top, it says ./module.js; that is the specifier. We use the specifier to resolve the path to a resource, and we fetch that resource from disk and then execute it. Although you could get resources from many different places in theory, in a browser you're getting that resource over the network. 
But the specifier is the string that we use to locate that resource. Specifier resolution is the algorithm that's used to convert that specifier into the path of a module. So when we see something like ./, how do we figure out what that is? If in Node we don't include the .js extension, how do we figure that out? That's specifier resolution. A bare import is a specifier that does not start with a relative or absolute path. So you'll notice in here, require or import lodash: there's no dot, there's no slash, there's just a specifier. That is called a bare specifier, and as we all know, that's how we grab things from the node_modules directory. It's a pretty key part of the developer experience of writing modules, because it's really nice to not have to write ../node_modules/... and know the path to all the things; that's kind of the worst. Dual-mode modules means supporting both CommonJS and ESM entry points with a single specifier. Similar to, but different from, transparent interop. It can end up with a lot of the same behavior that we wanted to see in transparent interop, where it's like, hey, I've got a module and I can require it and I can import it. But it's not exactly the same thing, because we're talking about having two different entry points, potentially even two different items in the graph, which gives me the willies, but I got over it. And then finally, existential dread, which is the feeling I get trying to get CJS and ESM to play nicely together. You can tell that I have a theme in my talks today. But a question that you may have is: how did we get here? There's a lot of different things going on. What's the history? Modules and packages originally started in the ES4 spec, which introduced the concept of packages. It was somewhat similar to C++ namespaces, and the intent was to create something similar to the Java JAR system. 
Unfortunately, it was ripped out of the standards track and never seen again, and ES4 as a whole, as far as I know, never really became a thing, other than ActionScript, which I think was an implementation of it. CommonJS was then introduced afterwards, with some members of TC39 working on it, but it's important to note CommonJS was not done on a standards track, and it was created primarily for server-side JavaScript. Some of the decisions that they made, especially having synchronous inline execution, which we'll talk more about afterwards, are very, very specific to the server and don't really work well for the browser. Node.js implements a variant of CommonJS, and we could arguably call it the primary reference implementation of CommonJS as it's known today. Anyone remember AMD? AMD was only really specced by RequireJS; it was more of a convention. It was nice because it was asynchronous; it worked better for the browser. If any of you did Angular back in the day, it was basically AMD. And UMD was a thing too, in case you couldn't make up your mind and just wanted to ship something that all the things could load. It was really fun. I used to use Browserify to make UMD bundles all the time. But ES modules landed in ECMA-262 in 2015. So we could have had the future of modules yesterday, but it's not fully here yet. So why did it take so long? And I do have an explanation for you. Part of why it took so long is the concept of a loader, and the loader is actually not specified in ECMA-262. This is actually one of the really interesting standards hacks. If you're trying to standardize something, you're generally building consensus around a large group. So if you have a part of it that you can't build consensus around, the easiest way to get over that is not to try to build consensus, but to just not specify it. So the loader was never specified, because consensus couldn't be reached on it. The loader ends up being defined by the runtime. 
And there are these phases, which are specified, but the implementation happens in the embedder. So the loader has these different workflows: it has fetch, and then transform, and then evaluate. CommonJS has, as I mentioned, a synchronous load and inline execution. So if you say node index.js, Node just starts executing index.js from the beginning, and the second you hit a require, I'm sorry to tell you, we literally do fs.readFileSync on that thing. We grab that thing and then we start executing it inline. And we just run your program until it's done, and if you have things on the event loop that are keeping it open, we don't stop your application. But it's nothing fancy. We're just running the code, and every time you have a reference to another module, okay, cool, we run that code. And one of the important things here is that there's no explicit load step; it just kind of all happens at once. And this is actually how Babel originally implemented ESM, and a lot of how people first experienced ESM. It was like, oh, they're the same; we'll just turn all the imports into requires, because we can write a regular expression. But it turns out that this doesn't work, because ES modules specify an asynchronous load and a synchronous execution. The load starts from the root of your graph. It fetches that, it parses it, it finds all of the other references within it, it resolves their specifiers, and then it loads all of those references, figures out all the specifiers in those modules, and recursively grabs every single module that's in your graph. That then goes to a linking phase, which needs to have the graph in memory, and then in pre-order traversal, so going from the root down, it links your graph, handles cycles, and makes sure that the graph is laid out properly. So let's talk about pre-order traversal. 
So if this was your graph, root, A, B, C, D: in the fetch phase, the first thing that would happen is we would load the root. We would parse the specifiers of the root, which would say, hey, we need to load A and B. It would then fetch and load A. It would parse A and say, hey, A needs C and D. It would then fetch C, parse it, and know that it has no more things that it needs to load. It would fetch D, and then finally it would fetch B. And then it would go through the same order again in the linking phase. I didn't do an example with cycles, but bear with me on that. Now, execution requires that you have a linked graph in memory, and it's done in post-order traversal. And this is one of the things that is both extremely intuitive and extremely unintuitive at the same time. If we wrote this graph in CommonJS and had all of our require statements at the top of each module and all of our module.exports statements at the bottom, we'd have very similar behavior. But because CommonJS is inline and synchronous, if you had any statements in between the require calls, you'd have slight differences, because ESM will always execute in this order, where the first node that executes is the bottom-most, left-most node. C will execute, because it has no other dependencies, and it will then export its symbols. Then D will execute and export its symbols. Then A, because its dependencies are now available, will similarly execute and export its symbols. Then B will execute and export its symbols. And then finally, last but not least, the root can execute. What's fundamentally different about this compared to CommonJS is there's no way that the root will execute anything before the rest of the graph is done. Whereas in CommonJS, you could have a whole bunch of statements before a require call, and all of those would be guaranteed to execute first. Now, I made a bunch of strong claims. Top-level await makes this a little bit less clear. It changes the synchronous execution guarantees. 
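The fetch and execution orders just walked through can be sketched with a toy traversal. This is only a model, not the real linker; the graph shape (root needs A and B, A needs C and D) matches the example:

```javascript
// Toy model of the ESM phases for the graph root -> [A, B], A -> [C, D].
const graph = {
  root: ['A', 'B'],
  A: ['C', 'D'],
  B: [],
  C: [],
  D: [],
};

// Fetch/parse happens in pre-order: a module is visited before descending.
function fetchOrder(name, seen = new Set(), order = []) {
  if (seen.has(name)) return order;
  seen.add(name);
  order.push(name); // fetched and parsed first
  for (const dep of graph[name]) fetchOrder(dep, seen, order);
  return order;
}

// Execution happens in post-order: a module runs only after its dependencies.
function execOrder(name, seen = new Set(), order = []) {
  if (seen.has(name)) return order;
  seen.add(name);
  for (const dep of graph[name]) execOrder(dep, seen, order);
  order.push(name); // executes and exports its symbols last
  return order;
}

console.log(fetchOrder('root')); // [ 'root', 'A', 'C', 'D', 'B' ]
console.log(execOrder('root'));  // [ 'C', 'D', 'A', 'B', 'root' ]
```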
Specifically, in this graph, if we said that C had a top-level await, C would defer. D would execute and export its symbols. If C had still not finished resolving, then B would execute and export its symbols. And then finally C, and then A, and then the root. We do bookkeeping within the VM so that disconnected graphs are able to execute. Generally, if you have top-level await, you'll continue getting the execution order that's expected, except for the parts of the graph that deferred. But this loader is not entirely specified, as I mentioned, and it's implemented by the embedder. And as I mentioned, you can't simply convert CommonJS to ESM. And just a little bit more of a difference: the specifier resolution algorithm that I mentioned is also not specified, and it's different. TC39 leaves it to the host environment to determine how you take a specifier and turn it into a resource. Node.js and CommonJS have a very specific resolution order, which many of us are probably familiar with. It supports bare imports, which are great because they allow you to npm install stuff and then get it by the name you know it by. It allows you to import JSON; it can support that and understand what JSON is. It allows importing native dependencies. It allows optional file extensions: you don't need to include the file extension for the module that you're importing. It also allows you to require a directory that has an index.js in it. If you're ever curious to learn more about the CJS resolution algorithm, look at modules.md, the modules documentation inside of the Node repo; there's actually spec text outlining the resolution algorithm in CommonJS. Now, this is a problem, because if we're talking about wanting interoperability with another platform like the web, well, the web doesn't support bare imports. So right away, from that first point, we already have a divergence. 
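One way to build intuition for that deferral is to model each module as an async function that awaits its dependencies. This is only an analogy for the real module semantics: here C's top-level await is a timer, and D and B, which don't depend on C, keep going without waiting:

```javascript
// Toy model: C has a top-level await (the timer); D and B don't depend on C,
// so they run immediately; A and root wait on all of their dependencies.
const order = [];
const delay = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

const C = (async () => { await delay(20); order.push('C'); })();
const D = (async () => { order.push('D'); })();
const A = (async () => { await Promise.all([C, D]); order.push('A'); })();
const B = (async () => { order.push('B'); })();
const root = (async () => { await Promise.all([A, B]); order.push('root'); })();

root.then(() => console.log(order)); // [ 'D', 'B', 'C', 'A', 'root' ]
```

This reproduces the order from the slide: D, then B, then C once its await resolves, then A, then the root.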
But thankfully, there's a proposal happening at WICG, the Web Incubator Community Group within the W3C, called import maps. Import maps would allow you to support bare imports. They would allow you to do deep traversal by file name, and deep module traversal by reference. This doesn't get you all of the niceties of the CommonJS algorithm; most notably missing are file extension resolution and directory resolution. But it gets us like 90% of the way there, which actually may bother people more. What's really great about this is it's a consistent thing. And this is what an import map would look like, a very straightforward import map, where you have moment and lodash, and it specifies the entry point in the node_modules folder. What's really cool about this format is that we can generate it from the package.jsons of the existing packages. So a tool like npm or Yarn or pnpm, or a tool you may write sometime in the future, can, at install time, install all your modules and generate an import map for you, so that the same node_modules folder can be supported both by a browser and by Node. But that's also kind of making the assumption that everything else works. If you want to try import maps today, you can use them in Chrome. If you go to chrome://flags, there's two different flags you can turn on: one is called built-in module infra, the other one is called import maps. You can try them today if you want to play with them and see what the future of modules in the browser would look like. As if that wasn't enough, code executes differently, too, between these different modes. There's this thing called a goal, which we talked about before. And you go, what? There are four types of goals, and when I list them, all of a sudden you'll be like, oh, that's what a goal is: script strict, script sloppy, ES module, and the Node.js goal. That's a really high-level, naive way of thinking about it. 
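A minimal import map along those lines might look like this (the exact file paths are illustrative, not the real package layouts):

```json
{
  "imports": {
    "moment": "/node_modules/moment/src/moment.js",
    "lodash": "/node_modules/lodash-es/lodash.js"
  }
}
```

With this map loaded, a bare import like import moment from 'moment' in the browser resolves to the listed path.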
There are old-school scripts where you didn't say use strict at the top; those have a particular grammar and a certain set of features available. If you say use strict, then other features are available. In the ES module goal, there are even more restrictions. And then the Node.js goal has a whole bunch of things that are specific to it. So when we talk about that top-level grammar, that's kind of the strict mode versus sloppy mode distinction, and certain things that are limited based on using that pragma. But the module goal has additional grammar changes on top of strict mode. So it's already got all of the things of strict mode. It no longer allows you to use HTML comments. I know many of you in the room may have just figured out that you can use HTML comments in JavaScript, so sorry to take it from you right when you got it. Await is a reserved keyword. This is actually really important, because await being a reserved keyword is exactly what carved out the space for top-level await to be specified. These divergences could potentially increase over time. In general, TC39 has a mantra of not breaking the web, so I don't think that we should expect any significant changes, but it is possible. And so it's possible to take code that runs in strict mode or sloppy mode without a problem, and it would throw when you run it in the ESM goal. But what gets really strange here is that ES modules don't have an in-source way of determining the goal. There's a pragma that you can use to switch between strict and sloppy mode, but there is no in-source way of doing it for modules. In the browser, if you're loading a module, you say type module in the script tag, and the browser knows to send it to the right parser. In the source itself, it doesn't exist; there is no pragma. So for Node, that became a challenge for us: how do we know the difference? And on top of this, there are other potential future goals that could exist: WebAssembly, HTML modules, web packaging, binary AST. 
These are all potential goals that could be imported in the future. Another thing that's different between the two is that ESM does not have a way to inject lexically scoped variables. CJS has lexically scoped variables: every time that you require a module, we wrap that module in a lambda and inject a whole bunch of stuff. I mean, now it's a little fancier; V8 has an API. But for a while, it was literally just string concatenation. Sorry. The lexically scoped variables that you're probably used to using include __filename, __dirname, require, and module. These are all things that are available to you as if they're globals, but they're not actually globals, because require is scoped, as are __filename, __dirname, and module, to the module that's running. And when Node executes your code, we instantiate those objects based on the file path of your module and then inject them into the closure scope of the module when it's executing. ECMAScript modules have something kind of like this: it's called import.meta, and we're able to put meta information onto it. The primary one that we use in Node's ES module implementation is import.meta.url, which gives you the URL of the module that's executing. And from import.meta.url, you can recreate a lot of the things that were included in the CommonJS lexically scoped variables. You can figure out the filename. You can figure out the directory name. You can use an API we have called createRequire to make an instance of the require function. It's not perfect, but it gets us, again, 90% of the way there. And what's really exciting, and probably what you actually wanted to come and hear about, is the fact that you can use ESM in Node.js today. And you can even use it without a flag in Node.js 13, I believe 13.2, where we unflagged it. And here are some highlights about the implementation that we have. 
It's getting close to stable, but it is still experimental, and there still could be fundamental changes to the API. It is ECMAScript compliant, and that does make it harder for us to do all the things that we would like to do, but it also gets us closer to an environment where we can share these modules across various runtimes. ESM files need to use the .mjs extension. I know what you're thinking: unless you set the type to module in the package.json. Jeff, who's up here in the front row, was one of the people who helped get that going. So if you, in your package.json, say type module, then you can use .js for ES modules. What it does mean, though, is you can no longer use .js for CommonJS modules; for those you can use .cjs. There is support for bare imports, and support for dynamic import. Dynamic import is a function: you give it a specifier, and it returns a promise that evaluates to the module. Dynamic import, along with top-level await, is a cool way of recreating some of the dynamic require stuff, although it's not necessarily a best practice. But there are times when you may want to do that, like internationalization or lazy loading in the browser. There are reasons why you may want to. We also support import.meta.url. But there are some limitations to our implementation. The full file path is mandatory. And maybe full file path is not the right way to put it, because you can use relative paths, but you have to include the extension if there is an extension. There is no support for folder resolution: you can't import a directory that has an index.js or an index.cjs in it. If you import a CommonJS file, there are no named imports. Trust me, we've tried to make this work in a way that would be spec compliant, but it is, at the moment, not possible. 
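Dynamic import as described is just a function call on a specifier that returns a promise for the module namespace; a minimal sketch, loading a built-in so it runs anywhere:

```javascript
// import() works in both CommonJS and ESM files and resolves asynchronously.
import('path').then((pathModule) => {
  // The namespace object exposes the module's exports.
  console.log(typeof pathModule.join); // "function"
});
```

Inside an async function, or with top-level await in an ES module, the same thing reads as const pathModule = await import('path').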
We're going to continue to try to see if we can figure out a way to make that work, but we do have some cool things that we've done, which I'll show you in a minute, that can emulate this experience with just a little bit of configuration. There's only experimental support for importing JSON; you need to include a flag. We are working on standardizing that upstream, so that's some stuff we are still working on. And there's no support for require of ESM. This is not something that is easily accomplishable. There are members of the team who are exploring the space and actively interested in making it work, but it is a bit of an uphill battle, because moving from a synchronous system to an asynchronous system and back to a synchronous system creates something known as zebra striping, which you can Google and you won't find any information on, but it's Zalgo. But we're still working on a couple things. We're working on package exports. We're working on conditional exports. We're working on self-referential modules, JSON modules, and WebAssembly modules. Package exports: this is unflagged in 13 today. I think it's one of the coolest things that we worked on. It's supported in both CommonJS and ESM in 13. It allows you to specify the exports of your module, which creates a public and private interface for your module. For this specific module, the dot is the equivalent of main. So if you imported or required it, you'd get the index.js. And if you imported module-name/cjs, you would get the index.cjs. If you tried to deep import any other file, it will throw. So this is a huge departure from how things had worked before. It allows you to define the interface of your module. It allows you to make sure that people are not going to go deep in and grab things that you don't want them to. It is possible with the export map, though, to use an asterisk or a directory mapping, so you can still recreate part of the deep traversal system. 
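An export map along the lines being described would look something like this in package.json (the package and file names here are just for illustration):

```json
{
  "name": "my-module",
  "main": "./index.js",
  "exports": {
    ".": "./index.js",
    "./cjs": "./index.cjs"
  }
}
```

require('my-module') or import 'my-module' resolves the dot entry to ./index.js, 'my-module/cjs' resolves to ./index.cjs, and any other deep path throws.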
But it would be more defined, more of an explicit interface. You'll also notice that there is a main here. That main will get shadowed by the dot: in a version of Node that uses exports, it will never load the main. But what the main does allow you to do is have legacy support for versions of Node that are pre-exports. So you can transpile a version of your package that will work on older versions of Node, throw that entry point into the main, and now you can support both new Node and old Node. Next, there's an experimental thing that we're really excited about called conditional exports. You'll notice here that we have the dot, and then we have a module entry, a node entry, and a browser entry. Node will only support the node and module entries: the module entry will work for the ESM loader, and the node entry will work for the CommonJS loader. Other environments can extend this object with whatever data they want; we're going to ignore it. But a tool such as Babel or webpack or Rollup could use the browser entry to know, hey, if we're in a browser environment, this is the entry you want. One of the things that's really cool about export maps is that you have an almost one-to-one ability to convert the export map into an import map. One of the things that we worked on really hard is that the module loader that we have is almost feature equivalent to the browser loader, once import maps land. So with all of this metadata, you could have a tool that generates an import map for the browser from the node_modules folder, covering all the files you would want to load. Then in the browser, you would just include the import map as a script tag, and you could be importing all the same modules, and package authors would be able to hide the implementation details of the differences between packages. In my talk earlier today, I talked a little bit about universal JavaScript and the difference between the API and the plumbing. 
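The conditional form, as it stood at the time of this talk (the condition names were still being debated, and the file paths are illustrative), would look roughly like:

```json
{
  "exports": {
    ".": {
      "module": "./index.mjs",
      "node": "./index.cjs",
      "browser": "./browser.js"
    }
  }
}
```

Node would pick module for the ESM loader and node for the CommonJS loader; a bundler could pick browser; conditions an environment doesn't recognize are simply ignored.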
This is, in my personal opinion, the exact same thing in action. It allows package authors to worry about the plumbing of how this all wires up, but from a consumer perspective, it just works. The conditional exports may change. We may be changing module to import and adding a require flag. There's an issue open; if you have feelings, check out the modules repo. We're debating it. Almost done here; thank you for your patience. This is one of the coolest ones, and I'm really excited about it. It's called self-referential modules, and you can turn it on with --experimental-resolve-self. Both conditional exports and self-referential modules can be turned on in 13 with the experimental modules flag. With self-referential modules, you can import yourself by your module name. Where this is extremely great, in my personal opinion, is for tests and for examples. You can actually test the interface that's exposed to people who are using your package. With export maps, this is extremely important, because it's the only way to actually test the interface that you're exposing with the export map. The other thing that becomes really cool, for examples, is you can now have code in your examples that can literally be copied and pasted and just work. Previously, you were limited to either, A, having examples that didn't work inside of your repo, or, B, having examples that used relative paths for the modules, or you had to do some gross things with symbolic links, which never works out well. To wrap this up quickly, though, we do have a roadmap for where we're going with things. The hope is to unflag conditional exports in early Q1 of 2020. There is the possibility that we may come up with another alternative way of accomplishing this, but it hasn't come up yet, so it is looking like conditional exports will be unflagged then. We are aiming to backport all of these changes to Node.js 12 LTS in Q1 2020, but still behind a flag, and to unflag them in Node 12 by Q2 2020. 
So the hope would be that at some point in the second quarter of next year, unflagged modules would exist in both 12 and 13. And we're all hard at work on this to ensure modules are a best-in-class experience in Node and a good base for universal packages. So thank all of you for being really patient as we figured this out. It's been a bit of an uphill battle, but I'm pretty excited with where we're at right now. It feels like we've really managed to cover a good chunk of the use cases. I still know there are things that we need to handle with loaders, especially when we're talking about things like testing or stubbing, but we're still working on it. If you have opinions, please come jump into the modules repo and help us figure out how to solve it. Again, this is a surfing dog. Thank you, I'm Myles.