I'm Myles Borins. I'm a product manager at GitHub and a contributor to the Node.js project. I sit on the Technical Steering Committee, and I've also helped extensively with our modules team. I also do standards work at TC39, where you may know me from such proposals as top-level await and import attributes. I'm joined today by Guy Bedford. Guy, would you like to introduce yourself?

Sure, thanks, Myles. I do a lot of open source work. The main projects I've been working on are SystemJS and JSPM, and I also do software consulting, and I've collaborated with Myles quite a bit in the past, from Node.js to TC39 as well.

Thanks, Guy. This session was titled Developer Fan Fiction: Modules Edition, which was heavily inspired by this tongue-in-cheek phrase I use sometimes, "developer fan fiction," which is where people are opening issues or writing speculative fiction about what the future looks like or what they think things will be. Sometimes these are also just called feature requests. Guy and I have been actively working quite a lot on the module system in JavaScript, both at the standards level at TC39, making sure that the fundamental building blocks are there and work, but also in the platforms, particularly Node.js, to make sure that the actual implementation is something people can use. This ranges from just making sure that a file can actually load based on a specifier to examining different formats. We find ourselves rather often actually having to write developer fan fiction, because we consistently run into problems that we can't solve the way we expected to. In particular, different runtimes have different requirements. The browser and Node have different security models and different fundamental infrastructures.
Even newer runtimes like Deno have completely different requirements because, for example, they aren't bogged down by the legacy of Node, and they're able to say, let's examine URL-based loaders rather than relying on package managers. Guy, do you have any insight into some of the places where you've found recently that you're writing some developer fan fiction?

Sure. I mean, when you're working in open source, half the joy of it is that you get to write this fan fiction. That's where the excitement and the interest is, because we all get to be a part of this. We all get to build these things. JavaScript is quite unique in that way. Many languages have a core team that decides what the experience of the language is and how it works in all these details. You're a user of it, a consumer of it, and you don't get much of a say. You can post those feature requests, but with JavaScript, there is no core team. There's TC39, which decides the specification of the language, but TC39's responsibility doesn't encompass the entire usage of the language. As you say, tools like Deno get to decide how they want to do things. Browsers get to decide how they want to do things. So all around, as we're using the language, we're writing this fan fiction. That's exactly what we were talking about earlier today: how tools like webpack have led to this explosion of what we're currently calling faux modules. That, in itself, is a form of fan fiction, of how tools have been able to shape how we use JavaScript and create those workflows for us. It was never decided from the top. It's sort of everyone working together to build the language.

Yeah, faux modules are one of those things I find particularly interesting, in that people have been writing ESM, and I want to put "ESM" in air quotes there, for years.
And I don't mean this in a derogatory way, but what is really interesting is that when you're using a tool like Babel or webpack, or even TypeScript to a certain extent, to write your modules, they're going through a build phase. And that build phase allows you to do some fancy things that you couldn't do otherwise. One of the things I find particularly interesting is, and this is a theory of mine, I don't know that it's true, but I believe that a lot of people adopted ESM rather early simply because they wanted destructuring. Interestingly enough, named imports are actually a completely different feature in the language from destructuring in terms of how they work. But looking at the source text, import thing in braces from a module looks very similar to const braces thing equals require module. This probably would work a lot better if I actually had some text on screen, and perhaps when we edit this, I'll add a little writing on top, so it's not just me waving my hands. But obviously there are advantages to code splitting and tree shaking that you also get from named imports. Realistically, at least in very early versions of Babel, it wasn't even using the proper execution model that ESM specifies. It was more or less taking those import statements and just converting them into require statements. And there are all these interesting ways under the hood that ESM is subtly different from CommonJS, which has actually made the job of the Node.js modules team so much harder in trying to figure out how to get these environments to play nicely. Because the goals that we had in Node core were, A, ensuring that we had spec compliance, and, B, ensuring that we don't require any sort of transpilation or build step.
And so if you were using Babel, or webpack, or TypeScript, you can do something like make a named import from a CommonJS module, because you have that whole compile pass where you can sort all this stuff out.

This is why we call them faux modules. And honestly, I wasn't thinking about it when I came up with the title, but you're totally right: it was a fun kind of developer fan fiction from three years ago, when people were trying to speculate what it would look like to write modules in three years. We all know we still haven't totally figured that out. But I think maybe that's a really fun jump into loaders and module types, right? Because the specifier, the string that you're importing from, there are so many things we're used to doing with it, but none of it is actually standardized, like the idea of how you resolve a specifier into a resource.

Yeah, it's a huge gap, and TC39 goes only as far as saying it's a string. And then every implementation says, okay, we'll do the thing that seems natural to us to do with it. But everyone treats it slightly differently: Node.js uses file paths and browsers use URLs, and there are minor differences between those systems. And then, of course, that gets on to file extensions, which for Node.js has been, I mean, actually, full credit to you, Myles, because I never would have thought it would have been possible to remove the automatic file extension adding in Node.js. I thought that was something we would have to live with, this huge difference between the platforms. But somehow in this modules process, we managed to create the same resolution behavior between the browser and Node.js when it comes to relative specifiers.
I think getting those kinds of details right was so crucial for us to try to set a base for the language where people can use their code between these different environments and not suddenly run into a whole bunch of bugs and issues when things don't work between them. And yeah, I thought that was very, very cool that we could get that out of that process. I mean, is that something you were always thinking about in the back of your mind, these kinds of universal use cases?

Yes and no. Universal modules, or as some people like to call it, isomorphic code, just the idea that you can share code between environments, has always been near and dear to my heart as someone who grew up in the paste-some-JavaScript-in-a-browser-and-it-works world. The specific changes that we made to the Node resolution algorithm, specifically that in Node's ESM implementation, if you import a module, you need to have its full file path, it will not automatically resolve file extensions, and you can't import directories, that was actually inspired by Bradley Farias, who was the one who talked me into it. And I would also like to tip my hat to Deno on this a little bit. Ryan Dahl and I had early conversations about this as we were designing it in Node, and Ryan pushed for not recreating the Node file extension resolution. It was actually one of the core points in that JSConf talk you mentioned. And I would argue that Deno, in doing this, helped pave the path. Back to this fan fiction thing, not to drive the point home a little too hard, but there was a lot of conversation we had about how developers would respond to this: we have this future where we're removing this feature. And there were members of the team who very much loved that feature, still love that feature, and think it's a shame that we lost it.
And people come to me and say, you know, Myles, what do you think of Deno? You must hate it. I'm like, no, I love JavaScript. The more places to run JavaScript, the merrier. The more environments we have that share a similar ethos to Node in being a server-first runtime, as opposed to browser-first, the more opportunity we have to think about standardization and correlation across runtimes. I love the fact that Deno helped pave the path here, and there are a number of examples where having another runtime like Deno, one that isn't tied to the same degree of legacy, actually allows us to move forward faster. The fact that Deno shipped this and that people were not up in arms was something we could point to as a reason to do it in Node, and not just, hey, the browser is not going to do this and Deno is not doing this, so I really think we shouldn't. And we were able to get it through.

One of the other interesting things you briefly touched on is the inconsistencies between these environments. Node for a long time has had this thing called bare imports, where the specifier is neither a URL, nor a relative path, nor an absolute path. It is just a string, and Node has a whole algorithm using package.json and the node_modules folder to determine how you turn that specifier into a path on disk that you can load. And, you know, a future I would love to see would be one where you can npm install something on your system and... can you hear those sirens right now?

Yeah, welcome to New York City, USA, home of the sirens.

It was nice and dramatic as you were getting to your point there.

I mean, I thought it worked pretty well. Well, this is what happens when you do it live. The point I was getting at was that browsers don't have any concept of bare specifiers at all.
And this has been a challenge: if you npm install some module and then try to import it in the browser, it's just not going to work. You're going to have to import from the node_modules folder directly, and if that dependency doesn't have any dependencies, it might work. But the second it refers to another dependency by a bare import, it breaks. And so there's this technology in the process of being standardized called import maps. Guy, maybe you could tell us a little bit more about that?

Yeah, so import maps have come out of many years of spec work and discussion about how you resolve modules in the browser. It builds on top of that base Node use case you just described, where as a user, you just want to import the package or the code that you're loading. How do we do the same kind of thing in the browser? How do we enable the browser to just load a package, where you don't have to copy and paste a URL or look up a URL somewhere? And then, as you say, the dependencies also need to do that, so there's this iterative process that has to happen if you want things to depend on each other. Import maps grew out of what was originally the idea that you would be able to hook the resolve function in the browser, and then it simplified down into: let's just have a map. The closest we had to that in the past was probably something like the RequireJS configuration, an old loader for JavaScript, where you could write this map configuration and point names to different target paths. Import maps are in many ways quite similar to that, and in other ways quite different. But the core principle is the same: you write a JSON object that has an imports field in it, and then you can just write a dictionary of your packages and the URLs where they can be found.
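As a concrete sketch of what that map looks like in a page today (the package name and URLs here are hypothetical, purely for illustration):

```html
<!-- The map: bare specifiers on the left, URLs on the right.
     A trailing slash maps a whole subpath prefix. -->
<script type="importmap">
{
  "imports": {
    "lodash": "/node_modules/lodash-es/lodash.js",
    "lodash/": "/node_modules/lodash-es/"
  }
}
</script>

<!-- Module scripts can now use the bare specifier directly -->
<script type="module">
  import { shuffle } from "lodash";
</script>
```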
And it also permits subpath mappings, which is a very interesting use case as well, and touches on tree shaking and features like that. But import maps have been under development for quite a while, and there still seems to be a little bit to work out. They're shipping in Chrome today under the Experimental Web Platform Features flag, and I think there's been a lot of wider platform interest in them as well, with projects like Deno also adopting import maps as the way they want to resolve their specifiers. So, just to touch on it briefly, both Deno and Node.js now use URLs in their module systems, like the browser, and so import maps will naturally have the same semantics when applied in all these environments, which I think is very cool.

Yeah. One of the things that we adopted in Node in our ESM implementation, and I believe this started as a proposal from Jan Krems, but correct me if I'm wrong, I know you and some other people were also actively involved, and of course lots of discussion goes into these things, is called package exports. It's a new field in the package.json called exports, where you can define the external interface for your package. You're probably used to putting in a main or a browser field when you're writing a package that's going to be consumed. The algorithm Node has when you import, say, lodash is: it goes into the node_modules folder, looks for a folder with the name of the specifier, looks for its package.json, and then looks for main. This is actually specified in our documentation; you could look it up and see how the resolution algorithm works. It's fun for me, maybe not for you, I don't know. I like reading specs sometimes. Without package exports, you can deeply traverse into a module and grab any file from anywhere in the module.
With package exports, you're able to define that interface. So you can say "./deep-module" and map it to a path, and when someone imports your-module/deep-module, it will resolve to that. Part of the reason this is so powerful, beyond the fact that it's cool to have this public/private interface for your package, which is a great programming tool, is that it makes every single specifier within a module, assuming that you're writing it this way, absolutely static. And this is something that plays really nicely into import maps. What it means is that for anyone who's consuming your package and specifies something, we can completely statically resolve the path of all of those specifiers. And internally in your module as well, all of the specifiers that you write are also statically resolvable. Now, this is making the assumption, of course, that you're writing a tree that is all ESM or written in a static subset of CommonJS. This is something you could probably lint for, and a way in which npm or other package managers could give extra signal that packages are written this way. But the magic of it is, if you had a tree in the future, so now I'm doing my speculative fiction, it's a five-year fiction, but in five years, let's say you have a tree where all the modules have this package exports field defined. A tool could go through at install time and completely generate an import map for you from your node_modules tree. And the exports map also has this thing called conditional exports, where for each entry point you can specify an export for particular runtimes. So at install time you could generate a browser import map or a Node import map, and then you could have the exact same generic node_modules folder, completely static, no translation, and have different code paths depending on the import map that you're using. And there's an awesome tool out there right now called Snowpack.
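A sketch of what such a package.json might look like, with a hypothetical package name and file layout, combining the subpath export and the conditional exports just described:

```json
{
  "name": "my-pkg",
  "exports": {
    ".": "./index.js",
    "./deep-module": "./src/deep-module.js",
    "./feature": {
      "browser": "./src/feature-browser.js",
      "node": "./src/feature-node.js",
      "default": "./src/feature-default.js"
    }
  }
}
```

With this, importing my-pkg/deep-module resolves to ./src/deep-module.js, while any internal file not listed is encapsulated and cannot be imported from outside the package, and an install-time tool could read the "browser" or "node" conditions to emit an environment-specific import map.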
It's part of the Pika project by Fred Schott, and it's experimenting a lot with, I mean, they're doing transpilation and making a new folder called the browser folder, and they've gone in a slightly different direction, but they are playing with tools that can generate import maps. They're playing with import maps as a way of allowing for bare specifiers inside the browser. And it's really cool seeing these kinds of tools, in a way, writing their fan fiction of what they'd like the future of development to be.

Yeah, it's a very, very compelling use case to be able to see the start of package management ported to the browser. And I think it was amazing that, as we were working through some of these problems in the Node.js modules group, we were able to look slightly wider than Node.js again, look at import maps, and see what was going on there. Because when we were discussing this in the original days of the modules meetings, import maps were still relatively new. I think they're much more widely understood today, but we were very much having to keep an eye on these new technologies and then try to come up with something that was compatible with them. What's very cool about exports is that we ended up designing it in a way that works quite naturally with import maps. The way you define your package boundary in your package.json with this exports field very naturally matches the way you would want to define the import map for that package. And they sort of have converse names, import and export: one is how the package defines itself to be consumed, and the import map is how you, as the consumer, are consuming packages. And the encapsulation was such a huge feature as well. Again, with these kinds of things, I never quite know who to credit, because these ideas get dropped in and you never know quite who is behind them.
And I guess when you think these things through, it's always nice to imagine it's one's own idea, but, you know, we're all discussing these things and things get really mixed up. As far as I'm aware, Rob Palmer was the original proposer of the exports encapsulation.

Oh, really?

Yeah. They've been using something similar at Bloomberg, and they've valued encapsulation in their internal workflows. Certainly correct me if I'm wrong, but this is as much as I've been able to glean from the process. And that encapsulation is huge, because normally when you publish a package to npm and change one internal module, maybe you didn't realize it, but a user was importing that module and relying on its interface. So it really makes the package boundary very well defined. It also enables optimizations, which is cool, because now you can optimize that package and load fewer files. You could be removing unused exports and doing tree-shaking-like optimizations to the package based on the exports field. And that's something I've been exploring recently with the latest release of JSPM: how we can optimize this exports field. What it does at the moment is automatically treat all of the definitions in the exports field as the entry points of the package in a Rollup code-splitting build. It does a code-splitting build, and then you get the chunking and the code sharing and the minimum number of modules, but it's part of your actual package management process. So it almost becomes an implicit build process. It's no longer something where you have to define the configuration for the build yourself; you don't have to go into Rollup or webpack and say, I'm building these files. It can automatically try to optimize those individual packages for you, which I think is another step as we try to get these individual package import maps running in the browser.
Very close to smart bundles.

Yeah, and doing it automatically based on that information, because the exports field is exactly the information you need to be able to optimize the interface of a package. So there's lots of interesting tooling that can build on top of that field. And this is, yeah, I think a great success of that process, because I never would have imagined it would have been possible to ship that field; I never would have imagined proposing it, personally. So it's very nice to see that these things could emerge out of this process.

Yeah. I guess one of the biggest differences, and I think we're getting close to time...

Oh, sure.

I had some closing thoughts. One of the things I think is rather interesting is that if you look at the way people have been writing JavaScript module systems for the last couple of years, they were all quite dynamic. And one of the real advantages that ESM has, in my personal opinion, over CommonJS is the fact that it is a static module system. It has these phases. You can do this kind of introspection and do smart things with it because of its static nature. And I feel like these various technologies we've been talking about have been leaning into the fact that it's static. We're talking about all this static metadata, to a certain extent. Unlike Node, unlike Browserify, unlike Babel, which had all these dynamic build steps. Because if you need a bundler, at that point... I gave a talk last year where I talked about nihilistic transpilation, and it's this idea that once you have to have a transpilation step, it's already dynamic, so you may as well just keep strapping things onto it.
But when we start from this static core and add more static insight, we end up with this really nice result, as you're saying, where we can implicitly determine all of these things because of the static nature of it, and end up with something that is much more flexible.

And I think what it comes down to as well, or at least what we're trying to do with this eventual goal, is making it easier for users to optimize their packages and easier for users to configure their build systems. In lots of ways, the complexity around JavaScript tooling has been this sort of Cambrian explosion of methods and ways of doing things. There are both benefits and costs to that. The benefits have been huge in exploring all these different workflows, but maybe now we're moving into a phase where there's a little bit more consolidation between tools. I mean, we've been having discussions with Tobias on webpack, and we've been having discussions with Rollup, and we've been bridging a little bit more between tools and trying to build conventions that can allow things to be a little bit more implicit and put less overhead on the user to get every exact configuration right. And I think that's the hope with this. There's still a lot of work to go, certainly.

Yeah, I guess a closing thought for me: I used to work at this startup, and one of the claims was this concept of a short time to wow. When I think about what I loved about web technologies, and I don't come from a traditional background, the thing about the web and the browser and JavaScript in particular that enabled and empowered me was that it was so intuitive and quick to get started. And I feel like we as an ecosystem have had to layer on a lot of complexity to allow for advanced productivity.
But the result is that we've lost a lot of that short time to wow. And I know we're talking here about a lot of, well, it's not nothing, all these extra technologies that we're talking about. But I do really hope that in three or four years, a lot of these can be baked in and generated enough that we can get back to that kind of experience where it's like, hey, here's a link, and now I'm going again. I'm personally just really excited for that future. So, Guy, thank you so much for joining me today and participating in this. And to everyone who has tuned in, thank you so much for watching. We'll be around for questions in a bit. Have a good one.