Hi, everyone. My name is Beth Griggs, and I'm a senior software engineer at Red Hat. I joined Red Hat not long ago when I moved over from IBM, and I'm excited to be here at my second virtual OpenJS World conference. Today, my talk is titled "Node.js: the New and the Experimental". As part of my role at Red Hat, I help to maintain the Node.js runtime. I'm a Node.js Technical Steering Committee member, and I'm particularly active in the Node.js Release Working Group, so I'm often spending my time looking at the content of, and producing, the releases of the Node.js runtime. Some of the other work I'm involved in at Red Hat includes contributing to cloud development tooling and helping to build out a reference architecture for enterprise Node.js applications. Today, I want to spend some time talking about how new features get into the hands of Node.js users and why some features land as experimental first. In the latter half of the talk, I'll touch upon a few recent new and experimental features that have landed in the runtime. Node.js is an impact project under the OpenJS Foundation. Formerly, the project was under the Node.js Foundation, but it moved under OpenJS when the JS and Node.js Foundations merged back in 2019. The OpenJS Foundation is a neutral home for around 30 JavaScript projects, from the likes of jQuery and Electron to Node-RED and Node.js itself. And the Foundation values open governance and transparency, which is why it was a natural home for the Node.js project. You may be surprised to learn that the Node.js project has no formal roadmap. There's no prioritized task list, there's no single corporate sponsor, and the project is completely decentralized. Generally, the features and changes that get added to the runtime are born out of the interests and requirements of our contributors. But what's it like not having a roadmap? Well, I spent a short while trying to come up with a good analogy, and I thought this one was appropriate. This is a famous roundabout, or turning circle, in the UK.
It's actually made up of five smaller roundabouts all arranged in a circle, and as you can see, it handles a heavy flow of traffic heading in all different directions. Some cars are heading in the same direction, others not, and occasionally, when cars' paths cross, there are hold-ups. And I think this somewhat represents feature development in Node.js. There are a lot of contributors and a lot of activity from all different areas. Some folks are heading in a similar direction, whereas others are working on distinct parts of the project. And inevitably, sometimes things are held up, be that for technical reasons or because consensus needs to be reached on some details. The main takeaway is that there's a heavy flow of activity in the project, but folks are heading in numerous directions, so it's not always obvious what is coming next. So how can you find out what's in the pipeline? Well, there are a few ways you can keep track of what's happening in the project. The project has a Medium blog, which is nodejs.medium.com, where we post any of our major announcements. There's Twitter: Node.js has its own Twitter handle, but many of the active Node.js contributors are also active on Twitter, so that's also a good place to follow what folks are working on. And you can probably find some of those folks speaking at this conference too. And also GitHub: you can follow the project on GitHub and its many repositories. But a word about the GitHub notifications: if you do subscribe to nodejs/node or any of the other repositories, be warned, there is a lot of activity. It's really hard to keep up; you can get several hundred notifications a day. And admittedly, I declare notification bankruptcy every week or so and just mark everything as read. I really do rely on the "participating" notifications to keep track of what I need to address. So rather than keeping up with the mass of notifications, there are some specific efforts you could follow.
So despite having no roadmap, there are still longer-term efforts in planning within the project. For example, we have working groups dedicated to certain areas of the project, and these groups act as task forces for pushing certain subject areas forward. For example, in the past we had a modules effort focused on driving the ECMAScript modules implementation. And as a note, if you're looking to get involved in the project, a good way is to look at these working groups, see if any of them align with your interests, and join one of them. They all have GitHub repositories with their own tasks and issues, and many of them also have public, streamed video meetings that you're free to join and participate in. As well as teams and working groups, the project also has what we call strategic initiatives. These are agreed initiatives that the project hopes to make progress on. Again, there's not really a priority list or a deadline; these are just some important goals we've listed, and we think it'll be useful to track the status of these over time and try to figure out ways we can help address these initiatives. At the moment, our technical initiatives include promisifying the core APIs, V8 currency, QUIC, improving startup performance, and some initiatives around build resources and the future of the build toolchain. And similarly, under the Community Committee, we have some more community-focused strategic initiatives, and these include things like internationalisation, mentorship, and others such as outreach and website redesign. Again, these are good areas to look at if you're interested in getting involved in the project. And one I'd like to call out in particular is the Next 10 effort. This team came together last year to focus on making the next 10 years of Node.js as successful as the last. So what we've done within this group is spend some time looking back on the successes of the project over the past decade.
And we've also tried to determine the values and constituencies of the project. The idea is that we use these as a lens to analyse future development efforts, and it may even help us identify which new features would bring the most value to Node.js users. The Next 10 group did put out a survey a couple of months back to confirm and validate that the constituencies and values we defined for the project did align with those of our users. So we put this survey out, and the results are now back in. The Next 10 team is at the point of going over the survey results, figuring out where we align and figuring out where we need to adjust. So again, while we do not have a roadmap, we do look at things like survey results, user feedback, and many other aspects to indicate the future direction of the project. So those are some ways of keeping up to date with the project, but typically our users will only find out about new features when they get into their hands via new releases. And there's a typical flow in which you can expect new features to arrive in Node.js releases. Node.js has a predictable release schedule. We always have two major releases per year, with the even-numbered release lines being promoted to long-term support. The even-numbered releases are always released in April and promoted to long-term support in the following October. Within our release schedule we have three defined release phases: current, active long-term support, and maintenance. It's during the current phase that a release line will pick up most of the non-major changes that land on the Node.js core main branch. During the active long-term support phase, only new features, fixes, and updates that have been audited by the LTS team and been considered appropriate and stable will land. And in maintenance, that tends to be limited to critical bug fixes and security updates only.
We rarely have new features, but we might if we come to the agreement that adding a new feature is beneficial and will help support migration to later release lines. So you can expect to pick up the newest features first in the current release line, which at the moment is Node.js 16, and typically for current you can expect about one release every two weeks. After some time, you can then expect the new features that have landed in the current release line to come back into long-term support. But not all features will make it back into long-term supported release lines. Some will be considered too unstable to be brought back, and sometimes the code delta between the release lines is just too large to be able to feasibly bring a feature back. And if you're interested, the Node.js Release Working Group does keep a draft schedule of the upcoming releases in the Node.js Release repository on GitHub. So if you ever just want to gauge roughly when the next release will be on each of the given release lines, check out these issues and you should get an indication, all of them, of course, subject to releaser availability. Two things I'd like to call out from the release schedule: Node.js 10 is now end-of-life. It went end-of-life at the end of April 2021. So if you're using that, be aware that there are no more security fixes being delivered to that release line, and if you are on it, you should start planning to upgrade. And in case you missed it, Node.js 16 was released in April. Node.js 16 is at the moment our current release line, but it will be promoted to long-term support this coming October. And if you're interested in learning what was included in the Node.js 16 release, you can check out the release announcement on the Node.js Medium, which details some of the new features and highlights of the release. One of the highlights, which isn't really cast as a new feature, is that Node.js 16 marks the first release line where we are shipping pre-built binaries for Apple Silicon.
And I'd just like to call out that the Build Working Group invested a lot of effort into this. It requires a lot of hidden work to get these binaries built and made available to our users, from finding donors or hosts for our build hardware, to configuring the machines using Ansible, to integrating those machines into our continuous integration farm without disrupting any of our existing pipelines. So I'd just like to thank MacStadium and also NearForm for helping us source these Apple Silicon machines and hosting them for us, and to call out the hidden effort that goes into keeping the project running, which may not necessarily be reflected in PRs or commit lists. So that's a bit about how you can follow what's coming next and how features end up in the various release lines. What about new features? How do you know when a feature is safe to use in your production applications? Well, the Node.js project provides a stability index: throughout the API documentation there are indicators of stability for each API. At the moment there are four stability levels: stability 0, deprecated; stability 1, experimental; stability 2, stable; and stability 3, legacy, which has only recently been added. So digging into those: for stability 0, deprecated, an API can be documentation-deprecated or runtime-deprecated. As you might expect, a documentation deprecation is just that, an indication in the API docs that a given API is deprecated. And this is an example of a deprecation, Socket.bufferSize: it includes a deprecation ID and the version that introduced the deprecation, and documentation deprecations can land in minor releases of Node.js. It's good to be aware of these, because deprecated APIs may be removed in future versions of Node.js, and because documentation deprecations are just that, written in the documentation, it can be difficult to know or follow what is being deprecated over time.
Node.js does provide a --pending-deprecation process flag, and some documentation-only deprecations will trigger a runtime warning when Node.js is launched with this flag. So if you want to see whether you're using any of the deprecated APIs, you can start your Node.js process with this flag, and you should start to see warnings coming through if you are using an API that's marked for deprecation. We also have runtime deprecations. A runtime deprecation will, by default, generate a process warning that will be printed to standard error the first time the deprecated API is used. This here is an example of one that may be familiar: the unhandled promise rejection warning that you may have seen in Node.js versions 14 and under. And because a runtime warning is an observable change, it actually prints something to standard error, we treat all of these as major, or breaking, changes. This is because it could break anyone who's running their tests and checking standard error output. So generally you should only see new runtime deprecations when upgrading to a new major version, and ideally an API will first be documentation-deprecated and then elevated to a runtime deprecation. And there are a couple of further deprecation-related flags. We have the --no-deprecation flag, and what this does is silence all deprecation warnings. You should really be cautious using this, as you're likely just silencing a problem that will need to be fixed. A deprecated API that's emitting a warning may end up being removed, so by using this regularly you're essentially just kicking the problem down the road. And there's also --throw-deprecation, which takes the more extreme stance of throwing an error when you hit a deprecation warning. And I personally would treat these as kind of like a smoke test: I just occasionally run my apps with them to see whether any new deprecations have been introduced that will impact my project. Then we have the legacy stability status.
The legacy status covers APIs that we want to discourage the use of without breaking the ecosystem. So if you are writing new code, you shouldn't use the legacy APIs. But if you do have applications that are still using these APIs, you can be confident that the project is unlikely to remove them. APIs that have been designated as legacy include a number of the assert APIs, such as deepEqual. This is because you should be using the strict version of each of these APIs where possible. There is a slight caveat: if you are using the strict mode of assert, then it's okay to use deepEqual; it's only the non-strict-mode APIs that are marked as legacy. But again, they're designated as legacy because we don't plan to remove them, as we think it would cause too much disruption. Also the atob and btoa APIs, they're also legacy. And then some others include process.hrtime(), the querystring module, and the legacy URL API; for the latter, you should be using the newer WHATWG URL API instead. So, on to experimental features. The Node.js project's stability index states that experimental APIs may change behavior. The traditional semantic versioning contract that we try to adhere to everywhere else does not apply to experimental features, even in long-term supported releases. And this is why we tend to say use experimental features with caution, especially in production workloads, because the API may change even in LTS. So why do some features land as experimental and others don't? Well, in some cases the most suitable API design may not be agreed upon up front. We may want to get almost a draft of the feature out there so we can get user feedback and evolve the API accordingly. Essentially, we just don't want to lock the API definition in too soon. And there are a number of whole core modules in Node.js that are still designated as experimental. And when I say core modules, I'm referring to the ones that are built into the runtime itself, like fs and http.
And all of these are still experimental as of Node.js 16.1.0. So one of the first experimental core modules is the async_hooks module. The async_hooks module provides an API for you to track asynchronous resources. What's an asynchronous resource? These are things like promises, timeouts, and immediates. And in particular within this experimental module I'd like to call out the AsyncLocalStorage class. This class is used to create asynchronous state within callbacks and promise chains. And this example simulates how it might be used within a web server. So you'd initialize the AsyncLocalStorage. Within our server handler we call asyncLocalStorage.run(), and what this will do is create an async local context for each flow. And so within our logRequestWithId function, when we call getStore(), it will be able to pull out the ID of the request. So getStore() returns the value specific to the async flow; the ordering is irrelevant, it will always get the right one for that async context. And this is particularly useful for application performance monitoring. And there are also discussions within the project at the moment about whether it's the right time to elevate AsyncLocalStorage to stable status. Another experimental module is diagnostics_channel. The diagnostics_channel module provides an API to create named channels to report message data for diagnostic purposes. The intention is that you create numerous channels to report your messages through, and then you can subscribe to receive those messages. And the use case for this is, say you have an app that runs SQL queries: you could create a dedicated channel for those to be pushed to, and you could subscribe to that same channel to receive them. And just to call out, this was recently backported to the Node.js 14 release line, so the latest Node.js 14 release will include this. And then we have the experimental inspector module.
As the name suggests, this module provides an API for interacting with the V8 inspector. And in this example, we're using the inspector module to interact with the inspector programmatically to profile the CPU. The way you use this is: you require the module, you initialize an inspector session, you enable and start the profiler, you then run the code or task that you wish to measure, and then you stop the profiler and write the results somewhere. Similarly, we have the experimental trace_events module. This module provides a mechanism for you to centralize tracing information generated by V8, Node.js core, and even your own application code. And again, it's a case of importing the module and enabling and disabling the trace at the appropriate points in your code. The output of this will be a trace log file that you can open in the Chrome tracing window. And then we have the experimental Web Crypto API; this is an implementation within Node.js of the Web Crypto API. We also have the experimental WebAssembly System Interface (WASI) core module, and this provides an implementation of the WASI specification. So that's the experimental core modules, but we also have some notable individual APIs that are designated as experimental. As you may know, the Node.js ECMAScript modules implementation is now stable in all currently supported versions of Node.js: it's stable in Node.js 12, Node.js 14, and Node.js 16. But although the underlying implementation is now stable across the release lines, there are still some specific APIs related to ESM that are still designated as experimental. This includes the loaders API. The loaders API is used to customize the default module resolution algorithm: you can optionally supply your own custom loader, and it will customize how ECMAScript modules are loaded. Note that it will not change anything to do with how CommonJS modules are loaded.
And there's actually been a new team kicked off within the project to focus on defining the loaders specification, so that eventually we can reach a stable implementation. Two of the other experimental ESM APIs are JSON modules and WebAssembly modules. And then we have one of the most anticipated features that falls under the remit of ECMAScript modules, which is top-level await. This feature allows you to use the await keyword at the top level, outside of an async function, within modules. And this follows the ECMAScript top-level await proposal that went through TC39. This means that ECMAScript modules can await resources, causing other modules that import them to have to wait before they start evaluating their body. And what are the use cases for this? Well, these are some of the use cases called out by the TC39 proposal, and they include dynamic dependency pathing, resource initialization, and also dependency fallbacks: you can try to import x, and if that fails, import y. And then we have policies. Policies are a security feature intended to allow guarantees about what code Node.js is able to load. You can create a policy manifest, which will be used to enforce constraints on the code loaded by Node.js. In this example policy file, we're adding an integrity check for the file check.js. And this is inspired by the browser's security mechanism for enforcing resource integrity. This means that when you try to start your process, these checks will happen, and an error will occur if a check fails. And this has actually been in Node.js core for a couple of years now. Then there's a range of other experimental APIs. This includes the buffer Blob API, the events captureRejections option, some of the module-related APIs under the vm core module, and BroadcastChannel within the workers implementation. And a number of these experimental features are hidden behind process flags; this is to make them opt-in.
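As a sketch of what a policy manifest along those lines can look like, here's a minimal example; the integrity value is a placeholder, not a real digest:

```json
{
  "resources": {
    "./check.js": {
      "integrity": "sha384-<base64-encoded digest of check.js goes here>"
    }
  }
}
```

You would then start Node.js with the experimental policy flag pointing at this manifest, for example node --experimental-policy=policy.json app.js, and the process will error if check.js does not match the recorded integrity value.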
So to use some of these features, you will actually need to start your Node.js process with the corresponding flag, for example the --experimental-loader flag. And at some point features may be unflagged, and that may or may not coincide with them being marked as stable APIs. And then we have some very experimental features that are hidden behind build-time flags. To use these features, you'd have to compile Node.js yourself from source. And why do we expose some features in this way? Well, it's predominantly so we can work on crafting the very early implementation details of a feature, and have it built and tested in our CI, without it yet being released. We do this where we anticipate a lot of change to happen to the feature, or we just consider it too unstable for general use. Finally, we have stable features. For stable features, semantic versioning applies. You can have confidence that we'll try to keep the API contract, so you should only experience breaking API changes when you upgrade to the next major version. There are some very rare exceptions, for example, if a security issue requires us to make a breaking API change, but we only do that where absolutely necessary. And how do features graduate from experimental to stable? Well, it's really when the contributors most involved in the feature have confidence in the API and don't believe that further major changes are likely. How can we get there quicker? Well, the more user feedback we receive on an experimental feature, the quicker we can gain confidence and consensus in the API structure. And it's worth noting that not all features will ever make it out of experimental. Some have already been in that state for several years, and some may end up being removed completely, never having made it out of experimental. This is again part of the reason why we suggest using experimental features with caution. And there are a number of new stable features in the latest releases.
These include AbortController. This provides cancellation and aborting of some of our promise-based APIs, and there's an ongoing effort to incorporate it across more APIs. It's also recently been backported to Node.js 14. We also have crypto.randomUUID(), which again has recently been backported to Node.js 14. We have npm 7, which features npm diff. npm diff allows you to supply two different versions of a package, and it will show you a git-like patch output of the difference between the two versions. And some of the recent versions of npm 7 also have improved workspaces support. We also have Source Map v3 support. Source maps provide a method for translating from generated source back to the original, and this is aimed at alleviating some of the observability challenges we have when folks are using alternative flavours of JavaScript, like TypeScript. And then we have the promisified timers API, which provides an alternative set of timer functions that return promise objects. And then we also have V8 9.0; it's via our V8 engine updates that we get the new JavaScript language features. Recent versions of V8 brought optional chaining to Node.js 15, and in V8 9.0, which is in Node.js 16, we gained the regular expression match indices feature. So if you're interested, I'd like to encourage you to take a look at, try out, and provide feedback on some of the experimental features in Node.js. Maybe don't rush to use them in your critical applications just yet. With that, I'd like to wish you all an enjoyable rest of the conference, and I look forward to joining you all in the Q&A.