Hey everyone. Today I will talk about developing high-performance Node.js add-ons with Rust and NAPI-RS.

First of all, let me introduce myself. I'm the creator of NAPI-RS, and I previously worked on the Vercel Turbopack team, and before that at ByteDance, LeetCode, and Teambition. I'm now leading the OctoBase team at AFFiNE, building local-first collaborative infrastructure for developers.

Recently we have heard that a lot of popular front-end tools are being rewritten in Rust, and they are all in the Node.js ecosystem. As you can see, they are all choosing NAPI-RS. So what is NAPI-RS? In simple terms, NAPI-RS is a bridge between Rust and Node.js. These front-end tools are performance-sensitive libraries, and they are choosing Rust to rewrite their slower JavaScript components.

So what features does NAPI-RS have that they cannot resist? First of all, NAPI-RS provides a very concise syntax to turn a piece of Rust code into a function that can be called directly from Node.js and JavaScript, compared to other Node-API frameworks in Rust or other languages. NAPI-RS does not only generate the compiled library and binary, it also generates the corresponding .d.ts files for TypeScript projects. This is the first big advantage of using NAPI-RS.

In the code example on the left, we can see that with just a few lines of Rust code, we can expose the functionality of the Rust uuid library to any Node.js caller. Not only does it include the uuid v4 function, it also provides amazing performance: compared to the `crypto.randomUUID` provided by Node.js itself, it's about 13 times faster, as you can see in the screenshot on the right.

Here is a more complex example. Let's pretend we are building a bundler using NAPI-RS and exposing a plugin API to JavaScript. The bundler function takes a parameter called `plugin`, which accepts a string passed by the bundler. We will assume that this string is the source code we want to process.
And the plugin function returns a `Promise<string>` here. Let's take a look at how NAPI-RS handles this situation. In the implementation of the plugin, we can see there is a `ts_type` attribute used to override the types automatically generated by NAPI-RS. This is because the TypeScript type system is very different from the Rust type system, so NAPI-RS may not always be able to generate perfect TypeScript types automatically. Therefore there are some APIs for developers, and we can declare more precise types using them.

In the actual logic, we take the source code that the bundler needs to process and pass it to the plugin API. The `.await` here means that we are asynchronously awaiting the return value of the plugin function, so that we do not block the execution of JavaScript. The biggest piece of magic here is that the Promise returned from the plugin can be directly awaited in Rust. And here, we await the value from the Promise and return it to the JavaScript side, just as a bundler would.

Returning to the .d.ts file generated from these APIs, we can see that the `async fn` in Rust returns a Promise value in JavaScript, which means you can also await the `async fn` directly in JavaScript. Okay, this is a simple example of a bundler. As we can see, we can call the bundler from JavaScript like this. Here the plugin API uses the classic Node.js callback style, where the first argument represents whether an error happened. And from the screenshot on the right, once we run the JavaScript file, we can get the result from the bundler. Here we pretend we have transformed our source code with SWC and processed it asynchronously.

Now that we have learned how to develop a NAPI-RS module, it's time to start thinking about how to distribute it. In traditional native add-on solutions, developers may use post-install scripts to compile the native code during installation, or to download pre-compiled binaries from a CDN or GitHub Releases.
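Stepping back to the bundler example above, the plugin API shape it describes can be mocked in plain JavaScript to show what the caller sees. Everything here is an illustrative stand-in, not the real NAPI-RS binding or the talk's actual code:

```javascript
// A pure-JS stand-in for the native `bundle` export: it hands the source
// to the plugin and awaits the Promise the plugin returns, just like the
// Rust side awaits the JavaScript Promise across the boundary.
async function bundle(source, plugin) {
  const transformed = await plugin(source);
  return `/* bundled */\n${transformed}`;
}

// A plugin is just an async function from source text to source text.
const upperCasePlugin = async (source) => source.toUpperCase();

bundle('const a = 1;', upperCasePlugin).then((out) => console.log(out));
```

On the Rust side, NAPI-RS lets the native `bundle` await the Promise returned by `plugin` in the same way this mock does with `await`.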
You may have gone crazy dealing with various post-install script failures when using other native add-ons. One of the best features of NAPI-RS is its out-of-the-box pre-compilation and distribution solution with zero post-install scripts, which is especially compelling for GitHub Actions users. The pre-compiled approach also brings other advantages, such as compatibility with serverless environments like Vercel or Netlify.

In terms of pre-compilation, NAPI-RS maintains a comprehensive support matrix that covers most mainstream platforms on the market. This big table shows the platforms for which NAPI-RS can pre-compile add-ons, essentially everywhere Node.js can run. In addition to the multi-platform pre-compilation solution provided for GitHub Actions, NAPI-RS also offers cross-compilation solutions for other CI platforms like CircleCI or GitLab CI. You can use the NAPI-RS CLI to compile binaries for all of these platforms from just Linux and macOS. It's really easy to integrate into your own CI scripts and your workflow.

Until now we have been talking about the advantages of using NAPI-RS, but if it had no drawbacks, why haven't I rewritten all the packages on npm using NAPI-RS? So now I will discuss some of the trade-offs of using NAPI-RS or similar technologies. They include the following points. The first is cross-boundary overhead. We all know that Rust is a very fast language compared to JavaScript, but you may not know there is significant call overhead when calling a Rust function from JavaScript; I will talk about that shortly. The second one is trust, and the final one is debugging.

Let's start with the cross-boundary call overhead. Instead of complex tasks, this table shows Rust doing simple calculations, wrapped with different Rust Node-API frameworks, compared against pure JavaScript. You can see that for simple computations, JavaScript can be several tens to hundreds of times faster than Rust. But why is that the case?
Shouldn't Rust be much faster than JavaScript? Actually, it is the cross-boundary call overhead I mentioned. Cross-boundary call overhead refers to the performance penalty that occurs when making function calls between different binaries and languages. In the case of Node.js and Rust add-ons, they are compiled into separate binaries: Node.js is a single executable binary, and your add-on is a separate dynamically linked library. So traditional native-compiler optimization techniques such as LTO and PGO do not work when making cross-binary function calls. Secondly, the function call crosses the JavaScript engine boundary, so there is a lot of additional work to be done during the call, resulting in more overhead. As we can see in the `sum` function in the right section, in addition to the necessary `a + b` operation, there are five additional Node-API calls compared to the pure JavaScript implementation, which are used to handle type conversion between the different runtimes and languages.

De-optimization across the boundary is another issue. In this example, the pure JavaScript implementation has the opportunity to be deeply optimized by a JavaScript engine like V8. However, in the implementation on the right, the inserted native add-on call may cause the entire hot-path `sum` function to be abandoned by the engine's optimizer. So this introduces even more overhead.

The second trade-off is trust. There has been more and more news about attackers using npm to distribute malicious code that steals secrets from developers and users. So using NAPI-RS, or any other native language, to publish native add-ons to npm will exacerbate this issue, as the released binary cannot be effectively audited by tooling. However, this situation has recently improved, as npm officially released the provenance feature for signing published npm packages, preventing them from being tampered with during the publishing process.
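To make the earlier `sum` example concrete, here is the pure-JavaScript version next to a comment sketch of the per-call work a Node-API add-on has to do. The listed Node-API functions are real, but mapping them onto the slide's exact code is my assumption:

```javascript
// Pure JavaScript: V8 can inline this into the caller and keep the
// surrounding hot loop fully optimized.
function sumJs(a, b) {
  return a + b;
}

// A native add-on doing the same thing must, on every single call, roughly:
//   1. napi_get_cb_info       - fetch the incoming JS arguments
//   2. napi_get_value_double  - convert the first JS number to a C double
//   3. napi_get_value_double  - convert the second JS number
//   4. perform the native a + b
//   5. napi_create_double     - wrap the result back into a JS number
// Five extra Node-API calls around one addition, which is why trivial
// functions are better left in JavaScript.

console.log(sumJs(20, 22)); // 42
```

The conversion steps dominate the cost for tiny functions, and they cannot be optimized away because they cross the binary boundary.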
The third trade-off is debugging. In order to reduce installation size, most NAPI-RS packages strip their debug symbols during release. The screenshot here is from an issue on the official Rspack repo, which shows that when a panic happens, the error stack cannot be displayed at all, because the debug symbols were removed during publishing, making it much more difficult to locate and fix the problem on the developer side. To address this, NAPI-RS will provide a debug-symbol download feature in the next major release: you can re-download the stripped debug symbols if a panic happens on a developer machine.

Okay, so the conclusion: NAPI-RS is very easy to use and provides an end-to-end solution from development to deployment. NAPI-RS can improve Node.js performance progressively, but not all scenarios are suitable for developing with NAPI-RS and Rust. So that's all. Thanks. Any questions?

It depends, because even though I have written thousands of NAPI-RS libraries, I can't give an answer about which part of your application can be rewritten in Rust, because it needs to be profiled. As you can see, the Rust code may be, for example, ten times faster than the JavaScript in isolation, but it can introduce more overhead, like de-optimizing the JIT in Node.js, or other costs we don't see in a pure JavaScript application. So the only way is to rewrite small pieces that introduce less call overhead, meaning fewer Node-API calls, profile them progressively, and see if you can actually get better performance.

Can you give an example where your NAPI-RS package is significantly faster than the Node.js one? Yes, Rspack is a great example. Rspack is a library that has the same API as webpack, the same configuration, and the same features, but it's 10 or 20 times faster than webpack at bundling. It depends on the project.
If you are using more JavaScript-written loaders and plugins, it will be slower. But if you are using the built-in transforms, like the TypeScript or CSS transpilers, it will be much faster.

Is the AFFiNE backend written in Rust? We don't have a backend server right now, but we have a universal infrastructure for real-time collaboration, and it's written in Rust. In our Electron client, part of the data-persistence layer has been rewritten in Rust with NAPI-RS to replace the JavaScript version. Yeah.

I'm curious: if you want to use Rust on the backend, why did you choose to build on Node and use this library, instead of going with something like Deno? Yeah, it's a good question. We have tried to build our business code in Rust before, like CRUD business code, and it's really unproductive for engineers, because of the very complex type system and the borrow checker; we couldn't ship our features before the requirements changed. So in most cases, I think Node.js and JavaScript are more suitable for business logic. But in other scenarios, like AI, or our real-time collaboration model, which are performance-sensitive, we needed to replace parts of the Node.js application with Rust.

Okay. Any other questions? One more question: in the slides you previewed the matrix of supported OSes and architectures, right? So what will happen if a user is on some new architecture? You can still compile your own binary if you want; it doesn't conflict with the pre-compiled versions. But for mainstream OSes and platforms, it's not necessary. Yeah. Okay.

Can you compare the performance between, like, Zig and the Rust way of doing this? You mean Zig? Zig alone? Yeah. From my knowledge, Zig is a fancy C: you need to manage your own memory by hand.
So if you go into the Bun.js issues, you can see a lot of segmentation-fault and panic issues because of this. But Rust is different: it has borrow checking, and it will manage your memory automatically, so it's more memory-safe in this way. And I think it's more convenient and trustworthy for infrastructure. Yeah. Okay, I think that's it. Awesome. One more round of applause for Rocket.