Okay, well, we might as well get started. Thanks for coming to our talk, which is about improving the security of a large open source project one step at a time. Rafael and I are both from the Node.js project, so we're going to be sharing our experience about what we did in Node. First, a little bit about myself. I'm the Node.js lead for Red Hat and IBM. That means I get to spend a lot of time in the Node community. I'm on the Technical Steering Committee and active in many of the working groups and teams. I'm also involved in the OpenJS Foundation, and I get to work with a lot of great teams within Red Hat and IBM who are deploying Node at scale, helping our customers do those deployments, or working on tools to make it easier to deploy your applications to things like OpenShift. So, over to Rafael. Right, okay, hello folks. My name is Rafael Gonzaga. I'm a staff engineer at NearForm. I'm from Brazil, so it was a long flight, a long journey. I'm also a Node.js Technical Steering Committee member, so if you don't like Node.js, it's partially our fault. I'm also a Node.js releaser, so if any of the Node.js releases breaks you, that's also our fault. And we run the Node.js Security Working Group meetings, and I'm the working group lead, or chair, of those meetings. I have some social media, so if you scan this QR code you'll be able to follow me; please do, it helps a lot. I'm trying to do more content on Twitch, so it's more live. Let's see if that works. Well, that's it. Okay, so before we get started, I'm going to give you a little bit of an idea of what we're going to dive into. We're going to start out with a little bit of background about the Node.js project and some OpenSSF funding we've gotten that's really helped us. We'll then jump into sharing our experience, and we'll share it in sort of two ways. One is the reactive part of the process: when we get vulnerabilities reported to us, what do we do?
But then also the proactive part of the process: what is our security working group doing to look proactively at improving security across the project, what kind of initiatives should we have, and so forth. And then finally, if hopefully we've piqued your interest in getting involved and helping out, what are some of the ways you might do that? So, how many people know the Node.js project? See, there's a good number, but still a little bit of an intro is worthwhile for other people. It's what one of the original collaborators, Rod Vagg, called an "open open source" project, and what that means is there's no one company who's really backing a large portion of the collaborators. It's individual contributors, people from companies, but everybody has their own goals and things they work on, so it's very much organic in terms of what takes place, which has some good things and some bad things, as you'll see, in terms of what's working and what's not working for us. It's widely used; there were over a billion downloads from nodejs.org just in the last year, and that doesn't count downloads through things like Docker and other places. It was at the top of the OpenSSF criticality score list, which is one of the reasons we got funding and probably why we're talking a little bit about it here. I do want to say that security has always been top of mind. The people who put in the original infrastructure were always thinking about that. We have a separate release infrastructure dedicated just to doing releases. We think about who we give access to machines, and things like that, so it's always been something we've been thinking of. But as I said, because we're an open-source project and a lot of people are volunteers, sort of volunteering to work on different pieces, that's not always a great match for time-critical work.
If you have something which has to happen by a certain date or a certain time, that's not something volunteers are a good match for. They're great for, hey, here's a problem, I'll get to it when I have time, in my spare time. Security work is often more like, we want to do it faster; there's some expectation that that happens. So I do want to say a big thanks for the funding we got from the OpenSSF. That started in 2022 and is continuing in 2023, and the key thing I really want to highlight is that it doesn't just enable the work of the person that's funded. They have enabled our security team, and more people in the project, to do work across a number of areas, much more than any one person could have been doing. So there's that really good multiplier, and that's one of the key things about getting somebody into the project whose job is basically to worry about security and to help move security forward. That gets you that multiplying factor, and without it, it's really hard to make progress as a project, because everybody is willing to do some bits and pieces, but with nobody pulling it together, it doesn't work quite as well. So we'll start out with a look at the reactive side of things; there's the reactive and the proactive side, and we're going to share what we tried, what worked, what didn't work, that kind of stuff. The first thing we'll talk about is the life of a security vulnerability: looking at it against our threat model, getting reports, creating fixes, and actually doing security releases. Without a threat model, some of the discussions on reported vulnerabilities often felt a little bit like this. Often it's a communication challenge, where the people reporting the vulnerabilities have invested quite a bit of effort. They may have a very strong opinion that this is something the project needs to fix.
On the flip side, the project has a view of what it considers a vulnerability or not, and before we had the threat model, it might have been hard to figure out what that was. So that could lead to some friction: well, why are you telling me this after I've already put all this work in? The threat model really helps us to have those kinds of discussions. So here's an example I just want to ask everybody about. Say we're requiring fs, we're asked to read a really big file, and Node falls over. How many people would put their hand up and say that's actually a vulnerability in Node.js? I see one, and every time we've done this we've gotten a number of people, and this is the extreme case; there are subtler cases of this. Within our threat model, the answer is no. We trust the code that you asked us to run. Therefore, if you ask us to read some huge file, we just did what you asked. The other cases we've had in real life are subtler, but the threat model helps to put the bounds on those kinds of things. So what's in the threat model? The first thing it starts out with is: what do we trust? We trust the code that you ask us to run. We trust the environment you're running the Node application in. So for example, the file system: if there are files on that system already, we trust that you've configured it properly. What we don't trust is, say, a remote client making a request to one of our local HTTP servers, because that's outside the control of the person who's running Node. If that remote client can connect to Node and do something unexpected, we would consider that a vulnerability, because we don't trust them. We've published the threat model in the SECURITY.md file so anybody can read it. It was a recent addition from one of the initiatives of the security team.
And I'll say it's hard to define, and it's probably still a work in progress. We wrote down our best attempt at capturing what we think is and isn't a vulnerability. Every time we receive a new vulnerability report, we'll take a look at it in that context, and we may extend or modify the model to reflect what we think makes sense. Sometimes we may look at something and say, maybe the threat model doesn't quite say the right thing on that front, because this should or shouldn't be a vulnerability. We also always reserve the right to treat something as a vulnerability even if the threat model says maybe it's not. We may say, yeah, but that's important enough, say, to the overall ecosystem, that we'll decide to make some change, and we'll keep it private until we've released that change. So now that you know a little bit about the threat model, we'll look at how we reactively handle reports. The first thing is: please don't open public reports. There's probably nobody in the room who would do that, but that's our first message, because opening them in public causes us a lot of headaches. We use HackerOne, which is a tool that lets you submit in private. We can then review. One of the really nice things, though, is that at the end we can also make all of the discussion public, because the project really does have a strong emphasis on doing everything transparently and in the open. With security vulnerabilities, of course, we don't want to make things public until we've released the fix, but HackerOne is a tool that lets us get the best of both. So people report through HackerOne. You can go to the Node.js project on HackerOne; there's a nice submit button. That ends up in the inbox of the project, and so now we have a list of things that people have reported.
Once it's been reported, we have people do initial triage, and as I said before, we look at our threat model and either say, yep, we believe that's a vulnerability based on the model, or we say, sorry, we don't think it is, and it's okay to publish it publicly because it's not something security related. Once we've accepted it, we need to do a CVSS calculation. This can be hard as well, because I find that almost 90% of the time the calculation comes out high, turning on the alarm bells. The challenge I have with that is that it costs the ecosystem quite a lot of money. Companies have policies that say if it's rated high, it must be remediated within two days, or whatever, so the high ratings have a real impact. My takeaway, and my suggestion to everybody, is to really think about it when you're doing your CVSS score and try to make it reflect how important or urgent you really think it is, because it's going to have a knock-on effect for lots of people. In terms of what didn't work: we started off just trying to handle these through email. We asked people to report vulnerabilities through email, and that's not very easy. It's hard to manage, it's hard to follow up, and it's hard to say, hey, we had a report before, this seems similar. So that wasn't too great. Even once we moved to HackerOne, we tried ad hoc triage: we've got 20 people on the Technical Steering Committee, everybody will take a look sometime and figure it out. That unfortunately didn't work very well. We would either have nobody doing the triage, or, usually, it ended up being one person who started to do it and then felt overly compelled to continue because nobody else did, and then they got burnt out. So that really didn't work well either.
Even scaling that up to a small number of triagers didn't work. For example, on our team at Red Hat, I tried to make helping with the triage part of one team member's regular role, but because it was a small group, and because, as the earlier picture of the cats suggested, these aren't always the most fun conversations to be having, they also got burnt out. So even if you're a fully paid person and it's part of your job, it still burns people out after a while. So you really need a larger number of triagers, so that each person only has to do it every so often. The other thing we're still challenged with, I think, is handling features that are experimental, and I'm still not sure we've quite landed on exactly what we want to do. Experimental features are things that are flagged as just experimental, so it's kind of use-at-your-own-risk. But if there's a vulnerability in one of those, we still acknowledge that it can affect people who are using the feature, so it's not like we should say no, it's not a vulnerability. So today we treat them like any other vulnerability, but back to the alarm bells: that can cause a lot of churn and cost for everybody in the community, even people who aren't using these features. So we're still trying to work through what makes the most sense on that front. What is working?
A triage team of more than three people. I think we have four or five people on the triage team; five, so it means once every five weeks somebody is on rotation, two weeks at a time. They don't necessarily have to figure it all out, but they're the first responder: hello, thank you very much for your report, we'll be looking into this. They may find the right person to do the triage, or they may do it themselves. That really works better, both the rotation part, so you know you only have to do it one week out of five and it's not all going to fall on your shoulders, and having enough people to spread it out, so that by the time you're getting annoyed, it's on to the next person. And HackerOne, as I mentioned, I think is working well for us, because it's a private place to report, but we can make things public afterwards, which is important to the project. It also gives us easy CVE assignment. We did act as our own CNA and issue CVEs for a while, but the management of requesting those and doing it manually was a burden, so we defaulted to just going through HackerOne because it was easier. So, security reports: now we've got the report, and now we've got to actually do some fixes. At this point we've got our accepted report, and we move over to our node-private repo. In addition to the public Node GitHub repos, we have a private repo so that people can actually submit a PR and run tests. The project, I shouldn't say unfortunately, has a very broad set of supported platforms and architectures, and we consider that if we run a fix through our public CI, we've effectively disclosed the vulnerability, so we avoid doing that. What that means is that in node-private we do some testing through GitHub Actions, but it's a very small subset of the tests we would run before doing a release.
You can see, maybe there's Arm available in GitHub Actions, but there are lots of platforms and combinations that just aren't there. So people submit their fixes in the private repo, we can have reviews, and we can get to the point where we're ready to do the releases. In terms of what's working and not working: people availability was always a challenge, because often the vulnerabilities are in a particular area, and there's only a small number of maintainers who are up to speed in that area, and we get cases where it's, yeah, I can do that fix, but I can't get to it for a month. And similarly, we've had some reports where there's a 90-day disclosure deadline, and that's where, again, volunteers and trying to do something within a certain time frame can be a challenge. The other one is that we've got lots of maintainers who really know Node, but some of these vulnerabilities end up being very specific to the OS or the architecture, mostly the OS, I guess. So for example Windows: we've had a number of things on the Windows platform where we're like, would I call this a vulnerability? I don't know; can we find somebody who could give us, say, a Microsoft opinion? And we found that to be quite challenging. So that's something that's still a bit hard: how do we get the platform and OS expertise when we need it to make those calls? It's also harder to work in private. Again, as I mentioned, there's the limited CI testing. You can get your fix tested, but then find out on the day we're doing the release that it doesn't actually pass all of the CI. It's much harder to pull in people because it's in private. In public, one of the advantages we have with open source is that we can ask a broad number of people: please take a look, give us your feedback, and get reviews and things like that.
And then finally, to get that testing at the end before we do the release, the last few times it's been something like a week where we've locked down the CI, which affects all of the regular work. So these are some of the challenges on the creating-fixes side of things. Looking at security releases: once we decide we've got all the fixes we need, we have a very well documented, 26-step security release process. We built this up over time. We didn't say, hey, let's create a whole bunch of steps and work for people to do; these are things people asked for along the way: hey, you did a security release, could you have given me a notification or a heads up ahead of time? We would integrate that in. Or, could you have done this or that? So it ends up being a lot of work to coordinate across all the collaborators who may have done the different fixes, giving advance notice to the ecosystem, and advance notice to internal teams. We may need to ask our build team, can you help lock and unlock the CI, or tell the Docker team, hey, you probably want to publish new Docker images on this day, and so forth. Part of the process is also pulling together information: providing an explanation of what the vulnerabilities are, which you see in the CVE reports as well as in our blog posts that go out. Those actually take quite a bit of work. And then there's actually doing things like locking and unlocking the CI. One thing you'll see on the next slide, when I talk about what's working, is that we now have what we call security release stewards. The release work itself is enough work that we don't want to add those 26 steps onto the same person who's actually building and publishing the releases.
So we have a release steward who works with the releasers to put together the summaries for the blog posts that are going to go out, coordinate with all the different teams, do the notifications, publish to the nodejs-sec mailing list that the release is coming up, and herd the many things that have to take place. And many thanks to the companies we have on the list. We really asked, in this case, that these be people who were not just individually volunteering, but whose company has said to them, yes, you can prioritize this over other work. We think that's important, because again, this is something time critical, and as an individual, somebody may say, yeah, I'd like to volunteer, but then it's almost unfair to them to expect them to jump in and do something like this. I'd love to see us build up this list; you don't see as many big companies on there as you'd like, but at least it's a start, and again, having a rotation of people who do it is a good thing. So, over to the proactive side of things. Right. So I'm going to talk about the things we did on the security team that were more proactive. One thing I'd like to mention briefly is the history of the group, then recent successes, the current initiatives, and how you can actually help. Well, the Security Working Group, which is now called the security team, is formed by a bunch of companies and a bunch of people who are volunteering their time to work on the security side of Node.js. We have meetings every two weeks, and we also have some initiatives and some ideas, so if you want to give an idea, or if you just want to complain about something in the security space of Node.js, you can join and we'll answer you. Yes.
We also have the Node.js project vulnerability database: whenever we release a security patch, we add to that JSON, that long JSON, which CVEs we have fixed in Node.js. The primary focus is obviously on Node.js itself, but there are a bunch of projects under the Node.js umbrella, like undici and others, that we contribute to. First of all, I'll talk about the recent successes. We've already talked about the threat model, so I'll skip that part and go straight to the dependency vulnerability checks. Let's say that you are using Node.js. Node.js contains tons of dependencies: we use zlib, we use OpenSSL, we have a bunch of dependencies. What if one of those dependencies releases a security patch? How will you know whether it affects Node.js or not? In the past, we'd usually receive an email, or someone would ping us, okay, this affects Node.js, or they'd open an issue; we were reactive about it. The idea of this initiative is to be more proactive. We have a repository called nodejs-dependency-vuln-assessments; the name is pretty long, I know, but it works. If you go to that repository and look at the issues, we have an automated check that runs every week and looks for open public CVEs against one of our dependencies that might affect Node.js. In this example it found one in zlib, and it opens an issue, and we, as the TSC or other Node.js contributors, go to that issue and say, okay, it doesn't affect Node.js, we don't use that function, or, we believe it does affect Node.js. So that's the primary point: if you are running your scanner and it's showing, okay, Node.js is vulnerable to an OpenSSL CVE, you might want to go to that repository and check. Okay. And of course this is just for things that are public, right? Yeah. If you find something new, don't report it here; but if you find something that's publicly reported and already has a CVE, you can open an issue here to ask. Yes, exactly. Thank you.
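The weekly check described above can be pictured as matching the versions Node.js vendors against publicly known CVEs, then opening an issue for a human to triage. This is only an illustrative sketch with invented data, not the project's actual automation, which pulls from public CVE feeds:

```javascript
// Illustrative sketch of a dependency-vulnerability check. The dependency
// versions and CVE records below are made up for the example.
const vendored = { zlib: '1.2.13', openssl: '3.0.8', libuv: '1.44.2' };

const publicCves = [
  { id: 'CVE-XXXX-0001', dep: 'zlib', affected: ['1.2.13'] },
  { id: 'CVE-XXXX-0002', dep: 'openssl', affected: ['3.0.7'] },
];

// For each public CVE against a dependency we ship, flag it if our vendored
// version is in the affected list. A human then triages whether the
// vulnerable code path is actually reachable from Node.js.
function findCandidates(deps, cves) {
  return cves.filter((cve) =>
    deps[cve.dep] !== undefined && cve.affected.includes(deps[cve.dep]));
}

console.log(findCandidates(vendored, publicCves).map((c) => c.id));
// → [ 'CVE-XXXX-0001' ]  (the openssl CVE doesn't match our vendored version)
```

The real value is in the last step the code can't do: a maintainer deciding whether the flagged CVE actually affects Node.js.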
The next one is a feature called the permission model. It was included in Node.js 20. Who uses Node.js 20? Raise your hand. Okay, just a few, that's fine. So I'll talk about the new version of Node.js, which is current and will become LTS soon. Basically, let's assume that you're trying to solve a problem using Node.js, and you, or one of the developers on your team, are following some random tutorial on the internet, a blog post or a YouTube video, that says some magic package will solve your problem, so they install it. The problem is that sometimes the package might return the expected result, but it can do things behind the scenes that you are not aware of. Whenever you install a package in Node.js, that package will have the same permissions as your main process. So if you are running as the root user, the package will have root access when you install and run it. In this specific example, whenever we call the magic function, it will try to read a file, in this case an example of a sensitive file on Linux machines, but it will still return the expected result. So as a developer you will never know that's happening behind the scenes unless you have some checks in place, right? That's the idea of the permission model. Whenever you run with the permission model, basically --experimental-permission, it will restrict access to the file system. Let me show you, yes. It will restrict access to the file system, to creating worker threads (you can create threads in Node.js, you can create child processes in Node.js), and it will also restrict access to the inspector protocol and to using native add-ons in Node.js. How it works is pretty simple. For instance, I'm passing the --experimental-permission flag. We need that flag because this is an experimental feature we are still developing, so it's not meant for production, not yet.
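A minimal sketch of what this looks like in practice. The flag and API names below are as they shipped in Node.js 20; since the feature is experimental, treat the exact spellings as subject to change:

```javascript
// Run with something like:
//   node --experimental-permission --allow-fs-read=/home/app/ index.js
// (flag names as of Node.js 20; experimental, so they may change)

// process.permission is only defined when the permission model is enabled,
// so code that wants to probe it should guard for that first.
if (process.permission) {
  // Ask up front whether this process is allowed to read a given path.
  console.log('fs.read /etc/passwd:',
    process.permission.has('fs.read', '/etc/passwd'));
} else {
  console.log('permission model not enabled');
}
```

Run without the flag, the script just reports that the model is off; run with it, the `has()` check reflects whatever `--allow-fs-read` paths were granted.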
So whenever you pass that flag, you can also say, okay, I want to allow access to this specific path of my project. Then, whenever you try to run or call that package, it will throw an access-denied error, telling you which resource it was trying to read and which permission was denied. So basically you are now safer, at least at some point. That's the main goal of the permission model. This landed in Node.js 20, so if you want to check it out, I really recommend it. As I said, we have several flags you can use; you can restrict access specifically to write access, or to read only, and that's up to you. We also have a runtime API, so if you want to create a package on top of it, you can. The next feature, or recent success, was the security best practices document. There's a QR code here; if you scan it, you go to the Node.js website, to the document I'm going to talk about. Everything started when we tried to create the threat model, but in the middle of the process we decided, okay, normally we are targeting security researchers, but what about Node.js developers, Node.js users? How can they understand what can be a vulnerability or not? So we created that document, called the Node.js Security Best Practices document. It's available on the Node.js website, and basically it will tell you that just because something isn't specifically a threat in our model, it doesn't mean you can ignore it. For instance, say you are creating a net server in Node.js but you are not handling errors, which means that if someone sends a message that turns into an error in your application, it will crash your application, causing a DoS. That's not a Node.js vulnerability, but it's up to you as a developer to handle it better, okay? In that document you'll be able to see other checks and examples of DoS and other attacks; for instance, there's the Slowloris attack, which is quite common in that space.
We also talk about how to mitigate prototype pollution, and there are a lot of mitigations and other specific threats in the Node.js space that you must be aware of, okay? Next one: automation of the dependency updates. As I said, Node.js contains several dependencies. You can see the full list here; it's up to date. On this slide we have libuv, V8, zlib, a bunch of them. Almost every day, I assume, one of those dependencies releases a patch, so we need to stay up to date. What happened in the past is that someone would go to the project, update the library, and then open a pull request. That's manual work, it's time consuming, and we frequently fell out of date. The idea of this automation is pretty simple: let's automate everything. Whenever one of those packages releases a new version, we have a bot that creates the patch and opens a pull request against Node.js. You may think that, okay, Dependabot could handle it, but it's different when you deal with a dependency in Node.js, because we build everything into a single binary. These aren't npm dependencies where you can just use Renovate or Dependabot. So that's working pretty well, thanks to Marco. The next one is the OpenSSF Scorecard and the CII Best Practices badge, processes we are using to measure the security of the Node.js project, okay? First, the OpenSSF Scorecard. Basically, the Scorecard is a way to measure the security score of your project based on several automated checks, and it runs dynamically. You get a score out of 10 on specific areas, for instance, branch protection. It also helps you identify issues that make good first issues. In that specific case, we created that initiative in the Node.js project, and every single day I receive a message from someone: okay, I want to contribute to Node.js, but I don't know how; it's so complex, it's a complex codebase.
And I say, well, go to the working groups, they are cool, and this was one of the examples. We created one task, let's pin dependencies by commit hash, which is one of the recommendations from the OpenSSF Scorecard. And someone raised their first pull request and was very happy, posted on LinkedIn: oh, I did my first pull request on Node.js. And that's pretty cool, because it's easy and it's very helpful to the project. In that specific case, basically, what we did is this: you know when you use GitHub Actions, you normally tag the version, for instance actions/checkout at version four. Instead of using version four, which is mutable, we use the commit hash, which is technically immutable. So this is one of the things that increases the score on the OpenSSF Scorecard. Currently, Node.js scores 7.3. We don't know exactly how far we can reach, because there are some rules that are not applicable to Node.js; we have a different workflow, so we are still figuring it out. And, well, let's move forward. This is the CII Best Practices badge. This is also a set of checkpoints that we go through on the Node.js project to make sure we are following security best practices and standards. As you work through it, you receive a badge: you start from bronze, I guess, then you go to silver, and finally you get the gold badge. The badge itself doesn't mean a lot; you could lie if you wanted. The main point here is to document all the processes and see if you are really following the rules, following the standards, if you are really securing your project. That's very good. Okay, Michael told you that we have 26 steps. Now imagine: we have 26 steps, we need a security release steward, and we usually have three active release lines. So we have 26 release steps for each active release line, multiplied by three, plus the steward's own steps, which is a lot.
It usually takes a week of work to get a security release of Node.js out. It's very time consuming; it's painful. So the idea of this initiative is, okay, let's have two systems: one for normal releases and one for security releases. Whenever we need a security release, we just click a button, everything is automated, we release, we distribute all the packages and binaries across the board, and it just works. That's still a work in progress, and it's quite important, because once we can ship security patches often and quickly, we reduce the window of exposure for a bunch of attacks, right? Okay, that's the easy button I meant. We have some upcoming initiatives, so maybe Michael wants to talk about the review of the build process and how we can guarantee the reliability of the resources. Sure. So Marco did some great work, and we now have scripts that automate the dependency updates, so we have a pretty good understanding of what that update process is. Looking at it from the supply chain security side, that's good. The next thing we're doing, though, is saying, okay, these dependencies may actually have their own build steps or transformation steps themselves.
A good example is that we actually bundle some WASM binaries within Node.js, and taking the source they're built from and producing the WASM that gets pulled in during the Node release requires some tools and a particular environment. So this audit is to look at each of the dependencies and ask: do we fully understand what environment is needed, what tools, what versions of the tools? That will help us be able to say, okay, if a vulnerability is reported in one of those tools, maybe we need to go and do something; and also, if we have to do a security release for one of our oldest release lines, we can rebuild the same thing without discovering that the new version of a tool doesn't compile or causes some other problem. So that follows on directly from automating the dependencies: next is to make sure we fully understand every step from the source to what we actually pull in. Okay, so, how can you help? It normally takes a balance of both individuals and organizations. I'll talk about how individuals can help, and Michael will talk about how organizations can help. I'll go from bottom to top. Number six is basically that you can help by contributing to the security issues. We have not only the private issues, which are the vulnerabilities we work on; we also have a bunch of public security issues, and by that I mean new features, or things we need to research to include in Node.js, or remove, or something like that. So as an individual, it would be great if you'd take a good first issue. Then you can volunteer as a security subject matter expert, which means: okay, I'm very good at security, and I would like to lend that expertise to help assess Node.js vulnerabilities, or to help design and implement the permission model or other new security features of Node.js. That would be great.
Or for example, if you're a Windows security expert. Exactly, that would be awesome, definitely. If you are an expert on Windows Server 2012 R2, we are looking for you, okay? You can also join the Security Working Group. As I said, we have meetings every other week. They're open to the public, so please join. You can also watch on YouTube; it's streamed live on YouTube. And if you want, just scan this QR code and that would be great. Third, champion a Security Working Group initiative. What is a Security Working Group initiative? Basically, the permission model was one of them. The automation of the dependencies was one of them. So if you see something that would be a good initiative, a big initiative for the Node project, come to the meeting and say, okay, I would like to champion it. That would be great. And then, volunteer as a release steward. As I said, security releases and security triage are very painful and time consuming, and once we have more people helping, that pain goes down. Okay, and the first one, well, that's of course: be a Node.js core contributor. And I'd say, well, it's not difficult to be a Node.js core contributor. You just need to have time. You just need to work on it. We have a bunch of easy issues; some issues are not as hard as you may think. You can also ask the Node.js group; we are very helpful on that. We have a Slack channel on the OpenJS Foundation, so if you join that, we also have an ask-anything channel, so you can definitely ask anything there, okay? Oh, by the way, we have Grace Hopper Day in two days, so that's Friday, yeah? So if you want to know how it works, you can do a workshop on how you can contribute to Node.js. So, of course, it's great if individuals decide to contribute and help out with the security. But really, I think that businesses and organizations that are using Node are the biggest beneficiaries, or actually have the biggest stake, in terms of making sure Node's secure.
So what are the top five ways that organizations can help? Well, the lowest level is this: we talk to companies and some of them say, we can't actually send our people to contribute, but we have some money. So you could contribute to the Node.js security fund on LFX. That's a fund where we're actually getting some money from HackerOne: they currently have bounties which they pay to people who report vulnerabilities, but to help the people who have to fix them, they're actually contributing some money to the project as well. So we're starting to build up a bit of a fund that we'll likely use to get some security vulnerabilities fixed, to find some volunteers, or paid volunteers, to do some work. The next one is: join a foundation that supports Node.js. As we mentioned, we're really grateful for the support from the OSSF. So if you join one of these foundations and encourage them to actually support development, or at least security work, within projects, that would be a good way to help. Next is: implement vulnerability reporting procedures that are friendly to open source projects. A lot of vulnerability reporting procedures, at least in my mind, were built with paid products in mind, intending to force those companies to spend their money to prioritize this work. In an open source project, nobody's getting paid, and I can't tell somebody, hey, you go fix this, right? So I'd ask that you implement your vulnerability reporting policies in a manner that's compatible with open source, because I think it's different. The next one is to reward your people for being a security point of contact or for helping to lead some of the strategic initiatives. So it's one thing for a company to let their people do some work, sort of on their own time for their own benefit.
And it's another thing to say: this is strategic to our business, and if you do this work and you do it well, we're gonna reward you for that. So that's the next step, actually rewarding them for doing that kind of work. And then the very highest one would be to reward your people for stepping in and helping with the triage, the fixing, and basically the whole security vulnerability handling process. Because like I said, the more volunteers we have, the easier it gets, but really supporting your people to come and do that kind of work is probably the top thing you can do to help. So that takes us to the end of what we wanted to cover, and we're open for questions for a few minutes. Yeah, I mean, we managed our own CVEs, and to be able to issue our own CVEs, at least back a number of years ago, there weren't things like the GitHub reports or going through HackerOne, so you had to become your own CNA. I was involved in that process quite a bit. We'd ask for a block of 10 CVE IDs, we'd document them somewhere, and then we'd use them, but it was kind of a cumbersome manual process. And we actually are still registered as the CNA for Node, so if anybody issues a CVE against Node that didn't come through us, I'm kind of like, well, why did you do that, right? We should still be the authoritative answer in terms of what is and what isn't a CVE. People can argue, but in the end, that's part of being the CNA. Another question: the subject matter experts within the Node community that you've engaged in responding, is that all part of their tooling? I'm not sure either. Well, it works pretty well. The only issue is when we need the reporter to review a pull request; normally we send the patch manually through a comment. But it's good because we can give specific people access to specific reports, rather than to all the reports Node receives. So it's easy to find people to help.
It's easy to comment, to address issues, to see which versions are affected, and it's also easy to request CVEs and manage the status of the reports. Yeah, certainly, as Rafael just pointed out, the fact that we can pull individuals in through HackerOne (they have to register, but once they register, we can say, okay, we're giving you access to this one specific report) is something we've used a lot to pull in somebody with special expertise that we need, without having to give them broader access. With our Node private repo, we have brought people in there too to look at the fixes, but we're basically letting them see everything when we do that. So the HackerOne route is nice in that we can do it in a much more targeted way. Yeah, okay, yeah. We lock the CI because whenever you create a security patch, you need to run the CI against all the machines. If it weren't locked, people who don't have access to the security patches would be able to go to the CI and see what the difference is, what the patch is. So we normally lock it down to TSC members and releasers, so other people, other Node.js contributors, won't be able to create or run CI. That's why we lock it. The difference is that sometimes we need a week because there are a bunch of patches, or sometimes we need a week because a patch affects, for instance, Jest, or another widely used library like React. So we normally lock it and check: okay, will this break or won't it? Okay. So on the larger aspect, not Node core itself but all the modules out there, there is a package maintenance team within the Node project. They don't really focus on the security side of things, but there is some overlap with a collaboration space that's been spun up under the OpenJS Foundation, which is looking more at the secure supply chain.
And there was actually the OSSF, I don't know if you saw one of the keynotes this morning, but they actually made a large donation, and they're funding work through the OpenJS Foundation. For example, there are some OSSF best practices, but those aren't specific to the JavaScript ecosystem, so that group is taking those and transforming them into something more specific for JavaScript maintainers. So that group is actually thinking about what we should be doing on the security side of things, how we can improve supply chain security. I think it's at the early stages, but if you're interested, that group meets, I think, every two weeks on Mondays, and you could go and listen in to what they're working on. So, we do have Coverity running on all the builds, so we use that for static analysis. Fuzzing has been discussed many times in the security team, but we've never had somebody who's actually stepped up and made it happen. So it's one of those things where it's like, hey, we would like to do it, but we would need somebody to invest the energy to figure out how to do it and how to make it effective in the project. I think for some of the OSSF tooling, you can easily integrate it for npm packages, but Node is 50% or more C++, so it's not really a JavaScript package, and I haven't looked at it closely enough to know whether it would actually work with the Node project or not. I do know it's been added to some other, smaller packages, but not to Node. So I think it's one of those ones where nobody is saying, no, we don't want to do that; it's more like, hey, yeah, we're open to contributions and PRs to help make that happen. Another question? I think we're out of time already. But yeah, thank you, folks.