So thanks, everybody, for coming to the last talk of the day. Our talk is "Improving the Security of a Major Open Source Project One Step at a Time." If you haven't noticed already, we're going to be talking about our experience in the Node.js project, and I'm hoping that when we share some of our experience, maybe we can hear back about some people's experience in their own projects as well. A little bit about ourselves before we get started: I'm Michael Dawson, the Node.js lead for Red Hat and IBM. What that means is I get to spend a lot of time in the Node community. I'm a contributor and on the Technical Steering Committee. I also get to spend time working with the OpenJS Foundation, which Node.js is part of, as well as with a lot of our teams within IBM and Red Hat who have large Node deployments, and we work with our customers using Node and so forth. So now I'll hand it over to Paula.

I am clearly not Rafael Gonzaga, but Rafael is one of the many people on the team who contributed to the topic we're going to talk about today. He's currently in Brazil, where he's based. He's a Node.js Technical Steering Committee member, a security working group lead, and a Node.js releaser, and as of this week he received the JavaScriptLandia Pathfinder Award for security. So we're very proud of all the work that he's done here. I am Paula Paul, standing in for Rafael as best I can; he's in Brazil and I can be here. I'm a sponsor of the DX (developer experience) team and OSPO at NearForm, and I'm very happy to serve on the OpenJS Foundation board and with the Grace Hopper Celebration Open Source Day, a little bit more about that later, and I'm an open source and Node admirer. So I think I'll hand this back.

So I'm just going to give you a little bit of an overview of what we're going to go through.
I'll start with a little bit of background: a little about the Node project, and a little about some OpenSSF funding that we got, because we're very grateful for that and it's helped us do a lot of the work we're going to be talking about today. Then we'll share our experience, starting on the reactive side: the life of a vulnerability, how we manage them, what's worked for us and what hasn't. Then Paula's going to jump into what we're doing to be more proactive. We have a security working group that's really been reinvigorated over the last year, and we're going to talk about some of the things we're doing on that front and how we think that's helping. Finally, we'll finish off with how you can help. Often people ask, how do I get involved in the Node project and help out? I think a lot of our suggestions hopefully apply to other projects as well.

So first of all, the Node project. I call it an open-open-source project. I didn't coin that; it was coined by one of the earlier collaborators, Rod Vagg. We use that term because there's no one company that directs the Node project. There are lots of people, lots of individual contributors, and several companies, but it's not like there's one group that says, here's our roadmap, here's what we're doing. That comes with lots of benefits and some challenges, especially in terms of security: we don't have one company who's going to say, hey, we're going to fund all of the security work and make sure that things are secure. It's a large project. We've had over 3,000 contributors over the years, and we have 96 collaborators, the people who can review and land pull requests. It's widely used: we had over a billion downloads just from our download site last year.
That doesn't include things like Docker pulls, which were probably another billion, and then however many people just pull from caches and things like that. So it's very widely used. It's at the top of the OpenSSF criticality score, and that's why we ended up getting funding. I should say security has always been top of mind for the project. I think you'll see through some of the things we've been doing that right from the very beginning people thought, we have to make sure that things are secure. We have a separate CI just for releases so that we can separate who has access to the test infrastructure versus the release infrastructure, and so forth. But one thing I've learned and seen is that volunteers are a pretty poor match for time-critical work. It's easy to volunteer for something you can do on your own time, like developing a feature and pushing that forward. But for security releases, sometimes we even get vulnerabilities with deadlines, and we had a case where the deadline was going to expire over Christmas. That's not a great time to be asking volunteers to come in and do the work to get the security release out the door. So that's been a challenge that I'll talk a little more about as we go through.

In 2022 we got some OpenSSF funding, and it's continuing into 2023. I think you'll see as we talk about some of the successes that it's not just the work that the funded person could do. It is very good to have somebody who can make fixing security problems and doing releases their top priority, but they've also been able to provide the critical mass that lets you do more proactive work and bring in other people who can then contribute. Without that critical mass it wouldn't happen, because you need somebody who's focusing on it, pushing it forward, and bringing all those people together. So that's a little bit of background.
We'll now look at the life of a security vulnerability, and we'll talk through the different steps: the threat model, security reports, creating fixes, and security releases, and then we have a small example to illustrate the complexity of some of the issues we have to deal with and how that fits into the cycle.

So first, the threat model. I put the threat model in the life of the security vulnerability because we hope to get to the point where people use the threat model to decide up front whether or not something is a vulnerability. I have this picture because without a threat model, it often feels like this is what we're doing when people report vulnerabilities to us. We don't always agree on what a vulnerability is, and people are sometimes quite disappointed when we say, we don't think that's a vulnerability. We can have some really long conversations which aren't necessarily very productive, and with the threat model we hopefully avoid a lot of those discussions because we've been much clearer up front about what we will and won't consider a vulnerability. Of course it's not perfect; it's a living document, and we don't just treat things as black and white. But we have found that it helps quite a bit in the initial discussions.

This is just an example: who here would look at this and say that's a security vulnerability, because we can basically cause a denial of service of your runtime? We quite often get reports that are not quite this straightforward but are along those lines, and we don't consider them a security vulnerability, because if you ask us to run a piece of code, you're basically saying, load this thing which is huge, and we're just doing what you told us to do. So in our threat model we wouldn't necessarily consider that a vulnerability. With the more subtle ones it does get more ambiguous, and it's far less clear, but it's still the same position that we hold.
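To make that threat-model point concrete, here is a minimal sketch (my own illustration, not from the talk's slides) of the kind of "report" that falls outside the model: code the user chose to run that simply exhausts a resource. Node refusing, or failing, an absurd allocation is expected behavior, not a runtime vulnerability.

```javascript
// "Load this thing which is huge": the user asked the runtime to do this,
// so under the threat model it is not a vulnerability in Node itself.
try {
  // An allocation far beyond any real machine's memory.
  const huge = Buffer.alloc(Number.MAX_SAFE_INTEGER);
  console.log('allocated', huge.length);
} catch (err) {
  // Node either rejects the size up front or the allocation fails;
  // both paths surface as a RangeError rather than a crash.
  console.log(err.name);
}
```

The same reasoning applies to reports along the lines of "an enormous input I passed in made my own process slow": if the runtime is only doing what the code asked, the threat model treats it as application behavior rather than a Node vulnerability.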
That's where we hope our threat model will help explain why we wouldn't consider this a vulnerability and why we would consider other things to be one. In the threat model, we base it on what we trust and what we don't trust, and then we have a number of examples. We try to cast it as: if we don't trust X, and through X you can cause something bad to happen, like a denial of service or disclosure of information that shouldn't be disclosed, then yes, that's a vulnerability. If it's something we trust in our model, for example, we assume you have a system which is properly configured, with security privileges set on your file system, then if the issue requires changing a file on a file system that's under your control, we trust that and we're not going to consider it a security vulnerability.

We've published it in our SECURITY.md. It's a fairly recent addition, and I will say it's really hard to define these. We went out there and talked to people from security companies, people whose business is security, and there wasn't an easy pattern to say, here's the threat model and here's what's in it. So we basically built up what we think will work for us, but it is a work in progress. We've tweaked it a number of times over the last year: as we get vulnerabilities, we look at it and ask, what answer does it suggest? Maybe we don't agree, so let's tweak it; or yes, it makes sense, let's move forward with that.

This is just a more specific example. Our threat model says that if we load a file and we haven't documented that that file is going to be loaded, we would consider that a security vulnerability. Because if we're expecting you to control your environment, but we haven't told you that we're going to load this file, how are you going to protect that file, give it the right permissions, that kind of stuff?
So in that case we would say, yep, that's a vulnerability. We may fix it through documentation, by making sure we properly document it. That's one of the statements in the threat model. On the flip side, we have examples of what is and isn't a vulnerability, and in this case the flip side says: if somebody reports a CWE-15 saying, hey, I'm setting something in this file, but we've properly documented that that file is loaded at startup, then we don't consider that a vulnerability. That's the level of information we've tried to put into the threat model. I'm not going to go through everything in it, but you can read it in our SECURITY.md.

The next part is security reports, and this is how we handle them. Of course, please don't open public issues. We document the process in our SECURITY.md, and we use HackerOne. In the best case, somebody figures out they have a vulnerability, and the first step is they look at our threat model. They figure out that yes, it is a vulnerability based on that threat model. They go to HackerOne, where Node.js is one of the projects you can report vulnerabilities against. There's a nice submit-report button, they fill in the details, and our team gets the report in its inbox. So that's how things come in.

Once we've got it, we have a triage team, and I'll talk a little bit more later about what worked and what didn't. The triage team looks at the things in the inbox, discusses them based on the threat model, and we end up with: yeah, we believe that is a vulnerability, or not. Often there's back and forth, and HackerOne gives us a nice tool where we can have that discussion privately with the reporter.
We can also easily bring in additional experts if we need somebody with expertise in a particular area for one specific report, as opposed to giving them access to all the reports that are there. Once we've decided a report is accepted, we need to give it a severity rating, and for that we use the CVSS score calculator. Our experience is that unfortunately it does tend to drive scores high. If you just blindly fill it out, you're going to end up with something that can easily be a fire alarm for the whole community. So my feedback is, when you're filling these in for your project, really think about each of the components, because depending on how the score comes out, you may cause a lot of work for your community. There are companies that say if it's above a certain rating it has to be fixed within a week, so it can drive a lot of work and a lot of questions and all that kind of stuff. Spending a little more time getting that calculation right is really worth it.

So, back to sharing our experience, and then maybe people will share theirs and we can go from there. What didn't work for us? We started out taking reports through email. That's an easy way to start, but it's very hard to collaborate: you get a long stream of messages, and it's hard to bring people in. We also tried ad hoc triaging. We were using HackerOne, but there wasn't really anybody responsible for triaging initial reports. What we saw happen most of the time is that people would get involved and then feel like they had to do all of the reports because nobody else did, and it fed on itself, so you would end up burning somebody out. With one or a small number of triagers, a small number of people would end up doing all of it, and over time it just didn't work.
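As a concrete illustration of why each CVSS component matters, here is a sketch of the CVSS 3.1 base-score calculation (base metrics only, following the formulas published in the FIRST specification; this is my own simplified implementation, not tooling from the Node project):

```javascript
// CVSS 3.1 metric weights from the FIRST specification (base metrics only).
const W = {
  AV: { N: 0.85, A: 0.62, L: 0.55, P: 0.2 },   // Attack Vector
  AC: { L: 0.77, H: 0.44 },                     // Attack Complexity
  PR: {                                         // Privileges Required
    U: { N: 0.85, L: 0.62, H: 0.27 },           //   scope Unchanged
    C: { N: 0.85, L: 0.68, H: 0.5 },            //   scope Changed
  },
  UI: { N: 0.85, R: 0.62 },                     // User Interaction
  CIA: { H: 0.56, L: 0.22, N: 0 },              // Confidentiality/Integrity/Availability
};

// CVSS 3.1 "roundup": smallest number, to one decimal place, >= value.
function roundup(x) {
  const i = Math.round(x * 100000);
  return i % 10000 === 0 ? i / 100000 : (Math.floor(i / 10000) + 1) / 10;
}

// m: metrics, e.g. { AV:'N', AC:'L', PR:'N', UI:'N', S:'U', C:'H', I:'H', A:'H' }
function baseScore(m) {
  const iss = 1 - (1 - W.CIA[m.C]) * (1 - W.CIA[m.I]) * (1 - W.CIA[m.A]);
  const impact = m.S === 'U'
    ? 6.42 * iss
    : 7.52 * (iss - 0.029) - 3.25 * Math.pow(iss - 0.02, 15);
  const expl = 8.22 * W.AV[m.AV] * W.AC[m.AC] * W.PR[m.S][m.PR] * W.UI[m.UI];
  if (impact <= 0) return 0;
  return m.S === 'U'
    ? roundup(Math.min(impact + expl, 10))
    : roundup(Math.min(1.08 * (impact + expl), 10));
}

// Network/low-complexity/no-privileges/no-interaction, high impact: 9.8 "critical".
console.log(baseScore({ AV:'N', AC:'L', PR:'N', UI:'N', S:'U', C:'H', I:'H', A:'H' }));
// Flip just PR and UI (low privileges, user interaction required): 8.0.
console.log(baseScore({ AV:'N', AC:'L', PR:'L', UI:'R', S:'U', C:'H', I:'H', A:'H' }));
```

Notice how changing just two inputs moves the classic 9.8 "critical" down to 8.0; that is exactly the kind of difference that decides whether downstream users face a fix-within-a-week fire alarm, which is why thinking carefully about each component is worth the time.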
Even on my team at Red Hat, I tried to make space for it, to make it part of your daily work, we know you're going to be doing this, but in the end it still wasn't, I think, sustainable long-term. What is working for us is that we now have a triage team of more than three people; I think we have about eight or nine people on that team. We have a well-defined triage rotation: you're on the rotation for two weeks, and that doesn't mean you're going to do all of the triage, but you're going to say, hey, thanks for the report, and try to help move it forward, bringing in experts if it's not your area, that kind of stuff. Other people still jump in, and I think people are even more likely to jump in because they know they're not going to be stuck being the only one who triages that report and having to take it all the way to the end. You've got your two weeks, you can dedicate your time, and that works really, really nicely.

HackerOne is also working quite nicely, I think, because it gives us that private place to report. It's a nice place to collaborate, but it also lets us make things public afterwards. The project feels it's very important to make everything as transparent as possible, so we stream all our meetings and we try to do everything in GitHub. Security is a challenge on that front, in that before we've actually fixed a vulnerability we don't necessarily want to disclose it, but this is a nice balance between being able to handle it in private and then actually disclosing it afterwards. In the tool you can address messages to the team or to everybody, and on disclosure it publishes the messages that were addressed to everybody. So we can have a conversation with the reporter and all of that gets disclosed, but we can also have an internal discussion which won't necessarily be disclosed. So you get more than just the report. And it also gives us easy CVE assignment.
We are a CNA; Node became a CVE Numbering Authority. I used to help manage the CVEs, requesting them, and it was quite clunky to do on your own. Now getting them and assigning them is managed for us, so it's nice on that front too.

So let's move on to creating fixes. We've triaged a report and decided it's a problem. We have a public repository, obviously, but we also have a private organization with what is basically a clone of node core, and we use that as the place to work on fixes: we can create PRs and we can test them. There are some challenges, which I'll cover next, but that's the way we now work on fixes.

The first challenge goes back to the volunteer problem: sometimes a fix takes very specific expertise, because it's a problem in one very specific area, and often vulnerabilities are real edge cases. Volunteers may not be available for two or three weeks because they're busy with their day job, so that's still a challenge. The OSSF funding really helped here, in that we could have people who could say, no, I'm going to work on creating a fix, that's my top priority. Another thing that hasn't been completely fixed is that we support a lot of platforms, and sometimes, as in the example we have later, issues are very platform-specific. Getting that platform-specific expertise is hard; Windows seems to be the one we have particular problems with, and interpreting whether, in the Windows world, something is considered a vulnerability or not is often a bit of a challenge. The other challenge is that it's harder to work in private, when in our project we're all used to working very collaboratively in public.
Our CI is set up so we can run tests across all the platforms, but we don't do that until we're ready to ship fixes, because we consider that once we run the CI we've disclosed to a large audience: all of our collaborators, over a hundred people, can see those runs and get onto the machines and so on. So we have more limited CI testing in our private organization. We still have GitHub Actions, but that doesn't run on most of our platforms. It's also harder to pull people in, because we have to add them to the organization and so on, so that's not so great. When we go to do a security release, we lock down our CI so we can do a full CI run before the release, but that means the rest of the collaborators can't run tests. For our last security release we were locked for a week, because we had a fun time, and so the rest of the project couldn't land PRs and all that kind of stuff.

When we get to security releases, we have a really well-documented security release process. Again, if you're interested in our experience, I don't have time to go through it all, but you can go and read the 26 steps that we have. We've built this up over time, and the experience is that having this written down and well documented is a really good thing. It helps us make sure we're consistent and we don't miss steps. It involves coordinating with a lot of the collaborators; we give advance notice to various parties, for example we have a team that builds Docker containers, so they need to know something's coming on a particular day. We have to put together the information about the vulnerabilities, handle the CI lock and unlock, and so on. There's a lot of work, I guess, is what I was trying to say.
Picking up from where we were before: in the private organization we have our fixes, and we've got those tested, at least with some GitHub Actions, on a subset of our platforms. We then lock the CI and do a full Jenkins run to make sure we can get those green, and then they get published. We do the testing in our test CI, but for security reasons we have a whole separate release CI, with a much smaller set of people who can access the infrastructure used to publish to our actual download site, and then things go out to the download site.

What didn't work on this side? Basically, releasers having to do all this work. We have 26 steps, and only one of those steps is to do the release itself, which is itself a big piece of work. Expecting the releasers to also handle all those other steps wasn't working all that well. Again, ad hoc coordination didn't work that well, hoping that people would be notified and so on; hence writing down the process. I'll talk a little more about security release stewards, but similar to our triage, having one person who was dedicated to helping the releasers with those 26 steps, and doing it every time, didn't work that well either, because they got burned out. Eventually it was, I just don't want to do that again, and we ran through a few people doing it that way.

What's working for us, and you might see a trend, is that we have a rotation. We have a security release steward role: for any security release, the steward walks through those 26 steps and does all the coordination and work other than the release itself.
That involves making sure we have the CVEs assigned and the text that goes with them, giving the pre-notifications to the different teams, and sending out the emails; we have a nodejs-sec mailing list where we announce security releases, so the steward sends out all that stuff. What is working is having a rotation with a larger number of release stewards. In this case we really pushed to make it a company commitment as opposed to an individual commitment. In a lot of cases we see individuals make a commitment because they're interested in open source, but if their company hasn't committed that they can make this their top priority when we need to do a security release, that's a stressor on them, right? You're saying, we need this now. So we push to say, no, we want our release stewards to be people whose company has said, we sponsor you as a release steward. We actually have it on our public site that it's the company that's made this commitment, along with the people who have done it. Unfortunately we're missing a few big companies there that are active in Node.js and are active users, so I wish this list were bigger, but many thanks to the companies that have stepped up to provide security release stewards. And I'll just close out with a real example before I hand it over to Paula.
This is a real example. Imagine you're a Windows developer, you already have OpenSSL installed on your Windows box, and you install a module but you have a typo. You've probably heard many people talk about this sort of typosquatting. The package installs something called providers.dll, and you can see there's some code here; it just pops up the calculator executable, which is not really that bad in itself, but it makes a good example. When I actually do the install, the calculator pops up. So I've installed a package and now it's run some arbitrary code on my machine. In the example here you'll see that it's in the postinstall script, but really this could happen anytime after the package has been installed.

So what's actually happened? When require('crypto') is called, because you've got some code that wants to use crypto, that loads OpenSSL. When OpenSSL loads, it searches for providers.dll starting in the current working directory; there's a standard set of Windows DLL search rules that say how to find a DLL, and they start in the current working directory. This package installed that DLL into your current working directory, so OpenSSL finds it and runs it. In this case we've actually just run npm version, because npm uses crypto, so if you cd into the directory where you did the install and run npm anything, you will end up with the calculator popping up. That one was hard for us: is loading a DLL from your current working directory a vulnerability?
Well, that's standard Windows behavior, but we did make a change. Even though our model might say it's not something we have to fix, we do look at these and say, okay, for the greater good we're going to make a change and improve things. So at this point I'm going to hand it over to Paula.

Thank you. Okay, so I'll move to the proactive side, although the reactive side has really been an adventure, as you can see. I'll touch on the history again, the security working group's active roster, recent successes, ongoing initiatives, and, most importantly, how you can help as an individual or, even more importantly, as an organization. On the working group history: we touched earlier on how this came out of the Node Security Project vulnerability database being donated to the Node.js Foundation. That never really stuck; it needed some ownership, and the OSSF funding really provided the critical mass to form the working group, which focuses on Node itself. There's an illustrious group of people in the working group; if you want to see the full roster, it's all open in GitHub.

So we'll talk about some recent successes: the threat model, which was mentioned previously, dependency vulnerability checks, which I'll cover next, the permissions model, and security best practices. Being proactive means knowing about vulnerabilities before somebody sends you an email, and that's where automation is really important: tooling that scans dependencies for vulnerabilities lets the working group know about these things before someone in the field says, hey, I've got this problem, and sends an email that highlights it to the rest of the world. The nice thing about this automation is that if there is a vulnerability, it opens an issue so the team can address it.

Next, the permissions model. Again, I'm stepping in here for Rafael, who did a lot of work on this, and it's something I think we're all very proud of. It was released in
Node 20 as an experimental feature, and it's very cool. So let's talk about the use case. I'm a humble dev, and I've got a problem I need to solve, so of course I start googling and I find a wonderful package, the problem-solver package. It solves all my problems, and of course I found it in a random tutorial on the internet, because that's obviously the most secure place to find solutions to my problems. I take a brief look at the code, and it looks at passwords; okay, it must need something from that. But I'm a security-conscious developer, and I decide to use the permissions model. I say my application and my process really shouldn't be doing anything outside of index.js, so if somebody wants to read that, fine, but nothing else. When I run my code, I find I have an access exception. Looking a little more closely: the permissions model allows you, as the developer, to assert what resources your process will and won't have access to, so you can make sure that the code you depend on is not doing anything you don't expect it to do. It's really a very cool feature, experimental at this point, to allow developers to become more proactive.

We talked about making Node itself handle things more proactively, through dependency checks and so forth, but now we're moving out into making our developers more proactive about security. So please, if you do try this or use it in your development processes, give us feedback; it's really a cool tool for getting developers to be more proactive about security. Right now the resources you can restrict are reads from the file system, writes to the file system, spawning child processes, and using Node worker threads. During runtime you can also check what permissions are available to you, so it's an interesting tool for developers, again, to make them more proactive and security-conscious. That
leads to best practices, because we're moving into this proactive world where it's not just about hardening Node and having the Node core team be proactive about security, as they have been for a long time; part of it is making developers more proactive as well. I've got the QR code up there if you want to link into the security best practices. It started from the threat model work that was discussed previously: the threat model looks at Node itself, what Node trusts, what it doesn't trust, what Node's threat surface is. But as you heard, there are things developers can do that are not secure, and we don't want Node to necessarily be blamed for that, so we also need to enable our developers to use Node in a secure manner. That's where we have a fork: the creation of a Node best practices document targeted at developers, so that they can do secure coding. The threat model is more for security researchers, for determining whether something really is a vulnerability in Node or not, but now we also have some proactive advice for developers.

For example, denial of service: if you're not catching errors on the web server or the web socket, you're opening yourself up to denial of service, and that should be something a Node developer doesn't have to scour the internet for; they should have something to refer to that tells them how to be a secure Node developer. Mitigating prototype pollution is a good example of what Node trusts and doesn't trust, because prototype pollution is inherent in the JavaScript language, but we need to educate developers on ways to avoid it.

Those are a lot of our recent wins, but the work is ongoing and there's a lot more to do: more automation on dependency updates, the OpenSSF Scorecard, which is very cool, automating security releases (we talked about the 26 steps), extending the permissions model that was just launched with Node 20,
and looking at Sigstore and SLSA to make sure that dependencies are who they say they are. That work is just starting, and if you'd like to get involved, please do; there's a link to all the current initiatives, which has these and more. I'll kind of beat you over the head about getting involved a little bit later.

Automating dependency updates: step one is the vulnerability side, making sure that your dependencies are not introducing new vulnerabilities, and then of course you move on to hardening the build. The first step is well underway and still being enhanced and growing, but you can see on your left the dependencies that are already being automated, and then a nice diagram from Snyk (I've heard it pronounced both ways; I tend to say "sneak") of the source integrity problem and the build integrity problem. That work is ongoing and much needed.

The OpenSSF Scorecard is implemented for the Node project, and you can also get a detailed report with the scores by repository, which is a very cool way to ask: is there something here that I depend on that's a weaker link in the chain? Right now it's 7.3 out of 10, but most importantly there's a link there from StepSecurity about how the score can be improved, and a shout-out to StepSecurity for that. I've got a link to the actual scorecard if you're interested. The StepSecurity piece is really cool because it was a good way to get more people involved in this topic. Pinning dependencies made a good first issue, and somebody here was very proud that they had their first contribution to Node because they pinned dependencies. I think this area is great for raising awareness among people who want to contribute to Node, and to security in particular; the more people who know about this and care about it, the higher quality and the better security we get.

Automating the security release process: 26 steps. This involves a security releaser for
each release line, plus a release steward, and all together it's about 700 hours of work, and, as we heard, a week of elapsed time. Malicious actors won't wait, and we've heard throughout the conference that automation is really the way forward to get mean time to response down, and that keeps the ecosystem secure. Ideally, because a normal release has one path and a security release has a slightly different path, the team would just need two buttons. That wouldn't preclude some of those normal release activities from happening while we're working on a security release, and it should be just an easy button, I wish, one for each.

My favorite part, to wrap up, is how you can help as individuals and organizations. Sure, the individuals are the people who do the pull requests, the triage, and the security releases, but those individuals, nine times out of ten, work for organizations; they have day jobs. And the organizations are the ones benefiting from these open source projects. Most likely all of them have websites, so they're using JavaScript; they're benefiting from the open source ecosystem, and they employ the people who do these important tasks to keep the ecosystem secure. So there has to be a balance of both.

As an individual, in order of perhaps increasing impact you can have on the ecosystem: take on a good first issue (have a look around, they're tagged); volunteer as a security subject matter expert to help the community; and, one I really like, just come to a security working group meeting. I've got the QR code there; the meetings are all published and open for anyone to come and sit in. You can learn something, and you might find something there that you'd want to contribute to. Champion one of the initiatives that are ongoing with the group, volunteer as a triage person or a release steward or an actual releaser, and then, at the highest level for the Node ecosystem, become a core contributor. And
if you're saying yeah that sounds pretty daunting I don't know how to do that I'll plug the Grace Hopper celebration because we are the largest gathering of women and non-binary technologists in September 22nd we have a pure virtual event that's open source day and node will be a featured project there and I know some of my colleagues are going to do a workshop on your first node contribution so I'll plug that as well organizations top five contribute I mean for the don't buy one of your executives a new chair and take that money and contribute it to the bug bounty you know it's like these are little things price of a cup of coffee kinds of things join a foundation that supports open source packages that you depend on like nodes so I'll bet every organization depends on JavaScript if they have a website which they do join the openJS foundation implement security vulnerability processes that consider the open source ecosystem you heard about you know it's hard to really do these things with a pure volunteer workforce please don't send emails or open issues that everybody can see if you find a vulnerability so start to work that into your organizational DNA reward people this is one of the most important things because the individuals are the people who do the work but the organizations that benefit from open source need to reward that work just being a security point of contact for one of your key open source dependencies reward people reward people for helping with triage fixing vulnerabilities and doing the security releases and stewardship and we're between you and happy hour so thank you so much for participating and anybody have questions? 
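The "two buttons" idea mentioned earlier, one automated path for normal releases and a slightly different one for security releases, could be sketched as a single dispatcher that shares the common steps. Everything here (the step names, the `planRelease` function) is a hypothetical illustration, not the Node.js project's actual release tooling:

```javascript
// Hypothetical sketch of a "two button" release dispatcher.
// Both release types share most steps; a security release swaps in a
// private CI run and adds an advisory step, so fixes stay out of
// public view until the coordinated disclosure date.

const commonSteps = ['bump-version', 'build', 'sign', 'publish', 'announce'];

function planRelease(kind) {
  if (kind === 'normal') {
    // Normal releases run tests on the public CI before the shared steps.
    return ['public-ci', ...commonSteps];
  }
  if (kind === 'security') {
    // Security releases test on a private CI and publish an advisory.
    return ['private-ci', ...commonSteps, 'publish-advisory'];
  }
  throw new Error(`unknown release kind: ${kind}`);
}

// Two "buttons": the same automation, selected by one flag.
console.log(planRelease('normal'));
console.log(planRelease('security'));
```

The point of the sketch is that the two paths differ only at well-defined points, so one automated pipeline with a single input can serve both, rather than two hand-maintained processes.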
Thank you. Yeah, you called out the pattern where you moved to almost a tier-one-support, operations-style model, where you had rotations and defined schedules and you were avoiding burnout. There were a lot of interesting patterns there that aren't necessarily security specific but alleviate that. Was that a natural evolution, or did somebody point you at it and say, hey, this work you're doing is very similar to that kind of operational work?

I'll say, unfortunately, it was a journey of discovery. I work at Red Hat, and I worked at IBM before, and we've always had a pretty good team contributing to Node, so I prioritized having team members involved in the triage or involved as security stewards. I've done it a number of times myself, but when I encouraged other team members, it kept ending up being just that one person. Because they stood up and did it once, they did it the second time, the third time, the fourth time, and being the only one doing it was not a fun thing to do on your own. So that didn't work: they eventually said, I don't want to do this anymore, and that happened with several team members. So it was like, okay, this just isn't working. The triage was a similar case. We had a number of people, not necessarily on my team but from other companies, who had stepped up, and I think the problem is that because there was no defined scope, they would get involved and then feel the responsibility, and other people maybe didn't jump in because they saw somebody already doing it. So this person felt like they needed to do it, and do it some more, and eventually it became more than they could handle and they would bail completely. We went through that cycle several times as well.

So I think we learned the hard way that just having somebody jump in and handle it doesn't work so well: maybe it gets done because somebody jumps in, but then they take on more than they really should. The rotation has helped a lot in both of those places. I think there are people who volunteered for the rotation knowing that the scope was two weeks on triage, or the next security release, versus, well, if I volunteer for one I'm going to be stuck with them all. It's kind of counterintuitive: you're making a higher commitment, but at the same time it's almost an easier commitment because it's bounded. On the stewardship, we really pushed to say, we want your company saying yes, you can do this, not just you personally volunteering. The triage people are still just personal volunteers, but for the security steward we wanted to make it a company thing. We want to give credit to the company for committing to it, so that when we say, hey, it's your turn, people don't have to feel bad about their company saying no; instead their company is saying, yeah, we committed to do this, it's your turn, go do it.

Yeah, thanks. I feel like that's a really interesting pattern that could work in lots of different open source or volunteer spaces, where you get burnout for people who are overstretched, where you're inequitably sharing the burden of whatever that work happens to be.

Yeah, and the interesting thing for me is that there was resistance to that kind of structure. If we hadn't gone through those cycles, I don't think people would have been in favor, because it's much more structured than open source usually is. It's not so voluntary: we're going to have a schedule, you're going to do your thing. But counter to what you might think, it actually reduces the stress, as opposed to the opposite.

Thanks. Any other questions? Test, test. I'm curious if you've tried out GitHub's private vulnerability reporting feature, and if you have, what features it lacks. I could see some clear features that it lacked from the flow you needed, but are there any things that really stand out as completely missing, for you to use it over the HackerOne and private repository flow that you're currently using?

I can't give you a really good answer because I haven't personally used it. I do know that in Undici, which is one of the projects under our organization, they use, I think, the CVE assignment side of that flow. They still don't use the reporting, because we get all of our reports in through HackerOne. So the only feedback I can give you is that they have used the CVE assignment, which they found to be easier, and because they're assigning them for Undici, we basically assume it works. You could ask Matteo, or if you opened an issue asking about that in the Undici repo, that would be a place you might get some feedback.

The context on this question is that one of the things we're working on in the Open Source Security Foundation, in Alpha-Omega, is bulk generating security fixes at scale, and the idea is currently driven around using PVR as the driving force, because we can both open a private issue and also give you the fix in a simultaneous, automated way. But if there are other organizations similar to Node.js that say, no, we're not going to use PVR because we have our flow... PVR is private vulnerability reporting, which lets reporters open GitHub security advisories, so you're triaging it all as if it's coming in like a HackerOne report, but via GitHub.

I don't think we'd say no, we've got our flow. You can kind of see, and that's part of the value, you can see what we do and don't do and how it fits in. So I don't think anybody would say no, we wouldn't consider it; we probably just don't understand it well enough. There are some subsets of people who are experimenting with parts of it, so over time we may get experience, and whatever works. We definitely have challenges in using a private repo: our GitHub minutes always run out, because I guess there are fewer minutes for private repos than for public, and just doing testing is where our pain point ends up being today. It's been pretty much the last couple of security releases, and we've talked to GitHub, and they're like, you already have as many as we give anybody. We've had that conversation, so I guess we're looking at whether we can cut back, because we don't have nearly as much testing there. Our full test suite is a Jenkins instance across AIX, SmartOS, all sorts of different architectures and platforms, so we don't use GitHub Actions for our platform coverage, but we do use GitHub Actions for our sniff testing, and we'll run on Mac, on Linux, on Windows. I think we discovered that macOS runner pricing is something like ten times per minute, and I'm picking a number out of my head, but it was enough higher that our macOS runs were burning through all our minutes. And then we're stuck. We've had several times, with Miles, where we were like, hey, can you do anything for us, we've got a security release and we just ran out. I think that's happened several times.

Have you thought about standing up a separate Jenkins instance just for security stuff?
We have a whole separate Jenkins instance for release already, so we have one of everything we're going to ship, which is a subset of what we have in the main test suite. We used to ship on CentOS; we ship on RHEL now, but we actually test on RHEL, Ubuntu, and a whole bunch of different distros. We haven't considered that, just due to the extra work it would take to spin up something, when most of the time the GitHub Actions runs give us a good enough sniff test that things don't go wrong. The last one, where we were locked down for a week, is clearly an example where it's different. Having a large, broad platform set actually gives us better test coverage, because usually your problems are on some edge case, maybe it's timing, maybe it's the networking stack. It's not really that the platform has a problem, but those platforms often expose the problems, and you don't find them until you've covered ten different platforms. On timing, our LinuxONE machines are so fast they expose all the things that break when you're too fast, and then we've got other ones which are so slow, like the Raspberry Pis we used to have, that expose the ones that break when you're too slow. But to get that same coverage elsewhere would just be too much work; we can barely keep our current setup going. Any other questions? I'll say thanks again for coming to the last talk of the day and sticking around all the way to the end. Thank you very much.
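The bounded rotation discussed in the Q&A, a known roster and a fixed two-week window after which you're done, can be sketched in a few lines. The roster names and the `onDuty` helper are invented for illustration; this is not the Node.js project's actual scheduling tooling:

```javascript
// Hypothetical sketch of a bounded on-call rotation: given a roster and a
// rotation length, compute who is on duty for any given date. The point is
// that each volunteer's commitment is bounded and predictable, instead of
// one person carrying the load until they burn out.

const ROTATION_DAYS = 14; // two weeks on triage, as described in the talk
const MS_PER_DAY = 24 * 60 * 60 * 1000;

function onDuty(roster, startDate, date) {
  const elapsedDays = Math.floor((date - startDate) / MS_PER_DAY);
  const rotationIndex = Math.floor(elapsedDays / ROTATION_DAYS);
  // Wrap around the roster so the schedule repeats indefinitely.
  return roster[rotationIndex % roster.length];
}

// Illustrative roster; not real triage volunteers.
const roster = ['alice', 'bob', 'carol'];
const start = new Date('2023-01-02');

console.log(onDuty(roster, start, new Date('2023-01-10'))); // → "alice"
console.log(onDuty(roster, start, new Date('2023-01-20'))); // → "bob"
```

Because the schedule is computed rather than negotiated each time, nobody has to volunteer twice in a row by default, which is the property the speakers credited with reducing burnout.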