Hey everybody, I'm Wes Todd. I'm a senior software engineer at Netflix, working on the Node.js platform team.

Hi everyone, my name is Darcy Clarke. I'm the senior engineering manager on the npm CLI team here at GitHub.

And we're here today to talk about the first-ever OpenJS collaboration space, which will be focusing on package vulnerability management and reporting. Take it away, Darcy.

Yeah, so we were hoping to kick things off by talking a little bit about the state of the ecosystem and why this all came about; it actually spun out of the package maintenance working group. There's been some recent data about the state of the ecosystem in 2020, and what we continue to see in the JavaScript ecosystem is exponential growth. There are over 1.6 million packages on the npm public registry, which is incredible. Our community continues to publish new and exciting packages, and we see over 123 billion downloads a month on the public registry, which is just amazing. It was npm's 11th birthday back in October, and we recently hit the 100 billion milestone for monthly package downloads. So there are a lot of downloads and a lot of folks using JavaScript in the wild. We see that also in the data we get from GitHub: JavaScript is still the number one language on the platform today, and coming up real quick is TypeScript.
TypeScript has recently jumped a few spots in terms of top languages we're seeing in the repositories that live on GitHub, and that tells you how important our ecosystem and these languages are to programmers. As part of this discovery work, we noticed that over 94% of the repositories utilizing JavaScript rely on open source. As a very healthy community, we tend to consume lots and lots of JavaScript, and packages and projects within our ecosystem, specifically within JavaScript, have roughly 683 transitive dependencies. PHP comes in around 70, and Ruby comes in around 68 transitive dependencies. The depth of our dependency trees and the scale of this issue definitely put JavaScript in a unique place. But I think Julia said it best: "Vulnerabilities in software occur regardless of language, framework, and ecosystem. Getting them fixed is really where I focus from the security side, and getting things fixed in any ecosystem is really difficult. I think they call it dependency hell everywhere for a reason." Knowing that these problems exist and are difficult to solve in all language ecosystems, we think the JavaScript ecosystem presents a really unique opportunity for us to set a precedent and try out new ideas for how to make managing, reporting, and dealing with these security issues better across the board. We'll hear more from Julia later. So there's a huge surface area here in terms of projects, potential for vulnerabilities, but also potential for growth.
This is, you know, our strength: how we use open source within our ecosystem. We also want to take some time to look at the state of the advisories that get flagged within our ecosystem, utilizing some of the same data we found in the October 2020 report. First off, we just want to note that since October of 2015, the npm advisory DB has filed over 1,600 advisories, which is pretty amazing in terms of folks flagging issues that come up in the software we're consuming. Another really interesting fact: there's a 59% chance of a CVE being filed against one of your dependencies, based on those numbers we were seeing before about how many transitive dependencies we have. That's a pretty large number. And if you're working in JavaScript, you know that really 100% of JavaScript projects will be flagged in the next week. That's definitely right; there are probably a lot of projects that get flagged a lot more often, so this is definitely a rounded average number for sure. Looking also at the breakdown of what's contained in those advisories, we've seen, at least from GitHub's side, that only 17% of those were actively malicious, where the CVEs were potential worms or malware.
Whereas the other 83% seem to be the result of mistakes: errors and vulnerabilities in our code that were being caught by researchers and folks flagging these things to maintainers and to the ecosystem. So those are really interesting numbers, and they really show you the scale and scope of the problems we're challenged with today.

One of the best parts about working in such a large ecosystem is all the amazing people. We thought that to kick off the collaboration space, it would be great if we could hear from some of them about how CVE reporting and remediation affects their work.

Great, thanks Wes. So I'm Nick O'Leary. I'm the project lead and co-creator of Node-RED, one of the other projects of the OpenJS Foundation, which is a low-code programming tool written in Node.js. I think what's a bit different about some of Node-RED's interests is that our users aren't necessarily Node developers or JavaScript developers. We use npm and we use Node to ship Node-RED, so users run npm to install it, but they aren't necessarily Node developers themselves.

We often focus so much on the developers of libraries and the developers of applications, but I think we really miss out on the perspective of the end users, who often are just seeing reports in their CLI and don't really know what that means. Does that play a big role with your users and the reports that you get?
Absolutely. Certainly for users who aren't familiar with npm, we document the commands they should run. We don't go overboard on all the flags you can give npm to quieten down the warnings, because you don't want to suppress genuine warnings that users should know about. But there's always going to be stuff in there that we know doesn't matter. Lots of users ignore it, but some users do get concerned by having even moderate-severity audit warnings, or whatever it might be. And again, the nature of Node-RED is that there's quite a large dependency chain of modules, because it's a programming platform: it can pull in all sorts of different modules depending on what else you install into it to supplement it. So the scope of what warnings you might get is pretty large, depending on the modules being installed.

Now that we've heard from Nick about how these things affect end users and authors of open source libraries and platforms like Node-RED, let's bring back Julia and hear a little bit about how larger companies deal with these kinds of things, and how engineers and security teams can better work together.

Sure, my name is Julia Knecht. I'm on the Netflix application security team; I'm a security partner to some of our developer productivity teams at Netflix. My pretty strong opinion is that developers shouldn't have to be security experts, and it's the security experts' job to help developers get their job done and to help them make the right decision. We have this concept at Netflix of "freedom and responsibility" and "context, not control", and I really see that as the security person's role. We're employed to be these security experts and to really help people understand, but not to say "go put a ton more context in your head, also become a security expert, and interpret this how I would interpret it", right?
It's: given all these inputs, given my background, given my security expertise, this is what I think is the most correct or the safest path forward. And let's work together on ways to address any productivity issues. Can we join those things, right? Can we have productivity and security hold hands and cross the finish line together?

One of the best ways to tie together productivity and security concerns is through tooling. We've got a lot of great engineers in the tooling space in the JavaScript ecosystem. One of them is Zbyszek, or as he often introduces himself to us English speakers, "Zeebe". He has been instrumental in helping kick this group off, and he also wrote a tool called npm-audit-resolver.

On the web I'm known as naugtur, but my real name is Zbyszek, if anyone wants to try to pronounce it. I've been playing around with Node.js since version 0.8, and I've been growing a team of Node.js developers for the last 7 years. Meanwhile, I'm also doing some open source, trying to work around the Node.js diagnostics working group and some other places, and a bit of security.

So, it all started when we had around 20 apps. Security was important, but we discovered that we could monitor dependencies for security a bit late, in my opinion. npm audit didn't exist yet, but the Node Security Project was already there. So I installed the nsp command, ran nsp check, and thought, well, this is awesome: it checks my dependencies and says I'm okay, because that's mostly what it did at the time. There were not a lot of vulnerabilities reported yet.
So I obviously took it and put it as a step in our CI, so that the CI would go red when we got a vulnerability. And then Adam Baldwin started working on finding more and more vulnerabilities. So one day, like two or three weeks after I put it in CI, everything, literally everything, went red, because there were like 10 or 20 different dependencies that got us the bad status. The first thing we did was spot that most of those were dev dependencies, and most of the vulnerabilities were regular expression denial of service. So, yeah, the one that everyone loves. nsp check at the time would only allow ignoring by the vulnerability advisory number, so we ignored the regular expression denial of service advisories, fixed everything else, and moved on. But that didn't last long, because he kept finding more (and I literally mean Adam Baldwin; he did find most of these early on), so we had to switch off the CI step. Then npm audit came out, so we switched to npm audit, but it was still too much. And that was the decision: I had to choose between not running this as a CI step (which I still consider a good thing to run in CI, to build the culture of caring for security on any team) or making it reasonable to run npm audit as a CI step. That's where npm-audit-resolver came in.

One thing you said which I think is really interesting is building a culture of caring. I was wondering if you could talk a little bit more about how you can do that on a team.

For a team of software developers, security is initially going to be the thing that gets in the way, unless you build the right mindset. So my goal as a leader of a team, in terms of security, is not to make sure that everything we build is perfectly secure. Instead, it's making people care about security and not consider it an annoyance.
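As a rough illustration of the kind of CI gate being described (ignore specific advisories, fail the build on everything else), here is a minimal sketch. The `auditGate` helper and the advisory shape are assumptions for illustration, not npm-audit-resolver's actual implementation or npm audit's real JSON format:

```javascript
// Minimal sketch of a CI gate over audit results: skip explicitly
// ignored advisory IDs, fail the build if anything at or above a
// severity threshold remains. The advisory objects use an assumed,
// simplified shape.
function auditGate(advisories, { ignore = [], failAt = 'high' } = {}) {
  const rank = { info: 0, low: 1, moderate: 2, high: 3, critical: 4 };
  const blocking = advisories.filter(
    (a) => !ignore.includes(a.id) && rank[a.severity] >= rank[failAt]
  );
  return { pass: blocking.length === 0, blocking };
}

// Ignore a known moderate ReDoS advisory (id 534 here), but a
// high-severity finding elsewhere still turns the build red:
const { pass } = auditGate(
  [
    { id: 118, module: 'minimatch', severity: 'high' },
    { id: 534, module: 'debug', severity: 'moderate' },
  ],
  { ignore: [534], failAt: 'high' }
);
console.log(pass); // false
```

The key design point from the interview is that the ignore list is an explicit, reviewable decision rather than turning the whole check off.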
So I want everyone to feel the mission to build something secure. I want people to think about it every now and then. And if you get a tool that tells you "this is insecure" every single time you commit something, that's not going to build this culture, because people are going to treat it as an annoyance. So you need a tool that lets you make decisions, hopefully informed decisions, about what to ignore. Because only when you can ignore some vulnerabilities are you going to be able to really care about the ones that are important.

Where can we take decisions away, or abstract them away, but put in the safe defaults, right? I think safe defaults with the option to undo, with the understanding that that's an action the developers are taking, so that they have to understand what they're undoing, rather than going in with the expectation that something is secure, because "why would they give me something insecure to start with, and I have to go do all of the work to put security onto it?" Yeah, I really like the idea of security people and productivity folks working together towards a common goal, and asking: how can we make the right choice, the most secure choice, also the easiest choice and the most correct choice?

Making informed decisions is super important. At Netflix we call it "context, not control". The reason we say that is because we want to give our engineers and our teammates as much context as we can, so that they can make the right decisions when it's necessary. Oftentimes the information that is gathered on one end of the lifecycle is not propagated all the way through to the other end, and in the case of CVE remediation, I think this is especially true. To help understand this better, I asked each of our interviewees a little bit about how they think we can improve this situation.

Certainly, I think it's two-fold; one is certainly on the tooling side. So, having some way for our module... I mean, don't get me wrong.
It is good that security vulnerabilities get floated up and we get notified "you are depending on something that has a vulnerability". That is good, and that is important to the ecosystem. The bit that I feel is missing is a way for us to then say, in some metadata somewhere in the module: we have evaluated this vulnerability, please suppress this when an end user installs us, because we have done the technical work to validate that the vulnerability does not apply to us, so please don't bother our users with it. Now, I'm also well aware there's a lot of goodwill in a mechanism like that, and it would be ripe for abuse by people who don't do the technical work to validate and just do whatever it takes to shut the warnings up. But I think there's a trade-off between the developer experience for end users of modules and maintainers of modules, and I do think there's got to be a better balance than what we have today.

The security game, when you're not attacking but defending, is about prioritizing. So we always need to be able to say: okay, this is the last day of the sprint and I need my build green, even if it means there are some vulnerable dependencies that I didn't have time to review. What now? So I literally built a feature for that, which lets you quickly ignore a failed dependency scan for 24 hours. This is enough for the sprint review, and then your build is red again the next day; you're back to fixing it, but you didn't have to break everything, and you didn't have to fail a sprint. Because there's a tool, it's no longer an annoyance; it's something that helps you, and it builds the culture in the team to try to stay on the ball with keeping your dependencies secure.

And I think, just to dig into that a little bit more.
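The 24-hour snooze just described could be modeled as decisions carrying an expiry timestamp: expired postponements stop suppressing, and the build goes red again. The shape below is hypothetical, loosely inspired by npm-audit-resolver's audit-resolve.json but not its actual format:

```javascript
// Hypothetical decisions file: 'ignore' is permanent, while 'postpone'
// only suppresses an advisory until its expiry timestamp passes.
function activeIgnores(resolutions, now = Date.now()) {
  return Object.keys(resolutions).filter((id) => {
    const r = resolutions[id];
    if (r.decision === 'ignore') return true;
    if (r.decision === 'postpone') return r.expires > now;
    return false;
  });
}

const DAY = 24 * 60 * 60 * 1000;
const resolutions = {
  '118': { decision: 'postpone', expires: Date.now() + DAY }, // sprint-review snooze
  '534': { decision: 'ignore', reason: 'ReDoS in a dev-only dependency' },
  '577': { decision: 'postpone', expires: Date.now() - DAY }, // expired: build goes red again
};
console.log(activeIgnores(resolutions)); // ['118', '534']
```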
I do think there's been a historical disconnect. I've seen it getting better in the industry, but security people would just come in and say "just update, no matter what", which is probably why npm audit says "just update". It's kind of a mandate rather than nuance like: update if you use this library in this way, right? It's vulnerable for these reasons. But when I've actually talked to developers, everyone wants to understand what's going on. I just think it's also competing for resources, so it becomes: tell me the right path to get rid of this thing. To answer the actual question you asked, I do see opportunities to pass through that guidance, and I wonder if there's a way of standardizing how that stuff comes in.

Doing these interviews was a great way to validate that there's room for improvement in CVE reporting and remediation. But at this point, I'm sure you're asking yourself: well, what is a collaboration space?
Why not talk about these things in some existing group, like the package maintenance working group, or with folks directly at npm or Snyk or one of the other parties involved? Since this is the very first one, we want to make sure that people understand what we're doing here. The collab space is a new program set up by the OpenJS Foundation. It gives us neutral ground to bring together parties from different parts of this ecosystem and different parts of the CVE lifecycle, and it gives us a nice space, with support from the foundation, to discuss these ideas and come out of it with solutions. So our goal in this group is to bring together folks from different parts of the CVE lifecycle, hear from them, and really understand the problem space. Because we don't think we have all the solutions, and we think the best end solution is going to be one where we hear from all the voices across the SDLC and bring together something that's really going to work for everyone involved.

Examples of some successful outcomes we see from the collaboration space would be even just defining better domains of control over the different parts of the SDLC; improving communication lines between maintainers of open source libraries and users, and in the other direction, between maintainers of open source libraries and the security folks who file CVEs. And then there's the obvious space where we sit in the middle, which is tooling. We have a lot of tooling in the ecosystem, but the context the security folks have is not always reaching the folks who have the control in the end: the application developers and the open source library authors. So, finding the ways we can better tool around this, so that the people most impacted by these security incidents have the context they need to be able to remediate, or ignore, because we know sometimes
ignoring is the right answer, when that particular CVE doesn't apply to your project.

So the next obvious question is: how can you get involved with this? We're going to be starting regular meetings, monthly to get kicked off, and we'll be scheduling them on the GitHub repo. Our hope is to start next week, so hopefully y'all can hop over onto the GitHub repo and join us for the meeting. Thanks everybody, and we hope to see you on GitHub. Special thanks to our interviewees, Julia, Nick, and Zbyszek; we couldn't have done this without them. And thank you so much to the conference organizers, and to Robin again for reaching out and making sure we had this session. This was a great conference and we appreciated being a part of it. Thanks everybody for your time. We hope you're as excited about this collaboration space as we are, and we hope to see you in our upcoming meetings. We're going to send you out with a couple more clips from them that we thought were really great.

You know, it's the open-source ecosystem, so everything you see in the open-source ecosystem is more free information. Every time I get an audit, to me it's just free information. These are things I could have spent a lot of time figuring out for myself, but I'm getting them for free, and it's literally the same attitude whether it's code or information about vulnerabilities. So that's my point of view: this is free information, and I decide what to do with it. No, "decide" is not the right word: I'm responsible for figuring out what to do with it. If I pull in some code from npm, that's my responsibility now. If I pull in information about vulnerabilities, it's also my responsibility. I can decide to ignore it, I can decide to trust it, I can decide to postpone a release because this thing looks scary and I'm going to look into it.

Do you feel like that response
has built trust in those users? Do they respond in a way that seems like they still trust either Node-RED as a project or the Node ecosystem more generally, or do you feel like that erodes their trust in the system?

That's a good question. I don't think I necessarily get a good sense of that. Certainly when it's someone who's emailing us, someone who's following our security policy to report this kind of security issue, you immediately understand that this is someone who is perhaps a bit more switched on to these problems. And in general, the response when we explain the situation is: "oh, no problem then, thanks for explaining it." There's an understanding and appreciation of the fact that we've taken the time to respond and that we're able to explain it. But having some way that npm audit could say "yes, there's a vulnerability, but here's what the Node-RED project says about it", and put that up front, would put in more confidence. It would remove that period of doubt between when someone has felt the need to report a security vulnerability and when they get a response from us. If the tooling could at least give our assessment of that vulnerability up front, rather than hide it, it could just say: "and here's what Node-RED says about this". Which could be "thank you, we're aware, fix coming in a week's time in our next maintenance release", or "judged as not relevant to Node-RED's use case", whatever it might be. I think that would remove that period of doubt between someone seeing a vulnerability and going through the process of getting a response from us.
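One way to picture Nick's suggestion is a field in a package's metadata where maintainers record their assessment of known advisories, which audit tooling could then surface next to the warning. The `advisoryAssessments` field, the placeholder CVE ID, and the helper below are purely hypothetical; nothing like this exists in npm today:

```javascript
// Hypothetical: maintainers ship their assessment of advisories known
// not to apply, so tooling can show it instead of a bare warning.
const pkg = {
  name: 'node-red',
  advisoryAssessments: { // not a real package.json field
    'CVE-0000-0000': { // placeholder ID
      status: 'not-affected',
      note: 'the vulnerable code path is never reached in our usage',
    },
  },
};

function assessmentFor(pkgJson, advisoryId) {
  const a = (pkgJson.advisoryAssessments || {})[advisoryId];
  return a ? `${pkgJson.name} says: ${a.status} (${a.note})` : null;
}

console.log(assessmentFor(pkg, 'CVE-0000-0000'));
// 'node-red says: not-affected (the vulnerable code path is never reached in our usage)'
```

As Nick notes, any such mechanism would need to guard against maintainers suppressing warnings without doing the validation work.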
Yeah, that's great. Do you feel like you, your team, and other engineers in your sphere are getting enough context around the vulnerabilities to understand what they need and take the right, responsible actions?

Well, I want to say yes, because theoretically there is a link to a write-up, often with a POC, but not really. I've seen it way too many times: that conversation on the team where we're building a feature, there's a merge request, it's getting reviewed, and meanwhile the build is red because of audits. So, hey, what's happening? "Oh, I've got to fix the audit." Okay. Ten minutes later: what did you do? "Oh, I ignored a bunch of things." And it took ten minutes. Okay. In most situations I think I can trust that these were really things you just look at and decide, yeah, this doesn't affect us. But at the same time, if it's a denial of service in the Mongo client, well, that looks serious. Okay, let's have a larger debate, so we pull in more people and try to figure out what to do with it.

Frankly, everywhere I've ever been, the way we've done it is that the only way security feels like it can scale is to just say "update", right? We can't possibly triage all this stuff; someone on your appsec team can't write tests for everything.

Honestly, I think one of the most underappreciated items in the npm audit output is exploitability. I did take that into account a few times when deciding: okay, exploitability is very low, so this looks serious, but it's very unlikely to be exploited; I can spare some time to research it later, but we can move forward with the feature. Or: exploitability is very high.
Okay, let's stop the work and look into it. So that seems to be a very valuable thing, but it's just a number, and I don't know where it's coming from, so I might just be very naive about it.

Okay, so the particular scenario with Node-RED was that we use bcrypt in two places: one, for the user to encrypt their password to put into their settings file, and then equally, as part of the login to the tool, to encrypt (not decrypt) the password that's been submitted, to be able to do the comparison with what's in the settings file. For a variety of reasons, we depend on both bcrypt and bcryptjs. bcryptjs is a pure JavaScript version; bcrypt is a binary module. The reason we do that is that bcrypt historically hasn't always compiled cleanly on all the platforms we need to run on, and on the low-powered Raspberry Pis where a lot of people run Node-RED, they wouldn't necessarily have the right build tools. So we actually have bcrypt as an optional dependency and bcryptjs as a main dependency, so that if, for whatever reason, bcrypt fails to install, we can fall back to the JavaScript version. It's slightly slower, but as it's only used when you log in, we can afford for it to take slightly longer.

So we were in this position where we had bcrypt marked as an optional dependency. That's another interesting factor, because tools like npm outdated ignore optional dependencies. So there was a period of time when our checks for whether updates were available for modules just completely overlooked bcrypt, because the tooling didn't pay attention to it, it being marked as optional; that's a separate issue. But then along came an issue someone pointed out: the bcrypt we had installed had a CVE, because the encryption it did in certain scenarios was not strong enough.
It had vulnerabilities in its encryption. Now, because bcrypt has a binary component, they have a compatibility table: if you're using these versions of Node, this is the version of bcrypt. You've got to try to get that right, and thankfully it's always the case that their latest version supports everything current and higher, so as long as you're on the latest version, you should be golden. However, just through the timing of this, when the CVE came in, the fix was on a version of bcrypt that no longer supported Node 8, I believe, whereas within Node-RED we still supported Node 8 and Node 10. Again, the reason being that we know there are users running on embedded devices who can't change their runtime. We had made a statement a couple of years ago that we would keep supporting Node 8 up until Node-RED 2.0, which is due now, to coincide with Node 10 going out the window, at which point we drop support for Node 8 and Node 10. But it does mean that since this vulnerability got reported, we have been simply unable to upgrade bcrypt, because that would force us to drop support for Node 8, and that would break our commitment to our user community. So when we looked at the actual details of the vulnerability (again, I can't remember the specific parts of the module's internals), our analysis showed that the very limited way we use bcrypt, purely to encrypt the password, which is stored on disk and never transmitted anywhere, and then to do the comparison of the password that's received, didn't go near the vulnerable code. So we were satisfied that the risk was not present for Node-RED users.

Yeah, we fight this frequently, right?
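The optional-dependency fallback Nick describes (native bcrypt when it built, pure-JS bcryptjs otherwise) boils down to a try/require with a fallback. Here's a generic sketch; the loader is injectable only so the example is self-contained without the real modules installed:

```javascript
// Prefer a native module, fall back to a pure-JS equivalent when the
// native one failed to install (e.g. no build tools on a Raspberry Pi).
function loadWithFallback(preferred, fallback, requireFn = require) {
  try {
    return { name: preferred, module: requireFn(preferred) };
  } catch (err) {
    return { name: fallback, module: requireFn(fallback) };
  }
}

// In Node-RED's situation this would be roughly:
//   const bcrypt = loadWithFallback('bcrypt', 'bcryptjs').module;

// Demo with a stub loader standing in for require:
const stubRequire = (name) => {
  if (name === 'bcrypt') throw new Error('native build failed');
  return { hashSync: () => '...' };
};
console.log(loadWithFallback('bcrypt', 'bcryptjs', stubRequire).name); // 'bcryptjs'
```

The corresponding package.json would list the native module under `optionalDependencies` and the pure-JS one under `dependencies`, which, as Nick points out, also means some tooling stops tracking the optional one.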
I think everywhere I've been, software supply chain is just hard. And so a lot of times, the only way to scale it has been: let's have people update their libraries, even if their usage isn't necessarily vulnerable but the library has a vulnerability, right? The easiest answer, the answer that gets it to us, is updating that thing. So I think if we can make consistent the way these vulnerabilities come in, and have some requirements; even if the security people maybe don't write the tests themselves, they can turn in their vulnerability and their CVE, and if there are library maintainers or people like this who can help understand the vulnerability itself and write the tests, then npm audit becomes something that's more customized to your code and your use cases, rather than just "we've found a match for this library somewhere in your dependencies."

Okay, so one last thing: the thing you care about the most in this topic, and how it affects the people who build the packages.

From my point of view, it's the other way around. Me being the consumer of npm in this role, I'm installing dependencies and I'm checking the audit. What I would find useful is information, in the audit or elsewhere, about other people's choices. There are thousands of people like me checking the audit, looking at it, deciding: this regular expression denial of service in a fourth-level dependency of Express doesn't seem like it's affecting anything, but what do I know?
So the question is: are there any people who already researched it and made an informed choice that's probably much better than my educated guess? And if so, I want to know what they did. One thing I'm hoping for in the future is a system where the file with the decisions I produce is something I could potentially share with others. Being in a position of someone who's informed enough to make those decisions, I could be... okay, not a troll, I'm going to say an influencer in this area, so what I publish is going to be an informal guideline for some other teams. For example, if I'm involved in a certain project, I can publish information about what I consider safe to ignore, and people could use that information as a factor. Not one by one, so not importing my decisions, but just reading up on them, treating them as a suggestion when making their own choice.
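That shared-decisions idea could look like audit tooling annotating each advisory with what a trusted publisher decided, surfaced as a suggestion rather than an imported verdict. The file shape and helper below are hypothetical:

```javascript
// Annotate advisories with another team's published decision, treated
// as a suggestion: the final call stays with the consumer.
function annotateWithSuggestions(advisories, published) {
  return advisories.map((a) => {
    const p = published[a.id];
    return {
      ...a,
      suggestion: p ? `${p.source} decided "${p.decision}": ${p.reason}` : null,
    };
  });
}

const annotated = annotateWithSuggestions(
  [{ id: 118, module: 'minimatch', severity: 'high' }],
  { 118: { source: 'team-a', decision: 'ignore', reason: 'ReDoS not reachable via our inputs' } }
);
console.log(annotated[0].suggestion);
// 'team-a decided "ignore": ReDoS not reachable via our inputs'
```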