Good morning, everyone. Thank you for joining me. We're going to kick it off with the state of open source security. I want to give you a high-level view of everything that happens in language-based software ecosystems, container technologies, and the projects of open source maintainers and developers. A quick introduction about myself: my name is Liran Tal. I'm a developer advocate at Snyk, where we build developer-friendly security tooling to help developers build secure applications. I'm also involved with the Node.js security working group and a bunch of other activities across OWASP, and I publish books. If you want to follow me on Twitter and ask me anything about those, I'm happy to help. I'll be here the whole day as well, so just find me around.

So, to kick it off: today, I think nobody would question the fact that open source has made an incredible impact on modern software development, and it continues to expand every year. GitHub reported that 2018 saw more new users signing up than all of its first six years combined. So open source software is really everywhere, contributions are made across all languages and platforms, growth is impacting different industries, and it is undoubtedly an essential part of business technology strategy. In 2018, the number of Java packages actually doubled, npm added about 250K packages, and we surpassed one million packages on npm in early 2019, around June of this year. So the use of open source is accelerating. However, with great adoption of open source comes great responsibility, and risk that we need to mitigate, whether we are owners of open source, maintainers of open source, or just using open source software. In 2018, vulnerabilities for npm grew by 47%. PHP and Maven grew by considerable percentages as well, somewhere around 27% and 56% respectively. So all in all we're seeing about 88% growth in application security vulnerabilities over the last two years.

What is really interesting is that vulnerable versions have a long tail of downloads. In other words, how long does it take for users to adopt a new version that has a fix and move away from an old one that is vulnerable? So we turned to Python, to the PyPI registry, and took a look at a pretty popular WebSocket package, which had a high-severity denial of service vulnerability disclosed in August. You can see that tens of thousands of downloads kept accumulating even after that, so people are still downloading vulnerable versions of it. This could be for different reasons: maybe there's legacy code, maybe there are issues upgrading to the fix. But the fact remains that we keep seeing these long tails of downloads, even for vulnerable versions. This trend of increasing security vulnerabilities holds even across well-known system libraries. Whether you look at Red Hat Enterprise Linux, Debian, or other Linux distros, we're seeing the same trend of increasing CVEs and security vulnerabilities being reported and disclosed. We'll get to that in a bit, but I will say that the CVEs we're seeing in these Linux OS libraries are not something far away from us. They actually manifest in the container technology that we're most probably using to power our applications and bundle them. So, transitively, we are being affected and impacted by these vulnerabilities.
So let's take a look at what happens in language-based software ecosystems, and how much we rely on and actually know about those open source dependencies that we use. A recent academic research paper investigated the characteristics and properties of different language-based ecosystems. It took, for example, Python with the PyPI registry, and also npm, and tried to compare them and figure out what is different and what is similar between them. What it found, for npm specifically, is that 61% of all packages on npm could be considered abandoned. Now, straight out, that seems like a very outrageous proposition, but it depends on how you measure what an abandoned package is, right? For the sake of this research paper, and for it being something easy to consume, the researchers decided that an abandoned package is one that did not have a release in the last 12 months. Rightfully so, you could argue back: maybe a package with no release in the last 12 months has simply reached maturity; maybe it's already so well known and well developed that it does not need any more releases, and everything is fine. Except then the event-stream incident happened, a security issue that I'm pretty sure some people in the room have heard of. If not, there's a whole post-mortem; you will not have a problem Googling this information. But just to put a small disclaimer on what was going on there: event-stream had been on the npm registry for almost eight or nine years. Definitely mature; it did not see any release in the last two or three years. But someone, through a social engineering technique, was able to take over publishing access, and through that was able to inject a malicious package into the transitive dependencies of event-stream, which you would usually use not as a direct dependency but as a transitive one. So through all of that, they injected something into a package that is being downloaded about two million times a week.

Another interesting insight that this research paper pointed out is what you actually pull in on an average install; there's a whole difference between what happens on Python, for example, and on npm. For npm, your average install of a package will pull in four levels deep of nested dependencies. This is great if you're tracking something like Express or Fastify, but at the same time they're going to pull in a bunch of other dependencies as well that you need to track, understand, and apply the exact same security mindset to. Let's do a mental exercise. Imagine you're building a Node.js app. Your mental image of the application is this blob where you see your application being deployed or used somewhere. The reality, however, is that the code you actually write, the total code that we write as developers, is a significantly smaller amount than what we actually ship. So your mental image of your application might be distorted: the reality is not just what you wrote, but all of the app that you are responsible for as well.
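If you want to make that gap concrete for your own project, here's a minimal sketch that reads a package-lock.json, assuming the lockfileVersion 2 or 3 format with a top-level "packages" map, and compares what you declared with what actually lands in node_modules. The file name and the fields it reads are the standard lockfile ones; everything else is just illustrative.

```js
// deps-report.js — a minimal sketch: compare what you declared vs. what you ship.
// Assumes a package-lock.json with lockfileVersion 2 or 3 (the "packages" map).
const fs = require('fs');

const lock = JSON.parse(fs.readFileSync('package-lock.json', 'utf8'));
const packages = lock.packages || {};

// The "" key is the root project; its dependencies are the ones you declared.
const direct = Object.keys(packages['']?.dependencies || {}).length;

// Every other key is something that actually lands in node_modules.
const installed = Object.keys(packages).filter((p) => p !== '');

// Nesting depth ~ how many times "node_modules/" appears in the path.
const maxDepth = Math.max(
  0,
  ...installed.map((p) => (p.match(/node_modules\//g) || []).length)
);

console.log(`direct dependencies:   ${direct}`);
console.log(`installed packages:    ${installed.length}`);
console.log(`deepest nesting level: ${maxDepth}`);
```

Running something like this against an average Node.js project tends to show a few direct dependencies fanning out into hundreds of installed packages, which is exactly the distortion of the mental image I'm talking about.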
All of us are relying on open source software and community-powered code, leveraging this beautiful open source world to boost our productivity, but we need to understand this concept of what we write versus what is not ours, and then what our risk and responsibility is towards those dependencies as well. Granted, it is hard to imagine writing software and delivering products these days without relying on some kind of open source software or dependency. And managing dependencies for a project is an important task: it requires due diligence, tracking the dependencies that you use and rely upon, and making sure that everything is okay. After all, the application that you're deploying is making use of that code, bundled as part of your dependencies. So we wanted to understand where we actually find security vulnerabilities. Most of this is part of the State of Open Source Security report that we published, in which we took a look at what happens both for users of Snyk and across the ecosystem itself. What we found, for this example, is that 78% of the time, when we find security vulnerabilities for Snyk users on npm, we find them in transitive dependencies. So again, going back to that example: if you are a JavaScript developer and you're tracking all of the Fastify changelogs, all of the Angular and React changelogs, and so on, most of the time, 78% of the time, when we find security vulnerabilities for your project as we scan it and offer you fixes, it will not be in the direct dependency that you use. It will actually be, most of the time, in those transitive dependencies. You can see this is actually a bit different between ecosystems, which could say a lot about what's going on with our ecosystem, but I will not go into that right now.

So what can possibly go wrong with transitive dependencies in my applications? I have a whole different talk about what is happening with malicious packages on ecosystems, and I will drill into just one of them here. These are all examples and use cases of security incidents that happened in the ecosystems. I'm going to drill into something called getcookies. getcookies sounds, I guess, pretty simple in terms of what it does: it parses HTTP headers for cookie data. Or does it? Actually, getcookies is nothing less than a command and control backdoor. Its sole purpose is to allow someone to attack your web server by sending command injections remotely. So any web server that bundled this dependency would actually allow a malicious attacker who knows about this backdoor to remotely inject arbitrary code into your app. How does it do that? The whole exploit code in getcookies is roughly four lines of code; I've summed up the important part here. To process this remote code injection, it has a simple switch case that does three things: reset the buffer, load data into the buffer, which in our case will be JavaScript code, and then execute whatever is in the buffer. So someone who knows about this backdoor could inject malicious JavaScript code through things like HTTP headers, and the package will process it and run it.
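Just to make that concrete, here's a simplified sketch of the pattern I'm describing. This is not the actual getcookies code, just an illustration of how little it takes: a tiny state machine driven by an attacker-controlled header value.

```js
// A simplified sketch of the command-and-control pattern described above.
// NOT the actual getcookies code — just an illustration of why a few lines are enough.
let buffer = '';

function handleHeader(value) {
  // The attacker drives this through a crafted HTTP header, e.g. "load:<some JavaScript>".
  const [op, payload] = value.split(/:(.*)/s); // split on the first ":" only
  switch (op) {
    case 'reset': // 1. reset the buffer
      buffer = '';
      break;
    case 'load': // 2. load attacker-supplied JavaScript into the buffer
      buffer += payload;
      break;
    case 'exec': // 3. execute whatever is in the buffer — remote code execution
      eval(buffer);
      break;
  }
}
```

That's the whole trick: three operations and an eval, hidden inside something that claims to parse cookies.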
Now, the attacker had to build a whole pyramid of nested dependencies to hide getcookies behind them, and mind you, all three of those dependencies are offspring of the same attacker; they all belong to the same person. But one, two, or three malicious packages on npm, a registry with a million packages in it, are not much of a threat on their own. Who would go ahead and install getcookies, a package with maybe zero downloads? Without a vessel to propagate it and claim trust, it's going to be hard. So what this attacker was able to do is compromise a library called mailparser, which has something like half a million downloads on the registry, and through that push those dependencies into the mailparser project. Now, mailparser itself is not a web server, so having it bundled in, or even required and used, may not have put you in harm's way. But perhaps this was all done to provide some legitimacy: someone searching for a cookie management package sees something that gets downloaded half a million times and thinks, well, maybe I'll use it. These kinds of malicious packages happen all the time. Here's an example from the npmjs advisories; you can find the same in the Snyk vulnerability database, and it doesn't really matter which one you're tracking. I just wanted to show you that all of these malicious packages, different kinds of typosquatting attacks, happen all the time, right? This one is from November 27th, just two weeks ago.

So how do we handle all of that? How do we mature into being responsible about the open source dependencies we install? For the sake of that, I created a project a while back called npq. What it does is, when you do an npq install of something like jQuery, as you can see here, this is actually a misspelling of jQuery, a typosquatted package, it will go and do some due diligence. It will check, for example, how popular this package is. Is it something that gets 20 downloads? Is it brand new, with someone trying to maliciously target users, or is it something that gets downloaded a million times a month and has probably earned some trust from the community? Does it have an open source repository associated with it, so you can go and verify that there's an open source code base you can check? Maybe it has known vulnerabilities in it; why would you not know about that before installing it, rather than after the fact, when you have to find ways to mitigate it? So here's one example of something we can use to be a bit more responsible.
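To give a feel for that kind of pre-install due diligence, here's a rough sketch in the spirit of npq. The two endpoints are npm's public package metadata and download-counts APIs; the thresholds are purely illustrative, and it assumes Node 18+ for the global fetch.

```js
// due-diligence.js — a rough sketch of pre-install checks in the spirit of npq.
// The two registry endpoints are public npm APIs; the thresholds are illustrative only.
// Assumes Node 18+ for the global fetch API.
async function checkPackage(name) {
  const meta = await (await fetch(`https://registry.npmjs.org/${name}`)).json();
  if (!meta.time) return ['package not found on the registry'];

  const dl = await (
    await fetch(`https://api.npmjs.org/downloads/point/last-month/${name}`)
  ).json();

  const ageDays =
    (Date.now() - new Date(meta.time.created).getTime()) / (1000 * 60 * 60 * 24);

  const warnings = [];
  if ((dl.downloads || 0) < 1000) warnings.push('very few downloads last month');
  if (ageDays < 30) warnings.push('package is less than a month old');
  if (!meta.repository) warnings.push('no source repository listed');

  return warnings;
}

checkPackage('jquery').then((w) => console.log(w.length ? w : ['looks reasonable']));
```

npq does this kind of thing for you, plus vulnerability checks, before the package ever hits your node_modules.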
And security vulnerabilities happen all the time, whichever language ecosystem you're using. Here is marked, for example, a very popular Markdown parsing library for Node and JavaScript, used both on the server and on the front end. You can see there's a fix for a ReDoS vulnerability that happened just a few months back. The interesting thing about vulnerabilities, at least in the last two years, is that the trends have shifted a bit. In 2016, the ratio of high to medium severities was such that there were more mediums than highs. In the last two years, this ratio has actually flipped: more of the vulnerabilities we're seeing are high severity than medium. So most developers and maintainers, I think, would agree that security should play an important role when we're building our applications. Except there are no textbooks on how to build secure applications; there are many guidelines and kinds of standards or semi-standards. But there's no open source maintainer in this room who would say: I'm following this and that standard, and this is how I write secure code. Standards can also vary between projects, which means one project can follow very good, highly secure guidelines and secure coding conventions, while another open source project, just as popular, does not follow them at all. So how these security standards are applied across open source projects varies a lot.

Just this year, in GitHub's 2019 State of the Octoverse report, security was actually the most popular category of project integration apps. And the more we use open source software, I think, the more we realize the risk we accumulate as we use someone else's code in our applications. Having automated tools that we can use in our CIs is most important, right? Because this is how we're going to be able to scale up security the same way we scale up the delivery of our application code. So through this survey, we asked maintainers and developers some questions. In this part, we asked open source maintainers to rate their security knowledge and how good it is. We found that 70% of open source maintainers would not actually feel confident handling a security issue if it was disclosed to them, rating their security knowledge on average somewhere around 6.6 out of 10. Moreover, as much as we're seeing adoption of CI tools, things like CircleCI, for example, to help us build a good CI/CD pipeline, they still go underutilized: we're not using the CI integration that we have for our applications, this DevOps pipeline, to put security into it. As a testament to that, when we asked developers about the cadence of their security auditing, one in four open source maintainers does not perform any sort of security auditing for their projects.

So security practices take many different shapes and forms, and some of them are really easy wins. For example, choosing a really good password so your package does not get compromised, right? Like mailparser, like many other packages and incidents that happened before in the ecosystem. Another option is, for example, to enable 2FA on your accounts, whether that is the npm package registry or Docker Hub or something like that. But how often are we doing that? What is the state of 2FA, for example, in the npm ecosystem? Well, looking back at it, 2FA has been available on npm since late 2017, and despite the fact that it has been there for so long, only a very small percentage of the ecosystem of developers and maintainers actually enable it. This accounts for a very small, insignificant number of packages out of the whole million packages on npm itself. So it's the responsibility of all of us to enable 2FA, to make that security possible for users consuming open source software. An interesting question is: how does this look in adjacent ecosystems? For example, what's the state of 2FA in Docker Hub as an ecosystem? Well, the funny thing is, it's 0%. And there's a whole funny story around it; that's accurate as of October 1st, which I'll get to in a minute. Why is that?
Because somewhere around June, someone from Docker chimed in on this issue and said, you know, we're planning to roll out multi-factor authentication at the end of June. Everyone gave it a thumbs up: great, amazing, let's do it. Except it didn't happen. Someone chimed in in July and said, hey, it's July, what's up? You said June. Then came August, and someone chimed in again: this is August, what's going on, fellas? How are we doing there? I like that they're giving it the programmer's perspective; I'm a programmer myself, how hard can it be? All right, so we didn't have it in July, or in August. Guess what happened in September? Nothing much; we still did not have 2FA available there. Reminding you, we're talking about Docker Hub. This is the primary registry for Docker containers, probably powering a lot of applications for everyone in this room, including myself. And then October comes around, and what's happening there? There's an interesting update: we've been rolling out personal access tokens. Which is not the same as 2FA, but maybe a step in a good direction, except we had to remind them that maybe we still need 2FA. And then it happened, and everyone is happy. So I hope after this talk, everyone is going to have 2FA enabled on Docker Hub and on npmjs.

Now, the security blind spots of lockfiles. I want to talk about this as an example, going back again to maintainers and open source activities, of how often we as maintainers and developers go into autopilot mode and don't even consider the security risks and attack vectors of things we do as day-to-day activities, for the most basic things. I'll give you an example. Here is a pull request I opened on GitHub. As part of this pull request for a real project, I actually changed some dependencies, as I needed to. And what's part of my contribution? You can see that my yarn.lock file is not even being displayed, because we take it for granted: it's a machine-generated thing, is there anything I need to look at? Maybe not, and there are a whole lot of lines being changed as well, so it's collapsed and doesn't show you all of that. If I open it up and show you what my contributions were, alongside the dependency update and the code that I actually added, those of you with good eyesight will see that on the left there is the actual package being used from the registry, and on the right my change pulls the exact same package, but from my own controlled domain, whether that's an npm proxy mirror, or installed directly, the way you can install npm packages from GitHub, and it has malicious code in it. So you will merge this pull request and probably not see it, because you did not even go and review what's in the lockfile; you just merge it. And the next time you do an npm install, and mind you, a lockfile is usually honored for the developers of a project specifically, not for its consumers, you might be susceptible to this injection attack. So why don't we have tools to help us with these very simple things? So, there we go: I built this thing called lockfile-lint. I think as JavaScript developers we heavily rely on static analysis and linters, so there we go.
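Here's a minimal sketch of the kind of check lockfile-lint automates: every resolved URL should be HTTPS and come from a host you trust. It assumes a package-lock.json with lockfileVersion 2 or 3, and the allowed-hosts list is just an illustrative default.

```js
// lockfile-check.js — a minimal sketch of the validation lockfile-lint automates:
// every resolved URL must be HTTPS and come from a host you trust.
// Assumes a package-lock.json with lockfileVersion 2 or 3.
const fs = require('fs');

const ALLOWED_HOSTS = ['registry.npmjs.org']; // adjust if you use your own proxy or mirror

const lock = JSON.parse(fs.readFileSync('package-lock.json', 'utf8'));
const problems = [];

for (const [name, entry] of Object.entries(lock.packages || {})) {
  if (!entry.resolved) continue; // the root project and linked packages have no resolved URL
  const url = new URL(entry.resolved);
  if (url.protocol !== 'https:') problems.push(`${name}: not served over HTTPS`);
  if (!ALLOWED_HOSTS.includes(url.hostname)) problems.push(`${name}: unexpected host ${url.hostname}`);
}

if (problems.length) {
  console.error(problems.join('\n'));
  process.exit(1); // fail the CI build
}
console.log('lockfile sources look OK');
```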
It's another linter that you can add to your CI: lockfile-lint. You can tell it to validate that, first of all, everything is served over HTTPS, and to restrict the allowed sources: what are you using, just npm or just yarn? You don't want anyone injecting anything from GitHub or anywhere else like that. So there we go, another tool that you can use in your CI. Perhaps a silver lining in the middle of this talk: we're not doing that badly across the ecosystem. When we asked developers and engineers who is actually responsible for security, most of the respondents said developers. And this is great, because we're seeing this strong statement, a strong testament, of developers being full stack: it's not just owning the DevOps or the backend or the front end, it's also being responsible for things like the performance of the application, for accessibility requirements, and, why not, for the security of the application as well.

Understanding the risk for us as maintainers of open source software, so that we actually mitigate and push out security fixes, is really, really important, and so is how we roll those fixes out. I want to show you an example that happened not long ago. This is a GitHub project for a very popular npm package. I don't remember the name, and it doesn't matter, because these kinds of things happen all the time. But I'm giving you one example where someone opened an issue saying, you know, there's a vulnerability that was reported, here's a link to the Snyk advisory, and asking the owner of the package to go ahead and release a new version, so it can be consumed as a fix for the package they rely on at a transitive depth. The maintainer did get involved, right? This did happen; everyone was proactive about it. Except, I think due to a lack of education about how you mitigate security issues, the fix was published as a major version. So if version two is vulnerable and you, as a maintainer, publish the fix in version three, that's going to be a bit tricky, because an automated upgrade from two to three has different semantic meaning: maybe an API was broken, and maybe a very elaborate CI setup will not go ahead and update it because they are afraid it will break their applications. So there's a whole lot of security education needed, not just in how you write secure code, but also in how you push fixes out, how you make them available for users to consume in a seamless way, so that automated upgrades, things like Dependabot or Snyk upgrade PRs, are able to pull in those new versions for you.
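To make that version-range point concrete, here's a tiny sketch using the semver package (npm install semver); the version numbers are purely illustrative.

```js
// semver-fix.js — why publishing a security fix only as a new major version often means
// nobody picks it up automatically. Uses the "semver" package.
const semver = require('semver');

// A typical dependent declares "^2.3.0" in its package.json.
const declaredRange = '^2.3.0';

console.log(semver.satisfies('2.3.5', declaredRange)); // true  — a patch on v2 flows out automatically
console.log(semver.satisfies('3.0.0', declaredRange)); // false — a fix shipped only as v3 never arrives
```

That's why, where possible, security fixes are best backported as a patch or minor release on the vulnerable major line.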
There's a whole set of best practices for open source maintainers; I've got this cheat sheet, some of which is shown here, and you can find it online as well. I worked on it with Juan Picado, who is the maintainer of Verdaccio, a local npm proxy; he's been doing an amazing job for open source as well. So, moving on: what about open source dependencies impacting container security technology? There's been a big increase of adoption around Docker, and the strong growth around open source that we're seeing there is expected to continue. We're talking about more than one billion downloads happening probably every one or two weeks on the container registry. Docker Hub reported about one million applications, in the form of container images, being uploaded to the Docker Hub registry in the last year. So this is very much fueling our open source growth, except Docker images almost always bring known vulnerabilities alongside their great value. If we take a look at the ten most popular Docker images on Docker Hub, just scanning the most popular page of images, we find that if you take the default image of each of them, each has at least 30 vulnerabilities inside it, with Node, presumably, at around 580.

Most of those vulnerabilities actually originate from the base image of your application. This is why it is so crucial to understand what you are using as the base or parent image in your Dockerfile. If you're using something like Debian Jessie, you're going to pull in something like 700 dependencies. If you're using something like Buster or Jessie Slim or some other variation, you're going to pull in smaller images with fewer OS libraries inside them, and thus you're also going to pull in fewer vulnerabilities; and it's a smaller image. And the thing is that fixing this can be really easy. Once you know this, you understand that fixing it is something very easy to do. For example, 44% of those Docker image vulnerabilities could be fixed just by changing to a newer image: if you do not use node:latest or node:10, but node:10-slim, for example. Here are the open source vulnerabilities in each of those image tags on Docker Hub. You can see that using node:10 will pull 582 vulnerabilities into that image; just by using it, you're exposed to that many vulnerabilities. Now sure, granted, they may not all be exploitable; they may not all have exploits in the wild. But why would you ship, by default, almost 600 vulnerabilities with your Docker application? There's no sane reason to do that. Use a different image. Except I know that's a bit of a smart-alecky way for me to just say it; we need tooling to help us do this. This is an example from Snyk, but you can use other tools as well. The idea is friendly tooling that helps us figure out: well, I detected that you're using node:10-something, it has almost 900 vulnerabilities, and you can consider moving to any of these alternative images to mitigate that and improve your security posture. So instead of pulling in 862 vulnerabilities, you'd be pulling in only 54 of them, which may be an acceptable risk in your workplace. The other thing we should pay attention to is that just by rebuilding an image, we can mitigate about 20% of Docker image vulnerabilities, because rebuilding an image may, depending on how the image is built, run apt-get updates or upgrades and pull in newer versions of OS packages, if nothing has been pinned by the package manager inside the OS itself.

So we're talking a lot about container technology, and we also asked some questions about it: when do you scan your Docker images for OS vulnerabilities? Interestingly, even though security is such an important concern, and even though there's a whole trend of security CVEs in Linux OSes, 50% of developers fail to do so. They will not scan those dependencies, even though it is something that is very easy to do.
There's plenty of tooling available for free, some of it open source as well; you can go ahead and use it. And what about those containers once they're deployed to production? What about testing them in production? Because unlike functions, which are very short-lived, Docker containers can be very long-lived. You could have a legacy service, a microservice that a team is using, that has been deployed to production; no one has pushed any update to it, and it's been running for a month or two. It is now in production, and maybe there are new CVEs affecting it, something like a new Heartbleed or whatever could be impacting it. Still, almost 50% of developers or engineers would not even find out about it. I guess another silver lining for container technology is that, with the empowerment of developers to own their infrastructure, even for Docker images and Dockerfiles and things like that, we're seeing a good and positive trend of developers owning the security of our container technology as well. There are some best practices around Docker image security that you can find, covering linters, how to scan images, and very easy things you can pull into your pipeline. You can ping me afterwards and I'll give you all the links; all of this will be available after the talk as well.

But I want to end on this note: attackers are targeting open source because finding one vulnerability, one CVE, affecting Fastify, Angular, or whatever open source project you may use, also translates into many victims, because there are always a lot of users of these open source packages. If something is very popular, it means attackers will probably be able to infect or attack a lot of consumers, since not everyone may be up to date, not everyone may be rolling out patches and upgrades as fast as they can. And this is why it is so easy for attackers to just target open source software. What if security was a bit easier, more developer friendly, actually actionable? So not just letting you learn about a new vulnerability, but actually fixing it for you: opening a pull request to tell you, I will upgrade your dependency because I want to pull in a fixed version, maybe a smart minor version upgrade rather than the latest version, so it doesn't break your apps. What if it was something that you could just push into your CI? The example here shows Snyk, which you can use, but npm audit or other dependency checks in your CI tooling work as well. What if you had this security integrated into your pipeline, into your workflow, so that when someone adds a new vulnerable package, when someone adds a new security vulnerability through a transitive package, your CI breaks, right? You become a bit more conscious of how to do this in a more considered, more responsible way, and protect your open source dependencies. So thank you very much. Use open source, stay awesome, stay responsible. Thank you.

I think we have time for questions if anyone has one. Yes, that's a good question. I wouldn't say we're defining a specific ecosystem like npm, if that's what you mean, as polluted. Java, for example, shows a similar amount of open source vulnerability growth in a similar fashion; it's not an order of magnitude of difference, it's kind of the same playing field.
It is more about, I think, understanding these security concerns and being able to do something about them, understanding what you should be responsible for and not taking those things for granted. So I don't think a specific ecosystem is polluted or not. If you go and dive into the report itself, you'll see that we also looked into very early-stage, growing ecosystems like Go, which shows the same trend of increasing vulnerabilities, even though it has a less significant number of security vulnerabilities in total compared to the others. But you could attribute that to, you know, maybe Go not being as popularly used, or maybe security researchers simply not looking at it to go and find vulnerabilities there. So there's a whole range of reasons why we're not seeing this in Go the way we see it in npm or Java, for example.

Go ahead, yeah. Awesome, yes, I am, and I'm so glad you're bringing this up. So the security researcher who found this, his name is Daniel Ruf; he's actually involved with Verdaccio and a bunch of other projects. He's a very security-minded developer, though not a security researcher by profession. What he discovered is that if you're using npm or Yarn, they both, so not the npm ecosystem itself, but the tooling around it, allow you to distribute binary files. You could go ahead, the same way I was building the npq and lockfile-lint CLIs, and build a CLI where you install something, make it global, and npm or Yarn as a client will make all the path and symlink changes, so that anywhere in your shell prompt you'd be able to run the command. The thing Daniel found is what we call, as a terminology, bin planting. The idea is that if one npm dependency, take whatever example you want, defines a specific bin file to be executed as a CLI and you install it, and then there's another, different package that you install which uses the same name for its bin CLI, it will override the original one that was created before it. So, for example, if you're using Yeoman, and I'm going completely hardcore retro old-school here, using Yeoman as a CLI, I could go ahead and create this yeoman-2 or whatever, get you to install it, and once you install it, I will actually be able to override the original yo command, the yo command that Yeoman had declared; I'm able to override something else entirely.
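To picture the bin planting idea, here's a conceptual sketch. This is not npm's or Yarn's actual linking code, and the package names are made up, apart from Yeoman's yo command; it just simulates the flaw of a later install silently overwriting an existing bin entry.

```js
// bin-planting-sketch.js — a conceptual illustration of bin planting, not real npm code.
const binDir = {}; // stands in for node_modules/.bin

function linkBins(pkg) {
  for (const [cmd, target] of Object.entries(pkg.bin || {})) {
    // The flaw: a later install silently overwrites an existing command of the same name.
    binDir[cmd] = `${pkg.name}/${target}`;
  }
}

linkBins({ name: 'yeoman-cli',      bin: { yo: 'bin/yo.js' } });       // the legitimate CLI
linkBins({ name: 'innocent-helper', bin: { yo: 'payload/evil.js' } }); // a malicious lookalike

console.log(binDir.yo); // -> "innocent-helper/payload/evil.js" — the real `yo` has been replaced
```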
You could say you could do the same thing with install scripts: basically, when you install something, it could just run a post-install script and do the exact same thing, except npm and Yarn have the ignore-scripts option, a security convention that you should probably understand and use. But even if that is used, this kind of bin planting could still happen, so it is still a very significant security issue. An interesting thing about it is that pnpm was not vulnerable in any of the cases we looked at, and it actually warns you if you're trying to override something that already exists. So yeah, if you are using npm or Yarn, you should probably update to the latest versions so that you're not vulnerable to that. The only way to become vulnerable is if a dependency that declares those bin files gets installed, which could happen if you install it explicitly, like an npm install of something, or if it gets pulled in as a transitive dependency; so we don't have a lot of control over what gets there.

No, I don't think there have been; as we've seen, it's a fairly new issue. Daniel actually consulted with me about it; we talked with the Snyk security team, and we involved both the npm security team as well as Yarn, with Maël, the maintainer of Yarn. We contacted all of them to consolidate the messaging and communications, and both npm and Yarn have been releasing security updates. So this is fresh, from just the last few days. There is one that was requested through GitHub; I don't know if it has already been assigned, but there are, as far as I remember, three CVEs concerning npm core specifically. This one? Yep, yeah, I graphed that as well; it's basically this one. Does it also consider Golang? That one was npm-specific, as far as I remember. This one, for ecosystems, considers growth for everything, Golang included; it's all growth combined. I think historically, yes, because you can see from the Linux OS side it's almost going, I won't say exponentially, I do not want to go there, but it's taking that kind of trend. I would say, be a bit more conscious about what this means. First of all, for npm and JavaScript, we've had a lot of activity around this: the security working group being there means we're more active and vigilant about what's going on and about assigning CVEs, and there's also been a lot of research activity, so obviously there will be more CVEs if no one had looked before. That contributes. The other thing is that not everything is exploitable. You may be vulnerable to some high-severity issues, for example Jest may have a ReDoS vulnerability, but you need to understand that Jest is probably a dev dependency; you're not deploying it to production, so that is not something to specifically worry a lot about, as a kind of general statement. So there's a whole prioritization exercise: understanding what is an actual, manifested risk and what is something that may not impact you. That said, I would say you do not want to go into production with, say, 20 low-severity vulnerabilities if you can just lower that to zero with zero effort, right? So you do want to go ahead and mitigate these security risks as much as you can. Good question.