So we have Adam, founder and CEO of Huntr.dev, and then we have Erez from Checkmarx's security research lab. They are going to be talking about the supply chain. Thank you so much. Hi, everyone. Thank you for joining us here today. So we believe there are many reasons that traditional application security is failing us. Join the long series of talks I'm doing on that, but today we're going to talk about the supply chain specifically. So as introduced, my name is Erez. I'm the VP of Security Research at Checkmarx. I started my IT life as a developer, and then everyone noticed that I'm better at breaking things than building them. Later I noticed that I'm better at managing other people to break things than breaking them myself. And I'm also a big believer in spreading security awareness and education. And this is why I'm here today. Adam? Yep. Hi, everyone. So yeah, Adam Nygate, founder and CEO at Huntr.dev. Previous roles include working in security architecture at GDS, part of the UK government's Cabinet Office. I'm a kind of rehabilitated hacker. I first had my run-in with the authorities when I was about 17 years old. Ended up fleeing the UK, moving to Singapore. No, just kidding. But I did end up moving to Singapore. And from there, I started helping organizations stay secure in Australia, Singapore, Kazakhstan, and various other countries. And then over the last four years, I've been working with over 10,000 hackers and maintainers to help keep the open-source ecosystem secure, sometimes unsuccessfully, sometimes stepping on a few toes. But to make an omelette, sometimes you need to break a few eggs. So if you are sitting here, you understand that supply chain security is about all the code in your application that is not your code. There is the part that you write in-house, and there is the part that someone else writes. Yeah. And so there are two threats affecting the open-source supply chain: vulnerabilities and malware.
With regards to vulnerabilities, we're trying to understand the situation, what it looks like, begin to evolve our understanding of the risk in the supply chain, explore the vulnerability disclosure lifecycle, and then think about how to approach it differently. If you're here, this is not news to you. The reality of open-source vulnerabilities is that 80% of application code is open source. When we talk about security we usually focus on the 20%, and the 80% is left alone. It is also not surprising to us that 97% of applications use open-source software. Now, when we're talking about supply chain, people frequently ask me: OK, we have had supply chains for years, basically since we had supply. What is the difference? Why are we talking so much about software supply chain security? What happened, especially in the last few years? So if we compare the traditional supply chain and the software supply chain, it's pretty simple. The traditional supply chain starts with raw materials; in the software supply chain, it starts with different sources or dependencies. Then we move through build systems, networks, and application repositories. And instead of using trucks, we're using the internet. But eventually, it comes out to production, to the users. Nothing is different, right? Well, almost. Let's imagine a well-known supply chain scenario, a very old one these days. Assume that you are a manager running a factory for assembling cars, OK? So you're running the factory. You're getting your tires from Japan, the chassis from the US, the engine from Germany, whatever. Everything is strictly controlled. Everything is done very carefully, under specific standards and compliance. And everything works very well, every nut and bolt. Now imagine that you walk down the street one morning as this factory manager.
And someone comes to you and tells you: listen, I created an amazing system for braking cars. It's a braking system. You should use it. It's free, by the way. Why don't you use it? And you say: wait, wait, wait. I'm worried about security. Is it good? And he tells you: of course. It has 5,000 likes on GitHub. And you say: oh, OK. So let's take it. I see you smiling, but this is what open source security looks like today. And this is the supply chain that we're talking about today. And it is basically all about misplaced trust. We have a trust paradox. If some random person came in off the street into our office and started, I don't know, typing on the keyboard into our code base, we would probably have security or the police or both escort them out. But if the same person is doing that on GitHub or any other repository and calls themselves not a random person but a maintainer: immediately, let's use it. This is a problem. This is a real problem. And when we trust these people, the maintainers, what do we really trust? We need to trust them, first of all, to care about security. If they don't care about security, there will not be any security. We need to trust them to know what they're doing. If they don't know what they're doing, even if they care, there will be no security. And most interestingly, although we really like to ignore this because it's a horrible thought, we need to trust maintainers not to be malicious. Now, the first two kinds of trust are easier for us to accept. It's a mistake. It's a bug. It comes from some sort of misunderstanding. It's a vulnerability. We know how to deal with that. We have mechanisms to do that. We even have some tools to do it for us. But as I said, we forget that when we talk about malicious maintainers, we are dealing with malware. And these are the two sides of the coin we want to discuss today. Let's start with vulnerabilities. Yeah.
So to kick us off on vulnerabilities, let's start exploring the situation a little bit. A few examples of well-known open source vulnerabilities. A while ago now, there was a pretty prolific vulnerability called Heartbleed. It affected OpenSSL, one of the most used components for keeping websites secure globally. At the time of discovery, Heartbleed affected about 17% of the internet during the period of exposure. Millions of records were stolen. And it remained undiscovered for two years. In 2017, another big vulnerability affected Apache Struts. This was the exposure that led to the big Equifax data leak: about 160 million records stolen pertaining to people, real people, their private information. This vulnerability was undiscovered for four years. And most recently, Log4Shell, affecting the package known as Log4j. It's estimated that this vulnerability affected 93% of enterprise workloads. And it was undiscovered for eight years. And this is just the tip of the iceberg. I'm sure you are familiar with this kind of image, the memes on the internet showing that we know what the tip looks like, but beneath the surface there are all these other scary things. And I think that's what's most terrifying in open source: we don't know what lurks below the surface. All we do know, as an example, is that in 2021, across 50 million open source repositories on GitHub, only 5,000 vulnerabilities were found. And that is just a drop in the bucket. So let's zoom in a bit. What does this risk look like in depth? When we start to think about risk, it's important to bring in the concept of time, thinking about how exposure develops with it. And typically, we think about this risk from the moment the vulnerability was disclosed, so once we're made aware of the vulnerability, to the point in time that it was fixed. And we can crudely model what this risk looks like, something like this.
And what this effectively means is that for a period of time, the risk is accelerating. And according to GitHub, typically this risk affects us for about three months on average. So we're exposed, we have this vulnerability, for three months. That's typically how we think about it. And this is what I like to call recognized risk. This is the risk that CISOs, security engineers, and security analysts within organizations think about today when keeping the open source supply chain secure. But I want to introduce a phenomenon: multiple discovery. It's a concept that says most discoveries are made independently by many people. An example of this was Heartbleed. Prior to it being publicly disclosed, it was exploited by multiple threat actors for up to five months. And allegedly, the NSA was aware of it for over two years before public disclosure. So with that in mind, how should we start thinking about the risk? We, again, start with time. But this time, we want the starting point of the risk to be not when the vulnerability was publicly disclosed, but when it was first introduced. Because that's when we can get hacked. That's when organizations are really exposed to this risk. So we pull in these two data points: when it's introduced, when it's fixed. We bring that zero day, that public disclosure point, back in. And now this is what the risk starts to look like. And there's our recognized risk. And this area to the left of the zero day is what I like to call unrecognized risk. And what's most terrifying is that this risk exists for over four years on average. Now let's zoom in a bit more. What does the vulnerability disclosure lifecycle look like? Let's walk through it. First of all, the vulnerability is introduced. Again, as Erez mentioned, this is simply a bug, a coding mistake that the author introduces as part of a feature or a bug fix in their project. Some point in time later, it's discovered.
Fingers crossed, that's an ethical hacker, a white hat, who reports it to the open source maintainers. They then go ahead and verify it, to make sure it's a real vulnerability and to determine how significant it is. They will then hopefully create a fix for it. They will then publicly disclose that vulnerability, normally via something called a CVE. And that's when organizations can start hearing about it. They can get notified by tools or by subscriptions to databases which watch for these CVEs. And finally, if a fix is available, the organization can adopt that fix. They can deploy it to their production workloads. So this lifecycle, from when the vulnerability is introduced, according to GitHub, takes over four and a half years on average. And here, again, this is the portion that we recognize as a risk, that we try to address today. But to the left of that public disclosure point is this unrecognized risk, which we are currently very limited in the ways we can address. When we address the recognized risk, this is what I like to call traditional thinking. It's quite reactive in nature. We use tools like software composition analysis, SCA, which I always like to describe as kind of like an antivirus. It's got a database behind it which knows about known vulnerabilities. It'll scan your open source code, try to match the open source code that you use against its vulnerability database, and then alert you when it finds a match. There are also newer tools out there which include things like automated patching. So if a fix is available, it'll help you include that fix in your source code. And there are novel tools coming out today, like runtime SCA, which tries to improve upon that traditional SCA. And just to be clear, SCA is very much needed, but it's just no longer enough. We need to start becoming proactive in the way we approach security in this process.
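The matching step described here, an SCA tool comparing your dependency list against a database of known advisories, can be sketched roughly like this. This is a minimal Python illustration: the two database entries are real, well-known advisories, but real tools match against version ranges and full advisory feeds, not a hand-written dictionary of exact pins.

```python
# Minimal sketch of the matching step an SCA tool performs: compare the
# (name, version) pairs an application declares against a database of
# known-vulnerable package versions.

KNOWN_VULNERABILITIES = {
    # (package, affected version) -> advisory id
    ("log4j-core", "2.14.1"): "CVE-2021-44228",   # Log4Shell
    ("struts2-core", "2.3.31"): "CVE-2017-5638",  # the Struts flaw behind the Equifax breach
}

def scan_dependencies(dependencies):
    """Return (name, version, advisory) for every dependency with a known advisory."""
    findings = []
    for name, version in dependencies:
        advisory = KNOWN_VULNERABILITIES.get((name, version))
        if advisory:
            findings.append((name, version, advisory))
    return findings

app_deps = [("log4j-core", "2.14.1"), ("requests", "2.31.0")]
print(scan_dependencies(app_deps))
```

Note the limitation this makes obvious: the scan can only ever flag what is already in the database, which is exactly why it addresses recognized risk and nothing to the left of the disclosure point.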
To date, the only thing really done to try to address this unrecognized risk is limited investment in critical open source software. And what I mean by this is that a few organizations help to secure the 0.0001% of open source projects that we most depend upon, like the OpenSSLs of the world. But an organization doesn't just depend upon that 0.0001% of projects. It has a whole breadth of projects of different shapes and sizes, from code that's sponsored by big enterprises to pieces of code written by a university student out in Algeria or something like this. And I know how hard it is to try to solve this problem, how to create new solutions for becoming proactive. I've spent my last four years trying to solve it. So let's talk about a new approach. How can we, as the open source ecosystem, begin to invest in hardening open source software? So first of all, supporting maintainers with security work, not just with funds but also with people. So it's not just about engineers contributing code back on GitHub or something like this. It's also researchers delegating time, or using their time, to find vulnerabilities in the open source software that's used. And analysts helping to conduct things like threat modeling and other security activities, to try to understand the security posture of a project. And it's really important that we don't just focus on the 0.001% of projects here. As mentioned, an organization depends upon projects of different shapes and sizes. And any single one of these can be the weak link in the chain and can bring down an organization. So how can we start? We can ask for help. We can reach out to the organizations that are trying to make a difference in this space. That's the Open Source Security Foundation, and myself, and others. And we can help answer questions like: how should we spread our resources?
Should we focus on which part, from the 0.001% down to the one-developer homebrewed project? How can we spread our resources? And don't just take this from me. Take it from Isaac Schlueter, the founder of npm, one of the most prolific open source tools out there, supporting open source code usage across JavaScript, TypeScript, et cetera. He significantly backs an approach like this. So on top of trusting maintainers, we need to also think about supporting maintainers. And the longer we wait here, the longer we are exposed. So that was the interesting part about vulnerabilities. But as we said, this is something that we as AppSec professionals have been looking at for years. What we are conveniently ignoring is everything around malware, because malware means attackers, and attackers are a problem for threat hunters. This is not our problem. Well, it became our problem. And instead of explaining what malware is, because we all instinctively understand what it means to get malware running on my machine or on my developer's machine, I want to start with examples that will show you the attacker's perspective, and also how nearly useless everything we were brought up to believe secure software means is from the developer's side, and how hard it is for a developer to avoid these kinds of mistakes. So let's start with Brandon. This is Brandon Nozaki Miller. He is a really cool guy. He runs a YouTube channel about electric superbikes. He's pro a very green world. And in his free time, he contributes a lot to open source. He has 41 different packages on npm, very popular. One of them is called node-ipc. Very, very popular, very, very useful. Would you let your developers use it? I see yes, I see no, I see I-don't-know. Let's check together, because I'm not sure myself. So, popularity: 1 million weekly downloads. That's a huge number of users. 1 million weekly downloads for the package.
To be honest, this is where I say, yeah, I would let people use it. But let's not be too sure. Let's check another factor: how long it has been maintained. This specific package, node-ipc, has been maintained for more than eight years. This sounds good to me. Would anyone here say no, don't use it? Of course not, because these are the only factors that we have. And everything was fine for many, many years, until March 7th, 2022. Brandon adds one line of code. You can maybe see it below, but let me magnify it for you. This is the line of code. Even if you're really good with code, you'll have some problems, because this code is obfuscated. So let me deobfuscate it for you. The only things we need to look at are three new functions that Brandon added. The first one goes to a website called IP Geolocation to check where the code is running from. When you run it, if you're in the US, you'll get back United States. If you're somewhere else, it will just tell the software where you're running from. Why does Brandon want to know that? We have a hint in the next function. The next one checks if you are in Russia or Belarus. Starting to get the picture? Why does he want to know that? Because if you are, and you don't need to be a developer to understand this: delete, delete, delete, delete. Brandon is trying to delete everything you have on your machine. Obviously, Brandon decided to take sides in the Russia-Ukraine war and put it in his open source. And many people asked him: Brandon, this is being used all over the place. We trusted you. What happened? What's wrong? What are you doing? And this was Brandon's answer: you downloaded my software for free, so I'm allowed to wipe your computer. Also, he added that this is all public, documented, licensed, and open source. So I guess he's not wrong. And on that day, a new word started being used: protestware. Software that is protesting things. Let's go back to the first question.
Would you let your developers use this package? Of course not. Because today he's against Belarus and Russia; tomorrow it can be something else. We already understand that this developer is problematic. This package was reported to npm and they removed it, obviously. So this is not a problem anymore. But what about his other 40 packages? They're fine. There's absolutely no problem with them. Would you let your developers use these packages? The answer is yes, you would, because you wouldn't know. There is no mechanism, no test mechanism, that tells you this is a problematic maintainer, that he has a reputation problem. There's no such thing in traditional testing. So the answer is yes, you would let your developers use those other packages. By the way, what about his packages on Python or Java? Do you think the platforms are connected somehow, or talking to each other? npm just removes it and that's it. Not even metadata stays behind. Nothing; it just never existed. Another example: two packages, one called Pompey, one called Pompeio. Which one is the right one? Which one is the malicious one? Both of them do the exact same thing. They have the same code. If you use them, neither of them will break any of your tests. No bugs at all. Everything runs smoothly, and it runs like this forever. The only thing is that Pompeio has one small dependency, called red APTY. We'll talk about it in a second. This kind of attack, naming something very close to the original, correct package, is called typosquatting. Here the typo is a bit different. It relies on the fact that the website of Pompey is Pompey.io, so they just registered Pompeio, because many people know it that way. But we see typosquatting all the time. And you can see that many popular packages, this is the middle column, got an evil brother or evil sister with some kind of small change. And you can see the attackers' packages.
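A crude version of a typosquat check, flagging a requested package name that sits suspiciously close to a popular one, can be sketched with a string-similarity ratio. This is a toy Python illustration: the popular-package list and the 0.85 threshold are invented for the example, and real detection engines combine many more signals (download history, maintainer reputation, publish timing).

```python
# Toy typosquat detector: flag a requested name whose similarity to a
# popular package name is suspiciously high, but not an exact match.
from difflib import SequenceMatcher

POPULAR = ["requests", "urllib3", "numpy", "django"]  # tiny illustrative sample

def looks_like_typosquat(name, threshold=0.85):
    """Return the popular package `name` most resembles, or None if it looks safe."""
    for known in POPULAR:
        if name == known:
            return None  # exact match: this is the real package
        if SequenceMatcher(None, name, known).ratio() >= threshold:
            return known  # one fat-fingered character away from a popular name
    return None

print(looks_like_typosquat("requestss"))  # close to "requests"
print(looks_like_typosquat("flask"))      # no near-match in the sample list
```

Note that this heuristic would not catch the Pompey/Pompeio case above, where the squat targets the project's website name rather than a misspelling; that is exactly why name-distance checks are only one layer.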
And the monthly downloads of these packages are staggering. If just one tiny fraction of that, some glitch or fat finger, makes you type something else, you will be hit by it. And again, the functionality is the same. As long as this package exists, it doesn't break any tests; everything runs smoothly. Do you have any tool in traditional application security that checks for that? The answer is no. As long as it doesn't break anything, it works. So let's get back to the small addition. This small addition fetches one picture, this PNG, from imgur. This is the picture, as you can see here. Just sad eyes. And apparently, we found that embedded in the picture is some code. Again, obfuscated code that, when the package is installed, runs, steals your passwords, and sends them to the attacker. Everything else is the same as the original package. This is not something you can find by testing or by any other means. So we started removing them. We let the Python platform know that they should remove this package, so the attacker just made another one, and another one, and another one, and eventually they started to communicate with us through the names of the packages. It started with lowboy, continued to not-so-awesome, and you can see how it ended. This is happening all the time. And when I say all the time, you want to know how widespread it is: at Checkmarx, I proudly run a team of threat hunters, and there is no typo in this number, okay? At Checkmarx alone we have already flagged, found, and reported more than 200,000 evil open source packages. This is not going anywhere. This is a huge number, and the rate just keeps growing. So why is it so problematic? We said malware. It's not a vulnerability, okay? It's not a mistake, it's not a bug, it's not a glitch, it's not an error. It's doing exactly what it's supposed to do; we just don't want it to do that thing. There is no error pattern that we can find.
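One simple red flag for the picture trick described here, a payload smuggled inside an image, is data appended after the PNG's final IEND chunk: a well-formed PNG ends eight bytes after the IEND marker (the four-byte chunk type plus its four-byte CRC), so anything beyond that is a place to hide bytes. A minimal check might look like the Python sketch below; it is naive by design, since real payloads can also hide inside legitimate chunks or in pixel data, which this scan would miss entirely.

```python
# Naive steganography red-flag check: report any bytes appended after
# the IEND chunk that should terminate a PNG file.

PNG_MAGIC = b"\x89PNG\r\n\x1a\n"

def trailing_bytes(data):
    """Return any bytes found after the IEND chunk of a PNG byte string."""
    if not data.startswith(PNG_MAGIC):
        raise ValueError("not a PNG")
    end = data.rfind(b"IEND")
    if end == -1:
        raise ValueError("no IEND chunk")
    return data[end + 8:]  # skip the chunk type (4 bytes) + CRC (4 bytes)

# A minimal stand-in file: magic header, a zero-length IEND chunk with its
# real CRC, then a smuggled payload appended by a hypothetical attacker.
fake_png = PNG_MAGIC + b"\x00\x00\x00\x00IEND\xaeB`\x82" + b"print('smuggled')"
print(trailing_bytes(fake_png))
```

A clean file returns an empty byte string; anything else deserves a closer look before the package ever reaches an install step.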
Also, it has no CVEs, no CWEs. If something is removed, there is no access to it anymore. We cannot learn from it. And you don't even have a CVSS score or anything like that, because if you run it at all, it's the most critical thing that can happen to you. You're literally running malware on your system. With vulnerabilities we talked about four and a half years, which sounds awful, but as long as you fix it somewhere along the way, you will maybe be okay. Malware you cannot allow to run even once. Even one time means it's game over. And sometimes you don't even have to run it. Just installing an open source package with some sort of malicious payload might mean game over. So the only thing we can do to avoid it is to make sure these things are known and found ahead of time. Again, proactively. We cannot wait for our developers to use it and then test for it. The developer needs to hear no, you cannot use this package, immediately when they try to do so. Yeah, let's wrap up, Adam. Yeah, to wrap it up. Some key takeaways. When it comes to vulnerabilities, the risk starts the day the vulnerability is introduced, not just once it's disclosed. That's the four and a half years from introduction to fix that we talked about earlier. It's also important to remember that it only takes one weak link for the chain to fail, so we shouldn't just focus on the 0.0001% of projects. And the longer we wait, the longer we're exposed. When it comes to malware, it's not about vulnerabilities, it's about the attackers. We need to hunt them down before they strike within your organization. And some closing thoughts on open source. Open source software is a blessing and a curse. There's a reason why it's so prolifically used. It comes with a lot of benefits, but its wide usage also makes it a very attractive target, both for people introducing vulnerabilities, intentionally or unintentionally, and for malware.
And widely exploited vulnerabilities are only going to increase. We saw that trend when we looked at the rate of these big critical zero days, from Heartbleed to Log4Shell. And it's important to remember that open source software is a commons. There is no higher return on investment for each dollar or hour spent than trying to improve it. And the reason is that each dollar spent to secure it doesn't just help secure ourselves here; it also helps secure partners, customers, users, and people worldwide. Yep, thank you very much. Thank you both very much. We have time for questions, so please go ahead. So specifically when it comes to typosquatting, or intentionally malicious package names like that, have you looked at how many other packages on the registry import those as a dependency? So people that might be getting exposed to that without even realizing it, if they're not looking at their transitive dependency tree? The answer is yes. I don't have the numbers now. We've seen several of those. It is less common than hoping someone uses the package directly, as, let's call it, a first-degree dependency. But I remember seeing at least three or four things like that in the last year. Those are interesting cases for sure. I mean, I guess, do you have a way, or do you have any suggestion, or anything like that? So as a best practice, if someone is proxying the dependency server with something like Nexus or Artifactory, do you have any recommendations for how to protect your supply chain against the torrent of packages like that, other than developers being really careful? Yes, I'll try to stay off the marketing pitch. Well, the main reason that we're seeing these is because we at Checkmarx are looking for them. We have a specific team that is doing that.
We have our own engine as part of the platform that checks for this, either when you're pulling things in, or it can be used as an API; it's part of the platform for every customer. It's probably not perfect, but it is doing exactly that. It runs several tests on the supply chain. There is an engine looking for typosquatting. And yes, the solution is exactly that. And to do that, our people are working all the time, ahead of that exact point in time where you are trying to use that dependency. So when you're using something, it was either, hopefully, removed, because npm, PyPI, or whoever did what we asked. By the way, this also sometimes takes days or weeks after we ask to remove something malicious. But you're being flagged: I'm not sure this is the package you want to use. More questions? Yes, please. I was just curious. You mentioned at the beginning, on vulnerabilities, or was it the Heartbleed thing, that the NSA knew about it two years before it was reported? I think Adam specifically said allegedly, right? Oh, did you say allegedly? Yeah, yeah, yeah. Are we bugged? I was just curious why they would do that. So we can only speculate, I think, right? But when it came to Heartbleed, listen, to have knowledge of a zero day, especially in something as prolific as what Heartbleed affected, which was OpenSSL, right? It basically gave you a way to access a server that was protected with OpenSSL. And so I would expect that the NSA wanted to keep this information private, or undisclosed, so they could leverage it for their purposes. At least that's what I would speculate. Duh, I should have thought of that. It's kind of the ultimate backdoor, right? Yeah, yeah. Just to follow on with another question, which is another challenge for the software development community: code snippets from Stack Overflow, what do you guys think? So, objectively, they are bad. Yeah.
We see problems more around vulnerabilities, honestly, than malware. I was part of an attempt many years ago, like six or seven years ago, to convince Stack Overflow to add a red frame when the community says that this code will work, but it's also vulnerable. They refused to do that. They said that it would make people use Stack Overflow less, and I guess they're right. Same thing: we need to make sure that we know what's going on there, specifically with vulnerabilities. You need to scan that code like you scan any other code, because eventually it's your code. And if you see there the use of dependencies which might have either a vulnerability or something malicious, again, currently no one is taking responsibility. We need to be responsible for that. So this was a long answer basically telling you that yes, it's a problem, and yes, it's a big problem. Right now, take Stack Overflow away from the new generation who are graduating from the universities, and, I have to say it up front, they have zero capabilities. In reality. I don't think you need to take it away. I consider myself a Google developer. That means that if you take Google away, I cannot do anything. But education. Policy. Policy is important. Monitoring and policy. Monitoring is okay. Policy will not always work. Education. Just educate about what might happen. That's it. I'm sorry, just to make the answer a bit longer. But I think there is a bit of a difference, though, because open source packages are software that developers download, right? We're talking about hundreds, thousands, if not larger amounts of code, right? And so you can't really have the expectation for a developer to read through it, understand it, and form some kind of opinion on whether it's secure. But most snippets on Stack Overflow are quite short, let's call it five, ten lines of code, maybe twenty. I think the majority of snippets are probably quite short.
And so I think we can maybe place some expectation on developers to get an understanding of anything they copy-paste. But I think the problem is going to get far worse now that we're walking into a world of GPT, AI, whatever you're going to call it, spun-out code, where it is generating hundreds, thousands of lines, maybe whole applications, in a few months or a year or whatever it might be. And I think that's a scary thought: it's generating so much code that you can't form an opinion on it, or it's too labor-intensive to do so, but you rely upon it nonetheless. So other than procuring Checkmarx, right? If we're looking at an SSDF attestation, is there anything we should put in that sort of attestation to get a company to say that they're looking at these sorts of things, whether they're buying Checkmarx or not, but they're attesting to some set of activities? Any recommendations on what that activity would be? You're talking specifically about malware, I understand. Yeah, so the Secure Software Development Framework, right? We're part of the effort right now within the government to have all companies attest that the software they're delivering followed secure practices. So the question is, what would they attest to, right? Something that says they're looking for typosquatting and those sorts of errors, right? And malware sorts of things, in their software deliveries. Yeah, I think there are two levels to it, right? There's the kind of: are you doing the checking for the malware supply chain side of things, such as typosquatting and these kinds of things? But then we could take that one level lower, which is: what about attestations from open source, like the dependencies that you're building on top of, right?
And I think it gets really tough there, because I think it's going to be a very complicated journey getting open source maintainers to elect to do the work, whatever it might be, to produce attestations. And also, there's probably a better legal perspective than mine on what attestations look like from companies and the responsibilities around them, but I'm not sure what it looks like for a maintainer who provides an attestation to the security best practices they follow in their workflows. Is that scoped by the warranties of open source licenses, the "no liability, use at your own risk" kind of thing? I think this is where a lot of questions start to be asked about what the attestation means for them. Right, right. Yeah, and just to add to that, this is almost a green field. I mean, supply chain security was the buzzword of 2022. We're not there yet. We promised to show you the problem; we did that. We are trying to drive some sort of mindset change, to understand what we have completely, almost deliberately, missed so far. I think there is a process here. If we need to take one word from this entire talk, it is proactive. We need to do things proactively, because the traditional tools do not answer this. Even, Adam, you mentioned automatic patching. Automatic patching is great for attackers, because if everything so far was great, and then I want to change something to become malicious: excellent, you're the first one being infected. So we need some sort of mindset switch, thinking proactively. I have all this open source here being used. Malware is a specific thing, okay; typosquatting and all that are one example, and there are 30 different other attacks. I gave you one example.
Think, think before you use something you don't have to. If you're in charge of a repository, or a mirror repository that holds only the packages you allow, make sure you're allowing the right things. And if you have any way to support the open source you are using — again, not Checkmarx or HunterDev or anyone else, but the maintainers — support them. Many maintainers will be very happy to do things securely; we need to help them. The entire open-source community — users, maintainers, supporters, creators, vendors, whatever we want to call ourselves — we need to help this community, because eventually it is the only insurance we have that what we're using is secure. So maybe I'll have a specific answer if we talk in a year. Let's have a date: same place, same time, next year. Sounds good, next year.

Okay. Do you consider OpenSSF Scorecard a good possible solution for flagging malicious packages, typosquatting, that sort of thing — maybe now, maybe as it evolves in the future?

I think it's a great project. I personally love it, and I think everyone should use it. It doesn't directly tell you whether what you're using is malicious, but it will tell you how healthy the package is, and health correlates very clearly with the odds that it will turn malicious somewhere in the future. Are many different entities contributing, so it isn't just one person who might do that? How many people contributed in the past? Is activity trending up or down? When did something last happen? So definitely, this is one of the tools to use. Like everything in security, we're doing the onion thing: another layer, another layer, another layer. Scorecard is one of them, for sure.
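The health signals listed here — how many distinct contributors, how recent the activity — can be folded into a rough score. A toy sketch under stated assumptions: the weights, caps, and two-year cutoff are invented for illustration, and real OpenSSF Scorecard checks are far richer than this:

```python
from datetime import date

# Hypothetical sketch of Scorecard-style "health" signals: many distinct
# contributors and recent activity lower the odds of a silent takeover.
def health_score(contributors: int, last_release: date, today: date) -> float:
    """Toy 0..1 score; weights and cutoffs are illustrative assumptions."""
    # More maintainers -> harder for one bad actor; cap the benefit at 10.
    maintainer_signal = min(contributors, 10) / 10
    # Releases older than ~2 years (730 days) contribute nothing.
    age_days = (today - last_release).days
    freshness_signal = max(0.0, 1 - age_days / 730)
    return round(0.5 * maintainer_signal + 0.5 * freshness_signal, 2)

# A package with a dozen contributors and a release six months ago
# scores high; a one-person project dormant since 2015 scores near zero.
print(health_score(12, date(2023, 1, 1), date(2023, 7, 1)))
print(health_score(1, date(2015, 1, 1), date(2023, 1, 1)))
```

In practice you would pull these signals from the repository via the Scorecard tool itself rather than reimplement them; the point is only that "health" is measurable, layered evidence, not a malware verdict.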
Hey, as someone who plays a malware researcher on TV sometimes: have you submitted any of these samples you found to VirusTotal or some other clearing house, or do you plan to release some sort of research sample set?

So, we have a plan to create some sort of package zoo and make sure that happens. We are currently contributing a lot to the Backstabber's Knife Collection — I don't know if you're familiar with it, a very good project. And as far as I know, OpenSSF is planning something similar to what you're describing, specifically for open-source packages infected with malware, and when they do, we'll be first in line to contribute. Definitely. Excellent, thank you.

I have a follow-up question on that, and I go back to policy. You create policies in the organization, or Checkmarx does, and you apply those policies across your teams to discover this malware. Are you currently working with the package registries — npm, for example — as you discover this malware, notifying them through established relationships to say: hey, this is malware discovered in your registry, can you remove it and rapidly address the issue? Is that something in place now, or are you trying to create that sort of relationship?

Everything you said is happening except one word: rapidly. We're doing that; I mentioned it before, I think, in an answer. Sometimes, between the time we let them know and the time it happens, it can take days, even a few weeks. During that time, the only people who know about it are basically the people we reported it to, and we're waiting for them to make it faster. By the way, this is not a complaint, because we know that many of the people over there are volunteers; it's just the status.
For example, when we find a malicious campaign with 10,000 or 20,000 packages going out at once, theoretically we're supposed to report them one by one — to npm, for example. That's not possible even for us, so we just send them complete lists and hope they'll handle them fast enough. As we said, the mindset change is still pending for everyone — not just users, but also the platforms. No one is fully ready for this, and that's why we're shouting it from the rooftops. But yes, the process exists, and we share the information with them; it's actually the first thing we do. Thank you.

Okay, cool. We'll still be hanging around here if you want to talk. Thank you for joining us today. Thank you so much. Thank you.