Hi everyone, welcome to our talk. Today we're going to talk to you about hidden vulnerabilities in open source. A little bit about us: my name is Sharon, and with me is Shaour. We're security researchers at Prisma Cloud from Palo Alto Networks. We deal with open source vulnerability research, vulnerability management, and everything that comes with it. I find vulnerabilities in open source, and because we deal with open source security, we also deal with supply chain security. To give a short abstract of what we're going to cover today: we'll start with an overview of vulnerabilities and CVEs, then move on to open source and hidden vulnerabilities, what they are, and the different types of hidden vulnerabilities. We'll talk about maintainer behaviors that we found during the research we've done over the last two years in the open source security world, and then share our findings and conclusions. So let's dive in and start with CVEs. A security vulnerability is simply an issue, a flaw in software or in code, that causes it to do things it was never intended to do: a privilege escalation, code execution, anything outside the code's intended behavior. To identify and distinguish between vulnerabilities, there is the CVE ID. A CVE is just an identifier for a vulnerability with a short description attached; each one gets an ID. The CNA, the CVE Numbering Authority, is the body responsible for assigning those CVEs. Anyone can request a CVE: I can, researchers, users, and developers can, everyone can ask a CNA, and it assigns a CVE. Palo Alto Networks, for example, is also a CNA. But the problem with a CVE is that it's just a list entry with a description. It doesn't contain the metadata we need to understand which components are really impacted, which versions are affected, which version is fixed, or what the severity of the vulnerability is.
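To make concrete what that missing metadata looks like in practice, here is a minimal sketch of the kind of enriched record that the vulnerability databases add on top of the bare description: affected version ranges and a severity. The record and helper below are hand-written for illustration, with simplified field names; they are not a real database response or official API.

```python
import json

# Illustrative, hand-written record in the spirit of enriched CVE metadata.
# NOT a real NVD response; field names are simplified for the example.
record = json.loads("""
{
  "cve_id": "CVE-XXXX-YYYY",
  "description": "Prototype pollution leading to remote code execution",
  "severity": "CRITICAL",
  "affected": [
    {"package": "example-lib", "introduced": "4.0.0", "fixed": "4.7.7"}
  ]
}
""")

def parse_version(v):
    """Turn '4.7.6' into a comparable tuple (4, 7, 6)."""
    return tuple(int(part) for part in v.split("."))

def is_affected(record, package, version):
    """True if the installed version falls inside an affected range."""
    installed = parse_version(version)
    for entry in record["affected"]:
        if entry["package"] != package:
            continue
        if parse_version(entry["introduced"]) <= installed < parse_version(entry["fixed"]):
            return True
    return False

print(is_affected(record, "example-lib", "4.7.6"))  # inside the affected range
print(is_affected(record, "example-lib", "4.7.7"))  # the first fixed release
```

This is exactly the triage question a bare CVE description cannot answer on its own: am I running an affected version, and which release do I need to move to?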
So this is where the NVD, the National Vulnerability Database, comes in as a common place. It gives us all the missing information that a vendor needs in order to understand that they are vulnerable, that they are affected, and that they need to update or fix the vulnerability. There are also other feeds, like the GitHub Advisory Database, that provide this information, and more feeds besides. So let's talk a bit about open source. This is why we're here, right? We all love open source. We use it all the time, we contribute to it, and there's a good reason for that. It's why people have been using it for I don't know how many years now: it's well maintained, it's very easy to use, and it's really transparent. I can just import something, but I can also go to GitHub, or wherever the code is hosted, see the code and contribute; it's all out there. It saves tons of resources, time and money, for vendors and users who don't need to go and reinvent the wheel, building something that somebody else has already built. For instance, I wouldn't rewrite Kubernetes; it's a very complex platform that somebody else, a big community, already built. But along with all the fun in open source, there are security issues that come with it. First of all, it's code like any other, and what I'm about to say is relevant for both closed source and open source: code is written by humans, we're human, and bugs happen. Some of those bugs, as I said at the beginning, turn out to be security flaws, security issues. When a lot of people work on the same project, that's also challenging. And sometimes it's just too convenient: we import or require a package, and we don't really go and check what the functions in that package actually are, what the code looks like. There can be a flaw, a vulnerability, inside. And we all use package managers, which is great.
But there can also be malicious packages that we import without even noticing. By malicious packages, I mean packages that have malicious code inside, packages we never intended to use. For example, I could install a malicious package because of a typo, which is called typosquatting: someone uploads to the package manager a package whose name is very similar to the name of the package I want to use, so a typo can lead me to accidentally download a package I didn't intend to. There's also dependency confusion. And it doesn't even have to be something planted by an outsider: the maintainer themselves could release a new version with a security issue. I'll give you an example; I'm sure most of you have heard about the colors package. Not long ago, the maintainer of that package released a new version with a security issue inside, an infinite loop, which effectively caused a denial of service for most of the users who simply use the latest release of the package. So it's quite dangerous. Also, because open source is spread so wide, it's an easy platform for supply chain attacks: someone can search for a vulnerability in one platform, one application, and then use it to damage the many applications that depend on it. So there are issues. But who is responsible in open source? In large vendors, there is usually a disclosure process, a security policy with disclosure deadlines, typically up to 90 days between the time a security issue is reported and the time the reporter can publish it. That gives the maintainer, whoever is responsible for the software, a time frame to fix the issue. This is less common in open source. It exists, and some packages do have a security policy, but it's nowhere near as widespread as with large vendors.
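The typosquatting risk mentioned above can be sketched as a simple edit-distance check: flag any package name that is suspiciously close to a popular package's name without being identical. The "popular" list below is a tiny illustrative sample, not real registry data, and real registries use more sophisticated heuristics than this.

```python
# A minimal typosquatting check: flag a package name that is within
# edit distance 1 or 2 of a popular package's name but is not that name.
# The POPULAR list is a tiny illustrative sample, not real registry data.

POPULAR = ["requests", "numpy", "lodash", "express", "react"]

def edit_distance(a, b):
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def looks_like_typosquat(name):
    """Return the popular package this name imitates, or None."""
    for popular in POPULAR:
        if name != popular and edit_distance(name, popular) <= 2:
            return popular
    return None

print(looks_like_typosquat("reqeusts"))  # a classic transposition of "requests"
print(looks_like_typosquat("requests"))  # the real name itself is fine
```

The same distance-based idea is one reason attackers favor names one keystroke away from a popular package: the victim never notices the difference, but a registry-side check like this can.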
And the result, when it's really hard to contact a maintainer, or when reports are simply ignored, is public disclosure, or what's called full disclosure: the information about the vulnerability is published out loud, on GitHub or on Twitter, out there for everyone. So it cuts both ways; the responsibility goes both ways. The maintainer is responsible for putting a policy in place where one doesn't exist, but the reporter also needs to understand and follow the process, contact the maintainer, and not just publish it out loud. So that's our problem: there are vulnerabilities out there, they're published, and they don't get a CVE. What do we do? Let's talk about the hidden vulnerabilities: what they are, what they look like, and where we can find them. Just like an iceberg, the visible part, the tip, is perhaps very impressive, but the hidden part is much larger and much more significant. Based on our research, we were able to categorize these hidden vulnerabilities into three main types. We call the first type "hidden but visible": there is a commit or a security issue, usually with a fixed version and a clear statement of the problem as a security vulnerability, like this example in OctoPrint, a Python library. In that example, we can understand straight from the title that there is a security vulnerability here; there is also a fixed version and a very detailed description of the vulnerability. The second type of hidden vulnerability is "hidden but fixed": there's just a commit or just an issue, not always a security issue. The bug is usually fixed, but note the word "bug": it's marked, managed, and treated simply as a bug. There's not always a clear statement of the problem as a vulnerability, and often no indication of the security impact at all.
A good example is Envoy, a Golang package. There is a fix, you can see the fix, but it looks like a plain bug fix without any details; in practice, it's a fix for a security vulnerability. The third type of hidden vulnerability is "hidden and undercover": it looks like an announcement or a feature, but actually addresses a security issue. It would be difficult for an ordinary user to notice there is a security vulnerability here; you need some technical knowledge to understand the security impact. A good example is Listmonk. From an initial reading, it looks like the addition of some capability to the package. But from the point of view of a researcher, or an attacker, we can understand that behind this added capability there is a security vulnerability that can be exploited. And we can say that this commit is currently in a disclosure process and will get a CVE. So let's talk about the time frame between the first moment there is a vulnerability discussion or commit and the time it eventually gets a CVE. We need to remember that during that window the vulnerability exists, open and visible, and an attacker, or a researcher, can look at this public discussion, this fix or commit, and try to exploit it. The first example is the widely used npm package handlebars. It was a remote code execution in a heavily downloaded package, over 9 million downloads per week, and it was hidden, open and public, for 58 days. The hidden vulnerability was first introduced on February 21st, and it finally got a CVE almost two months later. I'll repeat that: it was public, visible, and exploitable. The second example is a widely used Golang package, Gitea. It was an arbitrary file deletion, also in a very widely used package, and it was hidden for 55 days: first introduced in the middle of March, and it finally got a CVE almost two months later.
So we talked about the time it takes for these vulnerabilities to finally get a CVE. Now we'll see a live example of a hidden vulnerability we found as part of our research. This vulnerability was hidden for 17 days, and we wrote an exploit for it. Let's take a look. The exploit targets less open UI, an npm package for generating themes, and we created a crafted theme to trigger it. On the right of the screen, you can see the exploit that eventually gets us remote code execution. On the left, you can see that we import the vulnerable package version, and as part of the build process we point it at the crafted theme that we wrote; we get remote code execution as part of the theme's build process. We point to the crafted theme, and we get RCE; the vulnerability later got a CVE. So we understand the problem: there are hidden vulnerabilities publicly out there without a CVE, which means that scanners based on vulnerability feeds won't catch them. So why do they exist, and why don't all vulnerabilities get a CVE? We don't have a clear answer for that. However, someone needs to file for a CVE in order for it to be opened, and as we said before, there is no enforcement or standard for vulnerabilities in open source, so nothing requires anyone to file a CVE. If the maintainer doesn't ask for one, it probably won't happen at all. And that's understandable: we know that maintaining a package is a very, very difficult job. Most maintainers do it in their free time, on top of their normal day job, and they have to deal with a lot of other things to keep the application running, build new features, and fix ordinary bugs. So they have to rely on security researchers or reporters who come and tell them about vulnerabilities.
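As background on why a crafted package can execute code at build or install time, it helps to know one general mechanism: npm packages can declare lifecycle scripts that the package manager runs automatically. This hypothetical package.json is not the actual exploit from the demo, just an illustration of the shape of the mechanism:

```json
{
  "name": "example-crafted-theme",
  "version": "1.0.0",
  "scripts": {
    "preinstall": "node -e \"console.log('arbitrary code runs here')\""
  }
}
```

Anything in such a script runs with the installing user's privileges, which is why installing or building an untrusted package is effectively running its code; npm's `--ignore-scripts` flag exists precisely to disable this behavior.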
So, yeah, we need to ask for a CVE in order for a vulnerability to become a CVE, right? I want to give you an example that I saw in one of the discussions on GitHub, for a package where the maintainer genuinely asked why they should file a CVE. This example shows that not all maintainers know and realize the impact of a CVE. When we file a CVE, when there's a CVE for a vulnerability, it's published, so everybody can know that a given release fixes a vulnerability, that it has an impact, and that they need to update their package. Otherwise, I usually wouldn't update a package; updating can break my build. In this discussion, someone asked the maintainer if he could file a CVE, and he said he was curious what value that would provide to the community. Well, when it's out there, the community, the users of this library, know that they need to update. Simple as that. But why should we actually care that these vulnerabilities are out there? As I think we've emphasized plenty of times in this talk, there are vulnerabilities out there, not formally disclosed but published, that everybody can exploit. We as security researchers find these vulnerabilities and report them into our product. But there are also black hat attackers who look at the same public issues, the same things we look for, only with a different goal: they are trying to find unpatched vulnerabilities, exploit them, and do things we don't want them to do. This is also how supply chain attacks take hold, as the first link in the chain: when there is a vulnerability out there, an attacker can find it and exploit it, and there are a lot of instances that remain exposed simply because their owners don't know they need to fix anything.
So over our last two years of research, we found a number of common behaviors among maintainers in how they deal with security issues. Just as a disclaimer: we split them into four types, but these are the edge cases. A maintainer is one of us; each real maintainer sits somewhere in between and shows a different mix of these behaviors. The four types are: by the book, responsive, silent, and neglectful. The first type of behavior we call "by the book", the type we really love. They have a security policy, a very detailed SECURITY.md, a responsible disclosure process, usually very detailed release notes, they usually request a CVE, and they're very welcoming to security researchers. The second type is the responsive maintainer. They usually fix the issue and release an update with the fix; the commit message mentions the security impact, and it's very clear and easy for an ordinary user to understand the security impact of the commit. They are willing to disclose vulnerabilities and will usually request a CVE if necessary, but it depends on the will and the mood of the maintainer. The third type is the silent maintainer. They will fix the issue, they will fix the security problems, but they will do it silently. It's very hard to identify a commit with security impact, and most of the time you need technical knowledge to understand the security aspect. Usually there is no documentation of the security issue, and the main behavior is silent merges to master. You need to understand that silently merging a fix commit to master is really like a roll of the dice: if you cloned the package before the merge, you're vulnerable, and if you cloned it after the merge, you're safe. Just like a dice roll.
It shouldn't be like this, and maintainers should understand that they have a responsibility to their users and to the open source community. The last type is the neglectful maintainer, with no security awareness: security discussions will stay open for a long time, or may never be closed, they may even keep vulnerable dependencies, they usually won't fix vulnerabilities, and they are unwelcoming to security researchers. So we've just covered maintainer behaviors and how they deal with security issues, and we've talked about hidden vulnerabilities. We've realized there's a gap here between security researchers and the maintainers and contributors, whoever develops the application. I have to say that this gap has narrowed over the last year, and I hope it will keep narrowing, that we keep closing it, so that vulnerability reporters and maintainers know how to cooperate and who is responsible for what. But the issue still exists: there are a lot of vulnerabilities that are publicly discussed without a CVE ID. So let's talk a bit about what we can do in the meantime. The first thing I can advise is to gain visibility. Both users and maintainers should use vulnerability management and scanning tools to know what vulnerabilities they're exposed to. I know this only solves the problem for CVEs; most of the time, scanners won't have information about hidden vulnerabilities, which is exactly the problem we discussed. But there are some feeds and some scanners backed by researchers who look for these hidden vulnerabilities and publish them, so you can scan and know about them. There are open source tools and also closed source tools, like the ones we build. The first thing is to gain visibility, to know what you need to fix.
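As a toy illustration of what such visibility tooling does at its core (real scanners, such as `npm audit` or OSV-based tools, are far richer and pull from live feeds), here is a sketch that checks a dependency list against a small, hand-written advisory database. Both the package names and the advisories below are invented for the example.

```python
# A toy dependency scanner: compare installed packages against a local
# advisory list. The advisories below are invented for illustration only;
# real tools pull from feeds like the NVD, GitHub Advisories, or OSV.

ADVISORIES = [
    {"package": "example-templating", "below": (4, 7, 7),
     "note": "remote code execution, fixed in 4.7.7"},
    {"package": "example-logger", "below": (2, 0, 1),
     "note": "denial of service, fixed in 2.0.1"},
]

def vtuple(version):
    """Turn '4.7.6' into a comparable tuple (4, 7, 6)."""
    return tuple(int(x) for x in version.split("."))

def scan(dependencies):
    """dependencies: dict of package name -> installed version string."""
    findings = []
    for name, version in dependencies.items():
        for adv in ADVISORIES:
            if adv["package"] == name and vtuple(version) < adv["below"]:
                findings.append((name, version, adv["note"]))
    return findings

deps = {"example-templating": "4.7.6", "example-logger": "2.0.1"}
for name, version, note in scan(deps):
    print(f"{name} {version}: {note}")
```

The point of the sketch is the visibility step itself: you cannot fix what you don't know is vulnerable, and even this trivial comparison already tells you which dependency to bump first.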
But public discussion of security issues will still exist, so we all need to be responsible. The responsibility is on all of us: maintainers, users, researchers. We all need to act the way we would want others to act toward us. Have a security policy, a SECURITY.md that really spells out the details: how security researchers should disclose a vulnerability, what the time frame is for the maintainer to fix it, and what platform to use to communicate. And it needs to be respected by the researchers too; researchers, and just ordinary users who think they've found a vulnerability, shouldn't simply open a public issue. There needs to be a clear way to communicate, and it goes both ways; both sides are responsible. You can also dedicate a team to handle reports, or even just one person, a maintainer or a contributor, so that someone is reachable and people know who to reach out to. We need to talk about this more, make it part of our lives, and understand that security is something we need to take care of and respect, because none of us wants to ship an insecure package. So keep a security mindset, and keep that in mind. Thank you very much for joining our talk. If you have any questions, we're happy to answer. Thank you very much.