We saw the previous example, the one from PyPI. Generally speaking, I would fall for this trick myself: if I get a lead from Google or Stack Overflow and someone recommends a package, I'll go ahead and install it. Bad stuff happens once you install a malicious package: credentials are sent, and it's game over. From an attacker's point of view, once a victim installs this package, say with pip install supply-chain-demo, and you can get it installed in many ways, this is what the attacker receives: SSH keys, environment variables. It's that simple, because we as developers store these credentials on our machines. You know, we enable two-factor authentication on GitHub, but in the package metadata we can state whatever we want as the related Git repository, and no one checks whether it's actually true. It's hard to know who's telling the truth.

Accounts are taken over; we saw a couple of examples. Not all maintainers enable two-factor authentication. Some developers register their accounts under custom domain names and forget to renew them, so attackers re-register the domains and recover the account credentials.

We have mechanisms to auto-update our dependencies by design: we want to be on the latest security updates. So you might ask for a specific version, or for something close to that version, and you will get the latest, depending on the semantic versioning convention you use (see the short manifest example below). We saw attackers take advantage of this in the ua-parser-js incident. The attacker got hold of Faisal's credentials and simultaneously published three malicious versions to cover the different version ranges. We don't have a right answer here, but we have a trade-off: either we do slow updates and are more exposed to vulnerabilities, or we do rapid updates and, unfortunately, are more exposed to supply chain attacks.

One more reason this stuff happens is that maintainers get busy over time. This is a message I received last year. I maintain a couple of open source projects, and one of them was a Flutter library. I played around with Flutter a couple of years ago and created an open source package, and then I didn't have enough time to maintain it. I moved on to other things: I had my first child, I started a startup company, a lot was going on. And someone sent me this message: I see you're not maintaining this project. What do you say, you have a pile of GitHub issues and some pull requests, would you like me to help? You could give me contributor permissions. Guess what I said: yes, take it away. If you want to help, I would love that, because I want this project to help other people. And so far, so good; nothing bad happened, but I might end up as part of the statistics. This is the project, by the way.

At some point we're handing over our projects, adding all kinds of strangers to them. This is open source, and we have a trust paradox: CISOs apply zero trust across all kinds of parts of the organization, while we blindly trust strangers, who might be attackers, and let them right into our sensitive data centers and development machines. This is a paradox without a solution.

It takes a lot of time to detect malicious packages in open source. Research finds that, on average, it takes over 200 days. If we look at the attack life cycle: attackers launch their attack, at some point defenders detect the malicious activity, and then the package registry has the permission to remove it. The first interval is the MTTD, mean time to detect; the second is the MTTR, mean time to remove. We saw the MTTD is around 200 days, while the MTTR might take a day, might take a couple of hours, might take a month. In order to improve it, we need better transparency. We need to share more information, and that will reduce the MTTR.
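To make the version-range point concrete, here is a minimal sketch of an npm manifest; the exact range is only illustrative, and the lockfile and update tooling you use affect what actually gets installed.

```json
{
  "dependencies": {
    "ua-parser-js": "^0.7.28"
  }
}
```

With a caret range like this, a fresh npm install, or an automated dependency update, resolves to the newest compatible 0.7.x release, which is how a newly published malicious patch version, like the hijacked releases in the ua-parser-js incident, can reach projects without anyone touching the manifest. Pinning exact versions slows that down, which is exactly the trade-off described above.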
We have many moving parts across the supply chain. A lot of us love to customize our development machines, installing linters, plugins, all kinds of tools; I love to do that. And we talked a lot about dependencies, but we have plenty of other threats across the supply chain. Just to give you simple examples: someone might install a malicious IDE plugin in Visual Studio Code, or a GitHub app; it might be fine today, but someone could hijack it in some way and push malware through it. Package registries, even private ones, might cache malicious dependencies. And I want to put a spotlight on this part: malicious build servers and CI flow plugins.

We all know and love GitHub Actions. For those of you who are not aware of it, when you host your code on GitHub you get built-in functionality to run whatever workflows you want to package, build, and test your software. You can write your own steps, or you can use steps from the marketplace. GitHub has a rich marketplace where you can find a GitHub Action for every purpose, for example sending a Slack message. This is something you don't need to develop by yourself: all you need to do is add the step, provide it your credentials, and you get whatever messages you want. This is what a GitHub Actions YAML file looks like: you have a list of steps, this is one step, this is the step that comes after it, and these lines reference marketplace GitHub Actions, this one as well.

What I want to show you right now is how, using a technique called repojacking, attackers can hijack GitHub Actions pipelines. When you add a GitHub Action as a dependency, you take the action's code straight from the source, from the GitHub repository. This is the user account on GitHub, this is the repository, and this is the release tag, and they match. There is no middleman: no separate package registries like npm or PyPI, no CDNs. It's consumed straight from the GitHub repository.

Take this setup: we have a GitHub Action that goes under the name npm-publish, and the user account, to simplify the example, is called user. We have a corporation using this GitHub Action as part of its pipelines. What this action does is publish a package to npm. In order to do that, the project needs to provide its npm credentials, and all of the code, all of the packaging work, happens inside the GitHub Action; nothing needs to be implemented by the project.

Now, when users rename their account, GitHub sets up an automatic redirect, so functionality continues to work and dependents are unaware of the change. So users rename their accounts, and their dependents keep working. When attackers understand this, and we have started to see it taken advantage of in the wild, they can rename their own account to the name the original user gave up. If user is now free on the market, the attacker can claim it. When this happens, attackers can also publish their GitHub Action on the GitHub Marketplace, which gives them full control over the CI/CD flow at this specific step. We found this flaw in GitHub in late 2021 and worked together with them to fix it. Right before it was fixed, we saw someone take advantage of it, but luckily it was resolved very fast.
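To make the repojacking scenario concrete, here is a minimal sketch of a workflow that consumes a marketplace action by account, repository, and tag. The workflow and the user/npm-publish action are hypothetical, made up for this illustration; only the referencing mechanism matters.

```yaml
# .github/workflows/release.yml -- illustrative sketch; "user/npm-publish" is a hypothetical action
name: release
on:
  push:
    tags: ['v*']

jobs:
  publish:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # This step is fetched straight from github.com/user/npm-publish at tag v1.
      # If "user" renames their account and an attacker re-registers the freed name,
      # the same reference now resolves to the attacker's code, which receives NPM_TOKEN.
      - uses: user/npm-publish@v1
        with:
          token: ${{ secrets.NPM_TOKEN }}
```

A common hardening step is to pin third-party actions to a full commit SHA instead of a mutable tag, so that even if the account name changes hands, the workflow fails rather than silently pulling different code.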
So the impact of malicious CI plugins: they can steal your private code, they can modify your code and change its behavior, and they can steal your credentials, leading to account takeover. Even if you enable two-factor authentication, your credentials are sitting right there, which is why it's that easy. I'm running out of time, so I'll skip a couple of slides.

What do we do? A couple of important messages. These are different terms: vulnerable code is not malicious code. We can live with a couple of vulnerabilities as part of risk management, but it's never OK to have malicious dependencies; you need to remove them as soon as possible. As an ecosystem, we need to share more information. We need the industry to be able to audit what's being reported: findings of malicious packages, samples of malicious packages. As of today, they are removed and deleted, so defenders can't learn from attackers' actions. We need better standards. We don't have universal IDs for malicious findings: we track vulnerabilities using CVEs, but we don't have an equivalent yet for malicious packages. We're in this together. What package registries can do is verify the information a developer, or an attacker, states and that they then display to developers. We report a lot of findings, thousands a month, and we do that manually: going to a form, clicking next, submit. If this could be exposed to defenders via an API, it would be very, very helpful. And when package registries remove malicious packages, we would appreciate, even within a closed group, having access to the quarantined samples.

And this is the summary; I'm running out of time. Important messages: it's our responsibility, it's not someone else's responsibility to find this. We're all in this together. Please don't take code from strangers without verifying it. And I have some cool demos: if you want to experience the malicious package lab in VR, come to our booth at the expo. You can see for yourself how easy it is to publish a malicious package from an attacker's point of view. Don't worry, we will not cause damage to any innocent developers, but it's a super cool demo. Check it out. Thank you very much. Feel free to ask questions.

Yes. Yes, we are active on the OpenSSF calls. What's your name? Jonathan asked, for those of you watching remotely, whether we're participating in OpenSSF's working groups. Yes, we join the calls; they're very helpful. And actually, I'm glad to be meeting a lot of great people at the summit, so hopefully some of the things we need as an ecosystem to create a better world for developers will happen face to face.

Yeah, let me go back. Well, what's your name? So, the question is about the IDs I mentioned, the ones I compared to CVEs. What I mean is that when we report and classify vulnerabilities, we get a new universal CVE ID, but we don't have this for malicious packages. So what ends up happening is that every vendor makes up its own identifier: you can see, for the same incident, this is the identifier at Checkmarx, this is Snyk's, this is Sonatype's, and this is GitHub's. We need a universal ID to track these, and we would love to give our opinion. I'll talk with you after if there are more questions. Cool? Thank you, guys.