Thank you, Aditya and Joshua. My name is Brandon Lum, and I'm part of the Google Open Source Security Team. It's an honor to be here serving alongside Psy as the event chair for KubeDay Singapore. One of our team's missions is to secure open source at scale, and that covers things like vulnerabilities and security incidents, which is what my talk today is about: Keep Calm and Keep Coding, or how to not panic during big CVE drops.

So let's talk vulnerabilities. I know you've heard about Log4Shell ten thousand times over the past few years, but it gets repeated and it's cliché for a reason: it drives the point home. For those who aren't familiar, Log4Shell was a vulnerability in Log4j, a popular Java logging library. We see thousands of vulnerabilities come by every year, but what made this one particularly special is that not only was it high impact and high severity, but because Log4j was so pervasively used, the likelihood that you were affected and could be exploited was also fairly high. When Log4Shell dropped, we saw a lot of panic. Everyone scrambled, Google included: how do I fix this? Am I affected? What do I do next? And we found that the amount of panic varied from organization to organization, and a big factor was knowing what you have, having a good inventory. That's what we'll be talking about today.

Around the time Log4Shell happened, the United States White House issued Executive Order 14028. The executive order covered multiple topics: how do you secure a supply chain, how do you do zero trust, and so on. But one particular concept really rallied the industry, and that was SBOMs. Because vulnerabilities were the main pain point, SBOMs were touted as the way we were going to solve vulnerabilities like this. So what is an SBOM? SBOM stands for software bill of materials. Essentially, you can think of it like an ingredients list: if you go to the supermarket and pick up a jar of peanut butter, you can see what's inside. That's what an SBOM is for software. But SBOMs are only a start. Like with food, knowing what I eat is great, but as my doctor would say, it doesn't help if I don't have a diet; I need to know what I can and can't eat. Likewise in software, we need to be able to use that information to manage our software risk.

So let's do that with an example: curl. I think most developers have used curl at least once or twice in their lifetime. Back in October, a particularly interesting incident happened. On October 4th, the curl developers opened a GitHub issue saying, hey everyone, there's a high severity vulnerability in curl, more details and a patch are coming next week, have fun. They didn't release many details, and although the issue was edited a few times, it wasn't enough to really act on. So we're in a similar situation: a popular tool, two years after Log4Shell. Can we do better?
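To make the ingredients-list analogy concrete, here's a heavily trimmed sketch of what an SBOM can look like in the SPDX 2.3 JSON format. The document name and the package entry are illustrative placeholders, not from a real build:

```json
{
  "spdxVersion": "SPDX-2.3",
  "dataLicense": "CC0-1.0",
  "SPDXID": "SPDXRef-DOCUMENT",
  "name": "example-app-sbom",
  "packages": [
    {
      "SPDXID": "SPDXRef-Package-curl",
      "name": "curl",
      "versionInfo": "7.88.1",
      "externalRefs": [
        {
          "referenceCategory": "PACKAGE-MANAGER",
          "referenceType": "purl",
          "referenceLocator": "pkg:deb/debian/curl@7.88.1"
        }
      ]
    }
  ],
  "relationships": [
    {
      "spdxElementId": "SPDXRef-DOCUMENT",
      "relatedSpdxElement": "SPDXRef-Package-curl",
      "relationshipType": "DESCRIBES"
    }
  ]
}
```

The "ingredients" are the package entries; the purl (package URL) identifiers are what let tooling match them against vulnerability advisories later.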
And I want to frame this in terms of the Netflix show Seven Days Out, if folks have watched it. It's about a big event, like a NASA mission, seven days out: what do you do to prepare before the day actually comes? We're going to do the curl CVE version of that, a smaller version. The idea is that we want an action plan, so that when the CVE details drop and the patch comes, we are immediately able to take action to remediate the situation.

Step one is know what you have. I don't want to get too philosophical, but we gathered feedback and experience from the EO, and what we found was that most organizations did not really know what they had. For most organizations that implemented, or tried to implement, SBOMs, they actually got more value from the process of implementing SBOMs than from the SBOMs themselves. So what in particular was difficult? We learned a bunch of things. If you have a large organization, you have multiple programming languages; Aditya and Joshua, just previously, showed different teams with different requirements. Multiple languages and frameworks are great for development, it's something we touted for microservices: do what you want, use any language you want. The downside is that you need observability for all of them, and that means supporting every different platform and ecosystem. The other thing we saw was that teams have different workflows: they develop in different ways, use different CI/CD pipelines, different builders, different registries. What you end up with is a bunch of distributed flows that are hard to get observability on. And finally, the hardest part, what we call the long tail of software: the teams with a special niche use, a technology that doesn't fit into what the rest of the organization does. That makes it very difficult to manage and observe.

Like I said, the difficulty of this exercise will vary depending on the organization and how complex it is. I could go into exactly how Google does this, but that would be a whole talk on its own, so I won't. But I will offer some ideas and questions to help guide the process. One: go through the development flows, talk to the different teams, and figure out what stacks they're using. This is one of the areas where less is more: having fewer stacks and fewer ecosystems lets you do integration at a lower cost, and having organizational policy helps you get there, for example using fewer builders or only approved CI/CD pipelines. The second part is figuring out where the endpoints are, not only for the software you produce but also for the software you consume. You consume third-party libraries, a lot of open source software, a lot of third-party SaaS. And this is essentially what the US government has asked: if you're selling to a federal agency, please provide an SBOM.
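Since the talk doesn't prescribe a specific generator, here's a minimal sketch of what that inventory sweep could look like, assuming an open-source SBOM generator like syft is installed. The registry and image names are placeholders for your own endpoints:

```python
import subprocess

# Hypothetical inventory sweep: generate an SPDX SBOM for each container
# image a team ships, by shelling out to the syft SBOM generator.
# Image names below are placeholders, not real registries.
images = [
    "registry.example.com/team-a/api:v1.2",
    "registry.example.com/team-b/worker:v3.0",
]
for image in images:
    out_path = image.replace("/", "_").replace(":", "_") + ".spdx.json"
    result = subprocess.run(
        ["syft", image, "-o", "spdx-json"],  # emit SPDX JSON to stdout
        check=True, capture_output=True, text=True,
    )
    with open(out_path, "w") as f:
        f.write(result.stdout)
```

Each generated document then becomes an input to the aggregation step that comes next; the hard part is making sure a sweep like this actually covers every team's flow, not writing the loop.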
And in order to manage your own inventory, you also have to hold the software providers you use accountable to the same standard. The last one: go look at all your assets, go look at all your services. These are the places where you find the skeletons in the closet, where all the bodies are buried: a lot of deprecated and unmanaged code.

Great. So now the easy part: let's generate the SBOMs. Run your favorite tool, spit out some documents. So now what do we do with these SBOMs? One of our favorite tools is grep; we can grep through everything. But as you can see here, when I grepped for curl across a bunch of SBOMs, I got a lot of information that wasn't relevant: file names, license names, things like that. It becomes almost like finding a needle in a haystack. Grep is great, but it's limited. So for the rest of the talk I'm going to use a tool called GUAC, which is a Linux Foundation OpenSSF incubating project. GUAC stands for Graph for Understanding Artifact Composition. It takes a collection of SBOMs, creates a knowledge graph, and then you can ask it questions about that graph. It organizes the SBOMs so you can get some value out of them.

So, step two: am I affected? You can do a query to the API here: find me anything that contains curl. In this case, we see that curl is very prevalent software; there are 27 different instances of it, Debian packages as well as libraries across different ecosystems. So am I affected? Yes. Step three: tell me where I'm affected. With the CLI we can say, for this Debian package of curl, tell me where it's used across my entire organization. The output here is a graph, and it's a little small, but the bottom right-hand corner shows all the versions of the Debian curl package we own, and the far top left-hand corner shows which container images depend on them. Graphs are nice, but the output should be actionable, and it is. It tells us: here's the first set of things you have to patch, all the Debian packages, and here's what you have to patch next, the container images. Because if the vulnerability is in a base image, patching the images built on top doesn't get you much; you can only patch an image that uses the base image once the base image itself is patched. The output also includes points of contact, information that's useful for finding out who the product owners are.

So finally, get a patch plan ready, and this is largely organization-specific. With all the information we have, first identify the product owners, get them to understand what the issue is and where it resides in their code, then help them understand the risk and how to evaluate and manage it. Another thing to do: once the images and libraries are patched, we still need to make sure the fixes propagate to the runtime, so find where these things are running and make sure they get redeployed with the latest patched build.
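As an illustration of the "am I affected" query, here's a rough sketch of hitting GUAC's GraphQL endpoint, which the demo deployments in the GUAC docs serve at http://localhost:8080/query. I'm paraphrasing the schema from memory, so treat the query shape and field names as assumptions and check the GUAC documentation for your version:

```python
import json
import urllib.request

# Ask a running GUAC instance for every package named "curl" across all
# ingested ecosystems. Endpoint and schema fields are assumptions based
# on the GUAC demo setup; verify against your deployment.
query = """
query {
  packages(pkgSpec: {name: "curl"}) {
    type
    namespaces { namespace names { name } }
  }
}
"""
req = urllib.request.Request(
    "http://localhost:8080/query",
    data=json.dumps({"query": query}).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.dumps(json.load(resp), indent=2))
```

GUAC's guacone CLI exposes similar queries, and the patch-order graph shown in the talk comes from that side of the tooling; the exact subcommands and flags available depend on the GUAC release you're running.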
And of course, there will be instances where patching is not possible, in which case you have to talk it through and manage your risk appropriately. Awesome. So after doing all this, one week later the CVE drops, but we have some semblance of a plan: we already know what will be affected and which owners we have to talk to. And given that, we don't have to panic on the day itself and try to track down ten different people. So October 11 comes, and we're ready.

Before I end, I'd just like to mention a few things. A lot of what I talked about, supply chain security and security concepts in general, is something the CNCF TAG Security, the Technical Advisory Group for security, works on; we have a whole working group just on supply chain security, and a lot of conversations happen there, so I do encourage folks to check it out. Some of the material I've used is also from my Manning book on securing the software supply chain, so check that out as well. In conclusion, preparation is half the battle: taking the appropriate steps, managing inventory, and figuring out a patch plan and an action plan is the first step to getting better sleep at night as a security operator. Thank you.