Thank you, good afternoon, and welcome. I'm really honored to be out here presenting after two years of not presenting, so welcome to everyone here in person and everyone joining us online. It's a real privilege, since it's my first time at KubeCon and SecurityCon. Just out of curiosity, with a show of hands, how many of you are new to the conference? All right, it's great to meet everyone, and hopefully I'll get a chance to talk with you today, tomorrow, and through the week. Today's session is on five reasons to invest in software supply chain security. We'll cover some myths, as well as a couple of things that come up in the context of the supply chain that we want to address and put into the right context as we look at investing in this area. A little bit about me: I got my start in Wall Street IT, moved to the West Coast, and joined Microsoft, working primarily on systems management and virtualization management. I got to work with some standards organizations around storage management, primarily DMTF and SNIA, and I got a chance to ship some cloud on-premises products as well. More recently I joined the Strategic Missions and Technologies group at Microsoft, where we focus on supply chain and supply chain security. At home, I currently find myself keeping my senior dogs from getting annoyed by the new puppy that's been introduced, so that's always fun. With all the talks this week on SBOMs and the supply chain, and with what we hear in the news about the ongoing, relentless attacks hitting pretty much every industry, it's good to put in context exactly what there is to gain from the cybercrime that's out there.
In terms of cyber damage costs, we're looking at $6 trillion in 2021, potentially exceeding $10.5 trillion by 2025. We also see investments in cybersecurity increasing substantially, from $3.5 billion in 2014, going cumulatively to about a trillion dollars between 2017 and 2025. I put these side by side just to compare the growth in both the investment and the damage, not so much as a statement on how much should be invested to match the actual damage out there, but we do know that more is needed, and faster. So let's put the supply chain in context. To me, the supply chain covers three areas: software development, software deployment, and software runtime. Starting with the software development side, the supply chain and everything that goes into it has to account not only for where the code is being developed, but where it's being checked in, who's checking it in, whether these are verifiable sources, and whether the repos are healthy. There's a lot that goes into making sure the code is authentic, and into the integrity of the code itself, and that spans the software and the underlying hardware of the development environment. When we move forward to the deployment phase, we have to make sure we know that what we're getting is healthy, and that we can verify the sources, both of the repositories and of the issuers of whatever claims are made; that's where we look at the SBOMs being generated. We want to make sure those SBOMs are generated at check-in time, not after the build is complete. And we want to know that we're using hardened deployment processes, because especially when you're deploying at the edge or in a sovereign environment, you may not have direct control over the infrastructure being deployed.
So you want to make sure your processes are hardened, both on the software side and in the infrastructure deployment. Then, once you're running that environment and that software, you're going to have drift, right? Unless you're investing heavily in automation and tooling that can guarantee drift doesn't happen, you want monitoring and observability. Do you have hardened package management tools that will get you the right updates, the secure updates? If your infrastructure and operations teams are doing break-fix on that infrastructure, are they producing signed scripts? That's something we learned deploying cloud on-premises: how do you seal the box so that everything our SREs do on those systems comes from a trusted source, in this case Microsoft, while the customer also gets the guarantee that whatever is put on their systems to initiate a break-fix is verified? So while we talk a lot about the software development side, the supply chain spans all the way into runtime through updates, and eventually into decommissioning. We put a lot of work, especially at the hyperscale cloud service providers, into the secure destruction of data to ensure that nothing leaks out of the data center. OK, so let's get to the first myth: once you create your SBOM, you're pretty much done, right? Well, the first question is, what are you describing? We talk a lot about describing the dependencies, the binaries, the container images. You can start there, but if you start peeling that onion, you're going to start asking more questions. Does it go beyond the entire package? At what granularity? What are your upstream dependencies? What are you expecting to see in that metadata?
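To make those questions concrete, here is a minimal sketch in Python of the kind of component metadata an SBOM can carry, shaped loosely after CycloneDX. The package name, version, and artifact bytes are made up for illustration; real SBOMs are produced by build tooling, ideally at check-in time as mentioned above.

```python
import hashlib
import json

def make_component(name: str, version: str, content: bytes) -> dict:
    """Describe one dependency: name, version, and a hash of its bytes."""
    digest = hashlib.sha256(content).hexdigest()
    return {
        "type": "library",
        "name": name,
        "version": version,
        "hashes": [{"alg": "SHA-256", "content": digest}],
    }

# Hypothetical package; the bytes stand in for the real artifact.
sbom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.4",
    "components": [make_component("example-lib", "1.2.3", b"artifact bytes")],
}
print(json.dumps(sbom, indent=2))
```

Even this tiny example raises the granularity questions: is one entry per package enough, or do you hash every file? And nothing here tells you who produced the document, which is where signing comes in.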
Do you have any kind of vulnerability information captured inside? Are your CVE scan results in there too? And there's a bunch of tooling that comes along with that, not only for SBOMs and their formats, whether that's CycloneDX, SPDX, or GitBOM, which Aeva Black covered earlier today, but then you get into the signing of those statements and what it actually means to be a verifiable statement, and then the automation: frameworks like in-toto and TUF, with TUF giving you secure updates, and eventually the observability side. How do you get to the multiple levels of a secure environment and show that you've gone through each of those steps and met those requirements? That's where something like SLSA comes in. I like to start with the fact that SBOMs are typically compared to an ingredients list, like something you buy in a store or see on a menu, but I want to pull that thread a little further with an example from a show called Portlandia. In basically two minutes, a couple entering a restaurant asks about the chicken that's on the menu, and they learn a lot of great things: the chicken's name is Colin, its breed, what it was fed. But then they start pulling the thread further. Was it happy? Did it have friends? Is the information being provided actually real? Is it organic, and does it meet whatever the requirements of organic are? What kind of organic? And eventually: where did it grow up, and who owns that farm? The person providing this information, the service provider in that case, is not an authoritative source. Whatever they're saying could be true, but you can't verify it; you don't know if it's authentic. They're just trying to do the best they can to the best of their abilities.
In the same way, if you look at lettuce, you've got DNA markers embedded in the lettuce itself, so that if you need to trace back a parasite or bacterial outbreak, you know what farm that lettuce came from. I like those kinds of examples because they get us thinking about what else we could include as part of that SBOM and what other artifacts we want alongside it. When I say artifacts in this discussion, I'm going beyond the SBOM document itself to any kind of measurements being done on the code, any logs that need to be added, or even a manual attestation of some SDL process that isn't automated but that you want captured and signed by someone within the organization. Then we get to the second half of that myth: once you create the SBOM and you've got your software captured, you're pretty much done. In fact, what we're seeing, and some of the investments we're making, is that we want to extend the bill-of-materials concept, and that artifact gathering and verification concept, beyond the software into the underlying hardware. Take the chip manufacturing industry: highly automated, with huge investments in fabrication, and they've got all of this nailed down, but they're really siloed because of the IP they carry within those fabs. So you really can't make any determination about the chips themselves; you just have to trust that it's being done right. And what we've seen recently with NVIDIA is that anyone is susceptible to these kinds of threats. If you look at system integration, once those chips leave the fabs, they get sent to ODMs and eventually get put into systems by a system integrator. That's another level of bill of materials.
If you're in procurement, you know that any time you order a server, you're giving that bill of materials to your hardware vendor, and you want to make sure that what you get back is actually that server and that it hasn't been tampered with, especially if it's going into a secure environment or the federal space. So there are two sides to the myth that one SBOM and you're done: how far you peel the onion on the fractal of dependencies, as Aeva put it earlier today, and how deep into the stack you go beyond just describing the software. The next question is: you've got the SBOM, so it can be trusted, right? This one is really simple. SBOMs can be modified, and for the metadata contained there, there's really no way of proving who generated it, whether it's an authentic piece of documentation, or even whether it's verifiable. This is where, if you pull on the Colin example, a farmer could say, "All right, Colin grew up on Contoso Farms, and I'm signing that statement." That's a statement I'm making about this particular product. So you get into attestations: the artifact itself is the document or the log or something else, but now you're making a statement about that particular artifact, and you want to make sure it's signed and that it's verifiable. You put it in an envelope, and then you've got bundles of attestations that describe your microservice, your service, your product, whatever you're trying to describe in your environment. OK, so this gets us to whether it can be trusted, and also to integrity. SCITT, for Supply Chain Integrity, Transparency, and Trust, is a framework and an architecture being worked on in the open community, where we're looking at how to enhance the security of the ledgers themselves.
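The statement-plus-signature idea can be sketched in a few lines of Python. To stay self-contained, this uses an HMAC with a shared secret as a stand-in for the asymmetric signatures (COSE or DSSE envelopes) a real attestation system would use, and the issuer, key, and artifact are all hypothetical.

```python
import hashlib
import hmac
import json

SECRET = b"issuer-signing-key"  # stand-in for the issuer's private key

def sign_statement(issuer: str, artifact: bytes, predicate: dict) -> dict:
    """Wrap a statement about an artifact in a signed envelope."""
    statement = {
        "issuer": issuer,
        "artifactDigest": hashlib.sha256(artifact).hexdigest(),
        "predicate": predicate,
    }
    payload = json.dumps(statement, sort_keys=True)
    signature = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "signature": signature}

def verify_statement(envelope: dict) -> bool:
    """Recompute the signature over the payload and compare."""
    expected = hmac.new(SECRET, envelope["payload"].encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, envelope["signature"])

env = sign_statement("contoso-farms", b"colin-the-chicken",
                     {"raisedOn": "Contoso Farms"})
assert verify_statement(env)

# Tampering with the payload is detected: the signature no longer matches.
tampered = dict(env, payload=env["payload"].replace("Contoso", "Fabrikam"))
assert not verify_statement(tampered)
```

The point of the envelope is exactly this split: the payload is what is being claimed, and the signature binds it to an identifiable issuer so the claim can be checked rather than just trusted.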
So these are the ledgers you've heard about today: append-only, tamper-evident, built on Merkle trees, all goodness. What we've done with SCITT is take that a step further when it comes to encryption, with three guarantees: encryption at rest, encryption in transit, and encryption in use. And this is where we get to the conversation about secure enclaves. On the SCITT side, the service itself makes essentially three guarantees. The first is that statements are issued by someone who is identifiable and authentic, and that ties you back to them: you can't say, "Yeah, I didn't say that." It's non-repudiable. The next is that those statements get registered on an immutable ledger that's secure from top to bottom. And the last is that issuers can prove to any other party that those claims actually exist in that ledger, with a receipt. Once you have the combination of the receipt and the claim, you know it was recorded in the ledger, and you can trust what's being put out there. Now, this architecture is built on confidential computing. In Azure, the confidential computing VMs running Intel SGX offer that secure enclave, a technology that gives you guarantees at the chip level that the memory space, the data, is protected while in use. So there's no chance of tampering while the data is being written to the ledger itself. OK, so let's quickly walk through the architecture with a simple workflow. First, we start with who: the decentralized identifier, or DID. Once we have that identity captured, the next step is to generate our artifacts: SBOMs, logs, whatever it is.
So once I have that artifact, the next thing I want to do is generate a statement: this is what I'm saying about this artifact, which I'm eventually going to wrap into a claim. I get my endorsement based on the DID, and I can use that to generate the claim. The claim itself is wrapped in a COSE envelope, which includes the protected headers, the payload, which is the statement, and the signature that comes from your DID. With that claim in hand, I can now record it in a ledger. The SCITT architecture provides a transparency service, and the transparency service is what records the claim in the ledger. Once it's recorded, what you get back is a countersignature that we call a receipt. With the original claim in hand plus a receipt, you now have a transparent claim. The beauty of this part of the system is that you can use that transparent claim to verify all the information that was put into the ledger, because you have a receipt proving the data was actually written to the service. The transparency service guarantees that a receipt is not issued until the data is written, so if you have a receipt, the data has been written. Whether it's you verifying in real time, or an auditor who needs to run through all the verification for an investigation or something else, they can use those transparent claims, a claim plus its receipt, against the ledger to walk through everything they need to verify. OK, so the next myth: you've generated the SBOM, so you're done. We saw a lot of this as we were going through the executive order and making sure that we met all the requirements there.
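The workflow above, register a claim, get a receipt back, verify the transparent claim later, can be sketched as a toy transparency service. A real SCITT receipt is a countersigned Merkle inclusion proof; this sketch reduces it to an index plus a chained log hash to keep the shape visible, and the DIDs and claims are made up.

```python
import hashlib
import json

class TransparencyService:
    """Toy append-only log: register a claim, get a receipt back."""

    def __init__(self):
        self.entries = []    # the append-only ledger
        self.log_hash = b""  # running hash chained over all entries

    def register(self, claim: dict) -> dict:
        entry = json.dumps(claim, sort_keys=True).encode()
        self.entries.append(entry)
        self.log_hash = hashlib.sha256(self.log_hash + entry).digest()
        # The receipt is only issued after the entry is written.
        return {"index": len(self.entries) - 1,
                "log_hash": self.log_hash.hex()}

    def verify(self, claim: dict, receipt: dict) -> bool:
        """Check a transparent claim (claim + receipt) against the ledger."""
        entry = json.dumps(claim, sort_keys=True).encode()
        if receipt["index"] >= len(self.entries):
            return False
        if self.entries[receipt["index"]] != entry:
            return False
        # Recompute the chained hash up to the receipt's position.
        digest = b""
        for e in self.entries[: receipt["index"] + 1]:
            digest = hashlib.sha256(digest + e).digest()
        return digest.hex() == receipt["log_hash"]

svc = TransparencyService()
claim = {"issuer": "did:example:contoso", "statement": "sbom sha256:abc123"}
receipt = svc.register(claim)
transparent_claim = (claim, receipt)  # claim + receipt = transparent claim
assert svc.verify(*transparent_claim)
assert not svc.verify({"issuer": "did:example:other"}, receipt)
```

The key property mirrored here is ordering: the receipt is handed out only after the write, so holding a receipt is evidence the claim is in the ledger, and anyone with the claim and receipt can re-check it later.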
You know, a lot of the folks actually doing this work don't know all the details behind the guarantees you want to make, whether it's the SDL guarantees, the SBOM, or everything else. Just saying "Yeah, I did it, checkbox, done," we know that's not enough. Being compliant is not the same as being secure. A simple example is what happened to Target in 2013: PCI DSS compliant, yet they still got hacked a few weeks later, even though they had earned the certificate that said they were compliant. This is where we get to a way of looking at the executive order that has multiple parts contributing to it. First, you're looking at the Secure Software Development Framework, the SSDF. From there, the supply chain risk management guidance and the zero trust architecture give you a way of mapping from the EO, through the requirements and recommendations, into actual practices and tasks that can be done to meet those requirements. And the beauty of this system is that it's really not meant to be fire-and-forget or one-and-done. You do it, and you want it to be continuous. You want to keep building those practices so they become inherent in what you do day to day. You shift left in your test environment and start these security practices from the beginning. And the last myth: with willpower alone, you can push through those tough days. I'm going to get a little more personal here, but between the long hours, keeping up with the increased threats coming into your environment, a widening skill gap, the cancellation of events, the pandemic, and the relentless security reviews and audits and everything else, we know there's eventually going to be burnout, right?
So part of the answer is asking for help, and that requires a certain level of vulnerability, but once you have that network of support around you, it can help you through those tough days. Setting healthy boundaries at work matters too, and sometimes that's not easy. A recent survey measured that about 47% of security admins and operations folks are working over 40 hours a week, some up into the 90-hour range. And with the cyber damages and the incentives these attackers have to do damage to an environment, that threat is just going to continue to increase. Also, understand what you need to do to recover, whether that's micro-doses of recovery or, obviously, sleep. With that, I've got a few resources added into the deck itself that you can check out offline. And that is it. Thank you for your time and your attention, and I'm open to any questions if you have them. Any questions? [Audience member] I have one question: how does this align with Sigstore? The framework that you mentioned, the architecture, do you see any synergies there? [Speaker] As of right now, they're separate, but it's something we need to look at and investigate, because there are some similarities when it comes to having a ledger and ensuring you've got your root of trust. But as of right now, those two are separate. Any other questions? OK, thank you very much, everyone. Appreciate it.