So, my name is Gopi Rehbala, I'm the CEO of OpsMx. Bob Bull, our VP of Product, had a personal emergency and isn't able to come today, so I'll cover the rest of it. Today we're going to talk about enforcing supply chain security and simplifying compliance. We just heard from Pratik that Argo reached SLSA level 3, so what you need to do to get to that stage, and how you simplify those steps, is the talk here today, along with deployments with Argo CD. Essentially, we'll talk about the challenges in securing your software delivery. Here we're primarily looking at deploying directly with Argo CD, not shipping your software as a package, which makes things somewhat simpler. We'll discuss the requirements for getting enforcement end to end, the framework we can think in, and the open source solutions currently available that we can put together to achieve this higher level of security end to end and simplify the compliance process. We'll also cover some of the gaps that are still there; we just heard Pratik talk about some of the security work planned for Argo CD. At OpsMx we are contributors to Argo CD and we support enterprises using it. So, the security threats have been increasing, and some things have changed: there is an executive order on this now, and with SolarWinds, the CISO was also charged with fraud. That's a big change from how it used to be. Previously, compliance was mostly about saying "hey, we are compliant" and doing the minimum to get there; now security is really taking center stage in delivery. The challenge is that the attack surfaces are increasing. Previously, it was mostly after you deployed to production that a misconfiguration gave you a greater chance of being attacked.
But now Gartner's study has shown that by 2025, 50% of organizations will be troubled by these supply chain attacks. The delivery process itself uses lots of open source components, and security for open source itself is still evolving. On top of that, within the enterprise, compliance involves manual approvals that delay things; compliance today is a lot of manual checks. So, what does the supply chain security threat model look like? It's not only the code coming in from open source into your applications; it's also the deployment manifests. Changing the service accounts you deploy with in a manifest can elevate privileges, access secrets, and then do something else with them. There are many places where attacks can happen. The MITRE ATT&CK framework goes through this list, and you can see there are about a hundred different techniques an attacker can use. So compliance security has to worry about specifying policies to stop these attacks and having them automated; automation is the hard part here. So what do we need to get this security end to end? The primary thing is a DevOps control plane: being able to get the data from the different tools doing point security checks, whether that's SAST checks, DAST checks, or the pen testing you're doing. All of that data needs to be available so you can then apply policies on top, and once the data is available, you can do automated approvals. What systems are available today to help do this is what we're going to talk about in the next few slides. Looking at the same structure architecture-wise, there are three things here. One is federated data: data collection from different systems.
Two, being able to specify the policies and collaborate between the security, dev, and ops teams in a way that can be applied. And three, enforcing those policies on the data you are synthesizing across these data platforms. That's essentially the framework. Then, what are these policies? If you look at NIST 800-53 or 800-218, which deal with supply chain security, you have over 1,000 controls, and each one becomes a policy that needs to be applied. The complexity is: do you have the data to apply it, and is it in a format that can be easily shared? Some of these can be SOX-style compliance requirements, where more than one individual is required to approve a deployment, and you have to verify those processes. Now let's see how this applies to our code delivery and what tools we can use to simplify it. We just saw the attack vectors in different places; in some ways, this is a generic software development and delivery process. When we look at our goal, there are three things going on here. One is point security checks at each level, whether it's the source code, the deployment manifest, or the Terraform that brings up the infrastructure, for example. The second is that the analysis from these individual tools, and the artifacts built by the developers, are signed, and the signed provenance data is stored in some central location. And the third is that at deployment time, you want to verify that you went through the compliance checks required by the policies set by the enterprise. Those are the three pieces. Point security checks are being done everywhere today. Signing is one of the difficult things, right? Signing issues have been around for decades now; macOS, Windows, and Linux all do it.
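To make the "more than one individual is required to approve" control concrete, here is a minimal sketch of encoding it as a policy over deployment metadata. The field names (`author`, `approvers`) are illustrative assumptions, not from any specific compliance tool.

```python
# Hypothetical sketch of a SOX-style two-person approval control
# expressed as a policy check over deployment metadata.
# Field names are assumptions for illustration.

def two_person_rule(deployment: dict) -> bool:
    """Require at least two distinct approvers, neither being the author."""
    approvers = set(deployment.get("approvers", []))
    approvers.discard(deployment.get("author"))  # authors cannot self-approve
    return len(approvers) >= 2

deploy = {"author": "alice", "approvers": ["alice", "bob", "carol"]}
print(two_person_rule(deploy))  # True: bob and carol approved
```

In practice, a check like this would run inside a policy engine fed by the federated data layer, rather than as standalone code.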
But how easy is it for the application developers and organizations using these to adopt similar processes without getting in the way of their software development? And the last piece is: how do you use that provenance data at deployment time to verify compliance? So, a quick overview of the tools you can use. For the build process and software composition, a lot of work has been done in the last couple of years. Google is the primary driver of the SLSA framework, in conjunction with Executive Order 14028, with the generation of SBOMs and provenance data, and ingesting that into a framework called GUAC; we'll look at that a little more. The second area is securing access to third party components. There is something called The Update Framework; it essentially defines what you need to do to sign components and use them in your system. And Sigstore is a collection of tools that makes it very easy to sign, store the signatures, and make sure the trace of what you have done during the process is stored immutably. But the bold ones are the places where there are still significant gaps that we need to fill; we'll talk about that a little. So what is SLSA? Supply-chain Levels for Software Artifacts. Argo reached level 3. SLSA defines different levels of maturity for your build process and how secure your system is. Level 4 is the highest level of security, with hermetic build systems; it's not necessarily followed by any organization yet. Argo CD is at level 3. What level 3 says is that builds are independent of each other: they don't reuse any cache between build one and build two, all inputs are explicitly specified, dependencies are properly captured, and the composition is analyzed.
So you take the external parameters that go into the build and the internal parameters, such as the build type, and you generate a provenance format. in-toto basically defines the provenance structure: it specifies the predicate, which captures what happened, what the inputs were, and what the outputs were; you sign the data and put the artifact in the subject. The SLSA framework has plugins built for Google Cloud Build and GitHub Actions; if you're using those, it's very simple to plug them in to generate this provenance data. If you're using Jenkins, it's a bit more work; you need some additional things to happen for that. So now you have the ability to do the build, generate the provenance data, and have a signed attestation you can use to verify at a later stage in the deployment. You have the data; then how does that data get stored, and how do you use it in the next stages? A few companies, Google, Datadog, and Chainguard, I think, built a system called Graph for Understanding Artifact Composition. They love their food, so they call it GUAC. It basically does four things. It collects data from different sources, primarily SLSA provenance, SBOMs, OSV, which is the external vulnerability data, and VEX, which is exceptions. Some vulnerabilities, although present, don't necessarily apply to your application or are never reached, so you need to be able to specify those as well; then when you're deploying, you can verify that and ignore them. So GUAC can ingest data from these different sources. It also has Cosign built in, which is a signing tool. I think they work under the OpenSSF as well. We saw Argo is using the OpenSSF Scorecard for GitHub; that's also built in as a tool chain.
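To make the in-toto structure concrete, here is a minimal sketch of building a statement with a SLSA-style provenance predicate: the artifact digest goes in the subject, and the build inputs and builder go in the predicate. The predicate fields are simplified; a real attestation would carry more detail and be wrapped in a signed envelope.

```python
# Sketch of an in-toto statement carrying SLSA-style provenance.
# The subject names the artifact by digest; the predicate records
# what happened during the build. Simplified for illustration.
import hashlib

def make_provenance_statement(artifact_bytes: bytes, artifact_name: str,
                              builder_id: str, build_inputs: list) -> dict:
    digest = hashlib.sha256(artifact_bytes).hexdigest()
    return {
        "_type": "https://in-toto.io/Statement/v1",
        "subject": [{"name": artifact_name, "digest": {"sha256": digest}}],
        "predicateType": "https://slsa.dev/provenance/v1",
        "predicate": {
            "buildDefinition": {"externalParameters": {"inputs": build_inputs}},
            "runDetails": {"builder": {"id": builder_id}},
        },
    }

stmt = make_provenance_statement(b"image-bytes", "ghcr.io/example/app",
                                 "https://github.com/actions/runner",
                                 ["src@abc123"])
print(stmt["subject"][0]["name"])  # ghcr.io/example/app
```

The signing step itself (e.g., with Cosign) is omitted here; this only shows the shape of the data that gets signed.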
So you can get the checks and the composition generated through the build with SLSA provenance ingested in, and the vulnerabilities for them continuously checked. The vulnerability data for a given composition can change: it may be medium today, but tomorrow it becomes high because something new was found, and you need to act on those changes as well. This system, pulling all the data together, provides a GraphQL interface. If you have a policy engine like OPA, you can easily query it, get the data, and feed it to the policy to verify. That's one simple way to verify your policies. Then, how do you verify at the time of deployment? You have a signed system and you've verified some policies. With shift left, we want to do checks at the time issues occur so we can easily fix them; SAST, for example, at the developer's desk, so they can fix it right there. But what we're doing at deployment time is ensuring that the chain is unbroken. At the time of deployment, you want to verify everything is compliant, everything is as expected, and the security concerns are taken care of. Kubernetes 1.26 followed this process: they published Sigstore-based provenance for all the control plane builds so you can verify them. But how do you do that for your own applications? This is still evolving in Kubernetes. What we've been doing is using a validating admission webhook to get that data, and for the deployment manifests, we check the signatures of the images as well as the manifest requirements, and then allow them to go through. So it's essentially an admission controller that checks the policy for your images and also checks the manifest requirements.
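The query-then-apply-policy flow can be sketched as follows. The GraphQL query shape and field names here are illustrative assumptions, not GUAC's exact schema, and the HTTP call to the endpoint is replaced with a mocked response so the policy part is clear.

```python
# Sketch: ask a GUAC-style GraphQL endpoint for vulnerability data
# on a package, then apply a severity-ceiling policy to the result.
# Query shape and fields are assumptions for illustration.

VULN_QUERY = """
query ($purl: String!) {
  vulnerabilities(package: $purl) { id severity }
}
"""

def policy_allows(vulns: list, max_severity: str = "MEDIUM") -> bool:
    """Block the deploy if any reported vulnerability exceeds the ceiling."""
    order = ["LOW", "MEDIUM", "HIGH", "CRITICAL"]
    ceiling = order.index(max_severity)
    return all(order.index(v["severity"]) <= ceiling for v in vulns)

# Mocked response standing in for an HTTP POST of VULN_QUERY:
response = {"data": {"vulnerabilities": [
    {"id": "GHSA-xxxx", "severity": "MEDIUM"},
    {"id": "CVE-2023-0001", "severity": "HIGH"},
]}}
print(policy_allows(response["data"]["vulnerabilities"]))  # False: HIGH exceeds MEDIUM
```

With OPA, the same ceiling rule would live in Rego rather than Python, with the GraphQL result passed in as input data.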
In some cases, in your applications, you want to ensure certain labels and annotations are present, so when you query you can identify which application a workload belongs to and, for example, which applications have vulnerabilities. All of that can be checked. There is a new system, still evolving in Kubernetes, that automatically checks Sigstore-based signatures. It can be the hosted Sigstore that's common to all open source systems, or you can run your own Sigstore in the enterprise and check against it. The one issue with this is that when we're deploying an application, we're not necessarily doing a continuous deployment of individual manifests. If you're using a templated system like Kustomize or Helm charts, and there is a problem with one of the deployment manifests, that one gets stopped but the rest go through. Now you're in a situation where you have to go undo it after the fact. So for those kinds of applications, it's always preferable to have a hook in the deployment stages that checks whether the entire package is compliant, and only then deploy. If you look at Argo, there is a project called Argo CD Interlace. It works as a plugin to Argo CD: at deployment time, it has a hook that takes the manifests Argo CD is deploying and generates the provenance for them. It essentially signs them; it records that after applying the inputs to the Helm chart, these are the rendered manifests that were generated, signs those manifests, and applies the result into the namespace itself. It doesn't use Sigstore right now. This model is really nice: it works with Argo CD as a plugin, and you don't have to apply an additional set of policies; it basically attests, "I verified the signature for this."
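The admission-time check described above, required labels plus verified image signatures, can be sketched as a pure validation function. In a real validating admission webhook, this logic would run behind an HTTPS server receiving AdmissionReview requests, and the set of verified digests would come from Sigstore verification; both are stubbed here, and the field layout follows a plain Pod manifest.

```python
# Sketch of the validation step of an admission webhook:
# require an application label and require every container image
# to be pinned to a digest that has passed signature verification.
# The verified-digest set is a stand-in for a real Sigstore check.

def validate_pod(pod: dict, signed_digests: set) -> dict:
    labels = pod.get("metadata", {}).get("labels", {})
    if "app" not in labels:
        return {"allowed": False, "reason": "missing required label: app"}
    for c in pod.get("spec", {}).get("containers", []):
        image = c.get("image", "")
        digest = image.split("@")[-1] if "@" in image else None
        if digest not in signed_digests:
            return {"allowed": False, "reason": f"unverified image: {image}"}
    return {"allowed": True, "reason": "ok"}

signed = {"sha256:abc"}
pod = {"metadata": {"labels": {"app": "web"}},
       "spec": {"containers": [{"image": "ghcr.io/example/web@sha256:abc"}]}}
print(validate_pod(pod, signed))  # {'allowed': True, 'reason': 'ok'}
```

Note the fail-closed behavior: an image referenced by tag rather than digest is rejected, since there is nothing to verify against.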
But what it doesn't do is stop the deployment if it doesn't find the signatures or finds compliance failures. Still, it's a good model for application deployment structures. So, what we've seen until now: the tools we can use to generate SLSA provenance for the build and its specification; how we can pull in the software bill of materials and composition data, combine it with known vulnerability information from open source, and provide that data to query and apply to the policies; and at deployment time, the admission controller for Kubernetes that can be used to stop deployments when the security compliance isn't there. This model is still evolving; it's a project that I think is still growing. But we can use it to make sure we don't even deploy if compliance fails; it's not a failure discovered after the deploy. Now, the question is: how easy is it to tie all this together? Across the different pieces, a lot of effort is going into making them easy to use. It's not as easy as Argo CD, but they're getting there. It will still take some effort, but you can put these things together much more simply than, say, two years ago. You can have Sigstore with a trace of all the builds, and verification in Kubernetes that works with Sigstore directly to verify the provenance of the images. One thing that's really missing here is the process gaps. The process we've seen verifies the images being built. But what about requirements like: in your organization, you have to have pen testing done in staging before you go to production? How do you verify that compliance? How do you verify the provenance of your build and delivery systems themselves?
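The "check the entire package, then deploy all-or-nothing" idea from earlier, the behavior Interlace doesn't enforce today, can be sketched as a pre-sync hook. The `verify` callable stands in for a real signature or provenance check; the manifest shape is an assumption for illustration.

```python
# Sketch of a fail-closed pre-deploy hook: verify every rendered
# manifest up front and apply nothing if any one fails, instead of
# letting an admission controller reject a single manifest mid-rollout.

def deploy_package(manifests: list, verify) -> list:
    """Return the manifests to apply, or raise if any fail verification."""
    failures = [m["name"] for m in manifests if not verify(m)]
    if failures:
        # Fail closed: the whole package is held back.
        raise RuntimeError(f"compliance failed for: {failures}")
    return [m["name"] for m in manifests]

is_signed = lambda m: m.get("signed", False)  # stand-in verifier
pkg = [{"name": "deploy.yaml", "signed": True},
       {"name": "svc.yaml", "signed": True}]
print(deploy_package(pkg, is_signed))  # ['deploy.yaml', 'svc.yaml']
```

In Argo CD terms, this would sit in a PreSync hook or a config management plugin, so the rollback-after-the-fact problem described above never arises.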
When we looked at the threats, we saw that some of them compromise your build system or your delivery system itself, which can introduce malicious manifests or even artifacts. Those are some of the things that are still missing, and there are no standards right now that let us address them. There are some things like CloudEvents, and CDEvents, which are based on CloudEvents, that provide some standardization across these tools, letting us know when something has happened and even take some action on it. That's something we'd like to see grow. As an organization, we are working on making it simpler to collect the data and expose it as a virtual data layer to apply the policies on; at this point, we're focusing more on the virtual data layer for standardization than on the tools themselves. So hopefully that gave you a quick overview of how you can put together a compliant system with the current open source tools, particularly for Argo CD. It's definitely simpler with Argo CD than if you're packaging and releasing your software, say, jar files, directly. We'd like to hear from you and get your feedback on what you think of these processes and how difficult they are. You can come see us at booth P14; we'll be set up by the end of the day today. Thank you. Thank you.