Okay. Hello, everyone. My name is Ripal Nargava. I'm a Senior Technical Staff Member at IBM Research, and today I'm going to talk about how we secure our CI/CD pipelines. Now, let's revisit the software supply chain. It typically starts with a developer writing code, and while writing that code, they declare various dependencies: package dependencies, image dependencies. Then we run our CI/CD pipeline. In the pipeline, we perform various security and compliance functions: we generate an SBOM, we do vulnerability analysis, and we validate all the dependencies, checking whether they come from a trusted source. Finally, we build an artifact, typically an image, push it to a registry, and deploy it onto the cloud. Supply chain security applies across this whole spectrum, from the developer to the workload running on the cloud. In this talk, we'll focus on the security of the CI/CD pipeline itself. As I said, this is the place where we embed our innovations, where we run our security analytics and secure our code, so it's equally important that we do due diligence to ensure the pipelines themselves are secure. That's the focus of this talk. Just like our applications, our pipelines have their own lifecycle. It starts with composition: this is where we create and define our pipelines. There are a number of open source catalogs. If you are building a Tekton pipeline, there is the Tekton Catalog; if you are building GitHub Actions, there is the GitHub Marketplace. Most likely, when we define our pipeline, we'll use these ready-to-use pipeline definitions.
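One simple way to vet a ready-to-use catalog definition before each use is digest pinning: record the hash of the definition when you review it, and refuse anything that no longer matches. This is a minimal conceptual sketch, not part of any Tekton or GitHub tooling; the function names and the sample YAML content are my own for illustration.

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    """Hex sha256 digest of the raw definition bytes."""
    return hashlib.sha256(data).hexdigest()

def is_trusted(name: str, content: bytes, pinned: dict) -> bool:
    """Allow a catalog definition only if its digest matches
    the one we recorded when we originally reviewed it."""
    return pinned.get(name) == sha256_hex(content)

# Example: pin the digest of a reviewed git-clone task definition.
reviewed = b"apiVersion: tekton.dev/v1beta1\nkind: Task\nmetadata:\n  name: git-clone\n"
pinned = {"git-clone.yaml": sha256_hex(reviewed)}
```

Any later modification to the fetched definition, even a one-byte change, fails the check and forces a fresh review.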
So at this point, we need to make sure that when we bring in these pipeline definitions, we do due validation: do they come from a trusted source, and can we trust these definitions? We also need to ensure that we configure them securely. Then we set up, configure, and install our pipelines. At this point, again, we need to ensure that the pipeline is properly configured, because our pipelines have access to a lot of credentials: they have access to our registry, our GitHub credentials, our keys. So when we are setting up, installing, or configuring a pipeline, we have to make sure it is securely configured. We can use an admission controller to ensure that at install time we only allow signed and verified artifacts to be configured in the pipeline. Then, finally, our pipeline gets triggered. It could be a manual event, it could be a GitHub event, and our pipeline is ready to execute. This is the last point where we can do verification: we need to ensure that the pipeline we are about to instantiate is safe to execute. Once we have done all the verifications, we can execute the pipeline. Then, once our pipeline starts executing, we need to monitor it. In my personal opinion, monitoring is probably the most underappreciated security tool. It doesn't prevent a security incident, but it is probably the most mature and oldest tool we can use to discover issues with your pipelines or with your applications on the cloud. So while the pipeline is executing, we need to ensure that proper monitoring is established.
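The admission decision itself reduces to a small policy: deny the resource if it carries no signature, or if its signature does not verify. This is a conceptual sketch only; a real controller would verify a cosign/Sigstore signature against a public key or certificate identity, not an HMAC over a shared key as used here for testability.

```python
import hmac
import hashlib

# Hypothetical key; stands in for a real cosign public key or cert identity.
SIGNING_KEY = b"demo-key"

def sign(resource_yaml: bytes, key: bytes = SIGNING_KEY) -> str:
    """Produce a signature over the raw resource bytes."""
    return hmac.new(key, resource_yaml, hashlib.sha256).hexdigest()

def admit(resource_yaml: bytes, signature, key: bytes = SIGNING_KEY):
    """Admission decision: deny unsigned or badly signed pipeline resources."""
    if not signature:
        return False, "denied: resource is not signed"
    if not hmac.compare_digest(signature, sign(resource_yaml, key)):
        return False, "denied: signature does not verify"
    return True, "allowed"
```

The same check runs at both gates the talk describes: once at install time and once more just before the pipeline is triggered.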
Then, once our pipeline finishes, we need to collect automated attestations of the execution state and of any artifacts produced by the pipeline. And at any point in time afterwards, when the pipeline run is in the history, we need the ability to audit it. And we don't want to start from scratch: there are open source projects out there trying to solve some of these problems. For instance, there is Tekton Chains, which can capture attestations of a TaskRun and the images it produced, and sign them automatically. I also put in a Tekton enhancement proposal to extend this capability to the PipelineRun, so we can have end-to-end provenance collection, and we can improve on this. Then there are existing cloud-native monitors we can use to monitor our pipelines; the only thing we need to do is the right instrumentation of the pipeline components. For instance, with a Tekton pipeline, all the pods and containers that are executing should carry common labels so we can aggregate them and view them through a common lens. For the rest, there is some work in progress that I'm working on, again with the open source community, so if you're interested, we welcome your feedback and help. One thing I'm looking into is pipe validate. Just like containers and Kubernetes have their own CIS Benchmarks that provide guidelines for how you configure your cluster and your workloads, we are trying to come up with a set of guidelines for how you configure your pipelines and how you identify any misconfiguration in them. And again, these are going to be codified.
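The provenance collection described above amounts to assembling, at the end of a run, a signable record of what executed and what it produced. The sketch below is loosely modeled on that idea; the field names are illustrative and are not the actual Tekton Chains payload format.

```python
import hashlib
import json
from datetime import datetime, timezone

def artifact_digest(content: bytes) -> str:
    """Content-addressable reference for a produced artifact."""
    return "sha256:" + hashlib.sha256(content).hexdigest()

def provenance_record(pipeline_run: str, tasks, artifacts) -> str:
    """Assemble a minimal provenance record for a finished pipeline run.

    tasks:     ordered list of task names that actually executed
    artifacts: mapping of artifact name -> produced bytes
    """
    record = {
        "pipelineRun": pipeline_run,
        "finishedOn": datetime.now(timezone.utc).isoformat(),
        "tasks": list(tasks),
        "artifacts": {name: artifact_digest(data) for name, data in artifacts.items()},
    }
    return json.dumps(record, sort_keys=True)
```

A record like this, signed and stored alongside the run, is what later makes the audit question "which pipeline produced this artifact, and how?" answerable.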
So these are not going to be just guidelines that say "make sure you are using the latest version of this image"; these are actionable checks that we can codify and validate through our automation. Then there is an admission controller we are looking into that will prevent the execution of unverified pipelines: it provides the ability to validate and verify pipeline executions. And pipe auditor is again some automation we are building to allow validation of any artifact produced by a pipeline: you can validate how it was produced, which pipeline produced it, and what checks were done on it. In this particular talk, I'm going to focus on the signing part. Why is it important? Let's say I'm building a simple Tekton pipeline with three tasks: git-clone, vulnerability-scan, and build-image, and I'm bringing these tasks from the open catalog. Now suppose my git-clone task is compromised, in the sense that it tampers with the original artifacts from the git repository as part of the clone. Then my remaining tasks are automatically compromised, because they rely on what the git-clone task produces. So if one task is compromised, my whole pipeline can essentially be compromised. That's why it is essential that when we bring these open source elements into our pipeline, we make sure they are signed and that they come from a trusted source. And that's what we are doing in this pipeline signing utility; again, this is open source, and I encourage you to take a look. The approach we are taking is to sign in multiple layers. This is again a Tekton example, but it applies equally to GitHub workflows. The first thing we do is sign the pipeline definition. The pipeline encodes the task layout and the shared resources.
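The cascading compromise just described can be made concrete with a toy pipeline: every downstream task consumes whatever the clone step hands it, so a tampered clone silently poisons both the scan input and the built image. Everything here is a simplified stand-in, not real Tekton task logic.

```python
import hashlib

def digest(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def run_pipeline(source: bytes, clone_task):
    """Toy two-stage tail of a pipeline: clone -> build.

    Downstream steps trust the workspace the clone task produces,
    so a compromised clone propagates into every later artifact."""
    workspace = clone_task(source)   # git-clone (possibly compromised)
    image = b"IMAGE:" + workspace    # stand-in for build-image
    return workspace, image

def honest_clone(src: bytes) -> bytes:
    return src

def evil_clone(src: bytes) -> bytes:
    return src + b"\n<injected backdoor>"
```

Comparing the workspace digest against the expected source digest is the only way the tampering becomes visible, which is exactly why signing and verifying each task matters.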
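The multi-layer signing approach, signing the pipeline layout, each task, and each image reference separately, can be sketched roughly as below. This is a conceptual illustration only: real tooling would sign with cosign/Sigstore keys rather than the HMAC used here, and the canonicalization shown is a generic sorted-JSON form, not the project's actual format.

```python
import hashlib
import hmac
import json

KEY = b"demo-key"  # stand-in for a cosign/Sigstore signing identity

def canonical(resource: dict) -> bytes:
    """Canonical bytes, so semantically equal resources sign identically."""
    return json.dumps(resource, sort_keys=True, separators=(",", ":")).encode()

def _sig(data: bytes) -> str:
    return hmac.new(KEY, data, hashlib.sha256).hexdigest()

def sign_layers(pipeline: dict, tasks: dict, images: dict) -> dict:
    """Sign the pipeline layout, each task, and each image ref as separate layers."""
    return {
        "pipeline": _sig(canonical(pipeline)),
        "tasks": {name: _sig(canonical(t)) for name, t in tasks.items()},
        "images": {name: _sig(ref.encode()) for name, ref in images.items()},
    }

def verify_layers(pipeline: dict, tasks: dict, images: dict, sigs: dict) -> bool:
    """All layers must verify; a change to any one task or image fails the whole set."""
    return sigs == sign_layers(pipeline, tasks, images)
```

Signing each layer separately means a tampered task or swapped base image is caught even when the pipeline-level layout is untouched.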
So by signing the pipeline definition, we ensure that these are the only authorized tasks and that this is the order in which they can execute. Then we sign the tasks, which include the execution logic: these are the steps I'm going to run, and these are the runtime base images I'm going to use. So when we sign, we sign in multiple layers: we sign the pipeline, and we sign the tasks and the images separately. We are taking two approaches: one is signing the YAML directly, and now we are building a new approach where we convert these Tekton resources into a canonical format representation and sign that with Sigstore, that is, with cosign, along with all the artifacts represented in it. I just want to show you a quick demo. As I said, this is an open source project called Tapestry Pipelines. Here I have my pipeline definitions in this particular directory, and I run tkn (for Tekton) show. This will parse all the definitions: it will identify the pipelines, the layout, the tasks that are used, the images that are used, the steps that are used. Then I can go ahead and sign these resources with my key and point the tool at the registries where... okay, I think I'm running out of time, so I'll just show you one more command, verify. For verification, this tool essentially allows us to statically sign the pipelines and verify them in your workflows. And at the same time, I want to show you a simple admission controller: if we try to apply or create a pipeline that is not signed,
we have an admission controller that can identify it and deny the request with a signature validation failure message at admission time, so we can block execution of that pipeline. And finally, as I said, the code is open source, so you can find it. At this particular location, we have this talk that I gave, and we have a published article as well. And we're looking for help, because this is a big spectrum, so if you are interested, contact me by email, on GitHub, or on Twitter. Yeah, thank you.