Hey everyone, thank you for joining today. Good afternoon or good morning, wherever you're joining from. Today we're going to talk about Tekton Chains. Let me start off by introducing myself: my name is Parth Patel, and I'm a DevOps engineer at BoxBoat, an IBM company. It was originally BoxBoat, but we were recently acquired by IBM. BoxBoat specializes in DevSecOps: we containerize applications and manage cloud infrastructure using various Kubernetes platforms across cloud environments, including OpenShift, along with tools like Helm, OPA, and just about any other DevOps tool you can think of. Our main focus recently has been security. As you know, software security, and software supply chain security in particular, has become very important after the SolarWinds attack, so this presentation focuses on how you can mitigate some of those risks. Let's get started with Tekton. First off, what is Tekton? Tekton is an open source tool for creating CI/CD systems on any cloud provider or on-prem system. Developers can use it to build pipelines, run tests, and deploy images and applications into a Kubernetes cluster, or wherever else you need. Tekton consists of multiple projects, and we'll walk through each of them in more depth as we go through the presentation and the demo coming up later. The first is Tekton Pipelines, which is the CI/CD system you build your pipelines with.
It consists of Tasks, TaskRuns, Pipelines, PipelineRuns, and a number of other objects, and I'll show you exactly what a Task and a Pipeline look like in the coming slides. The Tekton CLI (tkn) lets you interact from a terminal with the Tekton pipelines you've created in a cluster: you can use it to view your tasks and pipelines and to initiate pipeline runs. If you're more inclined toward a GUI, the Tekton Dashboard lets you interact with your deployed pipelines from a web browser. You can visualize everything, see all the tasks in the pipelines you've created, and initiate or delete runs right from the browser; I'll show it up and running in the demo, and we'll use it to interact with our pipeline and kick off runs. The Catalog, or Hub, is a collection of community-contributed tasks and pipelines, and it's a good starting point for anyone new to Tekton who doesn't yet know how to write a task or a pipeline. I'll talk about it more in the coming slides, and our example is actually taken from the Catalog, which works out nicely for explaining it. And finally, the important piece: Tekton Chains. This is the component that provides the supply chain security, and the whole presentation is built around it, so we'll talk a lot more about it coming up.
So let's start with what a Task and a TaskRun are. Tasks are CRD objects that live in Kubernetes, and you can interact with them using kubectl, the tkn CLI, or the GUI that comes with Tekton. A Task is the building block of a pipeline, and each Task can have multiple steps. In this example, the Task has one step, a builder, which uses a specific image, takes some arguments, and does some work inside that image. A Task can contain several such steps. You can also define parameters: if you want to pass different values into a Task, you declare parameters, and those get passed down into the steps that reference them. This makes Tasks very modular; you can reuse the same Task multiple times with different parameters. In order for a Task to actually execute, you need a TaskRun, which is the object on the right side. The TaskRun references the Task by name, task-with-parameters, matching the one on the left, and it passes in the parameter values, in this case a flag and a URL. The Task then knows which parameters it's working with and executes. The next piece up the chain is the Pipeline. A Pipeline is essentially a collection of Tasks: you can see it has a tasks list, and in this case one task is defined, which references a Task by name.
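A minimal sketch of this Task/TaskRun pairing might look like the following (the names, parameter, and image here are illustrative, not the exact ones on the slide):

```yaml
# A Task with one parameterized step (illustrative names).
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: task-with-parameters
spec:
  params:
    - name: url
      type: string
  steps:
    - name: builder
      image: alpine:3.16
      args: ["wget", "$(params.url)"]   # the parameter flows into the step
---
# A TaskRun that executes the Task above with concrete values.
apiVersion: tekton.dev/v1beta1
kind: TaskRun
metadata:
  name: task-with-parameters-run
spec:
  taskRef:
    name: task-with-parameters   # must match the Task's metadata.name
  params:
    - name: url
      value: https://example.com
```

Applying both manifests with kubectl causes the TaskRun controller to schedule the step as a pod in the cluster.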
You can think of it much like the TaskRun that came before it: a Pipeline is a collection of task references with their parameters defined. Here the pipeline references a build-push task and passes in parameters, and you can have multiple tasks in the pipeline. Like a Task, a Pipeline can declare its own parameters, which again makes it very modular. For example, say you want to pass in a specific git URL or git repo: you can make that a parameter so you can reuse the same pipeline with a different git repo each time, to build different images. And just as a Task needs a TaskRun, a Pipeline needs a PipelineRun in order to run. Again, the pipelineRef down here, pipeline-with-parameters, matches the Pipeline's name, so the PipelineRun CRD calls the Pipeline object and passes in the parameters it needs; this pipeline needed context and flags, so the PipelineRun passes in those values and actually initiates the run. I'll show you all of this in the demo, which will make it easier to visualize what's happening. Like I mentioned before, the Catalog is a good place to start. If you're new to Tekton, it can be difficult to write a task or pipeline from scratch, so the Catalog is a good way to look at the tasks the community has already created for you; you can combine those tasks into a pipeline, or use the predefined pipelines that are already there.
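The Pipeline/PipelineRun relationship described above can be sketched like this (the names, parameter, and values are illustrative placeholders, not the slide's exact content):

```yaml
# A Pipeline that wires a parameter through to a referenced task.
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: pipeline-with-parameters
spec:
  params:
    - name: context
      type: string
      default: "."          # defaulted params can be omitted by callers
  tasks:
    - name: build-push
      taskRef:
        name: build-push    # this Task must already exist in the cluster
      params:
        - name: context
          value: "$(params.context)"
---
# The PipelineRun that actually executes the Pipeline.
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  name: pipeline-with-parameters-run
spec:
  pipelineRef:
    name: pipeline-with-parameters   # must match the Pipeline's name
  params:
    - name: context
      value: "./src"
```

The same Pipeline object can be run repeatedly with different PipelineRuns, each supplying different parameter values.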
For example, the one we're going to use today is called buildpacks. It already has the tasks, a pipeline, and a pipeline run defined for you, so it's very easy to hit the ground running, see what's happening, and then go in and modify things as you see fit. The Tekton Dashboard, like I said, is the user interface for Tekton. The Pipelines, PipelineRuns, Tasks, and TaskRuns we talked about are all shown there: once you have them established in your cluster, you can visualize and view them, kick off task runs or pipeline runs, delete, modify, and do whatever you need from there. Of course, the main piece today is Tekton Chains. Chains has a lot of features, but the main one we want to look at is its ability to sign task runs. Chains watches the results coming out of your pipeline, waits for a task run to complete, and then signs it using cryptographic keys. And if you're creating an image, in our case with buildpacks, or later on with something like Kaniko, it will sign that image as well and store the signature in an OCI registry. It also creates attestations in the form of provenance documents, and we'll talk about that in the context of SLSA: the next slide covers what SLSA is and how Tekton Chains helps us achieve some of its levels. Signing can be done with various backends. We're going to use a tool called cosign, which will create a public/private key pair for us, but you can also attach a KMS or another key management solution and do the signing with that.
A recent update I made via a PR lets Chains store to multiple different backends. By default, Tekton Chains stores its output as an annotation on the TaskRun object in your Kubernetes cluster, but you can also have it store to an OCI registry, Google Cloud Storage, or a document database (docdb). Those are the storage options available, and I'll show you that you can store to multiple locations at the same time if you want. In the future, we're looking at integrating SPIRE so that we can establish non-falsifiable provenance. That's a bit out of scope for this presentation, but I want the community to be aware that Tekton Chains is moving in that direction so it can meet some of the higher SLSA levels. SLSA is Supply chain Levels for Software Artifacts, and it defines four levels you can achieve. What Chains helps an organization do is achieve levels one and two. Level one requires that the build process be fully scripted and that provenance be generated: Tekton Pipelines gives us the fully scripted, automated build process, and Tekton Chains generates the provenance for us. Level one does not care whether that provenance is signed or unsigned. Level two does require signed provenance, but Chains accomplishes that too, as we talked about on the previous slide: you can use private keys to do the signing, and it signs both the task run results and the OCI image that's created, with the source and build on hosted services. The third piece Chains wants to move toward is non-falsifiable provenance.
That's what I mentioned before: if you introduce SPIRE, you get non-falsifiable provenance through attestation. That's a little out of scope for this presentation, but the project is moving in that direction, and once that work has been completed and tested, level three can be obtained. So what is provenance, and how does it get created? Provenance is basically an attestation from some entity, the builder. In our case the builder is Tekton Pipelines. The builder produces one or more artifacts, in our case an image, and it does so by executing some invocation; that's where our pipeline and tasks come into play. The invocation takes in parameters, materials, environment variables, and so on, and uses all of that information to create a software artifact. In doing so, it also produces a provenance document that records exactly how the artifact was created: what went into it, what parameters were passed, what commands were called, what environment variables were used. It captures all of that information so it can be verified later on, and that's what Chains provides us. A few things before we start the actual demo: Tekton Chains has a lot of configuration you can change. The main setting I want to focus on today, especially for this demo, is the taskrun artifact format, which is the format the provenance we create at the end will be stored in.
By default it's the tekton format, but we want the in-toto format, which is the SLSA-compliant one we talked about before; that's the format supported by the SLSA community. The taskrun artifact storage is, like I was saying, configurable with multiple backends: tekton (the annotation on the actual TaskRun), oci (an OCI registry), gcs (Google Cloud Storage), and docdb. By default it goes to tekton, the annotation, but you can specify multiple backends so that it stores both in the annotation and in OCI. That's useful for redundancy, or to keep a backup of your provenance in case your cluster gets destroyed. The command down here changes the chains-config ConfigMap to tell Chains what format you want and where to store it, and we'll actually run this in the coming demo. One thing you do want to remember, especially if you're creating an image and you want Chains to sign it: your task has to emit two results, one whose name ends in IMAGE_URL (with anything in front of it) and one ending in IMAGE_DIGEST. These two results have to be specified in the task, for example your buildpacks or Kaniko task. When they're present, Chains knows this task created an artifact, an image, that the user wants signed. That's the mechanism it uses currently.
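The ConfigMap change described above can be sketched as a kubectl patch against the chains-config ConfigMap in the tekton-chains namespace; these keys follow the Chains configuration docs, but double-check them against the version you have installed:

```shell
# Tell Chains to emit in-toto provenance and to store it both as a
# TaskRun annotation and in the OCI registry alongside the image.
kubectl patch configmap chains-config -n tekton-chains \
  -p '{"data": {"artifacts.taskrun.format": "in-toto", "artifacts.taskrun.storage": "tekton,oci"}}'
```

The Chains controller watches this ConfigMap and picks up the new settings without a restart.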
This might change in the future, but currently these two results have to be present for signing to work, and once we get to the buildpacks example I'll point them out so you can see them in an actual task. Like I was saying, we're going to use cosign to do our signing. cosign is a very useful tool from Sigstore: you can use it for container signing and verification, and for storing signatures alongside images in an OCI registry. Chains expects the signing secret to be stored in the tekton-chains namespace. You can see down here there's a command, cosign generate-key-pair, that will create a secret named signing-secrets in the tekton-chains namespace. The cosign tool does that automatically: it creates the key pair, a private key and a public key, and creates the Kubernetes secret for us, and Chains automatically picks that up and uses it to sign the images. I'm not going to show the installation in this demo, but it's very simple: you can install Tekton Pipelines and Tekton Chains easily using the release YAMLs from their GitHub repos, so it's quick to get running on your cluster. Configuration, again, is easy using the ConfigMap, and I'll show you that coming up. And if you want cosign, you can use go install, or Homebrew if you're on macOS, or just install the binary directly from the GitHub repo. So let's start working with the demo. I'll start off by using cosign to create that signing secret.
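The installation and key-generation steps mentioned above look roughly like this; the release URLs follow the pattern from the Tekton docs (pin a version instead of `latest` for real use), and the `k8s://` key reference is cosign's syntax for a Kubernetes secret:

```shell
# Install Tekton Pipelines and Tekton Chains from their release manifests.
kubectl apply -f https://storage.googleapis.com/tekton-releases/pipeline/latest/release.yaml
kubectl apply -f https://storage.googleapis.com/tekton-releases/chains/latest/release.yaml

# Generate a key pair and store it as the `signing-secrets` secret
# in the tekton-chains namespace, where Chains expects to find it.
cosign generate-key-pair k8s://tekton-chains/signing-secrets
```

cosign prompts for an optional password, stores the private key in the secret, and writes the public key (cosign.pub) to the current directory.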
I'll explain exactly what the buildpacks example is doing and walk through what the pipeline and tasks look like. Once the pipeline finishes, we'll review the provenance document I was talking about before: how it looks and what format it's in. And then, like I said, we can use cosign to verify the signatures. Chains will sign the image and store it in an OCI registry, and we want to check that both the image and the attestation, the provenance document, are signed, so we know they haven't been tampered with by a third party. Before we get started, I did want to mention that the buildpacks example currently in the Catalog does not have the necessary fields, the IMAGE_URL and IMAGE_DIGEST results I was talking about before that are required for Chains to pick up on the object and do the signing. So we'll be using a modified version, but work is going on to upstream these changes: the Catalog is being updated to be more Chains-compliant, so in the future you won't have to worry about this. But I'd say, if you're creating an image or some kind of artifact and you want it signed, double-check that whatever task does the actual image creation emits IMAGE_URL and IMAGE_DIGEST results. You can see this is the Tekton Dashboard running. Let me go up the screen: all I have is a kubectl port-forward to port 9097, and that's how it appears; I'm just running it locally. And you can see right now I don't have anything defined: there are no pipelines and no pipeline runs.
PipelineResources are getting deprecated, so they'll be removed, and the same goes for Conditions; those two things are going away, which is why I didn't mention them in this presentation. We've talked about Pipelines, PipelineRuns, Tasks, and TaskRuns. A ClusterTask is basically the same thing as a Task; the only difference is that a Task lives in a specific Kubernetes namespace, while a ClusterTask is not namespaced: you define it once and it can be used from any namespace. If your pipeline calls a task that only exists in one particular namespace, it might not work from elsewhere; that's the only difference. Here is the Tekton Catalog I was talking about before, and it's a very good place to get started. You can see there are a bunch of tasks already defined: for example the buildpacks task we'll be using, curl, and git-clone, for when you want to clone a repo that has your Dockerfile and build an image from it; Kaniko, Golang, there's a lot here, so whatever you need, it's a good place to start. And then, of course, the pipelines. Specifically, we'll be using this buildpacks pipeline. You can see it takes three different tasks in order to run: right here are the dependencies, and they name the three tasks. We'll apply all of these so they exist in our Kubernetes cluster and we can run the pipeline, then install the pipeline itself, and then do the actual pipeline run. So let me walk you through it. Here is the actual pipeline.
You can see it's much longer and more complicated than the small example I showed you before, but it has the same kinds of things. One thing I didn't mention yet is workspaces. Say your pipeline has multiple tasks and each produces some artifact that gets passed between tasks; in that case you need a workspace. A workspace is basically an empty directory or a PVC in Kubernetes that stores that artifact so it can be shared between the tasks. This pipeline has two of them. And then we talked about parameters: you can specify different parameters here, and some of them are defaulted. For example, this source reference has a default of an empty string, so if you don't want to change anything there, you don't have to specify it. Parameters that already have a default can be omitted, but ones like the source URL and the app image have to be specified because there is no default; the main point of this pipeline is to create an image from a source URL, so those aren't defaulted out. The third piece is the tasks. You can see it's going to call the git-clone task, then the buildpacks task, and finally the buildpacks-phases task. That last one actually won't be called in our example, because we're building with a trusted builder and that task does an untrusted build; we don't want to build an untrusted image, so that task will not get run. But in order for the pipeline to initialize, all three tasks need to exist in the cluster. So I opened up this specific one, the buildpacks task.
I opened it up here; this is version 0.3, as you can see. What I wanted to show specifically is right here, the type hinting I was talking about before, the results. You can see it has the IMAGE_DIGEST result but it's missing IMAGE_URL. So if I ran this pipeline as-is with Chains, Chains would not pick up on the image being created and would not do the signing. That's why we're modifying this specific task. Here is the updated one, linked in the slides; you can use it as a 0.4 version. You can see down here it has both an IMAGE_DIGEST and an IMAGE_URL result, so now Chains will pick up on it and do the signing when the time comes. The first thing I'm going to do is use cosign to generate the signing secret. I'll just run this; all it does is call cosign, as we discussed before, to generate the signing secret in the tekton-chains namespace. That's done, and it gives us a public key at the end; you can see right there that it got written out. The next piece is editing the chains-config ConfigMap, because remember, from the previous slide, we want the in-toto format, and we want the storage to be both OCI and the Tekton annotation. This command takes care of that, so I'll just run it. Nothing changed, but we can take a look: in my case it had already been set. You can see in the ConfigMap for Tekton Chains that the format is already in-toto and the storage is oci and tekton. Let me show you this real quick: you can see here is the Chains controller that's running.
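The difference between the 0.3 and 0.4 task versions comes down to the declared results. A Chains-friendly build task declares both type-hint results, roughly like this (the `APP_` prefix is an illustrative example of the "anything in front of it" naming rule):

```yaml
# Inside the build task's spec: both results must be present for Chains
# to recognize the produced image and sign it.
results:
  - name: APP_IMAGE_DIGEST
    description: The digest of the image that was built
  - name: APP_IMAGE_URL
    description: The registry URL of the image that was built
```

The task's steps then write the actual digest and URL into `$(results.APP_IMAGE_DIGEST.path)` and `$(results.APP_IMAGE_URL.path)` at build time.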
It got reconfigured based on the ConfigMap I changed earlier. Here is the dashboard that's running, which is what you see right here, so you can visualize things. And here are the two Tekton Pipelines components: the pipeline controller and the webhook. For example, say your tasks and pipelines are stored in a repository somewhere; you can use the webhook to instantiate a pipeline run and run through the pipeline whenever anything changes. So those are the different pieces that are installed. Next, like I said, we have to install the tasks: the Catalog says that in order for this buildpacks pipeline to run, we're missing these dependencies, the git-clone task, the buildpacks task, and the buildpacks-phases task. That's what I'm going to install. I did replace the buildpacks task with the 0.4 version, the one right here with the IMAGE_URL result, so the image will get signed once this pipeline actually runs. I'll do the install, and you can go back into the dashboard and take a look: now you can see three different tasks, git-clone, buildpacks-phases, and the actual buildpacks task. If you click on one you can view the actual YAML, because it is a Kubernetes object, a CRD, so you can go in and see exactly what the task is doing. Next, we install the pipeline. That's what I showed you here, the buildpacks.yaml, basically this pipeline object right here; applying it creates it in our cluster. If I click on Pipelines now, you can see buildpacks is there, and again you can view the YAML associated with it. Now, finally, we're going to do the actual pipeline run.
If you remember from the beginning, a Pipeline is just an object that can be reused with the different parameters you pass it; in order for it to actually run, you have to specify a PipelineRun, or, if you just want to run a single task by itself, a TaskRun. In our case, I want to run the pipeline, so down here you can see the PipelineRun specified. Up here is the persistent volume claim I was talking about before, because the pipeline takes a workspace: the git-clone and buildpacks tasks work together, one cloning down the repository and the other doing the image creation, so they need a shared workspace, and that's what this PVC is. You can see the claim name matches this one; all this does is create the PVC and make sure it's there so the pipeline can use it when the time comes. Right here, the pipelineRef is buildpacks, the name of our pipeline currently in the cluster, and it takes in the different parameters. The builder image it's going to use is a buildpacks builder located on Docker Hub, right here. The app image is what I want the output named and where I want it stored, which in this case is ttl.sh. ttl.sh is a very short-lived, anonymous registry, and it's very good for development use cases: anyone can use it without logging in, which makes it useful for demos or for checking that your pushes and pulls are working properly. Of course, you don't want to use it for production, just for testing and demos; for that, it's great.
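The PVC-plus-PipelineRun manifest being described can be sketched as below. The parameter names are based on the Catalog's buildpacks pipeline, but verify them against the version you install; the builder image, ttl.sh repository, and source URL are illustrative placeholders:

```yaml
# The shared workspace: a PVC that git-clone writes into and buildpacks reads from.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: buildpacks-source-pvc
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 500Mi
---
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  name: buildpacks-run
spec:
  pipelineRef:
    name: buildpacks                         # the Pipeline installed earlier
  params:
    - name: BUILDER_IMAGE
      value: paketobuildpacks/builder:base   # illustrative trusted builder
    - name: APP_IMAGE
      value: ttl.sh/my-demo-image:1h         # illustrative ttl.sh target
    - name: SOURCE_URL
      value: https://github.com/buildpacks/samples
  workspaces:
    - name: source-ws
      persistentVolumeClaim:
        claimName: buildpacks-source-pvc     # matches the PVC above
```

Applying both objects kicks off the run; the dashboard then shows a TaskRun (and pod) per task.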
All I'm doing is putting it into my repo there and giving it my demo image name. Where is the source code, the git repo, coming from? It's going to pull from the buildpack-samples repo, and specifically it's going to build the Ruby sample. So I'll use kubectl apply in this case, because these are Kubernetes objects, and once that's applied you can come in here and see the buildpacks run: it created the object, found the pipeline, and it's actually running now. We can view exactly what's going on, and this is where the user interface comes in handy, because you can see the logs being created within each of the tasks. Each of the tasks that run creates its own TaskRun, and each one is a different pod in your cluster, so the dashboard helps you visualize what's going on with each pod: it gives you a nice log view, a status of what's actually happening, whether it finished, whether there were any errors, and the same YAML configuration of what's going to happen in the task being run. It looks like the git fetch finished; it pulled from buildpack-samples. The next piece after that is the buildpacks task, which is going to build the image, name it, push it to the ttl.sh registry, and use that source path. It looks like it completed properly, so we can take a look and verify that it's actually there. Let me go back to the top of the screen. So next we're going to look at verifying: is the image signed? Has the attestation been created?
And has it been stored in OCI? The first thing we're going to do is this tkn command, which I want to show real quick. This is the Tekton CLI I was talking about before. What it does is say: hey, Tekton CLI, describe the last pipeline run, and then use a JSONPath to get me the image name. Basically, it looked at the last run, found the parameter labeled APP_IMAGE, which is this, and returned its value. It's very useful for getting information back, or just for describing your last pipeline run; if you're not a graphical-user-interface person and want to do everything in your terminal, you can get the same information there that you get in the dashboard: has it finished, did it finish successfully, what parameters were passed in, were there any results in that pipeline, what workspaces were used, and what tasks were actually run. You can also describe the task run, describe the task, and so on, which makes it easy to verify what's going on or interact with it. So I'll run this command; it stores the image reference as an environment variable in my terminal so that I can use crane. crane is another tool that's useful for viewing what's in an OCI registry. When I do a crane ls, remember this is our app image, so it's looking at that demo repository under ttl.sh. And if you remember, I did not specify a tag here. There we go. I did not specify a tag.
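The tkn-plus-crane step might look something like this; the `APP_IMAGE` parameter name and the JSONPath expression are assumptions based on the pipeline described above, so adjust them to your pipeline's actual parameter names:

```shell
# Pull the built image reference out of the most recent PipelineRun.
export IMAGE=$(tkn pipelinerun describe --last \
  -o jsonpath="{.spec.params[?(@.name=='APP_IMAGE')].value}")

# List the tags the registry holds for that repository.
crane ls "${IMAGE%%:*}"
```

crane ls takes a repository (without a tag), so the `${IMAGE%%:*}` expansion strips any tag before listing.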
So all it's doing, by default, is tagging it as latest, and you can see in here it tagged it latest. It also stored the .att tag, that's the attestation, the provenance document, and it also stored the actual signature in there. What cosign allows us to do is verify that whatever image you're pulling down from the internet is actually trusted, right? For example, let's say I built some kind of application image internally, pushed it up to the OCI registry, and I'm going to pull it down into my production environment. Before I pull it down, I want to make sure I trust that image, and that's where this whole signing, checking, and verifying comes into play. So cosign verify is going to take my key, in this case pulling it from the tekton-chains namespace, that signing secret we created earlier, and verify: hey, is this image signed with the key that I used to create it? You can see right here, verification passed. The cosign claims were validated and the signatures were verified against the key, so all of that was checked, and it gives you a little bit more information. The other piece you want to check is the actual attestation. That attestation document that got created by Chains and pushed up to the OCI registry, has it been tampered with? We can check that, because it's also signed. So you can verify that no one has tampered with that specific provenance document. This does the same thing: it checks that everything validated and the keys matched up. And here is the actual payload it gives us back, so I'm going to copy this over here.
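The two verification steps can be sketched like this, assuming the key pair Chains uses lives in the `signing-secrets` secret of the `tekton-chains` namespace (cosign can read a key straight out of a Kubernetes secret via a `k8s://` reference):

```shell
# 1. Verify the image signature against the Chains signing key.
cosign verify --key k8s://tekton-chains/signing-secrets "$APP_IMAGE"

# 2. Verify the attestation (the signed provenance document) the same way;
#    on success it prints the DSSE envelope with the base64-encoded payload.
cosign verify-attestation --key k8s://tekton-chains/signing-secrets "$APP_IMAGE"
```

In a production pull, you'd run the same `cosign verify` against the publisher's public key before trusting any image from the registry.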
So you can see the payload starts here and ends there, and the next piece after that is the signature. This is base64 encoded, so if you wanted to actually view it, I'm going to decode it here real quick: do a base64 decode with base64 -d, and I'll pipe it to jq just to make it more visually pleasing. So here is that provenance document that got stored. Like I said, it's the SLSA provenance stored in the in-toto format; the type is in-toto. The subject name is that specific image we created, the app image right there, that's what got created here, with its specific digest. Who did the creating, right? The builder in this case was Tekton Chains, and the build type is the Chains version-two format. And it gives us more information on, okay, what happened in this specific build, how was this image created? So, what parameters were passed in? You can see the app image got passed in, the source subpath got passed in here, the user ID; all the different things that got passed in are mentioned here. So let's say down the line this pipeline no longer exists, and you want to verify, hey, how was this image created? You want to deep dive into it. You can look at this provenance document and say, okay, these are all the parameters that were used, and these are the different steps that the buildpacks task took in order to actually make it. There were multiple steps that the buildpacks task ran, and they're all listed out here. You can see the actual entrypoint, the bunch of different commands it ran, what arguments it took in, whether there were any environment variables, any annotations. And then the next piece, right?
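The decode step itself can be reproduced offline with any payload. Here's a tiny self-contained sketch using a made-up in-toto statement rather than the real Chains output:

```shell
# Simulate the base64-encoded attestation payload with a made-up in-toto
# statement (a real one comes back from `cosign verify-attestation`):
PAYLOAD=$(printf '{"_type":"https://in-toto.io/Statement/v0.1","predicateType":"https://slsa.dev/provenance/v0.2"}' | base64 | tr -d '\n')

# Decode it back to readable JSON; pipe to `jq .` for pretty-printing if installed.
echo "$PAYLOAD" | base64 -d
```

The real payload decodes the same way, just with the full subject, builder, parameters, and step list inside.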
So this was the first step, then this is the second step, and lastly the third step, and so on and so forth. It tells us when the build started, when it finished, and all that kind of information. So this is the format, and I think this format is still evolving; you can see here it's at version 0.2. It was very, very different when it first started out at 0.1, where a lot of this information was not captured. Version 0.2 does a better job of capturing exactly what went on, what actually took place in order to create this image. I think as the iterations improve, as the SLSA provenance format evolves and the community works at it, more and more information will be captured here, and it's going to give us a lot more to verify and trust that whatever image we're using is actually trusted. The whole focus of this is that you want to create images you trust, but I think the overall goal is that the community adopts this, vendors included. You're pulling down images from your vendors; how can you trust them, right? If they follow this format, you can say: hey, I know what their public key is, I can double-check that the image I'm pulling down was actually signed by their private key, so I trust that image and there's no man-in-the-middle attack happening. And you can also check the attestations to make sure nothing weird, nothing like in the case of SolarWinds, got introduced while the image was being built that could compromise its security. And that brings us to the end of the presentation. We verified that the image is actually signed, and we also verified that the attestation was signed.
And real quick, I do want to show the task runs. So, just getting the task runs here, I specifically want to get this one, right? This is the one that ran here, the cache build image one. The full name is cut off, but this is it. What I want to show is that, remember, the setting I set for storage was to save to OCI and also to save to the annotations. So if I do a get taskruns, specify that name, and output it in, say, YAML format, we can scroll up here and see all the annotations. Tekton Chains creates a lot of different annotations. Specifically, here is that payload, the same thing I decoded before. If I did the echo, base64 decode, and pipe to jq, it would be the same exact thing we saw earlier. And here is the actual signature that got stored in the annotation as well as at the OCI level. It gives you a little bit more information that yes, Tekton Chains signed it and everything was successful. So thank you. Please let me know if there are any questions or anything else I can answer. It does not look like there are any questions; if there are, please let me know. If not, thank you for attending. I hope you learned a little bit about what Tekton Chains is, what exactly it can do, and how it can help your organization achieve this. Parth, we do have a couple of questions coming in. I see one asking, what is the future of Tekton? So I think one of the big things for the future of Tekton is the Chains piece. I think a lot of companies and organizations are moving in that direction, because it helps you establish SLSA levels one and two currently, right? It's very easy to obtain those two levels.
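Reading the payload back out of the TaskRun annotations looks roughly like this. The TaskRun name is a hypothetical stand-in, and the annotation key suffix (the TaskRun UID) varies per run:

```shell
# Hypothetical TaskRun name; find the real one with `tkn taskrun list`.
TASKRUN=cache-image-pipeline-run-build-image

# Show every Chains annotation on the TaskRun.
kubectl get taskrun "$TASKRUN" -o yaml | grep 'chains.tekton.dev'

# Chains stores the payload and signature under keys shaped like
#   chains.tekton.dev/payload-taskrun-<uid>
#   chains.tekton.dev/signature-taskrun-<uid>
# so the annotation payload decodes the same way as the OCI copy:
kubectl get taskrun "$TASKRUN" \
  -o jsonpath="{.metadata.annotations.chains\.tekton\.dev/payload-taskrun-<uid>}" \
  | base64 -d
```

Storing in both places means the provenance survives even if one copy (the registry or the cluster object) goes away.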
So I think a lot of organizations are going to start adopting Tekton Pipelines along with Tekton Chains so that they can meet those requirements and have a secure software factory within their organization. Great. And someone else is asking about licensing and pricing. So this is all open source. There is no license cost, no pricing. You can go to GitHub right now, download it, and use it in your organization. There's no price attached and no commercial license; it's all open source tooling. Perfect. If anyone else has questions, feel free to put them in the Q&A box or in the chat box; stick around a couple of minutes longer. What phase of adoption are we in for Tekton, early adopters still? So Tekton Pipelines, I would say, is a lot more mature than Tekton Chains. Chains is still early in terms of adoption; I think there are a few companies actually using it. Pipelines is a lot more mature, but Chains is improving, and as people find bugs and issues with it, it's going to keep getting better. In terms of production readiness, I would say both Pipelines and Chains are production ready, but there are a few bugs along the way. And, like I was talking about before, SPIRE is getting added to Chains, which is going to increase the functionality, helping attest that the task runs and annotations being created in your pipeline are non-falsifiable. And what should be done to address SLSA levels three and four until Tekton has the capability? So in terms of SLSA, let me actually share my screen here again and bring up the PowerPoint. The main focus is the non-falsifiable provenance.
So that piece can be obtained by using SPIRE. For example, if you had SPIRE running internally, you would have to create your own mechanism to attest that, okay, this specific pipeline that I'm running is actually being run by the Tekton controller. So you'd probably need a sidecar and that kind of thing running in your cluster in order for all of that to work, which seems difficult to do without Tekton's support. In terms of doing it manually, I don't think there's anything that's been discussed currently, that I know of, to achieve level three that way. On the Chains side, there has been work going on; the PR has basically been approved, so work on the Pipelines side and the Chains side has started, and I think we'll see that coming up in the near future. I would say probably in a few months level three should be addressed by Chains. And then the last piece is the hermetic builds and two-party reviews, right? That's more about making sure you're working in a secure environment, maybe offline somewhere, and following a specific standard with two-person reviews. That could mean, hey, each reviewer signs with their own public and private key pair: I reviewed this specific task, or this provenance, and I approve it, so I sign it with my private key, and the other person signs it with theirs. That's a two-person review. And then maybe you have an admission controller running in your production environment that checks to see that yes, there was a two-person review, that it's signed by two specific keys belonging to two valid reviewers, right?
Because the admission controller would know their public keys, so before anything actually gets deployed into a production environment, it has been verified. Kyverno is actually one of the admission controllers that works well with Chains. It automatically checks whether the images are signed, and it can check whether the provenance document that got created has specific fields in it. So there's work going on on the admission-controller side to make sure that whatever you're creating doesn't get into the production environment without first being verified: it automatically blocks anything that isn't signed or doesn't meet your provenance requirements, all that kind of stuff, right? And in the future, if it's not signed by a two-party review, block that too. Any other questions? Yeah, there's one more, and I'm going to say some of these things wrong: would that require ITSM/CMDB integration to facilitate that two-person signing process? I presume manual signing breaks level one, so automation being paramount. Yes. So I'm not sure what the two tools you mentioned there are; let me scroll back and see. But yes, like you said, it would have to be automated, because a manual process would break level one. You would want it to be an automated process, but in that specific instance, in order to meet level four for the two-party signing, it might have to be a manual step. I'm not sure exactly; we can talk offline about this, but to meet level four for two-party signing it may need to be a manual step, and I can get back to you on that. Yeah. So I would like to clarify, ITSM is like ServiceNow. Okay, got it. Any other questions?
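An admission policy along those lines can be sketched as below. This is a hedged illustration of a Kyverno `verifyImages` rule, not the demo's actual configuration; the policy name, image pattern, and key are placeholders:

```shell
# Illustrative Kyverno policy: block Pods whose images aren't signed
# by the expected cosign key. Adjust names and the key to your setup.
kubectl apply -f - <<'EOF'
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-signed-images
spec:
  validationFailureAction: Enforce     # block, rather than just audit
  rules:
    - name: verify-image-signature
      match:
        any:
          - resources:
              kinds: ["Pod"]
      verifyImages:
        - imageReferences:
            - "ttl.sh/example-ns/*"    # hypothetical registry path to enforce on
          attestors:
            - entries:
                - keys:
                    publicKeys: |-
                      -----BEGIN PUBLIC KEY-----
                      <your cosign public key here>
                      -----END PUBLIC KEY-----
EOF
```

A two-party requirement would then be a matter of listing two attestor entries, one per reviewer key, so a Pod is admitted only when both signatures verify.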
Looks like we're good for now. And again, you can always follow up with Parth later if you want to get into further discussions. Parth, do you want to wrap it up? Yes. Yep. Thank you, guys. Thanks, everybody, for attending. I hope you all enjoyed it and learned something in this webinar, and please contact us if you have any more inquiries or any specific questions. Thanks, everybody. And thank you also for joining us today. Thanks for having me. And if you want help getting started with setting up Tekton Chains, BoxBoat is here to help. Thank you.