because I don't know if we have to work the room. All right, well, thank you for joining our presentation. This session is going to be an Introduction to Supply Chain Choreography with Cartographer. My name is Cora Iberkleid. I'm a developer advocate with VMware. And I'm Corby Page. I'm a Tanzu solution engineer at VMware. So, yeah, thanks for being here. What we have planned for you today is a 90-minute tutorial. So if you pull out your computer, you can actually follow along with us. And if not, we will go through all of the instructions online. So go ahead and pull out your computers, log on to this website, and enter that password. And once you do that, oh, yeah, sorry, let me get it. Sorry, there we go. OK. And if anybody needs help at any time, just raise your hand. I'm happy to come on over. Yeah. So are you connected? It's working for you? My Mac is still booting. Oh, it's still booting? OK, no rush. If people are following along at home, you're welcome to follow along with us too. Just log on to this website and enter the password. I'll leave it up there until all the people in the room are connected, just in case. Just give me a hand signal, I guess, when you're in. Oh, it's still booting up? Wow. I wonder if it's trying to do an update, or needs something from the internet. That's usually the good thing about Macs, the boot-up time. I know. Right when I needed it. All right, but you've got the website, right? It's vmdu.com/osscon, password osscon22. And you're good? OK, perfect. So once you log in with that password, this is the page you're going to see. We are going to use a platform where, as soon as you enter, you get your own session on a shared Kubernetes cluster. So you'll be able to work on your own objects for the most part, and then there are some cluster-wide objects that we'll all be sharing. We'll explain that as we go along. So just click into that.
And just to show you this platform: on the upper left-hand side, you've got a table of contents. We'll be going through these instructions, and you can see we're going to have different sections. Cartographer is a project for creating supply chains, sort of analogous to a CI/CD pipeline. So first, we'll talk about how you would run through a sample CI/CD process manually, without the level of automation that we're going to introduce. Then we'll talk about how to automate the steps we just did manually with this approach. And then we'll do a slightly more complex example following a GitOps model of deployment. So that's your table of contents. On the left, of course, you have the main frame; there are things you can click on here. And on the right-hand side, you have three frames. You have two different terminal windows. They're the same; you can do an ls here and an ls here. It's just two different windows so that you have two places to run commands if you want. There's also a VS Code editor built into this platform. The first time you click on it, it takes a second to load, but you do have an IDE right here. And the third tab is actually linking to cartographer.sh, which is the home page for the Cartographer project. You can also open it in your browser separately if you want; it's literally just the same page embedded. Here you can learn a little bit more about Cartographer. You can see from the project page that this project is a tool to help you create secure, reusable supply chains to manage your continuous integration, continuous delivery, and continuous deployment. And it is a Kubernetes-native product, and it leverages other Kubernetes-native products. So in that sense, it differs a little bit, in what it can do and how easy it is to use, from some more traditional tools.
So having said that, everybody ready? How's the Mac loading going? Yeah, we're good? OK, cool. So let's go ahead and get started. Before we do, I'd like to get a little bit of a sense from the folks who are here: what do you do for work, and what CI/CD are you using at work? Yeah, you go first. OK, so Azure DevOps and GitHub Actions. OK, perfect. And Tekton? Oh, you do use Tekton? OK, great. You use Tekton for the whole pipeline; everything is done in Tekton. OK, great. And you? Perfect. A Kubernetes user for about seven months, previously using Jenkins and now using Tekton on Kubernetes. OK, perfect. All right. Hi. All right. So, to get started: you've both been doing DevOps before, and there are different words that people have used over time to talk about the process of getting an application from source to production. Some of them are listed here: pipelines, DevOps, DevSecOps, supply chains, path to production, CI/CD. There are nuanced differences between them, based on when each term came up and what it was trying to highlight to differentiate itself from other ways of describing the path to production. In the context of Cartographer, we're talking about supply chains, and specifically, the approach that we're taking is one of choreography versus orchestration. We'll talk about that a little bit more as we go along and say where it makes a difference, or why we're choreographing rather than orchestrating. So we'll get to that. As I mentioned, the agenda is going to be, first, a manual approach, so that you can appreciate where the opportunities for automation are. Then we're going to introduce Cartographer. And lastly, we'll do the GitOps workflow, which is where you'll see a lot of Tekton coming in, actually. So your background in Tekton will help you understand that. Hi. Come on in.
We are doing a live workshop, so if you want to pull out your laptop, you can follow along. And yeah, Corby will help you get set up. All right, so let's do the manual approach. We're going to take a really simple supply chain and ask: what are the minimum steps you need to get an application from source code to production on Kubernetes? Obviously, the very first thing you need: if you have developers who are committing code to a code repository, you need some process that's going to monitor those Git repositories and notice when a developer has made a commit that you need to pull and process. In order to do this, we're going to use a Kubernetes resource. We're doing this in a Kubernetes-native way, so we want to look to the ecosystem. We don't want to write a script, and we don't want a process where we're looping through some kind of script that we've created. We'd rather look to the ecosystem and say, hey, in the CNCF, or even among non-CNCF projects, is there anything existing that will run on Kubernetes and will poll our Git repository periodically and pull down new code? And in fact, there is. In this case, we're using a product called Source Controller, which is part of FluxCD. Then, after you pull the source code, you might do some testing, but we're keeping this really simple. So the next thing we have to do is take that source code and build it into an image, right? Because the only way you can deploy an application into Kubernetes is if you have it packaged as a container image. For that, again, we don't want to script it. We don't want to write a bash script that does a docker build, or any other CLI kind of approach. We want to look to the ecosystem and say, is there some kind of Kubernetes-native way to do this that's available?
And so for that, we're going to use a project called kpack. kpack leverages a technology called buildpacks to build container images from applications in a very structured way. And it does so for applications written in many different languages, so it's a really good way to cover a lot of different kinds of applications with a similar approach. And it builds in concerns for security and efficiency, and optimizations that are available just by virtue of using this tool. So we're going to make it easy on ourselves and just use kpack. And then we want to run our application. For this, we don't really need an ecosystem project; we could just draft up some YAML for a deployment and a service and maybe a proxy or an ingress. But since we are trying to leverage tools in the ecosystem as much as possible, there are projects out there that will make our application deployment more sophisticated and give us more features with simpler configuration. Knative Serving is an example of that kind of project. With Knative Serving, instead of having a deployment, an ingress, and a service, you have one very simple YAML file. And then when your application gets deployed, you get functionality such as traffic routing, revision management, and auto-scaling, including scaling down to zero, et cetera, without any additional work. So we're going to choose to deploy our application using Knative Serving. So we're kind of halfway there, because we didn't write any code. We haven't used any proper CI/CD tool yet, but because we're using these ecosystem resources, these tools are already going to do a lot of the work for us. So we can say we're halfway there. But these tools are written by entirely different organizations, right? They don't have a feature inside of them that integrates with one another.
And they still require you to install these products and create the resources for every application. So there's obviously another layer of work that has to be done. But before we add that automation, let's go through the process manually. So we go to step one. Now, these are action boxes: if you click these blue lines, they have an action that you see on the right. So this opens a file that we've prepared for you here, and you can see that it's a YAML file. Is it big enough? Can I make this frame bigger? I'll just do this for now. Okay, so this is the configuration file for that first resource. We're going to use Flux, kpack, and Knative, which means we know we need three YAML files for one application, right? This is the application we're working with. It's on GitHub, it's under my org, and it's a Go application, a very simple hello-world application. So for this application, we need three resources. This is our Git repository resource, and you can see that it's just a Flux resource. All it's saying is: every minute, go to this URL, to this main branch, and check to see if there's a new commit. And if there is a new commit, what Flux does is take an archive of that code and download it into the cluster. So that's what this resource is doing. As soon as we apply this file, that process will run every minute. So we can go ahead and apply that, and our hello-world resource is created. Because this is the first time we're creating this resource, it's going to go to the repository and get the latest commit right away. So we check the status of this object with this command: get the repository, look at the YAML, and then filter it using yq, which is a tool to select and filter YAML output. We can just look at the status section of the object. Now, in Kubernetes, resources can have a status associated with them.
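The GitRepository manifest being described probably looks roughly like this sketch (the resource name and repository URL here are illustrative placeholders, not the workshop's exact values):

```yaml
# FluxCD Source Controller resource: poll a Git repo every minute
# and download an archive of the latest commit into the cluster.
apiVersion: source.toolkit.fluxcd.io/v1beta1
kind: GitRepository
metadata:
  name: hello-world
spec:
  interval: 1m                              # how often to check for new commits
  url: https://github.com/my-org/hello-go   # illustrative repo URL
  ref:
    branch: main
```

Applying it with `kubectl apply -f` is all it takes to start the polling loop.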
So it's basically state information related to the resource that's running in the cluster. And the status is stored in etcd, right? Kubernetes is storing these object definitions in its internal database, and that internal database is acting as a data store for us as well. We don't have to provide a separate storage system to track the status of our jobs, and you're already starting to see how Kubernetes is simplifying the process of composing a toolchain, right? So we can leverage that, look at the status, and ask: what kind of information is available to us here? There are two things here that are informative. One is that the condition of type Ready is True, meaning this one is ready. And secondly, in particular for Flux, we're looking for where the information is that Flux found for us that we care about. And here it is: the address of the tar.gz file with the latest commit. There's a commit ID, and you can see by the URL that it's not pointing to GitHub; it's pointing to a local address, right? So Flux has downloaded an archive with the latest commit of the code for us, and it's put it in a field called .status.artifact.url. So, great. Now, just to review what we've done manually: we wrote some YAML, we applied it manually to the cluster, we manually checked the status of the resource, and we're manually going to copy that value out. So I'm just going to run this, a similar command, using yq to pull out the whole .status.artifact.url. I'm going to take this value, literally highlight it, and copy it, because I need to give it to kpack, right? We want kpack to build this into an image, so we need to give this value to kpack. All right, so make sure you've got that copied in your clipboard, and then click next.
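For reference, once Flux has fetched a commit, the status section being described looks roughly like this (all values below are illustrative, not the workshop's actual output):

```yaml
# Sketch of a GitRepository .status after a successful fetch.
status:
  conditions:
  - type: Ready
    status: "True"
  artifact:
    revision: main/b3396adb4a4d4a4b6108b13ca0b5d874b9eb6c1a
    url: http://source-controller.flux-system.svc.cluster.local./gitrepository/default/hello-world/b3396adb.tar.gz
```

The extraction command would be along the lines of `kubectl get gitrepository hello-world -o yaml | yq '.status.artifact.url'`.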
And now we can go to our next resource. This one is, let me know if I'm going too slow, the build stage. Here we're going to take a resource from the kpack project. It's called an Image; that's just the way kpack works. And we need to give it a couple of pieces of information. We need to give it the source to build. Right now we don't have the source, so we can't apply this yet, but since we just copied that value, we can now paste it in here. And now we have supplied the value to kpack. So we're asking it to build an image and publish it to a registry, and that registry is configurable. In all of your sessions you have a local registry, so we're just saying: take this code, build an image, and put it on this registry. Question, yeah. Why did I paste the URL? Here, I'll undo this. So there was a placeholder: if you go to the manual image-building step and open the file, you should see a placeholder called new-source. You want to delete that new-source and instead paste in the URL that you got from your last step. You're good? Okay, good. So once you have that, this YAML is ready. Actually, this will do it for you if you click on these things. Okay, now we're here. We want to do the kubectl apply of this file. So now we're going to wait for this resource to finish, and we're doing the same thing: we're applying some YAML, and we have to monitor it so we can get the status. In this case, the condition of type Ready is Unknown, and that's just because it takes a little longer to build an image than it does to download code from GitHub, right? If you know a little bit about kpack, then you know you can, for example, get the list of builds that are happening at any given moment. So we know that there is a build happening, and we can check the log of that build. That'll happen in the lower window.
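A sketch of the kpack Image resource in question (the registry tag, builder name, and service account are assumptions; the blob URL is where the copied Flux artifact URL goes):

```yaml
# kpack resource: build a container image from a source archive
# and publish it to the configured registry tag.
apiVersion: kpack.io/v1alpha2
kind: Image
metadata:
  name: hello-world
spec:
  tag: registry.local/app/hello-world   # registry to publish the built image to (illustrative)
  serviceAccountName: default           # assumed service account with registry credentials
  builder:
    kind: ClusterBuilder
    name: builder                       # assumed builder name
  source:
    blob:
      url: <paste the .status.artifact.url value from the previous step here>
```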
So this is just a little insight into how kpack is building images. It's grabbing the code that we gave it from this address, it's analyzing it, and it's detecting that it's a Go application, so it's going to use certain modules with logic about how to turn a Go application into an image. It's going to execute each one of those modules. So this is just a little bit of how it works, and then it's going to export the image onto the registry that we configured. By the time this process is done, which should be just a second, did this finish? What is the image format? It's an OCI image, yeah. So when that's finished, you can check the status again. This time, the condition of type Ready is True, and we can see that we need to extract the tag of the image that we want to deploy. If we look at this, it will be under .status.latestImage. And here's the address, right? In fact, if you run the next command, it'll pull that out for you really cleanly: we're getting the status in YAML and selecting .status.latestImage. So again, copy this to your clipboard, because we're going to need it in the next step. This is the tag of the image that was just created, and this is what we want to deploy, right? But again, to review: we passed it input this time, we applied some YAML, we monitored the status, and then we're extracting the value that we want. So now, if we open this, this is the definition for our third resource, which is the Knative one, as you can see here. And Knative has to deploy the image that we just built, so we can find the placeholder for that, which is new-image, and paste that value in where it says image. Now that the resource is fully configured, after we've injected the input, we can apply this YAML and then check the status. Check it again; just give it a second.
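The Knative Service definition being filled in is essentially this minimal sketch (the name is illustrative; the image is the .status.latestImage tag copied from kpack):

```yaml
# Knative Serving resource: one small file in place of a
# Deployment + Service + Ingress, with revisions and autoscaling built in.
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello-world
spec:
  template:
    spec:
      containers:
      - image: <paste the .status.latestImage value from the kpack step here>
```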
If it says Unknown... no, if it gives you a URL, you should be okay. Then you can curl it, and you should get a response. So the application is running and working. You can also open this in a browser if you want; either way, it'll work. So our application is deployed and running. And now the question is: all of those things that we did manually, how can we automate them? What's interesting to note here about the difference between choreography and orchestration is that, because Kubernetes is providing us all of the lifecycle management, and because these resources have their own reconciliation loops and are all independently long-running processes that can accept triggers and will just do their job, you don't need a tool that has any knowledge about how these processes are executed, or that needs to provide any kind of execution runtime for them, right? Now, with more traditional tools, often the CI/CD tooling itself is providing the execution environment for these activities, and it has to know how to launch these processes and manage their lifecycle, right? So just by nature of using existing Kubernetes ecosystem resources for the various steps, we're already delegating a lot of what we have to do to Kubernetes. And also, we've been able to apply, monitor, extract information from, and inject values into all of these resources just by using the Kubernetes API as an endpoint, right? So integrating disparate resources, if Kubernetes is your equalizing entry point and your equalizing execution substrate, becomes a lot easier, and that's what puts us in a position of just being able to choreograph resources, rather than having to get in and really orchestrate something more deeply, right? So let's go ahead and do that. But before we go on, just go to the cleanup section on the next step and make sure you delete those resources. And you can just ignore it.
There's this error message that keeps popping up; it's just saying that we're overloading the API server a little bit. Just ignore that. So it's kubectl delete, and you should be deleting the three resources we created: the FluxCD GitRepository resource, the kpack Image, and the Knative Service. So, all right. How am I doing on time? Should I speed it up? Okay, we're good. Okay, so let's talk about how we would do this with Cartographer. We want to just add Cartographer on top. We want Cartographer to do all of the things that we just did manually. And the other thing to keep in mind is that when we did this manually, we did it for a particular application. So not only do we want to automate, but we want to make it generic, so that many developers with different GitHub repos with source code can all onboard their applications, right? Okay, so for each one of those three resources, we're going to have to give Cartographer the ability to stamp out the instances for any application. So we need to template them, right? That makes sense, right? That concept of templating them. So, let me clear this. Okay, I'm going to run this here. Here we're listing the Cartographer API resources. For this one, I'm going to make it a bit smaller, okay. So we just ran kubectl api-resources, which is a really useful command. If you're newer to Kubernetes, you may or may not leverage it, but it's useful if you don't know it. You can filter the output, and grep makes that really easy. So these are the custom resource definitions that are added by the Cartographer product, and we're filtering to only look at the ones that are templates. You can see, that's too small, I know, but there's maybe a handful or so of different kinds of templates, and they differ in the word that appears in the middle.
So there are cluster source templates, there are cluster image templates, and there are some other kinds of templates. Just logically, you can imagine that a ClusterSourceTemplate makes sense when you're dealing with source code, and a ClusterImageTemplate makes sense when you're dealing with building images. These templates really just differ in the output that they provide. They wrap object definitions that have output fields, and each kind provides specific output fields, but they're not substantially different otherwise, or most of them are not. So we're going to use these templates to wrap the YAML that we applied manually. Let's start with the source. So click on this. Here is the scaffolding definition of a ClusterSourceTemplate from Cartographer. The only thing we've included right now is the name of the resource, and it's auto-generated: each of your sessions is going to have a different session ID, because you're each going to make your own in this case. So we have to fill in these values. Under spec, let's start with template. If you scroll down here, this will actually highlight template for you. The template is generic: it has the ability to apply any YAML to Kubernetes. So we're going to copy and paste, by clicking here, the exact YAML that we applied in the first step manually. Instead of us doing the kubectl apply, we're asking Cartographer to do the kubectl apply, and we're saying here, under template: this is the YAML I want you to apply. And this is really interesting, because it means that nobody needs to develop a special Flux plugin for Cartographer. Rather, Cartographer gives us the ability to hand it any arbitrary YAML, and it will just do the apply for us. So then, if we go down a little bit, we want to look at these other two fields we have to fill in.
We know already, from when we did this manually, that we're going to want to extract the URL value, and we know that Flux in particular puts that value under a field called .status.artifact.url. Now, if you decide not to use Flux, if there's another ecosystem project that you prefer that will also poll Git repos and pull down source code, maybe that product doesn't put the information in .status.artifact.url; maybe it puts it in .status.url, who knows. So if we highlight urlPath here, this is the path from which to get the URL information, and we populate it with .status.artifact.url. This is how we're teaching Cartographer to work with Flux: this is where Flux is going to put the output that we want it to extract. Now, with Flux, because Flux actually downloads a tar.gz of the source code for that commit, we really don't need any other value. But the ClusterSourceTemplate is a little bit more generic than that, and it also gives you the ability to populate a revision. In fact, even though we're not using it here, Flux also exposes the revision, so we'll go ahead and populate revisionPath too, yeah. Question: it's probably two different questions, but how does this tool know that urlPath is the path .status.artifact.url, rather than just a literal value? The tool doesn't know. We know, because we just looked at the resource and we did it manually, right? We're using Flux as our arbitrary YAML, and we know that when Flux downloads the code, it's going to put this in .status.artifact.url. So we're telling Cartographer: apply this YAML, then monitor that instance, and when that thing is ready, go to .status.artifact.url, pull out that value, and that's going to be the URL output of the template. How does one do that, versus literally saying my URL is the string ".status.artifact.url"?
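Putting the pieces together, the completed ClusterSourceTemplate would look roughly like this sketch (the template name and Git URL are illustrative; in the workshop, the app-specific values get parameterized in a later step):

```yaml
apiVersion: carto.run/v1alpha1
kind: ClusterSourceTemplate
metadata:
  name: source-template
spec:
  # Teach Cartographer where Flux exposes its outputs:
  urlPath: .status.artifact.url
  revisionPath: .status.artifact.revision
  # Arbitrary YAML for Cartographer to apply on our behalf:
  template:
    apiVersion: source.toolkit.fluxcd.io/v1beta1
    kind: GitRepository
    metadata:
      name: hello-world
    spec:
      interval: 1m
      url: https://github.com/my-org/hello-go
      ref:
        branch: main
```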
Oh, that's part of the template. Yeah, that's the Cartographer template; that's the way Cartographer works. And we know that, by convention, this field is called urlPath, not url, because the output value, and you'll see this when we use it later, is consumed as url, but we're telling Cartographer to populate that url output from the path in urlPath. Yeah. Okay, so we've told it where to get the information. Now the other question is: this template will work great, it will do exactly what we did manually, but only for this one application, right? github.com/ciberkleid/hello-go, which is not good enough. We want it to be generic. And that information is not something we can pre-code, right? Only the developer who wants to use this supply chain process knows what app they want built. So that information has to come from a different, developer-facing object, right? This is the information we want to pull out: the URL and the branch. So we're going to pull that into a separate file. If you click on this one, create a workload with application-specific details; let me make this bigger for the camera. Okay, so create the workload. You can see this is different: this is not a template, right? The last one was a ClusterSourceTemplate; this is a Workload, and it's also part of Cartographer. And it has, in the spec, the ability to say: this is the object that encapsulates the information a developer provides, right? So you'll also see in this tool a very clear separation of concerns between what the DevOps side of the house configures and what you're exposing to developers, and how you onboard them. What's exposed to developers is done through this Workload.
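A minimal Workload sketch with just those two developer-supplied values (the name and label here are assumptions for illustration):

```yaml
# Developer-facing Cartographer object: only app-specific details go here.
apiVersion: carto.run/v1alpha1
kind: Workload
metadata:
  name: hello-world
  labels:
    app.tanzu.vmware.com/workload-type: web   # assumed label used to match a supply chain
spec:
  source:
    git:
      url: https://github.com/my-org/hello-go
      ref:
        branch: main
```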
And as a designer of the DevOps process, you can choose to incorporate other inputs, but this is sort of the bare minimum, right? Okay, so we're giving the developer the ability to specify these two things. Now that we've pulled that out, we can go back to our ClusterSourceTemplate. And this value, instead of hardcoding it, we're going to parameterize. The way you do that: if you click on this one, it'll automatically replace it for you. You can see that the context of this value is a workload, right? Because it's that Workload API that we just looked at. And the path is workload.spec.source.git.url, and likewise the git.ref.branch, right? Just like we saw, sorry, these popups keep happening: workload.spec.source.git.url and the git ref branch. So that's exactly what we have. And we'll do the branch also; this will highlight main for you. It actually takes ref.branch with the value main, the whole thing, and replaces it with the source.git.ref reference. So now, what we have is: we've enabled Cartographer to inject values dynamically provided by a developer, apply this YAML to the cluster, monitor the resource, and extract the output values that we need. So that's all that manual work, all that stuff we did manually. And then, oh, the last field: the name, of course. We don't want them all to be called hello-world, so we're going to use the workload name as dynamic input. Okay, so now this is ready to go. Now let's do the same for our second resource. We're going to start with a basic template. This template is going to be a tiny bit different: because we're talking about images rather than source, this is a ClusterImageTemplate instead of a ClusterSourceTemplate, but it only differs in what kind of output it has. We still have the template field here, but instead of a URL and a revision, we're going to output an image, right? So let's go ahead and fill that out.
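After parameterization, the template section of the ClusterSourceTemplate would read something like this sketch, using Cartographer's `$(...)$` interpolation syntax:

```yaml
  # Same Flux YAML, but with the app-specific values replaced by
  # references into the developer's Workload:
  template:
    apiVersion: source.toolkit.fluxcd.io/v1beta1
    kind: GitRepository
    metadata:
      name: $(workload.metadata.name)$
    spec:
      interval: 1m
      url: $(workload.spec.source.git.url)$
      ref:
        branch: $(workload.spec.source.git.ref.branch)$
```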
First of all, highlight that template block. This is where we throw in any arbitrary YAML, and of course, we want to throw in that same kpack YAML. When we replace it, you'll notice one thing we packed in here: we already took the name, which was hello-world when it was hardcoded, and replaced it with the workload metadata name. Now, this new-source placeholder is of course a concern, right? It doesn't mean anything to Cartographer, but we know we're going to get that value from the ClusterSourceTemplate output. So let's go ahead and highlight that, and we'll replace it with this value. Using the same syntax as before, but where that other value came from a workload context, this one is coming from a context of sources. You'll see, once we get a couple of steps ahead of here, exactly where this comes from, but just know that we'll have the ability to tell Cartographer, for a certain sequence of events: these are the sources. In this case, it's going to be Flux. It's plural because you could have more than one; you could be using Flux and something else in a chain. So you pull out the specific one; we've just called the Flux one "source". And then you pull out the URL, which is going to have the value of whatever was at that urlPath, right? So we've parameterized that. And then the last thing is, of course, the output. When we did this manually, we saw that kpack in particular puts that value in a field called .status.latestImage. So again, we're teaching Cartographer to use kpack by telling it where to get the output. So, okay, we're two steps through. Now for the Knative one: there's no step after Knative, right?
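The resulting ClusterImageTemplate would look roughly like this sketch (the template name, registry tag, builder, and service account are illustrative assumptions):

```yaml
apiVersion: carto.run/v1alpha1
kind: ClusterImageTemplate
metadata:
  name: image-template
spec:
  imagePath: .status.latestImage     # where kpack publishes the built image tag
  template:
    apiVersion: kpack.io/v1alpha2
    kind: Image
    metadata:
      name: $(workload.metadata.name)$
    spec:
      tag: registry.local/app/$(workload.metadata.name)$
      serviceAccountName: default
      builder:
        kind: ClusterBuilder
        name: builder
      source:
        blob:
          url: $(sources.source.url)$   # output of the ClusterSourceTemplate step
```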
We don't have to pull any output out of Knative to give to someone else, so we can just use something called a ClusterTemplate. Again, we create our basic file, and this one just has spec.template, with no path for output. We could just take that template field and throw in the simple Knative configuration that we applied manually. But in the case of Knative, and this is just a detail of the integration of Cartographer and Knative, there's some configuration within Knative that should be immutable. In order to treat those fields as immutable, we're actually going to put one layer in between Cartographer and Knative, and for that, we're going to use a tool called kapp-controller. So we'll throw a little extra YAML in there. Highlight that template and throw in some YAML for this kapp-controller App. Again, we're just putting some configuration in here so that kapp will treat some resources as immutable, and then scroll down here, and then you can add the Knative part. The second part of the YAML is that same Knative configuration we were using before. Once we have that, then of course, again, we need to provide the inputs. We've already parameterized the name of the Knative application using the information from the workload, but we're going to need to change this new-image placeholder. So highlight that, and in this case, we'll replace it with a placeholder that uses this idea of images, a list of image sources, as a context. It pulls out the one with the alias we will assign to the kpack one in particular, and then the field that's coming from kpack: remember, we filled in something called imagePath, so that output field is going to be called image. It looks kind of confusing with images, image, image, but the first one refers to any list of images available.
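A rough sketch of a ClusterTemplate that wraps the Knative Service in a kapp-controller App (the names, service account, and the exact kapp configuration here are assumptions about what the workshop file contains):

```yaml
apiVersion: carto.run/v1alpha1
kind: ClusterTemplate
metadata:
  name: app-template
spec:
  template:
    # kapp-controller App as the intermediate layer between
    # Cartographer and Knative:
    apiVersion: kappctrl.k14s.io/v1alpha1
    kind: App
    metadata:
      name: $(workload.metadata.name)$
    spec:
      serviceAccountName: default
      fetch:
      - inline:
          paths:
            manifest.yml: |
              apiVersion: serving.knative.dev/v1
              kind: Service
              metadata:
                name: $(workload.metadata.name)$
              spec:
                template:
                  spec:
                    containers:
                    - image: $(images.image.image)$
      template:
      - ytt: {}
      deploy:
      - kapp: {}
```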
The second one refers to the kpack resource, and the third is the actual image field being pulled out by the ClusterImageTemplate. And you could name that alias differently; you could call it kpack if you wanted to. It's just an alias we assigned, and we weren't very creative about it. Okay, so you've got your three templates. Now imagine you're running a DevOps organization. For these simple web applications you've got three steps you can combine into a workflow, but maybe for other kinds of applications, batch applications or applications written in particular language frameworks, you have other templates that do other things. You've got a whole collection of templates in your cluster. So how will Cartographer know that this particular workflow only wants to use these three templates? And as it applies YAML, how do we tell it to do the FluxCD one first, then the kpack one, then the Knative one? We still have to give it a little more information. We're going to do that through a resource called a ClusterSupplyChain. So we'll create a new file. Again, this is a Cartographer resource, as you can see in the apiVersion, and the kind is ClusterSupplyChain. It basically has two values under the spec, and we'll look at the second one first. This is the list of resources that we want to chain together. We have three resources, and we've built three templates, so let's throw those in there. We'll add our FluxCD one, and you can see we're giving it the name source-provider. That's just the name of this particular resource within the context of this file; it's file-scoped. But in there, we have a reference to that ClusterSourceTemplate that we just created.
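That first entry in the resource list might look roughly like this (a sketch; in the lab, the template name carries your session ID):

```yaml
spec:
  resources:
    - name: source-provider                  # file-scoped alias for this step
      templateRef:
        kind: ClusterSourceTemplate
        name: source-template-<session-id>   # your session's template
```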
So each of you should have a different value here, with your session ID in there. We're telling it: first, use that template to stamp out a FluxCD resource. Next, add the image one. The order here is important; we're telling it the order. Next you do a kpack image, and you don't do the kpack image until the source is ready. We'll give it the name image-builder, again just an alias within the context of this file, and we reference the ClusterImageTemplate, the one that each of you created with your own session ID in there. And finally our third step, the ClusterTemplate, which has the Knative definition. So now it knows: first this, then that, then Knative. What's missing is how you really link up the output of one to the input of the other. This is a very simple case, and we know the output of the first is the input of the second, and the output of the second is the input of the third. But you may have cases where, say, you stick a testing step in the middle, or you're doing two different tests, a test and a scan, and you want the output of the first step to be given to two different further steps. It's not a given that the output of one step is automatically the input to the next, so we need to be very specific about that. We're going to go here to the mapping-outputs-to-inputs section and say it explicitly. Let's go to the kpack step. We're highlighting the step, and this is the step we're going to provide inputs to. We're going to introduce some lines after it and say: for this kpack step, we're adding an input that's a list of sources, meaning they're coming from ClusterSourceTemplates. And we only have one ClusterSourceTemplate.
It's called source-provider, so here it is, and we're aliasing it. That value, like I said, is scoped to this file, so in case we had different ClusterSourceTemplates, we're saying which one to use here. And here's the alias, where we say source. That's why in our kpack image we wrote sources, then source, then url: sources because this is a list of sources in case there are more, .source because that's this alias, and .url because that's the output, the urlPath value, as opposed to .revision. So that's where that comes from. Then we want to do the same thing for the next one. For this Knative resource, we want to provide an input from ClusterImageTemplates, and specifically from this ClusterImageTemplate, which is our second resource. We know we want to pull an output value called image, but since we have a list here, we give this an alias too. That's why it's images, then .image, then the output value of the template, which is .image again. We could give it a different alias here, and then it would be images, dot, my-second-resource, dot image. And in truth, because there's only one image under the list of images, the alias is somewhat arbitrary; when it's a list with only one entry, you just need the alias in the template to match the one you declare here. Okay, so that was the list of resources we mentioned. Here's the other value, called a selector. Now imagine you've defined this basic process of three steps, but you might have different kinds of applications, or development teams with different needs, and you want to define a different workflow for those teams or applications. So you might have more than one chain in your cluster, and you want to empower developers to say, I want this chain or I want that chain, for any particular application.
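Assembled, the resource list with its explicit output-to-input wiring might look roughly like this. A sketch: the names, aliases, and the selector key/value are illustrative (in the lab they carry your session ID), so check cartographer.sh for the exact schema.

```yaml
apiVersion: carto.run/v1alpha1
kind: ClusterSupplyChain
metadata:
  name: supply-chain                    # illustrative
spec:
  selector:
    apps.tanzu.vmware.com/workload-type: web   # matched against workload labels
  resources:
    - name: source-provider
      templateRef:
        kind: ClusterSourceTemplate
        name: source-template
    - name: image-builder
      templateRef:
        kind: ClusterImageTemplate
        name: image-template
      sources:
        - resource: source-provider     # take input from the Flux step...
          name: source                  # ...aliased "source", hence $(sources.source.url)$
    - name: deployer
      templateRef:
        kind: ClusterTemplate
        name: app-deploy-template
      images:
        - resource: image-builder       # take input from the kpack step...
          name: image                   # ...aliased "image", hence $(images.image.image)$
```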
So for that, if you're familiar with the concept of labels and selectors in Kubernetes, we're leveraging that same concept: we apply a selector to the chain, and we label the workload with the same value. So here we're going to append, under selector, a value saying the app tanzu vmware workload type is web-osscon-cartographer plus your session ID. And just to verify that our workload in fact has that same value, go back to your workload file and notice that it has a label with the same value there. So now, if you apply this workload, it will be identified by this supply chain and acted upon by the particular flow we just defined. We're putting all the pieces together. Any questions so far? Does that make sense? Yeah? Okay. So that's the ClusterSupplyChain, and that's it, we're done authoring. Let's try it out. Apply your source template, then your image template, then your cluster template. So we've got the three templates in there; now apply the supply chain, so we've got four objects in there. And is anything happening? Not yet, right? We've given Cartographer the ability to create these resources, monitor them, and pass outputs to inputs, but we haven't triggered it yet. We haven't deployed, we haven't onboarded any application yet. Until you actually deploy a workload, nothing's really going to happen. So let's go back to our workload here and go ahead and apply that. Now that the workload is deployed to the cluster, the supply chain, based on the label on the workload, is going to identify it, realize that it matches the selector, and start doing stuff. So how do we know what it's doing? From a developer point of view, I've just deployed a workload-type resource.
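For reference, the workload we're applying looks roughly like this. A sketch: the name, repo URL, and label value are illustrative (in the lab the label includes your session ID).

```yaml
apiVersion: carto.run/v1alpha1
kind: Workload
metadata:
  name: hello-world                              # illustrative
  labels:
    apps.tanzu.vmware.com/workload-type: web     # must match the chain's selector
spec:
  source:
    git:
      url: https://github.com/example/hello-world   # illustrative repo
      ref:
        branch: main
```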
So it makes sense that, as a developer, I want to know what's happening with my workload resource. I'll do a get workload; why not, makes sense. I can see the status here is Unknown, and it says missing value at path. Now, if I think back to when I did this manually, there was a point at which I also saw an unknown status, and that was when I was monitoring my kpack build, because the build just takes a little longer than the other steps. In fact it's saying: I'm looking for an output value and I don't have it yet. Cartographer is saying the value at the path that you gave me, .status.latestImage, is not there yet. Now, there are different ways you can get the logs from your workload. There's a CLI called tanzu apps that we just wanted to highlight: tanzu apps workload tail will tail the logs for any pod involved in the process related to this workload. Right now that will be a kpack pod, but it will give you a bird's eye view of everything. So we see the same logging that we saw before; here's the command, and we can see that it's doing the build. We just have to give kpack a minute or so to actually build this application and upload the image, and until then it's going to stay in this state. If you keep running this command, you should see the same thing. Oh, there it is, it's ready. Okay, so try getting the workload status again with kubectl get, and if you see that it's ready, you can Ctrl-C the logs here. Since it's ready, we can test it. You can see the list of services that are deployed with Knative, so it's there.
And just like before, you can either do a curl or open it in the browser. Same application, so same result. But essentially, from a developer point of view, all we did was apply this workload YAML with the bare essential information describing our application, and by applying it, we onboarded our application onto this process. I make that distinction because we didn't just deploy our application; we onboarded it onto an ongoing CI/CD process. If we do another git commit on that source code repository, then Flux within a minute is going to see it, and as soon as it downloads the source, Cartographer will give the downloaded code to kpack. kpack will build the image, and as soon as it's done building, Cartographer will give that to Knative, and Knative will update the application running on the cluster. Or more precisely, Cartographer will reapply that YAML, and whenever YAML applied to Kubernetes differs from the last version, Kubernetes will hand it to Knative and Knative will update it. That process is now an ongoing process, so really what we've done is onboard the application rather than just deploy it. I think it simplifies a lot of things that companies really struggle with, where they end up writing pretty customized processes with more traditional tools. Here, by leveraging Kubernetes and existing Kubernetes resources, and just using this approach of choreographing the steps, we get a lot done with very little effort. So I'm going to hand it over to Corby to show you how to incorporate a GitOps workflow into this process. But so far, so good? Is this interesting, useful? Yeah, okay, good. Hello there, I'm Corby, thank you very much.
Cool, so we've gone through a good starter process in terms of how to compose different pieces into a supply chain and take advantage of the fact that we're relying on Kubernetes reconciliation. Like Cora said, when a source update comes in, that causes the GitRepository resource to update its status, which triggers things through the supply chain. And we'll look in a minute at some additional benefits we get from that approach. But what we've got right now is a little limited, because everything is just running in a single namespace on a single Kubernetes cluster. That may be useful for a developer-iteration type of thing, where within a single space you want to build some code, deploy it, and run it. But it's not going to be especially useful for enterprise use cases. So what we want to do is think about what an enterprise environment looks like, where you're going to have a bunch of different Kubernetes clusters: a dedicated build environment, QA, staging, prod. How can we build a supply chain that can effectively service a bunch of different environments like that? We're going to lean heavily into GitOps as the mechanism by which we share information across different clusters, even across different clouds. So what we're going to do is take the supply chain that Cora helped us build and make a modification to it.
We're going to read from source code as before, like we did with the FluxCD controller. And I like what we did with kpack in terms of getting that automated container build; it helps with a lot of the security and consistency requirements of the container build, so we're going to leave that in place to get our container image. But this time, instead of outputting a running deployment into our local environment, we're going to output a description of the deployment to a GitOps server. We're going to write it out to Git. That allows us to take advantage of a GitOps model, which means specifically that we can create a second description in Cartographer for a kind of deployment chain instead of a build chain. It's going to take that GitOps repo we wrote out to as an input source and use it as a template for creating the running deployment on the target cluster. So this is going to help us a lot. We call this a delivery, where we've decoupled the build process in the supply chain from the deployment process on the target environment. And it's nice because we can have a single supply chain running in one environment, but lots of different deliveries: a QA delivery, a staging delivery, a prod delivery, all of which take the output from the supply chain, a process where we're confident we got a secure, consistent, automated, repeatable build that incorporates the security, governance, and compliance constraints we need. That output can be reused for a bunch of different target environments and function in a multi-cluster setup. So let's get started, but the first thing we're going to do is clean up what we did in the previous exercise, because right now we've got a bunch of different resources running on our cluster.
We've got the GitRepository, we've got the kpack Image, and we've got the Knative Service here. So we'll go ahead and execute this next command, which just deletes the workload resource. The workload resource owns all of those other resources that were created as a side effect, so as soon as we delete it, we can go to the next step, do a kubectl get again, and see that everything's gone. Deleting the workload cleaned up the GitRepository, the kpack Image, and also the Knative Service that was deployed out there, and we've got a clean environment to get started. So our task now is to modify the supply chain to support this sort of workflow. To get started, we're going to introduce a couple of new templates. Remember, Cora talked us through the ClusterSourceTemplate we used for the Flux GitRepository, the ClusterImageTemplate we used to stamp out the kpack container-build resource, and the ClusterTemplate for a generic Knative Service deployment. We're now going to introduce a new resource called a ClusterConfigTemplate, and a ClusterConfigTemplate is going to stamp out a ConfigMap. So here's the strategy. We're going to take the description of the Knative Service deployment that we use to deploy the application, but instead of stamping it onto the server and turning it into a running deployment in our local environment, we're going to record it into a ConfigMap and store it, so that we can use it later in our GitOps strategy. This template looks a little trickier: you'll see that instead of the template field we used in our previous templates, which basically just takes the raw inputs and turns them into outputs, we have this ytt field here.
So ytt is a templating language, and it allows us to create templates that are a little more dynamic and complex, so we can introduce things like loop constructs or conditionals. It uses the Starlark language for dynamic behavior, and you can declare local methods in it. Here we have a service method that includes these conditionals and dynamically creates the Knative Service template we were using before. If we go to the end, this is the ConfigMap that gets produced by our ClusterConfigTemplate, and that ConfigMap invokes this manifest method in Starlark, this guy right here, which in turn invokes the service method, which generates that Knative Service definition. So this looks a little busy at the beginning, but all we're doing is creating a ClusterConfigTemplate that takes the Knative Service definition and stores it in a ConfigMap, where we can be a little more flexible in how we use it. So I'm going to go back to my supply chain definition, the one Cora helped us build. Remember, the last step used the ClusterTemplate to spit out the Knative definition wrapped in kapp. I'm going to replace that final block with our ClusterConfigTemplate, so now instead of spitting out Knative onto our local environment, we spit out a ConfigMap that stores the Knative Service definition. But we're still not done yet, because we still need to get this into a Git repo. So there's going to be one more step in our supply chain, to write this out to Git. And what we'd love to have is a Kubernetes-native resource for that.
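A ClusterConfigTemplate along these lines might be sketched as below. This is heavily trimmed and assumes Cartographer exposes the workload and image values under data.values in ytt mode; the real template in the lab has the full service method with its conditionals, so treat every name here as illustrative.

```yaml
apiVersion: carto.run/v1alpha1
kind: ClusterConfigTemplate
metadata:
  name: config-template                 # illustrative name
spec:
  configPath: .data                     # the output is the ConfigMap's data
  ytt: |
    #@ load("@ytt:data", "data")
    #@ load("@ytt:yaml", "yaml")

    #@ def service():
    apiVersion: serving.knative.dev/v1
    kind: Service
    metadata:
      name: #@ data.values.workload.metadata.name
    spec:
      template:
        spec:
          containers:
            - image: #@ data.values.images.image.image   # assumed value shape
    #@ end

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: #@ data.values.workload.metadata.name
    data:
      manifest.yaml: #@ yaml.encode(service())
```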
Something that, like the other pieces, just automatically knows how to stamp out and write to Git. But we don't actually have one right now. So instead of a declarative Kubernetes resource, we're going to take an imperative scripting process: we want to write a quick script that can write out to Git, and we want to incorporate that into our supply chain. We're going to rely a little on our friend Tekton to help us out here, because Tekton is really good at executing these scripts in a Kubernetes-native, containerized context. So we're going to rely on one more ClusterTemplate. This ClusterTemplate, and the ClusterConfigTemplate we saw before, already pre-exist on the server; we're not going through the process of authoring them by hand, we're just going to use them in our supply chain. Our ClusterTemplate takes some input parameters, so we can specify the Git repository that we want to output our GitOps definition to, plus other optional information; it provides defaults for things like the username. And what it spits out onto the cluster is a resource called a Runnable. A Runnable is a Cartographer construct that is basically our mechanism for taking a regular, imperative bash script and executing it as part of a Cartographer supply chain. We can take a generic process that just runs a script, and attach metadata around the resource that runs it. That metadata has the stuff Cartographer expects: what is the status, what are the outputs of the script, and it can use inputs to drive the execution of the script.
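A Runnable wrapping a Tekton task might be sketched like this. Names, the run template, and the input keys are illustrative assumptions; the lab's pre-existing template defines the real shape.

```yaml
apiVersion: carto.run/v1alpha1
kind: Runnable
metadata:
  name: git-writer                 # illustrative
spec:
  runTemplateRef:
    name: tekton-taskrun           # a ClusterRunTemplate that stamps out Tekton TaskRuns
  inputs:
    git_repository: https://github.com/example/gitops-repo  # illustrative target repo
    data: $(configs.config.config)$   # the ConfigMap content from the previous step
```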
So a Runnable takes a Kubernetes resource that's responsible for executing a script and wraps it with that metadata so that Cartographer knows how to use it, and it fits cleanly into our supply chain. Let's take a look at the resource the Runnable is wrapping: a ClusterTask, which is a Tekton construct. Tekton has the concept of ClusterTasks, and we're going to use one called our git-writer. Tekton is really good at handling these types of imperative execution. Another nice thing about wrapping Tekton is that it provides a very nice library of community-supplied tasks that cover all kinds of functions, and we'll see a little later an example of taking an existing community Tekton task and using it to provide additional security in our supply chain. We can take a look at the definition of the Tekton task here, and at the bottom you'll see the actual script in the task. It's a bash script of about 18 to 20 lines, and it performs the git checkout, pull, rebase, and ultimately the commit and push to our repo. This is what we're going to use to take the contents of that ConfigMap, where we stored the Knative Service definition, run it through this script, and write it out to our GitOps repo. So back to our supply chain. Right now we have three steps: read source with Flux, build the container with kpack, write the definition of the Knative Service deployment into a ConfigMap. Now we can add the final stage, which executes the Tekton task. Take a look at the inputs: this configs resource input it's taking is the ConfigMap that was generated in the previous step, and it's taking the data in that ConfigMap.
And it's going to feed that into the osscon git-writer task, which outputs the contents of that ConfigMap into our GitOps repo. All right, does this make sense? Okay, so let's give it a try. On our server we already have that ClusterConfigTemplate defined (we didn't author it from scratch), and we also have the ClusterTemplate defining the git-writer that wraps the Tekton task, so we don't need to create anything there. We now have our supply chain that executes all of these steps, so I'm going to go ahead and apply the new supply chain. Remember what's happening here: we're reading from Flux, we're building the container image, and we're going to write out our target service definition to a GitOps repo. Just like Cora showed before, when you create the supply chain, nothing happens yet; we need a workload to drive it. We're going to reuse the workload from before, with the label that says, hey, match it up to the supply chain and start the execution. And we're going to add one more piece of metadata to the workload. Remember that our task takes a Git repository param, which the git-writer uses to know which GitOps repo it's writing to. We attach that param to our workload, so when the supply chain receives the workload, it can read the param and identify where the GitOps repo is that we're going to write our service definition to. So now I'm going to apply the workload, and that kicks off the supply chain. In a little bit we'll see the source code picked up, and it's running through the container build process, like we've seen a couple of times at this point.
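With the GitOps param added, the workload might look roughly like this. A sketch: the param name is whatever the git-writer template expects, and the names, label, and URLs are illustrative.

```yaml
apiVersion: carto.run/v1alpha1
kind: Workload
metadata:
  name: hello-world                              # illustrative
  labels:
    apps.tanzu.vmware.com/workload-type: web     # matches the supply chain's selector
spec:
  params:
    - name: gitops_repository                    # read by the git-writer step
      value: https://github.com/example/gitops-repo
  source:
    git:
      url: https://github.com/example/hello-world
      ref:
        branch: main
```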
But this time, after it gets to the end of the container build, we'll also see the Tekton task execute, which writes the definition of the workload into the GitOps repo. While we're waiting, you can check the link for your GitOps repo. These are session-specific; this is my own GitOps repo here, but this is the target where it's going to end up. In Cartographer terms we call this the delivery repo: it receives the output of the supply chain. All right, so now we're in the build process, running through the buildpacks. The buildpacks are a nice component of your software supply chain, because they ensure consistency across what goes into your images. They don't encourage your developers to be adventurous in authoring Dockerfiles, injecting their own opinions about what the container base images should be, which version of the language runtime is used, or which dependencies go in the image. You can standardize this across all of your application builds and be confident that you're getting a consistent security profile. And in fact the build is already done; I just hadn't scrolled down in the window. So you can see here, this is the Tekton task that executed, and it ultimately committed and pushed to the GitOps repo over here. So I can check my GitOps repo again; I'll do a refresh. And we can see that one minute ago this config was written and committed, an update from Cartographer. So what did Cartographer write out? It wrote out this manifest.yaml. And what does the manifest.yaml have? It's the content of the ConfigMap, which is the Knative Service description. So we've written out the Knative Service definition.
And instead of writing it to our local cluster, we stored it in a central repo where it can be stamped out into any target environment we want. So I can check my workload status, and I can see the workload has completed; the status is Ready. I can look for my Knative Service in my local environment, and it's not there, because we didn't create a Knative Service deployment; all we did was write out a description of it to the GitOps repo. So how do we actually deploy this onto the target cluster? We're going to use the concept of a deliverable. The delivery is kind of a companion to the supply chain. The supply chain is what we use to execute the steps that create our target image, and that's usually going to run on a centralized build environment, where your DevSecOps people can control what's going on. The delivery, on the other hand, is executed on your target environments, what we sometimes call run clusters, where we actually run the applications; these can be your QA, staging, and prod environments. So let's take a look at the definition of the delivery, the companion to the supply chain. The delivery is pretty simple; it just has two steps: a ClusterSourceTemplate and a ClusterDeploymentTemplate. It reads from the GitOps repo to get the definition of the deployment, and then it translates that into a running deployment on the target run environment. And we can check real quick: here's the ClusterSourceTemplate we'll be using, which reads from the GitOps repo, and here's the ClusterDeploymentTemplate, which does what we saw Cora do before. It takes the Knative Service definition that comes out of the GitOps repo, wraps it in kapp to handle the immutability of certain fields, and creates a running deployment out of it.
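The two-step delivery might be sketched like this. Names are illustrative and the exact field names should be checked against the Cartographer docs on cartographer.sh.

```yaml
apiVersion: carto.run/v1alpha1
kind: ClusterDelivery
metadata:
  name: delivery                        # illustrative
spec:
  selector:
    apps.tanzu.vmware.com/deliverable-type: web   # matched against deliverable labels
  resources:
    - name: source-provider
      templateRef:
        kind: ClusterSourceTemplate
        name: delivery-source-template  # reads the GitOps repo
    - name: deployer
      templateRef:
        kind: ClusterDeploymentTemplate
        name: app-deploy-template       # stamps out the kapp-wrapped Knative Service
      deployment:
        resource: source-provider       # input: the source fetched from the GitOps repo
```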
So for our supply chain, we created a resource called a workload to control its execution. For the delivery, we create a resource called a deliverable, which is an instance of the delivery. Just like with the workload type we saw before, it uses selectors to identify which delivery will be executed; we're using the delivery-type osscon label here, so that's going to match our delivery. And then we simply need to provide a parameter, which is the pointer to the GitOps repo that we're consuming. So I'm going to put in the address of the GitOps repo where we just wrote out our deployment definition, and that's everything: we match our delivery through the selector, and we've specified the GitOps repo we want to run from on the target cluster. Normally, at this point, we would have a separate cluster environment where we want to run the application. Because this lab environment is a little more constrained, we're going to create the deliverable on the same cluster where we just ran the workload, but that's not really the real-world use case. So I'll go ahead and apply the deliverable. Remember, the delivery is going to stamp out two resources: the source resource from the ClusterSourceTemplate pointing at the GitOps repo, and then the kapp-wrapped Knative Service from the deployment template. I can take a look here: this is the deployment piece it created, reading source from the GitOps repo. And if I look at my Knative service list, now I can see it actually has deployed onto our local cluster. So this is the delivery, taking the application definition from the GitOps repo and stamping it out. And there's no real magic in what we're doing here.
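The deliverable itself is small. A sketch: the label and repo URL are illustrative (in the lab they're session-specific).

```yaml
apiVersion: carto.run/v1alpha1
kind: Deliverable
metadata:
  name: hello-world                              # illustrative
  labels:
    apps.tanzu.vmware.com/deliverable-type: web  # matches the delivery's selector
spec:
  source:
    git:
      url: https://github.com/example/gitops-repo  # where the manifest was written
      ref:
        branch: main
```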
If you've worked with Kubernetes, you're probably familiar with other tools that know how to read resources from a GitOps repo and deploy them; you could be using Argo CD or Flux or whatever. If you prefer those tools to accomplish the same task, that's perfectly fine; this is just Cartographer's support for that workflow. But ultimately the application is now running. If I take a look, we can see once again it's deployed and running in that environment, and the status of our deliverable is Ready. So we have this model where we can centralize and get consistency across our builds and supply chains, and then create deliverables to target our deployment environments. Those are the basic steps. We have a set of resources in the workshop that you can use to learn more about Cartographer. We don't have time for a hands-on deep dive of all the enterprise use cases, but I do want to touch on where you go from here in terms of creating more of an enterprise deployment using the Cartographer tool set. One thing that's really nice about the choreography approach is that the supply chain is very intelligent about identifying when the conditions and gates in your supply chain have changed and necessitate a new reconciliation. What I'm showing here is a commercial visualization tool; that's not important, it's just giving us a view of what's underlying in Cartographer. This is a simple supply chain, very similar to what we just created. You'll notice the image-builder step there is using the kpack tool to do the container build, and if I click on that step, I can see at the bottom the build history in the supply chain. What's interesting about this, if you look at the very bottom, is that the first build, which occurred 20 days ago, was triggered by the source code commit by the application developer.
So that's basically the developer: the workload was created, it did the initial build from source, and that triggered a container build through the supply chain. All the builds that occurred since then occurred because of external circumstances changing. What happens is that a DevSecOps operator can supply new information to kpack. They can provide what kpack calls stacks, which are the base OS images used consistently across container builds. And they can also provide new buildpacks, which are the container image definitions used to create the language runtimes. So for builds two through six, what happened was that new patches came in. Basically this is going to be things like CVE fixes on the OS images or security fixes on the language runtime. When these fixes came in, that triggered a reconciliation at the image builder step of the supply chain. That caused it to kick off a new container build, which in turn caused it to re-execute the steps from that point forward in the supply chain. But nothing prior to that had to execute. Developers did not have to trigger a new build; nobody had to check in new source. The supply chain was able to simply recognize: where do these status fields no longer match reality? Where do I need to reconcile, and then execute from that point forward in the supply chain? That becomes very powerful. In practice, what this means is that if an important security hole comes in, a Heartbleed or something like that, then the DevSecOps team can simply say, hey, here's a new declared definition of what a secure container build looks like, and automatically regenerate a build of every single workload in the enterprise that has a dependency on that, without going back to any individual developers and without triggering new executions from the beginning of the supply chain. So one question that's going to come up a lot is: hey, what if I've already got Azure DevOps? 
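As a sketch of why those rebuilds happen automatically: a kpack Image resource references a builder, and when the operator publishes a patched stack or buildpack to that builder, kpack rebuilds every Image that depends on it. The names and registry here are illustrative.

```yaml
apiVersion: kpack.io/v1alpha2
kind: Image
metadata:
  name: hello-world
spec:
  tag: registry.example.com/apps/hello-world   # where built images are pushed
  serviceAccountName: kpack-sa                 # holds registry and git credentials
  builder:
    kind: ClusterBuilder
    name: default-builder   # rolls up a stack (base OS) plus buildpacks
  source:
    git:
      url: https://github.com/example-org/hello-world
      revision: main
```

When the ClusterBuilder picks up a patched stack, every Image referencing it gets a new build, and Cartographer sees the new image in the status and continues the supply chain from that step.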
What if I already have GitLab Runners? Where does Cartographer provide a value add in the process? Cartographer supply chains work best as a bridge between your continuous integration and your continuous delivery. So if you've already got an Azure DevOps pipeline that is reading from source, doing compilation, doing automated unit testing on the code, doing Git tagging and release management, continue to use that as is. The goal of Cartographer is not to try to replace things that are already working well for you, but it can strategically add value to the supply chain along the way. For example, if the container build process can be improved with the type of automation we saw before, that's a good piece to add in there. I will look at some of the other capabilities you can add to the supply chain in a minute. But the idea here is that in the real world, your developer is usually going to end up committing to Git. That is going to kick off your Jenkins, your Azure DevOps, your GitLab Runners, whatever it is that executes your CI tasks. And then Cartographer, instead of reading the source code like Cora and I showed before, can consume the artifacts that are produced by your CI pipelines. For example, that could be an S3 bucket where the CI pipeline is outputting the compiled results. What's showing on the right there is a community contribution someone created where they said, hey, we're going to use a Maven repository as an input to the supply chain. So we have a CI tool that already does the compilation of the Java code, creates a JAR file, and publishes it to a Maven repository, or a NuGet repository, or whatever your artifact store is. That makes a great entry point into a supply chain where you can then go ahead and impose additional governance on your path to production. 
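One hedged sketch of that entry point: rather than a Git source, the workload can carry a parameter pointing at the published artifact, which a custom source template in the supply chain then resolves. The param name and its shape here are hypothetical, loosely modeled on the community Maven contribution.

```yaml
apiVersion: carto.run/v1alpha1
kind: Workload
metadata:
  name: hello-world
  labels:
    # Selects a supply chain that starts from CI artifacts, not source
    apps.tanzu.vmware.com/workload-type: maven
spec:
  params:
    # Hypothetical param consumed by a custom ClusterSourceTemplate
    # that fetches the JAR from the artifact repository
    - name: maven
      value:
        groupId: com.example
        artifactId: hello-world
        version: 1.0.0
```

The rest of the supply chain is unchanged: scanning, image build, and deployment templates consume the resolved artifact just as they would consume a Git checkout.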
This here shows an example of what a more elaborate supply chain looks like. We can start adding a bunch of different pieces. Common constructs that people may want to add into a supply chain include things like source scanning or image scanning, and the open approach of Cartographer makes it easy to incorporate third-party tooling. Examples of community contributions we have include incorporating Grype to enforce container image scanning in the supply chain. There's another team that wanted to use SonarQube, and there is an out-of-the-box Tekton task that implements SonarQube scanning. So within a couple of hours they were able to just take that pre-existing, community-contributed SonarQube task and insert it into the supply chain. They wrapped it with a Runnable, like we just saw before with that git-writer cluster task. And this opens up a whole library of third-party tooling that you can incorporate into the supply chain; you can pick and choose what's right for your enterprise. Another community effort was around using Snyk to introduce a source code scanning step into the supply chain. So the key here is the composability, the flexibility of being able to reuse rather than reinvent capabilities to get what you want into the supply chain. Another cool contribution that came in recently was an improvement on that GitOps flow model. In practice, you probably don't want complete automation all the way down the path to production. A lot of enterprises are going to say, hey, we want to introduce some level of human oversight or process into that. So instead of doing a Git commit or push to the repo, we're going to issue a pull request. And that will allow you to have the appropriate amount of intervention. 
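The Runnable wrapping pattern mentioned above looks roughly like this: a ClusterRunTemplate stamps out Tekton TaskRuns, and a Runnable drives it whenever its inputs change. The task and input names here are illustrative, assuming a pre-existing community Tekton Task called sonarqube-scan.

```yaml
apiVersion: carto.run/v1alpha1
kind: Runnable
metadata:
  name: source-scan
spec:
  runTemplateRef:
    name: tekton-taskrun
  inputs:
    # A new input value causes a fresh TaskRun to be stamped out
    source-url: https://github.com/example-org/hello-world
---
apiVersion: carto.run/v1alpha1
kind: ClusterRunTemplate
metadata:
  name: tekton-taskrun
spec:
  template:
    apiVersion: tekton.dev/v1beta1
    kind: TaskRun
    metadata:
      generateName: source-scan-
    spec:
      taskRef:
        name: sonarqube-scan   # the community-contributed Tekton Task
      params:
        - name: source-url
          value: $(runnable.spec.inputs.source-url)$
```

This is the boilerplate referred to later in the Q&A: it works, but each wrapped task needs its own Runnable plus run template today.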
So say, before we go from QA or staging to prod, let's have the correct authorization and review, and have a human approve and promote that deployment into the production environment. And one last capability I think is really cool is using supply chains to author what we call conventions. Conventions are basically decorations that go around the Kubernetes resources before they go into the deployment. This is a really nice capability because it completely decouples DevSecOps opinions about what needs to be in the Kubernetes resources from the developers who are pushing the code out to production. An example of this would be: a lot of times enterprises will say, hey, every single container deployment needs to have a specific antivirus sidecar injected into the deployment definition. You can offer that as a convention. The developers aren't responsible for it, but that third-party antivirus scanner or security tool or whatever can automatically be appended to the deployment description before it goes into the GitOps repo, and you'll see it show up in there as that sidecar. Another example: I talked to a customer who said, we need to be able to do chargeback/showback on all of our deployments. So we want organizational codes attached as labels to every deployment that comes out, to identify who's consuming the resources and be able to do chargeback for the consumption. That was super easy to author as a convention; DevSecOps can say, hey, you don't get through to deployment unless these are applied. And you do have that gating capability. So for example, with the image scanners, you can say, hey, if vulnerability thresholds of XYZ are exceeded, then you're going to output a non-successful exit status and the supply chain won't be able to progress. So you're able to block progression of the supply chain when certain conditions aren't met. 
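As a sketch of that gating, an image-scan step can simply exit non-zero above a severity threshold, which fails the TaskRun and blocks the supply chain from progressing. Grype's --fail-on flag does exactly this; the task shape and names here are illustrative.

```yaml
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: image-scan-gate
spec:
  params:
    - name: image
      type: string
  steps:
    - name: grype-scan
      image: anchore/grype:latest
      # Exits non-zero if any vulnerability at or above "high" severity
      # is found, failing the TaskRun and halting the supply chain here.
      args: ["$(params.image)", "--fail-on", "high"]
```

Wrapped in a Runnable like the scanning example earlier, a failed run means no output is produced, so downstream steps never reconcile against the bad image.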
So the idea here is to inject a lot of creativity and flexibility and allow you to create the supply chain that works for you. We've got just a few minutes to the end, so I'm going to go ahead and open it up for Q&A if there's anything people want to cover. Yeah. Oh, I apologize, yes. So the question was whether kpack supports cosign. The answer is no, it does not, but Cartographer does. We do have image signing as one of the steps you can put into the supply chain. The signature produced there is cosign-compliant, so you can say, hey, you need to verify the signature, verify that this was the image our build generated, and keep unverified images from progressing through the supply chain. That's an alternative approach. But with Cartographer, I'm sorry, the question was, hey, can you use registry support like Harbor to do the signature verification on the image? And yes, you absolutely can, but you don't need to do that with Cartographer. You can insert cosign into the supply chain and do the signature verification internal to the supply chain instead of relying on the registry capabilities. Yeah. The next question was about the version number of Cartographer, whether to be scared by the zero. So, I think it largely depends on what scares you today. The thing that I find most scary about the current release is just that there's a lot of extra work, like taking that Tekton cluster task and wrapping it in a Runnable. There was a lot of boilerplate YAML we had to do to set that up. I think we can all visualize how that could be simplified, how we could have a simpler definition. So I would expect to see simplifications in terms of that specification, making it much easier, with just a few lines, to pull in third-party scripted tooling, turn it into running executions, and go from there. 
So there, I would imagine, is some risk between point four and one point oh that you're going to see some level of format change in the spec. In terms of the core underlying functionality, I've been very happy with Cartographer. So if you're not intimidated by what we saw today, then I would be very comfortable using Cartographer to actually author and implement supply chains. Yeah, I guess that's all we wanted to cover. We really appreciate that you stuck around and went through all the exercises. We hope you got a lot out of this. I think one of the nice things that we've seen in practice, from an organizational perspective, is that a lot of customers struggle with who owns the supply chain, with being able to satisfy enterprise requirements and also team needs. When they're using tooling that can't really address both of those cleanly, what ends up happening is that development teams will often own their own CI/CD, running their own machinery and everything. That puts a burden on developers in terms of extra work, but it also creates a challenge for operators, because it's harder to ensure you have governance across what end up being snowflake pipelines across the organization. So in this exercise, as you noticed, in the first set of exercises you each created your own templates, but for the second set of exercises we all shared the same set of templates in the cluster. This was structured as an exercise so that you could each go through it and learn from it. 
But in reality you'd be reusing these templates across the organization, and you'd be able to more easily create different combinations of them that address developer needs while still ensuring that the proper governance controls are in place across everything, with an easier way to monitor all of them. And because it's all using Kubernetes for lifecycle management, for monitoring, for troubleshooting, all of your Kubernetes skill sets are leveraged to track the progress across all these different tools. So it's really powerful as a way to take anything in the ecosystem, pull it together and integrate it without a lot of extra work, and reuse those same Kubernetes skills to manage it all. And I guess the other thing is, yeah, as Corby was saying, for any step for which there is no Kubernetes ecosystem project, Tekton is your friend. If you don't have a Flux or a kpack or a Knative and you want to do something that you're forced to script, then you have something like Knative or Tekton to help you run that in the cluster anyway. So yeah, that's all we have for you. I hope this was useful, and thanks for joining.