All right. Thank you for joining us, everyone. Welcome to today's CNCF live webinar, From Pipelines to Supply Chains: Level Up with Supply Chain Choreography. I'm Libby Schultz and I'll be moderating today's webinar. I'm going to read our Code of Conduct and then I'll hand over to David Espejo, Cartographer Community Manager at VMware, and Cora Iberkleid, Developer Advocate at VMware. A few housekeeping items before we get started. During the webinar, you are not able to talk as an attendee, but there is a Q&A box on the right-hand side of your screen. Please feel free to put your questions there and we will get to as many as we can at the end. This is an official webinar of the CNCF and as such is subject to the CNCF Code of Conduct. Please do not add anything to the chat or questions that would be in violation of that Code of Conduct, and please be respectful of all of your fellow participants and presenters. Please also note that the recording and slides will be posted later today to the CNCF online programs page at community.cncf.io under online programs. They will also be available via your registration link, and the recording will be available on our online programs YouTube playlist. With that, I'll hand it over to David and Cora to kick off today's presentation. All right, thank you Libby, and thanks to the CNCF for giving us the chance to be here. Welcome everyone. In this session, alongside Cora, developer advocate at VMware, we will discuss some of the current challenges we see in the whole problem of delivering software, and we will introduce a different way to do it using the choreography pattern. Finally, we'll cover the Cartographer open source project, which is basically an implementation of this pattern. Cora will give us a fantastic demo, as usual, and we will have a wrap-up and a section for questions and answers. All right, so let's start with why. The motivation for all of this comes from different places.
First of all, according to the authors of the seminal book on the field of continuous delivery, the most important problem we face as software professionals is this: how to get a new idea, a new application version, a new commit into the hands of users as quickly as possible. That's it. That book was published about 11 years ago, so fast forward to today. We have the definition of the term cloud native provided by Joe Beda, Heptio co-founder, and this definition has an implicit recognition of the same kind of problems. Cloud native is basically a set of tools, processes, and culture to manage complexity and to tackle the problem of lack of velocity, lack of speed in delivering software. So the core problem remains the same. So far, CI/CD pipelines have been the de facto standard to address this problem. In this pattern, we define a set of steps that code needs to complete before going into production. And we have several features here. First, each one of these steps is most likely executed by a different tool, right? And if you consider the abundant number of tools present in the cloud native space for each one of these steps, we have a lot of options. So these individual tools need to be somehow connected or wired together using this sequential pattern, where each step is completed sequentially in a linear manner. And in order to manage the flow of information between the steps, we rely on a central entity that follows an orchestrator pattern, something like the client-server pattern. And this orchestrator not only deals with what should be done to take code to production, I mean the steps in the supply chain, but also how it should be done in terms of all the tooling integration, the external integrations that are necessary for this to happen. So for example, the tool that you use to watch a GitHub repo is different from the tool that you use to scan source code against a set of security baselines, right?
So we end up with this high level of interdependency between different tools and a very rigid logic codified right there in the orchestrator. In summary, this is tight coupling, right? And that is an enemy of change in distributed systems. Tight coupling is also the cause of some other problems here, because it makes it difficult to swap or add tools. For example, let's say that for a different workload you want a different tool to build your image. If you want to change the tool for one of the steps, you will need to change the logic that you already created in the orchestrator. So it's not easy. It's very fragile and hard to maintain. And in general, this rigid workflow also means that, for example, what if not all the changes to your app come from a commit in a GitHub repo? In this example, the first step is to watch a repo for a new commit. But what if, for example, in this scan-source-code step, a new vulnerability is discovered? In order to trigger the pipeline, you would have to submit an artificial commit for it to run. So it's a very rigid workflow, very linear. And if some step fails, the remaining steps won't be invoked at all. Also, any delay in the execution of one step, or any delay in the response time from the orchestrator, will delay the whole process. The challenge becomes even harder when you start scaling. So this is the supply chain or pipeline definition for a specific app that runs, for example, on the Rails framework, and it's maintained by developer team number one. But what if you have another team with a completely different workload? If, for example, they plan to use a different tool to build the image, you will have to codify that different logic in the orchestrator. And so on. If you have different teams, for example, this team has a completely different pipeline design that doesn't start by watching a repo. They want to start at a different point.
It's a different logic that you will also have to codify there in the orchestrator. So at scale, we find even deeper problems. For example, this could mean an inconsistent path to production. If we look here, each one of these pipelines uses different tools, and these tools have different inputs and different outputs. Once you finally have pipeline number one working, it isn't necessarily useful for team number two or team number three with different workloads. So you will end up having CI/CD pipeline sprawl: for as many teams and as many workloads as you may have, you will have completely different paths to production. That's the first thing. The second thing is that there's no clear separation of concerns between operations and development teams. In this universe of configuration, what happens if something fails? Well, most likely developers will end up doing DevOps stuff instead of actually writing code. And it creates a high operational burden in general, because it's hard to maintain. What happens, for example, if the organization now wants to standardize on a different tool for the security scan? You will need to change the logic for all the pipelines in the environment. It becomes a nightmare, right? So these are the challenges so far. And what we are proposing here is the adoption of a pattern that is natural to event-driven architectures, called choreography. Choreography is different from orchestration. Imagine an orchestra: each musician knows how to play the actual musical piece, but they still rely on a central conductor that manages the flow of information for the whole musical work to happen. Choreography is different, because once the dancers are on stage and the music starts to play, each dancer knows what to do. Even if one of the dancers fails, the remaining members of the team know what to do. So here we use the term resource for a step, a component, in your supply chain.
We call it a resource because it could be several things. It could be a GitOps agent watching a repo. It could be a service building your image. It could be a ConfigMap in your Kubernetes cluster, whatever you need to take code to production. We call it a resource. And we know two things about this resource. First, it has a single input of some type, with some value x, and it will produce a single output of some type. That's it. It's a black box. We don't know how it works, and we don't care. At this point, we don't care. So if we use a different input value, well, it will produce a different output. Pretty simple. So we can use the choreography pattern for two things here in the context of supply chains. First, we can use it for self-mutating resources. Imagine this resource is a GitOps agent watching a repo. The input will be the URL for the repo and the branch. That's it. So imagine that the URL has not changed, the branch has not changed, but there's a new commit. This new commit will be auto-detected by the controller, and it will produce a new output, even when the inputs have not changed. It's the same case if, for example, you're using kpack or some other service to build your image: if the container definition has not changed but there's a new revision to the base OS image layer, it's not a new input, but it will produce a new output, and it will automatically update the downstream resources accordingly. That eliminates the need for developers to codify that kind of logic or submit artificial commits to trigger a pipeline. It will happen automatically here. So now that we are using the image build example, what happens if for a different workload you want to use a different tool? I mean, instead of kpack, for example, Kaniko. Well, as long as the new tool produces the same type of output, which is very likely, since it will produce a path to your image.
Whether it's kpack or Kaniko, you can swap out or change the tools without affecting the logic of the whole supply chain, without changing anything else. Very different compared to the pipeline orchestration pattern, right? So that's resource A. What happens with the next step in the supply chain? Well, the only thing this next step will do is subscribe to a specific type of output, or watch for a type of output. Once this output is produced, it will know what to do. But we have a missing layer here. We need an intermediate layer that will actually translate desired state. Everything above this layer is desired state, or declarative. It's declarative because we define: these are the components of my software supply chain, these are the inputs, the outputs, and that's it. But we need a layer that translates or reconciles desired state into actual state in the underlying platform. Who could be that layer? Who could be that choreographer that is common to all the components here? Well, introducing Cartographer, a supply chain choreographer for Kubernetes. It's a project that recently joined the CNCF landscape, like a week ago, so we're brand new there. It's an open source project initiated by VMware. And it has several differences here. The first thing is that it removes completely the dependency on a central entity. We have the same steps, the same resources for your supply chain, but the first thing Cartographer does is wrap them in a common abstraction. This abstraction is called a template in the Cartographer jargon. But conceptually, it means that we now remove all these inputs and outputs of each one of the elements in the supply chain that produce this complexity that is hard to maintain. We now have a common abstraction, pretty simple to deal with. Science has already demonstrated that the only thing you can do with complexity is hide it, put an abstraction layer on top of it.
You cannot remove it. So what we are doing here in the project is just that: putting a common abstraction layer on top of all these steps in the supply chain. Then we glue them together, or wire them together, using the choreography pattern that we just saw. And then we wrap the whole thing in a bigger, higher abstraction called a blueprint, in this case a cluster supply chain blueprint. It's pretty simple. I mean, the project only implements two types of blueprints, and that's it. What's the implication? What's the meaning of this? Well, in terms of layers, we have here the cluster supply chain abstraction, we have the underlying platform, and we have the controller or choreographer in the middle. So DevOps and SecOps, in general the operations teams, will own and apply the cluster supply chain definition. They control two things. First, they define the steps for the supply chains, the steps that code needs to complete before going into production. And they also define the level of flexibility they will enable for developers. So for example, operations teams can say: developer team, you can choose whatever tool you want for building your image, I don't care. But you cannot change the tool we use to scan source code for security. That's the standard; you cannot change that, but you can change other steps. And this is completely under the control of the operations teams. From the developer point of view, the only thing they deal with is the workload definition. It's a single YAML where they define the needs of their workload, and they submit it using kubectl to the Cartographer controller. The controller will find a supply chain definition that matches the workload definition, and it will translate that into resources that need to be created or updated in the underlying platform. That's it. That's the whole idea.
So, the benefits here, I hope they're clear. First, there's a clear separation of concerns. Team members will spend their time and effort in their respective areas of expertise. They know what to do in their respective field. It also implies a lot of flexibility, because remember, we have here a system that reacts even to low-level changes. So for example, there's no new commit, right? But a new vulnerability is discovered by the scan-source-code process. Remember, it will produce a new output and it will update the downstream resources accordingly. So you now have the ability to build supply chains that are much more flexible than just a sequential, linear, step-by-step process. It's also much more modular, because as we saw, we have these very granular controls. As long as the output type is the same, you can interchange, swap out, or add tools for different workloads, very simply, without affecting the consistency of the whole supply chain definition. So that helps with scaling up and scaling out the problem of delivering software. It's also consistent, in that not only are there consistent interfaces between steps, but in general, the cluster supply chain definition that operations teams define and apply can be reused for different workloads and different environments. And it will produce the same type of outputs; it will give them peace of mind that source code is completing the necessary steps before going to production, right? So now for the specifics: how it works. We have here the steps. Again, the steps or resources watch each other for a specific output. As we mentioned, the first abstraction is called the template, and we have five kinds of templates in the project, for different components or steps in the supply chain. And with different combinations of the templates, we produce what is called a blueprint.
This is the higher abstraction in Cartographer: the cluster supply chain blueprint, or the cluster delivery blueprint, which continuously deploys and validates the configuration in the Kubernetes environment. What if you already have an investment in CI/CD tooling? What can we do here? Well, Cartographer ships with a CRD called Runnable that is used as a gateway, let's say, to integrate with existing task runners like Tekton, Jenkins, CircleCI, etc. You can still use them for the specific steps that you require. All right. So the theory of operation, in summary, is that once a developer submits a workload that matches a specific blueprint, Cartographer will reconcile that into the actual resources in the underlying platform. That's the summary here. If you have any questions, please put them in the Q&A; we'll be really glad to read them there. And without further ado, I will pass it to my colleague Cora for a nice demo. Okay, thanks, David. That was great. Cool. All right. So let me share my screen. And okay, I'm going to move this screen over here. So my demo was working perfectly, and then I started to detect some problems. So if the demo does not work, then I will switch; I do have a recording, so I have a little bit of a backup plan. So bear with me. Hopefully, everybody cross your fingers. We're going to get through it somehow. But basically, the plan for the demo is to show you some introductory concepts, just to reinforce the things that David was saying, show you how everything is wired together, and then go through an example of the GitOps workflow, because there are different kinds of workflows that you could compose with Cartographer. So I'm going to try to show you one from code to publishing your image into a registry and publishing your configuration YAML into a Git repository, and then taking that and having a second blueprint or workflow that will deploy that to a cluster.
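To make the Runnable idea a bit more concrete, here is a rough sketch of what such a gateway object could look like, based on Cartographer's Runnable CRD. The run template name, repository URL, and input keys are hypothetical, and the exact fields may differ between Cartographer versions:

```yaml
apiVersion: carto.run/v1alpha1
kind: Runnable
metadata:
  name: test-runner
spec:
  # points at a ClusterRunTemplate that stamps out, e.g., Tekton TaskRuns
  runTemplateRef:
    name: tekton-taskrun-template
  inputs:
    # free-form inputs passed through to the run template
    repository: https://github.com/example/hello-world
    branch: main
```

Each time the inputs change, a new run (for example a new Tekton TaskRun) would be stamped out, which is how an immutable task runner fits into the otherwise declarative flow.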
So let's get started. See how far we get with a live demo. Okay, so the basic workflow that we're talking about is: source code, publish an image, and then end up with a running application. And the implementation that we're going to use is: Flux to pull the Git repository, kpack to build and publish the image, and then we'll deploy the application using Knative. So one thing to gather from here is that Cartographer is not trying to do all of the things, right? It's not trying to replace these tools from the ecosystem that do their job very well. Rather, it's just trying to help us use all of these tools together, integrate them in a way that makes sense and that's easy to work with. So first of all, just to reinforce the concepts, I wanted to talk about what it would mean for you if you were doing this manually. Let me make this a little bit smaller so more fits on the screen. If you were doing this manually, you'd have to write at least one YAML for each of these things: one Flux YAML, one kpack YAML, one Knative Service YAML. So that could look like something like this, right? So there's no Cartographer here, right? This is just a Flux definition of an object called GitRepository. And what we're asking Flux to do is, every minute, go to our source code repository, main branch, and just check if there is a new commit. That's all that Flux is going to do. It's not going to talk to any other component in your cluster. And then we know that ideally we would like, whenever GitRepository finds a new commit, to take that code, somehow inject it into the definition of our kpack image, and then have kpack build an image using this builder that we are providing as part of the kpack configuration, and then publish that to our registry.
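A minimal Flux GitRepository along the lines of what's on screen might look like this; the object name and repo URL are placeholders:

```yaml
apiVersion: source.toolkit.fluxcd.io/v1beta1
kind: GitRepository
metadata:
  name: hello-world
spec:
  # check the repository every minute for new commits
  interval: 1m
  url: https://github.com/example/hello-world
  ref:
    branch: main
```

When a new commit lands, the Flux source-controller downloads a tarball of the code and records its location in the object's status, which is what the next step will consume.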
So whenever K-Pack is done with that, then we would like to know that there is a new image available, and we would like to know what the full tag of it is with the precise Shah, so that we can inject that value into our K-Native definition. So here's just plain K-Native, so that then we can apply this to our cluster and effectively have our application running in our cluster. So all of those things that I've called out that have to be done manually are opportunities for automation, of course, right? So let's talk about, okay, so let's go ahead and apply that getRepository. So we have Flux doing this thing, right? So now, so that's one thing, right? Applying that YAML in and of itself, that submission of that configuration to the cluster, that's one thing to automate. The second one is to just monitor the status. So we can see here that the status of this getRepository object has a couple of things. It has some conditions. So we know programmatically we could tell because the type is ready and the status is true, we could tell that this one is ready. And then if we look at here, this URL field has actually the tarGZ, Flux has actually gone and downloaded this and storing it in the cluster of that last commit of code. So we could just pull out this value, we could give it to K-PAC, right? And the way we pull it out is .status.artifact.url. So if we wanted to pull that value out, we could do something like this, for example, I'm just going to copy it to the clipboard. So then we would want cartographer to actually go to our K-PAC YAML and actually do this edit for us, paste that, and then we would want to apply this YAML to the cluster and have K-PAC build an image for us. So at this point, this is where I'm not sure that my cluster is actually working out for me. The build does take a couple of minutes, so my expectation is that immediately it's not ready yet. So this is okay. 
If you know a little bit about kpack... so either we could just kind of wait a couple of minutes and keep checking until the build is ready, but kpack has this handy CLI called kp that allows us to check the logs of an ongoing build. So hopefully... okay, so maybe it will work. We just have to give it a minute. If you're curious, the way kpack works is that it uses a combination of buildpacks to build an image. So it's analyzed our code, it's decided that it has to use four different buildpacks to build an image for us, and it's going to execute each one of those buildpacks in order. We'll just give it a second to finish. By the time it's done, kpack will have published an image to the registry. So I guess while that's happening, I'm going to check the chat and see if there are any questions. I don't see any questions, but I do see we have people joining us from Bangalore and maybe other locations in India, David here in Colombia, somebody in Argentina, AJ in the Bay Area, Jonas from Boston. Welcome. Very cool. The build is moving along. This is the slowest part of the demo; the rest will move along more quickly. But let me know if we have any questions about Cartographer; we can field them while we wait. Someone asks: does it optimize build execution? That would be a kpack question, because Cartographer is not going to step into that. kpack is slow on my machine because I'm doing this demo locally. So if you wanted to optimize this build time, there are definitely things you can do for kpack to make the build go faster that I obviously have not done. But Cartographer doesn't know anything about kpack. That's part of the beauty of choreography, and one of the differences between orchestration and choreography: as a choreographer, Cartographer is really just a layer above all of these things. So you can optimize the build for sure, but you would be doing it with kpack.
So we're almost done here. It's exporting the image, publishing it to GCR. You can see already that it's got the full tag with the SHA that we're going to want. Okay. All right. Moving on. Cool. We have an image in the registry. Okay. So now what we want is, again, for Cartographer to have been monitoring that and to realize, again, that the status type is Ready and that it's True. And now the piece of information we want to pull out of here is this latestImage field. Here's the tag. And this is what we want to give it. So it would be .status.latestImage. That's in the case of kpack; if you were using another build tool, then maybe it would be a different field. So specifically, I've just gotten that same value and I'm storing it in an environment variable, so I can do the environment substitution inline. And I'm going to take that same Knative YAML that we were looking at before, and here it is. So now I have this YAML, and I have choices. I could either just do sort of the equivalent of a kubectl apply, have Cartographer just submit this YAML to the API server, and then I would have my application running in the same cluster where I've just done this build. Or I could do a git push of this to some ops repo, so that I can then deliver this application to maybe multiple clusters in multiple regions, et cetera. Both of those use cases are valid. And hopefully, as long as my demo keeps working and we have enough time, I'm going to show you Cartographer doing the git push and then the delivery. But just to continue with the core concepts: the next thing is, we've seen the manual approach and we've observed the opportunities for automation. So how does Cartographer actually do this? When you install Cartographer, it installs a few additional resources. So I'm going to grep specifically for the ones called templates that David was mentioning earlier.
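The plain Knative Service being filled in might look roughly like this; the name is a placeholder, and ${IMAGE} stands for the environment variable holding the full tag-plus-SHA pulled from kpack's .status.latestImage, substituted inline before applying:

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello-world
spec:
  template:
    spec:
      containers:
        # ${IMAGE} is replaced (e.g. via envsubst) with kpack's latest image reference
        - image: ${IMAGE}
```

Applying this to the cluster is the final manual step, and the last one that Cartographer will take over.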
You can see there are several different types of templates that Cartographer gives us to work with. And what we want to do is embed our resources, those three that we created, into three of these. We have to choose the right ones, and basically you choose them based on the kind of output that the templates provide. So you can imagine that a cluster source template is probably good for our Flux GitRepository source. A cluster image template is probably a good choice for the kpack one. And then Knative, we're just deploying it; we don't need any output from it, so a cluster template makes sense. So let's look at what that looks like. I'm just going to focus on the GitRepository and the kpack ones, because they have more fields in them. So the way that you embed it is literally: everything underneath template here is just a copy-paste of the exact same thing that we were just using and that we just looked at. This template is kind of a free-form field; you can put anything. So Cartographer itself doesn't know how to work with Flux. It's just like: you give me some YAML and I'll apply it to the cluster. So by that, we're giving Cartographer control to create the resource. And because Cartographer has created the resource, it now has knowledge about it, so it can continue to check the status to see when it's ready, when the conditions show that it has done something. And at that point, because this is a cluster source template, the cluster source template has two fields of output: one is a URL and the other is a revision. So what we have to do is explain to Cartographer how to pull the information that we need out of the Flux resource. And we've just seen that Flux puts the URL that we want in this field called .status.artifact.url. So this is how we bridge that gap and teach Cartographer how to read the desired information from whatever resource is underneath the template.
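Putting that together, a ClusterSourceTemplate wrapping the Flux object could look like this at this (still hard-coded) stage of the demo. urlPath and revisionPath are the Cartographer fields that name where to read the outputs; the template and repo names are placeholders:

```yaml
apiVersion: carto.run/v1alpha1
kind: ClusterSourceTemplate
metadata:
  name: source-template
spec:
  # where Cartographer reads this template's two outputs on the stamped object
  urlPath: .status.artifact.url
  revisionPath: .status.artifact.revision
  template:
    # everything below is the plain Flux YAML, copy-pasted
    apiVersion: source.toolkit.fluxcd.io/v1beta1
    kind: GitRepository
    metadata:
      name: hello-world
    spec:
      interval: 1m
      url: https://github.com/example/hello-world
      ref:
        branch: main
```

Cartographer stamps out the embedded GitRepository, watches its status, and exposes the url and revision as typed outputs for downstream resources.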
Same for the kpack image: because this is an image type of template, the output is going to be an image, and we're telling it where to get the value: the image path is .status.latestImage. But other than that, underneath this template field, we basically just copy-pasted the very same thing we just looked at. The only thing that's different here is the syntax. You can tell that Cartographer is going to have to inject the source URL; the syntax is not environment variables, it's this kind of syntax. So that's how you start to wire it together. But what we haven't told Cartographer yet is that it should apply the GitRepository YAML before it applies the kpack one, right? It doesn't know what order to do things in. And also, we could have maybe several Git repositories that we are monitoring, so we have to tell Cartographer exactly which of those Git repositories it should use to inject the value into kpack. We do that using a supply chain. So the supply chain is... yeah, I'm just going to make this a tiny bit smaller; I hope you can still read it. Okay, so here's our supply chain. It's another Cartographer resource, and you can see that it has a list of resources, right? David was talking about resources. So we have three resources: our Git repository, our kpack image, and our Knative application. And the way these work is that every resource has a name and a reference to a template. These references are exactly the templates we were just looking at: cluster source, cluster image, and cluster template. And this is just the name of the actual resource that we're pointing to, the metadata.name inside of our cluster source template. So now there's an order. Now Cartographer knows Git is first, kpack is second. And for kpack, we're also creating a dependency for injection, where we're saying this template needs input, and that input is going to come from a cluster source template. So it's a list of sources.
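A matching ClusterImageTemplate sketch, with imagePath pointing at kpack's .status.latestImage and the source URL injected via Cartographer's $(sources.source.url)$ syntax; the names and tag are placeholders:

```yaml
apiVersion: carto.run/v1alpha1
kind: ClusterImageTemplate
metadata:
  name: image-template
spec:
  # where Cartographer reads the built image reference
  imagePath: .status.latestImage
  template:
    apiVersion: kpack.io/v1alpha2
    kind: Image
    metadata:
      name: hello-world
    spec:
      tag: gcr.io/my-project/hello-world
      serviceAccountName: kpack-sa
      builder:
        kind: ClusterBuilder
        name: my-builder
      source:
        blob:
          # injected by Cartographer from the source-type input named "source"
          url: $(sources.source.url)$
```

The $(...)$ interpolation replaces the manual copy-paste of the artifact URL from the earlier manual walkthrough.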
And which one is it? It's the one called source-provider. So this matches this. And same here: we're saying that we have an image type input coming from a resource called image-builder. Here is image-builder, and in effect, it is an image type template. And that's basically it. You can have many, many inputs; that's why they have names here, so that you can refer to them separately if you have multiples. And then it would be sources.source.url, for example, or images.image, and then the field is also called image. So that explains the syntax I showed you: sources.source.url. That's where that's coming from, from the supply chain. So now we've got a lot working for us. But this supply chain that I showed you is going to work really, really well for my hello-world application, because I've hard-coded that name in there and I've hard-coded that URL. So really what we want to do is parameterize that information, and we do that through something called a workload. And you could also imagine that everything we've seen so far, the templates and the supply chain, is something that an application operator would do. They could build out multiple supply chains with multiple paths to production. And then this view is what the developer would see and use, right? The developer would be responsible for providing the URL and the branch, most basically. That would be the basic information. If the application operator exposed more fields for the developer to use, then it's possible that the developer could fill in more fields, but the basics are these. And then, because you could have multiple supply chains running in the cluster, we put a label on the workload here. This one is app.tanzu.vmware.com/workload-type: web, and that label needs to match the selector. So if I scroll back up to the cluster supply chain, it has a selector with the same value. So as soon as that workload is deployed, this supply chain will respond.
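The supply chain being walked through could be sketched like this; the resource and template names are illustrative, but the shape (named resources, templateRef, and named typed inputs) follows Cartographer's ClusterSupplyChain:

```yaml
apiVersion: carto.run/v1alpha1
kind: ClusterSupplyChain
metadata:
  name: supply-chain
spec:
  # workloads carrying this label are handled by this supply chain
  selector:
    app.tanzu.vmware.com/workload-type: web
  resources:
    - name: source-provider
      templateRef:
        kind: ClusterSourceTemplate
        name: source-template
    - name: image-builder
      templateRef:
        kind: ClusterImageTemplate
        name: image-template
      sources:
        # feed source-provider's url/revision into this template as "source"
        - resource: source-provider
          name: source
    - name: deployer
      templateRef:
        kind: ClusterTemplate
        name: app-deploy-template
      images:
        # feed image-builder's image output into this template as "image"
        - resource: image-builder
          name: image
```

The name on each input is what the template's interpolation refers to, which is where sources.source.url and images.image.image come from.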
It'll say: oh, I see that workload, I have to act on it. And so it will take those values and inject them. So the very last thing we need to do is go review all of our templates and parameterize all those values that were hard-coded. The way that would look, if I scroll up here: for our cluster source template, when it's going to stamp out that GitRepository resource for our particular application, instead of hard-coding the name hello-world, now we're going to take that value from the workload. And instead of hard-coding the URL and the branch, again, we're taking those values from the workload. So now this can be used for many applications. And you can see here on the image one, same thing: the name will match; we're going to use the same name to name the image. But we also have other sources. We can define more global parameters that are shared across workloads. So for example, the name of the registry, that might be something we want to make more general. And then you can better understand here this syntax: this is the injection that Cartographer will do automatically from the source, right? It has all these different sources of information. So that's kind of how it all wires together. So I'm going to keep going with this example and move to the GitOps example. And I'll just keep going for as long as I have time, so hopefully I can get through the whole thing. But I do want to show you that these examples are coming from here. Actually, there's a set of examples in the Cartographer repo, which are really great, and what I'm going to try to get through is these last two. This is sort of the GitOps flow: using the supply chain to go from code to Git repo, and then the delivery to go from the Git repo, the ops repo, to the deployment. But I would encourage you, if you want to try it on your own, you can get it from here, or you can try the other examples here as well.
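From the developer's side, the corresponding Workload can be as small as this; the name, label value, and repo URL are placeholders:

```yaml
apiVersion: carto.run/v1alpha1
kind: Workload
metadata:
  name: hello-world
  labels:
    # must match the supply chain's selector
    app.tanzu.vmware.com/workload-type: web
spec:
  source:
    git:
      url: https://github.com/example/hello-world
      ref:
        branch: main
```

Inside the parameterized templates, these values are then referenced as, e.g., $(workload.metadata.name)$ and $(workload.spec.source.git.url)$, which is what replaces the hard-coded name and URL.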
They're all great. So, okay, so let's try the Git writer example. So we're going to do a little bit of a different flow. We're still going to get the source using Flux and build the image using kpack. We might have time for a little more chat while kpack is doing its work. And then we're going to actually use a cluster config template to write the configuration to a config map, and then use Tekton to do a git push. It'll read from the config map and it'll git push to a Git repository. So that'll be our left side, I guess, of GitOps. So I'm just going to go ahead and deploy the example files that I showed you that are part of that repository. I just deployed everything at once, the supply chain as well as the workload. So that's running in the background; hopefully kpack will finish before we're done talking about this. So this supply chain is a little bit different from the first one we looked at, right? This part is the same. I left the selector the same. Instead of having two supply chains, we're just overwriting the last one, which is okay. And so we don't have to change the workload, basically, that's what that means. And the resources, again, the first two resources are exactly the same. So you can also kind of see that these supply chains, because they're decoupled from the templates, allow you to reuse templates in a different order if you want, or just in different kinds of designs. So we're reusing our first two, the cluster source template and the cluster image template that we had before. And at the end, instead of just deploying it, we're going to use a cluster config template to write the value, write the Knative definition, to a config map. And then we're going to use a cluster template that is going to call Tekton. This is going to utilize Tekton to do the git push. So I'm going to show you the cluster config template, the third new resource here. Okay, so this basically, let's scroll to the top.
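The supply chain just described, reusing the first two templates and adding the config and git-push steps, can be sketched roughly like this (the template names and selector label are illustrative, not the exact demo values):

```yaml
apiVersion: carto.run/v1alpha1
kind: ClusterSupplyChain
metadata:
  name: source-to-gitops           # hypothetical name
spec:
  selector:
    app.tanzu.vmware.com/workload-type: web   # hypothetical label
  resources:
  - name: source-provider          # watch the repo (Flux GitRepository)
    templateRef:
      kind: ClusterSourceTemplate
      name: source
  - name: image-builder            # build the image (kpack)
    templateRef:
      kind: ClusterImageTemplate
      name: image
    sources:
    - resource: source-provider    # wire the source output into the build
      name: source
  - name: config-provider          # write the Knative manifest to a ConfigMap
    templateRef:
      kind: ClusterConfigTemplate
      name: app-config
    images:
    - resource: image-builder
      name: image
  - name: git-writer               # push the manifest to the ops repo (Tekton)
    templateRef:
      kind: ClusterTemplate
      name: git-writer
    configs:
    - resource: config-provider
      name: config
```

Because the chain only references templates by name, swapping the final deploy step for this config-and-push pair leaves the first two resources untouched.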
Okay, so here it is. So it's another one of our Cartographer templates, a cluster config template. And in this case, the output of this is going to be some config information. So we have to tell it where to find this config information. And we're saying this one's not in the status; this one is in a field called .data.manifest. How do we know that? It's because we're defining in our config map definition that the values are going to be in .data.manifest. So that's just going to go to the config map and get whatever is there. And then as far as what we're actually writing to that field, it's a bunch of YAML, but it includes that same Knative service. It's encoded a little bit differently here, I guess, but it is ultimately that same Knative service definition. And we also want to be able to interpolate values dynamically. So rather than the dollar sign syntax, this one uses ytt syntax. ytt is a tool from Carvel, very powerful for templating and overlaying YAML. But at the end of the day, this is going to generate that same YAML. So if you kind of peek out here, this is the Knative service, and it's going to inject, again, this is just a different way of saying the same thing, data values image. This is just a way to call out the image that we want to dynamically inject in a way that ytt can understand. So it's just a lot of YAML for the screen. The templates for the Tekton portion of it, to push to git, that's also a lot of YAML for the screen. So I'm not actually going to show you all of it, I'm just going to show you that there are three files here. And I think, with Cartographer, we might be able to consolidate it a little bit, but we're basically using a task that's available in the Tekton catalog for the git CLI and plugging that into Cartographer. But same thing, concepts that you've seen so far, more or less, to do that. So from a developer perspective, since we didn't change the selector on the supply chain, nothing really changes for the developer.
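A stripped-down sketch of that config template might look like the following. The .data.manifest path matches what was described; the names and the inline Knative service are simplified stand-ins for the ytt-templated version in the demo (here using Cartographer's plain dollar-sign interpolation instead of ytt):

```yaml
apiVersion: carto.run/v1alpha1
kind: ClusterConfigTemplate
metadata:
  name: app-config                 # hypothetical name
spec:
  # Cartographer reads this template's output from this field
  # of the stamped ConfigMap
  configPath: .data.manifest
  template:
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: $(workload.metadata.name)$
    data:
      manifest: |
        apiVersion: serving.knative.dev/v1
        kind: Service
        metadata:
          name: $(workload.metadata.name)$
        spec:
          template:
            spec:
              containers:
              - image: $(image)$   # the image produced by the previous step
```

The config map itself is inert; it is simply a staging area whose contents the next resource in the chain, the Tekton git-push step, will read and commit.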
So remember when I switched directories, I already applied this. So let's see if it's done. Hopefully it will have finished already. Okay, good. Okay. So in the meantime, our workload finished running. So let's see, we can use this handy kubectl plugin called tree to see what our resource spawned: what's the tree of things that got generated? So, sorry, this was a little, okay, there we go. So we, as developers, applied a workload, right? And that first generated the Git repository resource. When that was done, it created an image resource. The image resource did its thing and built an image. And then after that was ready, the supply chain created the config map with our values. And then after the config map was ready, it called Tekton. And these are red, but that's just because they're jobs; they didn't fail. So what we expect to have right now is that in our Git repository, we should have the YAML that we would want to deploy to any cluster. So I'm just going to do a git clone here. I have a Git repo running on the cluster, but I'm just doing a git clone. And let's look at what we got in that git clone. So we do have config manifest.yaml. So that's good. That's what we expect. And if we look inside the file, there we go. There's YAML. Here's the Knative portion. So ytt has done its thing, without all that ytt markup. It's interpolated all of the right values, so our application name and the image. And so now this is ready to be deployed to any cluster you want. So that part of the demo finished. We've done our job for the left side of GitOps. So now if we move over to the right hand side, we have something called a delivery. So just as we've been looking at supply chains, there's a counterpart, another of these blueprints, that does the delivery of that YAML to whatever cluster you want. And so it's going to be responding. We're going to use, again, very similar pieces.
We're going to use Flux to detect new commits on that ops repo and then deploy them to Kubernetes. So if we look at the delivery, it's a very similar structure to the supply chain: cluster delivery instead of cluster supply chain. We still have the selector, but now it matches deliverables instead of workloads. And again, a similar structure for resources. The first one is very similar. It uses a Git repository from Flux again, because all we're doing is pulling a Git repo. But then for the deploy, we use another kind of template called a cluster deployment template, where we're going to take the output that the first resource finds in Git and deploy it. So I think, yeah, okay, let's take a look at that YAML quickly, but it's sort of the same idea, right? Cluster deployment template, there's no output here either. So we just have this template to grab whatever URL was there and extract that, and we're telling it, how can Cartographer know that this was successful? Because not everything uses the same syntax. The ones that we saw earlier report ready: true, but some resources are built differently. So this one reports reconcile succeeded or reconcile failed. That's just another way that you can teach Cartographer to work with the resources that you're asking it to deploy. So the deliverable, again, is the counterpart to the workload. So this would be the thing that represents the precise repo that we are working with for this app, right? So that's our repo for this particular app. So if we apply all of this, then we would expect Cartographer to very quickly go to that repo, realize that there's a commit ready for us, grab it, and deploy it. So we can see, and that's what it did.
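The "teach Cartographer what success looks like" part lives in the deployment template's observedCompletion block. A rough sketch, assuming the stamped resource is a kapp-controller App (the names and the fetch mechanism are illustrative assumptions, not the exact demo manifest):

```yaml
apiVersion: carto.run/v1alpha1
kind: ClusterDeploymentTemplate
metadata:
  name: app-deploy                 # hypothetical name
spec:
  # Not everything reports "ready: true", so spell out what success
  # and failure look like on this particular resource's conditions
  observedCompletion:
    succeeded:
      key: .status.conditions[?(@.type=="ReconcileSucceeded")].status
      value: "True"
    failed:
      key: .status.conditions[?(@.type=="ReconcileFailed")].status
      value: "True"
  template:
    apiVersion: kappctrl.k14s.io/v1alpha1
    kind: App
    metadata:
      name: $(deliverable.metadata.name)$
    spec:
      serviceAccountName: default
      fetch:
      - http:
          # the artifact URL exposed by the Flux GitRepository
          # pulled in the previous delivery resource
          url: $(deployment.url)$
      template:
      - ytt: {}
      deploy:
      - kapp: {}
```

The deliverable then only needs to point at the ops repo; the delivery watches it and re-runs this deploy step on every new commit.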
So that deliverable led Cartographer to create the Git repository resource, which very quickly realized there's something to apply, and then the deployment template was applied and it created this. So just to make sure that it is actually running, let's make sure that our Knative deployment is there, which it is. So we can really quickly do a port forward and we can send a request to our application, and it is running, and okay, the demo did work. I was panicking for no reason. Okay, that's it. Awesome. Thank you, Cora. Yeah, demo gods are with us today. Yeah, thank you. All right. Yeah, so keep the questions coming. We're doing our best to try to address them, and just let us know if that's not the case, and we hope to continue the conversation after the session. So wrapping up: the current challenges we see with the pipeline orchestration pattern are that it's tightly coupled, with a very high level of interdependency between steps. And once you start scaling, it becomes harder to maintain, to adapt to different workloads, and also to modify or add tools. As we saw, it implies a lack of consistent paths to production, because of these highly customized, do-it-yourself pipelines across your whole environment. And also, there's no clear separation of concerns. The benefits we see using supply chain choreography are, first: because of the choreography pattern, there is loose coupling between steps, or between resources. It's highly customizable, as we just saw. Each one of the example supply chains that Cora demonstrated was completely different and used different tools for different purposes. Even while providing enough flexibility to developers, it maintains consistency and is repeatable for different environments and workloads. It also provides a model with a clear separation of concerns, both for operations and dev teams.
Finally, it gives us the flexibility to produce or design different configurations to get source code to production, and in general it is much more reliable. It doesn't have a dependency on a central entity to manage all the steps, which makes it more trustworthy. All right, we hope this was educational for you. We would really like to keep the conversation going. Bring your questions, your easy questions, your tough questions, to our several communication channels. I will put here a link to our Slack channel in the Kubernetes workspace. You are welcome to join, continue the conversation, and even join us in the meetings if you want. Is there any outstanding question right there that we could address in the few minutes that we have remaining? I think we do have a few. Cora, can you see them, or do we want to turn? Yeah, which ones are outstanding? Which ones have not been answered? I'm just trying to catch up on that. Yeah, the one from Anand. This is basically about optimizing sequential and parallel steps by analyzing steps. Oh, I think, Anand, if you're talking about Cartographer, then I think David is answering that right now. I'm not sure how much is on the roadmap and how much is there now. I guess, David, you're probably more up to date on Cartographer. But if you're talking about kpack, then kpack does definitely have optimizations built into it, and it does a lot of caching. It doesn't apply the buildpacks in parallel, but it has two different kinds of caching and it does a lot of analysis. That build that you were watching was a first-time build, so any further builds would be faster. And then any other kind of parallelization kind of depends on what you're talking about. Yeah, I don't know if I answered the question. Yeah, right now what Cartographer can provide here is, again, a way to design the execution path, even considering parallel tasks, not so much in the analysis field.
I mean, Cartographer itself tries to not deal with the specifics of each step, and specifically not much with analyzing, but in the whole context of providing flexibility for different supply chain designs, Cartographer could help with that. And then I think, for Ben's question, would that be an answer? You can see the example at the bottom. I mean, definitely, there are a lot of use cases. If you're in the cluster and there's not a tool that does exactly what you want, like Flux or like kpack, take the git push use case, right? We had to use Tekton for that. Tekton is a tool that will execute for you, in a container, a script of your authoring, right? So Cartographer doesn't do that. You can't give Cartographer a script for a container. Cartographer can orchestrate things, but if you need some kind of arbitrary activity to happen, you could reach out to something like Tekton, and Tekton will run any script for you. So I think that probably covers a few of these sorts of cases where the task that you're trying to accomplish doesn't exist as a tool itself in Kubernetes. There's an example, I think from Scott Rosenberg actually. He has his GitHub repository, I think, vRabbi or vRabbi IL. I don't know if you know that URL, David. He has an example of provisioning VMs, I think it was, right? He stamped out a whole set of machines using Cartographer. So because it's just this orchestration layer that can orchestrate any sequence of tools, and those tools can essentially accomplish anything, because if there isn't a tool that does exactly what you want, you can use a tool that can run any script that you want, that can make call outs, that can check things that are outside the cluster, trigger things, check responses. I don't know that there's really any limitation to what you could put together. I wonder if we could share that and get it quickly enough, let me see.
Right, yeah, I just shared the link to the repo. He's a Cartographer user, and this is just an example of how far the project can get outside Kubernetes. Yeah, so he's using, right, he goes from a Git source and he provisions virtual machines. So that also speaks to that Terraform kind of question, right? That repo would be a great example of how to stamp out a bunch of virtual machines. And so that's the approach that Scott took, but you can use it as a model if you want to take a slightly different approach. So yeah, I hope that that answers the question. Thank you, Cora. Let's see, and then there are others. Yeah, I don't think I see different ones, and we are at time. Okay. All right. Well, thank you both so much, David and Cora. Thank you, everyone, for joining. Definitely click the links to join in the conversation post-event. And we will have all of this up online later this afternoon for anyone who missed it. Thank you all, and we'll see you next time. Thank you. Thank you. Bye-bye.