All right. Hi, everyone. Today, we're going to go ahead and get started. And thank you for joining us. Welcome to today's CNCF Live webinar, How to Manage Kubernetes Application Lifecycle Using Carvel. I'm Libby Schultz. I'll be moderating today's webinar. And I want to welcome our presenters, Helen George, Product Manager for Carvel at VMware, and João Pereira, and I know I didn't say it right, so I'm going to let you tell everyone and correct me. A few housekeeping items before we get started. During the webinar, you are not able to talk as an attendee. Inside the chat icon at the top is a box specifically for Q&A. Please feel free to drop your questions there, and we'll get to as many as we can at the end. This is an official webinar of the CNCF, and as such is subject to the CNCF Code of Conduct. Please do not add anything to the chat or questions that would be in violation of that Code of Conduct, and please be respectful of all of your fellow participants and presenters. Please also note that the recording and slides will be posted later today to the CNCF Online Programs page at community.cncf.io. With that, I'll hand it over to our presenters to kick it off. Great, thank you. Hello, everyone. Welcome to this CNCF webinar, where we will share how to manage the Kubernetes application lifecycle using Carvel. Today's agenda, if you go to the next slide, is simple, as you can see. First, we'll talk about what Carvel is and what problem Carvel is solving. After that, we will spend the majority of our time demonstrating how you can use Carvel and talking about its benefits. Then we will wrap up our time with some Q&A. My name is Helen George. I'm a product manager from VMware working on Project Carvel. I've been with VMware for about a year; before that, I worked at various consulting firms helping customers build products in industries ranging from health care to travel and finance. And here's my co-speaker for the day.
Hello, I am João. I'm also at VMware. I'm a software engineer who works on the Carvel tools, and I've been with VMware for roughly a year now. Prior to that, I worked at Pivotal, and I worked on projects from the CNCF, like Buildpacks. I was also part of the team that worked on kpack. And yeah, that's it. Cool, so let's get started. What is Carvel? Carvel is a set of Kubernetes tools. Note the plural there: it's not a single tool, but rather a tool suite that helps application developers and platform operators build Kubernetes applications, distribute them, and install them. It consists of single-purpose, composable tools based on the Unix philosophy. Each tool does one thing very well, and yet they work well together. To name a few, Carvel consists of ytt, kbld, imgpkg, kapp, and vendir. There are other tools such as kapp-controller, which is used for package management, and also some experimental tools, which you can learn more about from our website, carvel.dev. But in today's webinar, we will only cover the tools shown on the slide here. So what are these tools and what are they for? Before we jump into what these tools are, we would like to take a step back and show the problem space we are playing in, since the Kubernetes ecosystem is big and vast. This will help us understand where in the Kubernetes landscape the Carvel tool suite fits in. What you're looking at here is a simple workflow from application definition to management of its lifecycle on Kubernetes. You might have heard of this referred to as day-one and day-two operations. Day-one operations involve developing and deploying software that was designed on day zero, which is not represented here; we're going to start from day one today. And in this workflow, we're starting with building the app into a container image.
And as part of the process, the application developer will describe the details for deployment with manifests and then parameterize configurations. Those configurations will make it easy for the platform operator to make changes to deployment details if needed later in the workflow. Once you've turned everything you need into a single image, you're ready to push it to a registry. In the next step, you'll bundle together images and references to dependencies into a single package that can be pushed to a registry where others can pull and consume it. Now that the package is made available, in most cases it will be the platform operator's role to pull the package and do the proper security checking to make sure the bits contained in the package are indeed what you expect to be there. The platform operator then applies customizations based on the environment to configure the deployment. And finally, the application is deployed to the Kubernetes cluster, which marks the end of the day-one operations. Now that the application is in production, there are still various day-two operations to be done: maintaining, monitoring, and optimizing the platform. We called out a few operations on the slide that are directly related to Carvel's domain. By no means is this an all-encompassing list of tasks, but we thought these are the ones relevant to Carvel's domain. Platform operators will manage the application and storage lifecycle in day-two operations, doing things such as backup and failure recovery. They'll also be scaling the platform as demand grows, performing upgrades whenever a new version of the application and its dependencies is made available, and installing security patches if any vulnerabilities are found. So where does Carvel fit in? What problems is Carvel solving here, then?
So if we go to the next slide, now that we've gone through the whole development and deployment lifecycle, we'll show you where Carvel's tools fit into this workflow. As mentioned earlier, we had the Unix philosophy in mind when we first created Carvel, because we wanted to prevent it from becoming a monolithic tool that is hard to use with other tools, and we don't want that. That's why here you'll see each Carvel tool solving a single problem, so you can have better control over what is happening at each step in the workflow. Now, looking at the day-one operations, you see ytt, which stands for YAML Templating Tool. And as you can guess from the name, it is a YAML templating tool. It's structure-aware, which is really convenient, and it is used for authoring and customizing configuration. Next, kbld is used for building container images, and imgpkg is used for packaging and relocating those configurations and images. kapp is used to deploy applications to clusters. And kapp is really good at ordering Kubernetes resources during deployment, so it prevents you from getting into weird race conditions. We'll show you more of kapp in action during the demo later. The Carvel tool suite can also be used for day-two operations, as the diagram shows. Just like how you use the tools to customize, build, move, and install applications for day-one operations, you can do the same for upgrading, patching, and scaling. This way, you can operate applications in the production environment in a repeatable and reliable fashion, in the same way that you have done in the past. And yeah, we understand that managing customized application manifests in the long run gets messy, and we think Carvel makes day-two operations easier by utilizing ytt's data values and overlays features. These will make it easier for you to manage both the original configurations provided to you by the developer and your own custom configurations that you developed.
And once again, kapp is aware of how certain resources need to be installed first, so it can help better manage CRDs along with other Kubernetes native resources. Another advantage of Carvel is that it meets users where they are. What I mean by that is that you can decide to mix and match different tools in your workflow. So for example, here, you can easily swap out kapp and use kubectl instead to do deployments, or you can swap out ytt for either Helm or Kustomize for templating. It's really up to the user to choose the right tools that work best for them. So now that you've heard where Carvel works and what problems we are solving, we're going to do a demo, and we're going to spend a good amount of time there, because we believe showing is better than telling. Now João is going to take us through the whole workflow using the Carvel tool suite. Cool. Thank you, Helen. So we're going to look at an application. I'm just going to roll back a little bit. We're going to step through all these steps during this process, and we're going to use the Carvel tool suite to move us forward. What we have to show you all today is a small application that we built: an application that basically stores projects, which have a name and a description, into a CRD, and is able to read them back. This is the application that we are going to show here today. And to start us off, we're going to use a tool that we didn't show in the flow: vendir. Vendir is an application, also part of the Carvel tool suite, that allows us to synchronize directories in your operating system, for example. What we're going to use vendir for is this: we've standardized on certain versions of the Carvel tooling to develop our software, and we are going to use vendir to retrieve them. And as you'll see in the configuration, we are setting up where we want to retrieve those binaries from.
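A vendir configuration along these lines can pin and fetch tool binaries from GitHub releases. This is only a minimal sketch, not the demo's actual file: the repository slug, tag, and asset name shown here are assumptions for illustration.

```yaml
apiVersion: vendir.k14s.io/v1alpha1
kind: Config
directories:
- path: bin                           # local directory vendir will manage
  contents:
  - path: ytt                         # subpath where the fetched asset lands
    githubRelease:
      slug: vmware-tanzu/carvel-ytt   # assumed repo slug
      tag: v0.36.0                    # assumed version tag
      assetNames: ["ytt-linux-amd64"] # the one release file we want
```

Running `vendir sync` then downloads everything declared above into `bin/`.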
And vendir is going to do everything for us. vendir is able to download files from GitHub releases. So for example, in here, we see the path we want to put the binary at, but we want to retrieve it from a GitHub release: on this slug, we want this particular tag, and more specifically, this is the file that we want from it. You can also retrieve Git repositories, as we'll see in the next step, and you can also get blobs, files, and directories that are present in other places. So we're going to start off by initializing our environment. I created a Makefile that gets us everything, because we also need to move the binaries into place and make them executable. While this is running, let's talk a little bit about our application. So this is done, and now you'll be able to see that in our bin folder we have all the tools that we are going to use for this part of the demo. Let's start from the beginning. We, as software developers, created this application, but now we need to install it somewhere, and for that, we're going to install it on Kubernetes. So we'll have all our manifests and our deployment configuration that we need to store, right? We created a deployment folder; these locations are not mandatory, we just decided to put the files here. And each of these files is a template file for ytt. As you can see, it uses ytt annotations, but it is still a plain YAML configuration file. When invoked, ytt acts as a templating engine that replaces all these values in these files with the values that we provide to ytt. So we have our applications, and we have a definition of CRDs right here, and as you can see, there are replacements there. And for ytt, we use Starlark as the language that does the processing.
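A ytt template of the kind described here stays valid YAML, with the templating carried in comment annotations. A minimal sketch; the `crd.group` value name follows the transcript, while the rest is illustrative:

```yaml
#@ load("@ytt:data", "data")
---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: #@ "projects." + data.values.crd.group  # "projects" plus the group from data values
spec:
  group: #@ data.values.crd.group
```

Run through ytt with a data values file, the `#@` annotations are evaluated and the output is plain, fully resolved YAML.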
So you're able to create Starlark files that, in the end, pretty much look like Python, because Starlark is more or less a subset of Python. You can create functions in a .star file, then load that file and the function, say projectCRD, and just invoke it. And we'll be able to see that it is going to replace the values. In this case, it's going to replace with "projects" plus whatever comes from the data values file, crd.group. So these are, more or less, the configuration files. One new feature that we are rolling out, still experimental, is schemas. What are schemas? Schemas are, from the developer side, a way to tell whoever is deploying what values they can change, what values they can parameterize on the deployment side that will be used later. Another thing the schema does is tell you the shape of these data values. So for example, you'll have a crd entry that is a map containing group and version keys, and both of those are strings, right? This is telling the deployer that if they want to change the group of this CRD, which in this case we do not advise, but if you want to change it, you'd have to put a string there. This is very experimental; we're just starting to develop it. The major annotation that it has at this point is nullable, where you can say that the description for the projects can be null, so it doesn't need to be present when you're defining the projects. Another thing that I'd like to call out, and this is kind of specific: if you define an array, so for example, projects is an array, the default value that you'll have when you generate the schemas is going to be an empty array. Something that I forgot to say is that the schema also works as the default values, so you don't need to set them up; for example, the group will come by default.
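Under those rules, a schema file for this app might look roughly like the following. This is a sketch using the annotations from current ytt releases (`#@data/values-schema`, `#@schema/nullable`), which may differ from the experimental flag-gated syntax shown in the demo; the group and version strings are assumptions:

```yaml
#@data/values-schema
---
crd:
  group: carvel.dev       # the default values double as the schema's types
  version: v1alpha1
projects:                 # an array defaults to empty; the entry below types its items
- name: ""
  #@schema/nullable
  description: ""         # may be omitted or null in data values files
```

A data values file then only needs the `#@data/values` annotation and the entries being overridden, for example three projects where only the last one sets a description, as in the demo.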
So just to give you a little test of how this would work, I'm going to create a file called data.yaml. This is going to be a data values file, and what I want to do is add three projects to our projects: one, two, and three. As you can see, the description is only present on the last one, since we said in our schema that it can be nullable. So let's try to run this and see what happens when we run ytt with our file. First, let's take a look at how we call ytt: we are providing all the configuration files that we have in this folder, plus our data values file. And we do have to enable the experimental schema flag, because this feature is not yet live, so you need to enable it there. As you can see, previously our namespace didn't have anything here; it had the ytt templating information, but now it got replaced by project-app. A more interesting part is down here at the bottom, where we see the projects. Let's take a look at what the projects template looks like: it is a for loop that iterates over the projects value and creates the custom resource for each one of them. And if we look at the output, it is creating a custom resource for each one of them, and the only one that has a description is the last one. So it looks like everything is working. The next step in our process would be: okay, we have configuration, and we saw that it is working by creating this ytt data values file. What next? Now we need to build our application and create an OCI image with it. We have a tool that does that; it is called kbld. I'm just going to run it right away, because it's going to take a little bit to compile, while I talk about what kbld does.
kbld is an orchestrator that uses applications you have installed on your machine, or in a Kubernetes cluster, to build your application. In this case, what kbld is doing is looking at our configuration file, which we'll see in a bit. We also provide the deployment, and I'll explain why in a little bit, and we provide this other flag that we'll discuss in a little bit as well. Let's start by looking at our configuration file. So, our configuration file... was this the one that I used? No, I used the Docker Hub one. It's interesting: you'll see I have two different configuration files that differ only in the destination. We had some problems yesterday trying to push to Docker Hub, so that's why we have two, one for GCR. What we're saying is that for the image we named project-app-image, this is the place it's going to read the information from. Because we didn't specify anything, it's going to use Docker, so it's going to search for a Dockerfile in this path. In our case, we do have a Dockerfile; it's here, so it works. And we tell it that we want to push this image to GCR, in this case, or to Docker Hub; we'll see how that went in a little bit. Another thing you can do: I have another example here where we're using pack, from the Buildpacks project, and it uses the pack builder to build the image. So kbld supports building images using docker build, using BuildKit, using pack, and recently, in the newer version, it also supports ko, to build images using ko. So it is an orchestrator. The second thing we talked about was that you need to provide a configuration file, a deployment, so that kbld knows which images need to be built. If we go back to our application, to that file, you'll see that in our deployment we did set up our image, project-app-image.
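A kbld configuration with a build source and a push destination, of the kind described here, might be sketched like this; the repository name is a placeholder, not the demo's real one:

```yaml
apiVersion: kbld.k14s.io/v1alpha1
kind: Sources
sources:
- image: project-app-image   # logical name referenced in the deployment YAML
  path: .                    # build context; a Dockerfile here is used by default
---
apiVersion: kbld.k14s.io/v1alpha1
kind: ImageDestinations
destinations:
- image: project-app-image
  newImage: docker.io/example/project-app   # where the built image gets pushed
```

Running something like `kbld -f deployment/ -f kbld.yml` then builds and pushes the image and rewrites every `project-app-image` reference in the output to the pushed, digest-pinned name.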
So kbld will search for this name to see if it is used in the configuration files that we have, and if it is used, then it will build it. And in the output from kbld, where you previously had the image name project-app-image, it will be replaced with a full image name with the SHA, so that we know the image we built locally is the one that's going to be used. This is just printed to standard output, because the tools can be chained and can feed one another. So you can use ytt to generate the templates, kbld to replace the images, and afterwards you can use kapp, as we'll see in a little bit. But basically, it's just a docker build, and then it pushes the image to the place we asked it to push to, and that's it. So it looks like kbld was able to create the image we want. The next thing we need to do is package our application so that our deployer can deploy it in their cluster. We have a tool for that called imgpkg, and we are going to create a bundle of configuration plus our OCI image; we have to bundle both together. As a deployer, you'll need these configuration files here, and you will need the OCI image. So let's take a look at how we might do this. First, I'm going to copy a file; remember I said we would look at this images lock output afterwards? Now is the afterwards. So let me paste that. I'm going to make a new folder called bundle, and give it another folder underneath it called .imgpkg. This structure is fixed, and you need to follow it in order for imgpkg to know that this is a bundle.
And we're going to copy the file that we just created from kbld and put it inside of it. This is called the images lock file, and if we look at it, what it contains is a list of all the images that will be contained in our bundle. You'll see here the image, with the full reference and a SHA ending in 27c. If we go back and see which image was built, this was it, right? And if we look at the SHA here, it is the 27c one. So we are telling imgpkg that this image is part of our bundle. Let's call imgpkg and create our bundle. There, let's paste that. What imgpkg is doing here: we're pushing a bundle, which is a group of images plus configuration, to this particular place, which is Docker Hub. And we want to include in this bundle our bundle folder, which contains only the structure needed for imgpkg to recognize a bundle, plus all of our configuration folder. You don't need to specify these separately in a single folder; I just did it this way so you can see how this all works. And you can see all the things that were attached to the bundle. This is also an OCI image, and it is saved to the registry. And there you go; this is where our image is right now. As you can see, this SHA here is going to be important, and we're going to use it, because now that we have this in a public place, we want to provide this SHA to our deployer, the person that is going to deploy this application, because this is what the deployer is going to use in order to deploy it. So if we look at our next step here, at the deployer side, all we'll have is going to be our SHA.
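The images lock file inside `.imgpkg/` is plain YAML. In current imgpkg releases it has roughly this shape; the digest is left as a placeholder and the repository name is illustrative:

```yaml
apiVersion: imgpkg.carvel.dev/v1alpha1
kind: ImagesLock
images:
- image: index.docker.io/example/project-app@sha256:<digest>  # always pinned by digest
```

Pushing the bundle then looks something like `imgpkg push -b docker.io/example/project-bundle:v1 -f bundle/ -f config/`, which prints the bundle's own digest, the SHA that gets handed to the deployer.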
So the first thing that we want to do as a deployer: as we mentioned, we want to have the images saved in our private registry, because we want to make sure that if something happens to Docker Hub, we still have an image that can be pulled from our local registry, and we want to make sure that it doesn't take a lot of time. So the first step we're going to do is move all the images that exist in this bundle we just created to our local registry. In this case, I stood up a registry v2 on my local machine, and I want to copy this image that we have here, the one right there, to a new repo locally called bundle-to-deploy. What this is going to do is find all the images associated with this bundle, plus the bundle itself, and copy them to our new registry and repository. As you can see, it was able to get the demo projects app, the one we had here, the 27c one, and also the bundle itself, and it's going to put everything underneath the same repository in our local registry. We do this because we want to avoid name collisions. For example, if you're packaging the Ubuntu image and you already have an Ubuntu image, we would be polluting your Ubuntu repository on your local registry by adding more things to it. So we try to concentrate all the images that are needed for this particular project or application in a single repo. This also allows you to just delete all the images in this repo afterwards, when you don't need them anymore.
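The relocation step described here is a single imgpkg command; this sketch uses placeholder names and a placeholder digest rather than the demo's real ones:

```shell
# Copy the bundle plus every image listed in its ImagesLock into one repo
# on the local registry; image digests are preserved by the copy.
imgpkg copy \
  -b docker.io/example/project-bundle@sha256:<digest> \
  --to-repo localhost:5000/bundle-to-deploy
```

Because digests don't change during relocation, the deployer can keep verifying the same SHA they were originally given.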
So now that we've copied everything to our local registry, the next step we want to do as a deployer is get the configuration that the application developer created, because in our company we need to do some little tweaks to it, and we need access to some information. The first thing we do is pull the configuration from the bundle, and to do that, we also use imgpkg. We're saying that we want to pull the configuration from the bundle that is here, and as you can see, we always use the SHAs, just to make sure we have the things that we expect. Notice that the SHA did not change, even though we moved the bundle through repos and so on. And something very interesting: you see these commands, I'm just copying them, and all the previous runs had the exact same SHA, because I didn't change the configuration. So you have repeatability of your builds as well. We want to output the bundle that we pull from this place into this temp project-bundle folder. So let's do that. And it was able to pull everything, and it also says here that all the images that were in our images lock were found inside the repo for the bundle, so they were found in here, so we were able to relocate. What happened is that this .imgpkg/images.yml is now updated. Let's take a look and see what that means. I saved everything here, and if we look, the image is now pointing to our local registry. Previously it was pointing to the initial one on Docker Hub, but the copy we care about is no longer the Docker Hub one; the one we care about is in our local registry. All right. So let's take a look at this configuration. What is this configuration?
So it's basically this folder. One interesting thing is that the developer said: okay, in order to install everything, you'll need these tools. You'll need ytt, you'll need kapp, and you'll need kbld. And because we use some sort of a GitOps model in our company, as a deployer I want to also download into a local folder some configuration that we currently have for this particular application. This is good for the day-two example where the developer creates a new version of the application for some reason, a bug fix or something: there are things that I, as a deployer, care about for my company, and when I'm deploying, I want to make sure they stay the same throughout, making my life easier. So I saved in this repository more configuration that's going to be used to generate our deployment. And as you can see, we're going to use vendir again, now to download all the binaries we care about and also to download our configuration set. I have a script that basically just does that, so I'm going to do bash prepare. And as you can see, it is using vendir to fetch the releases that we care about into the bin directory inside of this folder, and it was able to use Git to download our configuration. So if we do an ls now, or a tree, you'll see that we have a local configuration. We have some things here, but we have two new files: the roles and the values. One thing the deployer decided was that every time you install this application, you want to make sure you change the namespace; in this case, we're going to change the namespace to team-one. And we want a project to be configured, just to make sure everything is working. So we're using overlays to add the projects here.
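A deployer-side data values overlay of the kind described here might look roughly like this; the `namespace` key and the project name are illustrative assumptions, not the demo's exact file:

```yaml
#@ load("@ytt:overlay", "overlay")
#@data/values
---
namespace: team-one        # override the developer's default namespace
projects:
#@overlay/append
- name: smoke-test         # add a project just to verify the install works
```

Because it lives in the deployer's own repo, this file can be layered on top of the developer's configuration on every install without editing the original.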
And a very interesting thing is that the deployer has a service account called adminSVC, and he wants it to be able to access and read the projects for this team. So what this does is try to find this role binding that we have in our configuration, and, as you can see here, it will append a service account to our role binding. This way you can have your own overlay, your local update of configuration in your separate repo, that will just change the default behavior or change something in the configuration that comes from the developer. This is very helpful, so that you can manage all these applications and have somewhat of a GitOps flow for your application. And this is all powered by ytt. Okay, so now that we saw this: I do have one big line that does everything. It uses ytt to generate all this configuration, kbld does something to it, and then it calls kapp to do the installation. But I'm going to do this step by step, and we'll see the output and what the differences are. So the first thing we're going to do is call ytt again. Let me just scroll up a little bit. We're going to use whatever is in the configuration folder that came from the bundle, and we're also going to provide to ytt our values plus our overlays for the configuration we just got from Git. And we also want to enable the experimental schemas, because if you don't, it will fail. As you'll see here, a new thing appeared that was not there when the developer created this configuration, and at the end, you'll see the projects that we just asked for. An interesting thing here: let's imagine that in this values file, for some reason, I say that the description is going to be the number one.
When we run ytt again, it is going to complain to us that the description field that we have in our values file, line 11, right there, very specific, which is good, is of type integer when it should be a string. So as you can see, the schema allows the developer to make sure that when you are deploying something, you don't accidentally change or do something that is not expected. So let me roll this back. And if we run this again, you'll see that everything is here. Is it in a good order? We can say that maybe it is; the output is not randomly generated, but it might not be in a good order right after it comes out of ytt. We'll see what happens when kapp is invoked. The next thing we want to do: we have our output, but if you look at our image here, it is still not resolved; it is still project-app-image, right? So what we're going to do is pipe this YAML that we have here into kbld, and we're going to say: this images lock file is the translation layer for our images; use it and translate the image into our new image. If we run them in a pipeline, you'll see that the output of our ytt was passed to kbld, and in the output of kbld we already have our image correctly replaced there. So we are in a good spot right now, where we have all the configuration we need and we have the image we care about. The only thing missing now is: can we put it into a Kubernetes cluster? Let's see. We have this command again, where we pipe ytt into kbld, and then kbld into kapp. What kapp is going to do is deploy our application, reading what comes from the input. So this is going to fail; it was kind of on purpose. You need to pass a -y so that you say that you accept that this change is what you want. I stopped this so you can see something interesting here.
Do you see the last thing that we have as an output? It is the custom resource, and there's also the custom resource definition. But kapp is smart enough to know that maybe you want these to come first, so it reorders the resources that you have here in order for them to make sense when they are being applied to our cluster. So if I do a -y here, as I said, it is going to deploy everything, and it just finished, which means that we should have our application running. If we run kapp inspect and provide the deployment name we have here, you'll see that the cluster does have custom resources: it has a new namespace that was created, and underneath that namespace there is a project, which is the custom resource, and also the deployment and so on. And we have a pod that is running. So if we do, for example, kubectl logs -n team-one for our application, you'll see that it contains some logs, and if we could expose the port, we could see the output of this application. So summing it all up, you can see that the application we just built moved from the beginning, creating configuration, all the way into our deployment, using all the tools along the way. Okay. So this was a very big demo. As you can see, there are a lot of steps in the creation of an application, and all these Carvel tools are here to try to help you move this process along and to meet you where you are as a developer or as a deployer. If you have tools that you like, you can intertwine Carvel tools in some of these steps to ease your work. This being said, let's see if this was on; there we go. Thank you. And I think we are able to answer some questions, if you all have any questions for us.
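The end-to-end deployer pipeline from the demo can be sketched as one chained command; the paths, app name, and flags here are illustrative rather than the demo's exact invocation:

```shell
# 1. ytt renders the bundle's templates with the deployer's values and overlays;
# 2. kbld rewrites image names using the relocated images lock;
# 3. kapp deploys, ordering resources (e.g. CRDs before custom resources).
ytt -f config/ -f local-config/values.yml -f local-config/roles.yml \
  | kbld -f- -f .imgpkg/images.yml \
  | kapp deploy -a projects-app -f- -y
```

Each stage reads the previous stage's YAML from stdin, which is the "single-purpose, composable tools" idea from earlier in the talk in action.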
Looks like one just came through on the general chat, so I'll let you all take that one. The short answer is yes. It's asking whether you can reconcile existing resources in a Kubernetes cluster into the deployment, and you can, but you need to add a label to them. kapp tracks everything it knows about a resource using labels, so adding the label is the way to do it. I hope our presentation was so very good that nobody has any questions, that everybody knows everything about Carvel now. So we do have another question: what are the benefits of ytt, why would you use it over Dhall? I'm going to be very straightforward with you: I just had to Google to see what Dhall was. But the benefit we see from ytt is that, first of all, you can pipe everything through all the Carvel tools if you want to. And we are trying to evolve ytt to make the tool as friendly and as usable as possible, and to make sure that all your Kubernetes resources can be overlaid and changed. It is a little bit hard to talk about this without knowing what Dhall does. I don't know, Helen, do you have any idea? I've heard of it, but don't know it myself. Oh, is Dhall the one that is like JSON with functions and types? Yeah, we can come back to it and see what the major benefits of ytt over Dhall would be. Looks like Dimitri added some content there for us. Oh, cool. There we go, so there are some pointers there. All right, next question, from Brad: is it possible to build an image without a deployment file? So you're asking about kbld, right? If we look at kbld, you'll see that there are several pieces of information you can provide.
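The label-based adoption mentioned in the answer works because kapp stamps every resource it manages with identifying labels. A minimal sketch of what that looks like follows; the label value shown is illustrative, since kapp generates a unique ID per application:

```yaml
# Any resource kapp manages carries a label like this one.
# The value is a unique ID kapp generated for the app; to adopt a
# pre-existing resource, you would copy the ID from a resource kapp
# already manages and apply the same label.
metadata:
  labels:
    kapp.k14s.io/app: "1617220889703255000"   # illustrative ID
```

Once a pre-existing resource carries the app's label, kapp treats it as part of that application on the next deploy; check the kapp documentation for the currently recommended way to take ownership of existing resources.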
But because of the nature of the configuration itself, it would be a little bit hard to provide on the command line all the information you need for a build: you can provide destinations, where you want to push the image; you need to provide a name; and you can create multiple images from the same configuration. So if we look at our example build file, you'll see that you can provide multiple sources and create multiple images from it. So it would be a little bit hard to have all of that on the command line. And in terms of where you can learn more about Carvel: earlier, on the thank-you slide, we had our website, carvel.dev. We have all of our docs there as well, so you can check out the various documentation. And for other VMware open source projects, you can go to the GitHub organization called vmware-tanzu, where you'll see various other open source projects, like Velero and Pinniped, and you'll be able to find more resources there. Cool, maybe we can provide the links in the chat. Dimitri provided a link to the website too, as well. So there we go. The next question is: do all these tools work on the client only, or is there any kind of configuration that needs to happen in the cluster? These tools are all client-side; there is nothing you need to install in your cluster in order to make this work. We are developing another tool right now, kapp-controller, that is going to manage all these steps. It will run ytt, kbld, and kapp for you, and it will install applications for you. It would be a sort of orchestrator of everything I showed manually, but done inside the cluster. We are still developing it; we are not at an MVP yet.
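The build file being discussed might look roughly like the kbld Config below. The image names and paths are illustrative, but it shows why multiple sources and destinations are awkward to express as command-line flags:

```yaml
apiVersion: kbld.k14s.io/v1alpha1
kind: Config
sources:
# Build two images from two local source paths; the image names here
# are the placeholders referenced in your Kubernetes YAML.
- image: frontend
  path: ./frontend
- image: backend
  path: ./backend
destinations:
# Where each built image gets pushed; kbld then rewrites the
# placeholder names in the piped-in YAML to these resolved references.
- image: frontend
  newImage: registry.example.com/team1/frontend
- image: backend
  newImage: registry.example.com/team1/backend
```

Because all of this lives in one file, `ytt ... | kbld -f - -f build.yml` can resolve every image in a single pass, which is what the demo pipeline relied on.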
So we didn't show it here today, but we hope that in the next couple of months we'll have a version that does more or less this flow for us, inside the cluster. Thank you, Nancy and Helen, for providing the URLs. All right, any other questions? Okay, you all did a great job. Thank you so, so much, and thank you for your Q&A. If no one else has any questions, we'll go ahead and wrap it up for today. Thank you for joining us, and remember that the recording will be online later today. We will see you all at a future CNCF webinar. Have a great day, everybody.