We are recording. Thanks for joining us, everyone. Continue to sign in and get settled, but we're going to go ahead and get started with today's CNCF Live webinar, Tackling the Kubernetes Software Packaging Puzzle with CNCF Sandbox Project Carvel. I am Libby Schultz, and I will be moderating today's webinar. I'm going to read our code of conduct, and then hand over to Cora Iberkleid, developer advocate, and Gabri Brodi, solutions engineer, both with VMware. A few housekeeping items before we get started. During the webinar, you are not able to speak as an attendee, but you are able to add questions to the chat box. Please do so and send your questions there. We'll get to as many as we can at the end. This is an official webinar of the CNCF, and as such is subject to the CNCF code of conduct. Please do not add anything to the chat or questions that would be in violation of that code of conduct, and please be respectful of all of your fellow participants and presenters. Please also note that the recording and slides will be posted later today to the CNCF online programs page at community.cncf.io, under Online Programs. They're also available via the registration link you used to sign in today, and the recording will also stay on our Online Programs YouTube playlist. With that, I will hand things over to Cora and Gabri. Take it away. Okay, thank you, Libby. Thanks so much. So yeah, we're here to talk about tackling the Kubernetes software packaging puzzle with CNCF Sandbox Project Carvel. I am Cora Iberkleid, a developer advocate, as Libby said, and I'll let Gabri introduce herself. Hi, I am Maria Gabriella Brodi, AKA Gabri Brodi — shorter is better. I am a solutions engineer at VMware, and thank you very much for this time. I'm very happy to be here with Cora and the whole team. Thank you, Libby. And we just wanted to mention that we are co-organizers of a meetup in New York, so if you're in the area — and occasionally we meet online — come join us.
So today we want to cover the challenges that Project Carvel addresses. We want to give you an idea of what each individual tool is for, but then give you a larger, broader story for software producers and software consumers of multi-component applications. So we'll talk about the challenges and requirements you face if you're in one of these two groups, producers and consumers of software, and then we'll go through the project and give you a demonstration of how you can put it all together. So, yeah, Gabri, you wanna talk a little bit about challenges? Sure. So from the perspective of a software producer, I need to deliver a system that is quite complex. There are many software components, and I need to give this to my customer. So this is my challenge: how do I do this? And then from a consumer perspective, as we know, many systems are not only systems of microservices — you might have an application plus a database, or just a more complex application. As a consumer, if I want to take advantage of such a system, I want it to be easy, right? I need to deploy it to many different target locations, and those target locations might have different configuration requirements — whether it's multi-cluster or every target location satisfies a different use case. So my concern is: how do I get this complex system out to many different kinds of targets? So back to the challenges, right? Gabri, as a producer, what are some of your specific challenges? So I can't see — oh, sorry, I was looking at the wrong screen. Well, the first challenge is: if I am the one developing the system, and I am not able to do an easy installation of the system, I cannot test it quickly.
If I cannot keep up with all of the things that need to happen in order to go properly through the life cycle of my system, then how can I expect my users to be able to do that? So first of all, I need to solve the same challenge for myself: I need to be able to install the different components and to upgrade those components — everything about the life cycle management of the software in my own environment. And then, from there, improve and of course also provide that experience to my customers. Now, the system that I'm looking at can be something I need to configure for different types of use of my software. For example, I can have a system that is installed in a location where there are a lot of resources. I can rely on high availability — I can run multiple replicas of the same workload. And let's say that I'm also using an external system; I can look at the components of the system and say, I want to use, let's say, a database in high availability. Or if I deploy this instead in a smaller location, I want an easy way for my consumer to say, you know what, I need to run with fewer replicas, and I also need to use a database that, for example, is not in a high availability configuration. I also want to be able to deploy all of the dependencies that are needed for this software to run, because eventually I'm including some third-party components that I need to add. And the other thing is that I would like for this complex system to be installable the way we are used to installing software — with a disk image on our Mac, or an executable on Windows. In one step — is that really possible? So I'm just gonna — I think, Gabri, we're looking at different screens. I advanced the slide one, but you might be moving the slides on your own screen differently.
So just so you know, we're on the producer requirements. I'm just gonna go back a slide and tell them about the second half of this slide, which is the consumer side. So from the consumer side, it's the same story, right? Even if the producer delivers software in a cohesive, single package, from a consumer perspective — because my targets vary in type, in the capacity they can sustain, or maybe in the type of hardware; I might have edge locations, I might have a private data center, I might have something in the cloud that's more elastic — I can't simply take the same YAML, the same configuration that describes a multi-component system, and apply that same YAML to different locations. Or, as Gabri mentioned, there are things like databases that require secrets. So every target location has to have some kind of configurability. So we have both of those challenges, right? A complex multi-component system, as well as the need to vary its configuration depending on the target location. So: different apps, different targets, different configuration. And so I'll go past this next slide, which we covered. So from a consumer perspective, my requirements are that if I have different target locations and they all require the same system to be installed, but that system has to be installed in different ways — like maybe at the edge I need three apps, and in the cloud I can install ten apps, a sort of different setup for the same system — then I still, as the consumer managing the system, want the whole system to be centrally managed. Especially if you think about edge locations that are maybe a point of sale somewhere, or somewhere in a remote area, we can't necessarily assume that you even have the same operational personnel available to manage an edge location, right?
So it's very important to have this central management and not assume that you can have high-touch deployments. Everything should be low-touch or no-touch, automated as much as possible, and should allow for different kinds of configuration and combination. And of course it's super important for this to be reliable and repeatable every time, because you might have thousands and thousands of target locations, and you have to manage the system with confidence and repeatability. And finally, there are going to be some target environments that have access to the internet and some that are air-gapped. So again, we need to be able to consume software in a way that allows for both of these setups. So, functional requirements — Gabri, you can cover this. So in terms of functional requirements, we know that we are looking to distribute into a Kubernetes environment, so we need to be able to work with Kubernetes. Of course, GitOps is going to be an important requirement, so we need the possibility to approach a GitOps model. And for all of these reasons, we need something that really enables us to manipulate YAML configuration, because there is going to be plenty that we need to work with. But we also want a clear bill of materials: we need to know what is deployed at any time in a specific environment. So we are not going to simply use tags for our images; we need to be able to lock down the exact SHA that was used for a specific deployment. This is another important component of our solution. And the last one is that we need to automate as much as possible all of the operations. So let's see how we can do this — which are the tools that really give us an edge. Exactly. So we propose that Carvel provides all of these functional requirements for us and gives us the tool set to solve these two challenges, for producers and for consumers.
And so we want to spend the rest of the time now introducing you to this project and showing you how it can solve these two problems. So from a high level — you can see this on carvel.dev, the Carvel home site — it's a set of reliable, single-purpose, composable tools. You can see that there are about seven tools listed on the website, and each one has a very specific purpose that it specializes in. You can use each tool separately from the others, but you can also string them together in a way that gives you a complete workflow to solve the broader challenges that we're talking about. So let's dive in a little bit more. What does it mean — single-purpose and composable? For example, here are three of the tools that were shown on the last slide. ytt is a tool that focuses on processing YAML: it can do templating and it can do overlaying. So there's some overlap with a tool like Kustomize, some overlap with Helm, but it's a very effective tool for wrangling your YAML. It expects YAML as input, it modifies the YAML, and then it emits YAML as output. kbld is another tool that's part of the Carvel tool set, and this one specializes in finding all of the images that are mentioned in YAML and resolving them to their SHAs. It has to have access to the registry in which these images actually live: it goes to the registry, it looks up which SHA the tag represents, and then it emits the same YAML that it read, but now carrying the information about the actual SHA. And then we have kapp, which is sort of an improvement on the kubectl CLI. One of the challenges with kubectl, for example, is that if you do a kubectl apply on a file that has 20 different resources in it, you can't simply delete all 20 resources later without that same file, right? You have to keep the file so you can see which resources they were, or delete them one by one.
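To make the ytt idea concrete, here's a minimal sketch (hypothetical file and value names, not the files from the talk): a template annotates plain YAML with Starlark directives, and a separate data-values file supplies the inputs.

```yaml
#! config/deployment.yml -- a template processed by ytt
#@ load("@ytt:data", "data")
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: #@ data.values.name
spec:
  replicas: #@ data.values.replicas
```

```yaml
#! config/values.yml -- default inputs, overridable per environment
#@data/values
---
name: hello-app
replicas: 2
```

Running `ytt -f config/` would emit the rendered Deployment, which can then be piped into the next tool in the chain.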
And there's no way, if you change one field in that file of 20 resources and apply the file again, for kubectl to tell you what's changing and ask you to confirm the change. Are you creating something new? Are you updating something? You really don't know; kubectl is more basic than that. So kapp is a step up from that, and it's also one of the tools in the tool set. And because all of these tools expect to receive YAML, they can simply be chained together to give you a workflow: in this case, starting with some YAML, updating some fields, resolving the SHAs, and then applying it directly to your Kubernetes cluster. Now, because they take YAML in and emit YAML out, they're also interchangeable with tools that you might be more familiar with. So if you want to continue using Helm, Kustomize, and kubectl, or any other tool that reads and emits YAML, those can play very well together. Again, because they're single-purpose, you can choose what you want. vendir is another tool in the set of Carvel tools shown on the screenshot from the webpage. vendir enables you to synchronize dependencies. So for example, if you are developing applications, then you own that source code and you own that YAML configuration. But you may include, for example, a database, or some other third-party piece of software that forms part of your system — and you don't own that code and you don't own that YAML. So vendir can help you reference where the sources of those dependencies are. In the example we're showing on the screen, the source happens to come from a GitHub repo, but it doesn't have to be Git; it could be other kinds of sources, such as images. And it gives you the ability, through a simple command, to synchronize onto your local machine the files that you need from that third-party product.
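As a sketch of what that reference looks like — assuming a hypothetical upstream repo and paths, not the one from the slide — a `vendir.yml` declares where each vendored directory comes from:

```yaml
#! vendir.yml -- declares the third-party sources to sync locally
apiVersion: vendir.k14s.io/v1alpha1
kind: Config
directories:
- path: config/upstream
  contents:
  - path: redis
    git:
      url: https://github.com/example-org/redis-config   #! hypothetical source
      ref: origin/main
    includePaths:
    - redis*.yml
```

`vendir sync` then downloads the files and records the exact revision it fetched in a `vendir.lock.yml`, so the dependency is pinned rather than floating.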
So you can see here on the right, after a vendir sync, we've actually downloaded — in this case — the YAML configuration for Redis, because Redis is a dependency in our system. And we don't want to tell our consumers to go get their own Redis; we actually want to vendor Redis into our application. So vendir can help us manage that, and again, track versions and SHAs and things like that. So vendir is another tool in there. If we put it all together: if we're building a multi-component system, we would first bring in our third-party dependencies, then take that plus the YAML configuration for the rest of our system, and then we can get to a place where we can apply all of it to a Kubernetes cluster. But of course, if you're in the business of producing software, you don't usually want to just hand over a YAML file with a whole bunch of stuff in it. You might want to be a little bit more organized, or provide different ways for consumers to retrieve your software. And so in this case, there's another tool called imgpkg, which we're showing here: instead of using kapp to directly apply the YAML to a Kubernetes cluster, you can take all of that configuration and place it into a bundle, and have the consumers download that bundle instead — the same way you would send them an installer executable, for example. But in the case of Carvel, it leverages OCI image registries as a way to transport files to consumers. It's kind of like using an FTP server: you can put any arbitrary content inside of an OCI image and then use a registry to enable people to download that content. And you can see here, for example, that what's going into this OCI image is just a set of files. It's not necessarily YAML that you would directly apply to a Kubernetes cluster, and we're not using the OCI image as the executable image that would go inside of a pod.
We're using an OCI image as a method to transfer and distribute an arbitrary set of files. So we're going to use this as a way of getting software to consumers, which is also great because every image has a SHA, right? So it also gives consumers a way to know exactly what they're getting and to track that. So that's imgpkg. And that's the producing side: once I vendor in my dependencies, I process all the YAML, I specify which SHAs I want consumers to obtain, and I put that all in a bundle. Then I can tell my consumers to simply reference that bundle. They apply it to a Kubernetes cluster, and that cluster — using yet another one of the tools from that first image that showed the whole Carvel set, in this case kapp-controller, because this process is now happening inside of the Kubernetes cluster — takes over. We apply this package YAML, as shown here on the right, into the cluster, so the cluster can obtain that image from the registry in which it sits. Now all of the software that you eventually want to install in your cluster is unpacked from that bundle. And then, inside of the cluster, any additional configuration can be applied, and we can ensure that we're using the exact SHAs that were pinned by kbld — so you know that you're going to download exactly what you want. So the consumer has a declarative mechanism to consume software, which goes back to that requirement of no-touch deployments. You have an imperative workflow for the producer and a declarative workflow for the consumer. Again, the consumer can also replace any of these tools with an equivalent tool, because they are interoperable. So whether it's Helm, or whether they're fetching the image from a different source, kapp-controller actually supports all of these cases.
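On the consumer side, that declarative mechanism boils down to a custom resource. Here's a sketch of a PackageInstall (the package name, namespace, and service account are hypothetical):

```yaml
#! Tells kapp-controller to fetch, template, and deploy a package
apiVersion: packaging.carvel.dev/v1alpha1
kind: PackageInstall
metadata:
  name: hello-app
  namespace: apps
spec:
  serviceAccountName: install-sa       #! RBAC identity used for the install
  packageRef:
    refName: helloapp.corp.com         #! which package, from an added repository
    versionSelection:
      constraints: 1.0.0
  values:
  - secretRef:
      name: hello-app-values           #! per-target configuration
```

Applying this one resource is the whole install step; kapp-controller reconciles it continuously, which is what makes the no-touch, GitOps-friendly flow possible.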
We're of course highlighting that you can use an imgpkg bundle for it, but the product is a little bit more flexible than that. Okay, so I've shown you a workflow that highlights each individual tool, which by nature shows that it's very powerful, but perhaps a little cumbersome to call one tool at a time. So at this point I'm gonna hand it over to Gabri to show you how to make this even easier. Gabri, do you wanna share your screen? Sure — I think you need to stop sharing, though, otherwise I don't see — okay, now I see it over here. Okay, yes. So there is a kind of tile missing over here that we just found, and the name of the tile is kctrl. Why are we so interested in this kctrl component of Carvel? Well, kctrl is the CLI that you can use when interacting with kapp-controller. That's great, because we can use it to talk with kapp-controller and check what's happening with the applications that we deploy through kapp. But the thing that is particularly relevant for us at this point is that it helps facilitate the creation and the consumption of packages. We can think of kctrl as an orchestrator for all of the other Carvel tools — ytt, kbld, vendir, imgpkg, kapp. Using kctrl and a simple set of subcommands, you can actually reproduce the entire flow that Cora was showing before with just one command. Let's take a look. In the conversation before, Cora was saying: let's say we have this software component, and it also needs Redis, and we are using a Redis that has been offered as a Helm chart by a third party. By the way, this is also going to be one of the examples that we look at later in the demo. Well, we will start by using vendir to add the configuration of Redis to the set of configuration for our application.
And then we would use ytt, kbld, and imgpkg to go through the whole flow of creating our package. And we would need to orchestrate all of this ourselves, so it can be quite a lot. But kctrl is really interesting here, because it can actually orchestrate everything: it uses vendir to download what's necessary, and then it runs ytt, kbld, and imgpkg in the right sequence, with the purpose of creating the package for our application. So let's take a look. We have two specific subcommands in the kctrl CLI that we are going to use. The first one is package init. What package init does is generate the package-build and package-resources portions — and eventually the vendir configuration, if there are external components that we want to vendor in. As we use kctrl package init, we will see a few parameters that we supply to it, so we are using an imperative approach at this point. The output of this command is a number of YAML files, generated for us, that contain all of the necessary configuration for our package. Then the second thing that we do is call kctrl package release. What package release does is essentially take and version whatever we put in our original package, and create the metadata and package files that contain all of the information needed for this package to be deployed. It's also going to ask for the information on the registry that we want to use, so that it can use imgpkg to bundle all of these configuration files and then save them to that specific registry. Let's look at an implementation of it. First, we said we are going to start from our application: say we have this Hello application that's also using Redis. We are going to first create a package for our application that contains not only our own configuration — all of the YAML that we need to deploy our application — but also, as we said, the vendored-in Redis, so that we are going to deploy everything at once.
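As a rough idea of what `kctrl package init` generates (the exact schema depends on the kctrl version, and every name here is illustrative), the package-build file records how to build the package from local config:

```yaml
#! package-build.yml -- generated by `kctrl package init`, kept with your source
apiVersion: kctrl.carvel.dev/v1alpha1
kind: PackageBuild
metadata:
  name: giantapp.corp.com              #! hypothetical package name
spec:
  template:
    spec:
      app:
        spec:
          template:
          - ytt:
              paths:
              - config                 #! local directory with the app's YAML
          - kbld: {}
      export:
      - includePaths:
        - config
```

`kctrl package release` then reads this file, so the imperative answers you gave during init don't need to be re-entered on every release.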
Oh, this is great. But now I probably have more than one application, each with its own components, that I need to give to my consumer. In our example, we will have another application — the giant app — that we want to deploy. But of course, you can have a really high number of microservices that need to interact together in order to provide the final feature that you want to deliver to your users. So you can actually have a number of packages that you want to give to your customer. Now, if the consumer has to do the installation of each package individually, well, this is already much more work than our original idea of an install experience where I double-click on something and everything that needs to be installed in my system just goes there. Well, with Carvel, this is possible. We are going to introduce the concept of the meta package. What the meta package does is create the structure of a package around all of the packages that we need to deploy. Essentially, inside the meta package we have, as usual, the package configuration and the meta package metadata. And then we have a directory that contains the package installers — and for each of the packages that are referenced in this meta package, we have the opportunity to say exactly what configuration we want to apply. So the concept of the meta package is the one that allows us to deliver a single entry point of installation for a system that can be quite complex. And we will see how, even though we can add opinionated defaults, we can also give the user the possibility to change some of those defaults for whatever is more appropriate in their system. And one other concept that we need to keep in mind while doing this is the concept of a profile. Let's say that I have my different packages that I need to deploy.
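One way to picture the meta package's contents (a sketch of the pattern with hypothetical names, not the demo's exact files): the bundle it carries is itself a list of PackageInstall resources, one per child package, each carrying its own defaults:

```yaml
#! Inside the meta package's config: install each child package
apiVersion: packaging.carvel.dev/v1alpha1
kind: PackageInstall
metadata:
  name: redis
spec:
  serviceAccountName: meta-install-sa
  packageRef:
    refName: redis.corp.com
    versionSelection:
      constraints: 1.0.0
---
apiVersion: packaging.carvel.dev/v1alpha1
kind: PackageInstall
metadata:
  name: hello-app
spec:
  serviceAccountName: meta-install-sa
  packageRef:
    refName: helloapp.corp.com
    versionSelection:
      constraints: 1.0.0
```

Installing the meta package makes kapp-controller apply these, which in turn pulls in each child package — that is what gives you the single entry point.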
I create the meta package that takes all of them and knows how to deploy all of them. And I'm targeting this at the system of my customer, customer A. But then I have another customer, customer B, that has a slightly different need. For example, in our demo later on, we will target two different situations: one where I have a customer that is already using Kubernetes and a lot of this technology at scale, and already has Knative implemented. So when we deploy to this customer, we are going to use a deployment type that is Knative. But eventually we can deploy to another user that starts smaller than customer A. So customer B is going to use a profile that is slightly smaller — it's not a full profile. At this point, it's going to have a different type of configuration, and we can allow for this to happen. The last element that we need to consider is to put everything into one repository, so that all of these packages are contained in one repository. And for this too, kctrl provides a handy command that we can use to create this repository and release it. And Cora, do you wanna talk about the package consumption, or do you want me to just flip past this? You know what, I'm a little bit concerned about our time. So why don't we go to the demo, and then we'll come back to this — we'll do the producer side and then we'll show you the consumer side. Perfect. So let's look at the package authoring. We have a repository that we can share later with everything that we are going to do in this demo. One thing that I want to do right now is just quickly set up my environment so that we can really speed up the demo. All of the information is in this README file, so you can take a look at it later on. So I'm gonna start my demo over here, and let's take a look.
I have my giant application. I already built the giant application and the Hello application — everything that I'm using is already built; I already have the containers. What I'm interested in right now is only building the package for this application. So I'm taking a look at the configuration files that I need in order to deploy the application on a Kubernetes cluster, and I see that right now I have those files that contain the standard configuration. Now, let's start with package init. I want to start packaging my application, and the way I'm gonna do this is by giving it a name — the package name for the giant app. Right now I have everything downloaded locally, so I'm gonna use a local directory, and in this local directory, the config directory is the one that contains the information I'm interested in, so I'm providing just that. And what happens is that kctrl has already built the files that I will need later on to build my package. You can see that the package-build file is already there, with some of the information I already provided, and a second file has been generated with the information needed to consume the package. So a lot of the YAML is automatically produced by kctrl. Now let's take a look — I wanna create the actual package for my application. First we initialized the package, and now we want to release it, and to release the package we need to say where we are going to save the imgpkg bundle. The parameter that I'm providing over here is exactly my registry. What I'm gonna say over here is that I want to release the giant application, this is the version, and I also wanna put everything inside this repository directory. And as you can see, kbld is running behind the scenes — kctrl runs the other tools from the Carvel toolset on its own.
And now I can see that what was empty before now contains two files: one with the release, and a metadata file. In the release file, we now have the specific image bundle that we are going to use, which we are pushing to our registry. We have the version, and it also points out that the images I'm using have been saved over here with their SHAs, so they are locked down. Now, what happened to those files before and after the release? Well, if we concentrate on the package side, we went from version zero to the specific version. We added information for the image that we are going to use, and we also added information on kbld and where all of these files are. Plus, we are saying: look, there is an OpenAPI specification with the parameters that can be set. And there you are. This is really handy, because now you also have an easy way to query these parameters and know what you can configure. And we are going to do the same thing for the Hello app. So I'll try to go fast over here — again there's going to be an init, and it's complete, perfect. Now we want to release the package, and again, to release this package we need to add the repository that we want to release to. So we put this, and it's going. Well, now let's go to the Redis application that we also want to bundle in. What we are doing here is an init where kctrl is going to vendor in our Redis configuration: first of all, we are now going to use a Git repository as a source, not a local directory as we did before. And as you can see, there are also other options that can be used to accomplish the same thing. So now what I want to give is the URL where the Redis files are contained, and I'm going to say: take from there only the files that start with Redis. And you can see that vendir is now orchestrated directly.
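The released package file is a Package custom resource; a trimmed sketch (names, paths, and the digest are placeholders) shows where the pinned bundle and the OpenAPI schema live:

```yaml
apiVersion: data.packaging.carvel.dev/v1alpha1
kind: Package
metadata:
  name: helloapp.corp.com.1.0.0
spec:
  refName: helloapp.corp.com
  version: 1.0.0
  valuesSchema:
    openAPIv3:                          #! queryable list of configurable parameters
      properties:
        hello_msg:
          type: string
          default: hello
  template:
    spec:
      fetch:
      - imgpkgBundle:
          image: registry.example.com/packages/hello-app@sha256:<digest>  #! pinned by SHA, not tag
      template:
      - ytt:
          paths:
          - config
      - kbld:
          paths:
          - .imgpkg/images.yml          #! kbld's image lock, for exact image resolution
      deploy:
      - kapp: {}
```

The `@sha256` reference and the images lock file are what give you the bill-of-materials guarantee discussed earlier.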
So now I see that all of these files were downloaded from upstream, plus all of the files — the ones in red — that kctrl already produced for us while creating the package. You can see in the vendir output that it also produced the YAML for rendering in all of the files. So now, again, we are going to push this as an image bundle into our registry. And at this point, the only thing that we need to do is to really work with the meta package. We already populated a couple of pieces of information over here — in particular, let's look at the Hello app. What we are doing is using the packages that were built before, but we are also adding some overlays, some Starlark syntax through ytt, to change information so that it is specific to whether the profile is full or not. If it is full, it's going to be Knative; if it's not full, it's going to be core — so instead of using a Knative Service, it's going to deploy using standard Deployment and Service files. And then we are also changing some of the values for some of the parameters in the configuration of our application. So now, as we do this, we see that we have packages for the three applications in our repository. And now we want to add the other package — the meta package — that will contain the information on how to deploy the applications. So again, we go with one config, and sure enough, now we can do our release. You can see this is quite repetitive at this point. I pasted it too early — okay. And now I have the meta package added to the repository. So now the only thing that I need to do is to create a repository with all of these packages and release it. So I am adding information for the repository and for the registry where I'm saving all of this, and that's it. This is one of the applications that I push — you can see there are a number of pushes, but that's not really relevant at this point.
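A sketch of how a profile switch like that can look in ytt (hypothetical value names; the demo's actual overlay may differ): the template emits a Knative Service for the full profile and a plain Deployment otherwise.

```yaml
#@ load("@ytt:data", "data")

#@ if data.values.profile == "full":
---
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: #@ data.values.name
spec:
  template:
    spec:
      containers:
      - image: #@ data.values.image
#@ else:
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: #@ data.values.name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: #@ data.values.name
  template:
    metadata:
      labels:
        app: #@ data.values.name
    spec:
      containers:
      - name: app
        image: #@ data.values.image
#@ end
```

The same values file drives both branches, so the consumer only flips `profile` and everything downstream adjusts.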
So, Cora, back to you. Awesome, thanks. Sorry — I didn't mean to make you too nervous with the time. But okay, so I'm going to share my screen, and basically I'm going to show you the consumer experience now. Okay. So just to recap, right? Gabri, as a very sophisticated software producer, has built a system that I want. The system has three different individual applications: Redis, a Hello app, and a giant app. Each one of these has been converted into a package by using Carvel. I still don't want to have to download three different things, so Gabri has created a meta package so that I can have that single-touch, single-click experience and just say: install the meta package, and that will install each of those three individual packages for me. And then, in order for me to be able to obtain that software easily, Gabri put it into a repository — created a repository out of it and wrapped that into an OCI image — so that I can easily download this OCI image. So again: a single download and a single-step installation with a meta package. So my experience as a consumer — let me just show you the slide quickly — is that I'm also going to use kctrl, and I'm going to add the repo. I'm going to do a repo add, take that repository Gabri created for me, put it into my cluster, and then I'm going to say: I want to install the meta package, and expect that to install all of them. And the other thing, which Gabri also showed in an earlier slide — I'll show you — is that she's put a lot of configuration into her packages, and she's introduced the concept of a profile. So now I can look at my different target locations and decide: is it profile A or profile B? In this case, we're using a large profile and a small profile as examples, but it could be any profile you want. And in the large profile, as Gabri has configured it for me, I will have all three applications installed.
If I do a small profile, I only get the Hello app with Redis, but not the Giant app; that's too big for my small targets. I also have the ability to tailor other configuration values for every target. So I'm going to show you how I can take advantage of what Gaby has built for us. Okay, fifteen minutes. First of all, I'm working on a cluster that has kapp-controller installed, because I'm going to work with that declarative consumer experience: I apply YAML to my cluster to tell Carvel, in this case kapp-controller, how to download that repository and how to unpack it. kapp-controller is the Carvel tool that runs in the cluster and does those things for me. The other thing I happen to have here is Knative Serving with a CNI, because it's always great to have Knative; it makes running applications easier. In the profile configuration that Gaby prepared for me, she assumed that a full cluster will have Knative installed, so the configuration will produce YAML for a Knative Service. If I have a small cluster and maybe I don't have Knative, then that tailored configuration will automatically produce YAML for a simple Deployment and Service. In my case, my cluster has Knative. Those are the only two things I've prepped my cluster with; other than that, it's a brand new cluster. So first of all, I create a namespace so that I can do my work as an operator, and then I add the repository that Gaby created. You can see `package repo add` is the first command I run, giving it the URL where Gaby published that OCI image containing all of these installation files. So essentially I now have all of the files that I need in my cluster. Nothing has been installed yet; the Hello app and the Giant app aren't running, but I have a single artifact that I can use to obtain all the software.
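The declarative side of this "repo add" step can be sketched as a kapp-controller `PackageRepository` resource; the repository name, namespace, and image URL below are illustrative placeholders, not the demo's actual values:

```yaml
# Equivalent to roughly: kctrl package repository add -r demo-repo \
#   --url registry.example.com/demo/packages-repo:1.0.0
apiVersion: packaging.carvel.dev/v1alpha1
kind: PackageRepository
metadata:
  name: demo-repo
  namespace: pkg-operator       # the namespace created for operator work
spec:
  fetch:
    imgpkgBundle:
      # OCI image containing the Package and PackageMetadata definitions
      image: registry.example.com/demo/packages-repo:1.0.0
```

Once applied, kapp-controller fetches the image and makes its packages available for installation; nothing is actually deployed yet.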
So we can already see how Carvel facilitates software distribution. Then I can inspect this installer that I downloaded and see what it contains: a meta package and three individual packages. And because I know what I'm installing, I'm interested in installing only the meta package, which should install Redis, the Hello app, and the Giant app for me. So now I want that single-click installation. I create a namespace for these things to actually be installed into, and before I install the meta package, I think about my target location and what configuration values I need to set. Here I can say this is a full profile: my largest cluster, maybe in the cloud, elastic. I've just created this namespace, `apps`, so I'll say that's where I want all these applications to be installed. The Hello app and the Giant app happen to have configuration values; during Gaby's generation, you saw that an OpenAPI v3 schema was automatically generated from the schema each app had, and I can use that to see what values I could possibly set. Here I've decided how I want to name these apps in my cluster, and I've set the deployment type; it's the default anyway, but just to highlight that we're going to use Knative, which is the default for the full profile, so it doesn't strictly need to be set here. And for the Hello app, I can set a message that the app will return, so I'm going to say "full happy package"; I'm customizing it a little bit. So now I install a package, but I'm choosing to install the meta package, version 1.0.0, this one and not the other individual ones. So this is my installer experience with a single command.
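A values file for this full-profile install might look like the sketch below; the package name, value keys, and version are illustrative assumptions based on the walkthrough, since the real keys come from the package's generated OpenAPI v3 schema:

```yaml
# values-full.yml -- hypothetical keys for the full profile
# Installed with something like:
#   kctrl package install -i meta -p metapackage.corp.example \
#     --version 1.0.0 --values-file values-full.yml
profile: full
namespace: apps                 # where the three apps should land
hello_app:
  name: hello
  message: full happy package   # value returned by the Hello app
giant_app:
  name: giant
deployment_type: knative        # default for the full profile; shown for clarity
```

Only the meta package is named in the install command; it transitively installs Redis, the Hello app, and the Giant app.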
I'm actually going to install three applications; the meta package is not really an application, it's just a way to wrap the other three applications so that they are all installed as one thing. And the other thing that's happening, of course, is that instead of kubectl we're using kapp in the background. If I make this a little smaller, you'll see that kubectl would just apply a bunch of YAML and not give you this kind of summary, but because we're using kapp, I can see all of the different things it's installing. It extracts the PackageInstall, and then kapp-controller takes that PackageInstall configuration and actually goes ahead and installs the packages. So now if I ask which packages have been installed, I can see that they all have: the one I manually installed and the other three. So what's the effect of installing these packages? Let me make this a tiny bit smaller. I have two apps; these are the Knative Service apps. Redis is a different kind of product; it doesn't have a Knative service associated with it, so of course it doesn't show up on this screen. The Giant app is big, so it's not that it isn't working, it's just still starting up; but the Hello app, which is light and quick, has already started. So I can already test this application, and I know that Redis is running well because every time I refresh the page, it stores the count in Redis. And you can see that we have "full happy package", which was the configuration value I set in my file. So what we've seen here is that as a consumer, I can tailor my configuration, I can simply download one artifact, which is the repository, and then I can install one package, which is the meta package.
And the result is basically a single-step, well, a two-step installation of an entire multi-component system. Now, time check, okay, 1:52. Let's talk about what happens if I want to change something, any kind of update. In this case the example I'm going to show you is configuration values. Let's say that instead of "full happy package" I want to change the value to "medium happier package". And also, I've realized that maybe my cluster is too small and I have to uninstall Knative, so I want the deployment not to use Knative; I just want it to use a Deployment and a Service, the core Kubernetes resources. So those are my new configuration values, and I simply update the package. Again, I'm only working with the meta package, because this one will transitively update the other three as necessary, and I'm providing my new values file here. So with a single command, I'm basically updating the software that's installed. We'll give it a second; it's reconciling. And again, kubectl would simply apply the YAML and not tell you how things are going, but since this is using kapp in the background and interacting with kapp-controller, it continues to check whether the reconciliation of the resources is done. It also tells me that with this change we updated two resources and created two new resources, so it's very informative as to what's actually happening, and of course it gives me the logs in the background as things progress. So we'll give it another few seconds to finish, and meanwhile I'll check the chat: what is the advantage of using Carvel to deploy on Kubernetes when compared to other GitOps solutions like Argo CD or Flux? Carvel can be in a workflow together with Argo CD or Flux; again, this is composable, you can pick and choose. But Carvel is serving a much broader sequence of uses.
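The update described here could be sketched as a second values file fed back into the same installed meta package; again, the keys and command shape are hypothetical stand-ins for the demo's actual files:

```yaml
# values-medium.yml -- hypothetical updated values
# Applied with something like:
#   kctrl package installed update -i meta --values-file values-medium.yml
profile: medium
deployment_type: deployment     # switch from Knative Service to Deployment + Service
hello_app:
  message: medium happier package
```

Because only the meta package is updated, kapp-controller re-reconciles the three child packages and swaps the Knative Services for core Deployments and Services.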
With this imperative packaging model plus the declarative one. You can see that it has different pieces of functionality, but they can certainly work in concert with other tools; if you're already using Argo CD or Flux, you can use them together. So this has finished reconciling. If I can add to that: with Argo, it's only deploying, so it's just the CD portion; it doesn't help you with building all of the structure for the package and the one-step-install type of experience. So we've updated our installed package. If we look at what's now installed, we can see that again everything is installed and everything is reconciled. And if I try to get the Knative services, remember last time we got a Knative service for both applications; now they're not there, and that's because in our new configuration we chose not to use Knative, we told it to use core resources. So if I just do a `get all` in that namespace, I can see that the Giant and Hello app pods are there, and all my Redis pods are there as they were before, with just the Services and the basic resources. So that one also works. Let me do a time check. Okay, so it does work, and it shows the new message; I'm going to skip proving that because I want to show you some other stuff. So what if the target location is air-gapped? This is also a very useful scenario. If I can't access the image registry where all of these images are located, then how could I install? Because when I install the meta package, what actually happens is that it has to reach back out to the registry; in this case, this registry is running outside in the cloud, so I need internet access to go and get all of the package images. And then I need to go back out and get the actual OCI images for the Hello app, the Redis follower, the Redis leader. So there's this transitive relationship of images, and I keep having to go to the internet to get them.
But if I'm working in an air-gapped environment, I don't have access to all of those. So imgpkg actually gives us a copy utility. What it does is look through that image and all of the YAML configuration inside of it, find all of the images on the internet, bring them down, and copy them to a destination registry. You could either use a tarball in the middle to move everything into an air-gapped environment, or, if you have a jump box that has access to both the internet and your air-gapped environment, you could do it in a single command like I'm showing here: you're copying from the publicly accessible registry to your local air-gapped registry. And you can see here, after it's done that copy, I can pull from my air-gapped registry; this is just to show you what was pulled, and I'm going to extract it into a temp directory on my local machine. If I look inside that temp directory, I can see that all of the files are there. And if I explore what's inside this images file, which is just a piece of metadata, these are the four packages that were actually in the repository that I copied, but there's this extra piece of information that tells me about the images. What it has is a reference that says, for example, this Giant app image that was in this network-accessible registry is now in your air-gapped registry. So it's basically giving me the new image location, and between the Carvel tools this will be automatically replaced, so when I want to do the local install, it's going to pull from the air-gapped registry. And you can see that it has pulled all of the different images that it needs. The last thing that we wanted to show you is: what if your target location is tiny? Let's say your edge location is so small that you don't even want to do this air-gapped copy of all of these images.
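The copy step and the relocation metadata being described might look roughly like this; the registry hostnames and the digest are placeholder assumptions, and the ImagesLock excerpt is a sketch of the format rather than the demo's actual file:

```yaml
# Relocation in a single command (jump box with access to both sides):
#   imgpkg copy -b registry.example.com/demo/packages-repo:1.0.0 \
#     --to-repo registry.airgap.local/demo/packages-repo
#
# Excerpt of a .imgpkg/images.yml (ImagesLock) after relocation --
# the image now points at the air-gapped registry:
apiVersion: imgpkg.carvel.dev/v1alpha1
kind: ImagesLock
images:
- image: registry.airgap.local/demo/giant-app@sha256:0000...  # placeholder digest
```

Because the lock file records images by digest, the Carvel tooling can resolve every reference against the relocated registry, so installs never need to reach back out to the internet.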
You really only want to be able to provide the few images that the tiny target will use. Well, then you can use vendir, for example, to pick and choose; I'll just show you the command here because I think we're running short on time. You could basically generate a new repository and tag it as a small one, and in this case we're not even including the Giant app, so the Giant app image itself is not copied to the air-gapped edge location. That way you can really tune the size of the repository for your target as well. So I think that's all we have time for, demo-wise, but that's really what we wanted to show you; anything else would just have added a little color to that. The other thing we wanted to mention, as far as VMware dogfooding this technology, because VMware is behind Project Carvel: VMware's commercial software, VMware Tanzu Application Platform, uses exactly this strategy to package and distribute our own application platform. So this is, under the covers, how VMware approaches this problem for ourselves and how we try to make it really easy for consumers. It's a system that has 45 different components, and this is how we've made it easy for ourselves to install and test, as Gaby was saying, solving the producer's challenges, and also how we make it easy for people who need to install this platform to have that one-click download and one-click installation, while also supporting different kinds of profiles. So that's an example, not the only one, but of course the one that we at VMware are most familiar with, of how Carvel is used in the wild. And yeah, Gaby, any final words? I was just answering the question in the chat asking what the options are for deploying applications multi-cluster. Well, you need to build appropriate package repositories so that each cluster will install the components that are needed there.
Yeah, I think the answer to that question is also that Carvel doesn't answer it specifically, but there are other solutions to the question of how you would take any piece of YAML and deploy it on multiple clusters. There are different ways to solve that problem, probably Argo CD, Cartographer, different solutions; multi-cluster is not one of the problems that Carvel is focused on. But it does help you make things uniform: what needs to land at different targets can be managed in the same easy, low-touch, central way, with custom configuration for every site. Does that answer the question, I guess? And then, yeah, Gaby, do you want to wrap it up? Well, there is a screen, can you close it over there? I feel like we were able to show everything that we wanted, so that's great. And as we were saying at the beginning, the GitOps approach was a big thing in our minds, as well as automation and an easy way to consume, not only to build the packages but also for the installation experience. The two demos that we were running are also available in the repository, along with a few links to learn more about Carvel. So we hope you enjoyed this, thank you very much. Thank you both so much. Thanks, everyone, for joining us. Cora and Gaby, thank you for your expertise. And everyone, remember that we will have the slides and presentation online later today. Join us again for another CNCF webinar and we'll see you soon. Thanks, everyone. Thank you. Thank you.