All right. So we're going to talk about application configuration management at the edge — how to tame thousands of deployment targets. I'm Cora Iberkleid, a developer advocate at VMware. And I'm Maria Gabriela Brodi, a solution engineer at VMware. Both of us co-lead and host the Cloud Native Meetup in New York, so if you're there by any chance, check out our page and join. We also have virtual events, so no excuse not to come. We're going to walk through what the problem is — why we're thinking about configuration across many edge locations — the requirements, and then introduce you to Carvel, the set of tools we're going to leverage, and show you how we did it: our proposal for why this works nicely with Carvel. So we're in a scenario where we have a lot of different locations where we want to deploy our software, and they're really different: some are in the cloud, some are edge locations with a single server, some are bigger. And even the ones that look almost the same — the single-server sites, or the various distribution centers — can have different types of hardware. So how does this affect the way we package things? Well, things can be very different, right? You can't just take the same YAML that you created for one deployment and apply it to every location. There are things like capacity and hardware type, so certain configuration values might differ. But also, a scanner might need one or two apps to function, while a cash register might need some others, and a full server might need a whole other set of applications. So we have variations in configuration that encompass the subset of components you need to deploy, plus maybe some common configuration, and then some very specific configuration at the edge. All of these things can change. Also, different locations might have different upgrade life cycles.
And the other thing to consider is that we're pushing software to machines or places where you don't necessarily have an ops team at the ready to make changes or respond to incidents. So we have to take that into consideration. So if we think about what the ideal solution would look like — well, let's start with lack of expertise. Lack of expertise means we need to do as much as possible centrally, with low-touch to no-touch deployment at the edge. Central management also answers another need: we want to keep track of what we're delivering to our edge locations. So we need central management. We need to allow for the fact that not all applications go to all locations — there are different mixes, different profiles of applications. We need to do this in a way that keeps scaling as we add applications, add edge locations, and change the way we package things. And we also need to think about the air-gapped scenario, because often enough these edge locations won't have direct connectivity. Of course there are always more requirements, but let's deal with this core set. So, what can we do? Do you have an idea? I do, actually. I think we should use Kubernetes. Kubernetes already gives us something that is declarative and self-healing. We want to make sure we're using GitOps — that will help with the central management, giving us a declarative description of what we want that we can drive centrally. And because we're going to have all of this declarative state, we need a tool that is really good at wrangling and manipulating YAML — slicing and dicing it.
Because we need to know what we've deployed at all these locations, and have very strict control over exactly what image is running where, we want a very clear way to register, lock, and track the bill of materials that we're delivering to the edge. And we wanted it low-touch, no-touch, so it has to be true, automated GitOps. And we can use the Carvel tools — we're here to talk to you about Carvel. The idea is that Carvel proposes a set of tools, each of which is single-purpose: one tool does one thing, and does that thing very well. You can then use each tool to build your workflow for managing the configuration and deployment of Kubernetes applications. This slide visualizes those two concepts — that each tool has a single purpose. Carvel, for example, includes a tool called ytt. Its purpose is to process YAML, similar to Kustomize or Helm. And I would argue, standing here, that it's superior, because it can do both the templating that Helm does and the overlaying that Kustomize does — both in one tool. And it embeds a programming language, so it's very powerful. kbld is a tool specifically for finding the images referenced in your YAML, replacing their tags with digests (SHAs), and producing the bill of materials we're looking for — which we can use as a lock as well as for informational purposes. There's a tool, kapp, to apply YAML to Kubernetes, the same way you might use kubectl — but it takes the set of resources you're applying together and gives you a way to control them as a whole, as an application. So you can, for example, delete them all at once. All of these tools are composable into a workflow because they ingest YAML and they emit YAML.
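To make the composability concrete, here is a minimal sketch of the kind of pipeline being described — render with ytt, resolve images with kbld, deploy as one named app with kapp. The directory layout and app name are illustrative:

```shell
# Render YAML templates, resolve image tags to digests, and deploy
# the whole set of resources as a single application called hello-app.
ytt -f config/ \
  | kbld -f - \
  | kapp deploy --app hello-app --file - --yes
```

Because each tool reads YAML on stdin and writes YAML on stdout, any stage can be swapped out — for example, piping Helm's output into kbld instead of using ytt.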
And because of that, they're also interoperable with any other tool of your choice that does the same. So as you can see here, if you don't like ytt — if you're used to working with Helm, or you have a lot of Helm-defined applications already — you can still run kbld against that output and resolve all your SHAs. If you prefer kubectl rather than kapp, mix and match as you like. Either way, you can build this workflow, and in this example we're showing a workflow that ends in YAML that can be applied directly to Kubernetes. But that doesn't get us far enough for this edge scenario, because we don't want to send a bunch of YAML files to edge locations — we want to do a little better. So we started thinking: why can't we package it? Why can't we use another tool, still from the Carvel tool set, called imgpkg? What imgpkg does is take a set of configuration files — actually, any files — and from those files it creates an OCI image that contains exactly that file system. Think of it as FTPing a tar.gz file: that's what you get, but you get it through the registry, and it's an easy way to bring things to the edge. For example, I bundle this file system inside an OCI image; I can then either use the registry's replication, or, if I don't have connectivity, I can save it to a USB drive and walk it over with all of the files that were originally in the bundle. Now, if this doesn't sound safe — of course, there are safeguards around who can access your registry and pull the image. So right now, this is what we're using to build the configuration bundle with all of the files we need to deploy to each location. When we want to consume it, we would rather use some kind of declarative approach, because of course we know we want to go with a GitOps type of construct. This is a CRD for Kubernetes.
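The "new FTP" idea looks like this on the command line — bundle a directory as an OCI image, push it, and pull it back down elsewhere. The registry path, tag, and directory names are illustrative:

```shell
# Package a directory of config files as an OCI image and push it.
imgpkg push -b registry.example.com/edge/hello-app-bundle:1.0.0 -f bundle/

# Later (or on another machine), extract the bundle back to a file system.
imgpkg pull -b registry.example.com/edge/hello-app-bundle:1.0.0 -o /tmp/hello-app-bundle
```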
So we can always use this imperatively, but we can certainly also do it in a GitOps approach. What we have here are different ways to fetch the bundle. When we take the bundle — from the registry in this case, though it could also be from a file — we apply some changes, manipulating it with the tools that Cora was describing before. We lock in a specific SHA, so we know exactly that it's not "version two", it's this SHA. And we deploy it to the cluster. So this is how we can describe and consume the package. And as before, we can source it from different locations and use different mechanisms to work with the YAML files. So it looks like we're close. Yeah — we have a solution. We have a tool set that helps us do the packaging with the imperative tools and the unpackaging with the declarative ones. And now we get to the problem at hand: how do we avoid chaos when we're talking about all these edge locations? What we're proposing is: look at your edge locations and try to group them into what we're calling profiles. Here we're going to work with, let's say, a large profile and a small profile. The difference is that the edge locations matching the large profile need to receive two applications, while those in the small profile only need to receive a single application. And the single application that the small profile receives is actually the same source code and base configuration as the one that goes to the large profile, but with slightly different configuration. Here you can see that concept mapped to the way we organize files on our file system. These could be different Git repos — we're just showing it all off of one root, but you can slice it into different repos depending on your RBAC and things like that.
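The declarative fetch-manipulate-lock-deploy flow just described is what a kapp-controller App resource expresses. A minimal sketch, assuming an imgpkg bundle laid out with a `config/` directory and a `.imgpkg/images.yml` lock file (all names and the registry path are illustrative):

```yaml
apiVersion: kappctrl.k14s.io/v1alpha1
kind: App
metadata:
  name: hello-app
  namespace: edge-apps
spec:
  serviceAccountName: edge-deployer
  fetch:
    # Could also be a git repo, an HTTP URL, or an inline file.
    - imgpkgBundle:
        image: registry.example.com/edge/hello-app-bundle:1.0.0
  template:
    - ytt:
        paths:
          - config/
    - kbld:            # lock in the exact SHAs from the bundle's lock file
        paths:
          - "-"
          - .imgpkg/images.yml
  deploy:
    - kapp: {}
```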
So for the hello-app application, for example, which is under both the large and small profiles, what we do is take that application and build the container image — the actual runnable image. It has, of course, its default configuration: if you're using Knative, maybe that's a Knative Service; if not, maybe a Service, a Deployment, an Ingress — whatever that is. Then we have some values that are specific to these profiles but common to all locations within them. We take those three pieces of input and use imgpkg to generate, for each application and each profile, a package. At the end of this, we end up with five bundles in our registry. Now, for the large profile in this picture we'd have three bundles — but we still don't want to be in the position of sending three different artifacts to an edge location, right? Remember: low-touch, no-touch, a really simple experience. So yes, Gabby, what can we do with those? Well, it looks like the idea of these profiles matches really well with the idea of creating a package repository, where the package repository contains all — and exactly only — those packages that need to go to that specific kind of location. This is a new resource we can bring to Kubernetes, a CRD that allows us to take all of these packages and bundle them together, and — guess what — now we can have another imgpkg bundle that we save in our registry. And keep in mind: when we built the first bundle, we had a reference, of course, to the application image — our executable image. And now when we bundle those together, we again have references — we're basically taking a file system of YAML files and making it distributable as an OCI image. So this package repo has package definitions that now simply reference the packages we created before.
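A sketch of what the per-profile repository resource might look like, assuming the profile's packages have been bundled and pushed as described (names and registry path are illustrative):

```yaml
apiVersion: packaging.carvel.dev/v1alpha1
kind: PackageRepository
metadata:
  name: large-profile-repo
  namespace: edge-apps
spec:
  fetch:
    # One bundle containing exactly the package definitions
    # that locations in the "large" profile should receive.
    imgpkgBundle:
      image: registry.example.com/edge/large-profile-repo:1.0.0
```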
So it's sort of recursive: we have an imgpkg package repository pointing to bundle images, and inside those images there are pointers to our application's runnable container, right? The imgpkg concept allows us to recursively nest bundles in this way. Awesome. So what's the experience at the edge, then? Well, there are two resources we're interested in. One is "what is my repository?" — that's the PackageRepository. And the other is the PackageInstall, because I received some configuration, but here I need to customize it. In this example we put in just a few simple parameters as values, but we can do more: we can apply more ytt, process the incoming files further, and eventually have the agility to change components too. Do I really want an Ingress here? Or maybe, you know what, in this location I'm not deploying the Ingress — I'll remove the Ingress component and just go with the NodePort approach. The only other thing I'd add to this slide: think of the PackageRepository as the installers. When you're going to install a program on your computer, first you download the installer, and then you click on it to actually install. The package repo contains your installers, basically, but it doesn't mean you've installed anything. Then, for every instance of the application you actually want to apply to Kubernetes, you create a PackageInstall. Yesterday somebody compared it to a class and an instance — the installer and the installed application. But just applying this to the cluster doesn't result in continuous reconciliation. So can we do something? Yes — that was the last requirement, to really make this a GitOps process. Carvel also includes a resource called an App, which is what allows you to subscribe to a Git repo, or some other source of truth.
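The "installed instance" side might be sketched like this: a PackageInstall selecting a package from the repository, plus a Secret carrying the location-specific values (package name, namespace, and value keys are illustrative):

```yaml
apiVersion: packaging.carvel.dev/v1alpha1
kind: PackageInstall
metadata:
  name: hello-app
  namespace: edge-apps
spec:
  serviceAccountName: edge-deployer
  packageRef:
    refName: hello-app.example.com
    versionSelection:
      constraints: 1.0.0
  values:
    - secretRef:
        name: hello-app-values
---
apiVersion: v1
kind: Secret
metadata:
  name: hello-app-values
  namespace: edge-apps
stringData:
  values.yml: |
    # Location-specific overrides for this edge site.
    replicas: 1
    ingress_enabled: false
```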
And as that source changes, it reconciles the state in the cluster. So in this example, you could point it at the PackageInstalls, and then as the location changes — maybe a value in a PackageInstall — it gets reapplied. Or you could set it at the repository level, if new repositories get published and you want the automation there. Either way, the App allows you to subscribe to a source of truth and trigger that GitOps automation. Good. So before we go to the final thoughts — maybe a demo? All right. Let's start by packaging this application for distribution. First, let me show you my registry, GCR. Right now in our registry there are the application images — the giant app has been built, and we only have one package built, for the giant app. So what we're going to do is build the other application, the hello app, add the package for the hello app, and create a repository with both. First of all, let's look at this: we have some external references. We're using vendir, another cool tool from Carvel, which allows you to import third-party software that's been distributed as YAML. Here you can see how we have our configuration for this. So we didn't mention this tool before, but Carvel includes a tool called vendir, which allows you to vendor software in. We own the code for hello-app, but the app has a dependency on Redis, and we've chosen a Redis provider. And of course, because we want to deploy that to Kubernetes, all we need are some Redis config files. So we're using vendir to point to a Git repo that we don't own and copy those files into our repo. And then, with ytt, we can apply overlays to that. And that's a really good way to think about overlays versus templated values, right?
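A minimal sketch of the kind of vendir configuration being described — syncing third-party Redis manifests into a vendored directory. The upstream repo URL, ref, and paths are all illustrative:

```yaml
apiVersion: vendir.k14s.io/v1alpha1
kind: Config
directories:
  - path: vendor/redis
    contents:
      - path: .
        git:
          url: https://github.com/example/redis-k8s-manifests
          ref: v1.2.3
        includePaths:
          - manifests/**/*
```

Running `vendir sync` then copies those files into `vendor/redis`, where ytt overlays can be applied on top without ever editing the upstream YAML.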
You want to apply overlays on YAML you don't own, because you can't introduce placeholders there for new values. So we've used vendir to copy in these Redis config files that we're taking from a third party. And we check, and okay, good — there's a vendir configuration; we sync it anew. At this point, let's look at all of the configuration files. We have an overlay directory and a values directory; in the overlays, the app YAML contains the overlay for the application. So this is the manipulation that's done. Notice also that this directory name starts with the profile name, right? So here we've taken the base configuration and we're adding to it the common configuration for all large-profile target deployments. Let's take a look, for example, at the values. For once something isn't scripted, I'm going to make a mistake. Okay, values — I think if you just copy-paste that... oh yes, because it's all values. Here we have the values we can override for this one, and we can also check the overlays and see all the ytt overlays we wrote. I think we just changed the number of replicas for this example. It's going to be a lot of YAML when you have a lot of changes to make to your application, a lot of applications to distribute, and different profiles — so of course it's better to have some kind of tooling on top of all this for automation. Now let's talk about kbld. Yeah, so basically we've got all this YAML, and you don't want to read through it to see what images are there. kbld gives you a really easy way to comb through it and say: hey, there are three images referenced in your YAML. Two of them are Redis — a Redis leader and a Redis follower — and one is the hello app that we built.
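The replica-count change mentioned here could be expressed as a short ytt overlay — a minimal sketch, assuming the base config contains a Deployment named `hello-app`:

```yaml
#@ load("@ytt:overlay", "overlay")

#@overlay/match by=overlay.subset({"kind": "Deployment", "metadata": {"name": "hello-app"}})
---
spec:
  replicas: 3
```

Because this matches the existing document rather than templating it, it works equally well on vendored YAML we don't own.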
So the cool thing about kbld is that, since we have the source code for hello-app, if that app isn't built yet, we can actually configure kbld to build it for us — whether with a Dockerfile or with buildpacks; you have a little bit of choice there about how it orchestrates the build for you. Or, if you've already built it and already resolved these images to SHAs at a prior point, you can take that bill of materials and tell kbld: hey, just go ahead and use these SHAs that I had already locked in at a prior point in time. So this is kbld looking through the YAML and helping us resolve those decisions. And specifically, we already have this images lock file present on our machine, so kbld isn't going to overwrite it — it just reads it and says: oh, this is already stored; I'll use the same. And this is what it looks like. This file will accompany your application throughout its life: we're not actually modifying the YAML with those original image names and tags, but we're accompanying the application with a lock file that kbld — or actually imgpkg, when it unbundles — will always be able to resolve into the final YAML. Okay, now let's bundle this whole set of files and push it to our registry. As we push it, we now have a new hello-app bundle. So if we refresh, now we have one more. Yay. So seriously, if you ever have big files — imgpkg push and imgpkg pull, that's your new FTP. And again, if you want to take a look, let's just pull the thing back down and see that it's exactly the directory we bundled. But look — which values can I configure? Oh yes, this is the other cool thing. The original application, as we said, has its own configuration.
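The resolve-and-lock step being shown might look like this on the command line — the paths are illustrative, and `.imgpkg/images.yml` is the conventional location imgpkg expects for a bundle's lock file:

```shell
# Resolve every image reference in the rendered YAML to a digest,
# writing an ImagesLock file that travels with the bundle.
ytt -f config/ | kbld -f - --imgpkg-lock-output .imgpkg/images.yml > rendered.yml

# On a later run, feed the lock file back in so kbld reuses
# the exact digests locked earlier instead of re-resolving tags.
ytt -f config/ | kbld -f - -f .imgpkg/images.yml
```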
But one of the files it carries is a schema file that describes all of the variables in the configuration. Think about it from the point of view of a consumer: we already applied some customization, right — we took the base app and applied the values we want to be common to all large deployments. If we didn't have that original schema describing all of the configurable values, we might lose that information as we fill in values over placeholders. But because a schema is included with the application, these tools give us the ability to inquire: what are all the originally configurable values? And because all of the original files are there, with the overlays kept separate, to be re-rendered at the destination, we still have the ability to change any of those values. So we can discover the schema and still change them all. And the schema is given in OpenAPI v3 format, which you can then use to simply fill in the values you want. So now we want to use Carvel, essentially, to manage all of this. Let's take these two packages — the one we just built for the hello app and the previous one — and create a package. We can describe it, give the package some metadata, create it, and then take our package configuration — the reference to our bundle — as well as the configuration of the other application we want to bundle together. When we look at the hierarchical structure, you can see that we have the giant app, the hello app, and this YAML file containing the package definition. And this is the transition from imperative to declarative, right? We built the image imperatively and pushed it to our registry, and now we're defining it declaratively so we can pull it into Kubernetes.
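Putting the pieces together, the package definition being described might be sketched like this — a Package CR that carries the OpenAPI v3 values schema and embeds the fetch/template/deploy steps (names, versions, and the schema's properties are illustrative):

```yaml
apiVersion: data.packaging.carvel.dev/v1alpha1
kind: Package
metadata:
  name: hello-app.example.com.1.0.0
spec:
  refName: hello-app.example.com
  version: 1.0.0
  valuesSchema:
    openAPIv3:
      properties:
        replicas:
          type: integer
          default: 1
          description: Number of hello-app replicas for this location.
  template:
    spec:
      fetch:
        - imgpkgBundle:
            image: registry.example.com/edge/hello-app-bundle:1.0.0
      template:
        - ytt:
            paths: ["config/"]
        - kbld:
            paths: ["-", ".imgpkg/images.yml"]
      deploy:
        - kapp: {}
```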
So again, we always build a package bundle, because we're distributing everything that way. And this is what we've done now — we also have our repository up there as a bundle. Perfect. Now, on the target location, we want to do the installation. We have the PackageRepository resource we were looking at before, and we apply it to the cluster — and we're doing this using kapp, because kapp also gives us the opportunity to treat all of the resources that come with this as one unit. It starts the reconciliation, and so it starts applying the changes that come from that resource. That step was essentially the download of the installers, right? We create the PackageRepository resource in our cluster, pointing to the bundle that has both the hello app and the giant app inside it. So now we've got the installers: we can see a menu of packages we can install, but none of them has been installed just yet. Because to do that, we need another resource — oops, I don't think I can do it from here; okay, let's look from here — the PackageInstall. That's the one that says: I want to install this package, and I want to use these values and this overlay to change the YAML so it's configured properly for my location. And the thing we were talking about before — being able to inquire what the schema is and get that OpenAPI response back — you would use that to build the secret here that contains your location-specific values. So now we can apply this, or we can save all of this in a repository — sorry, here, okay — where we have the PackageInstalls for the different applications we want to deploy at this location. And then — as we showed during the slides — the App resource is the way to create a subscription to a source of truth. In this example, we're using the App resource to point to our PackageInstall files.
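The subscription being described — an App watching a Git repo of PackageInstall files for one location — might be sketched like this (repo URL, ref, and paths are illustrative):

```yaml
apiVersion: kappctrl.k14s.io/v1alpha1
kind: App
metadata:
  name: edge-location-sync
  namespace: edge-apps
spec:
  serviceAccountName: edge-deployer
  fetch:
    - git:
        url: https://github.com/example/edge-config
        ref: origin/main
        subPath: locations/store-042   # this site's PackageInstalls and value secrets
  template:
    - ytt: {}
  deploy:
    - kapp: {}
```

Changing a value in Git is then all it takes: kapp-controller notices the new commit and reapplies the PackageInstalls for us.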
But we could also have chosen to create a resource pointing to a PackageRepository definition. This is the model we went with, though. Once we apply this App to the cluster, it keeps checking that Git repo, so whatever changes you make — if you change a value, for example, or anything else — it pulls down into the cluster. It automates the installation of the packages for us. So if you want to change something — exactly, we go to Git and change the value there. Now we have the PackageRepository, we have a list of Packages, and we also have a list of PackageInstalls. The App did that, right? We applied the App resource, it checked GitHub, it found two PackageInstall YAMLs, and it applied them for us. And then kapp-controller — the Carvel tool that operates all of this in the cluster — applied those files and did all of the unfurling of what we had furled. We took source code, we used some ytt, we used some kbld, we ran imgpkg — and now, declaratively, within the cluster, it undoes all of that: it unpacks the imgpkg bundle, applies ytt again to render our YAML, uses kbld to make sure we're all using the right SHAs — so it renders the right YAML — and then uses kapp to apply it. And that was the set of tools we saw in the Package CRD, right? So there's our app. Okay, next. Yeah, the last demo. What if your environment is air-gapped? Because right now all those bundles are on some public repo — how does Carvel help you if your environment is air-gapped? So let's go back here; notice there's no "temp" yet. We're using the same registry, but of course we change the value: now we're pointing to the edge registry, which is just the same registry, but under the name "temp". Let's see how this plays out.
So we've shown you imgpkg push, which takes files from your local file system, bundles them into an OCI image, and pushes them to a registry. We've shown imgpkg pull, which takes those and extracts them locally if you want to look at them. And now this is imgpkg copy. Copy goes to one registry and moves all those images to another. And the cool thing — remember we said that imgpkg bundles can be recursive? The repository bundle contains two imgpkg bundles, and each of those is actually pointing to another image, which is the application image. So as you'll notice, we don't copy just one thing: we copy seven images, exactly the ones we had originally inside our base project. And now we have this new "temp". And if we look inside it — you can't recognize them by name, but those are exactly the SHAs of the images we need to use. And the last thing, but probably one of the most important: we don't have to change our configuration. Let's pull down this newly transported repository and check what's in it — and specifically, let's take a look at this and see what it says here. So, right: when you copied the bundle from a public repository to an air-gapped repository, it modified the lock file inside — it modified the bill of materials — so that the bill of materials now points to the copies. And so all of the YAML will be rendered with the air-gapped copies of all the images that were referenced. So yeah, final thoughts. Using Carvel — and specifically using Kubernetes — we have this opportunity to manage everything as configuration. Everything we did here, building these packages and this package repository, applies to any type of situation, not just edge: you can deliver your software to a third party for them to install.
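The air-gap move being demonstrated can be sketched in two variants — a direct registry-to-registry copy, or a tarball you can carry on removable media. Registry names and paths are illustrative:

```shell
# Directly relocate the repository bundle (and, recursively, every
# image it references) into the edge registry.
imgpkg copy -b public.example.com/edge/large-profile-repo:1.0.0 \
  --to-repo edge-registry.local/temp/large-profile-repo

# Or, for a true air gap: export to a tarball, carry it across,
# then import on the other side.
imgpkg copy -b public.example.com/edge/large-profile-repo:1.0.0 --to-tar repo.tar
imgpkg copy --tar repo.tar --to-repo edge-registry.local/temp/large-profile-repo
```

In both cases imgpkg rewrites the bundle's ImagesLock so the bill of materials points at the relocated copies.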
And of course this applies as well because, at this point, the configuration is your software. And since ytt is really powerful — a programming language with if constructs and all those great things — you're not only producing configuration as code as output, you're actually coding it for real. So: any package delivery. And we wanted to show you an example of Carvel being used to package and distribute software in the wild. This is not the only example, but being from VMware, it's the one we work with on a daily basis: VMware's flagship modern application platform, Tanzu Application Platform (TAP). This is the way it packages itself for distribution and delivery. So if you ever install this product, you will recognize the patterns we have shown you today. What we did was adapt that for the edge by creating this concept of grouping your edge locations into profiles and applying some configuration up front. So VMware is using this for TAP, for software packaging and distribution to end users. And because on Kubernetes configuration is software, the same concepts hold for software packaging and distribution — and with a few modifications they can work well for varied edge locations. We actually covered the imgpkg air-gap story just this morning, so that's done. And we should also think about how we can now move from GitOps to RegistryOps, leveraging the power of imgpkg. There's a link to our repository over there. So — thank you very much.