Hey, I hope I'm audible. We know it's the last day of the conference and everyone has a lot to think about; you're pretty saturated, but we're hoping we can give you a little bit more. So we're going to be talking about Project Carvel today. But before we jump into that, a little bit about ourselves. My name's Soumik. I work on the Tanzu team, mainly with developer experience tooling, and part of my job is to maintain the project we're going to be talking about today, Carvel. And I'm here with my colleague. Hi, I'm Daniel, as you can tell by my name, and probably not my accent; I'm from here. I work on the Tanzu team and the Spring team. I do Java for a living, don't judge me. And here are our socials; feel free to reach out if you have questions or feedback, we love to engage. Is the font size all right for the people in the back of the room? Yeah? A little bit more? This? Cool. All right.

So, Carvel. I guess we can hop over to the website real quick. Carvel started off as a set of composable, single-purpose tools that come together to help you manage your applications and ship them to your end users. The key words here are reliable, single-purpose, and composable. When we say reliable, we mean the tools are idempotent, very repeatable in nature. And if you scroll down a bit, you'll see the tools we're talking about today. We'll mainly be covering the first five tools you see here, not the experimental ones. The best way to explore the tools is to get your hands dirty, and that's exactly what we'll be doing, so let's hop back to the terminal.

All right, let's get rolling. What do we start with? Right, we'll be starting off with ytt. ytt stands for YAML Templating Tool, and that's exactly what it does: it templates YAML. What sets ytt apart is that it understands YAML as a data structure, which means you can shape YAML in unique ways. Everything that goes into ytt is valid YAML, and everything that comes out of it is valid YAML. So maybe we can take a quick look: do we have some configuration to play with? I want to point out that any valid YAML is fine for ytt. If I do this, I can pass it into ytt and it works. It doesn't have to be Kubernetes resources; if you want to template your GitHub Actions, or your config for whatever else, you can use ytt for that. And we already have a basis for a Kubernetes config, because this is KubeCon, right? A fairly straightforward, normal config: a Service and a Deployment. So we can use ytt and experiment on that.

I'm going to open a second terminal here on the right, and whenever I change any of these files, I'm going to run a command that clears my terminal, runs ytt on all the files in this directory, and pipes them into yq for some coloring. So here I have my file, and if I go into the config here and change the port to foobar, it updates on the right. OK, so we have the infrastructure to work with ytt. What should we do?

All right, the way we supply ytt instructions is with special comments. Let's start off with a variable, maybe, and see what it looks like to replace a value with one. Let's do the nodePort, and I'm going to compute it dynamically. So here I'm writing ytt code that gets executed.
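For reference, the template at this point looked roughly like the sketch below; the names and numbers are simplified stand-ins, not the exact demo files:

```yaml
#! ytt instructions live in YAML comments that start with #@
#@ node_port = 30000 + 80
---
apiVersion: v1
kind: Service
metadata:
  name: simple-app
spec:
  type: NodePort
  ports:
  - port: 80
    nodePort: #@ node_port
```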
And then I can use the output of that code in the template, and it changes the value here. Now that we have a variable in place, maybe we can try out a function. I see that we're using labels in multiple places, so let's put together a function that supplies those labels and de-duplicate our YAML a bit. We can write a function in ytt that looks like this, and it can be anything that looks like a function. The reason we use a function and not a variable is that with functions we can do this: I can take the YAML directly, and since it's valid YAML, it can be returned by the function. So I can use it here: labels. And then, I guess, simple-app. So I have labels here, labels there, and I think some more here. Of course, nothing changes in the output, but if I change my label here, some-label, it gets updated in all the places.

I think this is a good place to show how ytt is virtually incapable of producing invalid YAML. If I try to mess up the indentation, it just blows up: this is not valid YAML, and if I save, it tells me I'm trying to do something that's not allowed here; it's not a map, it has a value. I'm happy with this, because I get feedback immediately. I don't have to wait until I try to apply it to the cluster for it to blow up; ytt already tells me this is wrong.

So let's try something more fun. We have a list of ports over here; how about we create a for loop and iterate over a few values? Right, so we have node ports. I could list the ports manually like this, or I can say: my port is going to be 30000 plus i, for i in range, I don't know, four, something like that. It explodes because the nodePort variable doesn't exist anymore, so: for node_port in node_ports, and I have to end my for loop here. So now I have node ports that come from this list, this weird list. And this is where you might notice that it looks a lot like Python. That's mainly because ytt accepts instructions in Starlark, which is a dialect of Python.

OK, so we've done some funky stuff now, but we do need our users to be able to supply some values when they're consuming a configuration. So let's look at what accepting values at consumption time looks like. For values, or data, that comes from outside, there's a ytt module; think of it as a library. I'm going to load this module, it's called data, and from here I can reference it, data.values, and ask for, say, a channel that should be supplied to the program. Of course it doesn't work, because nothing is supplying channel. So I can run the same command with ytt -f . and then --data-value channel=KubeCon, for example, and in that case it gets templated in from a CLI flag. This works just fine if you have one value, but more often than not you'll have a set of values, so you can supply a file with your values in it and set multiple values at the same time.

Right, so in this directory we have a file called data-values.yaml; it could be called anything. In here I can supply the channel, so channel is going to be, I don't know, testing. To ytt, this is just a YAML file.
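A rough sketch of how the pieces shown so far fit together: a function returning a YAML fragment, a loop, and a value read from the data module. The field names are assumptions, not the demo's exact config:

```yaml
#@ load("@ytt:data", "data")

#@ def labels():
app: simple-app
channel: #@ data.values.channel
#@ end
---
apiVersion: v1
kind: Service
metadata:
  name: simple-app
  labels: #@ labels()
spec:
  selector: #@ labels()
  type: NodePort
  ports:
  #@ for i in range(4):
  - name: #@ "port-" + str(i)
    port: #@ 8080 + i
    nodePort: #@ 30000 + i
  #@ end
```

As written, this fails until something supplies channel, for example ytt -f . --data-value channel=KubeCon, which is exactly the point being made here.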
So if I want to turn that plain YAML file into something that provides data values, I annotate it, saying: this is data values. And now this gets passed through from here. We could also pass in the node ports, say 1, 2, and 3, and it shows up in the config: in here, the node ports become data.values.node_ports, so now my node ports come from this data values file.

But you probably don't want your end users writing ytt annotations; most of the time you don't even want them to know what ytt is. So you can actually use another flag which doesn't require the annotation. I'm going to copy this data values file and put it one level up, so it's not included by default, and change it: the values we set in here are node ports, just one, and channel is production. What we're saying is that our users don't really have to know the details of how ytt works just to supply a values file. So we do this: ytt -f . and then --data-values-file pointing at that YAML file, which is a perfectly normal YAML file. And now we have our production channel and our port being templated in.

So let's do something interesting. Let's change up the values file and maybe turn the port into a string; I think that will work right now. Right, so if I do this and change it to a string, I have node ports with the wrong type here. Users might try stuff like this, and you don't want them to be able to do that. So ytt has a concept of a data values schema, where you can restrict which values a user can supply. I have my schema prepared already, so I don't have to type it, but again, it's a normal YAML file with a special comment saying: this is the schema. It declares a channel that defaults to empty, node_ports, which is an array of integers, and a namespace. And if I save this, ytt complains. It tells me: on line six of your data values YAML you're supplying a string, and it doesn't match the expected type, which is integer, which you'll find on line four of the schema. And you can go fancier with your validations: you can have length checks in place, you can even have a lambda function which takes the value and verifies it in whatever way you like.

Yeah, and maybe one thing we haven't shown: all of this is happening in comments. If I disable the ytt syntax coloring, these are just YAML comments. And with ytt I can still write regular comments; this is a comment comment, as you do need those.

So I guess we're at the stage where we have a way of shaping our YAML configuration. But what ytt also helps you do is apply overlays to large chunks of YAML, so you can reshape your YAML using some functions. In this directory there's also an overlays file, and we can tell it: take this config, change it, evolve it. The way we do this is very similar to what you would do with Kustomize: we're applying an overlay. We tell ytt: find all the YAML nodes that match anything, and for those, change the apiVersion to v1sonic1, the best Kubernetes version, as you all know. If I do this, ytt will complain and try to help me. It says: by default I expect one node to match, but two matched; there was a Service and a Deployment. So once I fix that, my Deployment is now a v1sonic1 Deployment and my Service a v1sonic1 Service.
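The overlay in question was along these lines; a sketch reconstructed from the description, keeping the joke apiVersion:

```yaml
#@ load("@ytt:overlay", "overlay")

#! match every document in the input, and allow more than one match
#@overlay/match by=overlay.all, expects="1+"
---
apiVersion: v1sonic1
```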
Since that's a bit too broad, I want to scope it down and maybe only do that for, let's say, the Service. So we're going to take a subset of the resources: match on the kind field and only change the Service. Save that, and now the Deployment is back to the boring apps/v1 API version, but the Service is still v1sonic1. And you can go crazy with any programming you want in here. For example, if I take this matcher and turn it into a variable, so service is this, match by kind Service, then I can say: anything that's not a Service, I don't really care about. So here, give me the opposite of service, a not operation: everything that is not a Service, I want you to remove. There we go: now I only have a Service, because the Deployment has been removed. And it works much the same way when you're not removing things; the overlay just replaces the matched bits of the file. You can, in theory, also patch resources this way. What's cool is that all of these bits interact with each other: you can use functions, you can use loops, and you can have data values being fed into your overlays. So in a way it's much more powerful than Kustomize, because you can write functions that read the existing data and then change it, patch a config that lives inside a ConfigMap, or what have you.

So let's do a quick recap. ytt is our YAML templating tool. It's YAML-aware, all the programming happens in comments, and you can work with any YAML. Very useful. It's a good substitute for Helm or Kustomize or both, and you can combine them: you can run helm template if you have Helm charts and then apply overlays with ytt when Kustomize isn't powerful enough, or, on the other hand, craft your config with ytt and apply your existing Kustomize tooling to it if you want. What's worth calling out is that you don't need to uproot your existing workflows to start using ytt; you can start using it as an additional layer.

So now that we have a way of dealing with our configuration, I guess what comes next is: how do we deploy it to the cluster? Oh, before we deploy it, there's one cool thing we haven't shown. We're doing fun stuff in the terminal, but if you just want to try out ytt, go to the Carvel website, carvel.dev/ytt. For those in the back, we can take a look at that. Down here there's a playground which basically does what I'm doing in my terminal. It has many examples if you want to learn how to use a for loop or a function, and you can make changes right in here and, as you may notice, the output changes on the right. And if I change the port to the wrong type, hello, I get the error output from ytt. Sorry, you were saying, deploying this? Yes, my bad.

OK, so the tool that Carvel uses to deploy and manage resources on top of the cluster is called kapp. The way kapp essentially works is that it manages resources as a group on the cluster. kapp already knows how certain resources behave, and you can also use annotations to tell kapp about relationships between custom resources. Our config has changed a bit; we made a more interesting config than just a Deployment and a ConfigMap. So if I run this through ytt, I get Ingresses, Services, ConfigMaps.
And at the end, because we're a bit cheeky, we put the Namespace in which everything is deployed last. That's fortunate for the demo, because we can use kubectl to see how things would look if we used kubectl instead. You know that with kubectl, if you're applying files like this, you should define your resources in an order that makes sense, not with the Namespace at the end. So if I try this, it says the Namespace is created, but everything else fails, because the Namespace got created after we tried to create the other resources. kubectl is not really being friendly with us here.

So the alternative is using kapp. In here, I'm going to zoom in again, we can do kapp deploy. Then we give our app a name, we're going to call it kubecon, it reads the file from standard input, and then I say: don't wait for my confirmation, --yes. And here in the output, kapp is telling us everything that's going to be deployed: I'm going to create a Namespace for you, a Deployment, Services, an Ingress, a ConfigMap, everything. And when kapp exits at this point, it has seen that all of your pods have spun up; it waits for your Deployment to reach the desired state. So it does many things. It starts by applying things in an order that makes sense and that doesn't depend on how they're defined in the file. You can see it starts by creating the Namespace; once the Namespace is created, it creates the ConfigMap, because the Deployment wants the ConfigMap to be there; and once the priority resources are created, it creates the Deployment. And as Soumik said, it doesn't just create it and leave you to write your own wait rules; kapp knows what a successful Deployment looks like, so it can wait on it and read the status. You can see it takes a few seconds, actually it's really fast, three seconds, to reconcile. Once kapp tells me this has succeeded, it means my app is actually ready.

And I can look at it once it's deployed: kapp inspect on the app called kubecon, as we named it. In here, we have all the resources that we created, and we can see them as a tree. Of course, we can also list all the kapp apps that are in the cluster. Here's our kubecon app, but we also have some infrastructure that I created for this talk; I installed a controller, an ingress controller, and so on, as kapp applications.

You called out that the ConfigMap was created before the Deployment, right? And I guess that's an example of how you can control certain behavior with annotations. If you take a look at our configuration, we tell kapp that the ConfigMap is a versioned resource. kapp already knows that ConfigMaps might be referenced in Deployments. So if we update this ConfigMap now, what happens is kapp creates a new version of the ConfigMap and ensures the Deployment is updated as well, so the Deployment spins up new pods that reflect the new config. So we do exactly that: we template the config with ytt and re-run the same kapp deploy, which updates our kapp app, and I'm also passing the flag that tells it: show me the diff of what you're doing here. First it gives me a nice diff; it tells me: here, on line 2 of your ytt output, the text changed from English to French. But it also tells me: your Deployment was using version 1 of the ConfigMap, I'm creating version 2 of the ConfigMap, and I'm changing the Deployment to use the new version. That means my ConfigMap is created first, and then my Deployment rolls out, and I don't have to roll it out again manually; kapp does that for me.
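What drives that behavior in the config is essentially one kapp annotation on the ConfigMap; a minimal sketch, with made-up names and data:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: simple-app-config
  annotations:
    #! tells kapp to create versioned copies (simple-app-config-ver-1, -ver-2, ...)
    #! and to rewrite references to this ConfigMap in Deployments on every change
    kapp.k14s.io/versioned: ""
data:
  hello_msg: bonjour
```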
And of course, you can write powerful custom rules for ordering yourself. If you have a bunch of custom resources, you can define groups of resources and tell kapp how they're ordered relative to each other. And I can delete the app as well.

So, quick recap. kapp is like kubectl, but friendlier towards users. If you're hacking on your cluster and deploying without GitOps, using kapp is nice, because you can find what you've deployed; you don't need to keep track of the files. It has waiting built in, and it has powerful rules for ordering, so you can manage your own dependencies if things are not eventually consistent: you say, I want this first, then that, and if this succeeds, give me that.

So I guess we're at a point where we have a way of managing our configuration and we know how to deploy things. Whenever you're shipping your configuration, though, you want to be sure that everything's in a nice little box that your customer can consume. And I guess where we start is images. More often than not, your configuration is going to reference images, and what you want to be sure about is that the image you were referring to while putting your configuration together is the same one that's being referred to when your user consumes your configuration. The only good way of doing that is using digest references, because we're not really sure whether someone at HashiCorp pushed another version of http-echo yesterday under the same tag, right? For doing this, we have a tool called kbld, K-B-L-D, and what kbld does is help you lock down onto immutable references. So let's take a look at what that looks like.

Great. So we pipe our config in on standard input, it reads our file, and then, by some black magic, it turns this image into its SHA digest reference and adds some annotations on the Deployment. So how does that work? kbld essentially just looks for the image key. And you know what, your configuration might actually be using a key like containerImage at times, and you can configure kbld to be aware of that. So if you provide any YAML with those keys in it, kbld is going to be able to resolve it, which means you can, in theory, use kbld with other platforms as well. And if you write your own custom resources and they use image, it's fine; you don't have to teach kbld how to handle that.

What you want to do, however, is generate a lock of sorts, so that when your users are consuming your configuration, they have a reference to which images were used when you put it together. One thing we could do is hard-code this: run kbld and commit this specific resolved version. But then we kind of lose which tag we used. So instead, we're going to generate a lock file as output and put it in the right place, which we'll come back to with imgpkg later on: .imgpkg/images.yml. If I run this, same output, but now I have an extra file, .imgpkg/images.yml, and it tells me: in what you passed me, I found http-echo 1.0, and it resolved to this digest. And now, when I distribute my config, I can distribute my files with the tag in them, together with the images lock file.
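The resulting lock file looks roughly like this, generated with something along the lines of ytt -f config/ | kbld -f - --imgpkg-lock-output .imgpkg/images.yml; the digest below is a placeholder, not the real one from the demo:

```yaml
apiVersion: imgpkg.carvel.dev/v1alpha1
kind: ImagesLock
images:
- annotations:
    kbld.carvel.dev/id: hashicorp/http-echo:1.0
  image: index.docker.io/hashicorp/http-echo@sha256:<resolved-digest>
```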
When kbld consumes that lock file, resolution is immediate; I don't have to talk to Docker Hub to know how to resolve this tag into a digest. And that's essentially what a user will be doing while consuming your configuration.

OK, so now we know that our images are what they're supposed to be, but we still have a configuration, right? What you want to be able to do is bundle your configuration into an artifact, maybe an OCI artifact, so that you can test it, sign it, and ship it in a nice little box. That's what imgpkg does for us. So instead of doing GitOps and pushing this to Git, we can make immutable artifacts that we keep here with imgpkg. We do imgpkg push, into a bundle; a bundle is a target OCI image. I'm going to push it to my local registry on port 5000, kubecon/carvel, and the tag will be demo, and take all the files in the current directory. So it pushes these files. Now, if I take this artifact reference, this one, if my mouse works... it doesn't, I'll retype it. Here I have an empty directory, and I can do imgpkg pull, the bundle called localhost:5000/kubecon/carvel, and I think the tag was demo, and put the output in, let's call it the out directory. So in my out directory here, I have the files that were on my local filesystem in that other directory.

I think we should take a quick look at the output that was just printed. imgpkg calls out that one or more images were not found in the same registry. So, imgpkg also has a copy functionality, where you can ask imgpkg to copy a bundle, and when you do that, it ensures that all of the images referenced in the lock file are also moved over to the new registry. So, our bundle, and I'm going to pretend I'm reaching another repository, with to-repo. It's actually the same registry, but it has an alias on my machine, port 5000, and then kubecon/carvel. I don't supply the tag; the tag is inferred from the tag of the original artifact. So it copies things over, and it finds that in my images lock file I had http-echo. So there are two images: the bundle that has my config, and http-echo. It downloads http-echo and copies it to this other repository.

And now, if we pull the relocated bundle, so imgpkg pull, the bundle, and the output, sorry, we're going to call that directory relocated. There we go. Over here, imgpkg calls out that it could find every referenced image in the same repository. And I think it will be clearer if we take a look at the lock file; it's in relocated/.imgpkg. The http-echo image is now on the same registry as the bundle itself, even in the same repository, and it's not referenced by a tag but by its digest. So if someone applies kbld to the config with this imgpkg lock, they can use the images that are hosted in the same repo as the imgpkg bundle itself, which is very useful when you have air-gapped environments, or you have a registry that you own and you want to be sure the images come from there and not from the internet. imgpkg copy helps you with this. This is how we distribute the Tanzu Application Platform: our customers copy our bundle, which pulls, I don't know, 60 or 80 images, and copies them into their own registry.
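The imgpkg commands from the demo, roughly; the registry names match what was typed on the demo machine, and other-registry stands in for the aliased hostname:

```sh
# Push the current directory (config plus .imgpkg/images.yml) as a bundle
imgpkg push -b localhost:5000/kubecon/carvel:demo -f .

# Pull it back down into an empty directory
imgpkg pull -b localhost:5000/kubecon/carvel:demo -o out/

# Relocate the bundle and every image in its lock file to another repository
imgpkg copy -b localhost:5000/kubecon/carvel:demo --to-repo other-registry:5000/kubecon/carvel

# Pull the relocated bundle; its lock now points at images in the new repo, by digest
imgpkg pull -b other-registry:5000/kubecon/carvel:demo -o relocated/
```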
So, for bundling, two things. kbld, to take an image reference and make it immutable by using a digest, and to produce a lock file so that it's repeatable. And then imgpkg for managing configuration: bundling configuration and doing OCI ops, or registry ops, whatever you want to call it, and relocating bundles across repositories. It can even be recursive: an imgpkg bundle can reference another bundle, and imgpkg is smart enough to copy your entire dependency tree from one registry to another.

So now that we have a way of putting everything we want to ship into a nice little box, this is when we start talking about kapp-controller. kapp-controller surfaces certain APIs on your cluster, and these are broadly divided into two parts. The first bit is the App CR itself. We have a nice flow for obtaining the desired cluster state: we fetch our configuration using imgpkg, then we template it using ytt and kbld, and then we deploy it using kapp. What the App CR allows you to do is put these steps into a declarative form, and that's essentially how you define your application. So, classic GitOps: a target, where do I want to deploy my things, either another cluster or the same cluster with a different, scoped service account; where do I fetch my config from, so from an imgpkg bundle, but also from Git or from a Helm chart repository if I want; then how do we change it, how do we modify it, so we run ytt, but we can run helm template, we can use kbld to pin images; and how do we deploy it, well, it's called kapp-controller, so we deploy with kapp. And in the template section you can really see the composability bit, because, as we mentioned earlier, if you're already using helm template and want to run ytt after it, you can order those steps the same way, even in the app definition.

OK, maybe we can take a look at the application we just put together, but as an App CR. I pre-packaged this bundle just in case the demo went wrong, but apparently it's all right. So we have an app.yaml here. It's an App CR, and it's going to get its config from an imgpkg bundle. Then it's going to apply the ytt templating on that image, taking everything that's under the config directory. And as a user, I want to supply my own data values. There are many ways to supply data values; one of them is to create a Secret in my cluster and use that Secret. I could also use a file that comes from, I don't know, a Git repository, another imgpkg bundle, or something. Then we also apply kbld, to resolve http-echo to the relocated version of http-echo that lives in this kind registry. And then we deploy with kapp.

So let's deploy. Speaking of deploying with kapp, I think I've deleted the kubecon app, yes. So we can kapp deploy; the file is called app.yaml, and I have to give this an app name, of course: this is going to be kubecon-app-cr. kapp tells me it's going to deploy this App CR, the Secret that goes with it, and the RBAC that goes with it, so that when this deployment happens, kapp-controller doesn't use its very powerful cluster-admin service account; it uses this scoped service account, and the app can only create resources that a user with this service account would be able to create. So let's do this. And, well, this reconciles into an app. It reconciles fast.
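For reference, the app.yaml deployed here would be along these lines; a sketch where the service account, Secret name, and bundle reference are assumptions standing in for the demo's values:

```yaml
apiVersion: kappctrl.k14s.io/v1alpha1
kind: App
metadata:
  name: kubecon-app-cr
spec:
  serviceAccountName: app-sa             # the scoped, non-cluster-admin service account
  fetch:
  - imgpkgBundle:
      image: localhost:5000/kubecon/carvel:demo
  template:
  - ytt:
      paths:
      - config
      valuesFrom:
      - secretRef:
          name: app-values               # user-supplied data values
  - kbld: {}
  deploy:
  - kapp: {}
```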
And in the end, we have some pods that get deployed in the demo namespace here. So if we do a kapp ls -A right now, we'll see that what the App CR did, at the end of the day, is create a kapp app. I have to zoom out a little bit, unfortunately, but here it is: we created a simple-app app. There we go. So now I can kapp delete the app called kubecon-app-cr.

One of the drawbacks of this, and I know it because I'm an app developer, I write Java, I don't write YAML for a living, is that I have to be very intimate with how it all works: what's this imgpkg thing, what's a bundle, what's ytt, et cetera. kapp-controller has nice utilities to make this more accessible to end users, or to the developer personas in your organizations. That's where the second part of the APIs that kapp-controller surfaces comes in: the packaging APIs. The packaging APIs allow, let's say, a cluster-admin kind of actor to make certain information available on the cluster about different versions of apps, and as someone consuming that cluster, you can use those APIs to install a specific version of an app that's been shipped.

So what we're going to do is switch hats and look at what it looks like for someone who's using the cluster to install apps on it. We're using the kctrl command line, and we can say: package available... oh, it's in a different namespace, I think, install; I put everything in a different namespace. So, kctrl package available list. And here, as a developer, I'm not touching YAML, I'm just using my CLI, and it tells me: you have this Carvel demo that you can install, or cert-manager. All right, I'd like to install the Carvel demo, how does that work? Get: kctrl package available get on the package called carvel.garnier.wf. It tells me: it's the KubeCon Carvel demo app, it does this, it does that, and there's one version. OK, cool, can you tell me more? For version 1.0.0, say, this is fine, but what's the values schema for it? And here's what I need to configure my Carvel app: I need to provide a channel, a list of domains, a namespace; this one is a string, this one has a default, that one doesn't; it's an OpenAPI schema that tells me what I should do with each of them.

So I prepared a values file in here, values.yaml, and I can kctrl package install: the package is called carvel.garnier.wf, the version is 1.0.0, I give it an install name, kubecon-package-install, and a values file, values.yaml. And the output shows that when you do this, what essentially happens is the fetch, template, deploy steps: it fetches the imgpkg bundle, templates it with ytt and kbld, and at the end deploys things using kapp. You can see the same diff we saw earlier in the demo. So this is a developer-friendly workflow, but of course, under the hood, what it's really doing is creating custom resources on the cluster. We have the packaging resources: a PackageRepository; when I do kctrl package available list, it looks up the Packages; when I do kctrl package install, it creates a PackageInstall. So it's still GitOps-friendly if I want.
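The PackageInstall that kctrl creates under the hood looks roughly like this; the namespace, service account, and values Secret name are assumptions, while the package name and version are the ones used in the demo:

```yaml
apiVersion: packaging.carvel.dev/v1alpha1
kind: PackageInstall
metadata:
  name: kubecon-package-install
  namespace: install
spec:
  serviceAccountName: install-sa              # a scoped service account
  packageRef:
    refName: carvel.garnier.wf
    versionSelection:
      constraints: 1.0.0
  values:
  - secretRef:
      name: kubecon-package-install-values    # created from the supplied values file
```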
And if I'm a developer who doesn't care about YAML, I can just use the kctrl CLI like this. That's nice, because as your usage of Carvel matures, you can move to a more GitOps-y workflow to manage all of your infrastructure.

OK, we're almost at time, so before we close this out: the repository for this demo is on the internet, on the left, and on the right-hand side there's the feedback form for us. Take 20 seconds, give us a comment so we know what we can do better. And of course, join our Slack channel on the Kubernetes Slack, #carvel, reach out on Twitter if you want, and come find us after the session. Thank you very much. Thank you. I think we have one minute for one question. Is there one question? One. Yes.

Hi, thanks for the talk. A quick question about ytt: if I want to do composable stuff, where I develop things in ytt and I want my developers to use them, I cannot use any remote sources, versions, things like that; the composition happens later in the chain. Is my understanding correct? Yeah, so that's actually a good question. We skipped over a tool called vendir. What vendir allows you to do is fetch any files, for that matter, from multiple different sources. I've seen folks at some organizations use vendir as a package manager for ytt: you can define Starlark modules and reusable ytt libraries that your developers can then pull in using vendir. A vendir.yml file essentially becomes something like a go.mod, where you can lock onto specific versions of the ytt packages you're using. And ytt itself just takes a bunch of YAML files and outputs YAML, and that's it. All right, thank you very much. Have a good weekend.
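For reference on that last answer, a vendir.yml used this way might look roughly like the following; the directory, repository URL, and version are made up, and vendir sync fetches them into place:

```yaml
apiVersion: vendir.k14s.io/v1alpha1
kind: Config
directories:
- path: vendor/ytt-lib
  contents:
  - path: .
    git:
      url: https://github.com/example-org/ytt-library   # hypothetical shared ytt/Starlark library
      ref: v1.2.3
```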