My name is Lee Capelli. I actually have a Filipino surname, but I grew up in Los Angeles, and I live in Colorado now, in Denver, Colorado, U.S. If you've kind of been following my work, you'll know I come from the Kubernetes project and from Flux. That's a picture of my dog, and I do love snowboarding. And because I'm from LA, I grew up, well, I'm LA and I'm also Filipino, right? So I grew up singing, I grew up dancing a lot, and that got me into the hip-hop community, right? Breaking, popping and locking. And hip-hop is communal. Hip-hop comes from pain. Hip-hop is a house of language and music and storytelling. And so I kind of want to tell you guys a story, because I feel like when we work in infrastructure, when we work on software, it seems like it's always a struggle to get away from the pain to what you're actually trying to do with your life, which you're trying to actually improve.

And so I find myself at the keyboard every day, working the command line, my app, I'm writing and it is so clean. The new stacks, my code slaps, libraries of pristine. I'm keeping it Sistine, a gold routine. Spun up a half a mil of those a new machine, as I'm checking my CI, test run or fly, complex regex, git rebase, aye, getting my YAML ready, I think it's for Kubernetes. Terraform a new norm, this repo is getting heavy. Because I'm working nine to nine, six days a week, driving four to five nines got me losing sleep. No availability and this code is pulling on complexity. We doing an induction, put my function in production. I'm munching and lunching, ready to do some number crunching. Ring the platform team, it's about to get worse. It's already my third verse and we're pushing to stage first. Nine to nine, six days a week, driving four to five nines got me losing sleep. No availability and my code is a mess of complexity. All right now, I already lost it. Red light, wow, can't ship, I'm exhausted. Relationships caustic, what's DevOps? This is not it.
Shadow IT, communications frosted. Who knows what it costed? Spreadsheet, just to cop it. License change, strange, OpenTofu. Acronyms half true, two weeks notice. Call me up when we're V2. I'm out of here. Because I was working nine to nine, six days a week, driving four to five nines had me losing sleep. No availability and my code was this mess of complexity. The blocks just got the best of me. Operators and CI/CD. Yeah, the cash was good and the project's free. I was building with blocks since the age of three. Yeah, the nine to nine, six days a week, had me sleeping in the bathroom sheet. No availability and this code was a mess of complexity. Someone else's problem now, come on.

Who can relate, right? Who has worked on a project at some point in your career, and you're like, man, I was just trying to do something cool. I was just trying to accomplish a cool business value mission with my coworkers. I was trying to build something innovative. And then it got a little complicated, right? Like, you were just trying to glue up your cluster to DNS, and then you called up the person who knows about that, and they were like, hey man, can you update that in this Excel spreadsheet and then email it to this team? And then, oh, okay, what's your employee ID? I've got to put that in this bash script. And you're like, what? I'm not even going to work here. And it's going to be in the bash script? And so I think sometimes we have a lot of pain, and then our projects are getting complicated. And so you might have heard a little bit in the keynote about how there's this suite of tools.
And I'm going to do most of this tour of our Carvel project today from the website, because I want you to know how to get around, so that when you run into your complicated problems inside of your ecosystem of problem solving that you do with your coworkers and our collaborative workspaces, when you're feeling a little bit of dissonance, maybe you can come to Carvel.dev and find one of our tools and be like, oh, I think this actually might help me clean up my problems so that I don't have to write a messy bash script. And so the first tool that I want to take you through is called ytt. This is the YAML templating tool. I actually don't know if it really stands for that, but we just call it ytt. And ytt lets you template and overlay YAML configurations in a structured, smart way. It's a hermetic config language that lets you solve all of these sticky problems when YAML starts to get too limited, too unsafe, lacking the expression that you need to solve the complicated problems as we start to build these systems from all these different blocks. And so you just go to Carvel.dev and click this, right? And then I'm sort of betting on the internet working for the rest of this demo. So: all your YAML shaping in one tool. On that previous website it says Kubernetes, but the cool thing about the Carvel project is that we're building all of these tools in a Unix-y way. All of these tools only do one thing. They do them very well. It's not forcing you into a particular workflow. And so if you have YAML problems, and I know I've got YAML problems, probably a thousand of them in a single repo, then I can come to this tool and start shaping and solving and building some abstractions. And so you come to the ytt website, and what's really cool over here is it gives you a little rundown. ytt is structure oriented. What does this mean? It means that it's not a Go template. It's not a text templating language.
And so I wanted to pull up an example of a Go template just to show you what I mean, because before we can really appreciate what's good about this tool, we have to remember what is really miserable about the tools that we use already. And so, I don't know, let's say I go into this ClickHouse chart, and then let's enter a templates folder where we're going to have a bunch of Go templates. These are text templates, right? And let's just look at something simple, like a service. I was expecting this to be simple. Hopefully this is. I should have probably turned off dark mode on GitHub, sorry friends. Might be a little bit hard to read. I'll try to make this better. So this is a Go template for creating a structured Kubernetes service. We're trying to get to a declarative configuration so that we can tell our distributed computer how we can network to our ClickHouse installation. And so you can see here this is a service. It's in the v1 networking API. And then there's some metadata here, and now we're getting into all of this templating stuff. So we want the name to have some certain rules about it. We want to add some labels. We want to have a type for the service and maybe have the option to provide a cluster IP. Now you can see that this starts to look like a little bit of functional programming. Cool. I mean, I think maybe not a lot of the people I work with are very big fans of functional programming. I know I am, but it's a little bit esoteric. The main thing here, though, is that if you look far enough through here, it's very indentation specific. At the bottom here, in almost every service definition that you'll find in the Bitnami Helm charts repo, you will see it including a template from somewhere else. So we start to get a little bit of indirection in our templating language. Go templates are cool. We have this full indirection going on. We can do Turing-complete stuff.
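To make that concrete, here is a trimmed-down fragment in the style of a Bitnami service template. The helper names and values keys are illustrative, not copied from the actual ClickHouse chart:

```yaml
apiVersion: v1
kind: Service
metadata:
  # "include" pulls in helper templates defined elsewhere in the chart
  name: {{ include "common.names.fullname" . }}
  labels: {{- include "common.labels.standard" . | nindent 4 }}
spec:
  type: {{ .Values.service.type }}
  {{- if .Values.service.clusterIP }}
  clusterIP: {{ .Values.service.clusterIP }}
  {{- end }}
  ports:
    - name: http
      port: {{ .Values.service.port }}
```

Notice the `nindent 4`: because the output is just text, the author has to know, at that exact spot, exactly how many spaces the rendered block needs to be indented by for the result to be valid YAML.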
But after we render that template and then pass this dictionary and get some values from a different place, which is a really powerful abstraction. I love that about Helm. Then we have to take our pure functional programming, our data structures that are happening in our hermetic build, and, because we're in a text templating language, we have to think about how we're going to format the output. And the formatting of that output is critical to the runtime of our distributed computer. And so I have to know, when I'm, I don't know, 120 characters deep into this line over here, that I want to take this structure, render it, and then indent it by four spaces. And I think that as much as we can all appreciate the ecosystem around Helm and the power of a Helm chart, the ability to take a vendored piece of software, something that somebody else maintains or an entire group of people maintain. But when you look at how this stuff is built, it is so hard to test and it is dangerous. And I'm not speaking without authority here, because back in 2017, this was my job. I was making charts and modifying them and then running them. And it was painful, because you run into bugs where suddenly your chart's not rendering when you turned on an option. And the mistake that causes this is the choice to build Helm with Go templates at the core. And mad respect to the Deis team for building Helm in the first place. But Go templates just sort of came out of that early ecosystem, and it's an unfortunate mistake that we have to live with. So then we got into the world of Kustomize. And Kustomize is awesome. Not without its own kind of controversial story of its rise to fame. But Kustomize is really delightful to work with for simple things. It says, hey, maybe we can just start focusing on the YAML again.
We don't have to worry about writing Go templates that then render to YAML and aren't actually valid YAML, and then we've got to have these hybrid syntax highlighters and try to figure out how DevOps people can mesh all of this pain together. And so with Kustomize, you can just, I don't know, maybe I can make this picture bigger here. You can have your base configuration and then take these patches for different environments or variations of config. And because Kustomize is a magic Kubernetes-specific tool, you just take your patches and it lays them on top and then merges all the fields together for you. And when there's special things that you might want to do, like merge onto the same name of a container with your patch, it does that magically through the tooling and OpenAPI abstractions that build up the strategic merge patch behavior. That's great. And so Kustomize greatly differs from this templating approach, because we're now structuring our configs into different pieces and then laying them on top of each other in a tree. But we've lost something. We've lost something really valuable, and that is that if I go back to that Helm chart example, there's the values abstraction, which Kustomize is just really bad at. There are ways to create variables in a kustomization where you can put random strings or various input fields into a config map's data and then reference that field from inside of your kustomization.yaml with four lines of YAML that then create your variable, which you can then interpolate into strings, but only in certain places. It's just not actually meant for that. Kustomize is not a top-level value propagation tool. That's not how it's built. And so if you want to create a Kustomize patch and put your ingress domain into 10 places, you're going to have to repeat yourself at least 10 times, more or less. And that's part of the design. The Kustomize team says that's on purpose and it's good.
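A minimal sketch of that base-plus-overlay layout. The file names and the app name are made up for illustration:

```yaml
# overlays/prod/kustomization.yaml
resources:
- ../../base            # the shared base configuration
patches:
- path: replica-patch.yaml
  target:
    kind: Deployment
    name: echo          # merged onto the base Deployment by name
```

The patch file itself is plain YAML that Kustomize merges field by field onto the matching resource, using strategic merge semantics (for example, list items under `containers` are matched by their `name` key rather than by position).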
Once we start getting into more complicated repos and you start having your 60-cluster problem, where you're then having multiple clusters with multiple apps and multiple environments for multiple customers, or whatever your variation need is, you start getting into very large repositories of YAML, and you're starting to wonder to yourself: how much should I be repeating myself? Isn't this a sophisticated distributed system? Shouldn't we be doing real software development? I need a real programming language to start to handle all of the complexity, all of the intellectual property necessary to create the infrastructure that makes my business happen. And so I want to get to a happier place than using text templating to create a bunch of config to run my distributed computers. I want to do something a little bit more sophisticated than just storing patches in a directory of things and trusting that if we store 5,000 lines of YAML in a repository then I'm going to be able to manage my fleet of clusters. I want to be able to run tests. I want to be able to assert that things are of a proper type and that they're meeting my business needs. And so we can get to a better place here. And, sorry, I got a little bit off track here, but this is the valuable part of Helm, probably the best part of Helm. I go to the ClickHouse chart, there's a values.yaml file, and it shows me top-level options all in one place. A simplified abstraction: I can take this vendored piece of software, have all of these values, go through and say, oh, I can set the image registry. I can make sure that I'm asserting for a particular Kubernetes version. Override some of the way that things are named so I can manage resources in my cluster at runtime. I can turn on the diagnostic mode. Do all of these fascinating things at the application level, use different versions of Docker images.
It's a fantastic abstraction, because I'm no longer going into this incredibly verbose blob of literally tens of thousands of lines of YAML that's needed to run this application in a cloud-native way. So I like this. And Kustomize is great, because I like patching. But these tools, they're very Kubernetes-specific. And frankly, in my job, I don't just work with Kubernetes, right? Whether I'm executing some Terraform modules, or I'm gluing something into DNS or a BGP set of autonomous systems, or maybe I'm using OSPF and I need to tie that into the network topology of my data center. And I have to also sync that with a legacy system that's provisioning via Ansible, which is all YAML. And so my job isn't just constrained to this Kubernetes cluster. And if I want to wrangle complexity and actually represent the whole state of my distributed system, I'm going to need a programming language to do real software development at that layer. And so this is why I'm so excited and why I'm spending so much time talking about ytt. Because I think if you go and start finding all of your YAML problems, you can get to the good place. Let me show you what it looks like. So ytt is YAML. It's all just parsable YAML. Every ytt program is a valid YAML file. And the way that we make YAML smarter, because YAML is incredibly limited, there's no control flow really, and you can barely do any sort of indirection with this thing called anchors that nobody ever uses, is that we embed a programming language that's familiar to operations-type people, something that feels a little bit like Python, into the comments of your YAML file. And we give you a statically compiled Go binary to turn all of your YAML programs into the output that you need to run your system. That's ytt, and that's super exciting to me. This language, which you might have heard mentioned in our keynote this morning, is called Starlark, and that's what's embedded in these comments.
And you can use Starlark programs either embedded in YAML directly or right alongside all of your YAML files to make your YAML way smarter, safer, more confident, and more fun to use. And so here, this is our kind of main demo. And you're going to see a lot of the same kinds of things that you could do in a Helm chart, but in a really kind of ergonomic way that's readable for operations people. So here, I want to define some labels. This is similar to when you would just define a function in Python. Right now I have a function that outputs this YAML fragment. It is a map that has two keys, app and org with the values echo and test. Cool. Now I can call that function and just always get that map. Here, I'm loading in this data library so I can actually get data from other places. You'll see that in just a second. Here, I'm defining a function, and instead of working directly in YAML, I'm actually executing a statement inside of Starlark, which feels like Python. So here, we're passing in an echo variable inside of the arguments, and then we're going to have the name. We're concatenating that or putting it together with the echo prefix string, and then we're calling return to have the return value of the function set. So whenever I call the name function with some argument, I'll have echo, hyphen, and then whatever the argument you passed is. So now that we have these functions, we actually have no output yet. But the main function, or the main entry point inside of this program is just at the root level. If you have YAML documents, that's what we're actually trying to get to. So here, I'm going to start outputting a pod. You'll notice I don't have to call print or anything. If my YTT program has YAML at the top level, it's just going to start outputting YAML. And that's a really nice feature because any YAML file is a YTT program that will output the YAML. But we can start making the YAML that we want to output smarter. 
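Roughly, the functions described above look like this in ytt. This is a sketch in the spirit of the playground demo, not the exact code:

```yaml
#@ def labels():
app: echo
org: test
#@ end

#@ def name(suffix):
#@   return "echo-" + suffix
#@ end

---
apiVersion: v1
kind: Pod
metadata:
  name: #@ name("pod")
  labels: #@ labels()
```

`labels()` is a function whose body is a YAML fragment, while `name()` is pure Starlark that returns a string; both are invoked from the document with the `#@` comment marker, and everything at the document level simply becomes output.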
So here, with that pod definition that we've started to write, when we get into that metadata section, we have to pass the labels in order to maybe have some more interesting semantics in our Kubernetes cluster from a scheduling perspective, or maybe I want to start putting this into a replication controller or under a deployment or a stateful set or something. So I need to label the pod. And here, I can just call my function. So I enter into Starlark by typing the comment marker, pound sign at (#@). And once I'm in there, the ytt parser is going to start executing the Starlark function. And then it calls labels, grabs this map, and puts it in. Similarly, for containers, say that you wanted to do something less trivial and you're tired of writing pod specs that are literally hundreds of lines long; you want a simpler way to define what goes into your containers. So you're going to need to have some sort of image. In this case, I just want to use the same one all the time, so I don't have to indirect that. But I can give each container a different name by calling a function and then doing a for loop. When was the last time you saw YAML with a for loop? Maybe only in a text template, right? But loops, I'm a programmer, I love this, I can do multiple things all at once. It's really fun. So let's loop over a data structure and produce 10 containers if I want to. 100, why not? In fact, let's go and do that. So you'll see that we loaded the data library in here. And then for every echo in the data values of echos, which is presumably some list, I'd hope that would be type-safe. And it is, which is really cool. It's way more type-safe than Python. And then here in the arguments, we want to pass two arguments every time we have a container. This is an arg for a particular port. That feels pretty infrastructure-y. And then also let's have something to output onto that port, right? And then we have to get that data from somewhere.
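The container loop might look like this. This sketch assumes an `echos` list supplied via a separate data-values file, and uses `hashicorp/http-echo` as a stand-in image (the playground demo uses a similar echo server):

```yaml
#@ load("@ytt:data", "data")
---
apiVersion: v1
kind: Pod
metadata:
  name: echo-pod
spec:
  containers:
  #@ for e in data.values.echos:
  - name: #@ "echo-" + e.name
    image: hashicorp/http-echo
    #@ # one listen port and one message per container
    args: #@ ["-listen=:" + str(e.port), "-text=" + e.hello_msg]
  #@ end
```

Three entries in `echos` means three containers in the rendered pod, with no copy-and-paste.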
So I'm going to declare some data values in a different file, just like Helm. I love this part of Helm; it's the best part. And then I have some echos here, and I have two maps. One is named first, one's named second. And then I'm also going to pass in some extra fields here and finally template it into my containers. So here, this is the live demo that's just running on top of carvel.dev/ytt. If you go here, you can start editing all of this stuff. And you'll see that if I add another section here and call this third, the pod output over here on the right is updating in real time. Let's go ahead and put that on another port so that we can cause a conflict. I probably could template this later if we wanted to. And we'll say hello to KubeCon. So you can see that this is powerful, right? But at the end of the day, if I wanted to do this declaratively by just writing a pod spec, then all of the same information would be here, right? So what's the value of this abstraction? Am I just creating a needless abstraction? Well, remember how we were talking about ports? And ports are this very infrastructure-y thing. And infrastructure is where pain comes from, because we have to make sure that we glue things together. And sometimes when we're using the glue, our hands get stuck together and it's not fun. Well, if my ports, when I configure my network, are not the same, then I'm going to have a bad time. But if we go to our service definition, you'll see that I have another loop right here. And I'm naming my ports after my containers. And then I have the ports from that same data structure, the exact same data structure from one place, where I'm not repeating myself anymore. I can, as a programmer, use a loop in a Python-like language that's readable by sysadmins, and then make sure that all of the ports are the same. And when you look at the output, you can see that that third container that I have there is matching in the port network configuration. Very cool.
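The data-values file and the matching Service loop could be sketched like this. The values, names, and selector are illustrative:

```yaml
#! values.yml
#@data/values
---
echos:
- name: first
  port: 8080
  hello_msg: "hello from first"
- name: second
  port: 8081
  hello_msg: "hello from second"
```

```yaml
#! service.yml
#@ load("@ytt:data", "data")
---
apiVersion: v1
kind: Service
metadata:
  name: echo-svc
spec:
  selector:
    app: echo
  ports:
  #@ for e in data.values.echos:
  - name: #@ "echo-" + e.name
    port: #@ e.port
    targetPort: #@ e.port
  #@ end
```

Because both the container loop and the Service loop read from the same `echos` list, the ports can never drift apart: add a third entry to the values file and both the pod and the Service pick it up together.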
Here you can also see that our type safety comes from a schema definition. And this is like a total game changer for keeping YAML safe, because now I don't have to worry that when I have a port number, it's going to accidentally turn into an octal output or something. Or that if I have a commit SHA, it's going to convert from a string one day into a number, or another kind of octal or hexadecimal output kind of thing. Just all of these surprising things in YAML that come up and bite us when we're trying to just do our jobs. And, I don't know, you're like, I made that change three months ago and it took that long for the Git commit SHA to then bite me in production, right? Type safety is going to prevent us from having those problems, and I love having a schema for all of my inputs when I give an interface to somebody else. So hopefully you can see a little bit why I'm so excited about ytt. This is probably one of the most important tools in the Carvel project. And so I would encourage you, just as kind of the action item, to go to Carvel.dev, or show this to the people that have the YAML problems in your organization and send them to ytt. There's a lot of non-trivial things you can do here. Before I get to the other tools, I just kind of want to show you that there are things you could never dream of that are possible. Over here in my project, this is a control repo for a cluster. It's obviously very non-trivial, super complicated already. What's happening here is I'm configuring authentication and access to the Kubernetes API. There's a mixture of Dex and Pinniped being used. Pinniped also has two components, right? There's the supervisor, there's the concierge. I'm looking into this stuff as the infrastructure engineer: what's happening with all of these projects? And then I have to read all of their documentation, look at how all of these components are configured. They're all using different schemas for their own YAML stuff.
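The schema that provides that type safety is itself just annotated YAML. A sketch matching the values file above:

```yaml
#! schema.yml
#@data/values-schema
---
#! the single item below defines the type of every array element;
#! a string port or a misspelled key is rejected at render time
echos:
- name: ""
  port: 0
  hello_msg: ""
```

With this in place, ytt infers that `port` must be an integer and `name` a string, so a value like `port: "8080"` or a typo like `hello_mgs` fails fast instead of surprising you in production months later.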
I have to use those in inline strings inside of secrets. It's a mess. And so I go and I fork the YAML configs to install these three components into this upstream folder. I have Dex, the concierge, and the supervisor. And then in the patches folder I have a bunch of ytt programs to start overlaying over those base configurations. So for Dex, this one in particular, well, let's actually look at the simpler patches first. So for the supervisor, I wanted to make some simple edits, right? Here I wanted to find this deployment, so I make an object that has the group/version/kind information. Then I start an overlay on this document, and I'm looking for the container by name. So then we find the pinniped-supervisor in there, and I add a liveness and a readiness probe. A great thing to do for most of the things that you want to deploy to your cluster, right? But this is a nice simple patch. In fact, it's even feeling like Kustomize, but without any of the magic, right? Instead of relying on a strategic merge patch, I'm overlaying by name explicitly. If I wanted to overlay by probes instead, or just add some fields to every single container, or specifically to a sidecar, then I would be able to do that easily. I can append, I can remove. It's not magic, right? But then look at what I had to do to Dex. Patching the Dex config, this is one of the most important parts here that I want to hammer home, which is that I've only been working on Kubernetes configurations, but I have all kinds of YAML problems that are not related to Kubernetes at all. In fact, Dex itself is configured by YAML, and there's no OpenAPI spec or custom resource definition that I can query. There's no auto-completion that's going to happen in my editor to make sure that I write a proper Dex config. So I have to be careful and I have to know how to read the docs.
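That kind of explicit, non-magic patch looks roughly like this. The probe paths and ports are illustrative, not taken from the actual Pinniped configuration:

```yaml
#@ load("@ytt:overlay", "overlay")

#@ supervisor = overlay.subset({"apiVersion": "apps/v1", "kind": "Deployment", "metadata": {"name": "pinniped-supervisor"}})

#@overlay/match by=supervisor
---
spec:
  template:
    spec:
      containers:
        #@overlay/match by="name"
        - name: pinniped-supervisor
          #@overlay/match missing_ok=True
          livenessProbe:
            httpGet:
              path: /healthz
              port: 8443
              scheme: HTTPS
          #@overlay/match missing_ok=True
          readinessProbe:
            httpGet:
              path: /healthz
              port: 8443
              scheme: HTTPS
```

Every match here is explicit: the document is selected by group/version/kind and name, the container is selected by its `name` key, and `missing_ok=True` says it's fine that the probes don't exist yet in the base config.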
And frankly, once I fork the Dex config, all of the defaults from the upstream config, I'm going to lose them. So I actually want to take my upstream config and then patch it. But if you actually look inside of this upstream config, in this blob of YAML that I definitely didn't write, it's like hundreds of lines, maybe just 159, you'll see that this inlined YAML string here is just part of a data field in a config map. So how do I patch this thing? This is not something that I'd really be able to get to, ever, in Kustomize, because it's not actually part of the YAML structure. There's no longer any tree of keys underneath this. This is a string. This has to be parsed and serialized. And then I have to patch on top of that, and then I have to serialize it back into a string. And also, this is a nightmare because this is a config map and it probably should be a secret. I'm going to have secrets in here, so I should change the type of that. ytt is so powerful that I can do this in a single program. I can define my updates, choose all of the merge behaviors for all of the fields that I had to read the docs on, include my secrets, pull my secrets from a values file instead of inlining them directly in my patch, and then find the config map, overlay it, change the config map to a secret, and use a lambda function to base64-encode the result of decoding what was in there and applying my updates to it. This is all possible. And what this lets me do is copy the Dex config right out of the repository, patch right over it, keep all of the defaults in the normal configs, and show the provenance of my infrastructure decisions. ytt, love it, cool. All right, friends, moving on from this super complex example, the rest of everything is quite simple. I love ytt. So at Carvel.dev, you'll see that Carvel is a suite of tools. We've got a bunch of them and I love them. kapp is a way of keeping all of my resources together when I apply stuff to Kubernetes.
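A compressed sketch of that Dex patch, showing the decode, patch, re-encode shape. The resource name, key name, and the update applied are all made up for illustration:

```yaml
#@ load("@ytt:overlay", "overlay")
#@ load("@ytt:yaml", "yaml")
#@ load("@ytt:base64", "base64")

#@ def patched(old):
#@   cfg = yaml.decode(old)                      # parse the inlined YAML string
#@   cfg["issuer"] = "https://dex.example.com"   # apply our update, keep upstream defaults
#@   return base64.encode(yaml.encode(cfg))      # Secret data must be base64-encoded
#@ end

#@overlay/match by=overlay.subset({"kind": "ConfigMap", "metadata": {"name": "dex"}})
---
#@overlay/replace
kind: Secret
data:
  #@overlay/replace via=lambda old, _: patched(old)
  config.yaml: ""
```

The upstream file stays an unmodified fork, the defaults survive, and the lambda reaches inside what Kustomize would treat as an opaque string.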
You probably understand, if you've used Kubernetes, that the namespace is kind of the way of dividing things into folders, and it's a very flat way of splitting up your cluster. And so your resources, you can only really be trusted to keep them kind of in a namespace. But what if you need more than one namespace? What if you have resources at the cluster scope? What are you going to do? Make a whole virtual cluster for your app? Maybe. Kubernetes is kind of broken in that way, in that structure. And so we need some sort of cataloging to keep stuff together and to diff changes. We have to know that these services go with these deployments, and that this cluster role probably was for this app. We need something to track that, and kapp is really good at that. It also makes the user experience of applying stuff to a cluster much better. So say when you do a kapp deploy, you name your application and then pass your files. It all gets stored under that name, and then you put this together. And then instead of doing nothing as you just wait around for everything to reconcile, it actually waits for everything to become healthy and tells you what's happening as those changes are happening to the cluster. It also gives you an opportunity to read a diff ahead of time. It tells you which namespaces are being affected. Very cool stuff. I love kapp. Cool. Let's look at some of the other tools. So kbld. This is a fun one. Have you ever thought about how, whenever you deploy stuff, you usually use a container image and a tag, and then you've got maybe 40 copies of this container running out across all of your clusters, and then the tag might actually change? And so you probably should actually be using image digests, but then you've got to go look up the digest somehow. kbld's really good at that.
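In practice that workflow looks something like this (a sketch; it requires the kapp CLI and a reachable cluster, and the app name and directory are illustrative):

```shell
# deploy everything under config/ as one named app
kapp deploy -a echo-app -f config/
# kapp prints a diff, asks for confirmation, waits for each
# resource to reconcile, and shows progress as it converges

# later: list every resource kapp is tracking for that app,
# across namespaces and at cluster scope
kapp inspect -a echo-app
```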
It will resolve all of the digests for you: if I pop into my terminal here and echo a very short YAML file, such as maybe nginx:latest in an image field, and then pipe it to kbld, I want to resolve all of the image references and it will just go and do that for me. And it actually outputs a metadata annotations update to show that this image tag was resolved and what tag it came from. So that way you can actually put this SHA immutably in all of your deployments, and then your containers underneath will never change on you when your pods are thrashing around and your deployment has to create new replicas. Super fun. And if I look at some of the other tools, there's a bunch more. So we've got seven primary tools in the Carvel project. I've covered ytt. kbld can also resolve our image references. imgpkg is a way to put anything inside of an OCI artifact and then basically use your image registry as an S3 bucket. If you want to put Kubernetes configs, binaries, if you want to put a rootfs for a Linux machine in there, or some kernels, if you want to put some Wasm containers, you can make packages out of OCI artifacts and actually reference them together and then copy them across air gaps. It's wonderful. Great for building products. Great for doing stuff with the government. It's fantastic. vendir, one of the most general tools. I love this thing. You just have a single YAML file that declares the state of your directory, and then you can fetch stuff from Git, from HTTP. You can go to SVN or all of the archaic ways of versioning your software. And secretgen-controller is a way to export and import and generate bytes of data from Kubernetes resources or from random number generators, and actually generate and then distribute those secrets, whether they're SSH keys or API keys or passwords for Git or something, or, also a really common one, image registry credentials.
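The kbld invocation described above can be sketched like this (it requires the kbld CLI and network access to the registry; the digest shown is elided):

```shell
echo 'image: nginx:latest' | kbld -f -
# kbld replaces the mutable tag with an immutable digest reference,
# something like: image: index.docker.io/library/nginx@sha256:...
```

The same command works on full Deployment manifests: every `image:` field gets pinned to a digest, so the bits that run can never silently change underneath a tag.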
You can distribute that to all of your developer namespaces and have a really great time, instead of copying and repeating yourself over and over again inside of a repository. So secret management with secretgen-controller is something that I would recommend you look at if you're feeling that pain inside of your production environment, which everybody is. Lastly, this top-level project is kapp-controller. And kapp-controller is a continuous delivery and GitOps tool. Again, you might have heard Whitney talking about it in the keynote this morning, if you happened to make it. And this is a world of rainbows and consistency and confidence, because GitOps is the way that we're going to build and innovate our next decade. If we're going to be using all of these distributed computers and cloud native interfaces, and we're having this sprawl of complexity, the previous generation of intellectual property, we've lost it. There's so much entropy in all of the virtual machines that we've deployed and the bare metal servers that we've built that oftentimes we find ourselves in a new job just having to rebuild everything from the ground up. Can we do better? I think so. GitOps. GitOps is the way that we can use our collaboration tools to work together and keep track of all of the individual moments of brilliance that we all have when we're collaborating together to build up our distributed systems. And if we can get a little bit of a better hold on that, I think we can postpone when we have to tear everything down and build everything up again. And that's going to help us innovate towards systems, whether it's a payment system where I pull out my phone and I'm paying for my subway ticket, or I'm ordering a car from the new version of the internet on a 6G network or something. Those systems, that's the stuff that we're actually trying to do when we're with each other, and I think that GitOps is the way that we're going to help prevent the entropy of all of our intellectual property.
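A kapp-controller App resource that ties several of these tools together might be sketched like this. The repo URL, names, and paths are made up for illustration:

```yaml
apiVersion: kappctrl.k14s.io/v1alpha1
kind: App
metadata:
  name: echo-app
  namespace: default
spec:
  serviceAccountName: echo-app-sa
  fetch:                      # pull config from Git, the GitOps source of truth
  - git:
      url: https://github.com/example/echo-config
      ref: origin/main
  template:                   # render it with ytt
  - ytt:
      paths:
      - config/
  deploy:                     # apply and track it with kapp
  - kapp: {}
```

The controller keeps reconciling this loop inside the cluster, so the cluster continuously converges on what's in Git rather than on whatever was last applied by hand.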
And kapp-controller lets you do this with continuous delivery in your cluster, bridging all of these tools and more, things like CUE, things like Helm, bringing the cloud-native ecosystem into a continuous delivery controller that gives you a way to take your intellectual property from the place that you collaborate and get it into your fleet of computers. So, yeah. Carvel.dev. Open up your phone, send it to your coworkers, navigate the website, try the ytt live demo, and everything is installable via brew. So, my name's Lee. Thanks for listening to my story and letting me express my language a little bit in the terms of pain and hip-hop, and I will be around for questions, probably over there just outside the door. So, cheers.