Hello. Meet Hero. Hero is application source code on a developer's laptop. Hero longs to be a real application, running in production, serving end users, and we're going to help Hero on their journey. Our job is to help Hero navigate hundreds of CNCF projects, choose which ones to use, and integrate them so that Hero can live their dream. I'm Whitney. And I'm Victor. I'm the decoration here; she will be talking most of the time. Victor and I host a streaming show called You Choose, and in each episode we help the community make a system design choice. For each system design choice, we gather all the relevant CNCF projects, invite an expert from each project, and each expert gets five minutes to talk about their project. At the end, the community votes. The project that gets chosen is implemented into our ongoing demo. It's not necessarily the best project; we're not making a judgment here. It's simply the project that the community wants to learn more about. For example, our first system design choice was how to build a container image from application source code. The winner, well, not the winner, the chosen one, was Cloud Native Buildpacks, and Hero leveled up and became a container image. Next, Hero needed to be hosted in a container image registry. The community chose Harbor for that one, and then our Hero was a stream of images safely stored in their registry. That brings us to today. Right now, we have three system design choices to make during this presentation, and we need the community's help. We need to choose a CNCF project that will provision a cluster, one that will implement GitOps, and one that will help us write application configuration. So let's get started and help Hero live their dream. First up, we need to provision a production cluster. For this choice, we're deciding between Crossplane and Cluster API.
But before we dig into the tools themselves, let's talk about cluster provisioning generally. Cluster provisioning is complex. No matter which infrastructure you deploy to, which cloud provider you use, or whether you provision your cluster on-prem, and no matter which bootstrapping tool you use, there's a lot to consider. You have to think about things like node sizes, subnets, VPCs, cluster authorization, node groups, roles, policies, security groups, gateways, route tables, and it goes on and on. No matter which CNCF tool you choose for cluster provisioning, Crossplane or Cluster API, cluster provisioning is never not complex. So given this complexity, what are we looking for in a cluster provisioning tool, generally speaking? We want our tool to provide abstractions so that we can describe the cluster declaratively; we want to use infrastructure as code. Declarative definitions are good because they can be source controlled and versioned, but also because they're repeatable and they scale. More specifically, we want our production cluster to be described using Kubernetes resources. Using Kubernetes to describe our cluster has two more big benefits. First, we can use the Kubernetes control loop to make sure that the desired state we declared stays in sync with the actual state. Second, we can use all of the tools in the Kubernetes and CNCF ecosystem to help manage those Kubernetes resources. OK then, let's choose between Crossplane and Cluster API. Cluster API is a focused tool. It does one thing, helping you provision and manage the lifecycle of your clusters, and it does that thing really well. With Cluster API, you can host your cluster almost anywhere: all major cloud providers are supported, and so are all major Kubernetes distributions. Next, Crossplane is a tool that lets you interact with any API on the planet using Kubernetes.
Cluster provisioning is a really popular use case for Crossplane, but there's a lot more you can do with it. Crossplane has a couple of big benefits. One is that with Crossplane you can provide a simplified interface, so whoever wants to provision a cluster only needs to know about the parts that are relevant to them. All that complexity we talked about earlier can be abstracted away from the people who don't need it. The other big benefit of Crossplane is that, since it can do other things besides cluster provisioning, you can use it for more advanced use cases. For example, you can make one Crossplane resource that creates a Kubernetes cluster that is also integrated with a cloud database, and maybe has Knative pre-installed, all already working together. So to recap: Cluster API does one thing, provisioning Kubernetes clusters, and it does that thing really, really well. Crossplane offers more options and more flexibility, but there's a lot of extra complexity that comes with that. So which one did you choose? Which one did the community choose? Y'all chose Crossplane. So, Victor? Oh, nice. Let's make a production cluster for our Hero. You need to imagine that I'm coding right now; I couldn't do it live. Anyway, over here what you see is a cluster claim. That's the way we organize resources in Crossplane: convert them into custom resource definitions, put controllers behind them, and so on and so forth. The whole idea behind compositions and claims is to simplify interaction with infrastructure and services for everybody else. Shifting left. In this case, I'm saying: hey, these are the labels that let you choose what you want. In this case, have a cluster in AWS, and have that cluster be EKS. It can be anything you want; that's the point of it. And there are some parameters that end users, let's say developers or anybody else, can use to specify what they want, like: I want medium-sized nodes.
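A claim along those lines might look roughly like the following. This is a sketch, not the actual demo code: the API group, kind, and parameter names are all assumptions, since in Crossplane they are defined by whatever composition your platform team publishes.

```yaml
# Hypothetical cluster claim. The group, kind, and parameter
# names are placeholders defined by your own composition.
apiVersion: example.org/v1alpha1
kind: ClusterClaim
metadata:
  name: hero-production
spec:
  # Labels that select the matching composition:
  # a cluster in AWS, and that cluster should be EKS.
  compositionSelector:
    matchLabels:
      provider: aws
      cluster: eks
  parameters:
    nodeSize: medium     # the composition decides what "medium" means
    minNodeCount: 3      # how many nodes we want
    version: "1.27"      # which Kubernetes version
```

The end user only ever touches this small interface; the composition behind it expands the claim into all the low-level cloud resources.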
I don't know what medium-sized is in AWS, but let it be bigger than small. How many nodes do we want? Which version of Kubernetes? Or anything you choose; it's your option to create those interfaces. Now, if I apply this to Kubernetes and list all the managed resources, you can see that from that simple composition, that simple claim, we got a bunch of things. We got subnets, internet gateways, VPCs, routes, security groups, Kubernetes objects, Helm charts installed, and so on and so forth. That's the way to provide services to your end users: behind that service, everything you require for managing something will be happening. And as the end user, I can say: I don't care about those low-level details, let me see the status of my claim, of what I created. You can see here on the screen that it is still not finished; it just started. It takes approximately 20 minutes, give or take. So I'm going to fast forward and then output the claim again, and you can see that now it is successful, right? Everything is true or successful; there are no obvious problems. My Kubernetes cluster is up and running. And all I had to do to get a cluster that is fully production ready, exactly as my company wants me to have it, was to write a few lines of YAML. Now, to prove that that's really happening, I'm going to retrieve the kubeconfig, put it in a variable, and say: give me the nodes. And there you go: three nodes of my newly created, fancy, amazing, production-ready cluster are up and running. And that's you again. That's me again. Great. So Victor just made a production cluster for Hero. Now what we want to do is deploy our application using GitOps, so setting up GitOps is our next step. For this choice, we have three projects that we're choosing between: Argo CD, Flux, and Carvel kapp-controller. But before we dig into the tools specifically, let's talk about GitOps generally. There are four features of GitOps.
The first is that your resources must be defined declaratively. Here we've written an application manifest. The second feature of GitOps is that the resource definitions must be versioned and immutable. Almost always, Git is used for this, putting the Git in GitOps. Third, the resource definitions are pulled automatically. The GitOps tool does this: it watches the Git repo at a periodic interval. And finally, the desired state as defined in the Git repo is continuously reconciled with the actual state, in this case in the Kubernetes cluster. In this diagram, we're showing our application getting deployed into Kubernetes, but the tool also keeps watching to make sure the application always stays in the state we want it to be in. So let's explore the different tools that do this. First up, we have Argo CD. Argo CD does GitOps, and it does it well. It also has a fantastic UI. And it renders Helm charts into Kubernetes resources before applying them to the cluster. Next up, we have Flux. Flux does GitOps, and it does GitOps well. Flux has a tighter integration with Helm, so with Flux, not everything is rendered into plain Kubernetes resources first. Carvel kapp-controller does GitOps, and it does GitOps well. With Carvel kapp-controller, you don't have to give cluster-level root access to your whole GitOps tool; you can grant access on a per-application basis. And it also integrates well with the other tools in the Carvel suite of DevOps tools. To recap: all of these tools do GitOps, and they all do GitOps well. The differences are slight. Argo CD has that amazing UI, Flux integrates tightly with Helm, and Carvel kapp-controller integrates well with the other tools in the Carvel suite. So what did the community choose? Y'all chose Argo CD. Victor, let's implement GitOps so that later we can deploy Hero. Doing it right now. Yes, yes. So here I have a directory on my laptop with a couple of files. One defines cert-manager, and the other defines SchemaHero.
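Those files are plain Kubernetes manifests; what makes them GitOps-managed is an Argo CD Application resource that points at their directory. Here is a rough sketch of what such an Application could look like; the repository URL, path, and names are made-up placeholders, not the actual demo repository.

```yaml
# Hypothetical Argo CD Application; repoURL and path are placeholders.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: production-infra
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/hero-infra.git  # assumed repo
    targetRevision: HEAD
    path: production        # only this directory is monitored
  destination:
    server: https://kubernetes.default.svc
    namespace: production
  syncPolicy:
    automated:
      prune: true           # remove resources deleted from Git
      selfHeal: true        # revert manual changes in the cluster
```

With automated sync enabled, anything pushed into that directory gets applied to the cluster without anyone running kubectl by hand.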
The details of what those applications are don't matter. The point I'm trying to make is that I put a couple of files in a directory on my laptop, push them to Git, and then wait a few moments until, in this case, Argo CD synchronizes and detects those changes. And if I retrieve all the resources currently running in the SchemaHero namespace, you can see that SchemaHero is automatically running. And if I do the same thing for cert-manager, which is going to happen any time now, you will see that cert-manager is up and running. The point here is that we do not need to interact with the system directly. All we have to do, as developers or engineers or whatever we are, is write our stuff on our laptops and push it to Git. And this is the UI of Argo CD, showing a couple of applications that are running. I'm using the app-of-apps model, and in this case, the one that really matters is production-infra. That's the one monitoring the specific directory in the specific repository where I pushed those two files. And you can see that cert-manager is there, Contour is there, and SchemaHero is there; cert-manager and SchemaHero are the two that I just pushed a couple of moments ago. Argo CD detected them and figured out that there is a difference between what I want, which is what is in Git, and what actually is, which is what is in the cluster. And I can drill down into the details. I can go and see the child resources of the parent resources that were created, and the children of the children of the children, right? You can easily visualize and see what's going on without ever touching your system. And this is Whitney again. Excellent. So now we have GitOps ready to go in our brand new production cluster. Hero will be frolicking there soon. Our third and final decision is to choose which tool we want to use for application configuration. For this choice, we're deciding between Kustomize, Helm, Carvel ytt, and cdk8s.
So why do we need a configuration tool at all? Why don't we just write our application manifest in plain Kubernetes YAML? The reason is that there's a lot to define. We need to define things like container images and how we want to update them, application-specific configuration like supporting services and connection details, configuration related to how the application integrates with the infrastructure, and Kubernetes-specific configuration. It's a lot. So let's go over the tools that help us manage this. First up, we have Kustomize. Kustomize is integrated into kubectl via the kubectl apply -k command. With Kustomize, you have a base manifest, your base YAML, and any changes you want to make to that base, you make in small patch files. So it's using a patching strategy, and those patch files can be overlaid onto the base. This is used, for example, when you want to deploy one application across many environments and only need small changes between the environments. Next up, we have Helm. Helm is a package manager for Kubernetes, and you've likely interacted with it if you've installed third-party apps into your cluster. Helm uses a templating strategy for configuration: it takes the values that are likely to change per deploy, factors them out, and puts them in a values.yaml file. It also offers dynamic configuration with Go templating. Next up, we have Carvel ytt. ytt stands for YAML Templating Tool. Carvel ytt uses both patching and templating strategies, and you can add dynamic configuration with a language called Starlark that you write inside YAML comments. So with Carvel ytt, your configuration is YAML all the time, and you can always process and validate it as such. And then finally, we have cdk8s. cdk8s stands for Cloud Development Kit for Kubernetes.
With cdk8s, you can write your application configuration in any language you want, as long as that language is JavaScript, TypeScript, Python, or Go. Then, with cdk8s, you run a synth command that turns that cdk8s code into pure vanilla YAML, and you can apply that YAML to any cluster. So to recap, the choices are Kustomize, Helm, Carvel ytt, and cdk8s. So what did you choose? What did the community choose? Carvel ytt. So Victor, this is the big moment. Let's deploy Hero to production and help them live the dream. OK, let's do it. Over here, you have one file, which is the ytt definition of, in this case, a deployment, with all the variations I might ever need. The important thing to note here is that it's all YAML, right? There is nothing special, except that it's YAML plus comments, and ytt knows through the comments what to do: hey, here is where I should inject a value, this is a conditional, this is a loop. But it's always, always YAML. Then we have the schema, right? The schema defines which parts of those YAMLs can be overlaid or templated, which parts of all those shenanigans we can change. And finally, we have the Argo CD application definition, which tells Argo CD a couple of important pieces of information. One is: hey, I have an application; everything is an application, even if it's infrastructure. And the definition of that application is located in this specific repository, so go and look over there. But don't look just anywhere in that repository; there is a specific directory that you should be monitoring, right? Argo CD goes there, and right now it will find nothing, because I didn't push anything yet. But here, I'm using ytt to convert my ytt templates into the final YAML. I'm not going to push the ytt templates into Git; I'm just using ytt to get the final output, the final result, and store it as YAML in whatever directory, right?
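As a concrete illustration of that flow, here is a minimal ytt sketch; the file names and values are assumptions, not the demo's actual files. The template's ytt logic lives entirely in YAML comments:

```yaml
#! config.yaml -- a ytt template; everything ytt-specific is in comments
#@ load("@ytt:data", "data")
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: #@ data.values.name
spec:
  replicas: #@ data.values.replicas
  selector:
    matchLabels:
      app: #@ data.values.name
  template:
    metadata:
      labels:
        app: #@ data.values.name
    spec:
      containers:
      - name: app
        image: #@ data.values.image
```

A separate data-values file supplies the parts that can change:

```yaml
#! values.yaml -- default data values that the template references
#@data/values
---
name: hero
replicas: 2
image: ghcr.io/example/hero:latest
```

Running something like ytt -f config.yaml -f values.yaml > rendered.yaml produces plain YAML that can be pushed to Git for Argo CD to pick up.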
And this is that output, right? I'm using ytt to convert something complex into plain YAML, pushing it to Git, and waiting a few moments. I'm pretending to wait right now. And then, if I retrieve everything that is happening in my production namespace, you will see that all the deployments, pods, services, ingresses, and whatever else constitutes that specific application are already there. It's up and running, and we can live happily ever after. Ta-da-da-da! Victor, we did it! Hero is now running in production, in a cluster we provisioned, using GitOps and our configuration tool. If you want to play with these tools yourself and choose any path you want, use the QR code to see our Git repository. Thank you so much, everyone. Thank you.