So hi, everyone. Thank you for being here. I'm Pietro Terizzi, and I'm a DevOps engineer at Classics. This is my first talk, as I said, so go easy on me; I will do my best. As you can see, the title is "Day 2 Has Arrived: How the Carvel suite and Cluster API can bring GitOps to your Kubernetes infrastructure." These are my contacts: ask me whatever you want after the talk, I will be really happy to speak with you and do some networking.

So whenever you think about a product, whether it is a simple app or a whole infrastructure, you have to design a journey to satisfaction, for your client and for yourself. A simple way to describe it is as a step-by-step guide based on days. So what are these days? On day 0, you design your application: you take your requirements and apply them to an existing infrastructure, so you produce an architecture. Then, on day 1, that architecture lands in your environment: this is deployment. During deployment, you install your application in a simple, minimally available way, with a first configuration that just works. But then the pain comes in, and it is the pain of day 2: maintaining your application. On a production-grade environment you will be fighting new challenges around network traffic, security, monitoring, and so on. In this talk, though, we will focus on infrastructure updates and application updates.

These are some of the challenges you will face during day 2, and they can be really different from each other. The first one that comes to my mind, because it repeats across enterprise installations, is timing: you need rapid development and delivery of updates that are still reliable, so you can avoid disruption for your clients.
You also have to use trustworthy technologies: components that are adopted by many companies and able to integrate with each other. At the same time, this can be a really difficult challenge that leads to complexity, so you have to develop a strong GitOps and DevOps culture with your team. So how can we achieve this? Not all of these challenges can be solved directly, but working declaratively with the GitOps methodology helps you continuously reconcile the state of your applications and infrastructure across different environments. How do we get there? We can bring in tools like Cluster API and the Carvel suite.

So what is Cluster API? Cluster API is a Kubernetes sub-project; it has been around for a few years now, I don't remember exactly. It provides a declarative API on a management cluster to simplify provisioning, upgrading, and operating your managed Kubernetes clusters. It comes with a key tool called clusterctl. clusterctl helps you bootstrap a first cluster, which will be called the bootstrap cluster or management cluster, onto which you deploy custom resources like control planes, machine deployments, machines, and so on. These represent the nodes of your new infrastructure as custom resources. During the deployment there is a bootstrap phase, based on cloud-init, that joins the nodes to the kubeadm control plane.

You can do this on different environments and different public clouds. There are three types of providers, but we will focus on just two. The first one is the bootstrap provider: this is how you stand up your first cluster and also install the custom resource definitions for Cluster API. These providers differ because your base cluster can be based on Docker, on kubeadm, on Talos, and so on.
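To make the "nodes as custom resources" idea concrete, here is a minimal sketch of the kind of manifests Cluster API reconciles. The names (`dev-cluster`, `dev-cluster-control-plane`) and the Docker infrastructure kind are assumptions for illustration; a real setup would use the provider matching your cloud.

```yaml
# A Cluster object ties together a control plane and an infrastructure
# reference; the management cluster's operators reconcile both.
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: dev-cluster          # hypothetical cluster name
spec:
  controlPlaneRef:
    apiVersion: controlplane.cluster.x-k8s.io/v1beta1
    kind: KubeadmControlPlane
    name: dev-cluster-control-plane
  infrastructureRef:
    apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
    kind: DockerCluster      # CAPD; would be AWSCluster, AzureCluster, etc.
    name: dev-cluster
```

Worker nodes are declared the same way, as a `MachineDeployment` whose replica count and Kubernetes version are just fields you can change and let the operator reconcile.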
And then we have the workload cluster, where your enterprise deploys its applications for production-grade environments. It can be placed on the most famous managed public clouds like AWS, GCP, and Azure, but also on DigitalOcean, OpenStack, et cetera. Here are some simple concepts about how Cluster API achieves this. Fundamentally, there is an operator installed on the management cluster that tries to reconcile your infrastructure at different levels: core, bootstrap, infrastructure, control plane, and so on. And, as you can see, they can be placed on different environments.

But now it is time to stop working imperatively. Just creating the YAML manifests with clusterctl is not enough; you have to continuously reconcile your infrastructure so that it reflects the desired state. For this there is a fundamental application called kapp-controller. It is an operator created by VMware's Carvel team, and its job is to manage cloud-native applications: it continuously fetches configuration from different locations, which could be your GitHub or GitLab repository, a shared folder, and so on, and it tries to deploy that configuration, overlaying it with different values for each environment.

So the first step, as I said, is to create a bootstrap cluster. In this demo we use kind, so a simple Docker provider. On that management cluster, you deploy the Cluster API CRDs, which is the general way to install the Cluster API custom resource definitions on your environment. Then we deploy the more specific providers on top, such as CAPD, CAPZ, and so on, to manage the different cloud providers. Finally, we deploy, as I said, kapp-controller, which will continuously fetch the configuration from your branch and deploy it through Cluster API.
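The bootstrap steps just described can be sketched as a short command sequence. This is a sketch, not a verified script: it assumes kind and clusterctl are installed locally and Docker is running, and the kapp-controller release URL points at the latest published manifest.

```shell
# Create the bootstrap/management cluster with kind (Docker provider)
kind create cluster --name capi-mgmt

# Install the Cluster API core CRDs and controllers, plus the Docker
# infrastructure provider (CAPD); use --infrastructure azure for CAPZ, etc.
clusterctl init --infrastructure docker

# Install kapp-controller so the management cluster can reconcile
# configuration fetched from a Git branch
kubectl apply -f https://github.com/carvel-dev/kapp-controller/releases/latest/download/release.yml
```

These commands assume a live Docker environment, so treat them as a command sketch to adapt rather than something to copy verbatim.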
The powerful piece kapp-controller brings is a custom resource called App, which provides a declarative way to install, manage, and upgrade applications on your Kubernetes cluster. As you can see, it has several sections, so you can define quite precisely how to install into your environment. But generally speaking, the ones that will interest you most are the URL, the Git reference, the subPath, and how you template your manifests. The Carvel suite comes with tools like ytt, a dedicated templating language for your packages and applications; in this case, it is the template step. And it comes with different ways to personalize your deployment, using Secrets, ConfigMaps, and so on, to pass values and variables like the name of your cluster, the number of nodes, the Kubernetes version, and so on.

So finally, we achieve all of this by installing an App custom resource on the management cluster. kapp-controller reads your App resource and fetches the configuration onto the management cluster in less than 10 seconds. And then, step by step, it deploys the nodes in your environment. As you can see with kubectl get pods, passing the kubeconfig created during the first step, when we deployed the custom resources for machines, machine deployments, and so on, we can see that this is a really simple way to get workload clusters on different environments. Watching your pods, you will see the typical installation of a Kubernetes cluster. So what have we achieved? We can deploy one or more Kubernetes clusters on different environments, and by simply changing values on your branch on GitHub, like the version or the number of nodes, the operator reconciles the App custom resource against your environment, and you will automatically see nodes added to or removed from your workload clusters.
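An App resource wiring together the fetch, template, and deploy steps just described might look like the sketch below. The repository URL, subPath, Secret name, and service account are all hypothetical placeholders; only the resource structure follows the kapp-controller App spec.

```yaml
# kapp-controller App CR: fetch manifests from a Git branch, render them
# with ytt using per-environment values, and deploy them with kapp.
apiVersion: kappctrl.k14s.io/v1alpha1
kind: App
metadata:
  name: capi-cluster
  namespace: default
spec:
  serviceAccountName: capi-deployer      # hypothetical SA with deploy rights
  fetch:
  - git:
      url: https://github.com/example/cluster-config   # hypothetical repo
      ref: origin/main                                 # branch to reconcile
      subPath: clusters/dev                            # folder with manifests
  template:
  - ytt:
      valuesFrom:
      - secretRef:
          name: cluster-values           # cluster name, node count, version…
  deploy:
  - kapp: {}
```

Once applied to the management cluster, kapp-controller keeps re-fetching the branch, so a commit that changes a value is enough to trigger reconciliation.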
And that can even mean upgrading your environment to a new version, for example 1.22, as easily as changing a value on your GitHub branch. So thank you for watching, thank you for your attention, and write to me with any questions. Thank you.
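As a final illustration of "upgrade by changing a value", the values file consumed by the ytt templates above could look like this sketch; the key names are assumptions, since they depend on how your own templates are written.

```yaml
#@data/values
---
cluster_name: dev-cluster        # hypothetical values schema
kubernetes_version: v1.22.0      # bump this line to roll the cluster forward
worker_replicas: 3               # change this to scale workers up or down
```

Committing a change to `kubernetes_version` on the watched branch is all it takes: kapp-controller fetches the new values, ytt re-renders the Cluster API manifests, and the operators reconcile the nodes to the new version.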