Hi, everybody, and thank you for joining this CNCF webinar. Today we're going to be talking about Cluster API and GitOps, the key to Kubernetes lifecycle management. My name is Nick, and I'm the DevRel lead at Spectro Cloud. What I want to focus on today is a deep dive on a hands-on use case. Typically you'll find a lot of talks and presentations on Cluster API, on GitOps, and on both together, but most of the time they give you a very simple example or use case. Today I want you to have a transparent experience: I'm not going to hide anything around Cluster API, GitOps, and Argo CD, which is the GitOps tool we're going to be using. We're going to do everything live, and we're going to see what considerations you have to make to actually make GitOps with Cluster API useful in your organization.

So let's take a quick look at the agenda for today. First, I'm going to quickly go over Cluster API and GitOps principles. Then I'll take a closer look at the Cluster API components and what a provisioning workflow looks like. Then we'll talk about what sort of additional tools you may want on top of Cluster API and address some caveats, especially when you want to combine it with Argo CD or any other GitOps tooling. And then we'll have the use case deep dive with live demos: we're going to start with a simple demo and then build up to our full use case. So let's get started.

Before talking about Cluster API or GitOps or any other tooling that works with the same pattern in Kubernetes, which is the operator pattern, let's talk about that pattern first. Operators are not something really new; they've been around for a couple of years now. But what makes them very different from what people used to do in Kubernetes before is that you bring automation inside of Kubernetes, whereas infrastructure-as-code and similar tools live outside of Kubernetes. With the operator pattern, you are bringing automation capability within the Kubernetes cluster.

So how does it work? It has multiple components. First, you have what we call custom resource definitions. Cluster API implements a lot of custom resources: the Cluster itself, Machine, MachineDeployment — we're going to see them later. They are essentially resources that extend the native Kubernetes API by making those objects first-class citizens inside Kubernetes. The second component of the operator pattern is a custom controller. The role of the custom controller is to monitor changes made to those custom resources, like the Cluster API Cluster and others, and then react and perform certain actions based on events. So for example, if you create a new Cluster API Cluster, the Cluster API controllers will make sure that this cluster is deployed in the right environment, with the right machines, the right instance sizes, the right network, et cetera. It's automating things outside of the Kubernetes cluster while at the same time monitoring a representation of those resources within the Kubernetes cluster, and it does this permanently. And it doesn't have to be only one controller: you'll see that in Cluster API there is more than one custom controller.
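To make that concrete, here is roughly what a top-level Cluster resource looks like — a minimal sketch with illustrative names and a placeholder CIDR, not the exact manifest from the demo:

```yaml
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: my-cluster
spec:
  clusterNetwork:
    pods:
      cidrBlocks: ["192.168.0.0/16"]   # pod network CIDR
  # the generic Cluster delegates the details to provider-specific objects:
  controlPlaneRef:
    apiVersion: controlplane.cluster.x-k8s.io/v1beta1
    kind: KubeadmControlPlane
    name: my-cluster-control-plane
  infrastructureRef:
    apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
    kind: GCPCluster
    name: my-cluster
```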
Those controllers create a reconciliation loop between the desired state, which is the resources created within the cluster, and the real current state of the infrastructure, which is what exists outside of the Kubernetes cluster. In the case of Cluster API, this is the state of the cluster you want to deploy.

So now let's talk about the GitOps pattern. The GitOps pattern means that you are storing more than just your code in your repository. Traditionally, the dev stores the application code on a Git repo — that's the top part of this representation — which triggers a pipeline, maybe living in GitHub Actions in this particular example, that builds the image. That's the traditional developer pipeline. A GitOps pipeline adds another component to it. More specifically, the dev, or the platform engineer or DevOps engineer, will create the right workflow to generate the Kubernetes manifests for deploying that particular application. So the Kubernetes manifests stored in the Git repository contain all the objects required to install the application on the cluster. We then need an extra tool to create a reconciliation loop between this desired state — the manifests — and what is deployed in the cluster, the current state. In our example today we're going to be using Argo CD, which will be responsible for that particular automation. Any change pushed to the manifests will be implemented in the cluster. For example, if you change the container image, Argo CD will reconcile this within the cluster and replace the image where appropriate. If you delete the application and have enabled the pruning function, then Argo CD will delete the application in the cluster.

There are, of course, multiple benefits to doing this. The first is that everything is now managed declaratively; there are no imperative commands, which helps avoid human mistakes because we bring in this extra automation. The second benefit is around security: all those operations are performed by a service account associated with Argo CD, and only that particular service account requires permissions to perform actions within a particular namespace or within the cluster, which in the end reduces the attack surface of that cluster.

So now let's take a quick overview of Cluster API and some of its main components. At the very top, the Cluster API Cluster type is merely an interface for more specific, lower-level implementation details that are implemented by the infrastructure provider. Then we also have other providers that Cluster API relies upon: the bootstrap provider and the control plane provider. As I was saying, the infrastructure provider's role is to encapsulate all the tasks that are specific to a particular cloud or infrastructure: things like defining instance sizes for the different nodes, how you implement a load balancer, which network you want to use, that type of thing. I gave a couple of examples here: CAPG for Cluster API Provider GCP, CAPA for AWS, and CAPZ for Azure. And there are many others.

The role of the bootstrap provider is basically to turn any machine into a Kubernetes node using cloud-init scripts. If you don't specify anything, kubeadm is used as the default bootstrap provider.
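The bootstrap provider consumes objects like the KubeadmConfigTemplate. A minimal sketch — the kubelet args here are an assumption in the spirit of the CAPG quick start, not the demo's exact config:

```yaml
apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
kind: KubeadmConfigTemplate
metadata:
  name: my-cluster-md-0
spec:
  template:
    spec:
      # rendered into a cloud-init script that runs `kubeadm join` on each machine
      joinConfiguration:
        nodeRegistration:
          kubeletExtraArgs:
            cloud-provider: gce   # illustrative; provider-specific kubelet flags
```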
So the corresponding cloud-init script makes sure kubeadm is used to turn that machine, that instance, into a Kubernetes node. Then there is CABPM for MicroK8s, or CABPT for Talos if you are deploying a Talos cluster — Talos being a curated, let's say very opinionated, Kubernetes distribution.

The control plane provider is responsible for creating the control plane nodes. In the same way the bootstrap provider is responsible for turning a machine into a worker, the control plane provider is responsible for turning a machine into a control plane node. It holds all the configuration specifics: the type of instance, the machine template you want to use, all those kinds of things.

All three components are combined into options for the clusterctl init command that you run when you want to deploy the Cluster API components into the existing management Kubernetes cluster that will host the Cluster API resources. So clusterctl init takes those three options: -b for the bootstrap provider, -i for the infrastructure provider, -c for the control plane provider. You can omit -b and -c, and the system will assume you want kubeadm as both the bootstrap and the control plane provider, which is quite common. The only remaining required parameter is -i for the infrastructure provider, and in our case it's going to be GCP.

So now let's take a look at the different controllers involved with Cluster API and the different resources they are responsible for. We can map the components here to what we've just seen. In the middle, we have the controller manager for the infrastructure provider, the CAPG controller manager. On the top right is the controller manager for the bootstrap provider, and on the bottom right the one for the control plane provider. On the far left is the main Cluster API controller, responsible for the most generic objects: the Cluster, and then the MachineDeployment.

The Cluster is, as I said earlier, the interface for a more specific implementation of the cluster within the cloud provider you want to target. This is why it references the cluster object inside the infrastructure provider, which is the GCPCluster. So if the infrastructure provider is GCP, there's a one-to-one mapping between the generic Cluster and the more specific GCPCluster. But the cluster is also composed of the control plane, of course, and this is why the Cluster also references the control plane provider's object. So we have a reference to the GCPCluster, and we have a reference to the object whose controller manager is responsible for deploying the control plane.

Now, for the worker nodes, this is where you have the MachineDeployment, defined here on the left. It's similar to a Deployment in Kubernetes in the way it behaves: in Kubernetes, a Deployment resource controls how pods are deployed within the cluster — managing the number of replicas, how they are started, and making sure you always have the desired number of pods running. Here it's the same principle, but for your worker nodes. The MachineDeployment manages a MachineSet, and the MachineSet is composed of one or several Machines. Again, the Machine is quite generic, and each Machine has a one-to-one mapping with a GCPMachine.
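Putting the left side of that diagram into YAML, a MachineDeployment wires those references together. A trimmed sketch, with illustrative names and version:

```yaml
apiVersion: cluster.x-k8s.io/v1beta1
kind: MachineDeployment
metadata:
  name: my-cluster-md-0
spec:
  clusterName: my-cluster
  replicas: 1
  template:
    spec:
      clusterName: my-cluster
      version: v1.26.1                  # illustrative Kubernetes version
      bootstrap:
        configRef:                      # handled by the kubeadm bootstrap provider
          apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
          kind: KubeadmConfigTemplate
          name: my-cluster-md-0
      infrastructureRef:                # provider-specific machine template
        apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
        kind: GCPMachineTemplate
        name: my-cluster-md-0
```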
So there's a one-to-one mapping with a GCPMachine, and the GCPMachine encapsulates the information specific to your infrastructure provider — GCP, in our case. And the MachineDeployment, since it works from a template, also references a GCPMachineTemplate, the same way a Deployment references a Pod template. In addition, we've seen before that the bootstrap provider is responsible for turning the machine into a worker node, so the MachineDeployment also references the corresponding object managed by the kubeadm bootstrap controller. Note that I don't represent all the relations between all the objects on this picture — for example, the control plane also has a relation to a GCPMachineTemplate, because it needs to get the machine information from somewhere — it would just be too much, so I kept the main ones.

I know it may be a bit abstract at the moment, but what I propose is to go through our first demo, where we'll deploy a workload cluster with Cluster API. The only thing is that I've already installed Cluster API within the management cluster, meaning I've already run the clusterctl command. But what we can do already is check that I have nothing in my GCP environment, and we'll start from there.

So let's take a look at the management cluster right now. I'm using k9s, which is very useful to get quick access to whatever information you want on the cluster. Here I can see that clusterctl init has deployed the main components: the CRDs, which we're going to see in a minute, and all the controller managers. So the CAPG controller manager, which is the infrastructure provider controller; then the bootstrap provider controller; the control plane provider controller; and the generic main Cluster API controller. In terms of CRDs, anything ending in cluster.x-k8s.io has been installed by Cluster API. We can look at clusters here: I don't have any cluster deployed, and for the one-to-one-mapped GCPClusters, we don't have anything there yet either.

So let's create the cluster. The command you want is clusterctl generate cluster. You specify the name of the cluster — that would be capi-nick here — the Kubernetes version, the number of control plane nodes, and the number of worker nodes. This generates the manifests, which you can redirect into a file, and then we'll take a look at that file once it's been generated.

OK, let's take a look. We have a bunch of information there. Let's start with the Cluster: generic information like the cluster network CIDR block here, a reference to the control plane, and a reference to the infrastructure provider, as I was mentioning before. Then the one-to-one mapping with the GCPCluster, with more specific information like the GCP network, the project ID, and the region where we want to deploy the workload cluster. In terms of the control plane: a bunch of information like the kubeadm configuration spec and the machine template which, as I mentioned, is also used by the control plane to deploy its nodes. And here is the GCPMachineTemplate referenced by the control plane and the machine deployments, using that particular image.
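For reference, the CLI steps in this demo look roughly like this; the Kubernetes version and node counts are illustrative, not the demo's exact values:

```sh
# One-time setup: install core Cluster API plus the GCP infrastructure provider
# into the management cluster. Omitting -b and -c assumes the kubeadm bootstrap
# and control plane providers. CAPG also expects GCP credentials in the
# environment (e.g. GCP_B64ENCODED_CREDENTIALS).
clusterctl init --infrastructure gcp

# Generate the workload cluster manifests, inspect them, then apply.
clusterctl generate cluster capi-nick \
  --kubernetes-version v1.26.1 \
  --control-plane-machine-count 1 \
  --worker-machine-count 1 > capi-nick.yaml
kubectl apply -f capi-nick.yaml
```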
We'll see in a moment what the restrictions on those images are and what considerations you need to make for them. Then the instance type; then we have the MachineDeployment for the worker nodes; the bootstrap part, the KubeadmConfigTemplate, saying how nodes are going to join the cluster. And that's basically all the information we need to deploy our cluster. Now the only thing left is to apply this manifest into our management cluster and see what happens.

So we have one, two, three, four, five, six, seven custom resources created. We can already take a look. The Cluster: we should see something here — it's provisioning already. The GCPCluster: something there too. GCPMachines: we should see something eventually, the control plane first; the machines for the worker nodes won't be deployed until the control plane is deployed. Machine templates: we see the two, one for the control plane, one for the worker nodes. We also have the KubeadmConfigs for the bootstrap — this is for the workers. Then we have the KubeadmControlPlane for the control plane nodes: desired, yes; not available yet, as you can see. The Machines are pending at the moment because, remember, these machines are the workers, and workers get provisioned once the control plane has been provisioned.

We can take a look at some of the logs while it's deploying. The main Cluster API controller is basically waiting for the infrastructure provider — you can see "infrastructure provider" here — to provision all the different components in GCP. And this is the infrastructure provider's controller: it's currently reconciling, which is expected. "Reconciling instance" — so it must already be there in GCP; we'll see in a moment. The bootstrap controller is still waiting for the machine, the worker, and the control plane controller should be doing something here — still reconciling, so it's not finished yet. Let's take a quick look at what's happening in GCP, and then we'll come back to it later, when we start the second demo. Here you can already see that the control plane has been provisioned, and in a moment we'll have the worker as well. But for the moment, let's go back to our slides and move forward.

So the question you have to ask yourself now is: is Cluster API enough to deploy a cluster? Not really, as such. Because the cluster, although not fully deployed yet, won't necessarily be working once it is deployed: first, we didn't install any CNI. It's not part of the Cluster API process to install the CNI; this has to be managed after the cluster has been provisioned. So out of the box, you are provisioning nodes that are not ready.

Also, how do you plan for the underlying operating system used by Cluster API? There's a tool called Image Builder, also part of the Cluster API documentation, that combines HashiCorp Packer with Ansible to generate Kubernetes-ready images. What does Kubernetes-ready mean? It means ready for the particular version of Kubernetes you want to install. If you use traditional instances — non-managed Kubernetes, like we did in GCP — the underlying operating system image needs to embed the kubelet at the right version, and all the Kubernetes binaries need to be present at the right version as well. So you need to build the image beforehand.
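In manifest terms, the version you declare and the image you reference have to agree. A sketch with an invented image name, just to show the pairing:

```yaml
apiVersion: controlplane.cluster.x-k8s.io/v1beta1
kind: KubeadmControlPlane
metadata:
  name: my-cluster-control-plane
spec:
  replicas: 1
  version: v1.26.1                      # the Kubernetes version you ask for...
  machineTemplate:
    infrastructureRef:
      apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
      kind: GCPMachineTemplate
      name: my-cluster-control-plane
  kubeadmConfigSpec: {}                 # kubeadm init/join settings go here
---
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: GCPMachineTemplate
metadata:
  name: my-cluster-control-plane
spec:
  template:
    spec:
      instanceType: n1-standard-2
      # ...must already be baked into this image (kubelet, kubeadm, etc.),
      # e.g. one produced by Image Builder; the name below is hypothetical
      image: projects/my-project/global/images/capi-ubuntu-2204-v1-26-1
```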
And the same thing when you are upgrading: of course, you can declaratively upgrade your cluster by changing the version numbers, but first you have to make sure the image you're using has the right versions as well. This is something you have to manage yourself.

Which leads to another question: how do you add additional software or infrastructure components, like the CSI, maybe an ingress, or any other software layer? This is where GitOps principles may kick in, but we'll see the different options in a moment. And how do you provide autoscaling for the number of nodes? It's not part of the base components we've just seen; it's something extra you have to plan for. And the last one: if you want to deliver all this automation using GitOps principles — delivering all the Cluster API manifests into a Git repository and letting GitOps kick in — what work remains to do that? This is what we are going to see now.

So our more detailed use case comprises all those components. First, to solve the additional software layer issue, there's another project called Cluster API Add-on Provider Helm, or CAAPH — I'm not sure how to pronounce that, so I'll just spell it out — which is aimed at creating Helm chart proxies within your management cluster so that you can install Helm charts in your workload clusters. It also uses CRDs: there are two of them, which we're going to see, and you apply the corresponding custom resources into your management cluster. From there, you can define in which workload clusters those Helm charts need to be installed. That's an efficient way to deploy software inside your workload clusters directly from your management cluster, as an additional step.

Then there's the cluster autoscaler project. It follows the same sort of principle: you deploy the autoscaler components, and you can actually deploy them in the workload cluster or in the management cluster. In our use case, we're going to deploy them into the management cluster. I say "them" because for every workload cluster you want to create, you need a dedicated instance of the cluster autoscaler: one pod needs to run per workload cluster.

Then, of course, we're going to be using Argo CD to deploy the workload clusters. Argo CD will be responsible not only for managing the creation of the workload cluster, but also for provisioning the add-ons using the Cluster API Add-on Provider Helm, and for feeding information to the autoscaler pod. Because in the architecture we're deploying today the autoscaler is installed in the management cluster, the autoscaler pod needs access to the workload cluster to be able to monitor its resources. How does the autoscaler work? It basically watches for pods that are in a pending state because the nodes don't have the resources to schedule them. If that happens, it provisions new nodes. And the reverse: if you remove those pods and the nodes are no longer required because you have enough resources in your cluster, then after about 10 minutes the extra worker nodes are destroyed. And to do all that, the cluster autoscaler pod needs access to the kubeconfig of your workload cluster.
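A sketch of what that autoscaler instance can look like in the management cluster, using the cluster-autoscaler clusterapi provider. Names, image tag, and mount paths are illustrative, RBAC is omitted, and management-cluster access is assumed to come from the in-cluster config:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dev-cluster-autoscaler
spec:
  replicas: 1
  selector:
    matchLabels:
      app: dev-cluster-autoscaler
  template:
    metadata:
      labels:
        app: dev-cluster-autoscaler
    spec:
      serviceAccountName: cluster-autoscaler   # needs RBAC on the CAPI resources
      containers:
        - name: cluster-autoscaler
          image: registry.k8s.io/autoscaling/cluster-autoscaler:v1.26.1  # illustrative tag
          command:
            - /cluster-autoscaler
          args:
            - --cloud-provider=clusterapi
            # kubeconfig of the workload cluster, mounted from a secret
            - --kubeconfig=/mnt/kubeconfig/kubeconfig
            # only scale the node groups belonging to this one cluster
            - --node-group-auto-discovery=clusterapi:clusterName=capi-dev
          volumeMounts:
            - name: kubeconfig
              mountPath: /mnt/kubeconfig
              readOnly: true
      volumes:
        - name: kubeconfig
          secret:
            secretName: capi-dev-kubeconfig   # produced later from the encrypted file
```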
So it needs to happen in a sequential fashion. First, you deploy your workload cluster — using Argo CD, in our case. Second, once it's been deployed, you can get the kubeconfig and inject it into your autoscaler pod. And if you want to manage this with Argo CD, that leads to another issue, because GitOps means you're going to check information into Git, and you don't want to push a clear-text kubeconfig file into Git. So we're going to be using SOPS — Secrets OPerationS, from Mozilla — to encrypt the kubeconfig file, together with Kustomize. Like Helm, Kustomize is a way to manage how you deploy applications in Kubernetes; but where Helm does packaging, Kustomize is more of a configuration management tool. Kustomize supports SOPS decryption via KSOPS, and that will allow us to decrypt the kubeconfig in Argo CD. But for this, we need to implement a couple of things.

First, we're going to be using the app of apps pattern in Argo CD. That just means we create an Argo CD application whose only job, its only function, is to host other application definitions. That particular pattern is useful for automatically adding children applications. You have a representation here on the left: the parent application is capi-clusters, so you can manage all your clusters as one pack of applications. For the development environment we're going to provision, we need to do two things: deploy the cluster and then, of course, get the kubeconfig file; and in the second part, inject the kubeconfig into the autoscaler and deploy the autoscaler pod into the management cluster. To combine these into a single application pack, you use the app of apps pattern. And as I said, the benefit is that the parent Argo CD application contains the definitions of the children applications, which means that as soon as you synchronize the parent — the capi-clusters application — the children applications, here on the right, are created automatically within your Argo CD environment. It makes management a lot easier.

Argo CD supports both Helm and Kustomize for deploying manifests into the destination Kubernetes cluster. We're going to be using Helm to template the Cluster API resources and install our workload cluster resources into our Cluster API management cluster. And we're going to be using Kustomize for the autoscaler part, mainly because of the SOPS capability, and because Kustomize can easily target a specific portion of resource manifests and patch it.

That means Argo CD will need to be patched to support KSOPS, and also modified to import our encryption key. By default, the Kustomize bundled with Argo CD doesn't have the right options enabled to use KSOPS; that's one thing we need to fix. And of course, the key used to decrypt the kubeconfig file needs to be present in Argo CD. For that, we're going to modify the Argo CD image and some of its configuration — the ConfigMap used to configure the Argo CD installation within our management cluster.
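Before the demo, to make the app of apps pattern from a moment ago concrete: a parent Application of roughly this shape does the job. The repo URL and paths are placeholders, not the demo's actual repository:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: capi-clusters
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/capi-gitops.git   # placeholder
    targetRevision: HEAD
    path: apps              # directory holding the children Application manifests
  destination:
    server: https://kubernetes.default.svc
    namespace: argocd       # the children Applications are Argo CD objects themselves
  # optional; the demo syncs manually, but automated sync with pruning
  # gives the full reconciliation loop described earlier
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
```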
So yeah, that's a lot to do, but let's do it now: CAPI with GitOps. But first, let's check that the cluster we built in our previous demo is up and running. What we can do there is kubectl get clusters: we can see it's been provisioned. If we take a look at some of the custom resources — let's check the GCPCluster — you can see that it is ready. And last, let's take a look at the GCPMachines here: running. Now let's double-check in the Google console that the cluster is effectively available there. Here you go — I need to refresh that page — and we should see the two machines appearing. So we have the two machines, one in us-central1-c and the other in us-central1-a. I think we can say that it worked perfectly. Now let's move on to the next demo.

The first thing I'm going to do is show you the structure of the repository we're going to be using for Argo CD. First, we have our parent application definition, created in Argo CD, which comprises the two children applications. The principle of the app of apps pattern, remember, is to declare the children applications in your GitOps repository. So we're going to have one application for the cluster autoscaler. What's really noticeable here is the path we're going to be using within the Git repository, which is the same one here: overlays/dev, in the Kustomize part of the repo. Those manifests are going to be deployed and reconciled into the cluster. That's for the cluster autoscaler. And the dev cluster application here — this is the configuration for Cluster API itself. Again, we have the repo URL here, and we're going to be using the path helm/capi-gcp, which is this one. It has a standard Helm structure, with the manifest templates and the values.yaml that we're going to see in a minute.

If we look at Argo CD right now — let's just log in — you can see that I have my capi-clusters application. This is the parent application. If I look at the configuration, you can see the repo URL, our main repo source, and the path is the apps section I just showed you. It's currently out of sync. And you can see that this application, the capi-clusters parent app, has two children: the dev cluster and the dev cluster autoscaler.

So now let's go back to the repository. Let's first take a look at the Helm section, with the Cluster API templates. We're going to start with cluster.yaml which is, remember, the top-level object. We use the same traditional configuration that can be generated by clusterctl, and we just add a bit more templatization around it. So the name comes from the cluster name in the Helm values file, and we also add the software add-ons we want to install on the cluster, specified in the add-ons section of the values file. Then for the GCPCluster, again, we templatize the cluster name, which is exactly the same as the top-level cluster name; then the project and, more specifically, the GCP region. Here, in the machine template for the control plane, we use a particular instance type for our control plane, and the GCP image we want to use is specified there as well. Same thing for the workers. Then for the KubeadmControlPlane, we specify the number of replicas for our control plane, as well as the Kubernetes version. And for the workers, basically, there's the name, and in the MachineDeployment we again have the cluster name and the different references, with the Kubernetes version and the number of worker replicas that we can also specify.
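As a sketch of what that templatization can look like — the values keys here are invented for illustration, not the demo chart's exact schema — the worker MachineDeployment template might read:

```yaml
# helm/capi-gcp/templates/machine-deployment.yaml (hypothetical layout)
apiVersion: cluster.x-k8s.io/v1beta1
kind: MachineDeployment
metadata:
  name: "{{ .Values.clusterName }}-md-0"
  annotations:
    # read by the cluster autoscaler's clusterapi provider to bound this node group
    cluster.x-k8s.io/cluster-api-autoscaler-node-group-min-size: "{{ .Values.workers.minSize }}"
    cluster.x-k8s.io/cluster-api-autoscaler-node-group-max-size: "{{ .Values.workers.maxSize }}"
spec:
  clusterName: "{{ .Values.clusterName }}"
  replicas: {{ .Values.workers.replicas }}
  template:
    spec:
      clusterName: "{{ .Values.clusterName }}"
      version: "{{ .Values.kubernetesVersion }}"
      bootstrap:
        configRef:
          apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
          kind: KubeadmConfigTemplate
          name: "{{ .Values.clusterName }}-md-0"
      infrastructureRef:
        apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
        kind: GCPMachineTemplate
        name: "{{ .Values.clusterName }}-md-0"
```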
And on top of that, as in the annotations sketched above, we also configure some annotations that are used by the cluster autoscaler: namely, we want to set the maximum size of the cluster and the minimum size of the cluster.

So let's take a look at values.yaml, which basically contains all the values we want to define. The cluster name, capi-dev; the GCP project and region; the instance type for the control plane, n1-standard-2, and the same for the worker instance type; the GCP image. We're going to start with the number of control plane replicas at 1, same for the workers. We set the max to 10, and we've set the minimum to 1, actually. And in terms of the add-ons, we're going to install nginx, and we're also going to install Calico. But the Calico add-on doesn't require the cluster to carry any matching label, because I didn't specify any label selector: if you don't specify a selector, the add-on is installed in every single cluster you deploy from the management cluster.

Now let's take a look at the Kustomize section. We have the base directory, with all the manifests required to deploy the cluster autoscaler: traditionally, the cluster role bindings for the management cluster and the workload cluster, the cluster roles, and the Deployment required for the cluster autoscaler to be installed. And this is where you have the command — this is the part that interests us. This is where we specify the kubeconfig file for the autoscaler to monitor the workload cluster, and this is also where the name of the cluster we want to monitor is defined. Those two things are the most important ones: the auto-discovery here, as well as the kubeconfig file. For our customization, we basically need to list all these resources in the base so Kustomize can use them in our overlay.

The overlay is where we overwrite the configuration. First, the Deployment: what we change here is basically the name we want to give to the cluster. If we compare it to the one above — the original, base one, which is capi-nick in the Deployment here — this will be capi-dev. If we look at the kustomization, we add a dev- prefix to all the resources we're going to create. We also have a name reference file: the names we modify may be referenced in other fields, and this is where we tell Kustomize to modify the names in all those references accordingly.

And then we have a secret generator. This is because we want to decrypt the kubeconfig file, and the kubeconfig file we want to decrypt lives at this path here. There's one left over from one of my prior tests; we're going to overwrite it, because we're going to deploy a new cluster. You can see that it's effectively encrypted there: I cannot make any use of that file to connect to the cluster, because everything is encrypted using SOPS. And the role of secret-generator.yaml is to specify how to decrypt the kubeconfig file using the KSOPS plugin.
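For reference, a KSOPS generator of that kind typically looks like this; the file and generator names are illustrative:

```yaml
# overlays/dev/secret-generator.yaml — referenced from kustomization.yaml
# under `generators:`. It shells out to the ksops binary to decrypt the
# SOPS-encrypted manifest at build time.
apiVersion: viaduct.ai/v1
kind: ksops
metadata:
  name: kubeconfig-secret-generator
  annotations:
    config.kubernetes.io/function: |
      exec:
        path: ksops
files:
  - ./config-enc.yaml   # the SOPS-encrypted kubeconfig manifest
```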
Now, let's take a look at the Argo CD pod configuration. Back in our terminal, you can see that I have an argocd namespace with many different pods running. Let's look first at the Argo CD server pod: if I look for the image, you can see it's running an image from my personal registry where I've added the KSOPS capability to Argo CD. That's one thing we need. Then, if we take a look at the repo server and grep just for PGP, you can see that I've added an init container whose role is to import the PGP key, which is mounted as a volume into the container. Once it has done its job, the payload container, let's say, takes over and performs its normal role. The idea here is really for Argo CD to have the private key, so that when it leverages Kustomize to render the manifests and reconcile them into the cluster, it can use the decrypted version of our kubeconfig file.

There's one last thing I want to show in terms of Argo CD configuration: we have a ConfigMap there — argocd-cm — where I added extra Kustomize build options so that KSOPS can be used (typically --enable-alpha-plugins, plus --enable-exec on newer Kustomize versions). Because KSOPS is a plugin, you need to add these extra options whenever Kustomize runs anything. So when Argo CD calls out to Kustomize, it will add these specific options that allow the usage of KSOPS.

Now, let's go back to Argo CD. What we're going to do is reconcile the applications: resync the parent application so that our children applications can be installed. If I look at the application status there, I've got my single parent application, and that's basically it. Let's sync it and see what happens. Now, if you look in capi-clusters, you can see that the dev-cluster and dev-cluster-autoscaler children applications have been synchronized, and of course both my dev cluster and dev cluster autoscaler are out of sync. In order: we first want to deploy the cluster; then we'll do a couple of manual actions to encrypt our kubeconfig file, commit it, and push it into our repository; and then we'll proceed with the cluster autoscaler installation.

So first, let's sync the dev cluster. You can see all the components: those are all the Cluster API manifests and the objects required for the workload cluster deployment, as we've seen before. Let's sync this and check what happens. It's going to deploy the cluster: you'll see it start by creating the machines, the GCPMachines, and reconcile them in GCP. We can watch what's happening from the GCP console, and if we refresh, we should see that we now have both the capi-dev control plane and the worker node being installed. We can also take a look at all the components deployed — you can see here a complete picture of all the objects: the MachineDeployment for the workers, the machines, the control plane machine here, all those components.

So now let's go back to our top application view. Our dev cluster autoscaler is, of course, still out of sync. But before proceeding with the dev cluster autoscaler install, we need to encrypt the kubeconfig into a ConfigMap. Let's do this now.
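The manual steps we're about to run look roughly like this; the ConfigMap name and file key are assumptions following the demo's file names:

```sh
# fetch the workload cluster's kubeconfig from the management cluster
clusterctl get kubeconfig capi-dev > capi-dev.kubeconfig

# wrap it in a ConfigMap manifest without touching the cluster
kubectl create configmap capi-dev-kubeconfig \
  --from-file=kubeconfig=capi-dev.kubeconfig \
  --dry-run=client -o yaml > gcp-dev-config.yaml

# encrypt it with SOPS; only the encrypted file gets committed
sops -e gcp-dev-config.yaml > config-enc.yaml
git add config-enc.yaml
git commit -m "Add encrypted kubeconfig for capi-dev"
git push
```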
So, in our Visual Studio Code environment, let's open a terminal. I first check, on the top right, that I'm connected to the right cluster, and a quick get clusters shows that our capi-dev cluster is provisioned. We get the kubeconfig from clusterctl — here we go — then create a ConfigMap out of that kubeconfig file, gcp-dev-config.yaml, and then we encrypt it with sops -e gcp-dev-config.yaml into the expected file, config-enc.yaml. Here you go. If we take a look at that particular file, we see that it's completely obfuscated: all the data in the ConfigMap is encrypted. The last part is to commit and push this into our upstream repository — "create new cluster", commit and push.

Let's go back to Argo CD now, and we can sync that application. We can already see that the kubeconfig ConfigMap has been decrypted with the right information. Now let's sync to install the cluster autoscaler. It's being deployed — the pod is being deployed — so we can check that a new pod has appeared in our management cluster. Then we can proceed with the autoscaling tests and check that all the add-on applications have been installed.

Let's connect to our terminal to check that the cluster autoscaler has effectively been installed. First, take a look at the pods: we can see the dev cluster autoscaler, which is actually managed by a Deployment. So let's take a look at the Deployment — dev autoscaler, there — and if we display the configuration, we can see the arguments of the command are effectively the ones we defined in the kustomization files. Everything looks good there. We can check the logs quickly: it's just waiting for now.

Now, on the second screen here, let's connect to our dev cluster. First, let's get the kubeconfig; we can just save it to capi-dev.kubeconfig and launch k9s. We can see that Calico has been installed, as expected from the add-on layer, as well as the nginx ingress. This is all good. And in terms of the nodes, we have, of course, the two ready nodes that we deployed in GCP.

Now let's take a quick look at where the add-ons are coming from. They come from the other CRDs we installed with the add-on software layer: the HelmChartProxies and the HelmReleaseProxies, which install Helm charts in the destination workload cluster. Here we've added two of them: the nginx one and the Calico one. The difference is that, as you can see here, the Calico one doesn't have any specific cluster label selector, so the current matching clusters are all of them — because we didn't specify any label selector in the corresponding manifest for that custom resource. For nginx, it's a bit different: you can see a matchLabels cluster selector, so only the Cluster API clusters that carry that particular label will get nginx installed. And you can see that the current matching cluster is only capi-dev.

So now, for our last test, let's try to scale the cluster. We're going to monitor first: let's check the logs of the dev cluster autoscaler. I've got a deployment here that is requesting quite heavy, large resources, with five replicas, so that should trigger the creation of new nodes. Let's apply that deployment, and again, let's check the cluster. You can see that I've now got multiple pods pending, and on the right here, you can see that the dev cluster autoscaler is asking to scale up that particular cluster.
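The scale-test deployment isn't shown in detail, but something like this would produce the same effect — five replicas whose requests can't fit on the existing nodes (the image and request sizes are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: scale-test
spec:
  replicas: 5
  selector:
    matchLabels:
      app: scale-test
  template:
    metadata:
      labels:
        app: scale-test
    spec:
      containers:
        - name: pause
          image: registry.k8s.io/pause:3.9
          resources:
            requests:
              cpu: "1"       # deliberately heavy so the replicas stay pending
              memory: 2Gi
```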
That means it's going to trigger the creation of new machines in GCP. If we go back to the CAPI controller manager, we should see something happening: you can see here, "waiting for the Kubernetes node on the machine to report ready state". And if we go back to the infrastructure provider logs, it's reconciling the new machines, the new dev instances. We can check here that Calico is currently being deployed on the new nodes — so we effectively have new nodes coming up. They're not ready yet, but Calico is being installed. And now we have two new nodes that just appeared, and the pods can effectively be started: all the pods from my deployment have started, and the cluster has been scaled.

That concludes our demos for today; I hope they've been useful. Now let's conclude the presentation. As we've seen, GitOps plus Cluster API is not exactly an easy task. If you want a comprehensive set of features, it's mostly a do-it-yourself process, and you have to dig into very different concepts and different pieces of software. This is where Palette, from Spectro Cloud, can really help, because it's based on the same principles as Cluster API — actually, we have committed a lot of code and contributed to a lot of the existing Cluster API providers. The additional features Palette adds are more enterprise-oriented: we decouple things and make Cluster API more usable by separating the actual cluster from reusable cluster profiles, and on top of that we add extra security and enterprise features — things like security scans, backups, SBOM tracking, role-based access control, et cetera. Whether it's managed or unmanaged Kubernetes, you can use Palette for any cloud, or at the edge. You can try it yourself: it's a freemium model, so you can have up to, I would say, two or three clusters for free, depending on the resources and the number of nodes. But you can really give it a try. It's very declarative, with the same reconciliation principles for deploying and managing all your clusters.

Now, the key takeaways for today. Cluster API is a proven tool to manage Kubernetes clusters. If you combine Cluster API with GitOps, you can really provide a lot of automation at large scale. But it's just the beginning: as we've seen today, Kubernetes needs many more software layers to be production-ready. We've seen the cluster autoscaler and the additional add-on software — which Palette, by the way, gives you basically out of the box. That gives you the foundation, and you have to build on Cluster API to add all your enterprise requirements, which may require a lot of sweat and bitten nails. So yeah, you can try Palette to see the difference and how easy it is compared to what we've done today: what we've achieved in one hour, you can probably do in less than five minutes.

A couple more things. As a call to action, please check out our Cluster API and declarative Kubernetes management report, which you can find here. And finally, come and visit us at our booth, S22, at KubeCon Europe 2023 in Amsterdam, which is happening in a couple of weeks. So that's it for today. Thank you for joining again, and I'll see you in the next one.