Hey, everyone, welcome to the webinar. Today we're going to talk about how to manage add-ons across clusters. Before we get started, though, let me quickly introduce the presenters. I'm Ritesh Patel, co-founder and VP of Products at Nirmata. And with me I have Damian Toledo, who's also a co-founder and leads the engineering team at Nirmata. Hi, everyone. Just to quickly introduce Nirmata, our company: we are the creators of Kyverno, an open source policy engine that is now part of CNCF. Nirmata itself is a platform that enables the management of Kubernetes clusters and workloads. We've been part of CNCF for quite some time; we're an active member and participate in various community events and SIGs. And we have customers who are using the Nirmata platform to operationalize Kubernetes for their developers. So that's a quick introduction to Nirmata and what we do. In terms of the agenda, today we'll start with what's typically running inside a Kubernetes cluster, just to level set and describe what we mean by add-ons and how we're seeing enterprises enable and manage add-ons across multiple clusters. Then we'll look at GitOps, which is quickly becoming the de facto standard for delivering and deploying applications on Kubernetes. We'll look at some of the benefits and also some of the limitations of GitOps, and then we'll talk about how some of those limitations can be overcome, using the example of automating add-on management to demonstrate that. Damian will then do a quick demo showing how this can be solved and how we've solved it; there are other solutions that try to address this as well. All right, so let's get started. Typically, when you look at a Kubernetes cluster, beyond the control plane itself, there are a few different types of applications or components that are part of your cluster. At a minimum, you need some required core services.
These tend to be your CNI plug-in, your CSI driver for storage, DNS (for example CoreDNS), and ingress, whether that's HAProxy, NGINX, or any other ingress controller out there. So those are the core services. The next set of components you need are what we call add-ons. Some examples of these are monitoring, logging, and security add-ons. These are typically shared across your applications, though in some cases they may need to be application-specific. Examples here are the Vault agent, the Datadog agent, Prisma Cloud, and Sumo Logic; there are several of these that are typically part of any Kubernetes cluster. And finally, the applications themselves, whether custom applications that development teams are building or third-party applications that need to be installed and run inside the cluster. So those are the different types of applications running on your cluster, and today we're going to focus on add-ons. What are these add-ons? What do they provide? How are they used? These add-ons are standardized services that need to be available in every cluster: security, monitoring, logging, backup, secrets management, any of these types of services. What we've seen is that these add-ons are typically owned by different teams in an organization. For example, the security team may own the security add-on, the monitoring or SRE team may own the monitoring add-on, the storage team may be responsible for backup, and so on. And even though these are owned by different teams, or perhaps by a single operations team that has the various SMEs responsible for them, they are used by various development teams, or in some cases just by central teams.
The importance of these add-ons is that they typically address operations and compliance requirements, which means that if these add-ons are not running, the cluster may be out of compliance. One thing to note is that these add-ons are themselves applications, so they require ongoing management to ensure availability: you need to monitor them, upgrade them, update them, and so on, just like any other application. So let's talk about how applications are typically deployed on Kubernetes. Obviously there are lots of ways to deploy applications, but one common way to deploy and automate the deployment of your applications is GitOps. GitOps allows you to deploy your application in a declarative manner using Git as the source of truth. The idea is that you commit your application manifests into Git, and Git becomes the source of truth. Then, any time you need to deploy your application into a single cluster or multiple clusters, a GitOps controller can apply the manifests to those clusters. So the benefit is, obviously, that Git becomes a single source of truth. It also enables developer self-service, because most developers by now are familiar with Git; they've used it before and understand commands like commit, push, et cetera. It also provides observability in terms of exactly what's running in your cluster, which you can validate by looking at your Git repo. And in case you end up losing your cluster, or losing an application running in the cluster, you can easily recover from your Git repo. So those are some of the benefits, and GitOps is becoming increasingly popular for delivering and deploying applications into Kubernetes clusters. But there are definitely some challenges and some scenarios that GitOps does not address. So GitOps has some limitations.
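Before getting into those limitations, here is a minimal sketch of what "Git as the source of truth" means in practice: the desired cluster state lives as plain manifests in a repository, and a GitOps controller applies them. The path and names below are illustrative, not from the demo.

```yaml
# clusters/dev/nginx-deployment.yaml -- committed to Git; a GitOps
# controller watches the repository and applies this manifest to the
# target cluster, so the repo always reflects the desired state.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.25  # changing this tag in Git triggers a new rollout
```

Rolling back then becomes a `git revert`, and the repository history doubles as an audit log of what was supposed to run where.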
One limitation is that Git is not really designed for automatic updates. If you want to build a continuous delivery pipeline where, for every build, you update a YAML manifest and commit it to your Git repo, that works if you have only a few builds or you don't build that often. But if you have several builds running in parallel, all trying to commit updates to the same set of YAMLs from different build jobs, it gets tricky; you can run into conflicts and things like that. The other part is managing an application that has to run across multiple clusters. The example we typically see is add-ons, which need to run in every cluster that's deployed. When you do that, if you need any variations in your manifests, you end up having to create either multiple repositories or multiple branches to vary your YAML manifests for those clusters. That becomes challenging because it adds the overhead of managing all of those Git repos and branches. Another aspect that GitOps doesn't solve is centralized secrets management. It's typically not a good idea to store your secrets in Git, so you need a way to ensure your secrets are stored outside, maybe in Vault. When you do that, every time you deploy something from Git, you need to make sure that all of the configuration necessary to locate your secret, or to attach your secret to your application, is specified in the manifest, and this can be different for different clusters. The other challenge sometimes ends up being lack of visibility: especially when there are several changes across your manifests, it becomes hard to figure out which change was deployed when, unless you have a GitOps controller tracking that for you. It's hard to figure out when a change was deployed, which cluster it was deployed to, and things of that sort.
And Git does not provide any validation of your YAML manifests; that really happens either when the manifests are deployed, or as an intermediate step before deploying them. So before we get more into the automation use case, let's quickly look at what a GitOps controller does. The GitOps controller is responsible for applying changes to the cluster. The controller could be running inside the cluster or outside of it; there's no real requirement on where it runs, but it's important that the controller can address multi-cluster deployments, especially if customizations are required per cluster. The GitOps controller also provides visibility and state, giving you a clear view of which change was deployed on which cluster and whether that change was deployed successfully. And the controller should enable advanced progressive delivery workflows, such as approvals, rollbacks, multi-cluster deployments, and so on. So now let's think about add-on management and how we can automate it, especially when deploying across several clusters. One of the limitations of Git we talked about was having to create multiple repos and branches if you want to deploy to multiple clusters. To automate add-on management, what you can do is use Kustomize to provide specific customization for each of your target clusters, and then use target-based customization to apply it so that each cluster gets exactly the right configuration, YAML that is specific to that cluster rather than a generic configuration. That way you can address several use cases; for example, if you have specific licenses or IDs or tokens that need to be used per cluster, that can be done using customization.
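As a rough sketch of that Kustomize approach (the directory names, the add-on, and the secret name here are all hypothetical), each target cluster gets a small overlay on top of a shared base:

```yaml
# base/kustomization.yaml -- shared, cluster-agnostic manifests
resources:
  - daemonset.yaml
  - secret.yaml
---
# overlays/devtest1/kustomization.yaml -- per-cluster variation: give
# this cluster's copy of the secret a cluster-specific name
resources:
  - ../../base
patches:
  - target:
      kind: Secret
      name: addon-secret
    patch: |-
      - op: replace
        path: /metadata/name
        value: addon-secret-devtest1
```

One nice property of keeping everything in plain Kustomize is that `kustomize build <overlay-dir>` reproduces the exact final YAML for a given target offline, with no intermediate controller involved.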
The other example here is centralized secrets management, where secrets such as tokens, certificates, and licenses are stored in a central store like Vault. You can use the Vault agent to dynamically inject these secrets, and with target-based customization you can configure specific labels and annotations to make sure the correct secrets are injected by the Vault agent. We'll look at some of this in the upcoming demo. The other advantage of using Kustomize for add-on management is that it becomes very easy to reproduce the final YAMLs, even offline, before applying them to the cluster, because you can just use the Kustomize CLI to recreate the final YAML files. That helps you recreate the state running inside the cluster without involving any intermediate third-party controller or step. So here's a quick example of what the customization files look like if you are using target-based customization. This example is specific to how this is implemented in Nirmata as a GitOps controller, and we'll look at the demo after this slide. On the left is a specific customization for a particular target cluster; in this case, the customization is to the secret, and it targets a specific cluster. On the right-hand side is a YAML which selects a specific customization for each cluster, each target if you will. The target in this case is a Kubernetes cluster identified by a name and a label selector. We'll look at this next in the demo. So Damian, if you want to go through the demo and show how this works. Sure, thank you, Ritesh. All right, so let's take a look first at the repository that Ritesh just presented. In our case we have multiple add-ons in the same repository; you could also have one repository per add-on, and that works as well.
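The repository layout being described might look roughly like this (the directory and file names here only approximate what is shown in the demo):

```
repo/
├── datadog-agent/
│   ├── base/
│   │   ├── daemonset.yaml      # generic manifests with a placeholder secret name
│   │   └── targets.yaml        # maps clusters (name / label selector) to directories
│   ├── DevTest1/
│   │   └── kustomization.yaml  # patches the secret name for DevTest1
│   ├── DevTest2/
│   │   └── kustomization.yaml
│   └── ...
└── vault-injector/
    └── ...
```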
So let's take the example of Datadog here. You can also use multiple branches if you want, but here I'm using this one branch. We have these clusters that we want to customize, right? DevTest1, DevTest2, et cetera, and we could also have production clusters. As Ritesh explained, you can have a customization file that's specific to your cluster, and here we customize the name of the secret. Now, in the base directory, you have your base YAML. This one is not customized; it has a generic name for the secret. And we also have the file defining the targets, the one that Ritesh just showed. So that's how you can structure your repository for per-target customization: you have a base YAML, and then in each directory you have a specific customization file that will change the name of the secret. The next step, in Nirmata, is to create an application for each of these add-ons. For that, you add your application here. You say that the upstream is going to be a Git repository, and you can select or enter the name of your repository. Then you select a branch in this repository, and here you can indicate that you want to apply per-target customization; you'll be able to select the file defining the targets that we've just seen. In addition, you can also define the namespace you want to use to deploy this add-on. In our case, we want to take all the YAML files from this datadog-agent directory. So that's how you declare this application, this add-on, to Nirmata. We already did that for Datadog and the Vault injector. The next step after that is to create a cluster type. The cluster type is going to be used when you discover your cluster, when you onboard the cluster into Nirmata. Here I have an example: in this cluster type, we have added our two add-ons, the Vault injector and Datadog.
We also included Kyverno, which is a policy engine. And you can define the order in which you want these add-ons to be deployed in your cluster. So when you onboard the cluster, you just say that you want to use this cluster type. We have already registered a few clusters: DevTest1, DevTest2, and DevTest3 here. Now what we're going to do is make a change to our repository and verify that this change is rolled out across my clusters. The change is going to be the same for each cluster; however, it should still customize the secret name for each of them. So let's do that. We're going to go here and change the base YAML. Let's say we are going to change an annotation. So I'm editing; here I have an annotation we are using just for testing, to create some changes. Currently it has the value eight; we're going to put value nine and commit this change: "annotation value nine". Now, Nirmata is constantly watching this repository, so it will detect that there is a new commit. If we go back to our catalog, we will see that this commit gets detected; we can also force it. This may take up to a minute. Once the commit is detected, a rollout will start across all of our clusters. So here we can take a look; we'll see that the rollout has started, okay, for each of the clusters selected by the target file. I can go here and see what's happening, actually, in my running application. We see that since it's a DaemonSet, we don't have any pods here at this point; the pod is being recreated, we're pulling the image, and you'll see that it's going to go to the running state. What's interesting now is that we should be able to verify that the change has been rolled out to my cluster. Here we are looking at DevTest1, so let me export all the YAML for this running application.
If we look at the DaemonSet here, we will see that we have the change, the annotation value nine, okay. And also, the name of the secret is specific to this cluster, right? I can go to my command line as well and verify; I can see that my Datadog agent was restarted about a minute ago. Now if I look at the YAML, I should be able to verify that I have the value nine, and the secret name is DevTest1. Now if I go to another cluster, say DevTest2, and look at the Datadog application, this time I will see the same change, the value nine for the annotation, if I look at the DaemonSet here. So the same change has been applied to all my clusters, but the key thing is that the secret name is different for this one, right? We want the secret name to be different for each cluster. So that's how you can solve what Ritesh explained: you can have per-target customization across multiple clusters, from a central point. Ritesh, back to you. Thanks, Damian. So just to quickly summarize: GitOps has become a preferred approach for continuous delivery of Kubernetes applications, which includes custom applications as well as add-ons. But there are some limitations you can face, especially when deploying an application that requires specific customization per cluster, when you're deploying that application across multiple clusters, and when you want to use progressive delivery. Using Kustomize lets you address that problem, and Kustomize along with central secrets management can be used to fully automate add-on management. Customers we work with have automated everything, right from provisioning a cluster all the way to provisioning all the different add-ons for their clusters, including Vault, leveraging the Vault agent injector for secrets, completely without any intervention and without writing any additional automation.
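For reference, the Vault Agent injection mentioned above is driven by pod annotations, and a per-target Kustomize patch can set them differently for each cluster. The role name and secret path below are made-up examples, not values from the demo:

```yaml
# Illustrative pod-template fragment: the Vault Agent injector's mutating
# webhook sees these annotations and adds an agent sidecar that fetches
# the named secret from Vault and writes it into the pod's filesystem.
spec:
  template:
    metadata:
      annotations:
        vault.hashicorp.com/agent-inject: "true"
        vault.hashicorp.com/role: "devtest1-datadog"  # per-cluster Vault role
        vault.hashicorp.com/agent-inject-secret-api-key: "secret/data/devtest1/datadog"
```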
That's really what GitOps controllers like Nirmata, and other controllers like Flux and so on, help with, and this is how they enable clusters as a service for enterprises. So thanks once again for attending or watching this webinar, and please feel free to reach out to us; contact information was shared earlier in the introduction slide. Reach out if you have any questions. Thank you.