Welcome everyone to this talk on managing GitOps deployments in multi-cluster production environments. My name is Roberto Carattala, and I'm a Cloud Services Black Belt working for Red Hat. Let's imagine today that we are part of a DevOps team with one duty: deploy our applications to multiple Kubernetes clusters across multiple regions of the world. We are using different hyperscalers and also on-premise infrastructure, so we want to deploy our applications across AWS, Google Cloud, Azure, and on-premise, the full hybrid multi-cloud scenario. To deploy our application, we first need to log into the cluster. Then we need to log into the cluster again to check the health of the application. We also want to use Helm charts, and then the business unit brings more applications to the table. We need to manage the cluster config, everything keeps growing in complexity, and we will end up like this. To avoid ending up like this, as a DevOps team we want to embrace GitOps. Very quickly, the GitOps principles: the system is described declaratively; the desired state is versioned in Git; because of that, approved changes can be applied automatically; and there is a controller that detects drift and acts on it. One tool that implements this and is super useful is Argo CD, which we already know. What makes it so good is that it can detect drift, it gives you granular control over sync order, it supports rollback and roll forward, and, which is really nice, it supports manifest templating with Helm, Kustomize, and others. So the first pattern we want to introduce is that we can define our Argo CD applications as just another Kubernetes manifest.
So we can define applications, projects, and settings: our infrastructure as code. An Application has a source, a destination, and a sync policy. The source defines where the manifests come from. The destination is which cluster to deploy to, which could be the same cluster where Argo CD is deployed, or a remote cluster, as we will see. And the sync policy defines what happens when drift is detected between the Git repository and the live cluster. To do this in a GitOps way, we also want to introduce Kustomize. As everyone knows, Kustomize is a way to add, remove, or update configuration without the need to fork. It is a template-less templating system, and we can manage a lot of different environments with just one hierarchy, which is exactly what we want in this scenario. By the way, everything is in this repository: you can scan this QR code, which will appear again, and you also have this URL. It lands on this specific repository, which is fully open source, so you can run it and deploy it yourself. So let's go to the first demo. We have our Argo CD here, empty, and we want to deploy our application onto this exact same Argo CD using the Application we have here. Check this, because it's important: we are deploying our application onto the exact same server. So, with this awesome conference Wi-Fi, let's see if the live demo deploys properly. And yeah, as you can see, we deployed the different Kubernetes manifests defined here very quickly, using an Argo CD Application, so nothing new under the sun. But what happens if I want to deploy my applications into remote clusters? I want to scale, and I want to deploy an application composed of a front end and a back end.
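A minimal Application manifest along these lines shows the three pieces just described, source, destination, and sync policy. This is a sketch: the repository URL, paths, and names are placeholders, not the exact values from the demo repository.

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: welcome-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/gitops-demo.git  # placeholder repo
    targetRevision: main
    path: apps/welcome-app                               # placeholder path
  destination:
    server: https://kubernetes.default.svc  # the same cluster Argo CD runs on
    namespace: welcome-app
  syncPolicy:
    automated:
      prune: true      # delete resources that were removed from Git
      selfHeal: true   # revert manual drift on the live cluster
```

With `automated` sync enabled, the controller reconciles any drift it detects without a manual sync.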
And I also want to manage the order: first deploy my back end or my database, and only then deploy my front end, so I stay in control. The first thing to know is that remote clusters can be added as managed clusters in Argo CD. The cluster credentials are stored as just another Kubernetes object, a Secret. The only requirement is that each secret carries a specific label declaring it a secret of type cluster. And then it will appear like this. So I'm using different clusters, including, yes, Red Hat OpenShift on AWS, and on-premise clusters. You can connect whatever you want; the only requirement is that it is a Kubernetes cluster. It will appear as a secret, and then Argo CD can talk to it, as we will see. Once we have defined the clusters we manage, we need to tie them into our Argo CD Applications. As we mentioned before, you can define whichever destination you are targeting. The destination references the target cluster and namespace, and it is tied to the cluster by name or by server URL; you can use one of the two, not both. So if you want to deploy, you specify either the API server URL or the name of the cluster, and under the hood Argo CD will resolve it in order to deploy there. We also want to introduce sync waves and hooks. Sync waves are a way to order things, because we don't want everything to deploy as one big bang: we need to deploy our back end first and then our front end, as we will see. And we will use hooks, which can run before, during, and after the sync operation. I'm using these hooks specifically to add a call that performs a health check, verifying that everything is working properly. So this, more or less, is the workflow I want to follow: sync my application from one repository using Argo CD, pushing all of these steps to our remote cluster.
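A declarative cluster secret would look roughly like the sketch below; the names and the API endpoint are placeholders. The `argocd.argoproj.io/secret-type: cluster` label is what makes Argo CD pick the secret up as a managed cluster.

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: cluster-two              # placeholder name
  namespace: argocd
  labels:
    argocd.argoproj.io/secret-type: cluster   # required label
stringData:
  name: cluster-two
  server: https://api.cluster-two.example.com:6443   # placeholder API endpoint
  config: |
    {
      "bearerToken": "<service-account-token>",
      "tlsClientConfig": {
        "insecure": false,
        "caData": "<base64-encoded-ca-cert>"
      }
    }
```

Applications can then reference this cluster in their `destination` either by `name: cluster-two` or by `server: https://api.cluster-two.example.com:6443`, but not both.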
And I want to do it with just one command. So this will be the second demo; hopefully it will succeed. We can just copy this. And as you can see, in the Application I'm defining the destination cluster and its namespace. So if I go back here, you can see that the application is being deployed, but in a specific order. That order is maintained using the sync-wave annotation. But the most important part here is that we are using the destination: if you check here, this is a different destination from the one we saw before. It is not deploying the application alongside Argo CD; it is using the settings we have here to deploy onto my remote cluster. And with that, how about deploying multiple related applications at once? We saw the first application, but if you need to scale this to thirty or forty applications across different clusters, it becomes very, very difficult. It's not easy at all. For that reason, we want to introduce ApplicationSets. An ApplicationSet is a CRD that you can use for multiple things: to use a single manifest to target multiple Kubernetes clusters, or to deploy multiple applications from one or multiple Git repositories, as we will see in this demo, with improved support for monorepos and many repos. I'm using a monorepo because it's very handy for maintaining the different environments and the different configurations you have. And within multi-tenant clusters, it improves the ability to handle cluster tenancy when deploying your applications. For that reason, we use the different generators. These are, more or less, the most important generators: the list generator, the cluster generator, and the Git generator, plus the matrix generator, which combines the others.
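The ordering described above is driven by annotations on the manifests themselves. A sketch of the idea, with illustrative resource names, could look like this: lower sync waves are applied first, and the hook runs after the sync completes.

```yaml
# The back end ships in an earlier wave, so it is synced before the front end.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend
  annotations:
    argocd.argoproj.io/sync-wave: "0"
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
  annotations:
    argocd.argoproj.io/sync-wave: "1"
---
# A PostSync hook Job that performs a health check once the sync finishes.
apiVersion: batch/v1
kind: Job
metadata:
  name: healthcheck
  annotations:
    argocd.argoproj.io/hook: PostSync
    argocd.argoproj.io/hook-delete-policy: HookSucceeded  # clean up on success
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: check
          image: curlimages/curl
          args: ["-f", "http://frontend.welcome-app.svc/healthz"]  # placeholder URL
```

Hooks can be `PreSync`, `Sync`, or `PostSync`, which is what "before, during, and after" refers to.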
So: the list generator, which we already saw, deploys the different applications in the ApplicationSet based on a fixed list of cluster names. The cluster generator, as we will see, automatically generates cluster parameters. And finally, the Git generator is based on the different folders in a repository, so you can store everything in one specific hierarchy and then deploy the different development, staging, or production environments, as we will see. So what's our demo, what's our pattern? We want to deploy the development, staging, and production environments with one command and one file, one Kubernetes manifest. And we will use this ApplicationSet. If we go to demo three, we see that we have staging, production, and dev environments, three different environments. If we go to the production environment, we see this generator. This generator points to the Git repository, and in this Git repository we have, for example, the different environments we already saw, and also the applications. So if we look here, we see the different environments we have. And if we deploy this, it will deploy the different ApplicationSets and, boom, a lot of applications across different environments. This is how we scale: for example, to group the different projects or the different applications, and very easily we can scale. We can also define this: if we add another folder here, under the dev or prod manifests, it will be acknowledged automatically and our application will be deployed without us doing anything, because the Git generator scales and supports this hierarchy. So now that it is clear how we can use the Git generator, how about GitOps multi-cluster deployment strategies? Because next we want to automatically generate the cluster parameters.
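An ApplicationSet with a Git directory generator along these lines turns each environment folder into its own Application. The repository URL and folder layout are placeholders standing in for the demo repo's hierarchy.

```yaml
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: environments
  namespace: argocd
spec:
  generators:
    - git:
        repoURL: https://github.com/example/gitops-demo.git  # placeholder repo
        revision: main
        directories:
          - path: environments/*   # one Application per matching folder
  template:
    metadata:
      name: '{{path.basename}}'    # e.g. dev, staging, prod
    spec:
      project: default
      source:
        repoURL: https://github.com/example/gitops-demo.git
        targetRevision: main
        path: '{{path}}'           # the folder the generator discovered
      destination:
        server: https://kubernetes.default.svc
        namespace: '{{path.basename}}'
      syncPolicy:
        automated: {}
```

Because the generator re-scans the repository, adding a new folder under `environments/` produces a new Application with no further action, which is the "add a folder and it just appears" behavior shown in the demo.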
We don't want to specify each cluster; we want to deploy our application to every single Kubernetes cluster we have in Argo CD. With this, we can check the different clusters we have; in our case, four clusters. The thing is that we will use the cluster generator, which you see here, to deploy our application and our specific Kubernetes manifests to every single cluster we have. And for that reason, our welcome app will land on all the different Kubernetes clusters we manage. To do that, we go to the fourth demo. Let me go there. Here you also have, more or less, the rationale and how you can add multiple clusters, whether they are in the cloud or on-premise. And with this, we will see these applications; let me clear the different filters, because I'm also using Argo CD projects, and we will deploy the fourth demo. As you can see, let me filter for one specific demo, we are deploying the exact same application using this generator. If we go specifically to one generator we have here, the multicluster application, we are using this generator to scale across and target every single cluster that Argo CD manages, and we are deploying the exact same application to the different clusters, as you can see. So we are not defining the managed clusters manually; we are just using this generator to scale. But sometimes we want more control. For that reason, how about deploying my multiple environments into multiple clusters? We want to tie the different environments we already saw to different clusters. We want to deploy the development environments to the development clusters, and we want to define that this environment must be tied to this specific cluster.
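A cluster generator with an empty selector matches every cluster registered in Argo CD, so one manifest fans the same application out to all of them. This sketch again uses a placeholder repository and names.

```yaml
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: multicluster-app
  namespace: argocd
spec:
  generators:
    - clusters: {}   # empty selector: every cluster managed by Argo CD
  template:
    metadata:
      name: 'welcome-app-{{name}}'   # one Application per cluster
    spec:
      project: default
      source:
        repoURL: https://github.com/example/gitops-demo.git  # placeholder repo
        targetRevision: main
        path: apps/welcome-app
      destination:
        server: '{{server}}'   # filled in with each cluster's API endpoint
        namespace: welcome-app
```

Registering a fifth cluster secret later would automatically produce a fifth Application, with no change to this manifest.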
And if anything happens in any of them, for example, it needs to be checked, detecting drift as well. So, to avoid the classic "it works fine in dev, but now there is an ops problem" situation that we all know, we also want to implement this multi-cluster tying. And how can we do that? A label selector can be used to narrow down to specific target clusters. So we are saying that this ApplicationSet, the dev multi-cluster app, will deploy our application only to the clusters labeled with dev equals true. You can put whatever you want; it's just another label. But this also requires matching the Argo CD cluster secrets with the ApplicationSet selector, because that is how you tie one to the other. It is like a Kubernetes Service being tied to a Deployment with labels; this is more or less the same. We are defining specifically the selector for where we deploy. So in the last demo, we define the different GitOps multi-cluster and multi-environment strategies. We have this specific environment in the fifth demo, and you can see it here; let's go to the README. By the way, you have these commands to automatically patch the different managed cluster secrets in Argo CD. So I'm defining, for example, the different clusters I have here, to say this cluster is for development, this cluster is for staging, this cluster is for production. And here, for example, I'm using Red Hat OpenShift on AWS, and I tied it specifically to my production cluster. So if I go to cluster two, this is just a secret, and if we look at the secret we will see that it has a label prod equals true. And for that reason, if I deploy, in just a couple of seconds we will see; let's go to the applications. As you can see, we have our applications, and, yeah, perfect: we have our three applications, one awesome app for each of the three environments. Let's go to this specific environment.
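Putting the two halves together, a labeled cluster secret and a cluster generator with a matching selector, could be sketched like this; cluster names, labels, and the repository are placeholders.

```yaml
# Cluster secret labeled for the environment it serves.
apiVersion: v1
kind: Secret
metadata:
  name: cluster-two
  namespace: argocd
  labels:
    argocd.argoproj.io/secret-type: cluster
    prod: "true"                # environment label the selector matches on
stringData:
  name: cluster-two
  server: https://api.cluster-two.example.com:6443   # placeholder endpoint
---
# ApplicationSet that only targets clusters carrying prod=true.
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: prod-multicluster-app
  namespace: argocd
spec:
  generators:
    - clusters:
        selector:
          matchLabels:
            prod: "true"        # narrows the generator to production clusters
  template:
    metadata:
      name: 'awesome-app-{{name}}'
    spec:
      project: default
      source:
        repoURL: https://github.com/example/gitops-demo.git  # placeholder repo
        targetRevision: main
        path: environments/prod
      destination:
        server: '{{server}}'
        namespace: awesome-app
```

The selector-to-label match is exactly the Service-to-Deployment analogy from the talk: the label on the cluster secret is what the generator's `matchLabels` selects on.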
And in the details we will see that this is tied to cluster two, because if you also check the path, you will see it points to one specific production environment. This is controlled by the specific Argo CD ApplicationSet that carries the cluster selector for prod: it deploys to the specific cluster that has the matching prod label, and it ties in the source from the prod folder of the app demos. So I can tie the prod folder to the specific cluster generator that matches clusters marked with this label. With this, the last pattern is how to promote between GitOps environments. I've had this conversation so many times, and usually some teams don't have a very good relationship with Gitflow at all. For that reason, we suggest, and Costas from Codefresh published a very good repository and blog post on this, promoting between GitOps environments not with branches but with a folder hierarchy, this type of hierarchy. You can use Kustomize, or whatever you prefer; you can also use Helm charts in order to scale. And when you want to promote from one environment to another, you simply copy. It's very, very easy. So we have this version and we want to update this image; to update it, we can just promote by copying. For example, to promote one application from dev to staging in the US, we just copy the version YAML, or the difference between one and the other, and push. This lets us avoid the branching system and avoid things like cherry-picks. It's a very silly, very simple example, but imagine it with hundreds of different applications; in production it gets a little bit tricky. And with this my talk ends. If there are any questions, please go ahead. And if not, yeah. [Audience] Does it take down the objects the same way? Correct, yeah.
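The promotion model can be sketched with a per-environment version file in a Kustomize hierarchy. The layout, image name, and tags below are hypothetical, illustrating the idea rather than the exact files in the referenced repository.

```yaml
# envs/dev/kustomization.yaml (hypothetical layout): the environment-specific
# piece is just the image tag overlaid on a shared base.
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base
images:
  - name: example/awesome-app   # placeholder image name
    newTag: "2.0"               # dev is already running 2.0
---
# envs/staging-us/kustomization.yaml: still on the previous version.
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base
images:
  - name: example/awesome-app
    newTag: "1.9"               # promoting dev to staging = copy "2.0" here
```

Promotion is then a plain file copy of the changed stanza from the dev folder into the staging folder, followed by a commit and push; no branch merges or cherry-picks are involved.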
It respects the order for deletion too: what is defined with a sync wave of minus one is handled accordingly, and deletion also follows the wave order. And you can also use the hooks to do several things, like warming up the database once everything is okay, or, if it fails, paging me through an alerting tool. [Audience] Makes sense, thanks. Brilliant. Hi, Roberto, good talk. Can you talk in a little more detail about how Argo CD connects and authenticates against the different clusters? I know there's the Argo CLI that you can use to configure an extra remote cluster, and it does something with service accounts, but in cloud Kubernetes that is not the pattern. Yeah, that's a very good question as well. I also faced this challenge when I tried to scale with remote clusters, for example. Part of it is that I usually do the kubectl login, but for clusters in the cloud it's very tricky, because some clusters, like AKS, need a kubeconfig that is maintained over time, and there is no silver bullet. I usually keep everything in the exact same kubeconfig and switch contexts, so that I have at least one single file, with every Kubernetes API endpoint in that one file, and I can change between the different contexts. But it gets very tricky if, for example, you have to authenticate against an IAM or OpenID provider in order to grab this configuration, because it has to happen every time, and Argo is not aware of that. Argo just holds the secrets: inside the settings, for example, you have the different secrets, and there is a way to invalidate the cache in order to regenerate it, but it needs those secrets, because in the end it's just an API endpoint and a token. And if that token expires, and the time-to-live of the token is very short, you sometimes need to re-register the cluster, and the objects that are already deployed get a little bit tricky. Yes, sir.
We can repeat the question if you want. [Audience] Hi, great talk. We use the same thing; the ApplicationSet has been very powerful for us. We find it amazing that we can say "we want this agent on every cluster", one Helm chart, and magically thirty of our clusters all get it at once. I'm curious about this situation we have: in an environment, let's say dev, the clusters are all in the same data center, and you have one cluster, cluster one, which we call the dev cluster. But then a parallel cluster comes up for various reasons, and we have the issue of how to name it in your generator, because then it's stuck with that random name forever, and the values will be stuck with it forever. So what's your suggestion in that scenario? That's a very good question, and honestly I have no straight answer, because I'm struggling with the exact same thing. The thing is that you need to maintain the rationale between the different clusters that you are using, and tagging the clusters is a good approach: you only see one label here, but you can assign different labels that have a meaning for you. For example, that this cluster is just ephemeral, has a defined end of life, and will be wiped out. But you also need to go to every single Argo instance and define it. The good thing is that all of the settings you are seeing here can be automated and expressed as code, so you can define your own clusters as code first. You obviously need to maintain the API endpoints, but at least you have the possibility to tag them and keep more or less the order. But yeah, it's a common issue that happens to me as well. Thank you. My pleasure. All right then. So thank you very much for your patience, and yeah, the demo went well. Yay, with this awesome Wi-Fi.
Thank you very much for your patience and your assistance.