So, hello everyone. Today's session is on GitOps Simplified, and I have already skipped through the majority of the introduction. Most of the things are already mentioned. You can check out my YouTube channel and Discord. I'm working at Civo; Civo is a company that provides managed Kubernetes based on K3s. And there is a community, Kubesimplify, that I started and that you should definitely join. So, let's dive into the session right away. What is GitOps? We'll go through the complete flow so that we have the whole story: why we do it, what GitOps is, the principles, and all of that. GitOps is not a very new concept, to be honest. It has been practiced for quite some time now, and there have been organizations coming together to define set principles and best practices and to make sure it is done the right way. When you talk about GitOps, everything is in Git. Your infrastructure, your applications, everything is defined in Git, and it gets deployed automatically: your infrastructure gets created automatically, your applications get deployed automatically. Since it is in Git, it is version controlled, so every deployment is versioned, which also gives you an easy way to roll back. And it maintains consistency between different teams: devs, ops, and the other teams collaborating on the infrastructure parts and on the application parts. Constant feedback is also very important: the metrics the application generates, whether the application is healthy, if any replica goes down, that constant feedback matters. Faster time to production, yes.
Since you are managing everything with Git, once the infrastructure is set up, the GitOps tools are installed, and all the repositories are in place to be watched by the different tools, then the only thing you need to do is push to Git, and it will be deployed to production through the pipeline you have: dev, stage, testing all the scenarios, and then going to production. It also brings standardization across your organization, because everyone knows the process of putting a feature out in production, in a particular branch, or on a particular cluster. Everything is managed via Git, in a GitOps way, and that brings a standard. So, overall, GitOps in very simple terms is: everything is in Git, you have a Kubernetes cluster, and you have agents that pull information from Git and deploy clusters and applications to the cluster, maintaining the different versions of the applications, plus more advanced things that we'll talk about later. Now, I told you that organizations came together to define the principles. The four principles of GitOps are, first, it's declarative: all your infrastructure and applications are defined as code, like you have already heard about Terraform and Crossplane in the previous talks on infrastructure as code. The true GitOps way of doing things is when you have your infrastructure also in a Git repository: you commit to Git, and obviously you have a control plane cluster, so the tooling running in the control plane cluster watches that Git repository, creates the clusters on the fly, and deploys the apps to those clusters. Versioned and immutable, that's the second principle: Git is the single source of truth, and everything is versioned.
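As an aside, a minimal sketch of what "declarative and versioned" means in practice: the desired state lives in Git as a plain manifest rather than being applied by hand. All names and the image reference here are placeholders.

```yaml
# Desired state committed to Git (illustrative names): five replicas of an
# app, with the image pinned to an immutable tag so every commit in Git
# corresponds to an exact, reproducible deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app
spec:
  replicas: 5
  selector:
    matchLabels:
      app: demo-app
  template:
    metadata:
      labels:
        app: demo-app
    spec:
      containers:
        - name: demo-app
          image: ghcr.io/example/demo-app:v1.0.0  # pinned tag, not :latest
```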
So, whenever you commit, you get a commit ID, and you can use that SHA. You can use some of the CI tools, like GitHub Actions, to push changes with respect to that particular SHA, and it will be deployed automatically because the GitOps tooling is already connected. Third, pulled automatically: continuous pulling of the desired state. I think in the previous talk you heard about drift and all of that; this is where it fits in. You have to continuously pull the desired state, because if you have said your application needs five replicas to run and something goes wrong, then the GitOps tooling you have deployed should continuously pull, or just observe and monitor, the current state, which is the live state, compare it with the desired state, and apply if there are any differences. Fourth, continuously reconciled. Observing the system and applying the desired state is very important when you talk about GitOps principles, so it is continuously reconciled; you can definitely define the reconciliation intervals you want with the different tooling as well. Those are the GitOps principles that are in place. Now, why GitOps? I have CI, I have Git, I can build the image, I can push the image; the CI part has existed for a long time now. But what about the deploy part? Deploying to Kubernetes is where the continuous delivery portion comes in, and in regular pipelines, what do we use? We use kubectl apply. If we write some bash scripts or some cron jobs to do these jobs, what will we be doing? We'll be doing kubectl apply. We'll be providing the Kubernetes access and the cloud credentials one way or another. But still, the monitoring will be missing, right? We again need some sort of mechanism to monitor those applications and make sure that whatever we have in the Git repository matches whatever is deployed in the cluster.
And if not, reconcile it. That is where GitOps is needed. But it is not needed in the way where you are manually doing all these things or somehow working out how to do them. There are tools that help you do GitOps the right way, especially when it comes to Kubernetes. GitOps is not only for Kubernetes, it works for traditional workloads as well, but the majority of it is actually used with Kubernetes. So, we'll be discussing two tools today: one is Flux and one is Argo CD. First, Flux. Flux is a CNCF incubating project, and the v2 version came in early 2022. Flux keeps Kubernetes clusters in sync with Git and applies updates automatically. All the points that I mentioned, it does: syncing from Git, updating image references in the YAML files, sending notifications, managing Helm releases, managing kustomizations. So, you can have YAML files, you can have kustomizations, you can have Helm charts, and everything can be in Git. You define your cluster, the cluster state, and the applications in Git. Flux will be installed in your Kubernetes cluster: you will be bootstrapping the repository and installing Flux in your cluster, and that will monitor the Git repository, whether it's on GitHub or elsewhere, and any change that happens there, it will deploy to Kubernetes, onto some worker node where your actual workload runs, in a particular namespace. It supports multi-cluster deployment and all of that as well. Let's see how it works at the architecture level. You have Git, say GitLab, and you have a source controller. When you deploy Flux, there is a component called the source controller, which is deployed onto the cluster.
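Concretely, the objects that this machinery acts on are Flux custom resources. A minimal sketch of a watched repository and its reconciliation, assuming placeholder names, URL, path, and intervals (the API versions may differ by Flux release):

```yaml
# A GitRepository tells the source controller what to fetch and how often
apiVersion: source.toolkit.fluxcd.io/v1
kind: GitRepository
metadata:
  name: my-app
  namespace: flux-system
spec:
  interval: 1m
  url: https://github.com/example/my-app   # placeholder repository
  ref:
    branch: main
---
# A Kustomization tells the kustomize controller which path to reconcile
# into the cluster from that source, pruning resources removed from Git
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: my-app
  namespace: flux-system
spec:
  interval: 5m
  sourceRef:
    kind: GitRepository
    name: my-app
  path: ./deploy
  prune: true
```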
Now, the source controller takes whatever is in the repositories and talks to the other controllers based on what is in the repository, like the kustomize controller and the helm controller. You can use plain YAML files instead of kustomizations, but Flux automatically converts those YAML files into kustomizations on the fly and then deploys them, so it relies heavily on the kustomize controller; and then you have the helm controller for the Helm repositories. The source controller fetches the Git manifests for the cluster and bundles them as artifacts by creating custom resources. Flux also has something called the image reflector controller and the image automation controller, which means it can monitor your image repository: if there is a change in the image version, the tag, it can automatically pick that up and push it to the Git repo, and since Flux is already monitoring that Git repo, it will apply the change to your Kubernetes cluster. That is how the complete cycle can be thought of when you are thinking about Flux and how it does GitOps. Now, I mentioned bootstrapping, so this is how Flux bootstraps. You can use the Flux CLI or the Terraform provider. The command is simple: you first download Flux and run flux bootstrap github, giving the owner, the repository, the branch, and the path. Then, when you do kubectl get pods and so on, you'll see the helm controller, the kustomize controller, the source controller, and the notification controller; they all run as deployments, and you have services for them as well. Flux will also create its own directory structure inside the Git repository, where it keeps its components so that the cluster can talk back to Flux. That's how Flux works and does things in a GitOps way. It's pretty interesting. Now, let's talk about another interesting tool, which is Argo. Argo has many projects.
Argo CD is one of them; you also have Argo Workflows, Argo Rollouts, and Argo Events, so there are four projects. Argo is the main umbrella, and Argo CD is part of that; it is not the only one. But we are talking about Argo CD specifically because we want to discuss GitOps. A typical architecture, and what today's demo will look like: you have a repository, and in that repository you have GitHub Actions. The GitHub Actions workflow builds and pushes a particular file to a particular folder inside the same repository. And you have Argo CD, which is installed on the Kubernetes cluster. Argo CD pulls that particular directory's changes, so as soon as there is a change in that directory, made via GitHub Actions, Argo CD pulls the change and deploys it onto a particular namespace on a node, wherever the application needs to go; picked up by the scheduler, of course. That's how today's demo will look, and that's how Argo CD fits into the picture. So, GitOps, again: Git as the single source of truth. Argo CD is easily trackable, with easy rollbacks. You can have disaster recovery setups: for example, you can have a cluster A and a cluster B, and if cluster A goes down, you can point Argo CD at cluster B, and with app-of-apps or an ApplicationSet, as soon as the cluster is registered, all those apps will be deployed onto the new cluster. You can have SSO, and you can have multi-cluster. Multi-cluster is very, very awesome when you talk about Argo CD, and we will be talking about that: the same application can be deployed across all clusters using something called an ApplicationSet, which has been part of Argo CD from 2.3 onwards. You can also, using kustomization files, define different configurations of the same application to be deployed across different environments.
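A sketch of that per-environment kustomize layout, with purely illustrative paths and file names: a shared base plus one overlay per environment, each overlay patching only what differs.

```yaml
# overlays/prod/kustomization.yaml (illustrative) -- a matching
# overlays/stage/ directory would carry its own patches for stage
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base                 # the manifests shared by every environment
patches:
  - path: replica-patch.yaml   # prod-only override, e.g. a higher replica count
```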
So, you can do that using kustomize and define which files apply for stage and prod. Those are very powerful features. It has a very rich web UI, which I think is enough for operators to do anything, and it also has a CLI. You can use the CLI for projects, for connecting clusters, for adding destinations, and so on. You also get metrics, audit trails for app events and API calls, and pre-sync and post-sync hooks. So, that's what Argo CD is. How it works is very simple and similar to what we talked about. A user or CI pushes to Git. There is a repo server; the repo server maintains an internal cache of the repository, because you don't need a copy of the whole repository every time, you just need the Git diff, so it does that. Then you have the main application controller. It syncs by taking the desired state from the repo server and the live state from the API server, and it makes sure the two remain in sync. If a replica goes down, it makes sure they are in sync. Any image change seen by the repo server, it makes sure it is in sync. Then it keeps notifying the UI, so the notifications are instant: if something goes down, you'll see in the UI that something is happening, the change is getting applied, all of that. You'll see it in the Argo UI as well. And then, yes, it obviously deploys to the cluster. Argo also offers an HA mode installation; there is a very simple kubectl apply command you can use for the HA install. It installs Dex, which can be used for authentication. The repo server is there. The notification controller started as part of Argoproj Labs; there are many awesome projects going on in Argoproj Labs, and once a project reaches a mature state, it moves into Argo CD.
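For reference, the HA installation mentioned above is a couple of commands; this is a sketch using the upstream manifests, and you should check the Argo CD docs for the current manifest path for your version.

```shell
# Install Argo CD in HA mode from the upstream manifests
kubectl create namespace argocd
kubectl apply -n argocd \
  -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/ha/install.yaml

# Dex, the repo server, the application controller, Redis, and the
# API server all show up in this namespace
kubectl get pods -n argocd
```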
So, again, from 2.3.x, I believe, the notification controller also became part of the main Argo CD repository. And then there's the server that exposes the API, which is consumed by the CLI and the web UI; the web UI and the CLI interact directly with the Argo CD server, so you need to provide the server address. It handles app management and RBAC management: Argo has its own RBAC where you can create users and admins and so on, auth delegation to external identity providers via OIDC, and it can listen to GitHub webhook events as well. You also get the application controller, which uses the Argo CD repo server to get the manifests and queries the API server to get the live state. And in HA mode it also runs Redis behind an HA proxy. So, that was HA mode. Now you know what GitOps is, the GitOps principles, what Flux is, where Argo fits in, and how they both work. Let's look at some of the differences, because it's also important to understand those. Flux is an incubating project, and Argo is also an incubating project. Flux has around 14 contributing companies; Argo actually has many more. I forgot to change the number over here; I think it's more than 30 contributing companies, so Argo CD is at a higher level in terms of contribution and collaboration. When you talk about bootstrapping, I showed you the screenshot of Flux bootstrapping; Flux is installed and bootstrapped via the Flux CLI. Argo CD has no native mechanism for bootstrapping, though there is an external open-source project by Codefresh that can be used. But again, bootstrapping is simple: you just apply the Argo CD manifests to install it, and then you can use the CLI to create projects and so on, or you can just use the Argo UI. The next portion is reconciliation. Flux is built on the GitOps Toolkit, and reconciliation can be set per component. This is an advantage of Flux: you can set the reconciliation interval per object.
But in Argo, it's a global setting for all applications. Three minutes is the default, and it can be changed via the config map, but it is not per application; it's a global setting. Application deployment: as I told you, Flux always uses Kustomize, even for plain YAML; Flux will create the kustomization on the fly. Whereas Argo CD detects whether it's Kustomize or plain YAML and applies it directly; it keeps things close to kubectl apply. For Helm, Flux uses the native Helm Golang library, so you will be able to use commands like helm ls against your cluster. But Argo renders the chart and pipes it to kubectl to keep it closer to the kubectl apply way, so you cannot use the Helm CLI against the cluster when you're using Argo CD. Argo CD has something called ApplicationSets to deploy to many clusters, and you have control over the ordering using sync waves and phases. That is a very powerful feature: you can wait for a particular YAML file to be deployed and then deploy the next one, so the ordering is in your hands. It's obviously more work on the config side, but it is possible, and it gives you more flexibility and control. Secrets: again, there is a big difference here, because Flux provides a guide for managing encrypted secrets with Mozilla SOPS, Sealed Secrets, and other tools, while Argo CD has plugins like Vault and leaves secret management entirely up to the user. RBAC: again, a big difference. Flux relies strictly on the RBAC capabilities of Kubernetes, service accounts and all of that, whereas Argo CD is very flexible and has its own users and groups mechanism plus a very rich UI. So, yes: Argo CD, rich UI; Flux, no UI. It has an experimental UI, but that's not official.
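As an aside, both Argo CD knobs discussed here fit in a few lines of YAML; the resource names below other than `argocd-cm` are illustrative.

```yaml
# The global reconciliation interval (default 180s) is changed in the
# argocd-cm ConfigMap and applies to every application
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cm
  namespace: argocd
data:
  timeout.reconciliation: 60s
---
# Ordering with sync waves: resources in lower waves must be applied and
# healthy before the next wave starts (deployment name is a placeholder)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app
  annotations:
    argocd.argoproj.io/sync-wave: "2"
```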
So, we'll keep it at no UI for Flux. I hope you now understand what Argo CD is, what Flux is, and what GitOps is. Let's try to cement that with a sample demo. Let me re-share my screen; I will have to share the complete window, to be honest. I already have Argo CD deployed on a Kubernetes cluster. This will be a very interesting demo, because what I'm trying to show you is the true GitOps way of doing things. This is a repository, KCD Chennai, that you can find. There are two folders in it. One is called infra, and in infra I define what clusters I want using Crossplane, which I think you already learned about in the last session, so that's good. What I have is a Kubernetes cluster: kubectl get nodes shows the cluster is in place, and kubectl get pods -A shows I already have Argo CD and Crossplane configured. I have Crossplane configured and I have installed the Civo Kubernetes Crossplane provider, and you can see in the Argo CD namespace I already have all the Argo components: the repo server, the server, the application controller, and so on. Now, the thing we are going to show is that I have an infra project. In Argo CD, this is the UI, and you can create a new application that monitors a particular repository. So, what I did is, you can go here and see the app details: this project is watching the KCD Chennai repository, the infra folder. Any changes made to the infra folder will automatically be deployed onto this cluster; the manifests will be applied onto this cluster. So, let's create a cluster. Let me first show you what is already there. argocd cluster list shows I have three clusters: one is the in-cluster where Argo CD itself is running, one is test-crossplane, and one is KCD Chennai.
Both of these are here, and my in-cluster is already running in some other region. So, let's do cluster three. I have already prepared the manifest for this live demo; I hope it works. I have defined the spec section and everything with respect to a Civo Kubernetes cluster. So, I'll do git add, git commit -m "live demo", git push origin main. It has committed to this particular repository: you can see the cluster-three.yaml is committed, and it would automatically sync from here, but we'll do a manual sync because we don't have time to wait for that. And here we can check: we'll do a kubectl get on the Civo Kubernetes resources, put that on watch, and sync. Yep, you can see the live demo cluster is already being created. So, let's do... [Moderator] Hi, Saiyam, just a quick check, we have two more minutes to complete. [Saiyam] Yeah, actually, the session started late, so I had prepared for 30 minutes. I'll take five more minutes. Okay, thanks. So, the next thing, and I'll quickly cover it because it is very important, is the deploy folder, which has the applicationset.yaml. This is what I was telling you: in Argo, you can define something of kind ApplicationSet, and it can be deployed to any number of clusters. When you set the generators, what it means is that all the clusters you add to Argo CD will have this application deployed once they're added; it will deploy this particular application automatically. And the application will look like this: a sample application coming from this particular repository. So, overall, what will happen? Any change made to this repository, there is a GitHub Action that will change the deploy folder, and after that, the ApplicationSet internally creates Applications.
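An ApplicationSet with a cluster generator, like the one driving this demo, can be sketched roughly as follows; the repo URL, paths, and names are placeholders, not the actual demo values.

```yaml
# One Application is stamped out per cluster registered with Argo CD
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: demo-app
  namespace: argocd
spec:
  generators:
    - clusters: {}                  # matches every cluster added to Argo CD
  template:
    metadata:
      name: 'demo-app-{{name}}'     # templated with each cluster's name
    spec:
      project: default
      source:
        repoURL: https://github.com/example/kcd-chennai   # placeholder repo
        targetRevision: main
        path: deploy                # the folder the GitHub Action updates
      destination:
        server: '{{server}}'        # templated with each cluster's API server
        namespace: demo
      syncPolicy:
        automated: {}               # keep every cluster in sync automatically
```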
So, any change made to this deploy folder will automatically be synced to all the clusters that are there. If I go to the applications, I can see all the clusters, and in KCD Chennai this is how the application is present. So, let me just refresh this. Okay. Again, in the interest of time: once the cluster gets created, you need to run this command, which is available; I will put all the links in the description. You have to run argocd cluster add, pointing to the kubeconfig file, and give the server name. What this command will do is add the cluster to Argo CD, and since we have the ApplicationSet in place, as soon as the cluster is added to Argo CD, you will see a new tile appear for the new cluster, and this application will be deployed onto that cluster. This shows you the power: I can create 100 clusters, have a mechanism to fetch the kubeconfig files and add them to Argo CD, and then Argo CD will automatically deploy the app to all 100 clusters. You can see the amount of effort that saves, and this is just one service; suppose you have 50-plus services, and you can see how this will scale. Yep, that's pretty much it that I had to cover for the presentation and demo.
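For reference, the cluster registration step shown in the demo looks roughly like this; the context and cluster names are placeholders.

```shell
# Register the new cluster with Argo CD using the kubeconfig fetched for it;
# the context name comes from `kubectl config get-contexts`
argocd cluster add live-demo-context \
  --kubeconfig ./live-demo-kubeconfig \
  --name live-demo

# The ApplicationSet's cluster generator picks the new cluster up and
# deploys the application to it automatically
argocd cluster list
```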