Okay, so it's two and we can begin. Welcome back after lunch; we hope not to bore you too much. So hello everyone, and today we'll be speaking on true GitOps using Crossplane and Argo CD. My name is Saiyam Pathak and I'm working as Director of Technical Evangelism at Civo, which provides a managed Kubernetes service called Civo Kubernetes. I'm a CNCF ambassador, pretty active on Twitter, and I teach cloud native on YouTube as well, so you can follow me everywhere. Hello everyone, my name is Saloni Narang. I work as a developer at SAP Labs in Bangalore, India. This is my Twitter profile, you can follow me here; please note I have fewer followers right now. And this is my LinkedIn profile; sorry for the extra numbers over here. Okay, so we were chatting outside and Saloni was describing the problem she was facing with achieving true GitOps. While we were having that conversation, she said she had already understood the GitOps terminology, implemented some of the tooling, and solved one piece of the puzzle. So Saloni, to set the context right, how about you share what problem you had and how you solved it, before we jump on to the actual problem of achieving true GitOps? Sure, Saiyam, I will do that. Before moving to my problem, I would like to share with you all what GitOps is. In very simple, layman's terms, GitOps is a mechanism where Git is the single source of truth. A developer writes code and commits it to Git, and then they can easily deploy their applications using Git. So basically, when your application is deployed, don't you think it should be deployed automatically, and the changes made by developers should be auto-applied?
So once the changes are auto-deployed and reflected, you would like to monitor what was applied, and I'm sure you don't want to write a lot of code just to check whether your application is deployed. Now, once your application is deployed successfully, suppose some developer makes changes to the application. What happens if the changes are not reflected in the system? And what happens if the changes are not desired? This is what drift detection in GitOps is about: it maintains the sync between the actual state and the desired state. So that is how the workflow is. Now moving on to repeatable infrastructure and applications: what happens if a developer wants various applications, internal development applications? And coming to versioning and rollback: suppose a developer creates an application and wants to add new features to it. If the application breaks in between, what will happen? Obviously the developer wants to roll back, so there is the concept of versioning in GitOps. All in all, this was the GitOps ideology: how GitOps evolved and what GitOps does for you. Now moving to my problem, Saiyam. What was happening is, I created my application and pushed the code to Git. I built the application using tooling like Jenkins or GitHub Actions. Then I prepared an artifact in the previous step and pushed that artifact to the OCI registry. And this is where I was facing my problem: the deployment of the application. There are tools like kubectl apply or helm install which I have to run manually. Then I need to manually monitor the uptime of the application. Then I need to secure the cloud credentials, and the same with the kubeconfig.
And then, as I mentioned earlier, drift detection, the sync between the actual and desired state, was missing, and the infra and app setup involved too many tools. So that was my problem with respect to deploying my application to Kubernetes. Now coming to the solution. What solution did I use? There are tools like Flux and Argo CD for this, and I personally used Argo CD. So I installed Argo CD onto my cluster, and this is what the flow looked like afterwards. I again built my application: I committed the code to Git, built it using tooling like Jenkins or GitHub Actions, and prepared an artifact which I pushed to the OCI registry. Now this is where the difference comes in, in the deployment. Argo CD is pulling the changes from Git: as soon as there is some change in the Git repo, it automatically applies all the changes, and it also deploys my application. So basically, as you can see, Argo CD works on a pull mechanism: it goes to Git, automatically applies my changes, and deploys my application. And you can see here that Argo CD follows all the GitOps principles. Let me tell you what the GitOps principles are. GitOps has four principles, defined under the OpenGitOps initiative, which was created by a CNCF working group. Moving on to the four GitOps principles. The first principle is declarative: you declare the desired state of your application declaratively, using YAML. Then comes versioned and immutable: you maintain version history so that you can easily roll back. If your cluster or system breaks, you can go back to your previous version.
If there is some break in the cluster, or something goes wrong in your infrastructure, you can easily roll back to the previous version. Then comes pulled automatically: agents should pull any new changes from the source automatically. And the fourth principle is continuously reconciled: agents keep the desired state and the actual state in sync. This is what I discussed previously, the drift detection part. So those are the principles of GitOps. Now, I have told you that Argo CD works on the GitOps principles, and we will move on a little bit to the Argo CD components and how Argo CD works. This is the high-level architecture of Argo CD. Once you install Argo CD onto your Kubernetes cluster, these are the components you will see. So this is the workflow: the user pushes code to Git; the repo server fetches the data from Git and stores a cache; then there is the application controller, which maintains the sync between the API server and the repo server; and there is the UI, which keeps showing the live status of your application until it is deployed successfully. So whenever you do the setup in production, these are the components you will see: the notification controller, the application controller, the repo server (which, as I mentioned, stores the cache), the Dex server, Redis (sorry, the space between the words is missing on the slide), and the Argo CD server. So that was all about the Argo CD architecture and workflow. Cool. I think that gives a good gist of what GitOps is and where it fits your particular problem, so that you are able to deploy your application from Git to the Kubernetes cluster and that process is solved. What else do you need?
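Before Saiyam jumps in, the declarative principle above is easiest to see with a concrete manifest. The desired state you commit to Git is just ordinary YAML like the following (an illustrative sketch, not the demo's actual file; names and image are placeholders):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello                     # illustrative name
spec:
  replicas: 3                     # desired state: three replicas, always
  selector:
    matchLabels:
      app: hello
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
        - name: hello
          image: example/hello:v1   # placeholder image tag
          ports:
            - containerPort: 5000
```

The controller's job is then simply to keep the cluster matching this file; if a replica dies or the file changes in Git, reconciliation closes the gap.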
You already have Argo CD, right? You are deploying your application. So what more do you want from GitOps? Before moving to what I want more from GitOps, I would like to give you a summary again of the benefits Argo CD provides. Multi-cluster: you can deploy your applications across clusters and track the events. A CLI for automation. Drift detection, keeping the actual and desired state in sync. Automated deployment: whenever a developer changes the application, the changes are automatically deployed. Version control, as I mentioned: whenever you create a new application or add features to it, it is version-controlled, and if something breaks you can easily roll back. It also offers a very nice web UI, then single sign-on, and yes, the last one is observability. Now, I actually want to achieve true GitOps for both infrastructure and applications; yes, I want to achieve everything. So can you help me with that? So basically you want to GitOpsify everything. That's an interesting use case, and I think when you have the GitOps principles, when you have powerful tools like Argo CD, you should be able to leverage some other components from the CNCF ecosystem to achieve this particular use case. I actually have exactly the tool for this, called Crossplane. Crossplane is an infrastructure provisioning tool. For example, if you want to provision resources, compute, Kubernetes clusters and so on in your particular cloud vendor, you can do that with Crossplane, and it is using the Kubernetes API. Now here is the benefit and the power that gives: it natively uses the Kubernetes API to declare your infrastructure. So it is declarative.
You can declare your infrastructure in YAML and just do kubectl apply, which is fancy, because as Kubernetes administrators you already know YAML files, you already know that syntax very well. So you can declaratively create your infrastructure: you define "I need a Kubernetes cluster, I need these resources from this cloud vendor", and that infrastructure gets created. Next, the infra is stored in Git. Since these are YAML files, you can easily store them in a Git repository, and Saloni, you already mentioned Argo CD: it can watch the Git repository, pick up those YAML files (or Kustomize or Helm) and deploy them onto the cluster, which is great; that is what we need. It will automatically take all the YAML files in the infra repository, or whatever you name it, and deploy them onto the cluster. Argo CD's job finishes there. But Crossplane has its own controller, and it will be configured with a provider, which I will show you in the demo, so it will be able to create the infrastructure using the GitOps approach. It is extensible, because Crossplane has integrations with cloud vendors like AWS, GCP, DigitalOcean, Civo; you name it, you have a Crossplane provider for that particular cloud vendor, and you can create resources through that provider. Also, since it is native Kubernetes, you can use the other Kubernetes tooling. Suppose you have a policy engine installed on your cluster for managing your policies: you can apply policies to the namespaces where you apply your Crossplane components as well. So you can use the native policies, the quota restrictions, all the benefits of Kubernetes, through Crossplane, for managing your infrastructure. So that is what makes it fancy.
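As a small illustration of that last point: because Crossplane resources live in ordinary namespaces, standard Kubernetes guardrails apply to them too. For example, a plain ResourceQuota (a hedged sketch; the namespace and the `databases.example.org` claim type are made up for illustration) can cap what a team creates:

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: infra-quota
  namespace: team-a                    # illustrative team namespace
spec:
  hard:
    # object-count quotas also work for custom resources;
    # databases.example.org is a hypothetical Crossplane claim type
    count/databases.example.org: "2"
```

With this applied, a third database claim in that namespace would be rejected by the API server itself, with no Crossplane-specific tooling involved.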
Then, it is less complex, because it follows the same tooling. You do not have to learn any new language: it is YAML, and if you are in the Kubernetes space, in the Kubernetes ecosystem, you already know it as a Kubernetes administrator. And I have something for the developers as well. Then, it uses the power of controllers. Kubernetes' main power is its controllers, the control loops, the reconciliation loops. You have the deployment controller, the ReplicaSet controller: you specify "I need three replicas of my application" and Kubernetes will make sure that three replicas of your application are always running. Similarly, Crossplane will take care of the drift detection that Saloni was mentioning, for your infrastructure as well. So let us say you created a Kubernetes cluster, and afterwards somebody on your team removed a particular node live and did not commit that to the Git repository. Crossplane will identify the actual state and make sure your infrastructure is back in sync: even if the node was deleted without any consent, without telling you, that node will automatically come back up. That is what Crossplane makes sure of. The drift detection is there too; that is how it follows the GitOps principles. Now, this next part is a pretty neat and very important feature for developers: compositions. Compositions can be defined for multiple cloud vendors. What Crossplane is trying to solve here is this: let us say you have multiple cloud vendors, and everybody is talking about multi-cloud, like "I want S3 from AWS, a cluster from somebody else, and a database from Google Cloud". All these combinations are out there, or maybe you have different cloud vendor integrations and you just want the best resource for each particular piece of infra. So what you can do is have your admins define the composition files.
These files carry the actual configuration and the specification for the cloud vendors you want to integrate Crossplane with. As a developer, you then only need to request "I need a database, from this cloud vendor, with this much memory", nothing else. As a developer it becomes very easy, because you do not have to specify anything else. So with compositions you can even empower your internal developer teams: they create those requests, those go through Git commits and PRs, your infra admins approve the pull requests, and your infrastructure is ready. That, in a nutshell, is Crossplane. Now more on how it works. You have a base Kubernetes cluster; in our case we will be using Civo Kubernetes. On top of that we install the Crossplane controller, so that as soon as there is a custom resource for the infrastructure, it is able to identify it and act through the configured provider. After installing Crossplane, we install the provider. The provider is basically your cloud vendor of choice, where you actually want your infrastructure to be. Then you have the configuration: whether we talk about AWS or Civo, there are certain secret keys and access keys you need in order to access that particular cloud from your account. Those go into the cloud-provider-specific configuration that you deploy, so that when Crossplane tries to create a resource, it has the proper credentials. And then you create the custom resource. These are the actual YAML files that your developers or admins commit to Git, and from there the controller picks them up and creates that resource in the configured provider. So, for the problem you just mentioned, Saloni, this is what my proposed architecture would look like. This part is familiar, so we will go through it quickly. You have your GitHub repository.
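(A quick aside before the repo walkthrough: to give a feel for that composition-driven developer experience, here is a hedged sketch of what a developer-facing claim could look like once an admin has defined the corresponding composite resource definition and compositions. The `Database` kind and the `example.org` group are hypothetical, not from the demo.)

```yaml
# Hypothetical developer claim; the admin-authored XRD and Composition
# behind it map this request to real cloud resources.
apiVersion: example.org/v1alpha1
kind: Database
metadata:
  name: team-a-db
  namespace: team-a
spec:
  parameters:
    size: small              # the only knob the developer has to set
  compositionSelector:
    matchLabels:
      provider: gcp          # pick which vendor's composition to use
```

The developer commits only this small file in a PR; everything vendor-specific stays in the admin-owned composition.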
Inside your GitHub repository you have a couple of folders: a deploy folder where our application lives, and an infra folder. You could have infra as a separate repository; for this particular demo the infra folder is inside the same Git repository. As soon as this happy developer pushes code to the main branch, a GitHub Actions workflow gets triggered that builds the image, pushes that image to the registry, takes the Git SHA of that commit, and changes the tag in the deployment file. Argo CD, installed onto your Kubernetes cluster, is watching this deploy folder, and as soon as there is any change in the YAML file it deploys it onto the cluster and gives you a new version of your application, which can comprise Pods, Services, PersistentVolumes, PVCs, whatever is there. Now, there is another Argo CD application we have created here, which monitors the infra folder. Like Saloni said, she wants the internal developer teams to be empowered to create repeatable infrastructure, so what they can do is just create a PR to this infra folder with the custom resource for a Kubernetes cluster, or whatever resource they want to create, and Argo CD will apply it. Argo CD does not know what YAML it is applying; it just applies the YAML file. There is also Crossplane installed on this cluster, and as soon as Argo CD applies that Crossplane custom resource, the controller watching it will create the infra with the configured cloud provider. So that is what the overall working looks like. Let us move on and see all of this in action, so that you know I am not faking it; it actually works. So it is demo time. This is Civo Kubernetes; Civo, as I just mentioned, is a cloud provider, so we will not go much into that, we just want a Kubernetes cluster.
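(As an aside, the CI half of the architecture just described could be sketched as a workflow along these lines. The action versions, secret names, image name, and the final tag-update step are my assumptions, not the demo's exact file.)

```yaml
# Hypothetical sketch of the build-and-retag workflow described above.
name: build-and-push
on:
  push:
    branches: [main]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: docker/login-action@v2
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
      - uses: docker/build-push-action@v3
        with:
          context: .
          push: true
          tags: example/hello:${{ github.sha }}   # tag image with the commit SHA
      # Final step (sketched): render the deployment template with the new
      # tag and commit the updated deploy/hello.yml back to the repo, so
      # Argo CD sees a change in the deploy folder and rolls it out.
```

The important design point is that CI never touches the cluster; it only writes the new tag back to Git, and the pull-based agent does the rest.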
So we will click on "Launch my cluster". There is nothing right now, you can see that, and we are just creating a new cluster, oss-japan. We leave everything default, choose a size that we need, and I will also install Argo CD, because we need Argo CD on that particular cluster. And we click "Create cluster". This will create a Kubernetes cluster in the back end; it will take a couple of minutes, so we can go to the Git repository and see what is actually inside. Let me zoom in a bit. So that is how your repository structure looks: you have a simple Python Flask application, a Dockerfile to build the code, and the deploy folder. The deploy folder has a YAML file containing the Deployment, the image, and the Service; obviously it can be a more complex application with more components inside. Now let us go back: you have this infra folder which has the cluster. This is how the Crossplane custom resource for a Civo Kubernetes cluster looks. You have a kind CivoKubernetes; that is the power of custom resources. You define the API version, the name, and in the spec section your cluster. This should create a Civo Kubernetes cluster called test-crossplane, with this particular size and with these applications installed. This is specific to the Civo provider, and you can configure whatever provider you need to. So, we can see our cluster is ready. I will click and download the kubeconfig file; yeah, I know, it is super fast. We export KUBECONFIG, and copy the same in another tab as well, because I need that. kubectl get nodes: you can see your cluster is up and running. So we have that, and it should also have started to deploy the Argo CD components. And you can see the Argo CD components have started to get deployed.
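(For reference, the infra custom resource we looked at a moment ago has roughly this shape. This is paraphrased from the demo; the exact group, version, and field names may differ between releases of the Civo Crossplane provider.)

```yaml
# Rough sketch of the demo's Crossplane custom resource for Civo.
apiVersion: cluster.civo.crossplane.io/v1alpha1   # group/version approximate
kind: CivoKubernetes
metadata:
  name: test-crossplane
spec:
  name: test-crossplane
  # Civo-specific: marketplace applications to preinstall on the new cluster
  applications:
    - argo-cd
    - prometheus-operator
  providerConfigRef:
    name: civo-provider        # points at the ProviderConfig holding credentials
```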
So you can see all of these getting created in the back end, everything Saloni just mentioned: in the YAML you have the notification controller, the repo server, Dex, Redis, the Argo CD server, the application controller. All these components are being installed there. So this is fine; what we need to do next is install Crossplane. So let us install Crossplane as well. We will create a namespace, crossplane-system, and do a simple helm install. I have already added the Helm repo, so I am not going to do that again; I will just do a helm install of Crossplane into the crossplane-system namespace. Meanwhile, while that is happening, we can go back to the repo to check some of the other details. This is the GitHub Actions workflow I was telling you about in the architecture diagram. You can see it is using the Docker build-push action, so it builds the image using the Dockerfile present in the repository and pushes it to my Docker Hub account, gets the Git SHA, and generates the deployment. What it is doing is, as I showed you in the deployment file, it changes the image tag to the latest version with respect to the Git commit that happened on the main branch. I have a Jinja template for that; you can see it in the templates, and in this section the image deploy tag is a variable. So as soon as there is a commit, there is a new SHA tag for that commit, and the GitHub Action attaches that hash and commits it to the deploy folder's hello.yml file. That, at a high level, is what it does. Okay, we already have this installed; let us do kubectl get pods -n crossplane-system. Our Crossplane is running, so what we have to do next, as I told you, is install the provider. We will install the Civo provider, because we need to create the infrastructure on Civo. The provider is created, and after that we have to create a secret, because, as I told you, after installing the provider you need to configure it with credentials. I am not going to show you the secret key; this credential is fake, but the real file looks exactly the same. The ProviderConfig means it will use the credential created above, and it will use the region FRA, meaning whatever is created will be created in the Frankfurt region. So I have that file, let me apply it. Meanwhile, I also need to open some firewall ports, so I will do that now; do not judge me on this. Cool, so our provider is there, and the secret and the Civo ProviderConfig are created. Awesome. We can now go back and see Argo CD; everything should be up and running, so we can actually open Argo CD now. Cool, all components are running. We will port-forward it and get the credentials for this particular Argo CD instance. So this is how Argo CD looks; let me log in. That is the Argo CD GUI. So let us first create the application part that Saloni has already done, the app side of things. Sync policy automatic means as soon as there is a new change it will automatically sync; auto-create namespace, it will do that too. The repository URL is oss-japan, on the main branch, and we need to deploy the application from the deploy folder, so this is the folder I want my Argo CD agent to monitor continuously. The destination is the current cluster, in the demo namespace. Let us create that. So it is creating the application, it is syncing, it is synced, and it has already started to deploy the Service and the Deployment. We can see kubectl get pods -n demo: the demo namespace was not there before, it was a fresh cluster, and you can see the container creating, and here you can also see the live
status, which is what she was mentioning: the live status keeps updating, and as soon as the application is healthy you can see the green heart there. We can see it should be in Running state, and you can see it is Running. If we go to the Service we can see some more information about what was actually deployed: you can see the manifest, you can see the Service, and that it is running on a NodePort. So let us quickly access this application: we copy this, and enter the node port from here. Hey, so this is the application, deployed. Do not forget to follow us on Twitter; that is why it is mentioned over here. So this part was already there; that is what Saloni has done. The fancy stuff starts from here. What we are going to do now is create another application. We will go to the home page, create a new app, and this time what we want is infra: we want infrastructure to be provisioned in the same way as we have done the application. You can see over here, inside my Civo Kubernetes dashboard, there is only one cluster right now; no cheating over here. Now I will give a project name, and again the same stuff: automatic sync policy, auto-create namespace. We have to copy the repository URL again, so it is the same repository, main branch, but this time I do not want to monitor the deploy folder, I want to monitor the infra folder, because that is where my infrastructure files live. This is where I was saying you could even have a separate repository just for your infrastructure. The destination is this cluster, the default namespace, so we do not have to specify much, and we click Create. That is it. You can see it also synced pretty fast, and it has already deployed. And that is again a very fancy thing: in the Argo CD UI itself you can see the connections, how many resources this particular application created. So you have test-crossplane there, deployed, and you can see the custom resource that was applied. We can actually do kubectl get civokubernetes, and we will see that it is being created: it says the cluster is being created with the applications Argo CD and Prometheus Operator, and it is the same YAML file that was in the infra folder, where we mentioned the applications argo-cd and prometheus-operator. This is specific to Civo, because Civo has a marketplace and you can pick and choose what applications you want to deploy; you might not be able to give this same configuration for AWS or another cloud vendor. And if I go here and refresh, I should be able to see test-crossplane being created. So now we have actually GitOpsified not only the applications but also the infrastructure, and that is the actual benefit, the actual use case we should be moving towards. Because now the internal developer teams, Saloni, in your team, can simply come to this repository, oss-demo, create a pull request to this infra folder, and provide the same custom resource file with their own requirements. Let us say a team wants a cluster: they give the same kind of custom resource, Argo CD will sync and deploy that custom resource, and Crossplane will take it and create the Kubernetes cluster, or whatever resource is defined in the Crossplane custom resource. And the drift detection is pretty simple too: if I try to delete this cluster, Crossplane will detect that and automatically create the cluster again, because our desired state is stored in Git. A developer can make a manual error and delete something, but our state is in Git; everything is
verified using pull requests and reviews, and only then merged, so our state stays in sync with Git. Soon you will see that Crossplane starts creating the test-crossplane cluster again, once the 30-40 second sync period, whatever it is, kicks in. We have a few minutes and one slide left to cover, so also: if we change anything in the code, that should automatically be deployed; that is what GitOps is. So we just add some more exclamation marks, push that to the main branch, and we should be able to go back and see a GitHub Actions run triggered. We can see the details: it goes through the complete workflow from the workflow YAML file that I showed you, building the code, pushing it to the registry, and changing the deploy/hello.yaml file, because Argo CD is only watching the deploy folder for the application; it does not watch anything else. While it is doing that, we have one more slide. So Saloni, how did you like this particular solution for your use case? Thank you, thanks for the great demo; surely it will help my team. So, what to do next: have an infrastructure and an application repository; have Crossplane deployed with compositions for different cloud vendors; let developers use minimal config to create and push infra specifications; connect the clusters to Argo CD; and after that, use ApplicationSets to deploy a preset of ready-to-use environments, DR setup, production setup, in a true GitOps way. These are some of the advanced things after this; you do not have to stop here. You can use compositions to go multi-cloud, and you can use ApplicationSets to deploy the same application to all the hundreds of clusters connected to your Argo CD, so that you can create a pre-configured set of cluster applications for new teams: define your policies, your standard ApplicationSet, whatever needs to be installed there. So I think that is what we had for this particular session and demo. I hope the cluster has started to create, and I also hope Argo CD will deploy the new application version soon; you can see test-crossplane has started to create again, the drift detection thing. So yeah, I hope this was useful in terms of how to GitOpsify your applications and infrastructure using Argo CD and Crossplane. Thank you so much. We are here for questions, if any; I think we only have one minute, but we will be around, so if you want to understand anything, the demo repository and the slides are already there. I will put the link to the demo repository in the slides so that you can go and do this demo yourself. If you sign up for Civo you get $250 credit as well, just the standard page over there. Yeah, I think that is pretty much it. Thank you so much for tuning in.