Hello, everyone. I'm Siamak Sadeghianfar, product manager for OpenShift GitOps, and today I want to walk you through the first release of OpenShift GitOps, which was released as a tech preview on OpenShift Container Platform.

What I have in front of me is an OpenShift Container Platform 4.6 cluster, and I am already logged in, looking at the Administrator perspective of the OpenShift web console. Let's enable OpenShift GitOps on the cluster through OperatorHub. I go to OperatorHub and search for GitOps in the list of content and operators available there. There it is: Red Hat OpenShift GitOps, the first version that was released. Let's install it on the cluster. I just go with the defaults and wait a couple of seconds while it installs the operator and the related resources and controllers on the cluster.

What happens by default when I install the operator is that a default instance of Argo CD is provisioned on the cluster at the same time, pre-configured to allow configuring the OpenShift cluster itself through this instance of Argo CD. As soon as the operator installation is ready and we get the green checkmark, if we look at the application launcher, we see that a shortcut to this Argo CD instance has been added there, so we can easily switch and navigate to it. Like I mentioned, this instance is pre-configured to be able to manage cluster configuration with it, if you want to make customizations to OpenShift authentication, the console, or other areas, as well as install operators, for example. It is not cluster admin, but it has elevated privileges. It's an instance that is typically owned by the platform ops team or the cluster owners who want to manage the cluster itself.
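If you prefer to script the installation rather than click through OperatorHub, the operator can also be installed declaratively with a Subscription manifest. This is a minimal sketch, not the demo's exact steps; the channel name in particular is an assumption (it has varied across releases), so check the package manifest in your catalog before applying:

```yaml
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: openshift-gitops-operator
  namespace: openshift-operators        # default namespace for cluster-wide operators
spec:
  channel: preview                      # assumed channel for the tech-preview release
  name: openshift-gitops-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
```

You can list the available channels with `oc get packagemanifests openshift-gitops-operator -o jsonpath='{.status.channels[*].name}'`.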
So let's go to this Argo CD instance and make some customizations to the OpenShift configuration through it. We're faced with the login page of Argo CD. The default user for Argo CD is admin if you haven't configured any other authentication for it. The password is generated by default when Argo CD gets bootstrapped, and it is stored in a secret living in the same namespace as Argo CD. So let's decode that secret and get the password out. This is the password that was generated for Argo CD. In future versions of OpenShift GitOps we're working on integrating Argo CD authentication with OpenShift as well, so you would essentially use your OpenShift credentials to log in to Argo CD and wouldn't need to decode the secret anymore.

Let's log in, and there we have Argo CD 1.8. Right now it is empty and doesn't have any applications or syncs defined in it. I do have a Git repo ready, which also describes the flow of this demo if you want to try it on your own. Within that repo there is a cluster folder where I have stored some cluster configuration: a ConsoleLink that adds a custom link to the application launcher, pointing to the Kubernetes space of the Red Hat Developer blog, and also some namespace configurations to be created.

So let's ask Argo CD to sync the content of this folder to the cluster for us. I'll create a new app in Argo CD and call it cluster-configs. I use the default project; projects are used to group applications inside Argo CD. And I will use the manual sync policy for this particular part, because I don't want changes from the Git repo to automatically be rolled out to the cluster. I want the ops team to have a chance to review, and when they're happy with the changes and sure about them, issue a manual sync and ask Argo CD to sync the configs to the cluster. The next part is which Git repo contains the configuration of the cluster.
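The "decode the secret" step above can be sketched as a couple of `oc` commands. The secret and namespace names below are assumptions based on the operator's defaults (in some releases the secret is named `openshift-gitops-cluster` instead), so adjust them to your installation:

```shell
# List secrets in the namespace of the default Argo CD instance to find
# the one holding the generated admin password (typically "<instance>-cluster"):
oc get secrets -n openshift-gitops

# Decode the admin.password field directly:
oc get secret argocd-cluster-cluster -n openshift-gitops \
  -o jsonpath='{.data.admin\.password}' | base64 -d
```

Then log in to the Argo CD UI with the user `admin` and the decoded password.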
So that's the Git repo we're looking at. I'll go with the main branch; it is better to pin to a particular commit ID or tag for your cluster, but in this example I go with main. The content of the cluster folder is what I want to be synced to this cluster. The destination is the cluster that I'm running on, the Kubernetes default; if I wanted to manage the configuration of a remote cluster instead, I could just add that cluster's URL here. Right now everything in that repo is cluster-scoped resources, but if you have other types of configuration for OpenShift that are namespace-scoped, they usually end up in the openshift-config namespace. So let's just use that namespace to be safe, and recursively apply all of this.

Instead of this form, if you want to be fully declarative, you could also add an Application CR to Argo CD that would configure the exact same thing I just showed you through the dashboard; I have an example of the declarative way of creating the same application. Let's create this.

All right, the application is created, and Argo CD immediately does a drift detection and identifies that the cluster does not have the configs that we have in the Git repo. This is expected, because we asked Argo CD not to do automated syncs and to wait for us to issue a sync. So let's just check that in this console we don't have any extra links here.
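The declarative equivalent of the form I just filled in is an Application custom resource along these lines. This is a sketch: the repo URL is a placeholder for the demo repo, and the Argo CD namespace is assumed to be the operator's default:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: cluster-configs
  namespace: openshift-gitops      # namespace of the default Argo CD instance (assumed)
spec:
  project: default
  source:
    repoURL: https://github.com/example/gitops-demo   # placeholder for the demo repo
    targetRevision: main
    path: cluster
    directory:
      recurse: true                # recursively apply everything under cluster/
  destination:
    server: https://kubernetes.default.svc   # the cluster Argo CD runs on
    namespace: openshift-config
  # No syncPolicy.automated block: syncs stay manual so the ops team reviews first.
```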
I will ask Argo CD to perform a sync. It lists what kind of resources will be synced to the cluster, I hit Synchronize, and it applies those resources to the cluster. If I go to the console, under the application launcher I see that there is actually a link added there. So the config is in sync with the cluster.

From this point on, if I want to manage the configuration of this cluster, or any number of clusters if you are in a multi-cluster environment where more than one cluster is looking at this particular folder for rolling out configuration, what I need to do to roll out a change to all the clusters that retrieve their configuration from this Git repo is really to go to the Git repo, modify the content there, issue a pull request, and after review get it synced.

So let's modify the name of the link and add "Kubernetes" at the end, since this is the Kubernetes community space of the Red Hat Developer blog. Normally I would issue a pull request and give my peers a chance to review the change that is being asked to be rolled out to those clusters; for the demo I'll shortcut that and commit directly. The change that I want on those clusters is now represented by a commit, right there in the history, both for audit purposes and also, especially when you're looking into issues, so you know what happened to the cluster: you can always go look at the Git history and see what was rolled out to that cluster.

Let's go take a look at the Argo CD dashboard and see how it looks there. We see that Argo CD has detected that there was a change and the state of the cluster is not the same as what we have declared in Git, which is expected, since we just issued a change.
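The custom launcher link being edited here is an OpenShift ConsoleLink custom resource. A minimal sketch of what such a manifest looks like (the resource name and section are illustrative, not necessarily the demo repo's exact values):

```yaml
apiVersion: console.openshift.io/v1
kind: ConsoleLink
metadata:
  name: redhat-developer-blog          # illustrative name
spec:
  location: ApplicationMenu            # show the link in the application launcher
  text: Red Hat Developer Kubernetes   # the label edited during the demo
  href: https://developers.redhat.com/topics/kubernetes
  applicationMenu:
    section: Blogs                     # illustrative launcher section
```

Changing `spec.text` in Git and committing is exactly the kind of change Argo CD then reports as out of sync.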
I will ask Argo CD to sync this change to the cluster again, so it issues a sync and rolls it out to the cluster. If you look at the application launcher, we see that the name has changed, with "Kubernetes" at the end. So Git becomes my interface for rolling out operations. We have changed cluster operations to go fully through Git workflows: the pull request workflows, the reviews, the commits, and everything we have been doing around Git workflows for applications we can now apply to managing the configuration of the cluster itself. And that's really the value you get out of adopting GitOps for configuration management.

So let's go further and deploy an application on the cluster through Argo CD. In the previous section, when I asked for the cluster configuration to be synced, we also created a namespace called spring-petclinic, so that we can deploy the Spring PetClinic application, the Spring Boot sample application, in this namespace. Within the same Git repo there is an app folder where I use Kustomize as a templating system for the Spring Boot manifests; there is a deployment, a route, and a service that I want to deploy on the cluster.

So let's create a new application. We'll call this one spring-petclinic, in the default project, and this time I set the sync policy to automatic, so every change that is in Git is automatically rolled out to the cluster. If something is removed from that Git repo folder, it should be removed from the cluster as well. And I want self-healing, so Argo CD should enforce that the state of the cluster is always in sync with the state of the Git repo itself. I'll use the exact same Git repo here; we still go with the main branch, and the app folder in that Git repo.
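The automated policy described above (auto-sync, prune on removal, self-heal) maps to the `syncPolicy` block of the Application CR. Again a sketch, with the repo URL as a placeholder and the Argo CD namespace assumed:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: spring-petclinic
  namespace: openshift-gitops      # namespace of the default Argo CD instance (assumed)
spec:
  project: default
  source:
    repoURL: https://github.com/example/gitops-demo   # placeholder for the demo repo
    targetRevision: main
    path: app                      # the Kustomize folder with the app manifests
  destination:
    server: https://kubernetes.default.svc
    namespace: spring-petclinic
  syncPolicy:
    automated:
      prune: true      # remove cluster objects whose manifests were deleted from Git
      selfHeal: true   # revert manual changes made directly on the cluster
```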
We are syncing to the current cluster; this is the pull model of application delivery, where a cluster pulls its configuration or application into its namespaces. This needs to be synced into the spring-petclinic namespace that we created. You can also see that Argo CD has actually detected that we're using Kustomize for that particular folder, and it gives me some Kustomize configuration options that I can modify if I need to.

Let's create this. Since we put it on auto-sync, Argo CD automatically starts rolling out the content of that Git repo to the OpenShift cluster inside the spring-petclinic namespace. We see that spring-petclinic has started deploying now and is coming up to a healthy status; give it a second until the image is pulled from Quay and deployed within the namespace. All right, it is deployed. Let's check it out. There we go: Spring PetClinic is deployed and it's up and running.

So let's look at the self-healing part of Argo CD and the security aspect that we get. On one side, every change that is rolled out to the cluster is traced in the Git provider, in the Git history. That already gives us a higher level of audit and traceability: what changed, by whom, and when it was rolled out to the cluster. But on the other hand, Argo CD constantly monitors the state of these deployed objects and compares them to the Git repo, and if there is a drift, it detects it and tries to correct it as soon as possible. This drift might be a malicious change: somebody manually changing an object on the cluster, or changing the image that is deployed to an image that was not supposed to be in that cluster. There have been breaches over the last years that were carried out on clusters by just replacing the image, which is not visible in any system. But Argo CD would prevent that, because it immediately compares to the
Git repo. So let's see: I scale this deployment to three pods, and you can see that it immediately scales back to one pod. If you watched Argo CD in spring-petclinic for a moment, you might have noticed that it was in the syncing status. I can also see it in the events, because Argo CD creates Kubernetes events as it performs operations on the cluster. It immediately identified that the application on the cluster was out of sync: somebody had changed something on the cluster. This is critical because the change was not issued or initiated through the Git repo; it was initiated on the cluster. Nevertheless, Argo CD identified and detected it, and immediately rolled it back to the state that was available in Git, bringing it back to one pod.

We can make even more aggressive changes. Let me, for example, delete this altogether and see what happens. I'm going to delete the deployment; say we have some malicious user logged in who is messing with the cluster. So the deployment object was removed, and you can see that Argo CD again identified it immediately and is rolling it out again to the cluster based on the content of the Git repo, bringing the application back up. It ensures that undesired changes cannot be rolled out to a cluster unless they come through the Git flow and are approved. So it really heightens the level of security and change control we have over the changes that are rolled out to the cluster.

Like I mentioned, this instance of Argo CD has elevated privileges to be able to manage the cluster configuration without being cluster admin. But at the same time, we have customers that want to give an instance of Argo CD to an application team, so that the team can control the namespaces their applications are deployed to without being able to make any modifications to the cluster itself. For those cases, as soon as you install the OpenShift GitOps operator, within the catalog you can see that there is an
Argo CD instance added to the developer catalog. So you can go and instantiate an Argo CD within any namespace that you want, retrieve the password from the secret similar to what I did before, and you are admin of that Argo CD instance, which is confined to the namespace. Through that Argo CD instance you cannot install an operator or make any type of cluster-scoped change to the cluster you're running on; you're limited to the namespace you're running in, unless the cluster admin comes and explicitly grants more access to this Argo CD instance. So we can also cover the cases where you want a less-privileged Argo CD instance just for application delivery, alongside instances that are owned by platform operations for managing cluster configuration.
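Instantiating such a namespace-scoped instance from the developer catalog boils down to creating an ArgoCD custom resource in the team's namespace. A minimal sketch under assumed names (the instance and namespace names are illustrative):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: ArgoCD
metadata:
  name: team-argocd           # illustrative instance name
  namespace: my-team          # the namespace this instance is confined to
spec:
  server:
    route:
      enabled: true           # expose the Argo CD UI through an OpenShift route
```

The operator then bootstraps a dedicated Argo CD in `my-team`, with its own generated admin password stored in a secret in that same namespace.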