and welcome to another OpenShift Commons. We have a great show for you today. I'm really excited about, well, I'm excited about all of them, but today we have Christian Hernandez. He is a GitOps rock star, right, on OpenShift. If you haven't watched the GitOps Happy Hour, please check that out. That's on Thursdays, and he'll plug it later. We also have Andrew Block, Distinguished Architect and OpenShift guru, pretty much everything. And then Siamak is also joining us today, and he is an OpenShift product manager and he covers all the stuff that you are interested in, right? GitOps, Pipelines, a bunch of stuff. So again, it's an amazing group we have today. And we are going to kick it off with GitOps in OpenShift with Argo CD and Helm. And Christian, please take it away. Yeah, so thank you very much. Again, my name is Christian Hernandez, technical marketing manager at Red Hat and overall GitOps enthusiast, right? And so what I'm gonna do is just give a brief overview of GitOps and Argo CD, leaving plenty of wiggle room for any questions or any comments that may come up. And then I'll hand it over to Andrew, who is, like what was said, a general guru in terms of all things OpenShift, including Helm, so. Polyglot, you're thinking of polyglot. Polyglot, yeah, exactly. So one of the things I kind of just start with is, what is GitOps, right? And really, by definition, GitOps is when the entire infrastructure, your application deployment, everything, is fully saved and represented in a Git repository, right? So generically, that's what GitOps is. Everything having to do with your environment is in Git. And I usually just like to leave it at that, leave enough breathing room there, because GitOps is an ever-changing, ever-evolving thing and it really is a journey, right? So I used to be in sales, right?
And we always talk about journeys, and that term gets overused a lot, but this is literally — I actually, truly mean a GitOps journey, right? And it's really an evolution of what we brought with the idea of DevOps and Agile. And actually Chris Short, as you may all know, the host of OpenShift.tv, in 2018 literally said GitOps is the holy grail of DevOps, right? So with the idea of DevOps practices and where we wanna get to, GitOps is really that end goal, right? Where everything is described in a Git repository and everyone can get involved. So really, why GitOps, right? You hear this buzzword and it's like, okay, why would I want GitOps, right? And so these are some of the challenges that GitOps addresses. I'll call out a few of these: things like, "it takes weeks or even months to get me an environment," "my application behaves differently in production than it did in test." These are some of the things that I have personally heard in my life. And things like "production deployments have a very low success rate." So when you take a look at some of these things that GitOps addresses, these are actually some of the things that DevOps addresses, right? So it's like, well, what's the difference between DevOps and GitOps? How are they all tied together? A lot of the time people use these terms together, right? They'll say DevOps and GitOps — and especially those of us that are really into GitOps talk about DevOps — because while DevOps is actually a culture, GitOps is that culture in practice, right? So it's really, as Chris Short put it, I guess, best — which is why I always quote him anytime I present GitOps — GitOps is literally the holy grail, right? It's like, I have a Git repo that everyone can contribute to and it manages my infrastructure, anytime, via pull request, right?
So some of the benefits, right, that you get: since it's all in Git, all changes are auditable, meaning you have this convenient trail of all the changes that you've made in your environment, right? Anything as simple as someone scaled the cluster from three nodes to four nodes — that's in Git. Someone deployed a new version of the application — that's in Git. So all the benefits you get from Git, you get, by extension, in GitOps, right? So all changes are auditable. You get that standard roll forward, roll backward, in the event of a failure. You have the ability to — just like in your code repository — if you roll out a change and that change breaks something, you can always roll back to a previous Git commit, right? Or Git tag. Disaster recovery is, I'll put it simply, reapplying the current state of the manifests — meaning if you lose a cluster, you just reapply what you have there in your last known good state, and you have the cluster up and running. There was a good article actually, and I believe it was written by Weaveworks, about a customer that essentially restored their cluster in about 15 minutes, right? They went from down to fully operational in 15 minutes, and most of that time was getting storage back up and running. So the experience is really pushes and pull requests, right? You make a pull request anytime you wanna make a change, and that change can come from anywhere. There's obviously release gates in place, but the idea is that I as an administrator can make a pull request to the deployment code of the application because I want it to behave differently, and vice versa, right? So you have this whole convenient way of working together inside of Git. Git already has the practice of us working together built in, and we just take advantage of it. So, and GitOps really is for everyone, right?
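The roll-forward/roll-back flow Christian describes is just standard Git mechanics applied to manifests. A minimal sketch, using a throwaway repo and a hypothetical manifest file (names and commit messages are illustrative):

```shell
# Sketch only: simulate reverting a bad deployment commit in a GitOps repo.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "demo@example.com"
git config user.name "Demo"

echo "replicas: 3" > deployment.yaml   # last known good state
git add deployment.yaml
git commit -qm "scale to 3"

echo "replicas: 4" > deployment.yaml   # the change that broke something
git commit -qam "scale to 4"

git revert --no-edit HEAD              # roll back via a new, auditable commit
cat deployment.yaml                    # back to the last known good state
```

A sync tool pointed at this repo would then reconcile the cluster back to three replicas on its next loop — the rollback itself is just another commit in the audit trail.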
So a lot of the time, people think that GitOps is really like a developer tool, and for a lot of it, it is, right? You're literally deploying code using this practice. And I'll go over in a bit, a little bit about things like the sync tools, but in actuality, it's really for everyone, right? It's a DevOps tool, right? It's DevOps in practice. You can't have DevOps without the ops either. So it's for developers, for operations, for SRE teams — it's really not geared specifically towards any one user group. So, OpenShift and GitOps, right? I always make the comparison — and I think it's going to become my tagline — it's like peanut butter and chocolate, right? GitOps is a declarative method to describe what's in your cluster, and OpenShift is a declarative environment, right? It's built on Kubernetes, and so you have a declarative way of deploying your application and your infrastructure stack, and now you have a platform that does exactly that, right? So it kind of just fits together. All the declarations are in YAML files, right? So it's all version controlled. You can have OpenShift suck that in and have it either modify operators or just do simple deployments. Like if you have a simple deployment-service-route sort of thing, it's all stored in Git, right? So it's a perfect match. So some of the GitOps principles. Now this is the part where I always talk about how this is a journey. This is where we are, like now, right? This is kind of the current idea of how to use GitOps on OpenShift. So I always recommend: separate application source code from your manifest YAML, right?
So in the beginning, I always had the application source code and the YAML deployment manifests in the same repo. Actually, it's a lot better if you maintain those separately, right? That way source code commits are independent from deployment commits. All your deployment manifests are standard Kubernetes manifests, right? Everyone jokes, you know, "I'm a YAML engineer now." It doesn't have to be YAML, right? It could be JSON as well, but all those manifests are standard Kubernetes manifests stored in Git. So one of the big things is that you want to avoid duplication of YAML across environments — and I'll go over in a little bit how that looks — and manifests should be applied with standard OpenShift and Kubernetes tooling, right? So there's really nothing new here that you haven't been doing if you've been working with Kubernetes or OpenShift. There aren't really any new tools, per se — there are a couple of new tools, but it really is just standard OpenShift and Kubernetes tooling here. So really, like I said before, your day two looks a lot like what you've been doing normally, right? For the developers out there, it's really just what you've always been doing, right? You make a pull request, you merge the pull request, and then you run your pipeline and it just automatically happens, right? For the operations folks — I mean, for me, coming from an operations background, this is a little change, but it's really something that's been there for a long time: tried, tested, and true, right? So for an operations guy, when you hear that, it kind of just calms my nerves. Okay, it's tried, tested, true. People have been doing this whole process of Git pull requests and merges, and automation, for years.
So all the changes are triggered from Git. I remember one day — this was a long, long time ago — I saw a talk by Kelsey Hightower, right? Some of you may know him. This was really early on in Kubernetes, like Kubernetes alpha, and he was talking about how Kubernetes is how you design a system when I take your SSH keys away, right? GitOps now says: not only am I taking your SSH keys away, I'm taking your kubectl away too, right? So everything is driven from Git, and that's the whole idea. So one of the things that GitOps uses — and I think one of the things that's become very popular — is a syncing tool, right? A sync tool is really just built on native Kubernetes primitives, right? It's the whole CRD, custom resource definition, concept that's built into Kubernetes — the way to extend the Kubernetes API is what these sync tools are built upon. So some of the things that a sync tool would do — the example on the right here, as you see, is Argo CD — it automatically detects drift and corrects it, right? It's built on that control loop that's just built natively into Kubernetes. It'll see desired state and it'll see current state, and it'll try to reconcile the two, right? And so some of the popular GitOps tools for syncing are things like Argo CD, ACM, Ansible, Flux CD. Those are some of the big ones that bubble up when you're looking for tools that do the syncing. So it's really nothing new — new, I mean, relative to Kubernetes, right? Basically you're taking the concept of CRDs and CRs and you build a tool around that to make sure your cluster is completely in sync, right? So once you get the CR and CRD in place, once you get your sync tool, there's a way to represent your entire stack in a manifest, right?
So for example, the example on the right here shows an Argo CD application — "application" is what they call it — and you basically tell it things like what server I wanna deploy to, what project I wanna deploy to, what the repo URL is, what the path is, what the branch or tag name is, whether I want it automated, and whether I want it to prune anything that's not in that namespace. So you can declaratively say: I have an application, I want this to be deployed to this cluster, and I want you to watch this repo, right? And the entire stack is in Git, right? All namespaces, deployments, ingress definitions, secrets, operator manifests. And usually the sync tool has a way of defining that — and this right here, again, the example is with Argo CD. So the synchronization — this is a basic workflow, right? This is not the end-result workflow, but this is conceptual so you get an idea. The real workflow is a little bit more complex than this, but the idea, from a ten-million-foot view, is you make a change in Git, right? Either someone merges a pull request or you have some sort of automation that automatically approves certain pull requests and merges them. The sync tool — either by polling or by push events, or, in the case of a sync tool like Argo CD, in the control loop that happens automatically — will then check the status, right? It'll check and see: okay, hey, the declared state says one thing and the current state says something else, so I'm gonna go ahead and synchronize, right? I'm gonna reconcile those two. So it'll change anything it needs to change, and it'll do that in the cluster. So one of the issues, right — there are certain challenges that come with GitOps, and again, I could do a whole hour, a whole couple of hours about this — but really it's to avoid YAML duplication, right?
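The fields Christian lists earlier — destination server and project, repo URL, path, branch or tag, automated sync and pruning — map onto an Argo CD Application manifest roughly like this. A sketch only: the repo URL, paths, and names below are hypothetical, not the ones from the slide:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app
  namespace: argocd
spec:
  project: default                             # which Argo CD project
  source:
    repoURL: https://github.com/example/app-manifests.git
    path: deploy                               # directory of manifests to watch
    targetRevision: main                       # branch or tag
  destination:
    server: https://kubernetes.default.svc     # which cluster
    namespace: my-app
  syncPolicy:
    automated:
      prune: true                              # delete resources removed from Git
```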
So that's one of the big things, right? I can deploy these across multiple environments, but how do I manage this? If I have like 10 environments, how do I manage this without having 10 versions of the same YAML, right? So that's one of the things you have to look out for, and the answer is to use templating tools, right? There are all kinds of templating tools where you take a core YAML file and then you templatize it, right? So you have one core file and then you may be changing, tweaking a few things in that YAML file. Some of the popular templating tools are really Kustomize and Helm. So now that we've gotten to this part — templating tools, with Helm — I'm gonna go ahead and pass it over to Andrew to talk a little bit about Helm. Awesome, thanks a lot, Christian. Let me go ahead and share my screen and we can go from there. Hopefully everyone can see the screen. So for those of you who don't know, Helm is a package manager for Kubernetes applications. For those of you who are familiar with typical package managers on your operating system — DNF, YUM, APT, Brew, which is kind of the unofficial one for macOS — Helm has become the de facto mechanism for packaging the different components of a Kubernetes application and getting it deployed to a Kubernetes environment. Helm really consists of three primary pieces. The first one is a chart. A chart really is just a set of related Kubernetes manifests. A typical application can consist of one or more different types of Kubernetes manifests — everything from a deployment, a service, maybe a config map — anything that can be deployed to a Kubernetes environment and encapsulated into a single atomic unit goes into a chart. Now, once you create these charts, where do you store them to make them more distributable? Those go into a chart repository, very much like an image repository such as Quay, for example. And finally, a release.
A release is when you install a chart, potentially from a repository. An instantiation of a chart at a specific point in time, deployed to a Kubernetes environment, is called a release. So how does Helm work? Really, it's just a combination of multiple components, all being orchestrated through a command line tool. You have your chart and associated templates, and values, which are the configuration inputs. Think of it this way: templates provide the dynamic ability to customize what your Kubernetes manifests may look like. In certain cases, you may want to customize the image location, the types of resources that you want to apply to your application, maybe a liveness or readiness probe, along with other components. Those are combined with values — the injected inputs for customizing those specific templates. So in certain cases, I want to use my dev image or my prod image, depending on what environment I have. Using the Helm command line tool, we'll go ahead and instantiate the two of those together, put them into a release, and that then gets installed to a Kubernetes environment. So here's an example of what a Helm template looks like. As you can see, a lot of the bracketed fields — those are the components that will be dynamically templatized and replaced by the values, or other types of injection, at runtime. For those of you who are familiar with the Helm context, values.build.url: when you go in and specify that you want to build from a certain URI, you either provide that as a default value within your Helm chart, or you can override that value at runtime. And for an example, a values file looks very similar to what you see here on the left: basically build.url is this GitHub URL for some quickstarts. That then gets combined at runtime using the Helm install command, where you can specify a specific file.
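As a hedged sketch of the template-plus-values pairing Andrew walks through — the chart layout and URLs below are illustrative, though the build.url key follows the slide:

```yaml
# templates/buildconfig.yaml (fragment) -- the bracketed field is
# filled in when the chart is rendered:
#   source:
#     git:
#       uri: {{ .Values.build.url }}
#
# values.yaml -- the default input that fills it:
build:
  url: https://github.com/example/quickstarts.git

# Rendered and installed together as a release; the default can be
# overridden at install time, e.g.:
#   helm install my-release ./chart --set build.url=https://example.com/repo.git
```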
So you do helm install, you give it the name of the release you want to create, you give it the location of the chart — whether it be from a repository or a local directory on your physical machine — and you optionally specify a set of values files or value inputs. And those go ahead and create the different types of resources that will be installed to your OpenShift and Kubernetes environment. Now, this is where, very much like Christian said, the peanut butter and the chocolate start coming together, because we can go ahead and extend the capabilities that Christian mentioned earlier regarding GitOps, Argo CD, and the reconciler loop, and integrate them with Helm. You can add charts that are stored in Git repositories or Helm repositories, so you can use many of the same principles that you've already leveraged, as well as including overrides for different chart values, whether they be complete files themselves or individual parameters. And we'll walk through that as part of the demonstration today. And all of this can be managed via the Argo CD user interface as well as the CLI. Okay, demo time, my favorite time of the day. What we're gonna show today is a GitOps approach for managing applications as Helm charts. We're gonna leverage the Quarkus Red Hat Helm chart — which is in alpha, we're still curating it — to deploy a Quarkus-based application to an OpenShift environment. And we're gonna explicitly demonstrate how to integrate Argo CD, to really show you how GitOps can be used to manage not only your OpenShift cluster, but Helm as well. Sound good? Awesome. I know, I'm excited, yeah. All right, give me one second. I need to unshare my screen for one second while I go find the super secret password for the cluster. What I've been waiting for. So, coming back on the screen. Sorry, what was the password? One, two, three, four. The same as my luggage. There you go. So, in the background, I've actually deployed an Argo CD environment.
No, maybe I waited too long. Go back here, go ahead to Argo CD. And we'll go ahead and log back in with the OpenShift integration. And now we have our Argo CD environment. So, those of you in the chat, who here has never used Argo CD? We're gonna walk through how to use Argo CD together, how to add a Helm chart to it, and then have it managed by GitOps. Just to browse around for those of you who are unfamiliar with it: this is the home screen for Argo CD, where you can define a set of different applications. And Christian, please feel free to interrupt me — you being the master of all things GitOps — in case you have any areas that you wanna inject yourself into. All right, so, inside the configuration page, we have a set of repositories that we can set up. Certificates are important, especially when working in typical organizations that might have self-signed or internally provided certificates; you can add those so that your GitOps server, Argo CD, can trust those destinations. Projects allow you to go ahead and configure different types of permissioning. Let's say you wanna have certain teams get access to perform certain actions in certain OpenShift projects and namespaces — you can configure that there, as well as customize the types of resources that you wanna allow to be deployed. Say you don't wanna give them access to create custom resource definitions or other specific types of resources: you can whitelist and blacklist items as well. But in particular — I actually cheated here, because I went ahead and added the GitOps Helm Quarkus application repository, which will let us get into this demo really fast, because we have a short amount of time. So, this is our sample application, which is basically a hello world for Quarkus. Quarkus is a lightweight Java framework — supersonic, subatomic Java — for spinning up a Java application very quickly, with very little runtime.
If you're familiar with Spring Boot — even faster, but using that microservices architecture for developing Java applications in the cloud. Actually, this would be the application, and it is basically a hello world. So, we're gonna demonstrate that in Argo CD. To add a Helm chart to Argo CD, very much like you would any other GitOps-based application, we'll go ahead and click on Create. We'll give it a name — the name of the repository, which is basically gitops-helm-quarkus. And of course, it took that. Go ahead and — here we go. Let's go ahead and put in the project, which is default. A default project comes, by default — no pun intended — with your Argo CD installation. Now, Christian, do you wanna talk about sync policy? Yeah, so with the sync policy, you can choose one of two things, right? There's a manual sync policy, meaning that it'll just create the definition, right? It'll basically create the instance in Argo, but not do anything — it'll wait for you to do the syncing manually each time, either via an API call or by actually clicking the button that says sync. Or automatic, right? You can do an automatic sync policy, meaning it'll continuously monitor that repository and it'll automatically apply changes as they happen. So, I'm gonna go ahead and try the automatic sync policy. The other idea here I'd like to call out real quick is prune resources, right? Prune resources means that if it finds something in the namespace you're deploying to that's not in the repo, it'll delete it, right? So you have the choice of either keeping things it doesn't know about or deleting things it doesn't know about. That's a very important thing to call out. I also told it to auto-create a namespace. So if I wanna deploy to a specific namespace that doesn't exist on the cluster, it'll go ahead and create that as well.
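The three choices Christian just walked through — automatic sync, pruning, and namespace auto-creation — correspond to the syncPolicy block on the Application spec. A sketch (selfHeal is a related option shown for completeness, not one mentioned in the talk):

```yaml
syncPolicy:
  automated:
    prune: true              # delete resources in the namespace that aren't in the repo
    selfHeal: true           # optionally revert manual drift in the cluster
  syncOptions:
    - CreateNamespace=true   # create the target namespace if it doesn't exist
```

Omitting the `automated` block gives you the manual policy: the Application is registered, but nothing is applied until you click Sync or call the API.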
So, I get a chance to pick from my configured repositories. I said I wanna use the GitOps Helm Quarkus application repository — and that's not the one I actually wanted to deploy, which is kind of strange. Let's double check that really fast. This is basically my source control. Okay, cool. What we're gonna do here is specify where my Helm chart is. And my Helm chart should be located at — let's go ahead and double check that here — hmm. You've got to love live demos, when things go wrong. Yeah, exactly, it always happens. Yeah. So, we wanna specify — where is mine, earlier? Yeah, we wanna specify where the Helm chart happens to be located. So, the Helm chart is located here in the Red Hat Developer organization. We'll go here and demonstrate that. And we specify we want the Quarkus chart. We wanna send it to the local cluster, which is — if you're familiar with Kubernetes — kubernetes.default.svc, the connection to the Kubernetes API in the local environment. And we wanna send it to a namespace with basically the same name as the application, which is gitops-helm-quarkus. And we get a chance to override these value files. Now, if you're familiar with Helm, you'll notice that there are a number of values — at least in this chart — everything from where your image is located, to the build: if you're performing a build, you can go ahead and build your application. On the deployment side, you can specify all the different parameters that you wanna have for your application. So, what we can do is either provide a file itself, or override certain values. And I'm gonna go ahead and actually override a few values. Number one, we already have an image deployed — it's already out there on quay.io — so we're just gonna go ahead and leverage it. I'm gonna turn builds to false. I'm gonna scroll down and skip all of the build stuff. Don't worry about that.
We're gonna go down all the way to the bottom, and we're gonna specify the image name and image tag. And once again, I'm gonna cheat here — just like a good cooking show — and go down and find the values that I want to leverage. And if you're familiar with Kustomize, I am using Kustomize here, which we'll show a little bit later on, to automate this entire process. So, Christian, do you wanna talk about Kustomize for a second while I go ahead and finish this up? Yeah, so Kustomize is another way to template Kubernetes resources, right? The idea is that you have a single deployment manifest — let's just take the simple example of a deployment manifest — and you wanna change certain things depending on which environment it's in, right? The most common use case is the image. The image that you deploy in development is different from the one you deploy in production. The only difference is the image, but the manifest is exactly the same. So with Kustomize you can say, hey, when you deploy to this cluster, use this image; when you deploy to the other cluster, use the other image, right? And it's done using JSON patching, among others — there are a lot of ways to patch with Kustomize, and you could do another hour on just Kustomize — but that's the idea. So, all we had to do here was modify two override values. We modified the image name, basically just pointing to an image out there on quay.io, and we set build.enabled to false. And we're gonna go ahead and click on Create, and Argo CD is going to create this application. And if you notice, it is deploying a set of resources. It's gonna set up a service, a deployment, as well as a replica set and a pod. This is just an instantiation of our chart. Now, one of the benefits is you can play with this chart right now. I'm using the default chart, and if you go over to the developer perspective in OpenShift, you'll find this chart.
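The per-environment image swap Christian describes with Kustomize is typically expressed as an overlay. A minimal sketch — the base path and image names here are hypothetical:

```yaml
# overlays/prod/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base                            # the shared deployment manifest
images:
  - name: quay.io/example/quarkus-app     # image referenced in the base
    newTag: prod-1.2.3                    # tag to use for this environment
```

Running `oc apply -k overlays/prod` renders the base with the production image; a dev overlay would differ only in `newTag`.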
And if you don't see it — if you go to Add, there's this brand new section; let's go ahead and just click on this to show it. There's an entire Helm chart section of the OpenShift developer perspective's Add page. You can scroll down all the way to the bottom, and you'll see there's a brand new Helm chart. This is the same one that I'm using in this demo right now, so you can go ahead and give it a try in your environment. Obviously it's still alpha, so it's still a work in progress. As you see, Argo CD was able to synchronize this application, and we have a route that was created, so we can access it externally. Argo CD makes it really easy for you to query different resources on the cluster itself, all from a single interface, very much like the OpenShift web console. If we scroll all the way to the bottom, we can then see the URL of the application. Go ahead, copy it, open up a brand new tab, and you'll see we have — oh, GitOps loves Helm, just like that. It's great, but this doesn't really — it gets us there: we got deployed, we showcased how a Helm chart can be used with Argo CD, but we really didn't emphasize the GitOps-based approach. I know Christian's like, this is nice, but we can do better, right? That's right, that's right. We can do — we have the technology. We have the technology. So inside this example application, on the main branch is the application itself — basically the hello world of Quarkus basics; if you wanna go ahead and build and deploy the application yourself on your local machine, you can do that — but there's a gitops branch that contains all the different manifests that you can use to not only spin up this demo in your environment, but which we're gonna leverage to manage our Argo CD manifests right now. So let's go ahead and basically go back to our applications page.
We have one application — we'll just flip over to the list view — and we're gonna use Kustomize to basically stand up the entire environment. Sound good? Let's do it. All right, so on here, I have a set of manifests and I have this bootstrap application. A bootstrap application is what they call, in the Argo CD world, an app of apps — basically an app that can deploy other apps. So I'm gonna use Kustomize locally to create an app of apps that will deploy and manage the application we created previously, but it's also gonna create another application that will deploy the same application, slightly differently, with different values, for the production environment. Because with Helm, it's very easy to change the small parameters that really drive the configuration of your application. Let's go ahead and spin that up. So I'm gonna do oc apply -k, which basically allows us to apply a Kustomize instantiation against the Argo CD namespace, and it's gonna go ahead and templatize all of those Kustomize resources. And as you see here, we have two new applications that were created. One is this bootstrap application — the app of apps — that basically includes two applications. One is the application that we created previously, the gitops-helm-quarkus app, but we have this brand new one, gitops-helm-quarkus-prod, that represents our production application. We're using the same chart; we're just providing different values. And we'll walk through that right now. So if we click on that — sorry, go click on that — you'll see this is basically very similar. Everything's been synced, everything's all green, everything's working great. Look at the application details and scroll over to parameters: you'll see we actually have a different parameter here. Under deploy.env.name, we have a new environment variable called environment with a value of prod.
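An app of apps like the bootstrap application Andrew applies is itself just an Argo CD Application whose source directory contains other Application manifests. A hypothetical sketch — the repo URL, branch, and paths are illustrative:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: bootstrap
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/gitops-helm-quarkus.git
    targetRevision: gitops          # branch holding the GitOps manifests
    path: argocd/apps               # directory of child Application manifests
  destination:
    server: https://kubernetes.default.svc
    namespace: argocd               # the children are Applications, so they land in argocd
  syncPolicy:
    automated:
      prune: true
```

Syncing this one Application causes Argo CD to create and manage the child Applications — which is how it solves the chicken-and-egg problem Christian mentions.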
Our application will go ahead and look for that environment variable and change how it reacts. So if you recall, in our development environment — our non-production environment — we have a little blue background. Let's go ahead and look at the route that's exposed via this production application. We go here to the route, wait for the manifest to load, go all the way to the bottom: we have a different URL representing our production environment. We'll go ahead and open a new tab. You'll see we have our same application, but with a different background that represents our production application. So as you saw, it's really easy for us to manage our Helm-based installation and configuration all through Argo CD, have it roll out, and have it all be managed in a GitOps-based approach. Christian, anything that you wanna add? I just wanna add one quick thing you mentioned that I think is worth calling out. As you mentioned, you have an application called an app of apps. This is something that a lot of us do in the GitOps world — we do an app of apps, meaning we try to solve the chicken-and-egg problem, right? It's like, how do I deploy my application automatically without manually deploying it, right? So there's this concept of an application that does nothing but deploy your applications. So we truly try to solve that chicken-and-egg problem with the app of apps. I'd like to call that out quickly, because it's something I do a lot as well, the app of apps. Yeah. So this really is the end of the demo. I know I'm interested in your feedback, and especially the feedback of others here on the call — Karina, Siamak. You know, I'm gonna probably turn it over to — no, sorry, Karina, pardon me. Karina, do you have any areas that you wanna start addressing as we start moving towards the panel discussion?
Well, we did have some questions in the chat, so I wanted to make sure that we got those answered, and CMAC has been awesome in answering those. So let's just bring that out. So first we have: how are data migrations handled in the event of a rollback with GitOps? CMAC, do you wanna dive into that one? Sure. Actually, I'd leave it open for multiple people to answer, but generally, I think at a high level, conceptually, you shouldn't see GitOps as something that addresses all the problems around deployment. It focuses really on a single thing and solves that really, really well. It focuses on Git-driven workflows for everything, not just when you're developing code, but also for driving your operations. That's the single problem that it solves, but as a consequence of that, you get a lot of visibility and auditability and traceability because of the Git provider. That's what it focuses on. When it comes particularly to rollback and switching between versions, it doesn't really do anything for your data automatically. It still relies on the application teams and the operators to be aware of their architectures and the way that they use Argo CD, or the GitOps process, to be aware of the changes in data and work that into the process. That said, Argo CD gives you a couple of tools to work with. For example, it was mentioned in the chat, by Christian perhaps, or Shavik, about the hooks. There are hooks in Argo CD where you can say: before you sync, perform these operations, or after you sync, do this other operation, which could be used, for example, for backing up the schema, or restoring it, and things like that.
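The sync hooks mentioned here are ordinary Kubernetes resources annotated so that Argo CD runs them at a particular phase of the sync. A PreSync Job that backs up a schema before a release might look roughly like this — the image and command are placeholders, not from the demo:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: schema-backup             # hypothetical name
  annotations:
    argocd.argoproj.io/hook: PreSync                     # run before the sync applies manifests
    argocd.argoproj.io/hook-delete-policy: HookSucceeded # clean up the Job once it succeeds
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: backup
          image: registry.example.com/db-tools:latest    # placeholder image
          command: ["sh", "-c", "pg_dump mydb > /backup/pre-sync.sql"]  # placeholder command
```

A PostSync hook works the same way with `argocd.argoproj.io/hook: PostSync`, which is where a restore or smoke test would go.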
And there are also a lot of discussions around, regardless of what kind of process — and this is not a question related to GitOps really, it's related to any type of deployment — how do we manage rollback when it comes to data, regardless of how you perform the rollback? So there are a lot of conversations and guidelines also on the application architecture: how the application itself could be more resilient to changes in the database schema, for example, so that from one version to another it doesn't immediately break if the schema doesn't match the expectation of the application. For the database, for example, there's also lifecycle management, so that multiple schemas are supported at the same time for a single version of the application. So to summarize it: it doesn't give you a silver bullet for what to do with your data when you're rolling back, but it gives you a little bit more flexibility if you want it, and the responsibility is still on the teams and the operators to use those tools and plan for how to manage the data changes on their end. Thanks, CMAC. Yeah, I'd just like to add a little bit to what CMAC was saying: the tools are out there to do the things that you need to do, right? There's hooks. There's actually a good demo by Intuit that uses hooks to put up an under-construction page while they do some maintenance, and once the maintenance finishes, it deletes that page, right? Kind of showing the idea of A/B deployments and schema changes and things like that. So it's really heavily dependent on your process and how you leverage the tools to do that process. So we also had other questions in the chat. Let me find the next one. I love that we have a lot of chat going on. All right.
I guess for HA purposes, is it better to have Argo CD on an external cluster from OpenShift? I don't know who wants to take this one. CMAC? I can talk about it a little bit. So if you look at the role of Argo CD in this process, it is really in charge of your deployment, to make sure that what you have described in your Git repo is what you are seeing on your cluster. So what happens if Argo CD is down? Let's say you completely remove it, undeploy it altogether. The only thing that happens is that you cannot do any new deployments through this GitOps process. So no new releases will be rolled out, right? I just want to put in perspective the criticality of the role of Argo CD for the application itself. For the applications that are deployed and running, nothing is really happening to them, which, if you think about it, when not using a GitOps process, is essentially what you're doing today, right? You deploy the application and you're done for the day till the next time you want to deploy your application. In between two releases to production, there is nothing else happening around deployment. It is as if, in the GitOps world, your GitOps engine is nonexistent between those two releases. So from the perspective of a lot of the customers that I've worked with, it hasn't been critical enough to buy into the overhead of running HA for Argo CD in multiple instances, or having separate management clusters for Argo CD. For the high availability that you gain, there are management and operational costs to pay. We haven't seen many of those instances, but if that is needed, it is possible. So that is definitely one approach: to externalize it from the cluster that your application is running on. There are other approaches, around HA of the controllers themselves, to be discussed. There are ways to achieve that, but I haven't seen it very often.
I don't know if Christian or the others on the call have seen more use cases around this with customers. Most of my customers are less concerned about that aspect, because they find the benefits that come with GitOps and Argo CD outweigh the risks, because this is actually providing more uptime and availability than what they had previously. I hope that answered your question. Yeah, I'm learning stuff too about Argo CD. All right, so: where and how should passwords for different environments be stored? I think Shavik answered this. Andrew, you're welcome too. Go ahead, Christian. Yeah, so there's really two schools of thought in terms of secrets. For secrets in general, you either use something like Sealed Secrets, where you're encrypting the secret before you put it in Git, or you use something like Vault, right? You use some sort of secret management system, right? They both have their pros and cons, and there's still kind of a discussion going on in the community in terms of what is actually GitOps. Is using Vault actually GitOps? We don't know. Or is Sealed Secrets still the way to go? So there's two methods of doing it, right? I don't think either method is right or wrong. It's really what works for you. I like using Sealed Secrets, because I'm more of an everything-in-Git person, right? I'm trying to push myself towards everything in Git as much as I can. But yeah, for secret management and storing secrets, encryption really is your friend there, especially if you're putting secrets in your Git repository in your environment. Often you have your own personal Git repo inside the environment, right? So it's behind the firewall, behind multiple layers of security. So you're just adding one more layer of encryption on top of that. You wanna add something, Shavik? No, I think we are good. I think we've answered and we've covered everything on it. Thank you. All right.
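With the Sealed Secrets approach described here, you encrypt a Secret with the `kubeseal` CLI and commit only the resulting SealedSecret resource; the controller running in the cluster decrypts it back into a regular Secret. A sketch of what lands in Git — the name, namespace, and ciphertext below are illustrative, not real output:

```yaml
apiVersion: bitnami.com/v1alpha1
kind: SealedSecret
metadata:
  name: db-creds                 # hypothetical name
  namespace: myapp               # hypothetical namespace
spec:
  encryptedData:
    password: AgB3k1x...         # ciphertext produced by kubeseal; safe to commit to Git
```

Only the controller's private key can decrypt this, which is what makes it acceptable to store in a repository even outside the firewall.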
And there is a request for the link to the Intuit demo. Does anybody have that Intuit demo handy? Yeah, I have it handy. I put it in the Twitch chat, but I forgot to put it here. I'll put it in the chat here. Awesome. Thank you. Okay. Let's see. Can you share the repo with the Kustomize and Helm charts that you were deploying from? Another request for you, Andrew? I can. I'll put that in the chat in a second. Awesome. And let's see. All right. I'm trying to scroll through all the chats, which is awesome. Hey, Carlos — will Argo CD still apply, or would something like Flux CD fit here? Yeah. I think the question by Carlos was something around whether we can use GitOps for cluster state, and I answered that. The answer is yes. It's very common to maintain cluster configuration in Git. And for the larger cluster state, as long as you can describe it using manifests, that's how you would do it. For that, you could do it with Argo CD or Flux; both work equally well for it. Thanks. And let's see. Carlos also asked: you guys covered GitOps for keeping the app state, but what about keeping the state of the cluster also in Git? Yeah. So I know Christian has talked about that and shown it on his GitOps happy hour with Chris Short a lot. So definitely take a look at that if you haven't. I believe you're doing one on storage this week, right? Yeah, this Thursday. Shameless plug: GitOps happy hour, we're talking about storage and GitOps, this Thursday at 3 p.m. Eastern time. Hot topic. All right, 3 p.m. Eastern. That's awesome. All right. And: were you using a Helm chart stored as files in Git, or a Helm chart released in a Helm repo? So I did answer this one. I was using a Git-based repository, but one of the benefits of the Red Hat Helm charts is that they are stored in a chart repository itself. So I could have easily used that option as well; I just chose to use Git.
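For reference, pointing an Argo CD Application at a chart repository instead of Git just changes the source stanza: you give it the repository URL, a chart name, and a chart version as the target revision. A sketch, where the chart name and version are assumptions:

```yaml
spec:
  source:
    repoURL: https://charts.openshift.io   # Red Hat Helm chart repository
    chart: my-chart                        # chart name in the repo (assumption)
    targetRevision: 1.2.3                  # a chart version rather than a Git ref
```

Everything downstream — rendering, diffing, syncing — works the same either way; only where Argo CD fetches the chart from changes.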
And: should manifest files be stored in a separate repo, apart from the application code repo? And should manifest files also be stored separately from the application configuration files, which differ per environment? Who wants to take this? So I definitely would keep them separate. You could — it really depends. At least putting them in a different branch is key, and at least have a different release cycle. I've seen my customers do almost every single setup you could think of. There is a long conversation about whether you should use a single flat repository with different folders, or different branches. So that's certainly a big question, and one for discussion in the whole GitOps space. Yeah, that's definitely something that I don't think has been technically solved yet, but I am a fan of different repos for different things, because they have different life cycles. And LC asks: mono repo or separate repos? I think you already answered that, right? Definitely separate. I typically do separate, but just for demonstration purposes, I threw it all into one for this example. All right: could you please double-click on configuring event triggers that activate your GitOps pipelines? For example, when someone changes an application deployment descriptor YAML and checks it into a GitHub repo, what's the underlying pipeline tooling? So what you can do — it really depends. You can use webhooks in your Git tool to invoke an Argo CD sync, or Argo CD will just natively poll for the change on a regular basis. It depends on how fast you want the change to be applied and rolled out. That's it for the questions in the chat. That's awesome. All right, so we have a few more minutes, if anybody has any last-minute questions. So given all the questions that have been asked, between all of you, are there any other things that you want people to take away from this discussion today?
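On the polling side, Argo CD's default is to re-check the repository roughly every three minutes; that interval is configurable via the `timeout.reconciliation` key in the `argocd-cm` ConfigMap, and for faster reaction a Git webhook can be pointed at Argo CD's `/api/webhook` endpoint instead. A sketch of the polling setting — the namespace is an assumption:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cm
  namespace: openshift-gitops   # assumption: wherever Argo CD is installed
data:
  timeout.reconciliation: 180s  # how often Argo CD polls Git; 3 minutes is the default
```

The webhook route is usually the better fit when you want a commit to roll out within seconds rather than at the next poll.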
Because I think we have a lot of follow-on presentations and demos and everything after this one. Deeper dives. All right, Sam has maybe a spoiler question: what is the state of ACM versus Argo CD? I don't think that's a versus. CMAC, do you want to take that one? Yes, sure. So it is exactly like I said: these are two tools that sit really well together if you want to apply GitOps across all of your infrastructure, right? So we talk about infrastructure-as-code practices, and we talk about Argo CD in the context of CI/CD a lot. It lands on the application side with a little touch of infrastructure, because Kubernetes is a declarative platform, and obviously it takes that to the extreme with being able to configure everything through YAMLs that you put in a Git repo. But it doesn't really go lower than that. So if I want to provision a cluster, we usually don't talk about that: how do I go from nothing to a cluster on Azure and then roll out my application to it, right? Or if I want to also enforce that once the cluster is up on Azure, certain operators are installed — or, the other way around, certain operators are not installed — and that there is enforcement of policies on it. So that's the power that ACM brings to the game in combination with Argo CD: it expands the GitOps story, or GitOps capabilities, beyond just the OpenShift infrastructure and all the layers on top, and pushes it all the way down to cluster provisioning and policy governance and aspects like that. And Argo CD is in fact going to be a component within ACM as well, driving the GitOps process there, and it really marries well with those provisioning and policy governance capabilities of ACM. So this is something that you will see roll out gradually over the next six months in ACM as well. And we have another question: with multiple repos comes the issue of versioning — app version, config version, manifest version. How do we handle versioning?
Will each of these get their own version numbers? So you would definitely want to, because just think about it: your application may have a different life cycle than your configuration. Your application may be static for years, but your database password or configurations may change. Let's say you get a brand new image repository location. Your application may never change, but the configuration will. So those life cycles should be managed separately. Great question. All right. Well, we have two minutes to go. So Freddie, I know you dropped in a link there. Do you have a question about it? Okay, Christian already answered. Yeah. Can you explain for the people who can't see the chat? Yeah. So for the people who can't see it, just watching the recording: they're asking about Argo and Flux joining forces. That was an idea they had a while back, and they decided to go separate ways, right? They had different goals, so they decided to work on different approaches to that goal. I can quickly add, in general, what that means for the community and customers and the industry. So we have a working group where we are all participating together to ensure that on the basic GitOps principles, the best practices, we all have general agreement. So irrespective of which tool you use, your paradigm shouldn't shift. It should still be similar to the typical GitOps principles, or the DevOps principles, that we're talking about. Thanks, Shavik. All right. So Omar, thanks for this next question. All right. So: promotion from dev to prod, for example, and approval gates — best practices? Would somebody like to talk about best practices for promoting? Yeah, that's part of the CI process really, more than anything else. Argo CD just does the CD aspect of it; approval gates and going from dev to prod, that's all done in the CI process, right?
So Jenkins did a really good job of globbing them together — that's why it was CI/CD, right? But really, now that we're separating the responsibilities, your CI process does a lot of that gating and that release process, and the sync tool, in this case Argo CD, is only responsible for actually doing the change that it was told to do. So the approval gates will definitely still be in your normal CI process; the CD process is further downstream. Thank you. We definitely have a lot of follow-up stuff and more sessions, and I'm loving this. So we are out of time, though. Thank you so much, Christian, CMAC, and Andrew, for the presentation and demo and the great conversation, all the Q&A, and lots of great chat. So thanks again, and for everybody else, thanks for joining us and for all your questions, and we will see you soon. Thanks again for joining us in this OpenShift Commons. Thanks everyone. Thank you.