Welcome everyone. It's Michelle Dupama here, your host for the Data Services Office Hour, and I am here with Gaurav Mitha, and he's going to talk about some really fascinating stuff. So how are you this morning? Oh, your evening. How are you? I'm good, how are you? I'm good too. Good, so can you give us a short introduction about what you do, and then let's talk about what we're going to get into today. Yeah, sure. So I'm working as a principal architect in the Data Foundation team. My primary role is to work on multi-cloud and data foundation related items. Okay, so what are we going to talk about today? Today we're going to talk about multi-cloud GitOps. Okay, so what do you mean when you, well, I guess you're going to get into it a little bit. I was going to say, what do you mean when you say multi-cloud GitOps? But let's see what you're going to talk about. So GitOps is basically infrastructure as code, and we'll be using GitOps to deploy and manage different clouds: AWS, Azure, GCP, IBM. So we'll talk about those clouds and how we can manage the deployments as well as the environments in those clouds. Fantastic, okay. So do you want to share your screen? Do you have any slides? Yeah, sure. Please let me know once you're able to see the screen. Okay, I can see your screen. The slides, right? Yes. Okay.

So today we're going to talk about the solution pattern, multi-cloud GitOps, and I'm pretty excited about this topic. In this session, we'll cover these things: what is GitOps, what is multi-cloud, what is multi-cloud GitOps, the GitOps application delivery model, then the architecture of multi-cloud management with GitOps and what the flow looks like within multi-cloud management with GitOps. And finally, we will have a demo.

So we'll start with: what is GitOps? GitOps is a simple, consistent and secure approach for managing environments and applications. The core idea behind GitOps is that we have one desired state defined in the Git repo, and we have a GitOps controller running in OpenShift or Kubernetes. That GitOps controller will be watching the Git repo for any changes or anything new coming up, and the GitOps controller will ensure that the desired state which we define in the Git repo is applied on the cluster, so that the desired state and the final state are exactly the same. And if we want to deploy or update any application, we just modify the Git repo, and the GitOps controller will automatically deploy the new application which we just updated in the Git repo. That is what GitOps is: it's basically defining the infrastructure as code in the Git repo and then the controller managing it. Here in the diagram below, the green box is the desired state and we are putting it in the Git repo. And from there, we have the GitOps controller running in the Kubernetes or OpenShift environment. It will read the configuration from the Git repo and it will apply that state onto the Kubernetes or OpenShift environment. That is what GitOps is. So you get all of the advantages of versioning and control that you get with normal Git workflows. And then anyone who's run large clusters, or many clusters — this also helps you troubleshoot, because you know exactly when something happened, you know exactly what was applied, et cetera, et cetera. So okay, fantastic, yep. Yes, yes. We'll have the history, like who made what change, and all those things can be tracked.
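For anyone who wants to see what that hookup looks like in practice, here is a minimal sketch of how a GitOps controller — Argo CD in this case — gets pointed at the desired state in Git. The repo URL, path, and namespaces are illustrative placeholders, not the ones from this session:

```
cat <<'EOF' | oc apply -f -
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: sample-app
  namespace: openshift-gitops
spec:
  project: default
  source:
    repoURL: https://github.com/example-org/config-repo.git   # desired state lives here
    targetRevision: main
    path: charts/sample-app
  destination:
    server: https://kubernetes.default.svc   # the cluster being reconciled
    namespace: sample-app
  syncPolicy:
    automated:
      prune: true      # delete resources that are removed from Git
      selfHeal: true   # revert manual drift back to the Git-defined state
EOF
```

With automated sync enabled, editing the repo is the only action needed; the controller notices the change and converges the cluster to match.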
So we are applying all the best DevOps practices with GitOps. So moving to the next slide: what is multi-cloud? An organization which is using multiple clouds — Google Cloud, AWS, Azure or IBM — for deploying or managing their workloads is using multi-cloud. So here in the diagram below, let's say AWS has the best compute pricing; the organization will be using AWS for the compute services. If GCP is offering the best pricing for storage, then the organization will be using Google Cloud for that. So depending upon the different benefits the cloud providers offer, the organization will be using different clouds. Now, how is multi-cloud different from hybrid cloud? Hybrid cloud is where we have one private or on-premise data center, and in addition to that, we have one or more public clouds. A multi-cloud can be a hybrid cloud, and a hybrid cloud can also be a multi-cloud, but a multi-cloud may or may not have a private cloud inside it. Right, it could all be public. It could be any combination. Okay.

Okay, so moving on to the next slide. So what is multi-cloud GitOps? Organizations are looking for an approach where they can use a simple, secure and consistent way to deploy their applications on different clouds — on a multi-cloud setup, how they can deploy using one particular tool, how they can manage their deployments to the different clouds, public or private. That is what multi-cloud GitOps is. And here in our architecture, we'll be using three things. The first one is GitOps. Talking about GitOps, we'll be using Argo CD to manage deployments in the public and private clouds. The second is Red Hat ACM. Red Hat ACM will be used to manage the multiple clusters; we'll have one unified console and we'll be able to manage the clusters from there. We can also deploy applications from Red Hat ACM. And the third is HashiCorp Vault, where we'll store our secrets, and using the external secrets operator and Red Hat ACM we'll basically deploy those secrets to the different clusters. And when I say clusters, it's basically clusters which are deployed in the different clouds. So those are the three main items. And if you're looking for more detailed information, I've put the link here. So there's a website, https://hybridcloudpatterns.io, and multi-cloud GitOps is the topic where you can find more information about it. Okay, will you go into the architecture and layout of ACM above the other clusters? Okay, so I'll save my questions. Sure.

Okay, so moving to the next slide. Right now we'll talk about the GitOps application delivery model: how does it happen, how does GitOps work? So on the top left, we have the source code repo. It's the application repo where an engineer or a developer will commit the code. And as soon as they commit the code, the continuous integration tool will kick in. This particular tool will be watching the repo, and it will build an artifact. The artifact will be a container or Docker image, and that image will be stored in an image registry. And once this continuous integration process is finished, that process will create a pull request to another repo. The config code repo is the repo where we are storing the infrastructure as code, and the pull request will update the image version which was pushed to the image registry.
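As a rough illustration of what that CI-generated pull request typically contains — nothing but an image tag bump in the config repo's Helm values — with the repo, file, and tag names made up for the example:

```
# clone the config repo and branch for the change (illustrative names throughout)
git clone https://github.com/example-org/config-repo.git
cd config-repo
git checkout -b bump-checkout-image

# the only edit: point the checkout chart at the image the CI pipeline just pushed
# (before: tag: "1.4.2" in charts/checkout/values.yaml)
sed -i 's/tag: "1.4.2"/tag: "1.4.3"/' charts/checkout/values.yaml

git commit -am "checkout: bump image to 1.4.3"
git push origin bump-checkout-image   # CI opens the pull request from this branch
```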
And once this pull request is approved and merged, then the continuous delivery comes into place. So the controllers which are running in OpenShift will pull the changes from the repo and update the application. The application version will be pulled from the image registry — the latest one which was just built — and it will be deployed into OpenShift. Okay, so two questions. So the two pieces are coming together: your approved config code repo, that has to happen, and then you have to see if there's a new image that matches that; they're both used in the delivery part. Is that accurate? If there's a new image, or — yeah, maybe we need to see it when you do your demo, it'll come together a little bit better for me. But I'm seeing, so, your config code travels a different path, that's what I'm saying. And then the other path is you've got your image, and you've got your config code over here, you put them together and you deliver. Sure. Is that always in OpenShift? That's correct. Okay, okay. So we have different applications. Let's say one application is for checkout, and one application is for browsing. For each application, we will have a separate source code repo where the application code resides, and each application will have its own image in the image registry. And we may have one common repo which deploys all these applications — that is the config code repo. I see, okay. Yeah. And for any number of applications, we can just keep one repo and update that particular repo, and then the OpenShift controller will pull the changes and update the application using the image from the image registry for that particular application. Is that image registry typically the internal OpenShift one, or do people — what have you seen? Just curious, is that inside the cluster? Is it outside? It depends upon the customer. So we do have Quay, where we store all the images, and it can run in OpenShift itself. And we also have public registries being used, like Docker Hub, where people keep their images.

So moving to the next slide. This is the architecture of multi-cloud management with GitOps. In the center, we have the management hub. In the management hub — we call it the management hub or the hub cluster — we will have three components deployed. The first one is cluster management: it's Red Hat ACM, and Red Hat ACM will be managing the clusters which get deployed. These clusters can be in a data center or a public cloud. The second item we have here is the GitOps system. The GitOps system is Argo CD, which will deploy the infrastructure as code onto the clusters. And the third is the secure Vault. We are using HashiCorp Vault for storing our secrets, and these secrets will be populated across the managed clusters using Red Hat ACM. So on the right, we have the managed clusters, and in the clusters we'll have different applications. These applications can scale depending upon the requirements. And the secrets will be accessible to these applications in the managed clusters; these secrets are being copied or created from HashiCorp Vault into their particular clusters. In the demo that you're gonna show us, how do you make the management hub HA? Like what's your — I'm sure the demo has it maybe as a single point of failure, but is there a discussion around how do you make sure your management hub never goes down? How do you position it?
Like where does it sit, in all your multiple clouds, in your data center — that might be another show. Yeah, sure. So right now I'm using three master and three worker nodes for my management hub, so it's an HA setup; I'll show the details. And the client here, the outside end user, will be able to access these applications which are deployed in the managed clusters. So what are the driving factors for this architecture? We get one console where we can manage the cloud environments, with Red Hat ACM. We have centralized secrets security, where we are using HashiCorp Vault for storing the secrets. And the third thing, which comes with the GitOps system, is that we are using the best practices for continuous delivery. Awesome. Thank you.

So moving to the next slide. This slide is what the flow will look like. I'll just talk through it and then I'll turn to the demo. So here on the left side, we have the infra team and the dev team, or we can also have an automation which can update the Git repo. Here in our scenario, the Git repo contains Helm charts. The first step is to add or modify the code. And then we have one cluster — we call it the hub cluster. Right now it's just a cluster with OpenShift deployed in it. And when we run the make commands from this particular repo, what will happen is Argo CD will be deployed in this particular project, openshift-gitops. And the job of this Argo CD is to deploy a few of the applications as well as one more Argo CD in the multi-cloud GitOps hub project. And here we can see we have the second instance of Argo CD. This Argo CD will install the applications HashiCorp Vault and Red Hat ACM. So here we have one box which shows we have Red Hat ACM, the applications, and HashiCorp Vault. And Red Hat ACM can be used to manage and import different clusters. Once the clusters are imported or created from Red Hat ACM, those clusters will be shown here under managed clusters. These clusters can reside with any of the cloud providers: AWS, Microsoft Azure, GCP, IBM Cloud. And once ACM starts managing these clusters, it will deploy an Argo CD instance in each of the clusters. So in three clusters, we have three different Argo CDs, and each Argo CD will be responsible for deploying the applications in that particular cluster. Can I ask you a question? I'm just curious from the GitOps point of view — in your hub cluster, Argo CD is being installed, and in turn it's gonna install ACM and the applications and HashiCorp Vault. Why wouldn't you leave ACM there? Like, do you need to — are you just showing this because it's fresh and new? And then the next time we go at it, the hub cluster will already be set up. Do we always redeploy that piece, redeploy Argo CD, put it in the multi-cloud GitOps space? Or do we leave it there and then just fire off jobs as needed? So deploying the ACM is just done one time. One time, okay, I'm checking, because every time I see it, it's being deployed, and I'm like, you can't possibly mean we redeploy it every time — okay, thank you for clearing that up, I'm just kidding. It's just one time. On the show we start from scratch, so that's why I always see it, okay. So it's just one time, yeah. So once we have set up the hub cluster, it will have ACM installed. And once ACM is there, we can add as many clusters as we want to manage. So I'll move to the demo then. Okay, I was gonna say, we can see it in action. Okay. So in my current setup, this is my hub cluster.
It's just a plain vanilla OpenShift cluster, and this is my domain, datamulti-cloud.com. And I have deployed this cluster in AWS. So this is the cluster, and we have three master and three worker nodes deployed. And I'm using a jump box. This is not ROSA or OSD — it isn't, no. This is not a managed service. Okay, okay. Yes. I created it just before the demo. And then we have another cluster, which is sitting in Azure. Perfect. And we'll be using the AWS one as the hub, and then we will manage the other cluster, the Azure one. Okay. So I'll just switch to the terminal and show the window. So here on the top side, I'll split the terminal into two panes. The first one, the hub one, is for AWS, and the second, the one below, is for Microsoft Azure. Okay. And I'm getting just the projects right now, and all the projects are there. And here, we don't see any GitOps deployed in this particular OpenShift. And if I go to Operators and click on Installed Operators, right now I just have this package server running. So brand new cluster, fresh. Yes, brand new cluster. And right now I'm using one of the repos, and this is the public repo, multicloud-gitops. I forked it from this particular hybrid cloud patterns multicloud-gitops repo. And here in this repo, I'm going to use the make command to do the installation of Argo CD, and then Argo CD will take care of further deployments of the different components. So this make command, excuse me, kicks off what should happen in the hub cluster, which is the first step, because we're starting fresh, and then everything else happens. Okay, let's check. Yes, yes. So I'll just run make show now to show what all components will deploy. So I ran make show and it shows me what components are getting deployed. First, the Helm template is getting rendered, and it's going to use my repo, the multicloud-gitops one, and it's going to use the develop branch. Once that is done, the second thing it's doing is creating the namespace for OpenShift GitOps. Then the third one is the application — the Argo application, and the name will be multi-cloud GitOps hub — which will be deployed in this hub cluster. In Red Hat ACM terms, the in-cluster is the hub cluster, and it points out which repo will be used, which branch will be used and, under this particular repo, which folder will be used for the deployment. So just a quick question: in-cluster in this case means the hub cluster, and then I assume there's an out-cluster later on, and then we define which cloud that would be, or we'll see that later — I don't want to jump ahead. For the managed one, I'm using the Microsoft Azure one. Okay. So I'll come to that part. Okay, okay. And here right now it's just creating the Argo application with different parameters. And then the last one is the subscription, where it's applying the OpenShift GitOps operator subscription in the namespace. And these are the details which are important for the subscription: it's using the stable channel and it will install it from the OpenShift marketplace. Okay. So this is the make show, and what I'll do now is just run the make install.
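For reference, the bootstrap the demo walks through boils down to a handful of commands, assuming you have forked the hybrid-cloud-patterns multicloud-gitops repo and are logged in to the hub cluster as a cluster admin (the fork URL and values file names here are illustrative):

```
oc whoami --show-server          # make sure you are pointed at the hub cluster

git clone https://github.com/<your-fork>/multicloud-gitops.git
cd multicloud-gitops
git checkout develop             # the branch the Argo application will track

# adjust site-specific settings (domain, custom parameters) in the fork's values
# files before installing, e.g. values-global.yaml / values-hub.yaml

make show      # render what will be applied: namespace, Argo application, operator subscription
make install   # install OpenShift GitOps and hand the rest of the rollout to Argo CD
```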
So for people watching, this is your bootstrap, right? To get the hub cluster up and where it needs to be, you obviously have to know where your hub cluster is, and it's running OpenShift and you have access to it, but you'd also have to have your Git repo set up, right? So you pull this down, you fork it off, you change the repo, you change what branch you want it to be from — like you said, have it on develop — if you want to change something else. I just want to help people along: if you're gonna try this out yourselves, you're gonna have to have two things. You're gonna have to fork this repository, and you're gonna have to have a hub cluster that you can mess around with. Okay, so go ahead, sorry — make install. So I started the make install, so it's going through the Helm installation. Once that is done, it will go and start the vault init. So here, let me go back to Operators. So earlier, we had just the one package server, and if I refresh this, we'll see the OpenShift GitOps coming up. Give it a second. There we go. So Red Hat OpenShift GitOps was installed just now, and then we'll have an Argo CD which will come up shortly. And here, it's running the vault commands; it's initializing. Yeah, okay. So let's wait for one minute and then we can see. It takes some moments. Live demo. Yes. If this part were a pre-recorded video, it'd be too quick, but these things actually take a little bit of time. So that's good, no problems. Okay. Is there anything for the hub cluster, for the make install, for anything you're doing — anything else that people would want to modify? Right? I mean, you know where your hub cluster is gonna go. What are the other considerations if you had other policies? There are files, values.yaml files, so we'll have to modify the values in there. The DNS value will change, and if there are any custom parameters, those values can be changed in these files. Okay. Let's see, did it come back yet? Nope, still going. Do you wanna talk about some of the other questions we had, which I think you're gonna answer in a little bit? While we're waiting, we can talk about it. So this is multi-cloud as opposed to — you and I talked about multi-tenancy. Do you wanna talk about the difference? Like, how would you — I mean, clearly that work happens after the bootstrap, right? But where do you go to define that? What's the approach and the thought behind it, if you wanted to run, in addition to multi-cloud, multi-tenancy? So yeah, sure. Multi-tenant is basically, let's say we have one AWS account, and in that AWS account different teams are using different VPCs. That is multi-tenant: the same space is being rented out to different tenants for their applications. And we can use one AWS account to host multiple OpenShift clusters, and those OpenShift clusters can be managed by Red Hat ACM and through GitOps. So that's one AWS account, for example, with multiple clusters, managed through Red Hat ACM. Are there any other configurations? So we're not talking — so the tenancy there, okay — so we're not saying you have one big large OpenShift cluster that you're managing with multiple tenants, you don't separate out tenants that way. Okay, all right. Yeah, so right now, depending upon the team size, there may be a requirement where one account is being split across multiple teams.
And then another configuration can be one team using one particular account for their workloads. Okay, so you can certainly have more than one AWS account if needed, if you grew or for whatever reason you need to separate — okay, okay. Yes. And that's not just for AWS, that would be whatever you're running on. Yes, yes. So if multiple teams are using the same AWS account, then we can manage all those clusters in that one AWS account through GitOps and Red Hat ACM. Okay, okay. All right, it seems to be all done. Yeah, it's all done. So see, ACM was installed and the OpenShift GitOps is already done. So we'll go and check the projects — what all projects were installed. So here we have openshift-gitops, where the first Argo CD was deployed. And then we have another one, multi-cloud GitOps, where another Argo CD was deployed. And then open-cluster-management — this is the project where Red Hat ACM gets deployed by default. Okay. And if I get the route for that, this is the route for the Red Hat ACM console. Okay. I have a quick question, maybe not so quick. So in this bootstrapping process, when you installed Vault, you loaded it with secrets, correct? Yes. Okay, so where do those reside before they get to Vault? Let me know if you're gonna show it. Okay, so right now I'm using dummy secrets, so I've just put them in the repo, but we can have Ansible Vault store those secrets, and then the secrets can be fetched from Ansible Vault and populated into HashiCorp Vault. Sure, okay. Let me do the login. One second. This is the Red Hat ACM console. Under overview, we'll be able to see what all clusters and applications are deployed. So right now, five applications are deployed and we have one cluster. One cluster is compliant and we have this many pods running in the cluster. So okay, so this one cluster — is that Azure? Your Azure cluster, okay. No, the Azure cluster we'll have to add right now. Okay, so that's it — it's the hub cluster. So the hub — so ACM is managing the hub cluster, it's managing itself. Yes. So, okay. Yes, right. Yeah. Forgive me, I haven't actually seen ACM before, so I have questions. No problem. So it's managing itself, the local cluster, and here AWS will show up in a few seconds. Okay. And now going back and getting the route for the Argo CD. So this is the route for the Argo CD. Okay. Let me extract the secret for this Argo CD. The username is admin? The username is admin, and the password is this. So this is the application which is deployed in the initial Argo CD, and here we can see the different components as part of it. That's a long list. So this is the first, initial Argo CD, and to get to the second one, we'll have to go through this particular route for the multi-cloud GitOps. Okay. All right, just so for anyone who's doing this: bootstrap is complete, right? Yes. The hub cluster is set up, GitOps is set up. We're now going into the dashboard of the multi-cloud GitOps, which was set up by the GitOps flow. Yes. Okay. And you're going to get the admin password for that. Yes. Okay. Multi-cluster or something? Multi-cloud GitOps? The name of the secret is something different. Ah. It's this one, the hub GitOps one. And here are the applications which the second Argo CD deploys: ACM, the sample application config-demo, and the Golang external secrets operator (a sketch of how the Argo CD routes and admin passwords can be pulled follows below).
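For anyone repeating those steps from a terminal, the route and admin password lookups sketched here use the default OpenShift GitOps names; the second, hub Argo CD instance has its own route and secret names, so adjust accordingly:

```
# console URL of the first Argo CD instance (default OpenShift GitOps names)
oc get route openshift-gitops-server -n openshift-gitops -o jsonpath='{.spec.host}{"\n"}'

# admin password for that instance
oc extract secret/openshift-gitops-cluster -n openshift-gitops --keys=admin.password --to=-

# the second (hub) Argo CD lives in its own project; same idea, different names
oc get routes,secrets -n multicloud-gitops-hub | grep -i gitops
```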
So the purpose of this operator is basically to fetch the secrets which are stored in HashiCorp Vault and create the Kubernetes secrets from those secrets. Okay. This is the Vault — so I'll just show the Vault UI. This is the Vault UI, and I'll just paste in the token and log into Vault. These are the secrets which we created, and I'm going into this particular secret which I created. The config-demo application will be using this particular secret. And if I see the value, the key itself is named secret and the value for that is testing123. Okay. Now what I'll show is the demo application, config-demo. So I'm going to get the URL for that particular application. And you'll show us where the secret's being used? Yes. Okay. I'm going to this particular application. So this is the application. It says that the hub cluster domain name is this. Okay. The pod for this particular application is running on the hub cluster, ocp410 on the datamulti-cloud.com domain. And the secret value is this secret — so it fetches it and shows me testing123. Okay. And that was put there by that Golang external secrets application? Yes. Okay. All right. So we're good to go from that piece. Okay. Yeah. Cool. Now the hub cluster is ready. So what we'll do is we will go to the Azure one and try to — is that the final check on the hub cluster being ready? Like going into that same little app — okay, all right. Yeah. Checking that the application is up and ready, that it's able to access the secret, and that the secret is present. Okay. Okay. Yes. Okay. And here in the incognito window, we also see that it came up with Amazon Web Services — this is the infrastructure provider for the hub cluster. Okay. And we are using 4.10.5. And from here, we can also see which applications are deployed through Red Hat ACM. So ACM, config-demo, Golang external secrets, multi-cloud GitOps hub, and Vault — these are the applications which are deployed. Now, if we want to manage different clusters, there are two ways to do it. The first one is to create the cluster from Red Hat ACM itself — a fresh, new cluster. If you want to create the cluster, we just have to hit create cluster and add the details, and we can create clusters in AWS, GCP, Azure, vSphere, OpenStack, and also on bare metal. We'll have to provide more details for the platform, like the credentials for launching the cluster in AWS, and those credentials are stored here. So if I click on add credential, let's say I want to create a cluster in AWS, it will ask me for the credentials and all the details. And if I put in all the details, I'll be able to use this credential to create a new OpenShift cluster from Red Hat ACM in the AWS account. But that's if you're creating the cluster fresh, okay. Are we gonna take that path, or? No, due to the time constraint — it takes time to create a cluster. So the second thing which I want to show is how we can import an existing cluster. So let's say we have ARO. This is the second cluster which I was talking about — let me just show this. So this is my Azure window, and this is the cluster; it's Azure Red Hat OpenShift. And under the Installed Operators, right now it's just the package server. And what I'm going to do is import this particular cluster into Red Hat ACM. The steps for that are pretty simple: I have to go to Red Hat ACM and click on import cluster.
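Before the import, a quick aside on the piece just demonstrated: a minimal sketch of how the external secrets operator turns the Vault entry into an ordinary Kubernetes Secret that config-demo can read. The secret store name, Vault path, and key names are illustrative, not necessarily the pattern's exact values:

```
cat <<'EOF' | oc apply -f -
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: config-demo-secret
  namespace: config-demo
spec:
  refreshInterval: 15s
  secretStoreRef:
    name: vault-backend            # ClusterSecretStore pointing at the HashiCorp Vault instance
    kind: ClusterSecretStore
  target:
    name: config-demo-secret       # Kubernetes Secret the operator creates and keeps in sync
  data:
    - secretKey: secret            # key inside the generated Kubernetes Secret
      remoteRef:
        key: secret/data/global/config-demo   # Vault path holding the value ("testing123" in the demo)
        property: secret
EOF
```

The application then mounts or reads config-demo-secret like any other Secret; Vault stays the single source of truth and the operator keeps the copy refreshed.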
So back in the import dialog, I just have to add the name of the cluster, and then I can add labels. In Red Hat ACM, we can deploy different applications using labels. So what I'll do is add one particular label, clusterGroup, with the value region-one. What will happen is it will deploy the config-demo application on this cluster once it's imported. Okay, so any application that has that label will get deployed on this cluster? Yes. Okay, perfect. But right now that's just config-demo. Okay, okay. Yes. So I'm going to click on save import and generate code. Okay. What it will do is create one script for me, and I just have to copy that command from here. I'll copy this command, and then I'll go over to the Microsoft Azure side — let me just check that it's connected properly. Okay. Yeah, so it's running. So I've copied the command and I'll just paste it and hit the Enter key. Okay. Okay, so the workflow was: in ACM, you imported the cluster, and the decision there would be whether you're including any labels that matter for what you want to see in your cluster. But at the bottom, you have it generate a command line that you can then go apply to the cluster you're importing. Okay. Yes. Yes. All right, perfect. Okay. So what it will do is create a few custom resources, and once those resources are up and running in the open-cluster-management agent namespaces and the add-ons, then the cluster will be shown as imported in ACM. Okay. Once it's imported and marked ready, does the rest begin — does the rest of the Argo CD workflow for multi-cloud begin, or do you have to do something? Okay. So what it will do is create the Argo CD in this particular cluster, the Azure cluster, and that Argo CD will be pointing to the Git repo and it will install the applications from the Git repo itself. And we can use that Argo CD to add more applications and deploy different applications in that cluster. Okay. So it's showing as ready, but it will take one or two minutes. Once it's ready we'll be able to see the applications getting deployed. So that was the end-to-end bootstrap process, right, for an imported cluster? If we had created one, it would take as long as it takes to create a cluster, but hopefully this will come up in a couple of minutes. Okay. Fantastic. Yes, it will take one or two minutes to import the existing cluster. And if we check the route for the config application — I think it's not done yet. Apologies if you can hear my dog in the background; it's probably a squirrel. Yeah. I'm sorry, I'll go on mute for a little bit. Okay. Okay, so not quite yet, but okay. Is there another place to watch progress? What were you — you were looking for a route on something, or — just so people can follow — okay, here we are. Yeah, sure. So here we have the different projects. If I shift to — just wanna see if your config-demo project is there. Okay. Okay. So I have to go and check here and get the pods. So these are the important resources which need to be running to ensure that this cluster is able to talk to the hub, and there is one more project. It takes one or two minutes to come up properly, and once it's set up, then — yeah. So this one also.
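While that finishes, here is a rough sketch of how to watch the import from the command line. The agent namespaces are the ACM defaults; substitute the cluster name you entered in the console:

```
# on the managed (Azure) cluster: the klusterlet agents created by the import script
oc get pods -n open-cluster-management-agent
oc get pods -n open-cluster-management-agent-addon

# on the hub cluster: the cluster should report as joined and available,
# carrying the label that decides which applications get placed on it
oc get managedclusters
oc get managedcluster <cluster-name> -o jsonpath='{.metadata.labels}{"\n"}'
```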
So, a question: in this demo, will you be installing ODF? Because we previously did a show on how to manage ODF from GitOps. Or is the idea behind this to make sure that the infrastructure people know that they should be treating all of their infrastructure this way, and then you can sit on the very top and kind of pull the strings on the clusters below? Like, I could see someone from infrastructure going, I'm doing storage — what does this have to do with storage? It's like, actually you should be managing ODF in each cluster this way, you should be managing the multi-cloud gateway this way, that kind of thing. So I was just curious, in your demo, the actual project — is it just for the workflow? Are we gonna — do you want to do ODF? What would you like? Sure. So this is the first phase of the GitOps pattern, and then the second and third phases will be the storage and the security. Right, okay. So the second phase will be to add ODF here, and then the third will be to add ACS so that we can implement the security controls also. Okay, so we're coming up on time in about 10 minutes. So we may have to do a second part where we show all of those pieces and say, okay, so now we have this starting point, the bootstrap has been complete. Maybe if we add in another cluster: here's how we would configure ODF for Azure, here's how we're gonna configure it for AWS, and just show how the different configurations would be laid out from the GitOps side. I think that would be interesting to see — like in your repo, how do you parse that out, where you apply what, and how it matches your labels and your regions and things like that. That would be interesting. Yeah, sure, we can do another session to talk about that. Yeah, absolutely, great. Okay. This is amazing. Okay. Thank you. Do we have a few more minutes to see it finish? I think we should be able to. Yeah. Yeah, so see, OpenShift GitOps came up. So if we get the route — this is the route for the Argo CD which is deployed in the managed Azure cluster. Okay. And managed means Azure. And we'll have to extract the secret, so I'm going to use this to extract the secret from the Azure one. Okay. Okay, so you're going into the managed, imported cluster, to the Argo CD in GitOps, to see where it is in its process of finishing out the rest. Okay, cool. No, that part is done. What I'm showing is that this is the Argo CD which is deployed on the managed cluster. Okay. So we deployed Argo CD on the managed clusters; this is the managed cluster, the Azure one, where the Argo CD is deployed, and all these applications are also getting deployed. Okay. So we do have the config-demo application. I see, okay. And if I go and get the route for the config-demo — see, the config-demo application is also up. Okay. So this is the config-demo application which is deployed on the managed cluster. The cluster is this, and the pod is running on this Azure Red Hat OpenShift. Right. And the secret is this. Okay. So you saw the full workflow. Okay. Yes. Yeah. Finally, it's working. Okay. So now that the bootstrap is all up, let's say if we had another hour, what would be the next thing you would go for? Would you do something with OpenShift Data Foundation? What I'm trying to think of — I always think of topics for the next show.
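For completeness, that same final check on the managed Azure cluster can be done with oc instead of the console — a sketch, with route and namespace names that may differ per instance:

```
# Argo CD deployed on the managed cluster by the import
oc get route -n openshift-gitops -o name

# URL of the config-demo application placed there via the clusterGroup label
oc get route -n config-demo -o jsonpath='{.items[0].spec.host}{"\n"}'
# opening that URL should show the cluster domain, the pod it runs on, and the
# secret value propagated from the hub's Vault ("testing123" in the demo)
```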
So maybe the actual deployment of ODF in two places with different configurations through this. The ODF work is in progress, so I'll let you know once it's ready, and then we can definitely do a demo for that. Okay. Thank you. Awesome. So, okay. I want to thank you — that was wonderful. And now I'm actually thinking of future shows where we're going to show exactly how you would go and manage more data services through GitOps this way. But that's the full-on bootstrap process for anyone watching. That was actually really key to see end to end, especially with an imported cluster. And if we did a create cluster, the only difference would be having to provide the credentials for whatever provider and waiting for the created cluster to come up, but otherwise it's the same — whoa. Okay. Wow. That's really amazing. I'm trying to think, is there anything else you think our users should know? Any other questions that we might have had? Just to make sure we cover all of our bases. We talked about secrets being stored. We've talked about — oh, and it doesn't matter where the hub cluster is, correct? Like, I could see an organization wanting the hub cluster on-prem and then managing multiple clouds, or any other combination, with the hub cluster external — doesn't matter? That's possible. The only thing is that connectivity should be there between the on-prem environment and the clouds. Okay, the multiple clouds. Okay, perfect. Awesome. Well, I look forward to having you back, and we'll get into the course of multi-cloud GitOps with the particular OpenShift Data Foundation pieces. That would be fantastic. Thank you so much. Thanks for joining me. Thank you. Thank you.