Good morning, good afternoon, good evening, wherever you're hailing from, welcome to a special edition KubeCon EU office hours. This hour we are talking about GitOps and everything that GitOps can do for you. So let's start with a round of introductions. I will toss it to Josh first. Josh?

Yeah, howdy, I'm Josh Berkus. I'm here as Red Hat's community architect for everything cloud native. And I asked some of our GitOps folks, the Tekton Pipelines and Red Hat DevOps people, to come and talk to us, including Natale.

Hey there, my name is Natale. I'm a developer advocate for Red Hat OpenShift. Happy to be here at KubeCon Europe with you, talking about GitOps and more.

And Siamak? I'm Siamak, I'm a product manager for OpenShift at Red Hat. I look after CI/CD capabilities on the platform.

And Jafar. Hi, everyone. My name is Jafar and I work as a tech marketing manager for OpenShift, and I mostly focus on stuff related to developers, CI/CD pipelines and such things.

And we're here primarily for questions. Folks have a few things they could demo or whatever, but we're really here to talk to you, answer your questions and discuss things with you. So feel free to ask. And as a little incentive to step forward, we have some of these newly designed GitOps t-shirts, which we will be giving away to people who participate in this office hour session. Yes. So bring those questions, folks. Yep.

So to kick things off, I actually have a couple of questions of my own. A lot of the GitOps tooling that I'm familiar with ties into GitHub or GitLab, but not all of it does. Like Tekton, I believe, uses built-in Git that you run on your cluster, correct?

Yeah, you could say that. I kind of know where you're going with that, and I do like this question, because GitOps is really focused on the process, the workflow. We had this conversation in the coffee break this morning with Natale as well: GitOps has cultural aspects to it, the collaboration, the people, along with the workflows, practices and technologies associated with it, but the gist of it is really the process. And the name, I think, is perhaps a little misleading, because it already has the name of a tool in it, so it feels like it has already narrowed things down and picked a tool for you. But essentially GitOps is about the workflow, the way of working. Git, being the prominent version control system, happened to end up in the name, but the practice nevertheless applies to any version control system a team might be using, and nothing really would change. You might have a little trouble finding tooling that supports that particular version control system, but the idea of GitOps is really not specific to a choice of tools, or even to the Git provider or Git itself. It is the idea of having your version control as the source of truth.

OK, so the... Yeah, go ahead. Yes, sorry, if I could add something to complement what Siamak said. One of the major things that shifted when we speak about GitOps is this: everyone used to store their assets somewhere, whether it be a folder or a spreadsheet or whatever.
So you basically have some place where you keep your reference configuration, your variables and all of those items, or whatever scripts you use to deploy your infrastructure and your applications. But one of the things that is really important with GitOps is the ability not only to deploy the stuff that is declared in your repos, but also to continuously check whether what is deployed is actually in sync with what you have declared as your desired state. It's what we might call a synchronization loop: you are pulling information from your Git repo to see what the desired state is, what did I want to deploy, and you are also pulling information from your target systems to see what is actually deployed. Then you can see if there's any drift, if there's something you need to remediate to make things as they were desired. So it's not just a matter of storing your data in Git, pulling the repo and at some point deploying from there, because everybody has been doing that for a while; it's also a set of new practices that come with it, enabled by technologies that happened to be created in the last few years, things like Kubernetes. That was a long complementary section, I hope it helps.

Yeah, so basically we got all the pieces of declarative frameworks, which means that we're no longer dependent on having a very large layer of implementation code in between the configuration and the reality.

Yes, exactly. You're basically relying on built-in capabilities: because you are able to define things as desired state using things like YAML resources, the logic of how you deploy your application doesn't have to be there anymore, because it's built into the targets. For instance, with Kubernetes, if you want to deploy an app, you just create a Deployment and other resources, but you don't have to declare how you are actually going to instantiate or boot the application, because Kubernetes has those built-in capabilities for you.

So I think that leads into Andrew Sullivan's question, which is about GitOps for operations. I'm going to paraphrase him a little bit. We have pipelines and stuff for application building, testing and deployment via GitOps, but what about GitOps for operations and system administration? I'm actually a little familiar with that, because we do a bunch of it in Kubernetes, but I'll tell you, most of the tools we're using for that are ones the Kubernetes community built themselves. So do you see tooling around that? Do you see that getting integrated with GitOps for apps, or being its own completely separate thing?

I can start on that. I do see a lot of similarities between GitOps for operations and GitOps for app delivery; the principles are essentially the same. It's just that on the app delivery side, this has really been the way things have worked for as long as I remember. We gave it a new name, but having the source of the application in version control as the source of truth, everyone collaborating around that, all the changes going into it, the branching and the different workflows that people use, that's well understood. Most teams you talk to today have this process in place. So it is really nothing new from that aspect.
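To ground the declarative point Jafar made a moment ago: the kind of resource where you state what you want and Kubernetes supplies the how looks roughly like the sketch below. The app name, image and counts are made up for illustration.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                  # hypothetical application name
spec:
  replicas: 3                   # desired state: how many copies, not how to start them
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: registry.example.com/my-app:1.2.3   # hypothetical image
        ports:
        - containerPort: 8080

Kubernetes controllers reconcile toward this description continuously, and a GitOps controller runs the same kind of loop against a Git repo.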
And even when it comes to the operations side, if you go a couple of years back, even pre-Kubernetes, automation of infrastructure really took a lot of traction when tools like Chef and Puppet came along, so that people had a way to describe what they wanted to happen on an infrastructure; then Ansible and, I believe, Salt, and a lot of tooling like Terraform was created around that space. And IT ops or application ops or whatever the different operations groups were called, when they dealt with these technologies, they didn't keep them on their own laptop or on a web server. They versioned them; they put them in some version control. Maybe applying them was not automatic: if they wanted to make a change on a fleet of virtual machines, or on their cloud infrastructure, or on Kubernetes, it was a person running an Ansible playbook, the latest version of it or a particular version of it, from that version control, from Git let's say, against that infrastructure to make it happen. So even that part is not really new for operations teams. What GitOps for operations really means is that there is another level of automation happening: instead of a person taking those manifests, in whatever format they are, from version control and applying them to the infrastructure, that happens automatically. Something sits in the middle and does that.

So coming back to your question, specifically from a tooling perspective, I think the ideas are not really different between app delivery and operations, and the tooling is the same. They are exactly the same needs, just used for different purposes. What I also see in the conversations we have with customers is that the approach to infrastructure has started to look a lot more like the app process. On the app side you maybe have a CI pipeline in place that builds the code and deploys and so on; more and more I see the same on the operations side: for changes to the configuration of the cluster, there is perhaps a CI process in place, a pipeline that gets kicked off. And with the advancements around Kubernetes provisioning, or specifically OpenShift provisioning and cloud-managed Kubernetes, you can treat your clusters as ephemeral, short-lived in a sense: maybe spin one up during the CI process, test the configuration, which is really the change set that an ops person has created, and validate that the change behaves correctly on that cluster. So it's becoming a lot more like app delivery on the operations side. What I'm trying to get at is that these two ways of working are converging at a rapid pace, the way I see it, and becoming quite similar, with the same set of tooling being used. There is a large variety of tools in the GitOps space that can augment this process, and some of them are maybe a little more geared toward app delivery and some a little more toward operations, but I see the function of those tools converging as a need on both sides.

Yeah, sure. And if I can add to that, maybe one of the evolutions driving this is that we see more and more of this infra equipment, routers and all of those things, becoming API-driven, focusing a lot on bringing the capabilities that allow them to be configured in an automated way.
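As a sketch of that something that sits in the middle: this is roughly what an Argo CD Application pointing at a cluster-configuration repository looks like. The repo URL and path are hypothetical.

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: cluster-config
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://git.example.com/platform/cluster-config.git   # hypothetical repo
    targetRevision: main
    path: clusters/prod
  destination:
    server: https://kubernetes.default.svc   # the cluster Argo CD itself runs on
  syncPolicy:
    automated:
      prune: true      # remove resources that were deleted from Git
      selfHeal: true   # revert manual drift back to the declared state

With automated sync and selfHeal enabled, the controller keeps re-applying whatever the repo declares, which is the synchronization loop described earlier.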
Some even go as far as embracing the Kubernetes state of mind, where you declare a file that describes how you want your configuration to look, and then this configuration can be automatically applied to the equipment, whether it's storage or routers or firewalls or whatever. And one thing we see more and more is that if this capability is not built into the tool, and all of this automation cannot be implemented within the tool itself, those vendors start to build their own operators, embracing the operator model. They are basically building that logic into the operator: the operator understands, in declarative terms, what we want it to achieve, and it has in its logic everything needed to interact with the APIs provided by the equipment. That's basically a way things can be handled on the ops side in an automated fashion while still embracing the GitOps approach, where everything is declared or described as code.

And so that leads us into another question, which is: how does GitOps work when we consider databases or stateful applications?

That's actually one of the best examples. If we look at databases on Kubernetes three or four years ago, it was more of a myth, because a lot of things were lacking. But now, say you have a database that is operated by an operator, and that operator has enough capabilities to do things like backup and restore. Then all you have to do is create a YAML file that says, I want you to do a backup of this thing, and the operator will trigger the operational side, which is doing the actual backup of your database, or things like scaling up and down, or auto-remediation and such things. So it's basically coming together, as Siamak said: we are seeing practices that were more in the app world becoming embraced on the ops side of things as well.

Yeah, and about stateful apps and databases, I think it's a very good point, because think about database migration, where we need to migrate the schema. When we use GitOps, we can delegate this to an operator, via a CR following that life cycle, or we can take advantage of the GitOps tool's hooks. For instance, OpenShift GitOps uses the Argo CD project, and Argo CD has hooks: we have pre-sync and post-sync hooks. So Argo will monitor the change and, before it takes action, it can execute some script, and this script can be the database migration, for instance. If you use a tool like Flyway, you can invoke Flyway to migrate the schema and then apply your change, so it is consistent. And I am working with Christian, who I see in the chat, on a Katacoda lab on learn.openshift.com about hooks and sync waves. In the meanwhile I can also share in the chat the new Katacoda lab that we have about GitOps, if you want to try it; Chris, maybe you can send it out on the Restream. If you want to learn GitOps and also see how it works, you can use this lab. And we are also working on the hooks part, so you can understand how to work with stateful apps using the operator, but also taking advantage of hooks and sync waves, which are the tools in Argo for controlling a stateful app's life cycle.

Awesome. These labs are great. This is fantastic. Yeah, I'm gonna put some stuff in the chat actually, more about this.
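A minimal sketch of the hook pattern Natale describes: a Job annotated so that Argo CD runs it before syncing the rest of the application. The Flyway image tag and the configuration wiring are assumptions for illustration.

apiVersion: batch/v1
kind: Job
metadata:
  name: schema-migrate
  annotations:
    argocd.argoproj.io/hook: PreSync                        # run before the main sync
    argocd.argoproj.io/hook-delete-policy: HookSucceeded    # clean the Job up once it succeeds
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: flyway
        image: flyway/flyway:9   # hypothetical tag; the DB URL and credentials would come from a Secret
        args: ["migrate"]

Ordinary resources can additionally carry an argocd.argoproj.io/sync-wave annotation to control the order in which they are applied, which is the sync waves mechanism mentioned above.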
One thing I'll add is that Git-based database migrations are, believe it or not, not a new thing. I'm gonna paste in the chat a project called Sqitch, for SQL databases, that dates back to when Git itself was only two years old. So this is something people have been thinking about; they just haven't necessarily put all the tools together.

But we have a different question here from Andrew Sullivan; presumably you'd answer this in the context of Tekton. Are there guidelines and suggestions around repo usage: one repo per cluster, per app, per team? What's a good approach?

Like any good question in the IT space, the answer is: it depends, really. I feel like with every single customer I talk to, this is one of the first questions that gets asked, and I don't think there is a single right answer, because it comes down to strategies. This applies to how you do the configuration of a cluster, if you're using Argo CD to sync that, and to delivering applications: how do we design that repo? There are a couple of things that affect the decision. First of all is access control, and two things influence that. One is: what is the desired access control structure within the organization? You see some orgs where a platform ops team owns the configuration of the cluster, and application teams are given a namespace or a number of namespaces they are allowed to deploy apps to. They're not allowed to create namespaces, but they're allowed to deploy apps into those namespaces. That gives one indication of what kind of access control you need. There are organizations that give a small cluster to every dev team, and that dev team is admin on the cluster; not cluster-admin, but they can install operators, for example, and modify certain configs at the cluster level, while upgrades of the cluster and the config of the nodes, the machines, are all owned by the platform owner. So there are these levels of access control structure, and they affect how you should slice your config so that Git can actually enforce that access. From the other side, the Git provider you use also supports particular ways of controlling access. Some support, for example, controlling access within a single repo, so that certain folders are accessible to certain roles; some don't, and you can only control it at the repo level, or at the branch level. These two things play an extremely important role in what is most sensible for an organization. And the trade-off is as you would expect: the simplest setup is one repo that has everything; the other extreme is many repos, one for every slice of access that you want. The overhead of managing those repos varies a lot with how much control you want over the configs and how much overhead you accept for managing the repos. I don't know if that clearly answers your question, but it's a choice that needs to be made by really analyzing what an organization expects its teams to be able to do. I don't know if others have another point of view on this; it's a very contentious area and a very common question that comes up.

I mean, I've always looked at it as: there's a convergence at some point between Dev and Ops and security and everything else that comes with it, right?
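One way the access-control slicing Siamak describes surfaces in the tooling: an Argo CD AppProject can confine a team to particular source repos and target namespaces. A hedged sketch, with hypothetical names:

apiVersion: argoproj.io/v1alpha1
kind: AppProject
metadata:
  name: team-payments
  namespace: argocd
spec:
  sourceRepos:
  - https://git.example.com/payments/*   # only this team's repos may be deployed from
  destinations:
  - server: https://kubernetes.default.svc
    namespace: payments-*                # only into this team's namespaces
  clusterResourceWhitelist: []           # no cluster-scoped resources, so no creating Namespaces

The empty whitelist matches the arrangement mentioned above, where app teams may deploy into given namespaces but not create new ones.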
And that convergence point I mentioned is the deployment: using Argo or Flux or whatever, those repos can then come together into a fully fleshed application.

Wait, so when you're talking about putting everything in one repo, do you mean everything, everything? As in your entire application suite, for all of your applications, plus permissions management and everything else?

For a single cluster, yes: one repo that represents the entire cluster, all the namespaces. If there is Elasticsearch deployed there for everyone, that's also in that repo. If my payroll application is in a namespace on that cluster, that is also in that repo. So that's one end of the extreme, and it fits certain scenarios. I have actually talked to customers that work that way.

Doesn't that make it really vulnerable to somebody doing a bad push, though?

Yes, and it also means that, while with that approach you don't have any overhead of repo management, you have a single repo and that makes it easy, you're creating a different bottleneck: every PR across the entire cluster, with various teams sending in PRs, has to be reviewed by a limited number of people, a small group that is usually the platform owner. So they become the bottleneck, the gatekeeper of what kind of changes can go into this cluster. Depending on the volume of changes from other teams, how many teams there are, how many apps are on that cluster with PRs coming in for them, this can run into a bottleneck: it becomes like the Kubernetes repo, where you have tens or hundreds of pull requests waiting to be reviewed just so a deployment can happen. And if you look at the grand scheme of things, why are we doing this at all? Why would an organization do GitOps? You want to be able to roll out changes faster and to be able to look at what has happened. If you run into that bottleneck, you kind of neutralize the value you would be getting from the process. So for some organizations that method works, as long as it doesn't become a bottleneck; for others, if the volume of change is high, it will.

Yeah, and as Siamak said, this is a fast-moving space, so things are evolving a lot, and the practices are evolving at the same time as the tools. To take a very simple example: for some time, if you wanted to deploy an application made of many microservices, then because of limitations of the tooling you had to put all of your code within the same repo, in different folders, if you didn't want to manage that overhead and still be able to trigger the changes. But what happens is, if you have 10 microservices and you do a push on the repo, you have to start parsing exactly what happened and see which folder changed, in order to avoid triggering things that haven't really moved. Because a single push triggered a whole pipeline, everything starts to be rebuilt, even if there were no changes. So things evolved, and people started looking at it: OK, if I want to avoid that, let me separate my application code from my deployment resources, to avoid what Josh said, where I make a big push and somehow everything gets triggered.
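The tooling answer the panel turns to next is the Argo CD ApplicationSet: one template that stamps out an Application per microservice, each living in its own repo. A rough sketch, with hypothetical names and URLs:

apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: shop-microservices
  namespace: argocd
spec:
  generators:
  - list:
      elements:
      - name: cart
        repo: https://git.example.com/shop/cart-config.git      # hypothetical repos,
      - name: catalog
        repo: https://git.example.com/shop/catalog-config.git   # one per microservice
  template:
    metadata:
      name: '{{name}}'
    spec:
      project: default
      source:
        repoURL: '{{repo}}'
        targetRevision: main
        path: deploy
      destination:
        server: https://kubernetes.default.svc
        namespace: '{{name}}'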
So now you are separating your application, but then again, people started to realize that they might want to keep all of their applications or microservices managed in separate repos and still be able to trigger those changes in a GitOps way. And things started to happen on the tooling side to cope with that. One example is what we call ApplicationSets in Argo CD, which is basically something that allows you to manage a meta-resource controlling many applications. Everything can reside within its own repo, but at some point you govern all of them with this notion of an ApplicationSet, which allows you to handle your 10 microservices as part of one bigger application, while everything still resides in its own repository, so you have more granular control over your repos' life cycle. So it's still evolving, and I think as this gets embraced more and more, some general practices are going to emerge as recognized patterns; and if there are missing pieces on the tooling side, the tooling is also going to evolve to cope with those patterns. That's basically how continuous improvement works, all the way. And that was the end.

Okay, we have other questions. Do you want to tackle the database question first? Oh, I think that one's actually very simple. We've got somebody who was asking about the Argo operator: you mentioned the ability to back up and restore an Argo CD cluster, and they thought that meant we were backing up and restoring a database. I wasn't aware that Argo had a database. It uses etcd, right? Yeah, but it's the etcd of the Kubernetes cluster, which you handle backup of separately. Right. But with Argo, you can do a quote-unquote backup by creating a new ArgoCDExport resource. It's documented right on the OperatorHub page; I'll copy it and drop it in the chat. Under usage and backup it shows you the ways in which it can do things.

And I believe Argo also uses Redis as a way to store information. I don't know exactly what goes in there, but it's an intermediate store, not a stateful database. It's just there to improve things like caching and being able to compare things faster, key values, so if you want to compare the desired state with something you pulled two minutes ago, it accelerates those operations.

So we have another question, about management practices and GitOps. Andrew, who's the asker, mentioned ITIL specifically, but this would also apply to a number of other process management regimes. How do you reconcile those with GitOps and agile practice? Because these are certifications of cascading processes that you're supposed to follow, and they often require a heavy paperwork process from step to step. Is there a way to actually implement GitOps practice in an environment where you are required to comply with ITIL or other such process regulations?

I would say yes, because the GitOps workflow does not impose any particular pace on the practitioner; rather, it enables the infrastructure to deliver at whatever pace is needed. Following ITIL obviously puts some constraints on the process and slows down the workflow, because of the requirements of that process.
But the way it maps to GitOps is this: if I simplify ITIL to say there are certain milestones for an application to be delivered, milestones that require a pause, then the things that need to happen between those milestones, the documentation, the approval process, the particular workflows to be followed, sit outside the GitOps process. When the checklist for a milestone is complete and we can move to the next one from an ITIL perspective, instead of those actions happening manually, somebody authorized approves, in the case of Argo CD for example approves the sync, and the actions happen in a declarative manner: a sync of whatever needed to go to the next stage goes out to those clusters. From that point you again have a pause in the process to fulfill what ITIL requires. So the value GitOps contributes in that scenario might not directly be that you are deploying multiple times a day into your environment, but rather the visibility you get into the process: every change is auditable and can be discussed, and especially when failures happen, you have quite a straightforward way to backtrack and find the change set, within the timeframe in which a failure happened in production, that might have caused it, narrowing your analysis down to the changes, which you can get to easily through the GitOps process. That's how I see those two worlds living alongside each other.

So there are some interesting comments happening in chat, and I want to touch on those real quick if we can. There's some confusion about backing up Argo CD versus the application data. The application data that ends up landing on a PVC, for example: that backup process needs to happen outside of Argo, or you could potentially include an operator to do that, maybe in your Argo deployment of your app. But again, you're backing up application data, not Argo data, if that makes sense. All the Argo data lives in the Git repo; you just redeploy if you need to.

And then as far as ITIL goes, ITIL, ITIL, however you want to say it, Andrew points out that he sees GitOps and ITIL as being very complementary, which normally you wouldn't say about DevOps and ITIL, for example: the change request becomes the PR, and the change review board becomes the PR's approver. You could argue that the repo becomes the CMDB, right? You can use GitOps with ITIL; it's just a little bit of a mind shift to get there.

Yeah, that's definitely an interesting way to look at it: some of those roles and concepts take a different shape and piggyback on the Git provider and what you get out of Git, essentially. That does resonate with me, but I'm not an ITIL practitioner, so my opinion shouldn't be relied on too much in this context. I mean, I did ITIL years ago, and it fits, you know, kind of that model. It's cool to see how many ITIL fans we have today; I haven't seen so many ITIL fans in years. Right, it's a good point though: this could bring folks forward into a new development paradigm where they can move faster. And that's really what we all want to do, essentially: push features faster, make customers happier, that kind of thing. So the idea of GitOps and ITIL being somewhat compatible is something worth pursuing, I think.
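One tooling feature that maps naturally onto the milestones-and-pauses idea in this exchange is Argo CD sync windows, which restrict when syncs are allowed to run. A hedged sketch; the project name and schedule are invented:

apiVersion: argoproj.io/v1alpha1
kind: AppProject
metadata:
  name: change-managed
  namespace: argocd
spec:
  sourceRepos:
  - '*'
  destinations:
  - server: '*'
    namespace: '*'
  syncWindows:
  - kind: allow
    schedule: '0 22 * * 5'   # hypothetical approved change window: Fridays at 22:00
    duration: 2h
    applications:
    - '*'

Changes merged outside the window accumulate in Git, fully reviewable, and roll out only when the window opens.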
Different question, and this one is sort of for Jafar: can we deploy OpenShift itself using a GitOps workflow?

What did I do, why me? So yeah, that's actually a very interesting question, because look at one of the major evolutions between OpenShift 3 and OpenShift 4. If some of you remember OpenShift 3, we used to deploy the solution using Ansible playbooks, and basically all the intelligence, all the orchestration of how the different components would be deployed, was driven by the playbooks. One of the major shifts is that the whole architecture of the platform was redesigned around operators, and one of the goals was to make everything configurable using traditional Kubernetes approaches: here's a YAML to configure my authentication, if I want to connect to an LDAP server or use OpenID Connect; here's the YAML file describing how my registry should be configured to have HTTPS; and such things. All of the thirty-some operators that compose the platform have their own specific configuration files. And we also have a wrapper file that describes the installation of the cluster, where you can say how many masters you want, how many workers, what the placement rules are, what types of machines you want to use; of course, depending on the target it's going to be different, et cetera. Once you have those files, you can start deploying them in a GitOps fashion. For instance, if you look at ACM, the Advanced Cluster Management tool, that's basically what it does: we have created some APIs on top of OpenShift that let us drive the behavior of those operators and install OpenShift that way. So you have a file that describes the desired state for your cluster, and then it installs everything.

And there is an operator, available on OperatorHub on OpenShift as well, that is really the core of what enables that functionality in ACM, Advanced Cluster Management; it's called Hive. So if you have access to an OpenShift cluster, you can try this out already: install the Hive operator, and you have a declarative way to even provision clusters. The reason Hive exists is really to enable GitOps for provisioning and life-cycle management of a cluster. It actually would be good, I should remember this, to bring one of the Hive folks to one of these sessions. But it does follow a very interesting, somewhat GitOps pattern, yeah.

Yeah, and also on what Jafar mentioned about authentication: if we have multiple OpenShift clusters and we want them all to sync to the same authentication system, maybe controlled by ACM, say one Active Directory used for access to multiple OpenShift clusters, maybe with different groups, and we want to track and control that, we can use the GitOps approach for this too. Even if the installation is not done the GitOps way yet, we can still take advantage of GitOps for day-two operations on our clusters, such as syncing the authentication system, which is very cool.

This actually fascinates me a lot: how much of this way of thinking is enabled because of Kubernetes. These patterns, this very way of working, are not something I would say is new; they have just reappeared in different terms and shapes.
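As a concrete sketch of the day-two configuration Jafar and Natale describe, authentication declared as a resource that GitOps tooling can keep in sync across clusters: an OpenShift OAuth config looks roughly like this, with hypothetical LDAP details.

apiVersion: config.openshift.io/v1
kind: OAuth
metadata:
  name: cluster
spec:
  identityProviders:
  - name: corp-ldap                    # hypothetical provider name
    type: LDAP
    mappingMethod: claim
    ldap:
      url: "ldaps://ldap.example.com/ou=users,dc=example,dc=com?uid"
      bindDN: "cn=reader,dc=example,dc=com"
      bindPassword:
        name: ldap-bind-password       # Secret holding the bind credential
      attributes:
        id: ["dn"]
        preferredUsername: ["uid"]

Committed once to a config repo, the same identity provider definition can be synced out to every cluster that ACM or Argo CD manages.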
That's how I interpret it. It's similar to microservices: distributed computing has been around for a very long time, but microservices kind of rejuvenated those patterns. Kubernetes really kicked off this way of looking at infrastructure, even though the concepts were already there, and now, fast forward five or six years, we talk about provisioning Kubernetes itself in a declarative manner, and the cloud infrastructure underneath it in a declarative manner, and the managed cloud services in a declarative manner too. Maybe I want to use an SQS queue in my application: how can I provision it declaratively from Kubernetes? That concept has now propagated in both directions, downwards and upwards, and I attribute a lot of that to Kubernetes, to be honest; it brought life into this way of thinking at many other layers of the infrastructure.

Yeah, I'll say, from having tried to do some of this six or seven years ago, that prior to Kubernetes being a way to do infrastructure, we had major missing-stair problems, as in: there were always critical pieces of your infrastructure and application stack that could not be declaratively managed, which made it very difficult. In fact, my first deployment of Kubernetes, at Kubernetes version 0.4, was specifically so I could do declarative management of a data analytics application, because it wasn't really possible otherwise: I needed to be able to swap out the version of the application and do the database migration at the same time, and having it containerized, on that very primitive version of Kubernetes, enabled me to do that. So I think mainly what Kubernetes did was take what we always wanted to do and make it possible.

Exactly, that's how I interpret it as well: the technology wasn't there to support those concepts in an accessible way, and Kubernetes made them very accessible. It's the same reason I say DevOps really took off once containers and Kubernetes made the level of automation it requires accessible. And talking about GitOps, to draw a parallel: GitOps is not really that different from how Kubernetes itself operates; instead of putting the state in etcd, you put it in a Git repo, and the rest is pretty much the same pattern, the same way of working. I think that's one of the fascinating things about Kubernetes: perhaps the vision upfront wasn't that these patterns would become so pervasive, but in practice, over the last couple of years, they have been reused and applied to every layer of infrastructure and applications that we see.

Yeah, and I said this years ago: to me, the holy grail of DevOps is GitOps. I make a change via Git; it's tracked, it has a hash, it's auditable; then an approval process takes place, CI puts everything into place, and off you go.
To me, adopting GitOps kind of helps you jump over that culture speed bump you sometimes hit within your organization, and it forces you into a GitOps mode of thinking about things, which is awesome in the sense that you get one thing with the other for free, kind of deal. But there still has to be some cultural adjustment, in the sense of: we're going to handle this with our repositories and our approval processes, and not necessarily with a board meeting of some sort.

Yeah, definitely. I personally agree that there are some adjustments that need to be made, and I see this happening mostly bottom-up, to be frank: it starts with single teams moving in that direction and then propagates. I've seen plenty of DevOps initiatives, but I haven't seen a top-down GitOps initiative yet; it starts organically with smaller teams and spreads because of the value it provides and how accessible it is. But it does require a change in mindset about how we work.

Yeah, and one of the things I like about this GitOps approach, and the way things have evolved: think about how we used to automate infrastructure provisioning, for instance. You had some scripts that you triggered, and then you had your infra ready somewhere. But as the weeks go by, some people start connecting directly to their instances, making changes, changing environment variables and such things manually, and then you have a disconnect between what you provisioned and what is actually there, and you are unable to reproduce things because of those people who tampered with your infra. What is good, I believe, with GitOps is that because you have this notion of syncing the real state with what you wanted and preventing drift, it really tackles this problem of somebody tampering with the environment variables for my application so that everything stops working: if you set things up the right way, the drift gets automatically remediated and the problem shouldn't persist.

Yeah. So first, did we tackle the question in our little Zoom chat here: is it common to have a global Argo CD instance that deploys cluster configs from a repository, and are there any problems with doing that? Did we already answer that? I think we kind of did, maybe. We sort of talked about it with the GitOps approach for cluster provisioning, but if somebody wants to jump on it...

I can talk a little more specifically about it, because that behavior is actually the OpenShift GitOps behavior: you install the operator and you get a global instance of Argo CD pre-provisioned and pre-configured for you. This is a very, very common pattern: every cluster has one instance of Argo CD with elevated privileges, not cluster-admin, but more privileges than regular users have, so that it can perform operations on the cluster and configure it to the degree the platform owners allow. That instance is usually owned by the platform team. You might have additional Argo CD instances for the app teams, if the organization is going in the direction of giving the app teams more autonomy to deploy their applications, but between these two patterns, the part that is usually there is that global instance owned by the platform team.

Awesome. So Andrew asked a question: we have Argo, it's out there, and there are other tools that do GitOps.
What are some considerations? Why did we choose Argo versus Flux, for example? Or is there a good reason to maybe use Flux instead of Argo in some cases? Because we have an operator for Flux, I believe.

Yes, there is an operator, built by VMware I believe, that is on OperatorHub, and there are other tools in this space as well: there is Fleet from Rancher, and Google has Config Sync, which is perhaps a little more specific to Google infrastructure but follows the same idea. The problem they solve is extremely similar; they all do more or less the same thing: they sync the contents of Git repos to clusters. The differences are more in the features and nuances from a user or developer perspective. And then there are differences in the communities around these projects, from a governance or continuity perspective, if you will. If you're a platform owner, or you drive and own some slice of IT infrastructure in an organization, then business continuity of the technologies you choose plays a role; it becomes an important factor, and you see differences in those areas.

At Red Hat, for OpenShift, we kept hearing from customers about the challenges of both configuring clusters in a consistent manner and delivering applications in multi-cluster environments, because we could see that the number of clusters is on the rise: a lot of customers that used to have two clusters now have six clusters, ten clusters. You can see that in the survey CNCF does as well; last year's compares, I think, 2017 to 2020, and the number of respondents running five to ten clusters, or ten to twenty and more, is increasing across all groups. So we kept hearing about the challenge of keeping all of this consistent, and we started looking at the various open source projects in the space. This is what we do at Red Hat: we support community projects and productize the ones that are mature, or more ready, for enterprise consumption. We talked with most of these projects, and Argo really sticks out, both in the maturity of the project itself, it's an extremely popular project used by many, many organizations, with about a hundred references on the Argo CD GitHub repo of public customers using it in production, and in its very active community: you see quite a large number of contributors working around the project, commenting, bringing their use cases. That's an indicator of a healthy community, from our perspective at Red Hat. The worst thing that can happen to a community is a single vendor driving it based on their own view of the space and their own requirements, which is also a threat to the existence of a project if that one vendor stops contributing. We see that Argo CD has a very vibrant community with a lot of vendors present. And we also really like the Kubernetes-native nature of it; it's really designed after the native experience a user would have on Kubernetes. So because of all that, we started getting closer and closer to the community, contributing more and joining it, with Argo CD really becoming the major GitOps tool in the portfolio. And we're really happy to see it progress in maturity in CNCF as well.
So we're hoping to see Argo CD become one of the graduated projects in CNCF quite soon. That's amazing, awesome, yeah.

So we've got two or three minutes left, and I need to hop on over to the next office hour, talking about OpenShift 4.7 questions. Anything else you want to wrap up with?

If anyone is interested: we talked a lot about GitOps and OpenShift, and I know it might sound like it's not that accessible when you're thinking, how am I going to try this if I want to look at it? I know Argo CD, but what does OpenShift GitOps look like? We do have these interactive tutorials that Natale and the team put together. I don't actually have the link handy. The learn site, yeah, exactly, the Katacoda tutorials. I definitely encourage you, if you're interested, to try out OpenShift Pipelines and OpenShift GitOps, together or as separate labs. You get an environment in your browser and a really nicely organized tutorial that walks you through getting your hands a little bit dirty with these technologies and with the process itself; you get a feel for it. And once you do that, if you have any feedback, please do reach out to us. We'd love to talk GitOps; we love these technologies, and we want to see how people use them and what experiences they have with them.

Awesome. Well, folks, as I mentioned, we have another office hour coming up here in a few minutes, so unless there are any last-minute questions, we're going to wrap up here. Thank you, Siamak, Jafar, Natale; you're all my favorites when it comes to GitOps. I'm shocked Christian didn't magically appear out of nowhere. He did in chat, but not on the Zoom, yeah. If you say GitOps three times, he'll appear somewhere. Like a genie. Yeah, there's a great article he just put out on the OpenShift blog; I will grab that here. And one from Siamak, too. Ooh, Siamak did one, too? Yeah. Ooh, well, I don't know which one to grab now. Let me find it; now y'all are making me do this on the fly, this is fun. And there's a third one. And a third one, okay. Are you pasting these links anywhere? No, okay. I'm going to paste them here. I got them. Here's Siamak's. Here's yours, Jafar. And Christian's is right here. Kind of a bevy of blog posts for everybody, to get you up to speed on this stuff. Super important.

All right, folks, thank you very much for tuning in. Stick around if you're on YouTube or the Twitch channel; we'll be coming up with a Q&A session about OpenShift, so feel free to hang out for a little bit. Okay.