My name is Karina Angel, I'm one of the OpenShift product managers, and welcome back to OpenShift Commons. With this series of briefings we're kicking off a deeper dive into what's new in OpenShift 4.8, which is coming out really shortly. As part of my role, I help cover products that sit on top of OpenShift, so today's session is really important: we're talking about OpenShift Pipelines and OpenShift GitOps, and joining us are Jafar and Christian Hernandez, two amazing technical marketing managers for OpenShift. I'd love it if you both could introduce yourselves. Hi, thank you, Karina, and thank you everyone for giving us the opportunity today. My name is Jafar and I work as a technical marketing manager for OpenShift, focusing on Pipelines. Prior to that, I worked as a solution architect for many, many years on OpenShift itself, so I'm really glad to now be part of the cooking area. Cool, thank you, Jafar. Christian Hernandez, a similar background to Jafar: technical marketing, I focus on GitOps, and like Jafar said, I was also an SA for a long time on the OpenShift product. I've seen this product grow so much, and now I'm excited to be, as Jafar said, part of the kitchen on the back side. Okay, so let me go ahead and share my screen, and please let me know if it all works fine. Good. All right, so in this first section we're going to speak about OpenShift Pipelines. As most of you OpenShift lovers already know, we have been providing CI/CD capabilities on top of the platform for a long time, but we've had a really interesting feature cooking up for a while on the platform, and we were happy to announce it as GA in early May. So this is what we are going to cover in more detail.
But before diving into the specifics of what it does, let's have a quick refresher on what has happened in the DevOps space over the last few years. We all know that DevOps is a key approach for delivering high-quality applications into production at a very fast pace, to cope with customers' demands and requirements. Part of the DevOps approach is a pillar around continuous integration and continuous delivery. Both of those processes, or methodologies, are at the heart of making everything as automated as possible, from building the application through running the different integration tests and security checks, et cetera, all the way to releasing the application. Traditionally, there have been separate tools that do continuous integration, things like Jenkins, and other tools specializing in the later phase, which is the actual deployment of the application into different environments. And when we go from building the application to deploying it in an automated way, that's what we call continuous delivery. If we look at how things have been done over the past 10 to 15 years, there were some long-standing solutions around continuous integration, things like Jenkins, and as you all know, OpenShift has provided those capabilities for a long time. But as we saw the space evolve in different directions, we also wanted to make OpenShift the perfect platform to run not only the traditional CI or CI/CD workloads, but also to embrace new Kubernetes-native ways of doing CI/CD with things like Tekton and OpenShift Pipelines, which we will be speaking about today, and even to embrace the GitOps approach, as Christian will be showing later in this presentation. So OpenShift has evolved to embrace a new way of doing DevOps on Kubernetes itself, and OpenShift Pipelines is the product that we provide on the platform for that.
So let's go back over what happened in the last, I would say, 10 years of doing CI/CD, trying to enhance not only the tooling but also the way of designing and thinking about CI/CD pipelines. If we look at the traditional way, things were designed for a different era: containers did not exist yet, and CI or CD basically revolved around having a dedicated set of tooling that you install, that you deploy in different environments, and that you size to be able to have multiple pipelines running at the same time. But those solutions themselves were monolithic applications for the most part, and they were not really designed to fit cloud scale, meaning thousands of pipelines running at the same time in different environments, et cetera. Of course, you had the ability to deploy what we call agents, capable of doing on-demand tasks and scaling out the CI pipeline to some extent, but they had limitations and there were drawbacks. When you started to open up those platforms to the whole company, you started to have things like colliding versions of pipelines, plug-ins, or agent versions, where some projects needed certain capabilities and other projects needed newer ones, for instance, but they couldn't upgrade because they were facing collisions or incompatibilities between all the plug-ins or extensions they needed to run their applications. So what happened is that people started to instantiate dedicated versions, dedicated instances, of those CI or CD tools to cope with those problems. Then things started to evolve: something very big happened in the IT world, and that was the emergence of containers and of Kubernetes as an orchestrator.
The people working in the CI/CD space then started to think about how to evolve the way pipelines are defined, managed, and run by leveraging Kubernetes-native capabilities. That's what happened when all of those actors came together and created what we call the CDF, the Continuous Delivery Foundation, where they started thinking about how to standardize on something that provides Kubernetes-native CI/CD capabilities. The goal was to extend Kubernetes through a standard mechanism, what we call custom resource definitions, that allows you to extend Kubernetes with new features and capabilities. Many actors, big names, got involved in it, and of course Red Hat is a big player in it. The goal was to come up with a new standard that everyone would then implement in their own set of tooling, but at least it would be based on a standard, whereas prior to that, everybody had their own way of describing what a pipeline would look like. If you implemented a pipeline in Jenkins or GitLab or any other solution, there was no compatibility between those tools, because they used proprietary, what we call domain-specific, languages that are interpreted by the CI tool itself. The goal here was to come up with a generic way of describing a pipeline so that any Kubernetes platform would be able to understand it and run it as a native resource. That's where the upstream project Tekton came in, and Red Hat is a big contributor to the upstream project itself. But of course, as we do with anything that we run on OpenShift, you have the upstream project and then what we call the productized downstream distribution, which is called OpenShift Pipelines. OpenShift Pipelines comes as an operator that you can install on the platform itself; it comes out of the box with the platform.
It doesn't require additional subscriptions, so it's part of the core capabilities of the platform. The goal is that it's built for Kubernetes, and it scales very well, because we now have proof that Kubernetes is a very scalable orchestrator, so it's very appropriate for those cloud-scale requirements. And the cool thing is that it also plugs into the security mechanisms of Kubernetes. For instance, when you want to define who can run what on a traditional CI tool, that's something that lives in the CI platform, and it doesn't really integrate out of the box with the security authorizations or RBAC capabilities of the target deployment platform itself. But with OpenShift Pipelines and Tekton, it all relies on the security capabilities of Kubernetes, so you don't have to duplicate who-can-do-what across different solutions to be able to deploy your applications to different environments. And one of the great features that Tekton, and thus OpenShift Pipelines, comes with is this notion of extensibility. Traditionally, if you wanted to deploy to a cloud environment from a Jenkins pipeline, for instance, you would have to install plugins or extensions that understand how to interact with the target environment. You would be relying on plugins that are deployed, developed, and maintained by third-party providers, and you would have to figure out how to programmatically define the logic in your pipeline to interact with those target environments. What we wanted to do with Tekton was to provide an out-of-the-box Kubernetes mechanism that allows you to add more tasks or features to the platform itself without having to develop or write custom plugins to interact with third-party solutions. We will be speaking about that later in the presentation. So, what we did with OpenShift Pipelines is...
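To make the RBAC point concrete, here is a minimal, hypothetical sketch of what plugging into Kubernetes security looks like in practice: a standard RoleBinding granting a pipeline ServiceAccount rights in one namespace, so pipeline runs are governed by the same RBAC as everything else on the cluster. The namespace and binding names are illustrative.

```yaml
# Hypothetical example: bind the built-in "edit" ClusterRole to the
# ServiceAccount that pipeline runs execute under, in one namespace.
# No separate CI-tool permission model is needed.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: pipeline-edit        # illustrative name
  namespace: my-app          # illustrative namespace
subjects:
  - kind: ServiceAccount
    name: pipeline           # ServiceAccount used by pipeline runs
    namespace: my-app
roleRef:
  kind: ClusterRole
  name: edit                 # standard Kubernetes aggregated role
  apiGroup: rbac.authorization.k8s.io
```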
Of course, you have Tekton, which is at the heart of the product itself, and Red Hat is one of the main contributors to the Tekton upstream project. What we do is productize it in OpenShift Pipelines. So we have a native integration with the OpenShift console, where you can not only run your pipelines, see them being executed, and check the logs graphically from the console's pipeline view, but you can also design the pipelines from the OpenShift console. If you have never played with Tekton yet, it's very YAML-heavy: you have to write your pipelines and tasks in YAML to describe what you want to do, which task comes after which, or in parallel, et cetera. You basically need to be a YAML black belt to be fluent with it. So what we wanted to do is give our users the capability to create and design their own pipelines without worrying about the YAML behind the scenes, and the OpenShift console then generates whatever YAML resources are needed. It gives you the flexibility to start with something easy and neat from the UI, and then if you want to make more sophisticated modifications, you can switch to the YAML view and add whatever you want from the YAML perspective. So everyone can find something interesting for them. One of the major improvements is that we don't need a separate CI or CD solution anymore. If you've been looking at how things evolved over the last four years or so, some vendors started to provide containerized versions of their solutions, and there's nothing wrong with that, but it requires administration: you had overhead in the way you had to deploy the solution, upgrade it, maintain it, and cope with the incompatibilities I mentioned. Now Kubernetes becomes the running CI/CD platform, and it natively understands what a pipeline is, et cetera. I will be speaking about that in more detail in just a few slides.
So that's one of the great improvements, I would say: you don't need to install any CI or CD solution anymore; it becomes part of the OpenShift platform itself. There's a lot of interest in the community and the outside world, and people have started to create their own tasks for this Tekton ecosystem. So basically, if I'm a third-party provider and I want to integrate with pipelines, I can create my own set of tasks that I publish and make available to anyone who wants to use them, and we will be seeing that. That's what we call the Tekton Hub, and we'll speak about it in more detail. Now let's have a very high-level overview of what it brings to Kubernetes and how it all works together. Basically, it enriches Kubernetes with a few new concepts. The main ones are the pipeline, the task, and the step. A pipeline is basically a graph that describes the overall workflow you want to achieve, and it's composed of different tasks that run either in sequence or in parallel. Kubernetes now starts to understand this notion of a pipeline. If you're familiar with Kubernetes, you know that the kind keyword describes the type of resource you are handling, with things like pods or services, these native Kubernetes types. But when you install OpenShift Pipelines, Kubernetes starts to understand: okay, I have a new resource type called Pipeline, and I know exactly what to do with it. When you run your pipeline, it's going to run pods and containers that execute everything defined within that pipeline to do whatever CI or CD tasks you need. The next essential concept is the notion of a task. A task does something specific; for instance, the example I have here builds a container image. And all tasks, and that's the beauty of it, run from a container image.
So basically, if I want to build a Java application, I can use a container image that has all the binaries I need, for example something like Maven, if I want to do a Maven build of my application. If I want to build a container image from a Dockerfile, for instance, I can use something like the Buildah image, and I can provide parameters to do whatever I want: I can do a build, I can push to an external registry, and such things. That's the beauty of it: everything is defined in container images, and as long as you can reference that container image, you can reuse it in your pipeline. The final component is the notion of a step. Inside a task, you can have different steps that are performed; for instance, I'm first going to do a Maven install of my dependencies, then a Maven package, or whatever I want to do within my specific task. And all of the steps of a task happen within the same pod, so they can share resources. You can, for instance, have a step that clones the Git code and stores it somewhere in your Kubernetes environment in a persistent volume, and then another step can reuse that storage, find the code, and build the application. Once that's done, you can run some tests, and then package your application into a container image in a different step and push it to your container registry, et cetera. So these are the essential concepts: the notion of a pipeline, the notion of a task, and the notion of steps. A final essential concept is the notion of workspaces, which is basically what allows you to share data between different tasks, as I mentioned: for instance, your first task clones the code, and a second task builds it, et cetera.
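The pipeline/task/workspace concepts above can be sketched in a minimal Tekton Pipeline. This is a hedged, illustrative example, not the speaker's demo: the repository URL, image reference, and names are assumptions, and the `git-clone` and `buildah` task references are reusable catalog tasks.

```yaml
# Minimal sketch: two tasks run in sequence, sharing the cloned
# source through a workspace (a volume provided at run time).
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: build-app
spec:
  workspaces:
    - name: shared-data            # shared between tasks
  tasks:
    - name: fetch-source
      taskRef:
        name: git-clone            # reusable catalog task
      workspaces:
        - name: output
          workspace: shared-data
      params:
        - name: url
          value: https://github.com/example/app   # illustrative repo
    - name: build-image
      runAfter:
        - fetch-source             # sequencing between tasks
      taskRef:
        name: buildah              # builds from the Dockerfile
      workspaces:
        - name: source
          workspace: shared-data
      params:
        - name: IMAGE
          value: image-registry.example.com/app:latest   # illustrative
```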
All right, so OpenShift Pipelines comes out of the box with a lot of what we call cluster tasks, reference tasks that can be used by anyone who has access to the cluster. But if you want to extend your pipeline environment with new tasks, for instance for a very specific solution you want to integrate with, then as long as you can create a container image for it, with, for example, a CLI embedded in it or a script that does whatever you need, that's something you can share and reuse at the enterprise level. And to make that even simpler, a marketplace has been created: it's called the Tekton Hub, and it's basically a marketplace where people publish those container images and tasks that perform specific actions. For instance, Christian will be speaking about OpenShift GitOps, which relies on a solution called Argo CD, and here we have an example of a task that can interact with Argo CD directly from an OpenShift pipeline. So that's a cool way to find things: if you want to run Ansible from your OpenShift Pipelines, or you want to integrate with AWS, you can first look in the Tekton Hub to see if there's something there. And if not, you can say, okay, I'm just going to build my container image, push it to my registry, and publish my task, and then people can start using it in their pipelines. We have also created a very nice plugin for VS Code, so you can visualize, run, or troubleshoot your pipeline executions directly from your VS Code environment; you don't have to switch back and forth between your development environment and OpenShift if you want to remain focused on your development tasks.
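Writing such a custom, shareable task can look like the following sketch. The task name, image tag, and Maven goals are illustrative assumptions; the point is that each step is just a container image plus a script, so anything you can package in an image can become a task.

```yaml
# Hedged sketch of a custom Task: any image with the tooling you
# need (here, Maven) becomes a reusable building block.
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: maven-build                    # illustrative name
spec:
  workspaces:
    - name: source                     # where the cloned code lives
  steps:
    - name: test
      image: maven:3-openjdk-11        # illustrative image tag
      workingDir: $(workspaces.source.path)
      script: mvn test
    - name: package
      image: maven:3-openjdk-11
      workingDir: $(workspaces.source.path)
      script: mvn package -DskipTests  # steps in a task share one pod
```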
It gives you the ability to trigger your pipelines from either the command line, there's a tkn CLI that you can use, or from this plugin that you can install in VS Code, and I believe in CodeReady Workspaces too, where you can visualize your pipelines and interact with them graphically directly from your development environment. So that's about the GA release that we brought to OpenShift in early May; I believe it was on May 3rd, the first GA release of the OpenShift Pipelines operator. But there are also some new improvements coming with OpenShift 4.8, and this is a feature I'm really excited about: it's called Pipelines as Code. Basically, it allows you to define your pipeline as code inside your source code repository, and once an event gets triggered on the repository, for instance a pull request, it's going to look for the pipeline definition within the application's repository and trigger the pipeline automatically on the platform. It's a very nice feature, and I have one slide that goes into more detail about it. It's in dev preview for the moment, but it's going to be very interesting, and there's a lot of interest in taking it to Tech Preview and then GA in the next releases. It takes after things like GitHub Actions, where you define whatever logic you want within the same repository as your application, and it gets triggered automatically when specific events occur. This slide explains in more detail how it's structured and how it works: basically, in your application's Git repo you create a .tekton folder, where you define the pipeline you want to be executed, and in the pipeline you define what types of events are going to trigger it.
When that specific event happens, for example a pull request, it's going to instantiate the pipeline directly on OpenShift, and everything will run on demand as containers and pods on top of the platform. Once it's done, it can also auto-prune itself if you have too many instances that have already run, so you keep your execution history clean if you only want to keep the last 10 runs or so. So let's look at an example of how this upcoming feature will work. I have this source repo, and in there is a .tekton folder, and basically what it says is: whenever I have a pull request, here's the pipeline definition that needs to run. It says whenever there's a pull request on the main branch, you have to run this specific pipeline. Now, as a developer, I don't have to set up anything in my OpenShift namespace, because once I trigger that specific event, and it can be a tag push, a release, or a pull request, as mentioned here, this is going to automatically create everything we need on OpenShift, run the pipeline, and report the results back in the GitHub checks. So there's a bidirectional interaction: the pipeline runs on OpenShift, but at the same time it updates the GitHub status, so we can see which steps are being run, and it surfaces the logs directly in GitHub, et cetera. It's a very neat upcoming feature, and there will actually be a talk about it in one of the OpenShift Coffee Break sessions that we run in EMEA every other Wednesday, so we'll keep you posted whenever we have new material on that. Before handing the presentation over to Christian, I wanted to invite you to check out some learning material we have: you can go to learn.openshift.com and you will find a tutorial on the OpenShift Pipelines features we just talked about.
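A definition under the .tekton/ folder can look roughly like the following. Since the feature is still in dev preview, treat this as a sketch: the annotation names follow the upstream Pipelines-as-Code project, and the run and pipeline names are illustrative.

```yaml
# Sketch of a .tekton/ definition: annotations declare which Git
# event and target branch should trigger this run.
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  name: app-on-pull-request          # illustrative name
  annotations:
    pipelinesascode.tekton.dev/on-event: "[pull_request]"
    pipelinesascode.tekton.dev/on-target-branch: "[main]"
spec:
  pipelineRef:
    name: build-and-test             # pipeline defined alongside in .tekton/
```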
It's a hands-on scenario where you can play with it; it will instantiate an OpenShift instance for you on the fly, and then you can try it and see how it works. So thanks again, I hope this gave you an overview of all of the nice capabilities that we are adding to OpenShift in terms of Kubernetes-native CI/CD. One of the key aspects is how we can have continuous feedback when we deploy applications into production. We don't want to see deployment as a one-shot thing after which your application lives on its own; there's a more sophisticated way of getting information about what's actually running in production and making sure the lights stay green. And that's what Christian will be speaking about now in the OpenShift GitOps section. Thank you very much, and now over to you, Christian. Thanks, Jafar. Let me share my screen, and let me know when you can see it; just give me a holler here. Yeah, it's fine, you can go. Sounds good. All right, cool. Thank you, Jafar. So yeah, again, my name's Christian Hernandez, technical marketing, one of Jafar's counterparts here, and I'm going to be talking about OpenShift GitOps: what we've been doing, what's new, and what's up and coming. I'm going to go pretty fast and try to power through some of these things so we can get to your questions, so again, feel free to drop questions in the chat and we'll get to them at the end. Before I actually talk about OpenShift GitOps, I want to talk about GitOps itself, because I want to abstract away the tooling and get at what GitOps is as a practice first, before we dive into the tool and what it provides for you. So what is GitOps? GitOps is a purposely bad term; it's supposed to be kind of an earworm, it's meant to catch your attention. But there is an actual practice behind it.
From a 10,000-foot view, and in the next few slides we'll dive a little deeper, but from a 10,000-foot view, you're using Git as the source of truth, meaning that not only are you storing your application data and your Kubernetes manifests in there, but your infrastructure in general: the entire platform is described in a Git repo, and you treat everything as code. That brings me to the last point here, which is that you're doing everything via Git workflows. Currently, developers do things like: I want to make a change to some code, let me do a PR. Well, now the operations folks' experience is going to be exactly the same: I want to add a new Kubernetes node, that's a PR. I want to scale the infrastructure, that's a PR. I want to build a new cluster, it's a PR. So you're doing everything via Git workflows, something that is understood generically in the industry. Diving a little deeper, I want to talk about the GitOps principles themselves, not talking about any tool or any specific implementation of GitOps, just the principles and what they are. Someone asked me: is GitOps just a buzzword, or is it an actual thing? The answer to both questions is yes; two things can be true at the same time. It is purposely a buzzwordy word, but it's also an actual thing. I'm a member of the CNCF OpenGitOps group, which is kind of like a subcommittee, or a sub-SIG, I guess, of the Application Delivery SIG in the CNCF, where, as a vendor-neutral group comprised of people from Red Hat (myself), Weaveworks, Amazon, Microsoft, and Codefresh, we got together to define driving principles for what GitOps is.
Putting it really simply: one, a system's desired state must be declarative. We're talking about a declarative state, which fits really nicely into the cloud-native architecture of Kubernetes. In order to be doing GitOps, your system's desired state must be declarative. This goes to the idea of infrastructure as code, so if you're doing infrastructure as code, congratulations, you already hit that first pillar, the first principle. Number two: your definitions, everything that you're storing, have to be versioned and immutable, also known as: keep it in SCM, aka Git. This is where the Git in GitOps comes from. It doesn't have to be Git; it can be any SCM. But the idea is that you want to keep track of things using Git workflows, and those versions have to be immutable, and this is why Git fits really nicely here. Number three, and I believe this is key, the most important principle, and what differentiates GitOps from things like an event-driven architecture: you can have infrastructure as code with things that are more event-driven, but what separates GitOps from a more traditional DevOps practice is that reconciliation of state must be continuous. You have a software agent sitting on your cluster that's always running, always reconciling the cluster: it takes the desired state and the running state, which I'll go over in a second, and makes sure they're reconciled, continuously. And number four: declarative operations, or what we like to call "yes, we really do mean it," meaning that operations should be done by mutating that declaration. So it's essentially a PR, as I explained before.
So operations must be done via mutation of that declaration, aka operations as a pull request. One, two, and four are, I want to say, fairly self-explanatory, but number three, since I said it was important, I want to spend a little more time on before I dive in. The heart of GitOps is that you have your desired state in Git, that makes sense, and you have what you're currently running, and the differentiating factor of GitOps is the CD part: this continuous delivery, continuous reconciliation, continuous check. Take how Kubernetes works, just primitive Kubernetes: you have your declared state, which is your Deployment manifest, and you have your running state, which is how many pods you have. Let's say you have a Deployment that says I want two pods running, but you have one pod running; that ReplicaSet controller sees the difference and reconciles it, making sure that your desired state matches the current running state. And when it runs again and you have two replicas, it does nothing; it just says, okay, your desired state and your current state match, so I don't have to reconcile. Now take that up a level, not only to your infrastructure but to your application delivery. It's that same approach: we're taking that idea from Kubernetes, but using it to operate not only a Deployment, but your entire system, on that same principle. So, some of the things that you get with GitOps. It's a standard workflow, meaning everyone can understand it. I come from an operations background, I'm an ops guy, and even I have used Git; Git is something that even operations folks use nowadays, and it's a standard workflow.
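The reconciliation example above boils down to an ordinary Deployment manifest; the names and image are illustrative. This is the declared state: if only one pod is running, the controller sees the difference and reconciles; if two are running, it does nothing.

```yaml
# Declared state from the example: "I want two pods running."
# The ReplicaSet controller continuously reconciles toward this.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                     # illustrative
spec:
  replicas: 2                      # the desired state
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: web
          image: quay.io/example/my-app:1.0   # illustrative image
```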
So it's familiar to everyone. I like to glom points two and three together: visibility and audit, and enhanced security. You get enhanced security from GitOps because you have that visibility and audit: everyone sees what's going on in your system, everyone sees the changes, who made them, who approved them. If you're using protected branches, you have that process in place; the security folks can take a look at it. It's out there for everyone to use and for everyone to see, so you get to catch some of those things ahead of time. And if you're deploying many clusters, if you're taking care of many clusters, you get consistency across all of them. If you have multiple clusters and you want to make sure they're all in sync, say a dev environment with five clusters that you want always in sync, you can use these practices to make sure of it. And here's a high-level workflow; this will seem familiar, since most people doing DevOps are doing something like this: you have some sort of source code repository, there's a CI system that builds and tests, and you end up with an image in an image registry, with the CI system pushing it there. The CI system may make a pull request to the configuration repository, and then some CD system, whether it's event-driven or the same CI system, gets the new image from the registry deployed on the cluster, either via push or pull. And again, one last time, hammering it home: the differentiating aspect, what makes GitOps GitOps, and what makes it different from a traditional event-driven DevOps workflow, is that the software agent is always just running there, always continuously delivering.
So when someone makes a PR, and as soon as someone merges that PR, your declared state changes; the software agent then detects that drift, sees there's a difference, takes action, and always keeps your cluster completely in sync. Now that we have that baseline of what GitOps is and what it gets you, let's talk about OpenShift GitOps specifically. As Jafar explained a little earlier, OpenShift GitOps is really the downstream version of the upstream project; it's powered by Argo CD. OpenShift GitOps is based on Argo CD, and what you get with that is that, like everything in OpenShift 4, it's operator-driven, so you subscribe and you get to enjoy everything that comes with operators: automated upgrades for the operator, multi-cluster configuration management, and an opinionated GitOps bootstrapping with this downstream, productized version of Argo CD. Argo CD is, as the name implies, a continuous deployment, continuous delivery tool, and it's really built on the fact that it always keeps your cluster in sync with your configuration in Git. It is that CD part I described earlier, that agent that sits on your cluster to make sure everything is always constantly in sync. That's Argo, and that's what Argo does. And you can track different branches and different paths, so you have granular control over deployment. It works not only with your stateless applications but also your stateful applications; a lot of people, I guess most people, are running stateful applications, and stateful applications aren't going away.
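The "agent on your cluster tracking a repo, branch, and path" described above is expressed in Argo CD as an Application resource. The following is a hedged sketch: the repository URL, path, and names are assumptions, but the fields shown (source, destination, automated sync) are the core of the Application spec.

```yaml
# Sketch of an Argo CD Application: point the agent at a repo,
# branch, and path, and it keeps the destination in sync with Git.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app
  namespace: openshift-gitops        # where the Argo CD instance lives
spec:
  project: default
  source:
    repoURL: https://github.com/example/app-config   # illustrative repo
    targetRevision: main             # track a branch, tag, or commit
    path: overlays/production        # path within the repo
  destination:
    server: https://kubernetes.default.svc   # the local cluster
    namespace: my-app
  syncPolicy:
    automated:
      prune: true                    # remove resources deleted from Git
      selfHeal: true                 # undo out-of-band changes (drift)
```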
Argo CD gives you the power to control sync order, for rollouts more complex than something stateless. And since you're using Git, you get to leverage all those Git workflows, right? If you need to roll back, you can roll back just by changing the Git commit, you can roll forward, that sort of thing. Argo CD has built-in templating support. I'm a fan of DRY, right? Don't repeat yourself. Since you're syncing YAML, you don't want to copy that same YAML over and over and over again. So there's templating support; Kustomize and Helm are the two big ones, and Jsonnet is another, but I think most people use either Kustomize or Helm. I'm a big fan of Kustomize; I use it all the time. And you get to visualize: Argo CD comes with a nice UI that lets you see how your application is spread out throughout your environment. And speaking of how you deploy your applications with OpenShift GitOps, you have flexible deployment strategies to fit whatever needs you have. You have a centralized, push-style, hub-and-spoke design, where Argo CD sits on a cluster somewhere, manages multiple Git repos, and deploys out to multiple clusters, whether that's OpenShift or plain Kubernetes. You have the cluster-scoped model, which is probably what most people use, and the one I use a lot, where essentially you install one Argo CD per cluster. So if you have five different clusters, you'll have five Argo CDs, and the scope of each Argo CD deployment is that cluster itself; it takes care of that entire cluster.
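The sync-order control mentioned above is done with sync-wave annotations; resources in lower waves are applied and healthy before higher waves start. A sketch with hypothetical resource names, specs trimmed for brevity:

```yaml
# Wave 0: the database comes up first.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: postgres
  annotations:
    argocd.argoproj.io/sync-wave: "0"
spec: {}  # StatefulSet spec omitted in this sketch
---
# Wave 1: the app that depends on the database syncs afterwards.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
  annotations:
    argocd.argoproj.io/sync-wave: "1"
spec: {}  # Deployment spec omitted in this sketch
```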
With OpenShift GitOps, you also have what we call the application-scoped model. This is the multi-tenant deployment method of Argo CD, where Team A and Team B each manage an application stack that may spread across multiple namespaces on a single cluster, and each uses its own Argo CD for that deployment. This is that last-mile CD aspect of Argo CD, where Argo deploys to and controls a few namespaces, and Team A and Team B may or may not know about each other; they don't necessarily need to care about each other. That's the multi-tenant deployment. So, earlier I mentioned the opinionated bootstrapping for Argo CD, right? This is in dev preview: what we call the GitOps Application Manager, or kam. The idea is that it takes you from zero to GitOps. Imagine I'm a developer starting a new project, a green field, and I want to do things in a GitOps-friendly way. Where do I start? What are the best practices? The idea is that this bootstrapping gives you those best practices out of the box, in an opinionated way. You run a kam bootstrap, give it information about your deployment, and it builds out all that directory structure and configuration for you. It'll configure webhooks, it'll configure Argo CD, and it'll use Kustomize to templatize everything for you. You can integrate with secret managers, too, whether you're using Sealed Secrets, HashiCorp Vault, or External Secrets. And any time you want to progress your application, you can use kam's environment commands to add a staging environment, add production, and so on.
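Since the bootstrapped layout leans on Kustomize, the per-environment structure looks roughly like a base plus overlays. This is a hand-written sketch of that pattern, not kam's exact output; the file names are illustrative:

```yaml
# base/kustomization.yaml: the shared manifests every environment uses
resources:
  - deployment.yaml
  - service.yaml
---
# overlays/staging/kustomization.yaml: staging reuses the base and patches it
resources:
  - ../../base
patches:
  - path: replica-count.yaml  # e.g. a patch bumping replicas for staging
```

Each Argo CD application then points at one overlay directory, so promoting a change between environments is just a Git change in the right overlay.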
And then, talking about what's coming up next, you'll get this view in the developer perspective in the UI, what we call the environment view, where you can actually see your environments: you can see dev, stage, and so on, spread out and fully integrated into the UI. So this is what you get with a kam bootstrap: it gets you this whole pipeline from beginning to end, with a Tekton pipeline configured for you. As Jafar said, you don't have to be a YAML expert; you get a known-good template and a known-good Argo CD directory structure, all set up for you. Again, this is in dev preview. If you look up the GitOps Application Manager on GitHub, you can see all the changes we've been making. Like the famous saying goes, feedback is welcome, PRs are welcome. It's something that's ever evolving and currently in dev preview. With that being said, I do want to talk about what's new in 4.8. We released OpenShift GitOps in the tail end of 4.6... 4.7, I think it was 4.7, sorry, 4.7. So we did 1.1 in 4.7, and we're moving to 1.2 in 4.8. Some of the things we're adding out of the box include the integration with Red Hat SSO (Keycloak). For those using the upstream, that's all currently a manual process: you have to spin up Argo CD, spin up the SSO, wire up the connection yourself, and make sure everything's set up correctly. Now that's out of the box; the operator does it for you. Again, OpenShift 4 is operator based, so we do that via the operator now in 1.2. Also in Argo CD: the privileged configuration.
That's been simplified, where you can actually hand the keys to your namespace over to someone else, right? So if Jafar has Argo CD set up for his CD, and I don't really want to manage the CD part of it, I just want someone to do it for me, I can actually annotate my namespace to let Jafar's Argo CD manage it. So 1.2 simplifies the privileged configuration for Argo CD. We're also enhancing the environment view in the developer perspective of the OpenShift UI, so you can see your application as an application rather than as namespace-driven. If you have an application that spans many namespaces, you can manage that application without having to switch contexts. One of the big things coming in 1.2, one of my favorite things, is ACM and Argo CD integration. There's going to be tighter integration between ACM and Argo CD. For instance, ACM will now recognize that you're using Argo CD and will pull that topology view into its UI. It'll also have native support for things like ApplicationSets, where you can define application sets at the ACM level and have that bleed down into your managed clusters. So it's going to be really, really tight integration. As for what's up and coming: we went GA in the second half of this year, and I just talked about what's coming in 1.2. With 1.3, it's going to get even better at the end of this year and the beginning of next year. We're going to have the namespaced Argo CD, remember that deployment mechanism, and you'll be able to use OpenShift authentication with that as well. We're going to have OpenShift GitOps on Dedicated: we're making updates to Argo CD so you can run it on OpenShift Dedicated.
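In current OpenShift GitOps releases, that hand-over is done by labeling the namespace; to the best of my reading of the docs the key is `argocd.argoproj.io/managed-by`, pointing at the namespace where the managing Argo CD instance lives. A sketch with a hypothetical team namespace:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: team-a-apps  # hypothetical namespace being handed over
  labels:
    # Let the Argo CD instance running in the "openshift-gitops"
    # namespace deploy into and manage this namespace.
    argocd.argoproj.io/managed-by: openshift-gitops
```

The operator sees the label and wires up the RBAC so that Argo CD instance can manage the namespace, without the team granting permissions by hand.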
And one of the cool things being built is that Helm chart support will be added to the GitOps Application Manager, kam, where you can actually specify Helm charts. So kam now kind of becomes a centralized tool for managing your CI/CD process. I do want to leave some time for questions, and we're approaching the top of the hour, so I do want to say thank you. If you want to learn more about GitOps, there's learn.openshift.com, like Jafar said, slash GitOps: you'll learn about Tekton, you'll learn about Argo CD, and you'll see how all those pieces fit together. Catch my biweekly show, GitOps Guide to the Galaxy. If you missed past shows or want to see what we've been talking about with GitOps, it's red.ht/gitops; go there and you'll see the playlist. There's tons of content for you to watch over the weekend. So with that, I'll stop sharing my screen and we'll see if we have any questions. Thanks, Christian, that was a whole load of information. Yeah, it's a brain dump. It's awesome. So could one of you bring the resources back up on the screen? There's a question about Argo CD resources. Oh, yeah. So at first I read it the same way you did, as being about learning resources. But I think the question was: if you have some resources, actual pods or services, that are stuck, how do you troubleshoot what's going on? So Christian, did you see the question? I did not see the question, I was presenting. What was the question? Yeah, so the question is: are there any resources, not educational resources, that explain how to troubleshoot an Argo CD app that's stuck in sync status, when the information provided by Argo CD is not enough? Yeah, well, this is a troubleshooting question.
So I'll be the first to admit, and they are working on this, that the documentation is kind of lacking, both upstream and downstream. We did hire a lot of people who are ramping up to do documentation, both upstream and downstream, so that's coming. For some of the things Argo CD provides, you really have to go deep into the weeds, meaning you have to look at the controllers. There are a lot of controllers in Argo CD: there's a controller for the Git repo, there's a separate controller that actually does the sync, and there's a separate controller that does application sets. So you have to try to figure out where the problem is. First of all, if it's Argo CD specific, it'll be in one of those pods; you'll get a log, and the logs are pretty verbose, so there's plenty of information there. If it's not in one of those, you have to see if it's an actual OpenShift issue, like with RBAC; RBAC and permissions are probably the two biggest things on the OpenShift side of it. And the last thing is that Argo CD sometimes doesn't understand your CRDs. So if you're doing things with CRDs, you may have to update the configuration to make Argo CD a little smarter about the CRD. Without knowing the specific issue, those are some of the things I'd look at. Yes, so maybe to add just a few comments to that. As Christian said, if the information is not showing up in the Argo UI itself, you'll probably have to look at the infrastructure level, at the pods, at what's happening within the Argo application itself. One tip would be to try the OpenShift logging UI, where you can aggregate whatever comes from the Argo CD components, so you don't have to switch back and forth between different pods to try to understand if something has happened in the ten or so pods that contribute to Argo CD working on OpenShift.
So you could look in the aggregated logs view and build some specific dashboards to capture those types of events. And, yeah, the second thing, as Christian said, is that it could be related to the events: something misconfigured with the service account or something like that. So check the Kubernetes or OpenShift event stream and see if something is preventing the tasks from happening. It could be something like that also. And this is where it would have been nice to have CMAC and, you know, capture all of this. We'll take that back to product management and say, hey, we shouldn't be asking people to build their own dashboards to troubleshoot. And as part of the GitOps working group in the CNCF that you discussed earlier, Christian, is this an area the working group is looking at, making it easier to troubleshoot? Because the whole point of GitOps is making life easier, right? Yeah, yeah. It's part of the best practices, or the implementation, almost like the reference architecture aspect of it, which is coming later. We still need to figure out the principles and firm those up. But yeah, definitely, that's something we're working on upstream: hey, these are the best practices, these are some of the things to look out for. Well, we do have to wrap up shortly; after this is the OKD working group, so definitely stick around for that. But as for some last-minute thoughts from Jafar and Christian, on integrating OpenShift Pipelines and GitOps, what would you like to leave everybody with? Yes, I think there is nothing better than learning by experimenting.
So I would say my suggestion is: please go and try those learning resources that we have for you, so you can get familiar with it all. And if you want to provide feedback, I don't know if there's a way other than giving out our emails, but we can also provide the Slack channels you have there for the Commons briefings; if you want to provide some feedback, we can have a look at it there. So I would say that's the call to action. And check Christian's regular GitOps show, and I'm setting up a new show on Tekton that we just started, like two sessions ago. So you can also subscribe to that and learn new things about Tekton and OpenShift Pipelines, I would say every other week. Yeah, so just kind of echoing what Jafar said: start off slow, because as you progress, you'll find, especially with GitOps for me, that your opinion changes as you learn more and more, so it's an evolving thing. So definitely try it out yourself, with some of the things that we put out for you. Awesome, thank you. Thank you, everybody, for joining us. Next Tuesday is another deep dive into what's new in OpenShift 4.8, so please join us again, same time. Thank you both, and we'll see you out.