And with that, I would like to welcome everyone to DevConf 2022. My name is Lucy and I will be the moderator for today's session. And it's my pleasure to announce Martin Jackson's talk: GitOps for Hub and Edge, patterns all the way down. If you have any questions, I would like to remind you that there is a Q&A that you should definitely take advantage of, and we will have Q&A time at the end. And I will hand it over to Martin. All right, thank you very much and welcome everybody. It's great to be here this morning, afternoon, or whatever time it happens to be in your particular time zone. You may not recognize me; I'm new here. My name is Martin Jackson. I'm a software engineer for Red Hat. I joined Red Hat in July of 2021 after a long career working for a very large customer. I am on the Validated Patterns team in the Ecosystem group, part of the CTO organization at Red Hat, and this is my first time speaking at DevConf.CZ and I'm really excited to be here. So GitOps is one of the hot topics for DevConf this year, and this is yet another one of the GitOps talks. This is the introduction to the Validated Patterns framework. It is a companion piece for the lab and workshop that my teammates Lester Claudio and William Henry are putting on in about an hour, and I will also be in that session to help people through the demos and things that are going to be part of that workshop. But again, GitOps. Okay, so maybe you are not from a Kubernetes background, or you don't have a huge amount of background in Kubernetes and OpenShift. I certainly didn't when I started at Red Hat. But let's say you're all in now on Kubernetes and OpenShift. Awesome. Now what do you do? OpenShift comes with some good demos, the Rails and Postgres thing, and lots of people do the Nginx demos as part of DO180 and a lot of the early training in Kubernetes. But what does it look like?
And what are some best practices for installing and actually operating larger-scale applications under OpenShift? That's the idea with step two: you are here. And the idea with the image is that there is nothing quite as intimidating to a writer as a blank page. So if you're an administrator who's new to administering Kubernetes and you're not exactly sure where to start or how to lay things out, the Validated Patterns framework might be an answer to help you get started and get off the ground. Step three, of course, is always profit. GitOps itself is a concept that has emerged in the last couple of years for declarative workload management. What we mean by that is that in GitOps, all of the application state is intended to be held in a Git repository, such that when you push changes to the Git repository, those changes are reflected in the way that the application is actually running. So all of the YAML that is part of your Kubernetes application is kept in a Git repository, using various techniques for expanding on and programmatically generating Kubernetes YAML. And then when you push changes to that YAML to your repository, those changes should be directly reflected in the way that your application is running. Say you want to scale it out, add a new application element, change a deployment, and so on. The advantage of this approach is that you should be able to fast-forward, and go back in time if a deployment doesn't go well: you should be able to revert it back out using all the tools that you're familiar and comfortable with in Git. Now in GitOps we have a number of native tools that will help us along that path. The first and most important of them is Argo CD, which is the agent that runs inside your cluster, takes the YAML, applies it into different namespaces, and can even apply it across clusters.
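To make the declarative idea concrete, here is a hypothetical manifest of the kind that would live in such a Git repository. All the names and the image here are made up for illustration; the point is that scaling out is just a commit.

```yaml
# Hypothetical manifest kept in a GitOps repo. Scaling the app from
# 2 to 5 pods means editing replicas and pushing the commit; backing
# out a bad change is "git revert" plus another push.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sensor-frontend
  namespace: factory
spec:
  replicas: 2          # change this in Git to scale out
  selector:
    matchLabels:
      app: sensor-frontend
  template:
    metadata:
      labels:
        app: sensor-frontend
    spec:
      containers:
        - name: frontend
          image: quay.io/example/sensor-frontend:v1.2
          ports:
            - containerPort: 8080
```

No one runs `kubectl apply` by hand against this file; the agent watching the repository does the applying.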
So Argo CD is the heart of GitOps. It's been productized by Red Hat as the OpenShift GitOps operator. Tekton Pipelines serves the role that has traditionally been served by tools like Jenkins, CircleCI, and a bunch of others like that. It is the build and test engine for GitOps deployments. Open Cluster Management, which has been productized by Red Hat as Advanced Cluster Management, is there to help manage the complexities of doing multi-cluster deployments. Since this is an edge talk, one of the biggest ideas behind GitOps here is that we want to have some tools and strategies for managing multi-cluster deployments where there's a hub cluster and one or more edge clusters. And finally, Ansible Automation Platform is included in the Red Hat umbrella of GitOps. We don't have a direct deployment or demonstration of it today, but some patterns that we're working on in the future will include Ansible Automation Platform as part of the GitOps strategy. And so now that we've introduced GitOps, what is Validated Patterns? Validated Patterns is a framework for dealing with these different GitOps tools. It provides a somewhat opinionated way of applying these different concepts and making these different tools work together. So you can say that Validated Patterns is here to simplify things, both for Red Hat engineering internally and also for our customers, partners, and users. One of the things that we want to emphasize is that a validated pattern is a living GitOps architecture. When you take a validated pattern and actually deploy it, you get a running Kubernetes application ecosystem. And you can inspect it, you can see what makes it tick, you can understand it in ways that can be very difficult with some demo frameworks. And our goal with Validated Patterns is that each one should solve a real-world problem. It should do something useful and interesting.
And so far, we've been focused on OpenShift and ACM as the targets for our GitOps patterns, but the next pattern that we're going to work on is going to include RHEL for Edge deployment and Ansible Automation Platform. I would also like to point out that our framework provides some ways to use the OpenShift GitOps operator and ACM to manage workloads; that's its whole purpose for being. We have hub components, we have edge components, and this is our recommended way of making those things work together. The link below that you see is our primary community website, so feel free to visit that. These slides, by the way, are attached as a presentation to the Sched invitation for this session. What do we see as the benefits of validated patterns? For Red Hat, we can tie together a bunch of commonly used or interesting technologies in useful ways. There are a lot of operators available out there. How should they all work together? How do we tie them together? How do we lay things out across namespaces and clusters? This is a way to explore that. We can also enhance the integration story. Have you ever been in a situation where you were in the middle of a release and a feature of the new release regressed or broke something in one of your components? It happens sometimes. And by having these validated patterns, where Red Hat engineers are actually responsible for developing and maintaining them, that helps us enhance that integration story, to potentially spot some of those problems before they make it to users and, hopefully, prevent them altogether. For partners, we believe that it will strengthen and deepen our joint value propositions, because Red Hat is not going to publish software and products that solve every problem that an organization may have.
And so we can better understand where our solutions complement each other, and understand better how we can go forward together and solve the needs of our joint customers and users. For users, we see a big advantage in being able to get their hands on much more substantial demo applications than is typically the case. And when you see the extent of the Industrial Edge validated pattern, which is our first officially published one, you'll definitely see what I mean: there are a lot of different components to it. And we hope that these patterns will help you build confidence in various deployment strategies and in the technology choices that are made in these patterns. Again, validated patterns are living GitOps architectures. When you install a pattern, or develop a pattern, or contribute to a pattern, it should be easy to pick it up, and you can inspect it, explore it, look around in it, understand what's happening at the Kubernetes layer, understand what's happening at the ACM layer, understand what's happening with all the different tools. Let's take a bit of a walking tour of the major components of the pattern. First off, we have Argo CD, the OpenShift GitOps operator. Argo has a cute little mascot, and we've got the logo there that we use for the OpenShift GitOps framework. Argo CD's role in the pattern framework is to use the Git repo as an authoritative source of data, so that when we make commits to that repo, those commits and changes actually change the running state of your application. Argo CD is built on an eventual-consistency and automatic-retry mechanism. The way it works in our patterns so far, Argo CD will poll the designated repository every three minutes looking for a new commit. And if it sees a new commit, it will render the YAML in the repository and apply any of those changes to Kubernetes as intelligently as it can.
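The repo-to-cluster wiring just described is expressed as an Argo CD Application resource. Here is a minimal sketch; the names and repository URL are hypothetical, not taken from any actual pattern.

```yaml
# Hypothetical Argo CD Application: keeps the "factory" namespace in
# sync with the manifests directory of an example Git repository.
# A new commit on main is picked up on the next poll and applied.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: factory-app
  namespace: openshift-gitops
spec:
  project: default
  source:
    repoURL: https://github.com/example/factory-gitops.git
    targetRevision: main
    path: manifests
  destination:
    server: https://kubernetes.default.svc
    namespace: factory
  syncPolicy:
    automated:
      prune: true      # delete resources that were removed from Git
      selfHeal: true   # undo manual drift on the cluster
```

The `automated` sync policy is what makes the Git repository authoritative: drift on the cluster is corrected back toward what the repo says.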
It also has its own UI, which is integrated nicely into the OpenShift console through OAuth, and it has a nice CLI tool called argocd as well. Next up, we have Tekton Pipelines. Tekton does what Jenkins and other CI/CD tools do, with the added benefit that in Tekton the pipelines are actually Kubernetes-native resources. So you build your pipelines in YAML, and those YAML files live alongside your application definition and your other Kubernetes application definitions in your repository. And you can change your pipelines by making commits to the YAML files in the repository. Argo CD has some special handling because of the nature of pipelines, since you're running pipelines regularly. Certain resources are ephemeral: a given pipeline run isn't necessarily something whose state you want to manage. You want it to run, and you want to know that it ran, but you don't necessarily want that thing to stick around. So Argo CD has some special handling to manage and ignore those temporary resources, so that its state management doesn't interfere with the rest of the running of your application. Tekton can also interact with outside resources. Among the things that it does in the Industrial Edge validated pattern: it can push commits to external repositories, it can push tags, and it can manage pull requests, and those pull requests can allow for the gating of changes to staging and/or production. Tekton also has a nice UI, which is integrated into the OpenShift console under the Pipelines tab. Users can trigger pipelines that way, or they can use the command-line tool that Tekton provides, called tkn. Next up we have Advanced Cluster Management. Advanced Cluster Management's job is to enforce configuration standards on remote clusters, and so it is very much a policy enforcement and management engine.
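As a rough sketch of what such enforcement looks like, here is a simplified ACM policy that requires the OpenShift GitOps operator to be present on managed clusters. Names are illustrative, and a real policy would also be paired with a placement rule and binding to select which clusters it applies to.

```yaml
# Simplified sketch of an ACM Policy enforcing the presence of the
# OpenShift GitOps operator Subscription on managed clusters.
apiVersion: policy.open-cluster-management.io/v1
kind: Policy
metadata:
  name: openshift-gitops-policy
  namespace: open-cluster-policies
spec:
  remediationAction: enforce      # create the resource if it is missing
  disabled: false
  policy-templates:
    - objectDefinition:
        apiVersion: policy.open-cluster-management.io/v1
        kind: ConfigurationPolicy
        metadata:
          name: openshift-gitops-subscription
        spec:
          remediationAction: enforce
          severity: medium
          object-templates:
            - complianceType: musthave
              objectDefinition:
                apiVersion: operators.coreos.com/v1alpha1
                kind: Subscription
                metadata:
                  name: openshift-gitops-operator
                  namespace: openshift-operators
                spec:
                  channel: stable
                  name: openshift-gitops-operator
                  source: redhat-operators
                  sourceNamespace: openshift-marketplace
```

With `remediationAction: enforce`, ACM doesn't just report non-compliance; it creates the missing Subscription on the remote cluster.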
ACM defines policies and standards centrally, and it enforces those policies on remote clusters. So the Validated Patterns framework defines that the GitOps operator must be installed on all remote clusters, and ACM takes care of having and enforcing the policy that installs the GitOps operator, as well as other individual applications. And we're looking at ways to expand our use of ACM in the future. Helm is the rendering framework that we use to help Argo CD render Kubernetes applications. Our deployment of Argo CD will render Helm templates internally and then deploy the results. Our framework uses Helm in two key ways. The first is that it applies standard Helm templating, by default, to all of the files in the templates directories of an application. So most YAML files in the GitOps repo will have Helm template expansion turned on for them by default. Second, the framework can deploy standard external Helm charts that you can download from the Internet, for example the HashiCorp Vault Helm chart. You can reference those and define them as top-level objects for Argo CD to deploy for you, and you can pass overrides to those Helm charts as well. Two key ways we're using Helm. common is the repository that lives in our GitHub space that defines and enforces the key elements of our Validated Patterns framework. common defines a deployment of Argo CD which is designed to run in both a hub and a distributed way. It defines a deployment of Advanced Cluster Management. common defines a hierarchy for application configurations and overrides, and it provides some opinionated ways to deploy, using Argo CD as the mechanism, things like OpenShift operators. You can deploy namespaces and projects. You can deploy applications as paths, in which case they'll be treated as Helm charts. You can deploy external charts. common also defines ways to define some bootstrap resources.
One-time setup things, like the creation of individual secrets and application setup processes, we're currently doing through shell scripting in a Makefile. Our first pattern is called Industrial Edge 2.0, and it incorporates an application called MANUela. MANUela is an application that applies AI/ML techniques to IoT metrics to determine if observed equipment on a factory line is going to experience failures. It uses Argo CD and Tekton Pipelines to build a fully featured CI/CD pipeline, including a full test suite, and the Seldon operator to actually do the AI/ML training and serving. We also use tools and technologies like MQTT, Camel K, and Kafka for message passing. We have an S3 storage bucket for data analytics, and a component that allows the analysis of the sensor data through Jupyter. And it uses Red Hat Advanced Cluster Management to deploy production applications to a factory cluster, which could be the same cluster as the hub cluster or a remote cluster, at the user's discretion. The Sierpinski triangle is the official logo of the Validated Patterns effort and the Validated Patterns team. The reason we chose the Sierpinski triangle is that, like the standard Mandelbrot fractal, it is recursive. Patterns are composed of other patterns, and so the pattern that you use to deploy an application is itself a pattern. Our hope is that as we develop more and more of these patterns, it will become easier and easier to add and substitute components. Plus, the logo looks distinctive, and it shows up well in smaller formats like icons. We've learned a bunch of things so far. One thing we've learned is about Helm versus Kustomize. We who are developing the patterns had a much easier time wrapping our heads around how Helm works for templating, as opposed to Kustomize, where the approach is to build a patch and apply that patch to a data structure.
There are still places where the Kustomize approach makes sense. So our default mechanism for variable expansion inside the patterns is Helm; Kustomize is still an option, and we still use Kustomize actively in the Industrial Edge 2.0 pattern. We learned a lot about Git submodules and Git subtrees, because the name of the game literally is GitOps: we have to use Git as the tooling and technology to reflect the changes that we're making in the system. And so, in order to keep the applications and the patterns similar to each other, to get the reuse that we're looking for, we had to have a common substrate. And so common was included initially as a submodule, then as a subtree in more recent developments, and I'm actively working on retrofitting the places where we have it as a submodule into a subtree. Submodules caused a bunch of problems: they cause disconnected demos to have to mirror more repositories, and there are some subtle gotchas with submodules for consumers of patterns that they didn't like and that we think they shouldn't have to go through. We're also actively developing certain areas of the pattern framework. We're working on getting better at secrets management. We're currently favoring HashiCorp Vault: the next pattern that we publish is going to integrate the use of HashiCorp Vault for secrets management, along with External Secrets. That isn't necessarily an exclusive statement of direction; we may include another secrets manager, but HashiCorp Vault will be the first one. And we are also exploring some of the interesting complexities of managing multiple edge clusters from the same hub cluster. There are certain limitations that Helm has, because the Helm templater inside Argo CD does not have an inventory data source to use to personalize the templates that it renders for an individual remote cluster.
And so that can show itself if you have to configure for multiple fully qualified domain names, or other edge cases like that, and we are actively exploring how to make the framework work better in those cases. So thank you very much. I'm ready to take questions now. Again, my name is Martin Jackson. I'm available as mhjacks at redhat.com, and my Twitter handle is mjolnir40k. I'll take questions here for several minutes, I'll be available in the WorkAdventure virtual area, and I will also be at the demo that Lester and William are putting on later today.