So, I'm Tim and I'm a senior product manager on the Ansible team. I was actually with the original Ansible company, and I've been along for the ride to Red Hat and then to IBM. I've been working on how Ansible can integrate with and help automate things happening in the container-native space. One of the things that came out of being at Red Hat is that, once we started this effort, we looked into how we could help automate what's happening in OKD and in OpenShift clusters. So what I'm presenting here is the 1.0 that we came up with initially and how it came together. I want to speed through this; there's a lot that went on here and we could go a whole lot deeper. We've put together a demo if we have time for it, and if not, we can provide the code. So that's my background and where I'm coming from. I was a programmer at one time. I used to develop, then I became a PM and they took my keyboard away and said, you are no longer allowed to code. But I get to work on this type of stuff. Fabian, do you want to say a few things?

Yeah, I'm Fabian von Feilitzsch. I'm a software engineer in the OpenShift org; I work on the Operator Framework. I've been involved in the Ansible and Kubernetes integration space pretty much since I got to Red Hat five years ago. I've been working on building out the Python clients and then integrating those Python clients with Ansible modules, to get better full application-to-infrastructure integration, so that people who are using Ansible in their traditional IT environments can more easily transition to the Kubernetes space without having to upend all their tooling, logging, monitoring, and so on. I'll cut it there so we have time for the whole presentation.

Right. The first thing I should mention, if it isn't apparent from who's presenting, is that this was a joint effort between the Ansible team and the OpenShift team. Fabian was one of the engineers who came over and worked with us on this, and we had some of our own people from the Ansible team working on it as well. So this was truly a joint effort.

Diving in: the thing we're talking about, community.okd, is an Ansible content collection for automating and managing the unique capabilities of OKD and OpenShift systems. The key word there is "unique," and I'm going to come back to that.

Now, I know I'm not talking to an Ansible group here, so you might be wondering: what's a collection? You might be familiar with Ansible; over the last year or two we've had a huge effort going on to separate the core engine that Ansible is known for, that command-line tool, from what we call the content, so that the two can move independently. One of the problems we ran into with our batteries-included approach was that you had to wait for the next release of Ansible to get new features for a cloud service or some other application or API change, and it was getting way too bogged down. So we came up with Ansible content collections, which we've been moving towards and are most of the way through. "Collection" for short, it's a new format for organizing Ansible content so that it's independent of the engine and can be added, installed, and updated independently of what's happening in core.
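For reference, grabbing a collection is a single command; this is standard ansible-galaxy usage rather than a command from the talk:

    # Install the collection (and its dependencies) from Ansible Galaxy,
    # independently of the ansible-core release you're running.
    ansible-galaxy collection install community.okd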
So what we're talking about here is one of those collections, specific to working with OKD and OpenShift. Like I said, this one focuses on the unique capabilities of OKD and OpenShift systems. We also have another collection, which has now been renamed kubernetes.core, that provides the baseline Kubernetes and Helm 3 automation capabilities. If you're working with OKD or OpenShift, you're probably going to use both of these collections together in your playbooks: for the baseline things you'd use what's in kubernetes.core, and for the things OKD adds on top of that you'd pull from the community.okd collection (there's a short requirements-file sketch below showing what pulling in both looks like).

A couple of side notes, in case you go out and start researching this and get confused. First, community.okd is the upstream of a collection called redhat.openshift, which is the supported offering we put together for customers. They're one and the same content; one is the downstream and one is the upstream. Second, our Kubernetes content originally started as a community effort called community.kubernetes. We're going through the process of changing the name, migrating the repo, and so on. They're essentially the same, but community.kubernetes is going away for marketing and business reasons, and it's going to be called kubernetes.core.

All right, that was just a little background so you know what you're looking at. So let's talk about what is in this collection. When we pulled this effort together last summer, to make something supportable that we put full-time resources on, we looked at what was in that community.kubernetes collection and said, all right, we need to break this into two parts. What had happened is that the collection grew through community contributions, and while it was mostly baseline Kubernetes, some OpenShift-specific features had rolled in. We were getting complaints from both sides: people trying to use OKD and OpenShift saying, hey, this is missing, and people in the baseline Kubernetes crowd saying, hey, what is this stuff in here that operates differently than it should? So we decided the best thing was to split the content into separate collections, so each could move at its own pace and focus on its own community, rather than trying to find a middle ground. That was one of the first big things that Fabian and the other engineers took on.

The other thing that was very helpful was getting proper CI testing in place, including Prow integration, so that everything we do gets run against the latest OpenShift builds. Unfortunately, that wasn't happening in the previous collection. We also migrated a whole lot of community content over that was OpenShift-specific: an inventory plugin, an oc connection plugin, and an OpenShift auth module that was called k8s_auth at the time, which we've renamed openshift_auth.
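Since the two collections travel together, a requirements file is the usual way to install both at once. This is a minimal sketch using standard ansible-galaxy conventions, not something from the slides:

    # requirements.yml -- install both collections in one shot with:
    #   ansible-galaxy collection install -r requirements.yml
    collections:
      - name: kubernetes.core
      - name: community.okd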
And then we created a module specifically for working with declarative resources, but gave it the added logic for working with things that are specific to OpenShift, like DeploymentConfigs and Projects, which the Kubernetes core module would trip on. One thing we went through that was a little interesting: Ansible has added namespaces for collections, and we decided to make use of that. We had the k8s module, which, like I said, handles the baseline Kubernetes declarative APIs. Rather than create a module with a totally different name, we decided to reuse the k8s name, because you don't have to invoke it fully qualified like I've shown here. That makes it a lot easier for people to port their playbooks between baseline Kubernetes and OpenShift, because they only have to switch which namespace they're pulling the module from. That's a little side note, a more advanced thing.

And then we created a few new modules. We did a quick survey of the most common things people are trying to automate with OpenShift right now, to figure out what belonged in the 1.0. The two things that came up were the ability to expose a route, which is sort of like exposing a service in Kubernetes but with the added things you can do in OpenShift, and templates, with the ability to render them and optionally apply the result. Those were also the things we saw a lot of people trying and struggling to do when using Ansible with OpenShift, and we wanted to make that easier, so we created those two modules.

So I'm going to stop there. Like I said, I sped through a lot of stuff. Do we want to take the time for a demo? I think we actually have the time, but there was one question in there. Sorry, I can't see the chat from the mic. I think you might have answered it: James, you were asking whether playbooks written for community.okd work without changes when used with redhat.openshift. Yes, there should be no issue there; you just have to put a little bit of care into how you're managing your namespaces. If you do it fully qualified, like I showed back here, you would have to do a search and replace, but you don't have to do it that way, and I would recommend not doing it that way if what you want is the ability to go between the two easily. There's a way to declare a namespace search path at the beginning of your playbook, and then you don't need the fully qualified names in your plays and your roles. I'll show a sketch of that in a moment.

Is there a reason why redhat.openshift wouldn't also provide the community.okd name, if they're going to be effectively identical? That was more of a marketing issue, of customers getting confused about what is supported and what is not, so we decided to make it apparent through the naming. There were also legal reasons that we couldn't use OpenShift or Red Hat in the upstream name. It's JBoss all over again. Yay. Look, Fabian is my witness: I almost lost my mind trying to deal with this. I did not want this, but I'm not a lawyer either. Yeah, I get it. I'm annoyed, but I get it. Me too, honestly.

There were a couple of other things in the chat. James's follow-up was that pushing very hard to get people to use the fully qualified name is going to make for broken playbooks and roles when switching between OKD and, say, OCP. That's fair; we've done it for awareness, and also for clarity. When you're dropping in a single task to document something, you don't see all the other context you could have set up at the command line or in the play declaration, and people may get confused over time as modules with the same name appear in different collections entirely. So it's for clarity in that type of documentation. We're not recommending that everything you do should be done with fully qualified namespaces in your actual work; it's more about getting the concept across, and about not losing someone when you're only showing a snippet. I hope that makes sense, James.
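As a minimal sketch of that search-path approach (the play and names here are my own illustration, not from the talk): the collections keyword tells Ansible which collections to search when resolving a short module name, so the same play can be pointed at community.okd or kubernetes.core by changing one line:

    # Hypothetical example; the unqualified "k8s" below resolves
    # through the collection search path declared for the play.
    - hosts: localhost
      collections:
        - community.okd        # swap for kubernetes.core on a plain cluster
      tasks:
        - name: Ensure a namespace exists
          k8s:
            api_version: v1
            kind: Namespace
            name: demo
            state: present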
All right, so I'm going to demo some of the features we've added in the newly released community.okd Ansible collection, which contains a variety of modules and plugins that make it a little bit easier to interact with the OpenShift API.

First things first, let's look at what we did around image streams. We added some special handling so that we can better handle resources that reference image streams, without causing unnecessary redeployments. This is what the basic module invocation looks like for the k8s module. The community.okd.k8s module is very similar to the Kubernetes k8s module, except that it has some of this special handling. For example, projects: if you don't have permission to create a Project, it'll instead issue a ProjectRequest and handle that whole API flow, which the core Kubernetes module cannot. So this will just go ahead and create a project named testing; you give it the name, kind, and API version, and it sends it right through to the API. I'm just going to kick it off on the right here. It returned a lot, and a lot of that is because of the managedFields field that comes back, but this is what the API returned when we issued a create with this definition. You can see we have a Project here with the name we gave it, and that's pretty much it.

Next, let's create an image stream. Looking at what that image stream looks like, you can see it's just a regular Kubernetes manifest, image-stream.yaml, and we're going to be pulling in the Python Docker image. And as you can see here, you can reference a file directly, much as you could with kubectl or any of the other common utilities. So let's create that image stream, which should import that Python image. All right, you can see that it was created. We see the spec that's returned, and the status, which gives us the location of the image in the OpenShift internal registry.
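Roughly, those first two tasks look like this in playbook form; the resource names match the demo, but the exact task text and file layout are my reconstruction:

    - name: Create the project (falls back to a ProjectRequest if needed)
      community.okd.k8s:
        state: present
        definition:
          apiVersion: project.openshift.io/v1
          kind: Project
          metadata:
            name: testing

    - name: Create the ImageStream from a manifest file
      community.okd.k8s:
        state: present
        namespace: testing
        src: image-stream.yaml   # the manifest shown in the demo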
Next, let's create a deployment config that references the image stream we just made. Looking at it real quick: it's going to use that base Python image to spin up a basic HTTP server. We have some environment variables set, and then it has this image change trigger, which basically says that if the image stream gets a new tag, our deployment should automatically update to use it. It also means that OpenShift will rewrite the image this spec references to point at the image in the local registry instead.

So let's go ahead and create that deployment config. You can see here we have this wait and wait_condition. That will look at the conditions array on the status of the object, and the task won't end until the condition is true. So we're creating this deployment config and waiting until it reports that it is available. Here we see it's ready. If we scroll up past all the managedFields noise, we have a DeploymentConfig, hello-world-dc, in the testing namespace. And you can see the spec does in fact have the image replaced by the image stream reference, pinned to a specific SHA, as opposed to what we put into the spec initially, which was just the Python image.

Now we're going to run this exact task over again and recreate the deployment config. Because the deployment config is already there, the module will see that, and rather than issuing a create, which would obviously fail, it issues a patch instead, so that any difference between this definition and what's currently on the API server gets applied. Note that the image field on the API side was changed to the registry reference, whereas our deployment config just references the plain Python image, so there is a real difference between our local file and the object in the API. But if we run it, it reports no change. It issued a patch, but it patched around the trigger-managed field, because it knew that the image field is managed by another controller and there's no sense wrestling with that controller for control of it. This is especially relevant if you're writing an operator: if an operator kept stomping on image stream references, you'd get infinite reconciliations, with the deployment config continually emitting new events and your operator continually trying to change it back.

Okay, so now we can do the exact same thing using a regular Deployment. Deployments can also be made to use image streams, via a special annotation that tells OpenShift which container's image to replace. So let's go ahead and create this deployment. It's using the same base image, so hopefully this will be quick. While we wait for that to finish: here we're issuing only a patch request, to replace only the fields we specify. We only want to replace spec.template.spec.containers; we want to set the container named hello-world back to the plain Python image. Again, this should result in a change, except that because these are the OpenShift modules and not the core ones, they know not to update that field: it's not going to stick, so there's no point doing the work. Okay, cool. So that's the end of the image stream demo.
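The create-and-wait pattern from that demo looks roughly like this as a task; the condition values follow the module's documented wait_condition options, and the file name is my placeholder:

    - name: Create the DeploymentConfig and wait until it's available
      community.okd.k8s:
        state: present
        namespace: testing
        src: hello-world-dc.yaml   # placeholder for the manifest from the demo
        wait: yes
        wait_condition:
          type: Available
          status: "True"
        wait_timeout: 300          # give the rollout up to five minutes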
So next, let's look at what we added for routes. Routes are basically OpenShift's method for creating ingress. First, let's create this simple deployment; it runs a hello-openshift container that just outputs "Hello OpenShift" over HTTP. We'll go ahead and do that, and we'll create the service as well to expose that port.

And then we will look here at the OpenShift route module, community.okd.openshift_route, which is approximately equivalent to oc create route; it can expose most of the same things. You can see we reference the service we just created, which exposes the container we just spun up, and we're creating everything in the default namespace. When we create that route with as few arguments as possible, just giving it the service and namespace, you can see it returns this object, which includes the URL. We can go hit that ourselves, and we can see that it outputs "Hello OpenShift" when we visit the route. We can also verify this from an Ansible task by attempting to hit that URL, and we can see it returns the same content. You can look through the code to see all the different options; I'm not going to go through all of them now, but there are a lot of arguments you can pass to the route module. Again, it's roughly equivalent to oc create route: you can set custom names, allow or disallow TLS, set up a TLS redirect, all different kinds of things, all exposed through the module without needing to write that sort of definition by hand. So let's just get through the rest of this route stuff.

The third thing I wanted to look at: we've added the ability to interact with the OpenShift OAuth server directly, through a module called community.okd.openshift_auth. First, let's create the secret containing the information for this user: a username of test and a password of testing123. Then we configure the HTPasswd identity provider so that it uses the secret we just created to verify users. We'll create the test user, marking it as using the HTPasswd provider, and we'll create a cluster role binding that gives our new user cluster-reader access, so it should be able to see everything in the cluster.

Next, we use the community.kubernetes.k8s_cluster_info module (now kubernetes.core.k8s_cluster_info), which returns information about how we're connecting to the server: what the host is, what authentication parameters we're using, and so on. We store that in a cluster_info variable and pull the API URL out of it. You can see everything that module returns: the connection details, which APIs are supported by the server you're connecting to, and the client and server versions.

So then we come to the actual invocation of the openshift_auth module. You can see here we're just logging in as the user test, with the password testing123, to the host we pulled from the kubeconfig we're currently using to connect. Let's go ahead and obtain that access token. There we go: it returned an API key that we can use to authenticate. Now that we have it, we use that API key, stored in this auth variable, together with the host we picked up before, to see whether we can actually list pods, which we should be able to, because we have cluster-reader access. And here we go: we can see all the pods, listed in the testing namespace. These are all the pods we created during the image stream demo a little bit ago.
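Pieced together, the route and auth steps look roughly like this; the variable names and the service name are my reconstruction of the demo, so treat this as a sketch:

    - name: Expose the service with a route, using as few arguments as possible
      community.okd.openshift_route:
        service: hello-openshift      # the service created just above
        namespace: default
      register: route

    - name: Log in to the OpenShift OAuth server and get a token
      community.okd.openshift_auth:
        host: "{{ cluster_info.connection.host }}"
        username: test
        password: testing123
      register: auth

    - name: List pods as the new user, using the returned API key
      kubernetes.core.k8s_info:
        host: "{{ cluster_info.connection.host }}"
        api_key: "{{ auth.openshift_auth.api_key }}"
        kind: Pod
        namespace: testing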
Okay. Next, we also added the ability to interact with OpenShift templates, which are basically a way to do some basic templating, either locally or on the server, without using Helm or the Ansible templating language or any of the other options out there right now. The nginx-example template is one that's included by default in an OpenShift installation, and it pretty much does what you'd expect, which is create an nginx deployment. That example lives in the openshift namespace. Here we can pass the parameters: we want to deploy it to the openshift namespace as well, and we're just going to give it the name test123. We're also putting it in a rendered state, which means we don't want to actually create the resources; we just want to see what resources would be created.

So let's go ahead and render that template. You can see these are the resources that would be created: a service that exposes our nginx pod, a route that exposes the ingress for it, a build config for building the image from a git repository, and the deployment config for actually deploying it and its pods, all hooked up through the image stream tag. Now that we have these resources, stored in this result variable, we can create the rendered resources by looping through them. The apply parameter means we'll be using basically the equivalent of kubectl apply to create them. So let's go ahead and do that, and we can see that it made all of the resources we rendered before: build config, et cetera, et cetera. We'll leave those resources in place for now. You also have the option, rather than rendering and then creating them manually, to process and create them directly in one step, and we should see pretty much nothing change here, because it's the same resources. All right.
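The render-then-apply flow looks roughly like this in playbook form; the parameter name comes from the stock nginx-example template as I recall it, so double-check it against your cluster:

    - name: Render the nginx-example template without creating anything
      community.okd.openshift_process:
        name: nginx-example
        namespace: openshift          # where the template itself lives
        state: rendered
        parameters:
          NAME: test123
      register: rendered

    - name: Apply each rendered resource, kubectl-apply style
      community.okd.k8s:
        state: present
        apply: yes
        namespace: openshift
        definition: "{{ item }}"
      loop: "{{ rendered.resources }}"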
And finally, the last little bit I wanted to highlight is the OpenShift inventory plugin, which gives you the ability to use OpenShift as a dynamic inventory. Basically, the plugin will go look at the cluster, find all the pods in it, and add those pods to your Ansible inventory as targetable hosts. You can see here that this second play targets the namespace_testing_pods group. That's a group automatically created by the OpenShift inventory plugin, and it targets all of the pods in the testing namespace.

So let's go ahead and run that. You can see the first task, ignoring the meta one, is setup. Now, it's important to note that running modules other than raw does require Python on the container. The test123-1 deploy container doesn't have Python installed, so this is not going to work on it, and it failed. But the test123 nginx-example pod, the hello-world-dc pod, and the hello-world deployment pod that we spun up earlier were all found, and setup ran on them. We can verify that setup ran successfully by looking at the value of the test environment variable, because, if you remember, in our deployment configs we added an environment variable named test with a value of test. And you can see here it output, for the hello-world-dc pod, that the value of test is test, and in the hello-world deployment, the value of test is test. And of course the nginx-example pod does not have that environment variable defined, so we had a failure there. And lastly, as long as there is Python on the pod you're targeting, you can copy files to and from it with Ansible; basically, as long as Python is installed in the container, you can do anything you could normally do with Ansible there.
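For reference, pointing the inventory plugin at a cluster takes a small YAML inventory file; this is a minimal sketch (the namespace selection is from the demo, the rest is my own), and as I recall the plugin validates the file name, so keep the openshift.yml suffix:

    # openshift.yml -- dynamic inventory file for the plugin
    plugin: community.okd.openshift
    connections:
      - namespaces:
          - testing

    # A play can then target the auto-generated group, e.g.:
    #   - hosts: namespace_testing_pods
    #     gather_facts: yes
    #     tasks: ...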
All right, and that is all I wanted to demo. Thank you.

Yeah, so thanks, Fabian. Great. I hope that got across all the different things you can do to automate and cut down on the command-line work, the manual, repetitive work, every time you deploy a cluster or do anything with it. The big question we have for you, besides trying it out and seeing how we did in our 1.0, is: what could we do next? There are a lot of areas we didn't touch on, and the question we kept asking ourselves was, would that be useful? Which one of these is a priority and which is not? That's the type of feedback we're looking for. We got a good core set of feedback, mostly from Red Hat consultants and a couple of other people we knew in the community who were doing work with Ansible and OpenShift. They gave us that initial batch of use cases, and we've essentially covered them all now. So we're wondering: where do we go next? That's the feedback we'd love to hear. For those of you who are interested, I'll give this deck to Diane to send around. It has the repos for the content you were looking at, starting with Fabian's demo code that he was just running through, and the repos where we're developing the collections we've been talking about. And there are a bunch of blog posts if you want to go deeper and read about this in more detail, maybe more elegantly than I've been speaking about it. So that is all we had.

Well, that sounds good. I think we could use some feedback from the community on maybe some more useful examples. The example was good, Fabian, I don't mean it was bad; it's more about how people would use this in production. So Joseph and other folks who are doing that, your feedback will be most welcome. One thing I'd love to see, and maybe it's already part of the playbooks, is standard operating procedures for admin tasks like key rotation. We probably don't have that yet, but also things like snapshotting and backing up cluster state. So, do we have a playbook for key rotation, for example?

No. The content here was mostly focused on providing the basic building blocks, the foundational work, to allow us to build more things on top of that. But collections do currently allow you to distribute roles, and I believe it's planned to allow distributing playbooks in the future as well. The hope is that as we build this out, we get more community involvement and better subject matter experts. I deal a lot with operators, the Kubernetes API, and low-level components; I have less experience with higher-level cluster administration. So we would definitely love to have playbooks and roles that enable users to easily automate those tasks. Now is the point where we go out to the community and look for people who know about that. They don't necessarily need to do all of it themselves, but requests for features like that, documentation, maybe some getting-started guides, those would all be very useful things to see pop up in the repo, to help us prioritize and to help us understand what exactly those cluster administration tasks are and how we can help automate them.