Folks, welcome to today's session of the working group meeting. We're about one minute before the hour, so we'll wait a couple of minutes; I'll stop sharing for now. Say hi, everybody. Hi, Fabian. Hey there. And Fabian, you're here to talk about one of the sub-topics, is that right? Is that you on the agenda? If I have you muted... you're self-muted. Whoops, sorry, I had my headphone mute on too; I got confused. Yeah, Tim Appnel and I were going to talk about the OKD Ansible Collection that we introduced last quarter. All right. And can you do that in under 10 minutes? Well, Tim had the main presentation he was going to give, and I think it's probably pretty quick. We also have a demo; I've got a repo with the demo in it, with instructions to run it as well. So if we don't have time to run through it, we can just paste the link and anybody interested can take a peek there. So I think that should be doable. Okay, because we have a lot on our agenda today. So, Vadim's here, good, and in the chat is the attendance link, so please fill that in. What we're going to try to do today, besides the talk we set you up for, is give updates from the different projects that intersect with OKD, and also do a triage of the open issues list. Timothy's here, I can see, so we'll get started right on time. What I might try to do is have you guys do your stuff up front in the first 10 minutes or so, because I just wanted to introduce you and your topic to the group; then you can give everybody the resource links and we'll move into the triage. Vadim, does that sound reasonable for today? Yeah, I think we could skip the triage; there are quite a few tickets that just need verification.
And it's just not worth spending time on it today. Okay, well, that changed my plan for the day. Cool. Let me just swap back out here for a minute and start sharing the screen. All right, well, let's get started then. One of the things at the top of my list today was the OKD issues triage, because I wanted to see if we could get below the 77 open issues, and I saw this morning when I woke up it was down to 73. So that's a good thing, and I wanted to thank John for stepping up and helping out these past couple of weeks to make that happen. Vadim, do you want to give a quick update on what's gone on with closing some of them, and the status of the releases, and then we'll go into the Ansible collections talk? So this Saturday we're going to release a new stable version. I don't think it has any major fixes yet, because we realized that VCR installs are broken due to weird behavior of NetworkManager. We have a fix in the pipeline, and hopefully it will be merged this week. Another significant problem is mirroring images; that depends on how fast our CI gets updated to the new image and gets rebuilt, so not much depends on us there. There have been two CVEs fixed in Fedora, one of them related to sudo and one in the kernel. The Fedora folks are releasing a new stable today; we will pick it up automatically, and hopefully by the end of the week it will be in the stable build. I think that's all the critical parts for now. Okay, that's awesome news. I did notice a lot of people stepped up and did some closing and commenting on the issues, so thank you all for that. Are there any outstanding questions on the issues people had posted? If we can do this part of the day quickly, then we'll get it done. Hi, everybody. I think Joseph just pointed to one issue that is still standing: the image copy mirroring issue where the MIME type is wrong.
So that's been fixed in the library, and now we just have to get that into the builds, into our image builder. I'm not sure if there's a PR open for that already, but it's going to trickle down eventually. Yeah, we're just not there yet. Okay, awesome. All right, then I'm going to stop sharing for a minute. Question: do you think that even if we get a 4.7 release before this mirror problem is fixed in 4.6, we can push 4.6 in between, for the folks that are still caught on 4.5? Yeah, it doesn't matter which version it lands in, because our CI build farms need to have this fix, and our build farms are on 4.7. The only question now is to make sure we cherry-pick that fix to all the versions we release, so that you wouldn't hit it when you build your own image. For mirroring, all we care about is the images produced by our CI, and the only thing that matters is the version running on that CI. Okay, so you think we could get a 4.6 that has this mirror problem fixed, so we can proceed with 4.7 and not have to jump over the 4.6 release? Right. Okay, thank you. Right on, sounds good. From the chat: do we have a 4.7 we can start asking people to do a public push on? We have nightlies, and eventually we will release those nightlies as stable. We're pretty far from that, at least a couple of weeks out. But it would be really great if we did this during OCP release-candidate testing, so that fixes from OKD and the issues we hit would be prioritized and counted as if they were hit on OCP. We've been trying to do that for the last couple of releases, but maybe this time it will actually happen. I've been building on 4.7 and playing with it, with some of our testing with Vadim. It looks okay overall. I think there are a couple of things, but I'm not sure whether we should be posting bug reports for 4.7 on OKD yet, or is that a moot point?
It's certainly worth it, because we would still have to land a fix for them in 4.7 and then cherry-pick to 4.6 anyway. Yeah, I'm seeing some weirdness with the installer, so I don't know if that's particular to 4.7 or not. If Jamie is offering to do a recipe or write-up on how to test deploying with the 4.7 nightlies, I can do some outreach for us, maybe create a post and a link and do some socializing of that if we want to get that tested. Then we can figure out how you want people to file the issues, and whether we tag them as 4.7 issues. If you give me a release link, Jamie, I can send it out on mailing lists and other places. Let's see if we can get that going. Joseph's asking: what are the most important fancy features in OCP 4.7? Yeah, Vadim, I think this was a stability release. Yeah, I can't think off the top of my head of anything super great. Oh, maybe the new install platform, but that probably depends on Ironic, meaning we don't have it in OKD. This needs testing, and we'll contact the docs team to have a what's-new prepared so that we can evangelize it. There was a 4.7 public-facing debrief as well, so I'll see if I can find the link to that and post it to the mailing list on the Google group. Wasn't that last week? I think they had a big presentation of what's new in 4.7. Yeah, lots of networking stuff, I saw, was one of the biggest things. I just pasted a link to the slide deck of that presentation. That is obviously OCP 4.7, but the same is largely true for OKD. Lots of stuff, but it felt more like a stabilization release to me than anything else, and there's the YouTube video, so you can watch it and follow along with the slides. Cool. All right. Builds V2: essentially we have this Build API in OpenShift, and the Builds V2 project is the upstream project called Shipwright.
I'll paste it in a second. It's essentially built on top of Tekton tasks. So yeah, it's the successor to the Build V1 API. Oh, so does this replace BuildConfigs and such? Yeah, I think eventually that's the plan. Okay, coming to a theater near you. The other thing that Jamie mentioned was taking all the 3.11 mentions off of the OKD site. I think it's time. Though I keep seeing 3.11 questions in the OKD and #openshift-dev chat rooms, so there are people out there still using it. Yeah, I'm still using 3.11 in production. It's still supported, and it's still quite a bit easier to deploy a smaller cluster with. It's supported until July, anyway. Okay, so maybe I have some time before I take it off the site. Yeah. I'm not saying that's a good thing; I'm saying this is a gap we still haven't closed with OKD 4. Yeah, we're working on it, Fabian, slowly but surely. They say it's going away. Yes, Jamie, I think we have to figure out some verbiage for that as well. When we get that self-contained single-node cluster build fully up and running, that will make it easier for the homelab people who don't have a full stack of compute power. Like me. Exactly. It's an UP device, and it cost me quite a lot of money, and it's all I have. All right. So yeah, I think the single-node cluster setup is slated to come in 4.8. It will not support upgrades in 4.8, at least; that will certainly come later. With regard to docs, I'd like to really implore each and every one of you to participate in the improvement of those docs. It's all open source; it's in the openshift-docs repository. If you see something, like we still refer to 3.x, or you know there's RHCOS mentioned in OKD content, please feel free to go ahead and open a PR there.
That just makes it so much easier for all of us here at Red Hat, because obviously it's time we'd otherwise have to spend on it, and it's not a high priority, so if you ask us to do it, we're probably not going to do it right away. That might cause frustration, and you can work around that by doing it yourself. That's really all I wanted to say. We do appreciate all the help with the documentation; it really helps us a lot, and I think if you as a community dive into it as well, it's going to be even better. Please, please, please contribute to our documentation. The other side of the coin is our onboarding: the how-to-contribute and contribution ladder content on the site is pretty weak as well. Amy Marrich, who's on the call, and I sat down last week and did a quick audit and run-through of what was missing to make it really easy for people to get started and do some low-hanging-fruit work. So I'm going to say that every other Tuesday, when we're not meeting here this hour, I'm going to dedicate to working on the contribution and onboarding content on the site. If any other people want to join, let me know and I'll invite you to this BlueJeans and we can just hack and work our way through it. Because, Vadim, you've done some amazing stuff helping people get content there, but there's some very basic stuff that's missing. And there's a wonderful site I've been reviewing that I have a fantasy ours will be as good as: it's called porter.sh, one of the new CNCF projects, and they have great documentation, so I'm going to try to mimic some of that, with a little coaching help from Amy, who's on the call here. Thanks again, Amy. So, anyone who wants to do that kind of work with me...
Raise your hand and I'll invite you to the Tuesday meeting next week, and we can look at it and see what's to be done. Because Getting Started starts off at a little higher level than most people who are brand new can jump in at. It's the first time I've joined this meeting. I'm Sandro Bonazzola from Red Hat, on the oVirt team. Nice to meet you. Hello, Sandro. Hi, Neil. So, I joined the meeting after testing OKD over the past months, and I come from the oVirt community, which is a somewhat different community than the OKD one. The first thing I saw trying to get into OKD is that it's kind of difficult for a user to find user content on the web. If you have any trouble and you try to search for a solution on the web, you don't find anything. I would suggest starting to think about how to make the discussion happening on the Slack channel or on the Google group indexed and searchable, so people can find what they're looking for without first having to understand where to search. It would make it easier for a first-timer to get the basics for getting started. Yeah, one of the pitfalls of the Kubernetes community as a whole choosing Slack for both community support and development is that all that knowledge is locked up and not available for everyone else to dive into and learn from. That's really unfortunate, and it's one of the reasons I've been kind of unhappy that we use Slack for all this: it's so closed, and it's not fair to everyone else who wants to be able to learn from it. So we have some work cut out for us, and Sandro, if you want to join in on every other Tuesday, I'd love to figure out what we can do to make that better. And we do have space on the GitHub repo for OKD.io to host some of this if we need to. So we can make that happen.
So I want to stop for a minute, if that's okay with everybody, and hand it over to the Ansible collections folks who came today to chat with us about the work they're doing. Fabian and Tim, if you want to introduce yourselves and your topic; and everyone, keep chattering in the chat while they're doing that. There we go. Thanks for coming. Thanks. Do my slides show up? Yes, indeed, we see them. Okay. I'm on too many video conference systems; I lose track of how each of them works. I know your pain. I'm up to six of them now. I think I'm at two. I'm always losing buttons and features for that reason, because I'm like, wait, that's on Zoom, not on Google... oh, that's BlueJeans. Anyway, sorry. So I want to start here, and then I'll let Fabian introduce himself. I'm Tim Appnel, and I'm a senior product manager on the Ansible team. I was actually with the original Ansible company, and I've been along for the ride to Red Hat and then to IBM. I've been working on how Ansible can integrate with and help automate things happening in the container-native space. So one of the things, being part of Red Hat, is that once we started this effort, we looked into how we could help automate what's happening in OKD and in OpenShift clusters. What I'm presenting here is the 1.0 that we came up with initially and how it came together, and I'm going to try to speed through it. There's a lot that went on here, and we could go a whole lot deeper. We put together a demo if we have time for it; if not, we can provide the code. So, like I said, that's my background, where I'm coming from. I was a programmer at one time, then I became a PM and they took my keyboard away and said, you are no longer allowed to code. But I get to work on this type of stuff. Fabian, do you want to say a few things? Yeah, I'm Fabian von Feilitzsch.
I'm a software engineer in the OpenShift org; I work on the Operator Framework. I've been involved in the Ansible/Kubernetes integration space pretty much since right after I got to Red Hat five years ago. I've been working on building out the Python clients, then integrating those Python clients with Ansible modules and the like, to get better full application-level integration, so that people who are using Ansible in their traditional IT environments can more easily transition to the Kubernetes space without having to upend all their tooling, logging, monitoring, etc. And I'll cut it there so we have time for the whole presentation. Right. Yeah, so the first thing to mention, if it's not apparent from who's presenting to you, is that this was a joint effort between the Ansible team and the OpenShift team. Fabian was one of the engineers who came over and worked with us on developing this, and we had some of our own people from the Ansible team working on it together. So this is truly a joint effort. Diving in, I'm going to speed through this. The thing we're talking about, community.okd, is an Ansible content collection for automating and managing the unique capabilities of OKD and OpenShift systems, and the key word there is unique; I'm going to come back to that. Now, I know I'm not talking to an Ansible group here, so you might be wondering: what's a collection? You might be familiar with Ansible, but in the last year or two we've had this huge effort going on to separate the core engine that Ansible is known for, that command-line tool, from what we call the content, so that they're separate and can move independently.
One of the problems we ran into with our batteries-included approach was that you had to wait for the next release of Ansible to get new features for a cloud service or some other application or API change, and it was getting way too bogged down. So we came up with this thing called Ansible content collections, which we've been moving towards and are most of the way through. A collection, for short, is a new format for organizing Ansible content so that it's independent of the engine and can be added, installed, and updated independently of what's happening there. What we're talking about here is one of those collections, specific to working with OKD and OpenShift. So, just to review: this one focuses on the unique capabilities of OKD and OpenShift systems. We also have another collection, now renamed kubernetes.core, that provides the baseline Kubernetes and Helm 3 automation capabilities. So if you're working with OKD or OpenShift, you're probably going to use both of these collections together in your playbooks: for the baseline stuff you'd use what's in kubernetes.core, and for the things that are specific to what OKD adds on top, you'd pull from the community.okd collection. A couple of other side notes, in case you go out and start researching this and get confused: community.okd is the upstream of a collection called redhat.openshift, which is the supported offering that we put together for customers. They're one and the same; one is just the downstream and one the upstream of that content. Another quick side note: originally our Kubernetes content started off as a community effort called community.kubernetes. We're going through the process of changing the name and migrating the repo.
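As a sketch of how the two collections described above are typically combined in practice (the collection names are as published on Ansible Galaxy; the namespace, service, and file names here are made up for illustration):

```yaml
# requirements.yml -- install both collections from Ansible Galaxy:
#   ansible-galaxy collection install -r requirements.yml
collections:
  - name: kubernetes.core
  - name: community.okd

# playbook.yml -- baseline tasks come from kubernetes.core,
# OpenShift-specific tasks from community.okd.
- hosts: localhost
  gather_facts: false
  tasks:
    - name: Create a namespace (plain Kubernetes, so kubernetes.core)
      kubernetes.core.k8s:
        api_version: v1
        kind: Namespace
        name: demo
        state: present

    - name: Expose a service as an OpenShift Route (community.okd)
      community.okd.openshift_route:
        namespace: demo
        service: hello-svc
        state: present
```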
So they're essentially the same, but community.kubernetes is going away for marketing and business reasons, and it's going to be called kubernetes.core. That was just a little background so you know what you're looking at. So let's talk about what's in this collection. When we pulled this effort together last summer, to make something supportable that we'd put full-time resources on, we looked at what was in that community.kubernetes collection and said: all right, we need to break this into two parts. What had happened is it was built purely through community contributions coming in, and it was mostly baseline Kubernetes, but some OpenShift-specific features had rolled in, and we were getting complaints from both sides: people trying to use OpenShift saying, hey, this is missing, and people in the baseline Kubernetes crowd saying, hey, what is this stuff in here that operates differently than it should? So we decided the best thing to do was to split the content out into its own collections, so each could move and focus on its own community rather than trying to find a middle ground. That was one of the first big things that Fabian and the other engineers took on. The other thing Fabian was very, very helpful with was getting proper CI testing, including Prow integration, into this, so that everything we were doing got run against the latest builds. That unfortunately wasn't happening with the previous collection and work. So we migrated over a whole lot of community content that was OpenShift-specific, plus an inventory plugin and an oc connection plugin.
There was an OpenShift auth module, called k8s_auth at the time, that we've since renamed, and then we created a module specifically for working with declarative resources that adds logic for things like DeploymentConfigs and Projects and other resources specific to OpenShift that the Kubernetes core module would trip over. One of the slightly interesting things we went through: Ansible has added namespaces, and we decided to make use of them. We had the k8s module that, like I said, handles the baseline Kubernetes declarative APIs. Rather than create a totally differently named one, we decided to use the k8s name again, because you don't have to invoke it fully qualified like I've shown here. That makes it a lot easier for people to move or port their playbooks between baseline Kubernetes and OpenShift, because they just have to switch which namespace they're pulling the module from. So that's a little side note, a more advanced thing. And then we created a few new modules. This is an area where we did a quick survey and asked: what are the most common things people are trying to automate with OpenShift right now, to figure out what goes in the 1.0? Two things came up. One was the ability to expose a route, which is sort of like exposing a service in Kubernetes, but with the added capabilities of OpenShift. The other was templates: the ability to render them and optionally apply the result. These were also things we saw a lot of people trying to do with Ansible and OpenShift and struggling with, and we wanted to make them easier. So we created those two modules. I'm going to stop there; like I said, I sped through a lot. Do we want to take the time for a demo? I think we actually have the time. There was one question in there.
Okay, sorry, I can't see the chat, unfortunately, from up here. That's okay, but I think you might have answered it. James, you were asking: will playbooks written for community.okd work without changes when used with redhat.openshift? Yes, there should be no issue there; you just have to put a little bit of care into how you're managing your namespaces. If you do it fully qualified, like I showed back here, you'd have to do a search-and-replace, but you don't have to do it that way, and I'd recommend not doing it that way if what you want is the ability to go between the two easily. There's a way to declare a namespace search path at the beginning of your playbook, and then you don't have to use the fully qualified names in your plays and your roles. Is there a reason why redhat.openshift wouldn't also provide the community.okd name, if they're going to be effectively identical? That was more of a marketing issue, of customers getting confused about what is supported and what is not, so we decided to make it apparent through the naming. Okay. Yeah, there were also legal reasons why we couldn't use OpenShift or Red Hat in the upstream name. It's JBoss all over again. I know. Look, Fabian is my witness, I almost lost my mind trying to deal with this. I did not want this, but I'm not a lawyer either. Yeah, I get it. I'm annoyed, but I get it. Me too, honestly. There's a couple of things in the... oops. Okay. James says: Ansible is pushing very hard to get people to use the fully qualified names; this is going to make for broken playbooks and roles when trying to switch between OKD and OCP. Yeah. We've done that for awareness and also clarity: when you're dropping in a single task to document something, you don't see all the other stuff you could have done at the command line or in the playbook declaration.
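The "namespace search path" mentioned here is presumably Ansible's `collections:` play keyword; a minimal sketch, with the resource names invented for illustration:

```yaml
# With a collections search path, tasks can use the short module
# name `k8s`; switching between baseline Kubernetes and OpenShift
# then becomes a one-line change at the top of the play.
- hosts: localhost
  gather_facts: false
  collections:
    - community.okd        # swap to kubernetes.core for plain k8s
  tasks:
    - name: Apply a resource using the unqualified module name
      k8s:
        state: present
        definition:
          apiVersion: v1
          kind: ConfigMap
          metadata:
            name: demo-config
            namespace: demo
          data:
            greeting: hello
```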
And then people may get confused over time, as you have modules with the same name appearing in entirely different collections; it's just for clarity in that type of documentation. We're not recommending that everything you do be done with fully qualified namespaces in your actual work. It's more about getting the concept across, and about clarity when you're only taking a snippet and dropping it in; otherwise you could lose someone. I don't know if that makes sense, James. Yes. Let's demo. Let me drop control of the screen here so that Fabian can grab it. There you go. All right, let's see. Let me turn off my second monitor real quick so it doesn't get confused. Is this showing up properly for everybody? Now we see your BlueJeans screen. Okay. And can you see my terminal here? Yep. All right, cool. I pasted a link to the demo code in the chat. Basically it's just going to run through this demo playbook. Can you make that font bigger, please? Bigger... more. There we go. Can everybody read that? Yes, now we can read it. All right, this is going to be a little rough. So, first I'll just run through the image stream changes that we made. Essentially, operator authors who were using DeploymentConfigs, or Deployments with image streams, had this issue where, because they specified an image, their operator and the OpenShift controllers would constantly wrestle over the image field. We added some special logic to parse all those image triggers and not issue an update if we'd just be updating the field back to what the controllers will set anyway. And so, if this even shows up properly... all right, so this is creating the project.
We automatically translate Projects to ProjectRequests when the user trying to create a project doesn't actually have the proper permissions on Project resources; that's one of those nice changes. It's kind of difficult to see at this size, but this is basically letting you know that the image stream has been created, what it is, and what its status is. Now we're creating the DeploymentConfig, and this is mostly just to show you the resources being used. Mostly you just send it the raw YAML, the same as with the CLI utilities. Here's the image stream we're using; it's just giving us the Python Docker image, and it's nice and speedy on the image pull. Sorry, my cluster died right before the presentation, so I had to spin up a new ephemeral cluster. There we go. Okay. You can see this is the DeploymentConfig that was created; it's the Python one, just spinning up an HTTP server so that we can make sure our requests work. And then here we're going to issue a partial update, a patch, targeting the hello-world DC DeploymentConfig we just made, with just the small bit we want to change. Ansible will use that to create a patch request telling it to replace the Python image, but we're already using the Python image, so nothing should change. And we can see here that, in fact, the task was marked as ok, not changed, which means no actual change was made in the Kubernetes API and nothing's going to be redeployed. If you were in an operator context, you wouldn't now be spinning forever in an infinite loop, creating hundreds and hundreds of deployment configs over the course of minutes, which is what users were hitting before. Here we're doing the same thing, creating a Deployment. This Deployment is using the image trigger annotation to automatically replace the image.
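The idempotent patch step described above might look something like the following; the resource and image names are illustrative, not taken from the actual demo repo. The k8s module merges a partial definition into the existing object, and the collection's image-trigger awareness is what keeps the re-apply from registering as a change:

```yaml
# Patch only the container image of an existing DeploymentConfig.
# Because community.okd.k8s understands image triggers, re-applying
# the image the controllers already set is reported as "ok"
# (unchanged), so no new rollout is kicked off.
- name: Patch the DC image (no-op if the image is unchanged)
  community.okd.k8s:
    state: present
    definition:
      apiVersion: apps.openshift.io/v1
      kind: DeploymentConfig
      metadata:
        name: hello-world-dc
        namespace: testing
      spec:
        template:
          spec:
            containers:
              - name: hello-world
                image: python:3.9
  register: patch_result

- name: Show whether anything actually changed
  ansible.builtin.debug:
    var: patch_result.changed
```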
So I just wanted to demonstrate that this works even in the less common image trigger context. We just have to wait for the image pull to happen; let me know if we're running low on time and I can skip through some of this. Yeah. So we created the Deployment referencing the stream, and here we do a similar thing: issue a partial patch, only updating the image, which should do absolutely nothing. And indeed it does nothing. We can also see that the actual image in use is the one set by the image stream rather than the one set by the user, and we were able to detect that and avoid an unnecessary update. So that was image streams. Now we'll show the little utilities we made for routes. First we spin up a basic Deployment running the hello-openshift container, then create a service. You can see it's basically just an inline definition; you can use either inline definitions or files on disk, and it works fine either way. Then, rather than having to craft the route definition by hand, we have this openshift_route module, which provides the same basic functionality as the route-specific features of oc expose and oc create route, but re-implemented in Python so it integrates better with the Ansible code. With this, we just specified the service we wanted to expose and the namespace to expose it in, and it created this route spec based on that. It parsed the target ports from the service; all the things users would expect. And if we hit that URL, the content we get back is the "Hello OpenShift" from that hello-openshift container. Clean up that route. Then I'm going to skip through most of these; I mostly added them in case anyone's interested and wants to go through all the different things you can expose. But you can create routes with custom names.
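A minimal sketch of that route step, with the service and namespace names invented; the exact shape of the module's return value may differ slightly between collection versions:

```yaml
# Expose an existing Service as an OpenShift Route, roughly what
# `oc expose service hello-svc` would do, then print the hostname.
- name: Expose the service
  community.okd.openshift_route:
    namespace: testing
    service: hello-svc
    state: present
  register: route

- name: Show the generated route host
  ansible.builtin.debug:
    msg: "Route available at http://{{ route.result.spec.host }}"
```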
You can create routes with different termination policies that allow or disallow insecure traffic, or have redirection; basically everything you could do with oc create route or oc expose. So let's skip past all of these. All right: auth. Users, for some reason, always want to use a username and password to log in; the password ideally provided by a secret and not inline, as it is in this playbook. To enable that, we added this openshift_auth module, which can handle all of the OAuth server interactions that oc login does. So here we're going to create an htpasswd secret and identity provider, a new user that uses it, give that user cluster-reader access, and then use this openshift_auth module to log in to the cluster using just the username and password we configured. Let's do this real quick. Did that create the secret? This created the identity provider. Now we create the user and the cluster role binding. This task is using one of the modules from the community.kubernetes (soon kubernetes.core) collection just to get information about the cluster: what APIs it supports, what host and connection cert file; all the different information about what cluster we're connected to and how we're connecting to it. Now we'll issue this login command, which I may have just accidentally skipped... and I did. Let me restart real quick; I made some kind of typo. I think I was running it in step mode and it got confused. My apologies for that. It wouldn't be a demo if it didn't have something, Fabian. Yep, I knew it was bad when I saw the cluster go down, and I was like, okay, everything I set up to make sure this would work no longer exists. So, okay, there we go. Now that I've actually run all the proper tasks and all the variables are defined, here we can see it logged in.
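The login flow just described might be sketched like this; the username, password variable, and API host are placeholders, and the assumption is that `openshift_auth` returns a token that later tasks can pass as `api_key`:

```yaml
# Log in through the OpenShift OAuth server, the way `oc login`
# does, and reuse the returned token for an authenticated query.
- name: Log in with username/password
  community.okd.openshift_auth:
    host: https://api.demo-cluster.example.com:6443
    username: test-user
    password: "{{ test_user_password }}"   # ideally a vaulted variable
    validate_certs: false
  register: auth

- name: List pods as that user
  kubernetes.core.k8s_info:
    host: https://api.demo-cluster.example.com:6443
    api_key: "{{ auth.openshift_auth.api_key }}"
    validate_certs: false
    kind: Pod
    namespace: testing
```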
It returned an API key and host for an ephemeral cluster. And now, as that user, we're going to just use this basic k8s_info module to list all the pods in the testing namespace, which is the namespace that we were running those initial image stream tests in. So, we can see that user, hopefully... yes, that user does have the permissions. So, we logged in as that user, and the user's permissions were properly configured to list all those pods that we defined in that previous playbook. And then just the last one here. So, this is basically the OpenShift process, the oc process sub-command, accessible via a module. So you can target local templates, or you can target templates in the cluster. You can either render them and get the list of rendered resources back, which you can then store and create by hand in the cluster, or you can just say, you know, render and then create all the resources in a single task. So, we can process this template from the cluster; we're rendering it locally. So, this is the basic, like, default included nginx example template. So, you know, it's just a service, a pod. And now we can iterate over them, create those resources, ignore those warnings. Yeah, basically pretty straightforward. And then here, same basic thing, but we're just letting the module handle creation as well. And it's just going to run basically the same logic. And since it's the same template with the same parameters, it actually shouldn't really result in any actual work being done. Yeah, so. All right. And then this is the last little bit of the demo. So, this is demoing the inventory and connection plugins. And what this basically allows you to do is, if you just configure that you want to use the community.okd openshift inventory plugin, it can basically just, when Ansible is starting up, query the cluster. You can trim it. You can have multiple clusters in your inventory. You can have multiple namespaces or all namespaces.
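The template workflow just described might be sketched like this. The parameter names follow my reading of the community.okd.openshift_process documentation, so treat them as assumptions to verify, and the nginx-example template is the default one shipped with the cluster that the demo uses:

```yaml
- name: Render and create the default nginx example template in one task
  community.okd.openshift_process:
    name: nginx-example        # template object already present in the cluster
    namespace: openshift       # namespace where the template lives
    namespace_target: testing  # namespace where rendered resources get created
    parameters:
      NAME: nginx-example
    state: present             # "rendered" would only return the resource list
```

With state set to rendered instead, the module returns the rendered resources for the playbook to loop over and create by hand, which is the first variant shown in the demo.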
But basically, it will go out to all the different pods and add them to the Ansible inventory as targetable hosts. So, here you can see, like, we want to target all the pods in namespace testing. So, these are the hosts that we specify. This is a group that was automatically constructed by the OpenShift inventory plugin. And basically, we're just going to run a little ping. And it immediately failed. That's cool. If it failed, it's probably... it looks like it was a build image. It failed because it was not an image that had Python in it, I think. But we can see here that we had our hello-world deployment container and our hello-world DC container, the ones that we defined in that initial image stream example, that are in the testing namespace. Like, they both were found, discovered, and we were able to reach out and get information from them. I didn't point it out, but if you go to our deployment config, it has this environment variable TEST. The deployment has the same variable TEST. And so, we're just going to verify here that we can output the message of that env var, because this code is running (that is so difficult to see) on that actual container running in the cluster, automatically gathered by the playbook. We can see there, on both of those two hosts that it found, that were successful, the TEST env var was defined, and its value is what we set it to. And then this is just showing that you can also use the copy and fetch modules to copy content back and forth between the cluster and the host, retrieve file content, and assert that the content is the same, except I skipped one of those tasks. Yeah, that's basically it. That's the demo I had prepared. Apologies for the choppiness of it at times.
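The inventory plugin configuration being demoed is typically just a small YAML source file. This is a sketch assuming the community.okd plugin's documented options; discovered pods are then reachable through the collection's oc connection plugin:

```yaml
# okd.yml: inventory source file for the community.okd openshift plugin
plugin: community.okd.openshift
connections:
  - namespaces:
      - testing   # limit discovery to the demo namespace
```

Running ansible-inventory -i okd.yml --list would then show the discovered pods grouped automatically, and a play can target them like ordinary hosts, provided the container images actually have Python in them, as the failed ping in the demo illustrates.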
But basically, we're just trying to make it so that the experience of using Ansible to automate your workflow has all the same functionality as using Bash, but with all the niceness of using Ansible, like idempotence. You saw there, my demo crashed a couple of times, and I was able to run through it again without everything being changed and without constantly redeploying or anything. So, yeah. So, yeah. That's a good question to ask. Timothy? Yes. Yeah, so thanks, Fabian. That was great. I hope it got across all the different things that you can do to automate and cut down on all the command line stuff, the manual work and repetitive work, every time you would deploy a cluster or do anything. The big question that we have for you is, besides using it, trying it out, seeing how we did in our 1.0: you know, what could we do next? What do you want to see next? There are a lot of areas that we didn't touch on, and the question we kept asking ourselves was, well, would that be useful? Which one of these is a priority? Which one is not? That's the type of feedback that we're looking for. We got a good core set of feedback, mostly from Red Hat consultants and a couple other people we knew in the community who were doing work with Ansible and OpenShift, and they gave us that initial batch of use cases, and we've essentially covered them all right now. So we're wondering, where do we go next? So that's the feedback we'd love to hear. For those of you who are interested, I will give this deck to Diane to send around, but these are some of the repos of the content you were looking at, starting with Fabian's demo code that he was just running through, and the repos where we're developing the collections we've been talking about. And then there's a bunch of blog posts if you want to go deeper and read about this stuff in more detail, maybe more elegantly than I've been speaking about it. So that is all that we had. Well, that sounds good.
I think we could use some feedback from the community on maybe some more useful examples. The example was good, Fabian, even though you said it was bad, but just how people would use this in production, and so Joseph and other folks who are doing that, your feedback will be most welcome. So one thing I'd love to see, and maybe it's already part of the playbooks, is standard operating procedures for admin tasks, like key rotation. I'm not sure, we probably don't have that yet, but something like snapshotting and backing up cluster state. So I'm not sure, do we have a playbook for key rotation, for example? No, so the content there was mostly focused on providing those basic building blocks, kind of the foundational work, in order to allow us to build more things on top of that. But collections do currently allow you to distribute roles, and I believe in the future it's planned to allow you to distribute playbooks as well. And so the hope is that as we've sort of built this out, we get more community involvement, get some better subject matter experts. I work a lot with operators, with the Kubernetes API and low-level components and things like that. I have less experience with sort of higher-level cluster administration, and so that's definitely... we would love to have playbooks and roles that would enable users to very easily automate those tasks, but this is now the point where we go out to the community and look for people who know about that. They don't necessarily need to do all of it, but if they have requests for features like that, documentation, maybe some getting-started pieces, those would all be very useful things for us to see pop up in that repo, in order to help us prioritize and also in order to help us understand what exactly those cluster administration tasks are and how we can help automate them. No. Wait. So I just hopped on maybe halfway through the thing. Sorry. Well, OK.
Well, I was actually going to ask something on your behalf, so I guess now that you're here, you can ask me. Yeah. Speaking specifically to cluster administration, one of the things that has been a pain point in deploying newer clusters, fresh clusters, is the simple things like approving certificates and stuff like that. I didn't see anything specifically in your demo about those sorts of one-off kinds of deals. It was more about making YAMLs and pushing YAMLs into places. And I know you can sort of hack the existing Kubernetes plug-in, you can do some stuff and write some Ansible magic, but that would be really brittle. But supporting workflows like that, maybe specifically certs because that is such a pain point, would be really cool to see. Yeah. Yeah. I mean, I would love that. That's the exact... Sorry, go ahead. And I'd like to tack on to Sree's ask. Specifically, one of the major pain points I have right now is orchestrating UPI OKD/OpenShift deployments, because if you read the guides or the documentation on this, it's fairly involved and somewhat difficult to coordinate. And this is one of those things where Ansible orchestration could make it a very easy path to get things set up correctly when you can't use the IPI-style orchestration. Like, for example, at work we have an OpenStack that's sufficiently old that IPI cannot orchestrate it. And so we have to do UPI-based deployments. And it would be very nice if UPI wasn't a pain to do. And a lot of this comes from... we previously orchestrated using Ansible for OpenShift 3.x. And there is basically no automation path for OpenShift 4.x for a UPI-based deployment. And that's sort of... I think if you're considering admin tasks, that's probably what I would consider the number one pain point in terms of admin tasks that people would frequently want to reach for Ansible for. Yeah. That's good feedback. Have you met Roger Lopez at all? No. He's... I believe he was working on UPI.
He wrote some Ansible playbooks to help with some UPI. But also to these points, one of the things that we have heard on a very broad level, where people are interested in applying Ansible, is what we're calling last-mile configuration. It's like the installer gets you so far, but then it gets you to a point where you need to do additional things to make that cluster useful. And those things are probably very specific to what you're trying to do with that cluster, so it's hard for an installer to capture it all. And then it's doing all those repetitive tasks. Yeah. And that is something that we've heard. It was stuff with etcd, it was stuff with NGINX, stuff with PERME. And then just getting storage into the cluster can be a huge pain on UPI. What you're describing is exactly my experience. I have basically a bash script, and the first half of the bash script is: use the installer to get the cluster going. And the second half is: slam YAML file after YAML file into it in order to get it to a point where I can actually use the thing. In the right order, with a sufficient amount of retries, with some blind hope and prayer that it will actually do the right thing. It's usually a three- or four-hour job. Yeah. It shouldn't be, but it is. So I think... So what I'd just like to point out here is that this Ansible project is probably not meant for installing the cluster. At least that's not the focus, or it shouldn't be. I think there was a decision, an informed decision, made not to repeat the installer we had in Origin 3.x. So, yeah, I'm not sure how much you want to focus on that. No, no, no. The outstanding stuff is past day zero. It's not that... It'd be nice if things like that could be more... like, I'm not asking for another openshift-ansible like we had for 3.x, right? That was its own special, very entertaining thing, but I think for this kind of stuff, it would be nice if the...
And I guess this is one of those things where we will just have to sort of play with it as a community and figure it out, but there's a lot of these sort of things, like the cert approval, things like that, and I would think making it a little bit more templated in terms of, like, deployment configs would also help a whole bunch, just for making it quicker for people to spin up their own Ansible playbooks. Because right now it is highly, highly manual to write all of the YAMLs out in Ansible. And I think you guys are already getting there, but yeah. Yeah, like, it's so manual to the point that it is preferable to just do a bash script and hope and pray, which is, I think, not what anybody actually wanted. The problem with the cert approval in particular is that you actually have to look into them, because there is no... The only way to find out that a malicious node is joining your cluster is to actually look into those certificates. So making an automated approval is very complex, unless you're in the IPI world where you can confirm that the cert is valid. You can confirm the cert being valid without IPI just fine if you know what the master cert is and that's part of your playbook or whatever. Like, we have a copy of the parent cert inside the bash script, and it just checks that to see if that's what it was derived from, and again, that's stupid and manual and horrible, but it is still... It is not... Throwing your hands up in the air and saying UPI is impossible to automate is basically the wrong approach for making this successful, because I can promise you almost 100% of all small-use workloads are going to be UPI, because none of the IPIs work, just zero of them work. And so you can't say UPI is effectively unautomatable.
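To make the discussion concrete, here is a brittle sketch of the kind of ad hoc automation being debated, approving pending node CSRs. This is explicitly not a supported module, just generic modules wrapped around oc, and it skips the verification against a parent cert that Neil describes, so it should never be run blindly outside a trusted lab environment:

```yaml
- name: Find CSRs that have no conditions yet (i.e. still pending)
  kubernetes.core.k8s_info:
    api_version: certificates.k8s.io/v1
    kind: CertificateSigningRequest
  register: csrs

- name: Approve each pending CSR (no verification of the requester!)
  ansible.builtin.command: >
    oc adm certificate approve {{ item.metadata.name }}
  loop: "{{ csrs.resources | selectattr('status.conditions', 'undefined') | list }}"
  loop_control:
    label: "{{ item.metadata.name }}"
```

A safer version would first inspect each request's certificate against the expected signing chain before approving, which is exactly the gap a purpose-built module could fill.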
Like, you have to start looking at, if you've got all the IPI stuff done, what are the bits and pieces of UPI that you can bite off and turn into something automatable, because right now it's such a terrible place to be that it is just easier to go with the special place of pain that is the openshift-ansible playbook for 3.x, because that is better than the current situation for UPI on 4.x. Yeah, let me jump in here again. I think that again is a huge opportunity for the community, for you, Neil, to step up and contribute some playbooks there. I think, yeah, with UPI, obviously we prefer everybody to use IPI. It's much easier. We have a much clearer picture of the cluster we're getting in the end. That is kind of what we're aiming for, right? So if you have to use UPI, if that's your only option, then so be it. But we can't really account for all the different... there are just vastly different setups there. And so if you have one that you think is broad enough, that is applicable in a big number of places... Yeah, I mean, I think... Yeah. So guys, we're also at... That would be great. We're at the end of the hour, which is what happens to us every week. And so, yeah, Neil, if there's a broad-based one that we could do to solve a group of them, or at least get a good recipe done, that would be a great contribution to get started, and maybe working with the Ansible Collections folks to get it tested would be a good thing to do. Yeah, sure. Yeah, I know. It's work. It's not that it's work, I don't care about that part. My fundamental problem is that the way the OpenShift installer has been designed is that either I know all the things or I know none of the things. That's not an acceptable path for making it a true successor. You've got to be able to make the installation process more composable. I would love it if the IPI steps that are part of openshift-installer were just pieces I could invoke separately.
And then I could say, well, this piece doesn't apply in my stuff, and I could just call the rest of it: second stage through IPI and first stage through myself. Like, that is not even possible. There are no pluggable back ends. Like, this is... There is no avenue to make this better, as far as I can tell, other than writing all the crap from scratch over and over and over again. And that's, frankly, not good. And that's my problem with this. Obviously, we do test UPI in an automated CI setup. And if that were, you know, more agnostic to the underlying platform, I guess you could use that everywhere. But that just works in our CI because it's set up for that. And with UPI, you just have to, each time, adapt it to the environment you're setting it up in. So it is difficult to cover all of that. And if you want to cover all of these, you know, options, then you end up with something like the Ansible installer in OKD 3. So, you know, there is some trade-off to be made here. And I think we're definitely... you know, if you're able to use IPI, it's much easier that way, and you should do that. Obviously, that's not going to help anybody who isn't able. But still, I think I want to just highlight that the IPI install workflow works really, really well, and much better than in OpenShift 3. Yeah. And there's a little conversation on the side that I want to encourage Sandro to continue to have offline, maybe on the Google mailing list that we have, to continue this conversation, because we are out of time today. It's a good and healthy conversation to have, Neil. So we'll keep poking at it and making the installer better. Let's see what we can do. Could I add a quick comment here? Yep. Just because the whole idea of, like, pluggable installers and stuff is coming up, and I had a really relevant kind of architecture call earlier today, and we were talking about how we're going to bring more providers in for OpenShift.
Like, this is in the context of an Alibaba Cloud provider and Equinix Metal and whatnot. This is some of the work that's going on now. And it sounds like there is some traction taking place, at least from the Hive and installer teams, around this notion of how we create materials that would allow some of these cloud providers to more easily integrate with our installation process. I don't want to say an interface or an API, but something they could write to, to make it easier for providers to plug into that. And it's not the exact same thing that, like, Neil and Sree are talking about, but it's kind of in the same area, I think. Yeah, it's going that direction. I mean, if that existed, there is a much easier path for even me to orchestrate within our internal infrastructure, because things are kind of special because of reasons. But it's still a cloud infrastructure platform. It's still OpenStack. Right, right. It's a software-defined fabric, so I should be able to do that, but there's just no avenue to plug those things in. Right, so the long and the short is, it doesn't exist. It's very much an idea right now, but people are thinking about this idea, and we're especially thinking about it in terms of how do we open up the installer so that we could allow more platforms to write to the installer without needing to go through this huge process of creating a machine API implementation and then reviewing that and then getting it into the installer and then reviewing... there are a lot of chicken-and-egg problems that go on here. So I'm just giving you a little window into the sausage factory here, of the kind of things that I've been hearing about. Well, I'm glad to see that something is actually progressing on that front, even if it is just conceptualization at this time, because that's better than just saying it's not going to ever be fixed.
Yeah, people are thinking about it, and it's... I mean, what you've talked about here is kind of interesting, and I'll certainly yell at people on the other side about it. Well, not yell, but... we'll use all-capital letters in text messages back and forth in Google Chat. Excellent. Not quite yelling, but it is... Neil, it is not to say that your concerns aren't heard or anything. It's just that we are trying to figure out how we can move from the community point of view as well, to get some content or some basic stuff started, even if it's a stub, to move this forward. And if you and Sree have the bandwidth to even get there, that would be helpful. So, and yes, your passion is totally, totally taken... Yeah, both of us, I think, are wanting to clean up the UPI stuff that we did and make it publicly available on the OpenShift org somewhere, as a cookbook or whatever. But I just wanted to highlight the general problem that we have, where these kinds of things are just difficult to create and then maintain, because the underlying architecture of openshift-installer is so hostile to this kind of stuff. And that's something that really needs to be rethought. Yeah, well... We'll get there. We'll get there. That's a rethinking for the next meeting. I'm going to reiterate, and I'll put a note out on the Google group, about whether people want to come and talk about docs next Tuesday at the same time. I think what I want to do is use the other week to do docs work and have a conversation about how to move that forward. So I will do that. And yes, we do want to invite Neil to as many team meetings as possible, to keep spurring people on. That would be great. So anyway, it's after the hour, almost 10 minutes. Thank you for taking the time, and we will pull in Sree too, of course. So anyway, thanks, guys. We'll talk to you all, some of you, next week. I'll post this session up on our YouTube channel. Timothy and Fabian, thank you for coming.
And if you want to redo it, Fabian and Timothy, as a little short briefing, I can just edit out your stuff into a short little video clip. I mean, Fabian, if you want to redo maybe your demo, I'll be happy to rerecord it, or have you rerecord it, and share that out with the universe as well. So thanks for all of your work, and hopefully we'll get some OpenShift Commons content out of this. I always want OpenShift Commons content, but I think I want this sooner than I can schedule an AMA briefing on it, because I think I'm booked out until the end of March now. So Fabian, ping me in Google Chat if you want to rerecord, or if you're okay with me using it as is, and I'll just snip it out. Yeah, absolutely. Okay, take care, guys. I'm hanging up now. Everybody, deep breath, have a wonderful week, and we'll talk to you all soon. Take care. Thanks, everyone. Thank you, take care. Thank you. There you go. Bye, everyone.