Here's Jamie. Hello, Jamie. I am here. Yeah. Cool. I am just singing your praises and expressing my gratitude for you taking over chairing and moderating the call last week, and for your approach to it. So, as we discussed at the docs meeting, I'm hoping I can coerce you into doing that in an ongoing way and taking the reins again today. This is my role in the world these days: I am trying to replace myself everywhere I can, especially if we can get external non-Red Hatters too. So that would be wonderful. With that said, I'm just going to share my screen for a minute. If people don't have the details, I'm going to put up the screen for where things are: the GitHub OpenShift community projects planning page. But what we need to be able to do is get Jamie access to add and remove items here, and we don't have that yet. So I'll see if I can make that happen, or move it into the OKD repo, okd.io, where we do have control over who has access and can easily add external people too. That might be a way to do this. But I'm going to stop talking, stop sharing, and ask Jamie to take over the reins and drive this meeting. Excellent. Well, welcome everybody. The first thing I want to do, since we're here and there are 12 people on the call: I wanted to see if there are any new folks who wanted to introduce themselves, who maybe haven't been to the meetings before. Also, feel free to say something in the chat if you're new, while folks are getting situated. I'm Jamie McGarron from the University of Michigan. Diane, you've already introduced yourself, or maybe you should for people who are new to the call. Yes, I'm the director of community development for the cloud platforms BU here at Red Hat, and I have been driving behind OpenShift, Origin, and OKD for the past 7 years.
It's time to give the reins away. And we see Dusty's in the chat, and of course he's working with Timothy on Fedora CoreOS. Vadim, you want to introduce yourself real quick? I work on OKD releases and fix various issues related to OKD and upgrades in general. Does anyone else want to chime in real quick, say a little bit about yourself? Hey, I'm Tim Otteradier. I work on the CoreOS team, which does a lot of the work on Fedora CoreOS, the OS underlying the OKD platform. Mike, you want to say hi and introduce yourself? Hi, I'm Michael. I'm a technical writer with the OpenShift team. Excellent. All right, it seems like we've gotten some introductions out of the way. See, Mohammed said hello. Folks should feel free to say hi or introduce themselves in the chat and ask any questions. So let's get started with our usual update from Vadim about the latest and greatest things happening that we should know about. Right. So last weekend we released another stable cut from 4.7. The most notable thing about this release is that it has new GPG signing keys; unfortunately, our previous ones have expired. So if you want to update to this version, you have to force the update and skip verification of the keys, but from now on, for the next two years, we should be covered on this. Along with the 4.7 stable release, there is also a new release candidate for OKD 4.8. I encourage everyone to have a look; we're waiting for feedback on that. The most notable change from 4.7 is a new way we build the machine OS content. You're no longer required to use a specific Fedora CoreOS version, and OS extensions are no longer used; all of the RPMs are already baked into the OS content. If this way of building proves stable and reliable, we will also copy it to 4.7, and we would no longer have to use a fixed version of Fedora CoreOS.
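As a sketch of the forced update Vadim describes for clusters hit by the expired signing keys (the release image tag below is a placeholder, not a real release name):

```shell
# Force the update to the new 4.7 stable release, skipping image signature
# verification (needed because the old GPG keys have expired).
# Replace <release-tag> with the actual OKD 4.7 stable release tag.
oc adm upgrade --to-image=quay.io/openshift/okd:<release-tag> \
  --force --allow-explicit-upgrade
```

Note that `--force` bypasses signature verification entirely, so it should only be used for a known situation like this one.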
And we don't have any particular plans yet for when to make OKD 4.8 a stable version. First of all, we don't have sufficient feedback. Second, OCP is still in so-called feature freeze, not code freeze. So early testing would be very appreciated, but we still have a couple of weeks until we can start discussing which version is more stable. I believe that's all from the technical part. On my list, I also have an item to contact Daniel Messer about the OKD-specific catalog. I don't think there's been much progress on that, so I'll ping them and see if there's been any news. I think that's all we've got for today. Are there any questions for... Oh, sorry, go ahead, Diane. I have a quick question. The Ansible team, Tim and Daniel, had some questions about moving the OKD community stuff out of the Ansible playbooks. Did that get completed, or is that still in limbo? No, I think they're still discussing whether they want an entirely new repo, whether they want to fork, or whether they want to archive the previous one. But in the end, that shouldn't really affect the community; it's just a matter of where it lands. The code remains the same anyway. Thanks. Any other questions for Vadim in regards to what he just laid out? Yeah, Vadim, the vSphere failures, were those IPI or UPI or both for the 4.8? In CI, there is a common problem for all vSphere clusters there. The main problem is that our samples are still using Docker Hub, so we pull in CentOS 8 images, and the most common failing test there is that it cannot fetch all of them because our external IP is being blocked by Docker Hub. Unfortunately, the fix for this is pretty complex. Ideally, we would convince the Software Collections folks to mirror all the images to Quay and update all the image streams in their repos; there's like 20 of them. And then it would be picked up by the samples operator and we would have this updated.
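The mirroring Vadim describes could be sketched with skopeo; the destination organization here is a placeholder, and the real work would be one copy per sample image plus image stream updates:

```shell
# Hypothetical mirror step: copy one of the sample base images from
# Docker Hub to Quay. Destination org is a placeholder.
skopeo copy \
  docker://docker.io/library/centos:8 \
  docker://quay.io/<your-org>/centos:8
```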
The short-term fix is that we could probably install some kind of a proxy or a mirror for those images, but that would complicate CI a bit. We will be working with our infra team to fix it up, but I don't think it's easy to achieve, so we'll have to live with that for some time. The most problematic issue I see right now in 4.8 is that the upgrade tests are mysteriously broken. We'll also be looking into that, but in my manual tests, things have been working fine. So I'd be very interested in some results from the field. Thanks. Any other questions for Vadim? Let's move on to... Yeah, thanks Vadim. All right, let's move on to some updates from the Fedora CoreOS side of the house. Dusty or Timothy, do you want to fill us in on anything that you think might be relevant? Yeah, so the current news on this side is that we just moved Fedora CoreOS stable to the Fedora 34 base. The new base was initially in testing and now it's in stable. I don't think we have any major issue right now with this one; I don't see any on the tracker. The podman issues have been resolved as far as I understood, so it should be good. More or less a smooth move to Fedora 34 for now. This one does not come with the cgroups v2 change; that's coming up. That specific change is coming in the next testing release, which will happen in approximately two weeks. And this one, I think, is covered too by the changes in OKD, so it should be fairly transparent for now, because I don't think OKD will be switching to cgroups v2 right now. But that's a different issue. So yeah, I guess that's it from me on the Fedora CoreOS side. The upcoming work has all been announced. Okay, does anyone have any questions about the Fedora CoreOS side of the house? Okay. I missed everything, but I assume it was fantastic. It was fantastic, but I'm sure you would have found something to criticize. Let's see.
I'm the moderator and I'm not supposed to say stuff like that. Yeah, it's okay. I assume we're moving to cgroups v2 now, so that makes me happy. Yeah, it's about to land. It'll be in testing next week and then stable two weeks after. Existing nodes are not migrated, but there will be a nice little banner that says, hey, you're on cgroup v1, you might want to run these commands to switch over. The problem there is, if there are any compatibility issues with people's applications, we don't want an automated upgrade to break them. So we're trying to balance things there. But newly deployed nodes will be on cgroup v2 by default. So if you would like to stay on cgroup v1, you'll have to basically add a kernel argument to keep your system that way. Ignition just landed support for actually changing kernel arguments, which is nice. So soon enough you'll be able to write a Butane config that says, hey, this kernel argument should exist on the command line, and Ignition will apply that for you. That's a nice recent change. I don't know if it really helps OKD much, but it might be able to be leveraged by OKD at some point if that's something that's needed. Any other questions regarding Fedora CoreOS? All right, then let's move on to reviewing the discussion section. There was a discussion item put in 12 days ago. I'll put in the link for folks who don't know where this is. This is the new discussion section, and folks have been using it for ideas and also for bugs or issues they come across. Diane, did you have something? Yeah, I was going to ask if you could share your screen and just show that for people who are watching this as it's recorded. Absolutely. One second. For me to do that, I'd have to quit and reopen. Let me rejoin real quick. Yeah, it's up to you. Or if someone else wants to share their screen with it, that would be fine.
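On the kernel-argument support Dusty mentions above: assuming a recent Butane spec (`kernel_arguments` landed in the fcos 1.4.0 variant), a sketch of pinning a node to cgroup v1 might look like this:

```yaml
# Sketch: keep a Fedora CoreOS node on cgroup v1 via a kernel argument
# that Ignition will ensure exists on the kernel command line.
variant: fcos
version: 1.4.0
kernel_arguments:
  should_exist:
    - systemd.unified_cgroup_hierarchy=0
```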
Why don't I share my screen this time around so you don't have to quit, and you can keep talking. There we go. Sorry, new laptop. All right, great. So the new item is listing vSphere resources which are created during IPI installation via govc. For folks who don't know, govc (people pronounce it different ways) is the command line tool for working with vSphere; it's an open source project, and a lot of my scripting and such is based on it. And so the question here was, would it be convenient, or improve the user experience, if all of the resources which were created were fetchable via that? Yeah, I don't know. Vadim, did you ever find out, or do you know, if there is a list of resources that get created, or an available list in regards to the installer? No, it's handled internally in the installer. But all I know is that it's supposed to tag all of them with your cluster name, so that you can list them and remove them, or fix them if something needs to be fixed. Via the tag, yeah. I'm not sure to what extent resources can be labeled. For instance, in AWS, you cannot label DNS zones. And we have specific applications which clean up after our CI if it fails in the middle, or if we actually have a bug and are leaking something. So ideally, you would be able to list all of them by tag. If some resources are missing the tag, it's a bug in the installer and it needs to be fixed, because it leaks resources across all of our systems and it's pretty severe. Does anyone want to join me in doing some installs and querying for the items created, making sure they're all properly tagged? Are there any other vSphere folks on here who'd like to join me in that task? It actually sounds like a worthy project, because then we could put in bug reports on anything that's not getting tagged. Anyone else? Well, go ahead and put your name in the chat if you are interested.
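As a concrete starting point for the tag-auditing exercise above, govc can list tags and the objects attached to them; the cluster tag name here is a placeholder for whatever the installer generated:

```shell
# List all vSphere tags, then list the objects attached to the
# installer's cluster tag. <infra-id> is a placeholder for the
# installer-generated cluster identifier.
govc tags.ls
govc tags.attached.ls <infra-id>
```

Comparing that output against what actually exists in the inventory would surface anything the installer created but failed to tag.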
Because I think I'm going to do that, basically, you know, start with a fresh vSphere and then just see what gets created and make sure that it matches what gets tagged. Jamie, I was talking with this person when they were asking in chat about listing the resources with govc. I got the impression that what they were trying to do was have a way to audit what had been created, or, during a failed creation, to go back through vCenter or something like that and find everything. So I think you're absolutely right, and what Vadim's saying is great: we should just enable you to use govc to see what was created and what was not, and pull the tags that way. That would be a lot better. This also feeds into some other things we have going on about tagging all resources that get created. But I think your instinct was great there. If people wanted to just confirm what's created, that would be amazing. Excellent. And Mike, when you get a chance, not necessarily right now, if you can scroll back and find that discussion in the chat and link us to it, that might be helpful. Maybe just put it in the actual discussion item on the repo. That'd be awesome. All right. The other discussion item was an upgrade failure from 4.6 to 4.7. It doesn't look like anyone looked at that. Yeah, there's a lot going on here; this would be some serious debugging. But if anyone wants to take a whack at it, this is a bare metal install on VMware. This is the most recent discussion item. My time is limited at the moment in terms of VMware stuff, the next couple of days anyway. Now let's bounce over to the issues section of the repo. So, yeah, just to the left, there you go. Perfect. Thank you, Diane.
And anything you want to draw to our attention here, Vadim, anything where folks could help out in troubleshooting, or any other way we could leverage the group to address some of these? I'm thinking the CRC build issue is the most problematic right now. We've had a problem for some months building a 4.7-based release of CRC. Charro would have the details, but I think it was because we are using a different partition layout, and the image being spun up is not large enough for the build. So we were kicking the can down the road, and now our stable build has expired, because the kubelet certificates are valid for six months only, meaning we have to rebuild the 4.6 image at least every six months. If I remember correctly, Charro has documented how to build CRC fully, so it would be a great exercise for the community to build a CRC from 4.6, or ideally 4.7. That would be great, and it would really relieve us, and Charro in particular, of the official CRC build, which we would post, and this issue would probably be resolved. And do we have anyone on the call who would have an interest in that? I'll take a look at that, give it a shot. All right, put your name in the chat or send an email out on the working group mailing list. If you're interested, we'll try to leverage the group as much as we can to get some of these taken care of. I know Charro's not on the call today, so we need to get him added to the repo so that he can do that, and figure out the OKD landing page there and get that updated too. So those will be two things, and I'll take a look at how I can get that facilitated. Yeah, and we'll get a link to Charro's CRC document. I don't have the link handy, but we'll find it and put it out over the email list so folks have it. Anything else, Vadim, that stands out that you'd like us to leverage the group for? No, I don't think we have anything else easy to start with.
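As an aside on the six-month kubelet certificate expiry mentioned above, a quick way to check any certificate's remaining validity is openssl; the default path below is an assumption about where a node keeps its kubelet client certificate, so adjust it for your image:

```shell
# Print the expiry date of a certificate. The default path is an
# assumption about the node/image layout; override CERT as needed.
CERT="${CERT:-/var/lib/kubelet/pki/kubelet-client-current.pem}"
openssl x509 -in "$CERT" -noout -enddate
```

The output is a single `notAfter=...` line, which makes it easy to script an "is this image still valid" check before publishing a build.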
Most issues are related to our infra, and moving away from Docker Hub is a titanic task, but working with the SCL folks and establishing contact with them at least would be very, very useful, because we barely have any. I wanted to ask Jamie a question: we talked at the docs meeting last week about doing some pair reviewing slash pair programming of the docs. Is that something you think we should add as a discussion item here? Is that a way to keep track of it, or do you want to keep that separate? I think we can add it here. You know, there are a couple of ongoing things that are sort of between the groups, so yeah, we could definitely do that. I don't see Mustafa on the call at the moment, so now that I've brought it up, I'll just do a little update on the conversation we had; I know Bruce and Jamie were both at the docs meeting last week as well. For those of you who don't know, there's a group of new Red Hatters onboarding who are going to be coming through the OKD working group as part of their onboarding process in the engineering team, and Mustafa is the first guinea pig of that group. He's been on most of the calls except this week; I'm not quite sure where he is today. But we had a conversation about taking a pair-reviewing or pair-programming approach to reviewing the docs. So taking one pair, in this instance we were thinking Bruce Link and Mustafa, who's new to it and hasn't really done a lot of deploying of OKD, reviewing docs.okd.io, and testing out the workflow of how that pair programming, or pair reviewing, would work, and then how we interact with the docs team. So we'd have one Red Hatter in each pair, which would be Mustafa in this instance, and work out that workflow.
And then once they've done, you know, not all of docs.okd.io, but a section, and figured out what the workflow is, the idea is to replicate that, like we did for the hackathon on testing and deploying: taking each section of the docs and basically pair-programming it with a Red Hatter and an external person who's got some experience in it, reviewing the docs, and using Hopin again to do something similar. So the first stage would be to do one pairing, figure out the workflow, and document it; then host, probably not a Saturday in the summer, but an afternoon in the summer, or let people do them individually on their own schedule, and work through the docs that way. And that would be one of the onboarding tasks for all of these newbies to OKD, and anyone else who wanted to participate as well. So that could help us, for example, with the CRC docs that Charro has done, getting those up to speed. Because what we've all seen in the docs.okd.io section is that it is just a rebuild from the OCP source, and often there are little tweaks for what we're doing that aren't getting integrated into that flow. So that's one of the things that I'd like to see if we can get moving forward. And Mustafa, I did talk to him; Bruce, he's game. I did not send an email out because I knew you were off for Victoria Day yesterday. And you're still in school, so school should be ending; once school ends for you and you've finished all your grading, I'd like to set up that first event. And if anyone else is interested in doing something like this side by side with a Red Hatter, at the end of this week, good to know. We'd like to use that to onboard folks. And if there's an area of the docs that people think really needs focusing on, raise your hand or put an item in the discussion section.
And we'll take that on and pair somebody up with you to work through it. Because I think you need one Red Hatter on there to handle the interaction with the docs team for docs.okd.io, but you may not need that for other chunks. That's on okd.io, did I catch that right? Jamie, Bruce, other things you want to add? I have a question about this; it may be better for the docs meeting, but I'm just curious. I would imagine that we're going to keep updating the OKD docs every time a new release comes out, the same way the OCP docs are. But was there any thought about whether the OKD docs should exist as a change set on top of the OCP docs? Because it seems like it's going to be a ton of material to go through every time the OCP docs get updated. Yeah. So that's the workflow we have to figure out; that's why I wanted to do one pair first and get someone from the docs team. And I don't know, Michael B, you're on here; I think I heard you say you're a technical writer. Are you on the docs team, Michael B? Yes, I am. All right. So I would love it if next Tuesday you could come to the docs meeting, which is the same time, same place as this. And we can figure out the workflow, how you want to be informed of the things they want to change, those kinds of little details. That would be really helpful for us, because we have resources; Larry just stepped up and said, yeah, I'll do the vSphere stuff, and so on for each of these things. And that way, once we know what the overlay is for the docs.okd.io part of the documentation set, then hopefully each time we rebuild a new release, those patches get applied properly, so Michael doesn't have to go in and handhold them all. So that's my thought process, but Michael, chime in now or next week. And maybe it's not just a pair, maybe it's a trio.
It's a trio with one person from docs, one Red Hat engineer, and one external person who's an expert in an area or has deployed in that area, and who uses the docs from an external point of view. That trio might be what we need to kick off the first one, because there are lots of processes for getting the documentation updated, and the person we had working with us on docs a year ago has left the farm, gone to another farm. So that's what we're missing. That sounds good; just let me know how I can help out. Yeah, same day and time, but next Tuesday, just come to this session. And then I was going to ask Mike McEwen whether anything has happened with the guides and moving them over, or if you're just as busy as I am and nothing has happened. Well, I don't want to speculate about how busy either of us is, but no, I kind of backed off because I thought you were going to take a look at it. But if you're not finding time, we've got a long weekend coming up; I can take another look at making a PR there. If you could make a PR, that would be great. Yeah, I just didn't go through with it, because the last time you and I talked, I thought you were going to take a look at it. So I didn't get any deeper into understanding that Jekyll-ish blogging platform that's there; I forget what it's called. Yeah, I think the community person who can help out with that, who understands the back end, is Joseph Meyer from Rohde & Schwarz, who's not on this call. So I'm going to volunteer him to help you with that; reach out to him. Since he didn't show, he gets volunteered. Yeah, I mean, I think it's just one of those situations where I need to sit down and spend a couple hours with it and figure out what's going on.
Once I wrap my head around it, I should be able just to copy those docs over. We'll see. Yeah, it's not bad. And there are a couple of new blogs that went out, very quick ones: one on a video from the last KubeCon where Joseph did a nice talk, and the other on the office hours. And I think I did them right; they're public. The other thing that I have: if anyone's interested, the video will go up later today, hopefully. The Crossplane operator needs to be tested on OKD; it's just been released. Just prior to this meeting, I got a briefing on it. There is a blog post that Chris Choudhury wrote; it's on OpenShift.com, or it might be on the developer site. I'll find the link and post it to the group. But if anybody would like to, they're looking for feedback on that Crossplane operator, making sure that it works with OKD and vanilla Kubernetes, or any other kind of Kubernetes as well. They've done a lot of testing on AWS, so the safe bet might be testing the Crossplane operator on AWS; OKD there might be the easy path, just to make sure everything works. And then I'm going to zip it and let Jamie go back to it. So the link that I just posted in the chat is to an older discussion that's ongoing. We talked about this last week. This will be something that the docs group is going to tackle: clarifying the community support model, what that actually means. And, in essence, I'm going to move a little bit because there's some construction happening here. In essence, the issue is that we want to make sure that when people look for help, they're not reaching out directly to a Red Hat employee or picking one individual person, but are in fact asking the community for support and helping to build community resources.
And this has spawned out of multiple situations of folks reaching out to Red Hat employees or individuals, with certain expectations in terms of time and resources that don't align with the model we have for support. So the docs group will be talking about this. What we need from the community at large is: any place where you think this language would be appropriate, in any type of communications, the website, the blog, any of the repositories, any place where you think this language would be helpful, please let us know; put a comment in the discussion thread there. That would be super helpful. We're going to come up with the language within the docs group, I think, and if anyone wants to chip in, we'll probably be adding it to this thread. Any questions or comments on that? Any thoughts? Yeah, actually. Hold on, we've got three people. I think Diane was first, then Mike, and then whoever the third person was. Sorry, that was three. Okay, who's going first? Let's do Mike, then Sri, then Diane, because Diane seems to be having mic problems here. I was just going to say, like, can we just put this in a giant banner across docs.okd.io? Sorry about the mic issue. I think there's some good verbiage on a couple of other open source community sites that we can lift and reshape as well. I'll try to find that; it could help. Put those in the discussion thread; that would be awesome, Diane. Sri? Yeah, I was thinking specifically in the case of the Slack channels: maybe it would be possible to have Slackbot say something to people when they first join. Like, hey, if you're using OKD, maybe ping this group; and we set up a group of community members who want to be involved with it, like an @okd group or something.
So that they can ping that, instead of pinging a Red Hat employee directly, or someone they've seen in a GitHub issue, which I think is more often what's happening. Right. Yeah. So, I don't know who has control over that; do we have control to be able to do that with the Slack bots there? The Kubernetes channel, we don't have much control over. I can reach out and see if we can slightly modify the OpenShift Dev one with a reference to OKD. I know Amy and the other managers there, so maybe we can get something out of that. Let me add that to my to-do list today. That would be fantastic. I think Sri had a nice idea in there too: in addition to the cluster bot, having an alias for a group of people who want to volunteer to be the OKD support team or whatever. If there were a group of community members willing to volunteer for that kind of stuff, then if someone pings that alias, these people get alerted and might respond. I don't know if that's something people would be interested in or not. Yeah, it's not an obligation or anything, but I know there are a few of us who hang around in there and respond to people's threads, so it's just formalizing that ad hoc arrangement a tiny bit. Yeah, totally. Formalizing it a little bit, or even having a banner in there that just says, look, if you're looking for help with this, you know. Let me ask what's possible; the art of the possible. Excellent. Anyone else have comments or questions on this topic? All right. The other one, which sort of ties into this: we had talked with Vadim at the last meeting about the community coming up with a sort of quick little guide to how to troubleshoot your own OKD install.
You know, in addition to the little bit of stuff that's in the official OKD docs. And I know Vadim was going to come up with a list of things he thought we should fill out and provide more details on, in terms of install stuff, based on common issues that he sees. Vadim, did you have anything you wanted to chime in on that? I don't think I've made any progress on actual implementation, but I'm thinking we could start with a markdown file in the repo, get it reviewed, and slowly iterate from there, maybe making a video or a full-blown blog post. I could take a look into this this week, hopefully. If I forget, please do remind me; that's very important, actually. Yes. And we want to be mindful of your time, so the goal would be not to put this on you, but just to get the few ideas you can provide on what you think the pain points are for someone doing an install, based on what you've seen. And then we can do all the legwork of filling that out and putting in details. Right. I was just thinking we should involve our architects, because that list would also be very useful internally; we could make it more generic to support various platforms and effectively pass it to the installer team to have the documentation completed. So I'm thinking I'll start with my own notes, then we'll pass it to the architects and the community and work from there, and in the end it would land in the installer repo as developer documentation. That would be fantastic. Thank you, Vadim. This would be cool too. Sorry, I was going to... I don't know if we're raising hands or something, Jamie. No, that's fine. Okay. I just wanted to tailgate on what Vadim was saying there: it would be cool to collect some of these resources together, because our team has also been trying to create troubleshooting docs for our components. So we have a machine API troubleshooting guide.
We have a machine health check troubleshooting guide in the works, and a cluster autoscaler troubleshooting guide planned as well. Similar to what Vadim is talking about, we've been putting these in the repositories where the code lives. So it'd be kind of cool at some point to have one page with all these troubleshooting guides in one place, or something like that. And then the question: is there anything out there that shows a schematic or a diagram of all the various OKD components? There's some stuff out there for base Kubernetes, but OpenShift has a whole bunch of stuff on top of it. That would be helpful, just so people have an idea of how to orient themselves, or what could be the cause of their issue. That sounds like a great idea; I'd be happy to contribute to that. I don't think we have an exhaustive list for this, but if you look into the release info for the payload, that's the list of components included in the release payload. It doesn't explain how they work together, because that changes every release and I don't think we could keep it updated every single time. But it gives you an idea of what can possibly go wrong: it could be networking, it could be monitoring. It could be useful, and it also points you to the particular commit each component was built from, so you can find out which repo it was built from. And hopefully that repo has a troubleshooting guide or some developer documentation. I think that's what we have for now, and we can work with the architects to improve it, but that's the best we have. From a more architectural standpoint, not just from the component or operator standpoint; because, Sri, what you're talking about makes complete sense.
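For reference, Vadim's pointer to the release payload can be explored with `oc adm release info`; the release image tag below is a placeholder:

```shell
# List the components in a release payload, and with --commits, the
# source repo and commit each component image was built from.
# Replace <release-tag> with an actual OKD release tag.
oc adm release info quay.io/openshift/okd:<release-tag> --commits
```

Following the repo URLs in that output is one way to find per-component troubleshooting or developer docs like the ones described above.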
Like, I've actually gone through every operator and tried to put the pieces together for myself, and it is not an easy task. But I had been talking with another Red Hatter internally, Fabiano Franz, for the other Red Hatters here. When we had first set up that guides and deployment repo, he and I had kind of talked about it. He was working on a project to generate architecture diagrams of OpenShift clusters based on, I think, Terraform or something. So he was generating these really cool diagrams that would show you topologically how your cluster was laid out and where each component was, and, you know, how these things fit together. He was talking about generating some documents that he was going to contribute to that OKD, you know, deployment guides repo, but we never got to the point where he was generating them. I could always go back and ask him if he made any progress there, because I thought the diagrams he was creating were really instructional in terms of just seeing how your cluster is laid out. Very cool. Any other thoughts? That's like a major thing. How do we get people acclimated to just this environment, what components are in their stack that aren't in other people's, because it deploys to so many different places. Everybody needs. Yes. You're cutting out a lot. That's unfortunate. Come here, microphone. No, I was just saying, it's to help onboard people who are perhaps new to OKD in particular, or who might even have experience with Kubernetes. Because there are maybe three things on this list that could conceivably be recognized, like, oh yeah, I know that from vanilla Kubernetes, you know, kube-proxy. Woohoo. I know what that is. And then everything else on top of it is just, who knows.
Yeah, and what Vadim said about things changing between releases, that's going to keep happening. We're adding more operators in the next release, you know, so it's just not going to get any better anytime soon. Yep. All the more reason to maybe start now. I don't know. I might be totally willing to contribute, at least write down what I have figured out about what's on top, all the sort of OKD- or OpenShift-specific value-add stuff. But I'm sure I've gotten about half of it wrong simply because I don't know. How about if Sri took a stab at that? Because I'm just sort of listening in the background, but I'm also thinking about the work that Bruce did around the taxonomy of things to watch out for when you're installing. All of the things in his taxonomy should have an entry point in this diagram or this list at some point, so to me it's the same work effort. There are tons of OCP diagrams out there in all the product-marketing-speak slide decks I've seen over the years. I'm not sure, Vadim, if you've ever seen one that was actually useful for engineering, which is what I've seen. I've seen a bunch, and all of them are outdated. They were outdated when they were created. Yeah, they're almost outdated as soon as you build them. So that's the problem. So, yeah, a sort of related thing, Diane, is that whenever I take five minutes just for the heck of it and look up what's the difference between, you know, OCP slash OKD slash Kubernetes, it's very fuzzy. And, you know, we have all these OCP- and OKD-specific components. You'd think that you could make a bit of a clearer statement from a marketing standpoint. You'd think, you'd think, and there is some market-ese around that on the OpenShift.com site in different places.
It's not really explicitly clear often, even in my humble opinion, and I'm inside the beast. I think it is something that we need to clarify in the docs a little bit better, beyond just "this is community supported and it runs on Fedora CoreOS." I mean, there are some other things that are a little bit different. But there's such a strong message that's repeated over and over again that OCP is just Kubernetes. And I guess the idea is, don't be afraid: if you've heard of Kubernetes, you can use this as well. Yeah. And I can understand that part of it, but it's not the whole story. I think part of the problem here, Bruce, is that when you ask that question, like, how does OpenShift relate to Kubernetes, you're kind of entering a propaganda zone too, right? Because on one hand, there's Red Hat's marketing and propaganda. There's Red Hat's competitors' propaganda and marketing. And then there are all these other opinion pieces somewhere in the middle. Well, yeah, and then there's the truth, right? But so, for a while the messaging was, yeah, OpenShift is Kubernetes, because from a marketing perspective, you know, we didn't want to scare people who were like, well, with OpenShift you get locked into this specific type of Kubernetes, right? So that was the message for a while. But now it's like, well, how do we differentiate OpenShift? What does OpenShift add? How do we enumerate that? And I see salespeople struggle internally to figure out the best way to talk about this messaging. So I agree with what you're saying: we should own that message and have it somewhere on the OKD site, so people don't have to rely on, you know, the propaganda and the FUD and whatever else is out there. But it's a difficult question to answer. Right. Well, maybe everybody.
So that's an interesting thing that we're talking about, because Sri and I have actually been doing this kind of differentiated messaging internally, because internally I can count, like, six, seven, maybe eight different Kubernetes deployments, and they're all different with different things. Everybody's got a different set of opinions, and that means they have a different set of components on top of the base, and they're all somewhat good and somewhat bad. And it's just been such a mess. Yeah, the best thing that I could ever come up with that people wouldn't get up in arms about was calling OKD the Kubernetes distribution that powers OpenShift. And that was as close as I could get to something unopinionated, but it doesn't do it all. I mean, I made that phrase up, so, you know, blame me. So the way that we've messaged it internally is that OKD is essentially the OpenShift Kubernetes distribution. And because we're deliberately saying Kubernetes distribution, we get to explain this in the same context that we do Linux distributions, which is an opinionated assemblage of components that are integrated and connected together in such a way that the user experience is usable. And we have much more detailed stuff internally about messaging around OKD versus other options, which we've also been using for promoting the usage of OKD and OpenShift and all that stuff. I think it took us a long time to land on the word distribution as well. But that's the first thing that we've tried that people can analogize in their heads that sticks. Yeah, I want to say: you have SUSE's distribution with SUSE's opinions and SUSE's tooling, and then you have OKD and OpenShift, the Red Hat, RHEL distribution, you know, with Red Hat's opinions and Red Hat's tooling. And, you know, whoever else's. Vanilla Kubernetes is basically the Arch Linux or Slackware of Kubernetes.
I think we kind of liken it to Slackware. Let's pull back a little bit, because we have seven minutes left and there was actually a question in the chat, and I want to be sure that we get to it. So, a good question about CI: do we have our CI/CD documented? On the front page, in the README, there's a little section that talks about the release process and whatnot, but I don't know of anything else. Vadim, do you know of anything else that actually lays out the CI process for OKD? Yes. Surprisingly, there is a bunch of technical documentation about how to work with our CI because, well, this is the internal thing. The main idea is that OKD contains all you need to start up a cluster. Your only choice here is effectively the starting image, which we will fix also later. And what our CI does is follow the book and do that every time we make a new release. To answer the question directly, the Fedora CoreOS and OKD components are built by their respective repositories. The repository that holds our OKD flavor of Fedora CoreOS is called okd-machine-os. So every time we make a commit there, or a pull request, our CI makes a new build for OKD and replaces the component in question with the code under test. It builds the release and runs a conformance end-to-end test. And if the change is acceptable, it will merge it, build a new image, and push it to the image stream. And from this image stream, we build an official release on our release controller page, which is like a release page for all of our builds. And if this so-called nightly build is stable enough, we declare it stable, mirror it to Quay, and move it to the stable channel. So we have a very detailed explanation of how the CI can be configured and how it works. I don't think we have great documentation about the release controller.
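As a rough sketch of how you might follow the flow Vadim describes from the okd-machine-os side, these `oc` invocations show which machine-os-content image a given release carries, and what a running cluster is currently on. The `<release-tag>` is again a placeholder for a real tag:

```shell
# Which machine-os-content image (the OKD flavor of Fedora CoreOS) does
# a given release carry? <release-tag> is a placeholder for a real tag.
oc adm release info "quay.io/openshift/okd:<release-tag>" \
    --image-for=machine-os-content

# On a running cluster: which release version are we on?
oc get clusterversion version \
    -o jsonpath='{.status.desired.version}{"\n"}'

# And what updates does the cluster's configured channel currently offer?
oc adm upgrade
```

The first command only needs registry access; the last two assume a kubeconfig pointing at a live cluster.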
I think we should file a bug, and we could ask our infra folks to describe some more of the inner workings of this, because that one is also very interesting. We have to specify that everything is done in CI. The only decision we make is to mirror the stable release to Quay, and there's something in the works to avoid doing that manually too, so that our only interaction would be just one command to give it a special annotation so that it gets mirrored, pushed to Quay, and put in the stable channel. Let me see if I have documentation about our CI, but I don't think we have great documentation about the release controller, really. Let's file a bug so that we can track it and add it to the OKD repo. That'd be fantastic, Vadim. Okay, we have four minutes left, so are there any last questions or comments about anything OKD-related that we can deal with in the next three minutes? I want to be mindful of people's time, always. Any other questions, comments, anything you want to bring to the table before we adjourn? All right, sounds like we've got everything we need. In terms of that discussion about communicating what OKD is, that's a great discussion. I wanted to pull it back to make sure we had enough time, but that can be carried on, I think, for sure. So please don't feel like that was cutting it off in any way. Not intended to do that. Diane, did you have anything? No, I'm just going to say to Sri that if you want to take a stab at it, and I'm really glad I hit the record button, because I think Neal articulated it very nicely there in a few words, so I might try to go back and get the transcript of that. And that would be wonderful to get done, because I think we do this internally, externally, across communities, and we really don't have a great way. I am really attached personally to calling it a distribution of some kind. We are not allowed to say what the letters OKD stand for, legally.
I think I've said that before in the meeting. So OKD does not stand for "the OpenShift Kubernetes distribution." We, Red Hatters, cannot say that. Other people can say whatever they damn please. You can call it "OK, Diane." That's all I care about, really. That is absolutely freaking hilarious and kind of depressing at the same time. Well, the CNCF and Linux Foundation own the Kubernetes trademark. And so, yeah, that's why it's licensed the way it is: you can't use Kubernetes in the name of a product or an open source project. That's why you see all the K8s's and the EKS's and the GKE's and everything. That is just the way the world bounces. When Google donated it to the Linux Foundation and set up the CNCF, they donated the trademark, and the CNCF owns that brand. So, yeah, it's an interesting world, but it also keeps us honest. It just makes it hard to message stuff. So hence why OKD has that, you know, "the Kubernetes distribution that powers Red Hat's OpenShift." And it's tough internally, and I'm just going to take the last one minute: if you think about how many different flavors of OpenShift there are, it's not just OCP anymore. So, you know, it's crazy. So anyways, thank you for a wonderful meeting, Jamie. I'm so happy that you're willing to do this, and I love your backyard. So we'll meet again, and if Michael B is here, can he please join us for next week's docs meeting? We will flesh out the workflow for pair documentation reviewing then as well. So thanks, everybody. Special thanks to Jamie.