It's started. Welcome to the OKD working group meeting for March 29th, 2022. Let's take a quick look at the agenda, and if there's anything that was missed, please let me know or add it in there. Let's see if we have any new folks. Any new folks want to introduce themselves? I see one or two names that I don't recognize. No pressure, but if you want to introduce yourself, feel free. Then let's move on to our first agenda item, which is the OKD release updates. Christian is not here today, and Vadim is taking a hiatus from OKD. Christian sent me some info that there's a blocker, and I have a link to it up there. This is for the 4.10 release, and as soon as they can get past that, we'll have a solution. It looks like this issue here might be related to it. If you go into that link, yeah, okay, that is the link actually, that first link, you'll find some material on the ongoing issue for OKD 4.10, and we'll be talking more about that at the next meeting. The docs group is just getting things settled so that multiple people will have access to the Twitter account and be able to tweet out things like new releases. It's just a question of everyone having the same Bitwarden setup to share the credentials for Twitter; we'll talk about that in the docs updates. Is anyone from the FCOS team here, any Red Hat folks from FCOS? Doesn't look like it. So I guess we can go to the docs update. Brian, go ahead. I wasn't at the last docs meeting as I was ill, so I'm not sure if anything happened there. What I can say is I've moved the charter into the okd.io docs and left a notice in the old repo, and we'll talk about how we want to sunset the rest of that content. There is a roadmap in there, and there are membership lists in that repo.
And I don't know whether we still want to do those, or keep them up to date, or just drop them, so we'll talk about that there. In terms of the charter, the charter as it now stands is in the okd.io docs in the community section. I am still working with Brandon to get some styling updates added. He's done some tweaks on the CSS style sheets, just to update some colors, make the site a little easier to read, and improve accessibility. I'm not sure whether he's going to do a pull request on those or wait until he's done any further styling; I'll catch up with him this week. Yeah, some people have commented, and there has been a discussion thread, about site stability. This is down to GitHub, because okd.io is now hosted by GitHub: it's on GitHub Pages, and we just redirect the okd.io URL to the GitHub Pages site, and GitHub has been having a number of issues. Apparently they're having some database issues which caused the outages, and I put a link into the HackMD minutes to a blog post from GitHub discussing the issues they've been having over the last few weeks. The site has been stable for over a week now, so whatever the issues were, GitHub seems to be getting on top of them again. OK, so it wasn't related to the build, as someone had thought? No, no, it was nothing to do with us. It was purely the GitHub Pages environment being down, so every site hosted by GitHub Pages was down for a while. I also noted some repo issues at about the same time where you couldn't access repos, so it was purely a GitHub issue. OK, I think that's it for docs. Dusty did pop in to talk about FCOS stuff real quick. He's only got a few minutes, so let's pass the talking stick over to Dusty to talk about FCOS, and then we'll switch back to docs after that. So, Dusty, take it away. Gotcha. Yeah, I didn't know if we had any specific questions.
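For background on that hosting arrangement: a GitHub Pages site with a custom domain is just a CNAME file in the site repository plus DNS records pointing the domain at GitHub. A minimal sketch, with placeholder values rather than the actual okd.io configuration:

```text
# CNAME file at the root of the Pages site repository:
#   okd.io
#
# DNS records pointing the domain at GitHub Pages. Apex domains use
# GitHub's published A records; <org> is a placeholder for the GitHub
# organization that owns the site repository.
okd.io.      A      185.199.108.153
www.okd.io.  CNAME  <org>.github.io.
```

Because the site is served from GitHub's infrastructure, any GitHub Pages outage takes the site down regardless of anything in the repository itself, which is the point being made above.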
A couple of things to call out, just happenings in the community. Our next stream in Fedora CoreOS is now on Fedora 36. The beta for Fedora 36 came out today, and we've been testing on next-devel for a few weeks, running that through CI. So hopefully people don't find issues, but if they do, at least, you know, we found them before they reached our other streams. I don't know what Vadim and Christian have set up now for testing future streams, but that might be something to look at. And then one other thing: Jamie, I know I think you run some VMware, don't you? We are updating our VMware images to use UEFI and Secure Boot by default. That is happening on our next and testing streams this go-around, and then it'll happen in stable in two weeks. We also have some documentation that we're working on, I don't know if it's published yet, that will show you how to go back in and edit the image if you, for some reason, need to stay on BIOS boot. That's it for me as far as updates. I don't know if we had any specific questions from the Fedora CoreOS perspective. So, I thought that the FCOS image production process actually produces hybrid-boot images that do both UEFI and BIOS. That's how it is for the KVM images, if I remember correctly, so I'm a little confused why that's not being done for the other ones. I think the OVAs themselves have metadata in them that specifies what mode the image prefers, and we're updating that to default to UEFI and Secure Boot. Okay, so the actual content of the disk image inside the OVA is still hybrid, but the metadata associated with the OVA is what's changing? That is my understanding, yes. The reason I asked for that clarification is that at work I deal with some VMware things that I pretend and hope I don't have to care about very much, but it's a thing.
Some versions of VMware ESXi straight up ignore the value that's set in the OVA metadata and will just always do one mode or the other, depending on what bug is affecting the ESXi or vSphere software at that point in time. That's why I'm concerned: if the actual disk doesn't have both parts, then if the field is being ignored for some reason, the image won't boot and there's no clear way to fix it. Yeah, I'm pretty sure that it supports both, because for the most part our images on different platforms are mostly just the same bits repackaged in a different way. Different platforms take different inputs; I think GCP takes a tarball, some take a qcow2, some take a VHD, and some take an OVA. But nine times out of ten there are no changes between the different platforms when you boot the machine, other than a platform ID that gets embedded. So because we can obviously boot either way on other platforms, it should work on VMware as well if you go in and override that and go back to BIOS. Okay, cool. That's all I really care about. Not that I want people using BIOS, but I'm already aware of vSphere and ESXi versions with bugs where VMware doesn't read the metadata value properly and will do one thing or the other regardless of what you tell it. So I would just rather that it actually work regardless. Cool. Yeah, if you have access to one of those environments, go grab our testing or next VMware image. You should be able to grab those on the webpage today and see if they work. I will ask to see if I can. I'll make no promises, but I know this is a problem because I've been bitten before, and I just wanted to ask to make sure. Cool. Yeah, we always welcome that feedback, just because it's hard to know everything; we only know what people tell us. Jamie, I can't hear you. I don't know, how about now? Yes, I can hear you now.
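As an aside on that OVA metadata: an OVA is just a tar archive whose first member is the .ovf descriptor, and the firmware preference lives in that descriptor's XML. Here is a minimal, self-contained sketch of the kind of edit the FCOS docs would describe; the element and key names below are assumptions made for illustration, so inspect your actual descriptor before editing a real image.

```shell
# Stand-in for a real OVA: a tiny descriptor with a firmware preference.
# (Real FCOS descriptors are much larger; this only demonstrates the edit.)
mkdir -p ova-demo && cd ova-demo
cat > image.ovf <<'EOF'
<Envelope xmlns:vmw="http://www.vmware.com/schema/ovf">
  <vmw:Config vmw:key="firmware" vmw:value="efi"/>
</Envelope>
EOF
tar -cf image.ova image.ovf    # an OVA is a tar; the .ovf must come first
tar -xf image.ova              # unpack, as you would with a real image
sed -i 's/vmw:value="efi"/vmw:value="bios"/' image.ovf   # flip EFI to BIOS
grep 'vmw:value' image.ovf     # descriptor now prefers BIOS
```

With a real image you would also need to regenerate any manifest (.mf) checksums after editing, since vSphere validates them on import.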
Any questions or comments for Dusty related to the Fedora CoreOS side of the house? Anything? Awesome. Well, thanks, Dusty, we appreciate your time and appreciate you popping in like this, just on the spur of the moment. Yeah, no worries. I was planning to cover today, but I got busy with some other things, sorry. No, no problem at all. I'll see you all later. All right, take it easy. All right, let's bounce back to docs stuff. One of the things that came out of the docs meeting is the need for a contributors' guide and a guide to the infrastructure, meaning the actual infrastructure behind the builds. The reason for this is something I alluded to earlier in the meeting: Vadim has decided to take a hiatus from working on OKD. So right now there's Christian contributing when he can, but that's it. We can put in bug fixes and code changes and get them approved by Christian and a handful of other folks, so we can still contribute, but there's no guide for how things work in the background. Brian has riffed on this quite a bit: there's nothing that clearly says what the infrastructure in the background is, or what the path to becoming a contributor is. At the docs meeting, Diane suggested following a stepped approach to contributions, like Kubernetes or some of the other projects, where you basically earn a badge after so many contributions of certain types, which allows you to step up to make more fundamental contributions. That's something the docs group will talk about. Does anyone have any feedback on those ideas? Any thoughts on this? You know, this was sudden. The morning of the docs meeting, he just left the channel and said he's going on hiatus. Okay, yeah, Jeremy. Yeah, I have been quite vocal about this. I think one of the challenges is that before anybody can start contributing,
they have to be able to do a build and a local test, because nobody really wants to push anything, or no one should want to open a pull request, until they've at least checked that their code compiles and hopefully works. And my problem is I don't know how to do that. I don't know whether I even can do that, or whether OKD is so integrated with the Red Hat build system, the Prow system that runs internally at Red Hat, that it isn't possible. And there are the 600-odd repos in the OpenShift organization, so just trying to find out which repos I actually need to care about to work out how to do a build: I've tried to go down that maze before, and you just end up going round in circles and getting totally lost. I'd love to be able to do an OKD build. But even things like, isn't the special OKD process still in Vadim's repo? Vadim had that in his personal repo for a while; there was talk about it moving into the openshift org again. So I don't think anybody outside Red Hat really understands, or has the ability to understand, how to do a build; whether it's possible to do a build outside of the Red Hat infrastructure, and if so, what the end-to-end process flow is. I think it would be really good to get that at least documented. I'm quite happy to write things up, but I need some guidance. I need somebody to walk me through it so I can then document it. I think it depends on what you mean by a build. I've built my own images for the last year, you know, for testing. You have to realize that OKD and OpenShift are built from a wide variety of modules, and each of those modules has its own build process. They're similar, but 99% of the time you don't need to rebuild them; you just go out and use the ones that are currently out there and currently built.
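As an aside, one way to narrow down which of those 600-odd repos actually feed a given release, without spelunking through the organization by hand, is to ask the release payload itself. A sketch, assuming `oc` is installed; the release tag below is an example from around this time, so substitute a current one:

```shell
# Lists every component image in the payload, along with the source
# repository and commit each one was built from.
oc adm release info --commits \
  quay.io/openshift/okd:4.10.0-0.okd-2022-03-07-131213
```

The repos that appear in that output are, to a first approximation, the ones a contributor needs to care about.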
So it's more a question of how you build a particular module, say OKD machine OS, and how you integrate that into your own image to test with. It's not a hard process for that particular one; honestly, it's not even convoluted. But what you probably do need is your own repository out on... not GitHub, not Docker, it just went right out of my head... where all the images currently live. Quay? Quay, yes. So you need your own repository there that you can push these images to, and then you build a complete image and push it out, and then you can actually do an install from that secondary image. It isn't going to look exactly like what we're currently doing, with all the dates and everything, no. And there are probably details we can get from Vadim to help with the process, but for debugging and building it's really not that difficult, it's just not well documented. Yeah, so I can probably speak to this from front to back, depending on where you want to start, but I pretty much agree with what John is saying. I don't think it's good to advise OKD community members to try to build an entire release image on their own. That's going to be like boiling the ocean: what you'd have to do is basically fork the openshift/release repo and set up your own Prow system, because that's how we build the release images now. We use all that Prow automation to build the individual images, we put them through testing, and then we promote them into a release image. And this is kind of where Vadim has been cutting the OKD images from. And this is... yeah, go ahead, Jamie. Well, let me add a little more context. One of the things that came up in the docs meeting was that Diane is actually going to check into the ramifications of making OKD a truly external open source project.
So this isn't necessarily in terms of users building it, but the community. The community, yeah, right, right. Well, I'm trying to get to the point of what you originally said about making changes. Brian was saying, you know, you probably want to test your changes before you do anything. So my point here is that, yeah, if the OKD community wants to grow to the point where they've got their own Prow system building everything, that's awesome, right? All that code is still going to go back to the OCP stuff, unless OKD becomes a full fork, and then there's no linkage between the OKD code repositories and the upstream, or downstream, I guess, is the way we look at it, OCP repositories, right? We're working with partners to help them do these kinds of things now from the Red Hat side, and the way we do it is exactly what John was just talking about. The entryway into testing your things is learning how to build an image for the component you're working on, a container image, and then using the oc adm tooling to build a release image from a known-good OKD release that has your component image inserted into it. To me, that's the level-one process you want to get to. Once you know how to create a component image for the thing you're working on, let's say the machine-config-operator: you've got your repo, you know how to build an image, you know how to put it into Quay (or "key", depending on which side of the pond you're from), and then you know how to build a release image that points to your image in the registry. So now you've got that all bundled up, and you've made sure the internals look good so it'll match the image when you deploy it. Just getting to that step alone is a really big accomplishment.
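To make that concrete, the level-one flow just described might look something like the sketch below. The registry, org, and tags are placeholders, and this needs podman, oc, and a pull secret with access to the images, so treat it as an outline under those assumptions rather than a recipe:

```shell
REGISTRY=quay.io/yourorg            # your own Quay org (placeholder)
COMPONENT=machine-config-operator   # the component you are hacking on
BASE=quay.io/openshift/okd:4.10.0-0.okd-2022-03-07-131213  # known-good release (example tag)

# 1. Build and push a container image from your fork's working tree.
podman build -t "$REGISTRY/$COMPONENT:test" .
podman push "$REGISTRY/$COMPONENT:test"

# 2. Build a release payload that is the known-good release with only
#    your component swapped in.
oc adm release new \
  --from-release "$BASE" \
  --to-image "$REGISTRY/okd-release:test" \
  "$COMPONENT=$REGISTRY/$COMPONENT:test"

# 3. Install a cluster from the custom payload.
OPENSHIFT_INSTALL_RELEASE_IMAGE_OVERRIDE="$REGISTRY/okd-release:test" \
  openshift-install create cluster --dir ./my-cluster
```

The same payload override also works for testing upgrades against an existing cluster.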
But once you get there, you can do what John's talking about: build your own release image that has your component in it, and now you can at least test it before you put it up to the upstream. The next step would be learning how to do that for each component and building a full release image. But for a contributor, if we had guidance on how to take a currently released OKD image and use the oc tooling to replace one of the components, that would probably get people started on this process of contributing. And honestly, that's how I do my testing. We identify a bug, whether it was the MCO a year ago, or OKD machine OS, or whatever, and that's how I do it: I build it in my local environment and test, because honestly it's faster than going through CI. It's so much easier to iterate over an hour than over four hours waiting for CI to run a test. Then, once you have some confidence in it... yes, I'm only testing in VMware, and a lot of changes have to go through OpenStack and all the other platforms, but at least you have some confidence that in one environment it runs and builds, and then CI can test it wherever else it has to. Part of the issue we have right now, especially with OKD machine OS, is that you can't build it from a pull request from another user's account. If I create something in a fork and then open a pull request, the build won't work, because of how OKD machine OS is currently built, with Vadim's magic. He has some magic that happens in there. The only way you can build it is if you have actual access to the main repository, not a fork. You can still open a pull request, but you have to have real access to the repo. So one of the issues we have is that somebody has to be able to control it.
Right now I think it's only Vadim and Christian, although I could be wrong; I haven't looked at the OWNERS file very much. It's not really a flaw, but that's a disadvantage of how we're currently building, because that's how we're building the images on FCOS and adding all the other magic that differentiates us from using RHCOS. So there's some discussion that has to happen there if you don't have build authority in that repo. Now, one of the things that came up is that OKD is now theoretically a true upstream of OpenShift. That might change the dynamics a little in terms of what Red Hat's thinking might be about our spinning off, and how any fixes we make would end up back in OCP. They might be reluctant to relinquish it now that we're a true upstream, or they might be really happy to, because there'd be innovation if there are a lot of developers working on it. Are they talking about completely forking it? Because right now we have the exact same code base, except for OKD machine OS. My understanding is that we are completely on the same code base, so they would have to fork everything if we're going to be completely upstream. Or we flip it on its head. The other way to think about this is that the existing process, the one that actually runs right now and builds the OCP-slash-OKD whatevers, shifts to being more in the public view. If we're saying we're already OK with the content set, the literal binary blobs that make up the different container images, where the unbranded bits are identical, then we take the bits that actually have the brand name, the integration bits. That's the MCD, the web console container, and I think like two others out of the 360-odd containers that actually make up OpenShift. Those get splintered: one that is OCP, built internally, and one that is OKD, built publicly. And the rest of them just happen publicly, in one build process.
Because right now we're sister builds; the truth is we're sister builds, and the way Red Hat does it is that they promote at their own pace rather than following the train that we do now. If we keep that model, which I think is actually a really good model, then we can minimize how much splitting we actually have to do architecturally to make this more of a community platform. So if we shift what we're already doing from inside the Red Hat farm to outside the Red Hat farm, and then take the bits that actually have to be different and split their processes, one inside the Red Hat side and one outside, then we don't wind up boiling the ocean two or three times for no particularly great reason. It maximizes the value to the OCP team and minimizes the, how would I say this, responsibility or pressure or onus on the OKD side. That, I think, is what we want to move towards, because I think we basically want to emulate, without the bad parts, the way RHOSP is handled right now: you've got the RDO stuff, that's all being built essentially once, and then Red Hat promotes internally at their own pace. Because otherwise it just turns into madness; it's hundreds of components that need to be recycled through, and nobody on this planet wants to boil the ocean faster. So let's not do that if we don't have to. You know, Neil, I don't disagree with what you're saying, but I think there's a really complicated part of this, and just to push back on the notion: Jamie, you're saying OKD is an upstream, but if that is what the community thinks, that messaging is certainly not happening internally at Red Hat. I think there's a big disconnect here.
And, you know, I love the notion of OKD becoming the pure upstream and us building a model like RHOSP, or like Fedora, where we can consume downstream. But there are architectural decisions being made inside Red Hat right now about the core structuring of OpenShift and how it will change going forward. This has to do with our plans for what we're going to do with OpenShift as a product, right? So I think it's disingenuous to the community right now to say that there's even any sort of upstream notion, because there is no forum whereby the community has input on these big design decisions that are being made. I would expect that if we ever get to the point, and I hope we do, where OKD is seen as the upstream and OCP as the downstream, these decisions would be happening in an open forum curated by the community, and right now that's simply not possible. To be blunt about it, this is Red Hat's product and it's making a lot of money right now. This is a big deal, a very big deal, so there's no way these architectural decisions could be moved outside the Red Hat fence at the moment, not while things are in flight like this. So I would expect that, at some level, the OKD community coming together to start building a mirror of the build process that's happening internally might be the natural course here. And then at some point there could be a discussion about whether there's a way to move the fader over more towards the community side, once we have confidence that there's a Prow configuration happening the same way we do it internally.
And then we could start to model what it would look like to have the community doing this. But from where I'm sitting, I haven't heard any of these discussions internally about moving to a community model, or about OKD being the upstream. Maybe that's happening around whoever Diane's talking with, but it hasn't percolated down to the engineers. Who's going to pay for the community Prow? Because right now it's all being paid for by Red Hat, and that is not going to be cheap. It is not cheap. I mean, I can't pay for it. Yeah, there's definitely no community funding that I'm aware of that would allow us to go through and do something like that. We're not Kubernetes; we don't have that kind of support. And the jaded part of me is saying, hey, y'all are owned by IBM now, you guys are a profit center; they're not going to give that away. And I see you have something, and then, Daniel, you had something as well. So, Brian. Yeah, I was going to say, I think let's do what's achievable. We know there are bits of OKD that are different. We've talked about getting an OKD catalog for what are currently the licensed operators, so that's a difference. We do have some differences in branding. And we are based on Fedora, so there are bits of it where we own our own destiny, in terms of those little pieces. I think if we start there, we can get something working. And then, I agree: Red Hat is very vocal about its open source credentials, and I think this could test those, because at the minute OpenShift isn't really an open source project. There is no governance outside of Red Hat. So I think there are some discussions to be had, primarily within Red Hat initially, and wherever those discussions lead, we're sort of beholden to them.
We can't really do anything until those discussions have been had at a senior level within Red Hat. But there are bits we can certainly do. I'm very keen to try to get this operator thing up and running. Also, our community catalog is broken currently, because the community catalog that ships with OKD has dependencies on the Red Hat operators registry that are missing. For example, I can't install the community Che operator on OKD because it's looking for the terminal operator that's part of the licensed Red Hat registry; without a pull secret to pull that operator, I can't even install the community Che version. So there are things like that where I think we can own our destiny and actually make progress, and I think we should focus on those. In the background, we let the discussions within Red Hat go on, and then we pick up whatever those discussions produce. Yeah, I will let Daniel go. I'd just like to propose to the Red Hatters: one of the ways this could go is the same way CentOS Stream went, where it's Red Hat controlled, but it's out in the open enough that the community has some level of participation. Red Hat Enterprise Linux is obviously a huge money maker, but CentOS Stream still makes sense as a collaboration model, and I think that's one of the ways we could do this, for the same reason. For what it's worth, I don't participate a lot in OKD compared to a lot of you, but I personally would be OK giving up some amount of control over OKD's destiny if the result was that we and Red Hat were aligned in moving in the same general direction. I'm curious to hear other people's perspectives on that, but I wanted to point it out for the Red Hatters who are talking about this internally.
So this is pretty much the reason why I said we probably want to focus on, instead of a conversation about how to replicate the whole infrastructure... I think the strongest and easiest thing for Red Hatters to do internally is to have a conversation about moving the Prow that builds OCP, the infrastructure that actually does that, outside the firewall, and then splitting the bits that actually differ. When you look at all the content that makes up Red Hat OpenShift and OKD, and let's ignore RHCOS and FCOS because those two bits are already technically independent build infrastructures, all the other stuff layered on top is mostly bit-for-bit identical: we produce the same containers, and they get consumed by both sides. Then you have the very small proportion of things, right now I think it's the MCD, the OpenShift web console, the API server, and like two other things, that are actually fundamentally different, because they need to be tweaked specifically to be Red Hat OpenShift Container Platform versus OKD on FCOS. If you consider what actually makes the differences between the two platforms, the bigger win initially is to move the thing that already does the work into public view, so people can see what's happening. Then we can talk about access controls, we can talk about contribution levels, and we can talk about where splits need to be made for stuff that has to happen internally versus externally. The focus point then becomes dealing with things like sensitive OCP work: you have CVEs, nondisclosures, and the weird things vendors do, which maybe need to be hidden from view and exposed later, when things are fine, that sort of stuff. But duplicating the OpenShift Container Platform build infrastructure...
I do not think that's productive. I don't think it's productive for the working group, or anyone as a whole, to consider going down that path, beyond asking how we can do a subset of it for the OKD-specific parts, because the rest of it is identical, and also it's enormous, and frankly I don't think anyone wants to build all of it more than once. Anyone else want to chime in? Just one thing. The one thing I want to make sure of, and obviously Vadim's got his reasons for taking a step back at the minute, is that if Christian gets pulled into an internal project, are we going to be left in a state where OKD stops? I think as a community we need to make sure that can't happen. And if that means somebody from outside Red Hat needs to be able to step in and do the work that Vadim or Christian is doing, assuming they can get the knowledge... I'm aware those guys have a lot of internal knowledge that we in the community probably don't have. John, I think, is probably one of the more technically advanced in the community. But I just think as a community we need to make sure we're not beholden to a single person, in case that person has to step back for either personal or job-related reasons. We don't want this project to stall. That's my concern. Yeah, and I've voiced this before: Vadim is basically the linchpin in all of this, right? And if Vadim gets abducted by aliens, then we're kind of screwed. We can assume that Red Hat would put someone else in there, but we don't know. We do know that Red Hat has put some effort into this new operators catalog that's OKD-specific. That's a step towards acknowledging that OKD is still viable in the eyes of Red Hat, I think, because they're putting some engineering into it.
But we don't really have any guarantees, and no promises. And as Diane phrased it to me the other day, there have been many Vadims. There have been many people in the life cycle of OKD who have stepped into that role and then stepped out, because they get promoted; it's basically a stepping stone to getting promoted within Red Hat. So it's going to happen, and it's just a question of, as you said, Brian, how do we fill that? Can it be filled by an external person? Or can we show the value so clearly that Red Hat easily puts someone else in who's going to do a good job of leading this project? It's been said, and it's probably accurate, and I'm not going to mince words about it, that Red Hat views OKD as a stepping stone to OCP, to people purchasing OCP licenses. It's like, hey, here's this thing, it's nice; by the way, it's even nicer and stable and supported if you pay for OCP. Many, many organizations have gone that route: playing with OKD, testing it, then buying their OCP license. So, yeah, go ahead, John. To comment on what Daniel said a little bit ago: he had mentioned CentOS Stream and such. And, you know, I felt really bad after it happened, but after Red Hat changed the whole deal with CentOS, it left a lot of people with a bad taste in their mouths. I specifically asked when that happened: what's going to happen to OKD? Because, again, you could look at OKD as being a competitor to OCP. It's free, people can manage it themselves, and if you look at a lot of the problem tickets you see online, you can manage a lot of your issues yourself if you have the technical teams to do it.
So, does Red Hat do something with OKD, like turning it into something like CentOS Stream, making it much harder to use in production because you can't be sure things are going to be stable? The biggest issues we have with stability right now, and I know I've said it before and you're probably tired of hearing it, are because of FCOS. And I know you're going to roll your eyes; it's okay. But we know that any OKD issues are probably the same issues that OCP has in terms of the actual product, outside of networking and some of the other things. So we have a high level of confidence that OKD itself is probably fine. If we go to an upstream version like CentOS Stream, or something similar, my concern is that we'll go to a different model of how quickly we do things. Then we're going to have so many changes, kind of like in Fedora between 32, 33, 34, 35, 36, that it's going to be hard to use in a production-type environment, like I'm using it, versus a lab or a college or somewhere people are learning about OpenShift technology. So there's my concern with going to a CentOS Stream type of model. I understand why Red Hat did it, or at least I think I understand why, but it really made the CentOS community unhappy, and now that's why you've got Rocky Linux. Just in fairness, John, you say it was a Red Hat decision, and I think that's definitely the popular conception, but the decisions that were made were actually made by the CentOS community, or whatever their governing body is. It wasn't a dictate that came down from above. Yeah, but I would say most of that leadership were probably Red Hatters. I'm not denying the closeness between those communities; I'm just quibbling about semantics. No, I understand.
And like I said, I was a CentOS user for a long, long time, and that kind of hit me hard and still bothers me. The reason I bring it up is that I said, a year ago or however long ago that happened, that my worry was this was going to happen with OKD. And this is the first time, I'm not saying it's officially been said, but the first time the idea has been brought forward to model OKD on CentOS Stream. Daniel, I understand exactly what you're saying; it's just something I heard that I wanted to express. That's fair. And to be very clear, I am not a Red Hatter. I have no insight into what Red Hat is planning, so this random idea I threw out should not be taken as any sort of official direction for anything. Oh, sure. Sure. I'll just say no comment, but there are continual discussions inside Red Hat about the best ways to improve OpenShift and its future and whatnot. I get what you're saying. To me, in an ideal world, what we'd want out of a situation like that is the best of what's coming out of CentOS Stream: Stream is this kind of bleeding-edge next RHEL that's coming, and the community can put things in there that will eventually make it into RHEL. And I'd imagine that if that's the model Red Hat wants to use going forward for OpenShift, the goal would be to get OKD to a point where the community is using it out front and putting changes in that eventually get into OCP. That would be ideal. But yeah, that doesn't resolve your concern, John. Go ahead, Neil. Sorry. A couple of things I want to quibble with, since we're now at quibbling points. First of all, CentOS Stream is not really bleeding edge, and not even all that leading edge, right?
It's no worse than what happens when you get that RHEL minor release three, four, or six months from now, right? That's how it works. The second point of order is that we have always operated this way, even if we haven't really explicitly acknowledged it. The CentOS Stream model, as it were, is essentially how OKD operates now. We basically take what Red Hat does, spiff it up with an alternate logo, push it out with FCOS underneath instead of RHCOS, and call it a day. That's pretty much what's happening now. The main point of difference between OKD and OCP is where the version numbers track: OKD right now is, I think, going to be tracking 4.11 while OCP is just about to ship 4.10. So we're already in that "OKD is the next minor version of OCP" situation. The continual integration of container technologies, and especially trying to get more Kubernetes stuff vetted, has necessitated using FCOS as the base rather than building a "CentCOS," or however you want to contort the words into a CentOS version of CoreOS. That is already today's reality; formalizing the structure just makes it easier to make the engagement more positive between Red Hat engineering, Red Hat product management, technical marketing, and the community. Because right now the situation is that we all just don't talk to each other and pretend the other doesn't exist, and that's pretty bad. As for being involved in CentOS Stream now: I'm not in Red Hat. I don't know what's going on in the internal discussions, the minutiae of what goes on inside of Red Hat. I know some people seem to think I work for Red Hat, but I promise you I never have, and I don't know if that will ever happen. But I can say this much about CentOS Stream: I was one of the people who pushed very hard on the CentOS Stream concept even before it existed.
When Red Hat acquired the CentOS project to revive it from the half-broken state it was in before, this was something I talked to a lot of Red Hat Enterprise Linux product managers about: I wanted a way for this to be a thing, because y'all pushed out point releases that broke all of my servers, because you didn't test AMD hardware enough to make sure you didn't break it. I had to hold back RHEL minor releases, and the faces they made when I said I had to hold back RHEL minor releases because of actual faults in the software, I think, drove home the point that something like this needed to exist. When RHEL 9.0 GAs in a month and a half, I think people are going to be surprised at how up to snuff it is immediately out of the door, with all the software and hardware testing and all the usage that has happened from having six months to a year of CentOS Stream 9 existing ahead of time. That's a huge win that I want to replicate for OpenShift. That, I think, is where we should point our north star, and for Red Hatters, like elmiko and others, I think that's a productive direction for talking about how in the world we want to make OKD viable, successful, and interesting to both Red Hat and the community. I mean, I tend to agree with you, but if those conversations are happening, I hope they're happening on Diane's end, because sadly they're not happening in the places where I'm hanging out internally. So, this has been a great conversation. I think we covered a lot of ground. It sounds like, first and foremost, we want a contributor guide that includes, as its first chapter, an overview of the infrastructure and how the build process works overall.
Then chapter two is our first goal, which it seems we've sort of agreed on: successfully building components and integrating them, as a first step toward really building up contributors. So does that sound like a game plan moving forward, that the docs working group can start on next week? Basically, create a template and start asking questions to fill out that template, so that we have the first two chapters of a contributor guide. I think I heard it the other way around: the suggestion was that the easier way in is to learn how to build a single component and integrate it into an installation, rather than the bigger build picture. That's what I meant, actually; that would be chapter two, but chapter one would be "what is everything and how is it all connected," just an overview of the build process, so you even know there is this larger thing out there that you're taking as a whole and inserting a component into. I just dropped a link in chat here, because this is a documentation site we just launched that I've been leading the effort for internally. This is the documentation we've been creating over the last six months to a year while onboarding IBM and Alibaba and whatnot, and it's the documentation we're starting to show to Nutanix and others for how they would integrate core components into OpenShift, like new infrastructure-layer components. I don't think it's necessarily specific to the whole notion of the contributor guide, but I think it's another window into the type of activity that someone contributing might want to do. And it's got a lot of links to PRs from IBM and Alibaba and how they integrated their components. I think it's extra information that would help someone who's really trying to dive deep to understand this.
So I just wanted to share it as one more point to think about as we're assembling a contributor guide. It's one more resource, and we plan to keep updating those docs with more information about how to put these components in. So, yeah, that's why I shared that. Jamie, I'll put something together. Except for Christian, and obviously Vadim, I've probably been through this process more than most; I've been through it a couple of times. I'll try to get something together maybe in the next week, depending on my time. It may end up being videos, because a lot of times videos are just easier to do. If I can, I'll try to drop in on the docs meeting next week, or touch base with Brian and me and just let us know what you have, and we can shape it into something we can bring to the docs group. And I have a meeting coming up where I'm going to pick his brain a little bit. So, yeah, John, if it will make it easy for you: if you want to do a demo, we can record that and then I can turn it into documentation. It'll be easier for you. Yeah, it'll probably be a combination. Like I said, I've gone through this quite a few times. What I'll probably want to do is take two components, like okd-machine-os and then our good old friend, and build those two. Those are the two most interesting for us, but the process is the same for the console or for oauth or anything else. There's usually a build file or a Makefile or something that you run, and then you push the resulting container image. The high-level process is the same; the minutiae of each component are a little bit different. Okay. Cool. All right. I'd like to share two more links here, if you don't mind. Yes, share away, buddy, share away.
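A minimal sketch of the per-component loop John describes, under stated assumptions: the repo, registry namespace, and image tag below are placeholders, and each component repo's README or Makefile is the authority on its actual build targets.

```shell
# Hedged sketch: build one OpenShift component image and push it somewhere
# a cluster (or a custom release payload) can pull it from.

REGISTRY="quay.io/yourname"           # hypothetical personal registry namespace
COMPONENT="machine-config-operator"   # most component repos follow the same pattern
IMG="${REGISTRY}/${COMPONENT}:test"

# Clone and build; guarded so the sketch is a no-op where git/podman
# are not installed.
if command -v git >/dev/null && command -v podman >/dev/null; then
  git clone "https://github.com/openshift/${COMPONENT}.git"
  ( cd "${COMPONENT}" &&
    podman build -t "${IMG}" . &&     # the repo's own Dockerfile drives the build
    podman push "${IMG}" )            # push to a registry the cluster can reach
fi

echo "component image: ${IMG}"
```

The same loop applies to the console, oauth, or any other component; only the repo name and build target change.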
So these are some hacking guides for the components that my team directly works on. The components don't really matter, but the first one is for our new operator that manages the cluster cloud controller managers, and the second one is for the Machine API. The first one has our instructions for replacing a component inside of a release image. I know John's going to demo that and everything, but there's a little pre-canned text for you if you want something to reference alongside what John's doing. And the Machine API Operator one is all about hacking on the Machine API. That might give you a little insight into how we talk about contributor guides and what we tell other people, predominantly Red Hatters, who are contributing these things, but I figured those might be helpful as inspiration for what you're doing. I appreciate you sharing. Fantastic. Well, as you noted, there are like 600 repos in there, and this stuff is buried deep in each one of them. It's like the Fremen on Arrakis or something: no one knows where all the water caches are; you have to go out and look. Now, another thing we've talked about but haven't done yet, and I brought this up early in my involvement with OKD, is that we need a list of who has what resources for testing and building. So I'll add that to the agenda for the docs group for the next meeting: just get a sense of the membership of the OKD working group and the surrounding community, and what people can contribute in terms of infrastructure for testing, building, etc. Like, I've got vSphere access; someone might have Alibaba; someone might have whatever. The more we can test across platforms and do builds on different infrastructure,
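The "replace a component inside of a release image" workflow those hacking guides cover can be sketched roughly as follows; every pullspec below is a hypothetical placeholder (pick a real release tag from the OKD releases page), and the override key must match the image's name inside the payload, which `oc adm release info` lists.

```shell
# Hedged sketch: swap one component image into an existing release payload.

FROM_RELEASE="quay.io/openshift/okd:4.x-example-tag"        # placeholder tag
MY_IMAGE="quay.io/yourname/machine-config-operator:test"    # image built earlier
TO_IMAGE="quay.io/yourname/okd-release:custom"              # where the new payload goes
OVERRIDE="machine-config-operator=${MY_IMAGE}"              # name=pullspec form

# Requires the oc CLI and push access to ${TO_IMAGE}; guarded so the
# sketch is a no-op where oc is not installed.
if command -v oc >/dev/null; then
  oc adm release info "${FROM_RELEASE}"     # list the images the payload carries
  oc adm release new \
    --from-release="${FROM_RELEASE}" \
    --to-image="${TO_IMAGE}" \
    "${OVERRIDE}"
fi

echo "override: ${OVERRIDE}"
```

A cluster can then be installed from the custom payload, for example by pointing the installer's release-image override at `${TO_IMAGE}`; the exact mechanism is documented in the hacking guides linked above.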
I think the stronger we'll be as contributors. Right. If someone wanted to donate AWS resources, I'd be happy to test on AWS, but I can't pay for it out of my own pocket. Right. And here's the thing: I did run a basic OKD on AWS, and it ran about $340 a month. It was not cheap, because you figure you've got your control-plane nodes and your worker nodes, so it can be kind of expensive. We might be able to come up with some way where a couple of companies contribute every month, or we develop something; I'm actually involved in a lot of community groups that take contributions to fund various things. Well, for this, it wouldn't be running all the time; it would be more like, okay, can we build and test? Right, do the basic work: test a particular issue, fix that issue, and then turn it over to Prow for regular testing. Yeah, exactly. It doesn't always have to be testing on those specific clouds. If it's an AWS problem, sure, you're not going to solve it without testing on AWS. But think about what's going on with the Operate First community that's coming out of the Cloud Native Computing Foundation, and Red Hat has a bunch of inroads with it: they're deploying OpenStack instances that are community-operated. I've talked about this with the Operate First people, and there's the notion that the OKD community could link up with Operate First and use something like their public OpenStack to run CI infrastructure that deploys OpenShift using the OpenStack provider, for one. But then you could also do vSphere installations on top of OpenStack, assuming you could handle the nested virtualization. You can see what I'm getting at: it doesn't always have to be AWS, GCP, Alibaba, whatever.
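That $340-a-month ballpark is easy to sanity-check with a back-of-envelope calculation; the node counts reflect a standard three-control-plane, three-worker cluster, but the hourly rates below are hypothetical placeholders, not real AWS pricing.

```shell
# Hedged sketch: rough monthly compute cost for a minimal OKD cluster.
# Rates are illustrative assumptions only.

CONTROL_NODES=3; CONTROL_RATE=0.085   # hypothetical on-demand $/hour per node
WORKER_NODES=3;  WORKER_RATE=0.070    # hypothetical on-demand $/hour per node
HOURS=730                             # average hours in a month

TOTAL=$(awk -v cn="$CONTROL_NODES" -v cr="$CONTROL_RATE" \
            -v wn="$WORKER_NODES"  -v wr="$WORKER_RATE"  -v h="$HOURS" \
            'BEGIN { printf "%.0f", (cn * cr + wn * wr) * h }')

echo "~\$${TOTAL}/month for compute, before storage, load balancers, and egress"
```

With these illustrative rates the compute alone lands near John's figure, and real bills add EBS volumes, load balancers, and data egress on top.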
As a community, we might decide to use OpenStack because it's also a community project that we have good coverage on, and we have people who can operate it. We link up with the Operate First community, figure out a way to have administrators who can maintain our infrastructure, and then OKD could actually be doing its own testing and CI inside a public cloud. I think that really gets to the crunchy-granola side of open source: collaborating with others and using shared infrastructure to do these things. For me, that would be an ultimate kind of milestone to reach. It would be tough to do, but very cool. Well, the only reason I say AWS and GCP is that those are the stopping points right now for future releases, because that's where we're having the issues. Gotcha. Okay. Sorry about that. Oh, no, no, what you said makes perfect sense. Yeah. We might be able to do AWS; I'd have to map it out, but anyway, it's possible. Right. And I think if we got the community together and everyone chipped in, say, $5 a month, you never know. I'm serious: there's a myriad of ways we could fund this or get things donated, so let's not limit ourselves. For example, let's say John works for a company that's using OKD, or owns a company that uses OKD, and they happen to be using AWS and they're hitting this issue. Then it's like, okay, you fix the issue on your own deployment, then take it back and propose the code change upstream, and at that point we're using the community's infrastructure or Red Hat's infrastructure.
Then, as long as the tests pass in CI at that point, I wouldn't necessarily need the OKD community to have its own AWS infrastructure. That may be the way it works in the end. Yeah. And actually, the organization I work for does have OKD on AWS and OKD on vSphere. But we need to document who has what and who has access to what, so we know what we have. I think that's the first step. All right, I want to be mindful of people's time; it's three minutes after the hour. Thank you so much for this awesome discussion. We'll pick it up at the next meeting. There'll be a lot of stuff happening in between, so please check the mailing list. Oh, Matrix: I was able to get into Matrix, and other folks have as well. They fixed an issue where the room wasn't visible; now it's visible. If you're on the Matrix server, you can now find the room. Anyone who hasn't gotten into it, just shoot me a message and I'll help you get in, because it seems like we might be having some conversation there. Any last-minute thoughts before I end the meeting? I'm good. Thank you. Awesome. Well, thanks for the excellent discussion, folks. This was great. Look for the video and look for discussion in between.