All right, don't forget to put your name in the attendees section of the meeting notes so that we know you were here. Keeping attendance helps us know who was available for particular conversations, and if there are votes on things, we like to be sure everybody's been included. All right, are we good on the agenda? Is there anything folks want to add? We're good. All right, not hearing anything. Christian, go ahead with release updates.

Unfortunately, I don't really have a lot. Vadim is out currently. Once he's back, we will start to write a standard operating procedure for creating OKD releases so we can better disseminate that knowledge and have more people help out with it. I'm not sure how much the community can do in that regard, since those things require some permissions within our OpenShift org, but disseminating that knowledge is definitely going to be a good thing. So for now, no updates on the usual OKD releases.

And for our ARM releases, we're still working on it; unfortunately we're still blocked internally by an issue. Once we have resolved that issue and we actually have CI builds for all of OpenShift, then we can also mirror out and tag OKD releases. That is bound together internally with the CI work for the ARM platform, and it will enable us to also do OKD releases for that platform. We do now have, and I think Timothy has already mentioned this in the FCOS updates section, Fedora CoreOS uploaded to AWS, so the AMIs are now available, I think, in all the regions on AWS. That was one of the main missing pieces, because we need the installer to reference that AMI. So now we have those AMIs, and the only part missing is actually having the component images on our prod CI system. I'm hoping that's going to happen this week, but it shouldn't be much more; we're really on the verge here.

Once you get that piece into the prod system, could we do a very short blog announcing it somewhere? Oh, absolutely. I'm also going to prepare a presentation on the whole OpenShift-on-ARM effort, and maybe, if we get to doing the OKD releases right away, that will be focused on OKD too. That is definitely my plan. If you're going to do a presentation on OKD on ARM, we could do that as a briefing or an AMA too and broadcast it out. And did I invite you to the office hours at KubeCon? I think you're in there. So that update could be part of the spiel there; it would be great to include it. Thank you for all that work. You're welcome. Thank you.

Next up, FCOS updates with Timothy. Hey, this is Timothy, I'm here from the CoreOS team at Red Hat with FCOS updates. I have three items today, and most of them are forthcoming things. The first one is kind of a warning, but shouldn't be a big one. We are trying to move away from the legacy iptables backend in Fedora CoreOS to the nftables-based iptables backend. This is something that was done a while ago in Fedora, but due to various bugs it hasn't happened in Fedora CoreOS yet. We want that to happen at approximately the same time as the Fedora 35 rebase, so it's coming in a couple of weeks.
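(As a side note for anyone who wants to check which backend a given node is currently using: recent iptables releases report the backend in their `--version` output. A minimal sketch, assuming the `iptables` binary is on the path:)

```python
import subprocess

def iptables_backend() -> str:
    """Return which iptables backend is active, based on --version output.

    On recent Fedora releases the binary reports its backend in parentheses,
    e.g. "iptables v1.8.7 (nf_tables)" or "iptables v1.8.7 (legacy)".
    """
    out = subprocess.run(
        ["iptables", "--version"], capture_output=True, text=True, check=True
    ).stdout
    if "nf_tables" in out:
        return "nf_tables"
    if "legacy" in out:
        return "legacy"
    return "unknown"

if __name__ == "__main__":
    print(f"iptables backend: {iptables_backend()}")
```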
And we'll try to do that for new nodes first and then for existing nodes, because, while we don't really expect any issues here, we still want to give folks some time to try things and make sure that nothing breaks. The short version is that the difference between the two backends is essentially that you're using different paths in the kernel, but it should be fully compatible, so there should be no breaking changes.

The second item is about the Fedora CoreOS test day that we are putting on, I think next week, something like that. We'll prepare and suggest a few tests that people can do to make sure that Fedora CoreOS works well on their platform. This is only indirectly related to OKD, but if you want to make sure that Fedora CoreOS works well on your platform, that will certainly help OKD work well on your platform. Maybe the link on the agenda is wrong; it's pointing to the same tracker as the iptables item. True. Oops, I did a bad copy-paste; it should be the right one now. And the next stream, which will be Fedora 35 based, will be the most interesting from a testing perspective; the other streams don't have that much change right now.

The third point is around aarch64, which is coming. We have builds right now, they're ready, and we are enabling them on the download page, so you should be able to try out Fedora CoreOS there in an easier way. And this is just the links, right? The builds already exist, they're just not linked yet. Yeah, it's just official links to have that officially on the download page. You can figure out the links from the builds browser and everything, but we don't necessarily officially publish all the release builds that we do. So if you go by the raw list of builds, you are potentially using builds that we don't consider valid. The links from the download page are fully vetted and should be good for usage.

I have two questions for you. The first one, I think, has already been answered by Jamie: the latest 4.8 and the stable 4.7 are to be used for testing Fedora CoreOS 35 on the test day, right? There's no plan to test 4.9 on top of that. I don't know if anyone's been testing 4.9 at this point. Well, I gave it a try this morning. And the reason for not testing 4.9 yet is that it hasn't actually been released, so it's not a stable version of the OpenShift code base yet. That is supposed to happen very soon. But since I think we are still not able to upgrade from 4.7 to 4.8, our stable stream is still stuck on 4.7 for now. We should definitely test the latest 4.8, though, because once that upgrade path is figured out, we will be updating stable to 4.8, and then subsequently, once 4.9 is released, to 4.9 as well. For now, 4.8 is still the most current stable version, even though it's not yet in the stable stream, because we're missing the upgrade path. It should be installable on a fresh install. Yeah, and for that, I would just take the latest 4.8, obviously. Just wondering, because on the OKD virtualization SIG side, we are running on bare metal UPI for now, which is kind of complicated in its own ways, and we are trying to use the latest version in order to provide feedback before it gets released. I mean, yeah, if you're using experimental features, you can probably test the 4.9 builds as well.
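(A side note on Timothy's point about preferring the official download page over the raw builds list: Fedora CoreOS also publishes machine-readable stream metadata, which is what backs the download page. A rough sketch of pulling it for the next stream; the URL is the one the project publishes, but the JSON structure here is sketched from memory, so check the stream metadata docs for the authoritative schema:)

```python
import json
import urllib.request

# Stream metadata published by the Fedora CoreOS project; the same data
# backs the official download page. Swap "next" for "testing" or "stable".
STREAM_URL = "https://builds.coreos.fedoraproject.org/streams/next.json"

with urllib.request.urlopen(STREAM_URL) as resp:
    stream = json.load(resp)

# Structure assumed here: architectures -> <arch> -> artifacts -> metal -> release.
for arch, data in stream.get("architectures", {}).items():
    release = data.get("artifacts", {}).get("metal", {}).get("release", "n/a")
    print(f"{arch}: metal artifacts at release {release}")
```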
The 4.9 builds should be installable. I wouldn't expect any really big problems, because the OpenShift Container Platform 4.9 release is not far out, so the builds by now should be pretty stable. We're not saying they're stable or usable, but for experimental testing, especially if you rely on features that have only been added in 4.9, feel free to use those as well. Okay, thanks.

The second question that I have for Timothy is: where should we report bugs, to the Fedora Bugzilla or to the Fedora CoreOS GitHub? Bugs in Fedora CoreOS are best reported in the Fedora CoreOS tracker, so the best way to report them is to file an issue on GitHub. I'll put the link in the notes. Yep, there, and that is the link in the notes. Excellent. Any other questions or comments for Timothy? All right, excellent. Let's move on to the next item, which is doc updates. Take it away, Brian.

Okay, so we've got the beta site up and running, and I've ported all the content across. There's still some work on the content to be done because, for example, the FAQ that's in the repo hasn't been updated for quite a while. So I'm not saying the documentation is finished, but we're now in a state where at least everything that's on the current site is in the new beta site. So it's really a question of when we want to switch it live. As far as I'm concerned, we're good to go now. There have been a couple of questions asked this week. One of them is: why are we in the openshift-cs org, not openshift? And is this a problem? If so, should we be looking to move before we switch it live, because it's all going to be linked in with the GitHub Pages? Another thing I wanted to ask is about the code of conduct. We don't really have a code of conduct, but most open source sites do. So a suggestion that I've put in there is: should we just point at the CNCF code of conduct?

Hold on, let's do one at a time for our listeners here. Diane, if you could give us the elevator pitch version of the history of the repo, we'll start with that. All right. I don't know, Brian, if you saw the email where I responded to that thread with the short history of why the repos are the way they are. I don't think it went out to the whole main list; I think it was just to a set of individuals, so I apologize for that. Basically, when we started the open source side of OpenShift, it was called Origin, if anyone remembers that. And the repos where the code lives are still all named origin, which makes things wonderfully confusing. We didn't change the name at the time for multiple reasons. One is we had lots of end users who had written scripts and things that we didn't want to break during the 3.x era. The other is that we didn't really have the resources to change even our own internal processes. So we kept that, and you'll still see all of our open source code in the Origin repo. And the other point, and I think Christian touched on this earlier in the call, is that getting permission from the engineering team to edit and create a landing page under the OpenShift organization was not gonna happen for external users, or even for myself to get those kinds of privileges and permissions.
So we created the OKD repo. The reason it's in openshift-cs, which stands for customer success, is that the folks who helped us build it owned that repo on GitHub. It's not an optimal place for us to live, as the KubeVirt folks pointed out; I think that was the kicker, the OKD virt folks were the ones asking the question. It's not a perfect world, but now, as we're moving to your MkDocs version, Brian, and creating that in GitHub, once we get that done, I think it's a very good time to revisit. A github.com/okd repo was what I suggested in the email thread, and that would be much more open source politically correct than openshift-cs, especially as we move away from needing the resources we needed from the customer success team, which has been renamed five times since then, but it still is the -cs repo. So it was a good question, and I just wanted to give the background on why it's there. It may sound lame, but that's the history. I would love to move it to OKD, but putting it underneath the OpenShift org just begs the question of the multiple permissions we'd need to get from the engineering teams and everybody else, and that would just slow down the process and make it less open, shall we say. I'm seeing everybody plus-one the idea of creating okd.org; I would love to do that. We'll have to run it by the engineering folks, so maybe, Christian, we'll do that in one of the next engineering team meetings we have.

So that's that part of it. The only change that's gonna make is when we do the DNS redirect to the GitHub Pages. I don't think it'll be a problem bouncing to the openshift-cs GitHub Pages and then later bouncing to an okd.org. Yeah, I would love to see this happen. I don't see anybody having any objections to it. And especially the folks who used to be on the customer success team would love to get this out of their repo and move on to their new jobs, because they've all been promoted at least three times since they were customer success people. So that's the story behind the repos. It came out of the conversation about the virt folks creating their own OKD virt repo; I think Jamie was doing some outreach to them to see if we could get them to move under customer success, and that's what brought up the topic. Well, to be clear, to get them under OKD, wherever that may be, right? Yes.

Okay, and then the next one was the code of conduct question. Yeah, so the code of conduct: we can't really point to the CNCF one. I had pulled the one from Ansible, and I have been lax, but I was going to post it as a discussion for this group to look at the Ansible one and see if we could use it, because I know that's been vetted by Red Hat Legal multiple times and it seemed pretty robust and to the point. So I will do that as an issue and let the group review it, and then we can pull it in, Brian, to the landing pages that you're creating, if that works. That way, at our next docs meeting we can look at it and edit it as we need to, and then at the next working group meeting we can approve or debate the finer points of it. Okay.
Does that work for everybody? And if you want to take a look over at the Ansible docs, there's a whole page there that I was going to lift and shift and do a cut and replace. Okay.

And then just the final one: in the footer, we have some social media links which point to various places to do with OpenShift. In the agenda, Jamie noted the Twitter link, but there's also a Facebook link as well. Are they just going to the OpenShift Facebook page and the OpenShift Twitter? There is no OKD Twitter handle or Facebook page that I am aware of. Yeah, the Facebook one goes to facebook.com/openshift. Or is that a different one? No, that's just the regular Red Hat OpenShift. And the Twitter is going to twitter.com/openshift.

So the reason I threw this on here is, yeah, the documentation group, which I'm almost thinking should be the communications group, since it seems like we talk about more than just documentation, but communications in general. The idea came up again of having a Twitter account, because so much communication is done in terms of announcements of new releases and updates and bugs and whatever. This has been revisited a couple of times, once before I was in the group and once, I think, just after I joined. What do people think about registering a Twitter handle with OKD-something? Because obviously, I think OKD is already taken; we had this discussion. Yeah. You could ask nicely to get the OKD name back. You could ask. Or use OKDproject. Yeah.

Diane, you seem to have some hesitancy about a Twitter account in general; let us know your thoughts. Yeah, so my thoughts are: it's a pain in the arse to manage. And if we have announcements, the OpenShift Twitter handle would reach a much wider audience for us. So if we wanted to start doing announcements of OKD releases, I'm sure I could get them added to the OpenShift one, and there is a person who manages that account, watches it, and responds to stuff on it. I'm hesitating because I already have OpenShift Commons, which I manage, and it is often quite silent. But we could talk about this, maybe in the docs slash communications working group. Yeah.

I'm wondering if I created that OKD GitHub repo; I'll just have to check and see if it was me that opened it. For people who are watching this and wondering about that non sequitur: someone posted a link to github.com/okd in the chat. So, Mike, you've got your hand up, go for it. Yeah, I was just thinking about the whole Twitter thing and communications in general. I'm kind of curious what the Fedora community does around this. I would expect that if we were gonna have an OKD-specific communications channel, we'd wanna have a little process behind it before we just start blasting stuff out. So I'm curious whether Fedora does something similar with the way they roll out their announcements and their official communication channels. So Fedora has the mainline Fedora Twitter handle, but for many of the product variants, or SIG variants or whatever, there are specific Twitter handles as well. It is actually up to the working group slash SIG to elect to have one.
So, for example, CoreOS has one, Silverblue has one, and a number of other variants do as well. KDE and Kinoite don't yet; they may or may not, we haven't decided. So it is up to them, and there's an informal policy that things coming from specific subteam Twitter accounts get retweeted by the main one to amplify that reach. But there is also some complexity in terms of making sure people are granted access to those Twitter handles while Fedora retains control of all of them. So the main problem is delegating permissions, and I don't think we actually have a setup for that here within the OpenShift team.

Diane? Oh, sorry, I didn't mean to cut anyone off. Was that John? No, that was me. I was just going to say thanks, Neil, I really appreciate that guidance. I guess my preference would be to see some of that governance and process set up first before we start doing the other thing, but that's just my gut feeling.

Well, let me ask this, Diane: what if it's something that's not a release? What if it's the meeting videos getting posted and stuff like that? Would they be willing to post that as well, or does it have to meet a certain threshold of coolness? With my advanced social media skills, which are so advanced: what I think would be easier for the OpenShift Twitter handle managers, and for myself as the OpenShift Commons Twitter handle manager, is to have an OKD one created of some ilk, whether it's projectokd or okd.io or whatever we do to find an OKD handle on Twitter. If we tweet it and then ask for it to be retweeted, that's sort of what Neil described as the way to go about it. Though I do think it needs to be owned or managed by someone. And I'll have to look into the remark someone made in the chat about OKD not being trademarked for IT use; we'll have to ask Legal about that. But I think we will probably need one person to be a Red Hatter, and multiple people can have access to the username and password for the Twitter handle to manage it. My gut says it would have to be owned by Red Hat. But I think it's much easier if there's a sub-handle and we get others, like Red Hat Community and Red Hat in general, to re-broadcast for us when we have releases and stuff. And the videos are kind of just specific to us; they're not really big watch items for normal OpenShift folks. So I would say I'm not against creating a Twitter handle if we can agree on one.

Okay, well, let's talk about it in the docs group. I just wanted to bring it up because there is so much communication right now that happens via Twitter in terms of Kubernetes stuff and tech stuff in general, and it seems like we're missing out if we don't take advantage of the medium. Let's move on, and I'll add that as an item for the agenda of the docs meeting.

Let's now move on to, Jamie, yeah. Oh, let's vote, actually. Let's vote on this one. What is the process for actually deciding when we switch the site live? We take a vote on it right now. This is an official vote because it's something big. I will send something over the Google group as well, because that's actually in our bylaws. But for the people that are here, the proposal is on the table.
I'm making the proposal that we allow Brian to make the shift, and Diane and Brian to work on getting the DNS change to make the beta site go live ASAP. The motion is on the floor. Does anyone wanna second that? I'll second that. Okay, seconded by Bruce. Any further discussion? Yes, we are following Robert's Rules here. Any further discussion? All in favor, say aye or plus one. Aye, plus one. Okay, and anyone opposed? Anyone abstaining? All right, so let the record show that everyone on the call voted with a plus one. That's everybody. Okay, so go forth and move it at your earliest convenience, and I'll send something out. I'm assuming that no one is gonna override the 12 or whatever votes we have here when I post to the mailing list, so I would say just go ahead and do it, and we will note that an official vote was taken on this issue. So Brian, let's chat via email about this, and we'll add in Jerry Fala, because Will Gordon is off on paternity leave. And we'll get it switched over, hopefully in the next 48 or 72 hours or whatever it takes. Thank you so much, Brian, for all the work. Yeah, I echo that, that was some awesome work. Thank you. Does this unblock things? Yes, very much so.

And I wanna move on to issues. There's one that's causing some weird problems in a couple of different places. In the agenda I've got a link to it: issue 873. We've had a couple of issues opened that are duplicates of this overall issue. The changelog generation for the nightlies from CI is broken; you're not able to actually see the changelogs, and it ends up showing an error saying that it could not generate them. If you go to the repo, and this is Vadim's repo that CI is pulling from, all of the commits are gone after February 14th or something like that. The changelogs are all referencing commits; that's why they can't be generated, because they're referencing commits that don't exist in the repo anymore for some reason. Vadim is out until October, probably the first week, maybe the second. So we don't really have a way of fixing this at the moment. If folks could just be aware of it and the different ways this impacts users. Ideally we'll find some solution, where maybe a couple of folks get access to Vadim's repo, or, I don't know, we'll have to figure something out, because it seems like a really weak point in our process that CI is based off one person's repo, and if the repo goes south, there's nothing we can do about it if they happen to not be here.

If I can just add to that: we are working internally on pulling all that code back into the OpenShift code base, essentially unforking the machine config operator and upstreaming it into the master branch so we don't need Vadim's fork anymore. Yes, for the time being we cannot fix this until he's back, but in the future we will definitely pull that back into the OpenShift org and make all of those branches force-push protected, so these things cannot happen again. Excellent.

In terms of other issues, are there any issues that people wanted to highlight out of the ones submitted in the repo? Sandro submitted a whole bunch, actually. There's a lot there, something like five different items, which we'll have to take a look at because these were all filed early this morning.
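(A side note on the changelog issue above: the failure mode is release metadata pointing at commits that no longer exist after a force push. A rough sketch of how one might check whether a given commit SHA is still reachable via the public GitHub API; the repo owner and SHA below are placeholders, not the actual fork:)

```python
import urllib.error
import urllib.request

def commit_exists(owner: str, repo: str, sha: str) -> bool:
    """Return True if the commit is still present in the repository.

    GitHub answers 404 (or 422 for malformed SHAs) once a commit has been
    garbage-collected after a force push.
    """
    url = f"https://api.github.com/repos/{owner}/{repo}/commits/{sha}"
    req = urllib.request.Request(
        url, headers={"Accept": "application/vnd.github+json"}
    )
    try:
        with urllib.request.urlopen(req):
            return True
    except urllib.error.HTTPError as err:
        if err.code in (404, 422):
            return False
        raise

# Placeholder values, for illustration only:
print(commit_exists("some-user", "machine-config-operator", "0123abcd"))
```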
Yeah, so some of those issues may be related to the use of an HTTP proxy, which may have caused some of the trouble I reported; I'm still investigating them. A few others are probably related to SELinux issues that I also reported against the Fedora SELinux policy package. And a few others, I don't know, I just reported them because I have no clue what's going on there. It's kind of hard to understand where the installer gets stuck and why; it's not an easy task to find the right place to look.

Right, we're actually working on a document, and Vadim was helping with this, that helps people troubleshoot installer issues, or at least know better where to look. That's a work in progress, so this might be a good use case to help build that documentation. So expect, once we can comb through what you've submitted this morning, that we'll provide some feedback and then maybe use it to inform a document that can help folks with installer issues.

That brings up another point that I'll talk about a little later in the meeting: bare metal. We don't have a lot of bare metal testing, and we have a lot of people coming to us with bare metal issues when they're trying to actually use OKD out in the world. That's always hard because obviously there are 20 million configurations, but it might be worth looking at rounding up some folks willing to do bare metal testing. Just get the community to help us, so that we can see what's going on and not just be reactive but be proactive in terms of bare metal stuff.

Yeah, go ahead, Christian. So I've actually been working with the bare metal team internally to get support for bare metal IPI, installer-provisioned infrastructure. So far we've only had support for user-provisioned infrastructure, UPI. But all the pieces are now in place; all the code should now be there, essentially the ironic parts that are used to automate the installation for bare metal installs. What's still missing is actually building those images in CI and putting them into the OKD release payload. I have a card to work on that this sprint, so this should be happening very soon. Once those images are built, support should kind of just arrive with the next nightly build. For now, this is on master, so it will probably only land in 4.10; I don't see a good chance for us to backport it to 4.9. But just as a heads up, we should soon have bare metal IPI support on the 4.10 builds. You can check out the code in the ironic-image repository; there's also the ironic-ipa-downloader repository, but most of it is in the ironic-image repository, where there's an OKD-specific Dockerfile now. Once that is hooked up into CI and automatically built, we'll add it to the new OKD release payloads. Excellent.

Do they need that to start testing bare metal, though? We could test bare metal. No, this is completely separate from the UPI bare metal that we already have. Obviously, more testing on that is always welcome. Yeah, so it might be worth getting a communication out to the community saying, hey, if there are some folks who can test bare metal UPI for us, it would really help the OKD project, something like that. So communications and documentation can take that up. Anything else from the issues?
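(Following up on the release-payload point above: once the ironic images are built in CI, one way to confirm they made it into a given OKD release payload is `oc adm release info`, which lists every image tagged into a payload. A small sketch that shells out to `oc`; the pullspec below is a placeholder, not a real release image:)

```python
import subprocess

# Placeholder pullspec -- substitute a real OKD release image to check.
PAYLOAD = "quay.io/example/okd-release:4.10.0-0.okd-example"

# `oc adm release info` prints the list of component images in the payload.
out = subprocess.run(
    ["oc", "adm", "release", "info", PAYLOAD],
    capture_output=True, text=True, check=True,
).stdout

ironic_lines = [line for line in out.splitlines() if "ironic" in line]
print("\n".join(ironic_lines) if ironic_lines else "no ironic images in payload")
```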
Yeah, from the virt SIG you will get bare metal testing, because bare metal is basically the expected place to run OKD when you're running virtualization on top. So you will get a bit of bare metal testing from the SIG, once there are OKD virt SIG people to do stuff with it. It's fairly new, and I don't think we've even done anything yet to tell people that the SIG exists, because I think we just created it last week. It already has 50 followers on Twitter, so that's great. That was purely good luck. Yeah.

All right, so for folks that don't know, there was a conversation between myself, Sandro, and a few other folks about trying to make sure that we have a joined-up effort on this. So expect more info in the next couple of weeks on how we're all going to be united. It's going to be a subgroup, basically, of the OKD working group, so that we can share resources and whatnot. And some of these website changes, and possibly the new repo and stuff like that, are all going to help this, I think.

Any other issues? Oh, there's one that I'll address, actually. We've had a handful of AWS IPI single-node issues submitted. I'm actually setting up an AWS IPI CI job to test this regularly for 4.7 and 4.8, just because there were some mixups: it was supposed to work, then it wasn't working, et cetera. So if anyone can do other providers, if you can offer up some resources to try quick builds of single node on other providers, that would be awesome. Just reach out to the group if you're interested. Anything else for issues?

All right, moving now over to discussions. There was one discussion item; I'll have to put the link in the meeting notes, but 896 is an NVIDIA GPU question on OKD. This is like our third question in that regard, and in particular about the operator that's available. Has anyone had a chance to use the NVIDIA operator to do the NVIDIA GPU stuff with OKD? Not anyone on this call, I would doubt. There is another Diane, Diane Feddema, who's a Red Hatter who has tested it extensively. It is now being maintained by NVIDIA, as opposed to something created out of Red Hat, so there are contact people there. I could ask Diane Feddema to weigh in on it. Go ahead, Michael. Oh, I didn't want to interrupt you. I've used it a ton on OCP as well; I've done a lot of testing around it. Diane's using it a lot for workloads and whatnot, whereas I've done a lot of work with it in terms of the autoscaler and cloud infrastructure stuff. But I haven't tested it on OKD yet. I would imagine it works. The only difficult part is that, on a full OCP cluster, it needs build entitlements to do the driver build in there, and I don't think there's an equivalent on OKD. So I'm not sure if it will pull the proper packages to do the building that it needs to do. Right now, that operator looks to pull a couple of specific kernel header packages that are specific to the RHEL installation, you know, the RHEL CoreOS installation it's on. So I don't know what it would do on an OKD cluster; it would probably try to pull a package that doesn't exist or something. So you might be able to get up to the point where it's trying to build the driver, and then the driver build might fail there. That would be my suspicion, but I haven't tested on OKD yet. Can I ask you to respond to the folks that posted the issues? Yeah, sure, I can share what I know. Is there a link in the notes to that? Sorry. Yeah, there is.
Just put it in the notes. Yep, it's under the discussions section, and I put a link; it's discussion item 896. Yeah, OK, cool. I'll search for the other one.

If I recall, there have been a couple of OpenShift Commons briefings on it with people from NVIDIA, and there was a Red Hatter who was key to creating it; I can't think of his name off the top of my head. But yeah, it would be lovely to get it working. I think it's pretty finicky, though, if I recall. It is so, so, so finicky that it is not even funny. Supposedly, though, NVIDIA is going to be making this better in the future. Right now, the way they compile the driver, it must have the kernel headers for the exact kernel version that is running on the node it's compiling for. Supposedly, in the future, NVIDIA is going to get better about creating dynamic kernel modules, so it will be able to install into a range of kernels, is what I understand. But that work is still ongoing. Actually, they've already done it; they've got kmod tracking packages. However, the Red Hat team that created the operator didn't incorporate any of the work that NVIDIA already did to use kABI-tracking kmods. So the NVIDIA team doesn't know how to do this, and they're stuck. OK, so that's what the speed bump is. Yeah, so somebody between Red Hat and NVIDIA needs to work to get them to start using the stuff that NVIDIA already created for regular RHEL, to do this for the RHEL CoreOS OpenShift case. It, however, will not work on OKD, and so for OKD we're going to need to be able to detect and do the right thing and stuff like that. Well, that sounds like a great thing to do. I think what we can do, then, is find out what those things are specifically and push up some changes or submit some issues to get us there, right? All right, so let's have Mike respond, and then I'll put this on the agenda for the next meeting as well, and we'll see where we are with that.

OK, Sandro, go ahead and take it away: the virtualization SIG. We almost covered the whole section in the previous discussion, so there's not much left. Just raising awareness of the existence of a kind of prototype of the SIG website, which should be included in the upcoming new OKD website when we have a place to squeeze it in. And there are the Twitter and Reddit handles, so if you want to discuss stuff related to virtualization on top of OKD, you have a place to discuss it. Last one: we pushed up kind of a big PR to the OpenShift docs for OKD, including all the parts related to virtualization. It's being reviewed right now by the OpenShift virtualization team; hopefully it will get in soon. So that's it from the SIG side.

Maybe for folks that are going to be watching this video, can you give a quick 30-second elevator explanation of this change that's coming and why you're testing it, in terms of virtualization changes on OpenShift and OKD? OpenShift virtualization has been around for a while, and it's getting traction in different ways. The thing that we are seeing is that it is not getting that traction in the upstream community, and we want to help the upstream community start looking at it, and enjoy it.

Yeah, so my thing about this, when it comes to KubeVirt and, to a lesser extent, OpenShift virtualization, is that, to be blunt, it's terrible to use if you want virtualization. If you want to use virtualization even in the OpenStack-style way, the Kubernetes API exposes too many details.
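(To make Michael's point about the driver build concrete: the operator's build step needs kernel header/devel packages whose version exactly matches the running kernel, and the package set on Fedora CoreOS differs from RHEL CoreOS. A rough sketch of the check that tends to fail, assuming a dnf-based build environment; package names are illustrative:)

```python
import subprocess

def running_kernel() -> str:
    # e.g. "5.14.10-300.fc35.x86_64"
    return subprocess.run(
        ["uname", "-r"], capture_output=True, text=True, check=True
    ).stdout.strip()

def headers_available(kernel: str) -> bool:
    """Check whether a kernel-devel package matching the running kernel
    is visible in the configured repos.

    The NVIDIA driver build needs headers for exactly this version; if the
    repos have already moved on to a newer kernel, the build fails.
    """
    out = subprocess.run(
        ["dnf", "repoquery", "--quiet", f"kernel-devel-{kernel}"],
        capture_output=True, text=True,
    ).stdout
    return bool(out.strip())

kernel = running_kernel()
print(f"running kernel: {kernel}")
print(f"matching kernel-devel available: {headers_available(kernel)}")
```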
And the UIs that are available for KubeBird all are absolutely horrific, with the exception of Harvester from Rancher, which I think is somewhat promising. There doesn't seem to be anybody trying to make a KubeBird front end that actually is appealing for people who need virtualization to be able to use and manage it. I mean, if I'm being somewhat optimistic, hopeful, reaming, or whatever, I would love to see the overt UI pulled right out and layered on top of KubeBird, because the UX there is really, really nice. And it's a shame that it just is sitting there not being used. Yeah, just a quick tip about this. There is a KubeBird provider that you can use for managing the OKD virtualization VMs from the overt web UI. So it already exists. It's already there. So yeah, that's another good point. And yeah, this kind of discussion about the user experience of the whole thing is one of the things that we like to see discussed in the virtual thing. So you're welcome to start the discussion there and we'll follow up. Excellent. All right. So we have eight minutes left. I want to make sure we get everything in. In terms of new business, I'm going to, I struck out, crossed out, location of main repo, because we sort of touched on that. And it sounds like a discussion for maybe in a month once the website stuff has settled. CRC subgroup. Neil, you had originally voiced interest in sort of leading the charge on some of that. We do need someone to really get the subgroup going because Charo is still sort of doing the builds, which means it's sort of at his whim and time availability. Are you and is it Dan that was interested? Are you both still interested in leading the CRC subgroup? Yeah, I think we are. It's we just haven't had any time at the moment to start that work up. But we've talked about it, about how we want to approach this problem. We probably need to sync up with Charo at some point and just like have a one-on-one conversation about it and then proceed from there. Because one of the goals Dan and I agree on is that we want to make, we don't want to make the automation for producing. We want to ultimately automate this. And so what we want to do is make sure that this can just be straight up run from within Fedora infrastructure on one of the CICD platforms that exist within Fedora infrastructure and then be able to provide a way for people to easily take that automation and use it for their own internal deployments as well if they need to have customized deployments for their own use. Because like almost everything else around OpenShift, nobody really understands how to assemble anything. And I'd really like to not make that worse with CRC. And so demystifying just a little bit of it and making it approachable. And also automated means that we don't really have to worry so much about whether a person is doing it or not. So do you want help arranging a time to meet with Charo? Or you want to just reach out to him via the Slack and try and find something that works for all of you? Do you want to make an official first meeting? If you can help arrange that first meeting thing, that'd be great, especially since I think Dan hasn't met. Charo at all. And so it'd probably just be good to get that initial introduction out of the way. And then we can take it from there. It's Dan Axelrod. Correct? Yep. And Diane, if you could CC me on that invite, I'd be interested in attending if I have the time. I'll do that. 
I'm going to try and set it up for next week, right after the docs meeting, if that works for people. Let me check real quick. I'll tell you what, we'll do the scheduling offline. Just a thought: Dan's on PTO right now, and I don't actually remember what day he comes back; I think it's either the end of this week or the middle of next week. I'll sync up with you afterwards and let you know. Yeah, I'll just create a little Slack thread with the four of us, the five of us now, and see if we can't come up with a time. Some other folks had voiced interest too; I think Mike said he was interested in following along with what's happening, and anyone else that's interested is welcome. Dreeti, in fact, also said that she was interested.

Next up, I'm going to strike out "bare metal CI slash testing group" because it sounds like we've got interest in that and we can pull people together. And so next up is the office hours, which is Wednesday the 13th at 5 PM Eastern, during KubeCon. Diane, do you want to say anything about that? So we are going to rinse and repeat the format that we used before, and I think an hour is how long we've done it. Again, it will be live streamed. I've tapped a bunch of people already to come on and do it, but it would be great if we had a few more external people; Jamie is, I think, our one external person at the moment, so if someone else wants to join, please do. And I was thinking of you, John Fortin, since you're giving a talk there; I'll send you the time and the details. And if we can get Christian to do a little spiel on what's going on with ARM, that would be great. We have a slide deck that we use that needs a slight bit of tweaking, which I will tap Jamie to do and to lead, and I will try to keep my mouth shut as much as possible so that everybody else can shine. It's a simple, great way to do outreach to the Kubernetes community. And Timothy, thank you for agreeing to stay up late and answer Fedora CoreOS questions. There is a limitation, though, because it's not the same as BlueJeans: you can't have as many people as you want. We're using the CNCF's platform for it, so you have to get what's called a booth pass to KubeCon virtual to get in, which makes it a little more complicated than usual. If you are interested in sharing and being part of that, I think we're limited to five people in that window. I'll have to count, but I think we might have hit that already; I was hoping to get one more external person, so let's see what we can do. If anyone's interested, reach out to Diane or myself and we will see what we can do and how we can arrange it.

In the last minute or so, I wanted to point out that I'm starting a task list of the tasks we have assigned to people. It's going to be at the end of the meeting notes, and I'll fill it out after watching the recording. Then I'll send an email out to the working group with the task list, probably in the week between meetings, so next week, Monday or Tuesday, you'll see an email with the task list, so that people are aware of their tasks and can keep track of them. And we can keep track of you keeping track of your tasks, and make sure that stuff gets done and doesn't fall through the cracks. So I think that's about it.
Any last-minute thoughts, needs, questions, comments, concerns? And as always, if anyone changes their mind and is going to end up in person at KubeCon North America, let me know. Elmiko, of course. Yeah. Oh yeah, Neil said something that made me remember this, and I'll just share a little bit of news from inside the hat that this community might find interesting. We're working on a series of documentation right now that we're gonna make a public Git repo, eventually rendered to a site, that's basically instructions for adding new infrastructure providers to the OpenShift platform: all the code changes you'd need to make, all the different repositories you'd need to touch, and how that integration process would happen. We had talked about doing this as an OKD-first kind of thing, but we're focusing on OCP right now because we have providers who are interested in getting in. So hopefully, when it's done, in maybe a couple of months or something, it'll be a way to show off more of how the sauce is made, or whatever, for this community. Thank you.

All right, we are at time, so let's call it here. Thanks, everyone, for your participation. Look forward to the video, the meeting notes, and the task list coming up, and talk to you all soon. Thanks, everybody.