Hey, Sandro. Charo. It's been a while. Indeed, it has. I can only stay for a little bit, though, and then I have to go to another meeting. Aw. Does it make sense then to do the CRC stuff earlier in the meeting? No, actually, I'd prefer for you to save it till after I leave. Just for that, it'll be at the beginning. Right, exactly. Let me take a look at it. But Vadim has my update for me. It's about halfway through, but basically the short of it is there are people who are ready to take it on. There are like five different people interested in forming a little subgroup. That's fantastic. Let's get started with the meeting. So Vadim, go ahead and give us the latest release updates. There we go. Much better. So there hasn't been much happening in our releases, mostly because we're blocked in multiple ways. The most pressing issue was that Fedora CoreOS has regressed in stable and testing-devel. There was a bug filed for Fedora CoreOS streams related to NetworkManager and some weird systemd setting which prevents it from booting when we pass the master Ignition config. The fix for this is to revert back to the older Fedora CoreOS stable, the one from May or from June. I have updated all the nightlies for 4.8, for 4.9, and for 4.7, and on CI 4.7 still fails on this. So we would need manual confirmation before we cut a new stable release, that it will be working, or we need to look into something more. This is one of the reasons why we didn't cut the release two weeks ago, so that we would make sure. Other news is that Kubernetes will be rebased in the 4.8 and 4.7 branches. We're waiting for the final tags to be added to the PRs, and hopefully it will make it into the nightlies and we'll cut a new 4.7 release based on 1.20.10, I think. The largest problem is probably that we still don't have a safe upgrade path from 4.7 to 4.8.
First of all, there are multiple issues and regressions in CRI-O which are supposed to be fixed, and we'll get them in the new release; they are affecting OCP as well. And in our case, we have a bug with OVN where the applications are still getting disrupted more than we expect them to. This is why the CI cannot create an upgrade path, and we have no manual way to override this. So we would have to wait, or come up with an actual manual way and a known issue that your applications might be disrupted about 10% of the time, although our target is less than 1%. There has been progress on bare metal IPI. We created an OKD-specific version of ironic-ipa-downloader, I think, and we had a great chat with the bare metal team, and I think we made progress on the actual ironic image. So we would need to land a pull request and a promotion PR in the release repo, and we should have all the bits and pieces necessary for the nightlies. Once we confirm that all of these are ready, we can port them back to 4.8 and 4.9. And another interesting issue with Fedora was that they have changed the SELinux policy rules, which led to too many tests failing. We have a pull request to actually fix that, but until then we cannot upgrade SELinux-targeted packages in our Fedora CoreOS, which leads to an inability to update glibc, which leads to an inability to update a bunch of packages. One of them is NetworkManager 1.32. So at this point we have to revert back to NetworkManager 1.30, which has an issue with reverse pointer resolution. So that's an unfortunate casualty while we figure out things with upstream, how to land the fixes and backport them to 4.7. That's something I will write a known issue about when we cut a new stable release. I think I mentioned that we have a pull request upstream; Mustafa is looking into this. And I'm hoping it will land fairly soon, because generally folks agree on the approach and it's a matter of backporting the patches.
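If you want to check whether one of your nodes is carrying the affected NetworkManager build, a minimal sketch, assuming `oc` access to the cluster (the node name here is hypothetical):

```shell
# Hypothetical node name; assumes cluster-admin access with `oc`.
# Query the NetworkManager package version on the node's host OS:
oc debug node/worker-0 -- chroot /host rpm -q NetworkManager
# Per the discussion above, 1.32.x is the blocked build and 1.30.x is
# the revert with the reverse-DNS issue.
```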
And I believe that's all I have for today. Excellent. Any questions on what Vadim covered there? Anyone have any questions or comments? Looks like we're good on that. Thank you, Vadim. Moving on to Timothy with the Fedora CoreOS updates. Actually, sorry, I have a small question. Just that on the various releases, under the "upgrades from" section, there are sometimes some individual letters, you know, F or S, which presumably means something failed or succeeded. And occasionally there are four of them, sometimes there are three of them, and I haven't been able to find anywhere what those actually mean. Well, yes, F means failed, S means succeeded, P is pending, for the exact job. So I have to start them manually. I think I covered this previously, but I can get more into detail. So the way we create an upgrade path is actually to make CI run an upgrade from one release to the other. There is some automation which runs upgrades from the previous nightly to the latest nightly, but there is no automation which runs the same sort of jobs on stable branches. So I have to pick these myself. Usually I'm guided by the latest two releases and the most popular 4.6 from Telemetry. But if you need specific ones, just ping me; I can run them, I can start them. Now, thank you, Vadim. With three of them, I originally guessed it was like AWS or, you know, something like that, but individual jobs that I can probably go look at. Like, in that case, the changelog makes a bit more sense. So thank you. Yeah, I'm trying to split them between AWS and GCP to cover all the platforms. We probably could try a vSphere upgrade, but the pass rate would be very low. Again, all that is done manually; at least, I invoke those jobs, and you cannot affect them once they've started. So if you need a specific edge, or to try out a specific platform, sure, it can be done. Thank you. Any other questions or comments? All right, let's move on then to the Fedora CoreOS updates.
Hi, I'm from the CoreOS team at Red Hat, working on Fedora CoreOS mostly. So I've linked several issues. One of the big things we fixed recently is around UEFI booting, especially for the live ISO. If you've had trouble in the past booting the live ISO, it should now be fixed in Fedora CoreOS. The exact versions are linked in the ticket. I don't know how that impacts OKD directly, but probably for the first boot of a node it will have an impact. And we still have one in progress: booting via firmware configuration. If you had issues in the past when installing nodes with Fedora CoreOS, this is probably something you've hit before. If you want to take a look and chime in, if you have anything to add to the discussion. Someone asked what the Fedora CoreOS live ISOs are, sort of thing. So essentially the live ISOs are live versions of Fedora CoreOS — a fully live Fedora CoreOS system running from an ISO. It's just booted from an ISO, be it a real disk, or if you write the ISO onto a USB stick or whatever, it boots up from that. Yeah, so that's one of the issues we fixed recently, and one in progress. Another thing in progress that is coming up is the per-platform console default changes. The goal is, we're working on making it possible to have a different console configuration per platform. That will make sure that on AWS you get the version of the console that you need on AWS, and on bare metal you get the version of the console that works best on bare metal. And for each different platform we have different defaults. This is work in progress, so it's not there yet. But if you have anything interesting — if you think that we should use a specific configuration for the console for bare metal or whatever — feel free to chime in on the issue. Okay, so those are the two biggest items.
Then we have a couple of deadlocks and a couple of issues with the kernel infrastructure that have been reported to us. So that might be of interest if you encountered those — hopefully you won't yet, because you probably don't have the same kernel yet on OKD. I'm just putting them there in case you have something that looks like this, and you can chime in too. And yeah, I almost forgot: we are really, really close to having aarch64 artifacts now. We essentially have them available. We don't have the nice download page and everything on the website yet, but it's there and it should be working. So right now we have a testing release. I don't think we have the stable release yet — oh yeah, maybe we do have a stable one. Yeah, we just have one right now. So we've just had one stable release for aarch64, so you should be able to try them. We have AWS AMIs, we have a lot of stuff — OpenStack images and so on. So yeah, aarch64 is definitely coming to Fedora CoreOS; it's there. And the remaining things are mostly cosmetic, like making sure that it's listed on the website and things like that. So that's coming up right away. So yeah, let me just link this one there — I've already linked it. The builds are there. I'm updating the notes at the same time to make sure I don't forget. Okay, and one last item that I want to point out: if you have only Fibre Channel or Fibre Channel over Ethernet connections on bare metal nodes, for example, you might currently have no way to get networking up and working. We are looking into that, to see if we need to add by default, including in the image, the utilities to make Fibre Channel over Ethernet work on Fedora CoreOS. So yeah, chime in if that's something that's interesting for you in your use case, on bare metal mostly.
Again, feel free to chime in on the discussion. And that should be it for me. I think those are the top items from Fedora CoreOS for the last two weeks. We've got a couple of questions in the chat channel. Okay, just go ahead and articulate your questions for people watching the video. Okay, sure. You mentioned the FCOS live ISO stuff — I didn't know that was a thing. Does it support hybrid persistence type stuff as well? Like, if you put it onto a USB stick, would a /var partition automatically be provisioned in the remaining space so you could use it? So it's not automatically done, but you can do it. You can either use the space you have on the USB stick — but that's probably not what you want — or, most probably, what you want to do is use a disk on the system to persist data, so /var essentially, and you can do that. So the live ISO is a fully working Fedora CoreOS environment, and it is running Ignition on first boot. So if you want to configure the live ISO with Ignition, that's perfectly possible; it's built for it. You can even embed an Ignition config into the ISO, and it will be picked up by Ignition at first boot and then applied. So let's say you want to partition /var and put /var on a persistent disk and keep running the live ISO as an ephemeral, familiar environment — you can do that. But when you reboot, you will get the live ISO again, so you have to figure out another way to do updates. If you use it like that, the main use case for that way of working is PXE booting, where you just update the version of Fedora CoreOS directly on the server, and you reboot your nodes and they simply reboot into the new version. I see, so it's for bare metal provisioning, basically.
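A minimal sketch of the workflow being described, assuming `coreos-installer` is installed and the live ISO has been downloaded (the file and device names here are hypothetical):

```shell
# Embed an Ignition config into the live ISO so Ignition picks it up
# automatically at first boot (e.g. to put /var on a persistent disk):
coreos-installer iso ignition embed \
  --ignition-file config.ign fedora-coreos-live.x86_64.iso

# Or, from inside the booted live environment, install Fedora CoreOS
# onto a real disk instead of running ephemerally:
sudo coreos-installer install /dev/sda --ignition-file config.ign
```

The same Ignition config can also be supplied via kernel command-line arguments at live ISO boot, as mentioned below.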
You can either use it for provisioning — to install Fedora CoreOS onto nodes, because you have a fully working environment and you can run coreos-installer manually, or automatically via Ignition, or via arguments provided directly to the kernel command line during the live ISO boot — or you can use it directly as an environment. It's possible. But yes, it's probably better fitted for your bare metal use case, you're right. Excellent. Any further questions or comments about that? Let's move on now to the docs update. Brian, go ahead and take it away. Okay, so we've been working on updating OKD.io. If anyone has actually tried to update the main site, you'll understand that it's not trivial, and there's a learning curve to understand the various packages that make it up — you really do need to be a full front-end developer to actually achieve that. So what we've done is — let me actually share — we've set up a beta branch. Can you see that? Yep. Okay, we've set up a beta branch within the main repo. And this has now been switched over to use MkDocs. And you'll see there's a link on the side which actually takes you to the beta site. So this is being rendered using GitHub Pages. And on a GitHub push, you get all of the automation to build and update the site. So when we accept a pull request or a commit onto the branch, it will automatically update the documentation. So there's no infrastructure needed within Red Hat or anywhere else. The site is fully Markdown — even the home page now is fully Markdown. It means that we don't have all the fancy animations that are on the current site, but in terms of updating it, it is much simpler. We also have much better navigation, where we've got the high-level site navigation here on the left and an in-page navigation on the right. So you can just click and navigate around based on the headings one, two, and three.
This is all sort of automated. If I look at the site, it's fully responsive. So if I come onto a phone, you'll see that the menus vanish and they sort of switch up to here, and I can get to the in-page navigation. So it's a fully responsive design. It will also flip between light and dark mode depending on your browser, and if your system settings allow that, whenever you open it, it should adapt to whatever your system settings are. So the basic site is there, and I'm not precious about this — if anybody thinks there's a better color scheme or anything like that. It's fantastic. It is. It's fantastic. And I think we can live without the animation. Okay. Honestly, the only thing that would make it better is if we got rid of the weird gradient background that's behind some of the imagery. Because, ironically, it makes it less readable. Like, if you go look at the title, where it says "community distribution of Kubernetes that powers Red Hat OpenShift" — it's actually a little hard for me to read because of the background reducing the contrast. Okay, I put this in — I mean, the home page has got some custom templates, just to make it a little bit nicer, and I know Diane was very keen to keep the rocket man in. The rocket man's fine. I just really have a problem with the weird bubbly hexagon background pattern thing. Yeah, which is just from the outside. I'm happy to take that out. Other than that, it is solid. I just wanted to add a little bit more interest rather than having a solid color. You'll see that as we get into the main site, we lose all of that. So it just becomes — yeah, these are the guides. So if we look at the guides, we've got the pictures. So it's all there, and you've got navigation, the search — again, I can type "vsphere" and it just pops up, and then these are all the places vSphere appears. So we have a sort of automatic search index on the site as well, if you want to find stuff.
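For anyone who wants to try the beta branch locally, a rough sketch — the exact plugin set the site uses is not specified here, so anything beyond `mkdocs` itself is an assumption:

```shell
# Install MkDocs and preview the checked-out docs branch locally:
pip install mkdocs
mkdocs serve          # live-reloading preview at http://127.0.0.1:8000

# Build the static site the way CI would; --strict turns warnings
# (such as broken internal links) into build failures:
mkdocs build --strict
```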
And what I have done, in the community section, if you're interested: I started putting documentation about the Markdown. We use standard Markdown, but we also use an extension library, so we can do things like have these sort of information boxes. We can put tabs in — you can see that we can actually switch tabs — and just add a little bit more control and information. Tables, code blocks — we can actually put line numbers in or leave them out, and it's got copy-paste automatically in there. The other thing I've done is I've added a spell checker. So whenever we push something, we automatically do a full spell check, and it is set to US English. So I know you guys use your simplified version of English, not real English, but I have gone and adopted US English as the standard default. So everybody needs to use US English. If you want to, on a page, you can actually put a comment in which adds words — so if you've got technical words, you can add them in to pass the spell checker. We'll not publish if there are spell-checking errors or if there are link-checking errors. The other thing we do is a full link check on publish. So again, we should never have broken links or bad spelling within the content. So it's there — go play with it. As I say, it's in the beta branch on the main GitHub repo. I'm now looking at the content and trying to sort of beef up the content. One thing that we've talked about is, on the install page, maybe moving some of the content from the OKD readme. So this is on the OKD main repo — there's some quite useful information within the readme there, and some of this we think should be part of the OKD documentation rather than being in the readme. And there we go — that's it. Yeah, I wanted to actually loop Vadim in on this. So Vadim, the readme from the repo has sort of a mix of a lot of different aspects, right? Installation and the releases, nightlies, and stuff like that.
So the thought was that we would break this up a little bit and make the readme more of just a true readme — this is the project — and then have links to the various OKD.io web pages for those particular topics. Do you have any concerns about that? This has been kind of your... No, that sounds very good. I'm just not sure which site people first encounter. Do they look for OKD.io, or do they go to GitHub and figure out that it's OKD? So I think there should be some text, a very brief description, on both sides, so that people would not get confused. But other than that, we definitely should move a bunch of text from the readme over. Excellent. Excellent. All right. Any other questions or comments for Brian? This is awesome work, man. Awesome. Absolutely fantastic. Brian, what's your sense in terms of timeline? Do you think a month, two months? Oh, I'm thinking a fortnight — that's two weeks, for everyone who doesn't know what that word means. Yeah, because I think the sooner we can actually make this move, the sooner I'm hoping people will pick up this banner and actually contribute content. For example, the code of conduct — we can actually move it in here as well. I mean, I actually do have a CoC section in there, but I think that we can add to it and make this a much more dynamic site. Hopefully it's easy enough to use now. So if we have a headline like "build X is broken, please use a previous version," we could actually put that on the front page and have sort of a headline section, and it can be updated almost weekly as we need to. And as people get stuck and figure things out, they can just go and throw something on the site, because it's really simple to do now. There's no steep learning curve. I think Mike appreciated just how horrible it is adding content, with his guides. So hopefully it all just makes sense and is easier now.
But yeah, I'm hoping to have the core of the reorganization and the pages in place within the next two weeks. So then, maybe on this call in two weeks' time, we can have a look through and actually take the decision: needs more work, or let's switch it live now. I think a primary thing will be getting that readme parsed out, and that text that Vadim mentioned — that's going to be a necessity. And once we have that, I think it's safe to do the other stuff sort of as needed. Yeah, Brian, it looks fantastic, and I'm really happy that you made this switch. We might consider, at a lower level of priority, after the substance that people were talking about earlier — we should make sure that we get the color scheme reasonable, because when I look at it, your sort of lighter background in the middle looks like it has a bluish tinge to it, and then the highlights are red, and red on blue is the worst on the eyes. You know, the human eye can't focus simultaneously on both red and blue because they're at opposite ends of the spectrum. I'm just saying that we might look at the color balance and the accents and things like that — but, you know, I'm happy to defer that to the docs working group. Yeah, as I say, I'm not precious. I just picked colors that were on the existing site, and a lot of thought didn't go into it. So I'm very happy for people to go and make changes or suggest changes. Yeah. Oh, sure. Well, I guess from the Red Hat side, you know, we like red. But that doesn't mean that we should destroy everybody's eyes in the process. Yeah. So we can use the colors incorporated into the rocket man, right, and just sort of go from there. But I'm guessing — is it themed? I haven't looked at how hard it is to change the template colors. I've broken them all out, so it's really easy. If you go and look, there's a CSS folder in the docs folder; it's in there.
So every color has got a CSS variable name against it, so we can just go and change them. And there's a switch for changing them. Right — so we need to let people know that that's there. All right, I do want to move on, because we do have a full agenda and we have some guests here, so I need to get through some of the other things that we've got. Real briefly: I'm noticing in the channels that there are a lot of folks asking questions without providing a lot of context. Does anyone — I'm asking the docs group and folks here — know of a Kubernetes-based how-to-ask-questions document? Like, I know there's that one site that's been around for years about how to ask questions. Does anyone know of a good text that we can point to, or build upon, about how folks should ask questions when they are seeking help in the channels or in a discussion item? Vadim started to put some stuff together, which was really helpful, and maybe we'll just put that into a document if we don't know of anything else. Just for information: I do plan to have that page on OKD.io, so we can actually use that as a way to point people. Excellent. Very cool. It doesn't sound like there's a Kubernetes-specific page, so I think we'll just build on what Vadim has, and the docs working group can tackle that next week. We'll just add those items — Vadim came up with a list a couple of weeks ago — put those together, and then put that in a format that can go into the description of the Slack channel. And then, at the top of the — well, in the Slack channel; then we're going to change the Google group to actually point to the discussion section. And in the discussion section, we can put that there and have folks provide that information. Vadim, can that be done? There are templates in the discussion section as well, right? You can have templated answers, but I don't think these are shared across the whole org.
I have a list of mine, and I'm not sure how to share them; these are saved in my account. GitHub might be weird about that. All right, just wanted to throw that out there. The docs group will talk about that a little bit more. And training materials for docs support resources — that was basically docs about docs, and we answered that a little bit at the last docs meeting, and we'll be talking more about it at the next meeting. Repo issues: is there anything that people want to pull out of the repo to discuss real quick, from the issue section of the repo? Any actual issues? Vadim, I know you've been pushing a lot of stuff to discussions, because a lot of it was discussion-oriented, and you covered a lot of issues at the beginning here, but is there anything you want to bring to our attention? No, I don't think I've seen anything super interesting, or at least anything which cannot be covered with a log bundle. I wanted to discuss the whole procedure. When I learned that GitHub has discussions, I was thinking that would be a great place to figure out what an issue is about, and then you can move it back to issues. Now it turns out GitHub has a label called "bug," and if you mark something as a bug, that will give you a brand new tab right nearby, between issues and discussions, and it's going to be a dedicated place for bugs. So you can have three places where you can figure out if something is a bug or not. I don't know if I should go on with moving issues to discussions, or should we keep them as issues and just mark them as bugs? My head's spinning on it, honestly. Yeah, what do folks think? I'm surprised that there's now going to be a third option; I'm curious what their thought process is for doing that. Probably because a lot of people use issues with a bug label or something like that anyway, so they're thinking, oh, let's make it easier for them. But we kind of came up with our own thing in the meantime. I don't know.
Honestly, I've been keeping an eye on the discussion sections and watching what Vadim moves in there. A lot of it isn't really a bug bug, you know — it's just a bunch of people, over and over, having trouble deploying their clusters, which is understandable because it's really hard, but it kind of illustrates how far we have to go here. My idea was that if we figure out in a discussion that it's an actual Fedora CoreOS bug, we close the discussion and reopen it as a bug. But this has happened maybe twice. Apparently, that's not the way to go. Conversely, not everybody should be able to say that something is a bug, right? Like, to me, the normal usage that I'm used to seeing is that anybody can raise an issue, and then after discussion you decide whether or not it's a bug. And then I guess a discussion would be something that people don't even think is an issue — they just want to talk about things, maybe propose new things or whatever. So I could see having those three categories, but we would definitely have to tell people how to use them. I don't think they thought it through that far. And I was going to say, conversely to what Vadim was saying about Fedora CoreOS bugs: if we surface bugs in OpenShift components, those bugs need to get brought back into bugzilla.redhat.com, or somehow need to make it to the components. Because I can guarantee you that, aside from Vadim and myself and probably a few others on this call, none of those OpenShift engineers are looking at the OKD issues list. It is also hard to figure out, because oftentimes I notice that a lot of bugs I run into personally are already being tracked and fixed, but it's very hard to figure that out from the outside looking in. Yeah, and I don't think that's necessarily reassuring, but I think other people are having that problem as well. We could probably go a lot further toward closing a lot of these things out quickly.
If we had some way for us as external people — not just those of us inside the Red Hat bubble — to be able to triage that kind of stuff. I don't know how to bridge this gap. This is still a question I would like to see answered. If we could teach the community how to get more communication with the Red Hat people who are working on OCP — I know we're migrating to Jira in the future and everything, but if we had bugs going into that pipeline, then you would have people responding more quickly, at least the people who are working on the code. I think now there's just this gap that we need to kind of fill up. Okay, I want to table this discussion because we've got 20 minutes left, and we have a guest and a couple of other important items that I wanted to get to. But let's put this on the agenda for the next meeting and try to come up with an actual plan for how to move folks around. Vadim, do you know of a resource for understanding and parsing must-gathers? Like, if we can get more people understanding how to work with must-gather, then we might be able to get through the things that get posted much better. I don't think we have a guide on where to look first in the must-gather area. However, there are a bunch of tools centered around must-gathers, which you just run, and they give you a very short gist of what's happening. But then it depends on the specific components — API server issues can be varied; it can be simply pods not starting. So, for a start, I think we have a short guide; I'll try to find it and, well, open source it, like we should have. I didn't want to spend too much time on that, but I noticed there's a lot of stuff where, if community members can get through the must-gathers, we can save Vadim and a lot of other folks a lot of time.
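As a sketch of the kind of triage flow being described — the destination directory is arbitrary, and `o-must-gather` is one community tool mentioned here as an option, not necessarily the one Vadim has in mind:

```shell
# Collect diagnostics from a live cluster into a local directory:
oc adm must-gather --dest-dir=./must-gather

# A quick first pass: check the ClusterOperator dumps for degraded operators.
grep -r -A2 'type: Degraded' \
  ./must-gather/*/cluster-scoped-resources/config.openshift.io/clusteroperators/

# Community tools can then summarize the archive offline, e.g.:
#   pip install o-must-gather
#   omg use ./must-gather
#   omg get clusteroperators
```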
And more precisely answer people, I think. Okay, I'm going to breeze through these really quickly. Can I ask a quick question about the must-gathers? I don't know if this is ever something the community would accept, but would it ever be possible for us to create the kind of service where someone could make a must-gather and drop it in a place where we can review it later? We can't do that — GDPR and CCPA are the killers. That's the problem. So even if we ask people up front to upload their content with some sort of proviso, that doesn't get us out of GDPR and whatnot? Yeah, because they are uploading, and they have to control the results; they have to upload it to some servers of which they are in control. All right, well, let's move on, because I do want to get to this other stuff. The user questionnaire: the docs group decided the user questionnaire would be great, and we talked about it last time, so the user questionnaire is up and posted. Dreeti posted a response with some great suggestions. If other folks can chime in — the goal is to get the survey out to OKD users. What's missing? What are their biggest roadblocks? What do they like the most, etc.? So check that out. CodeReady Containers subgroup: we've got Neil, Daniel, Dreeti, myself, Charo, Brian — anyone else want to be on that? Anyone else on the call who wants to be part of that subgroup that's helping maintain CodeReady Containers, put your name in the chat or speak up. Okay, Bruce is interested as well; we'll add your name. Very cool. And we're in the process of figuring that out. Basically, I think Neil and Daniel are going to be sort of the leads on this. And yeah, it looks like we've got a fair number of people who are interested, so that's awesome. And that was Diane's measure of success for the subgroup: that there were people interested in maintaining it.
Vadim answered the operator questions — please check those out. We won't go over them here, but there's some clarity about operators, and we'll talk about that at the next meeting. New business: a migration path outline. Would it be possible, Vadim, for us to create a simple table that we could direct people to? Because it seems like a lot of the questions are "can I upgrade from this version to this version," etc. Is it too complex for that, or do you think we'd be able to pull it off? For OCP, we have a full-blown service which gives you an upgrade path. For OKD, it's a bit non-trivial. I mean, if you can parse JSON manually, we have an endpoint, but visualizing this is a bit tricky. So I think we have a static dot-style diagram which can give you the edges; because we don't use any channels, that's much easier than OCP. Visualization of this is always a tricky question. I think I can find this diagram, which can be rebuilt dynamically. I'll put an action item down for myself. Awesome, thank you. So now I want to move on to our guest, Sandro, who is here to talk a little bit. Did anyone else show up, or is it just you? And which topic — was it you who's going to talk about KubeVirt? There are a few people here: it's me, Michal, and Fabian Deutsch. I would suggest having Fabian introduce the subject. Okay. Our discussion item came out of a desire for OKD to be more stable, and the switch to KubeVirt for virtualization, the upstream of Red Hat's virtualization offering. We received an email about that and invited folks to come here and talk about it. Yeah, I see that Fabian dropped. Okay. So you probably know that within OpenShift we are shipping OpenShift Virtualization, and we do not really have a fully integrated solution corresponding to that based on OKD. We started trying to get KubeVirt running on top of OKD a while ago, and things have improved over the past few months.
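As an aside on the upgrade-path endpoint Vadim mentioned: such endpoints return Cincinnati-format JSON, with a `nodes` array of releases and an `edges` array of `[from, to]` index pairs. A sketch of flattening that into readable upgrade edges with `jq` — the payload here is fabricated for illustration; the real data would come from the actual endpoint:

```shell
# Fabricated sample of a Cincinnati-style graph payload:
graph='{"nodes":[{"version":"4.7.0-okd"},{"version":"4.8.0-okd"}],"edges":[[0,1]]}'

# Turn each [from, to] index pair into a "from -> to" line:
echo "$graph" | jq -r \
  '.edges[] as [$from, $to] | "\(.nodes[$from].version) -> \(.nodes[$to].version)"'
# prints: 4.7.0-okd -> 4.8.0-okd
```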
And now we would like to see if there is interest in getting things more integrated and making KubeVirt something like a special interest group within the OKD project, providing virtualization on top of OKD. I'd like to see that for sure. It's a very interesting thing to me, and I think one of the bigger weaknesses of Kubernetes as a whole is how poor it is at doing what people tend to expect for virtualization. I feel like oVirt and OKD are complementary in this regard, and bringing them closer together can only lead to a better solution. So the thought would be that there would be a subgroup, a sub-working group, that would focus on this, with maybe some Red Hat folks and some OKD folks. Is that the general thinking? Yeah, I think so. The idea is to get the community involved in this process and get closer to providing something that can be tested by the overall community, to see what the future looks like and stay in touch on what will happen. For example, with RHV and OpenShift Virtualization, it will help build more understanding of the differences and the possibilities that OKD can provide in that area. Excellent. This would have to work across multiple different setups, because obviously OKD runs on cloud. I think we've got a fairly large vSphere population; I run oVirt, but I think the majority tend to run vSphere, and then obviously we've got some bare metal. What is the dependency of the virtualization layer on the underlying platform? I would expect that people who want to run virtualization on top of OKD are going to install on bare metal; that's the expectation. That said, I think running on nested virtualization will also work, with one level of nesting. And I think there's nothing really specific that we require from the OKD installer for that. The other thing is that I remember reading this in the OpenShift documentation.
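The bare metal versus nested virtualization point above boils down to whether a node exposes hardware virtualization. A minimal sketch of that check, assuming a Linux host where KVM support shows up as `/dev/kvm`:

```python
import os

def kvm_available() -> bool:
    """Return True if the host exposes /dev/kvm.

    KubeVirt normally schedules VMs onto nodes with hardware
    virtualization; with one level of nesting, a guest can expose
    /dev/kvm too. Without it, KubeVirt can be configured to fall back
    to software emulation, which is much slower.
    """
    return os.path.exists("/dev/kvm")

print("hardware virtualization (KVM) available:", kvm_available())
```

On a bare metal node or a first-level nested guest with nested virt enabled, this would report True; deeper nesting or cloud instances without nested virt would report False.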
I don't know if this is still true, but if you pair OpenShift Container Platform running the OpenShift Virtualization module on top of Red Hat Virtualization, you can still orchestrate VMs from the OpenShift interface; they just get provisioned through RHV. That means you get direct virtualization management through the OpenShift API for RHV. So you get an OpenStack-like, cloudy, hyperscaler-type provisioning API combined with the more traditional virtualization resource management interface and API of oVirt, in one solution. Yeah, we had an oVirt conference today, and Gal Zeidman from my team presented the integration between OKD and oVirt in exactly those terms. I added a link to that presentation to the agenda, and I would suggest having a look; the integration works pretty well, so I think on that side we are already in a good place. Here the proposal is kind of different: it's not about having OKD more integrated with oVirt, but about providing an alternative to oVirt itself, which is running KubeVirt on top of OKD. Yeah, my understanding is that with KubeVirt, especially if we're able to use it as the substrate, then you can run OpenShift on OpenShift, basically, right? Oh my God, I don't want to think about that. The biggest problem with OpenShift Virtualization, at least from my perspective, and I've played with it a little bit in some demos, is that it's too hard to work with. It is extremely alien to someone who's more used to how the oVirt UI works, how virtual machines tend to be managed that way, and how provisioning works there. I would class OpenShift Virtualization closer to OpenStack than to oVirt, whereas something along the lines of, what's it called, Harvester.
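For a sense of what "running KubeVirt on top of OKD" looks like in practice, VMs become ordinary Kubernetes resources. Here is a sketch that builds a minimal `kubevirt.io/v1` VirtualMachine manifest as a plain dict; the container disk image and field values are illustrative, not a vetted configuration.

```python
import json

def minimal_vm(name: str, memory: str = "1Gi") -> dict:
    """Build a minimal KubeVirt VirtualMachine manifest as a dict.

    Field names follow the kubevirt.io/v1 API; the containerDisk
    image below is just an example value.
    """
    return {
        "apiVersion": "kubevirt.io/v1",
        "kind": "VirtualMachine",
        "metadata": {"name": name},
        "spec": {
            "running": False,  # create stopped; start via the API later
            "template": {
                "spec": {
                    "domain": {
                        "devices": {
                            "disks": [{"name": "root",
                                       "disk": {"bus": "virtio"}}],
                        },
                        "resources": {"requests": {"memory": memory}},
                    },
                    "volumes": [{
                        "name": "root",
                        "containerDisk": {
                            "image": "quay.io/containerdisks/fedora:latest",
                        },
                    }],
                },
            },
        },
    }

print(json.dumps(minimal_vm("demo-vm"), indent=2))
```

Applied with `kubectl apply -f`, a manifest like this is managed by the same API machinery as any other workload, which is both the appeal and, per the UX discussion above, part of what feels alien to oVirt-style administrators.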
Harvester is closer to oVirt than it is to OpenStack in terms of how virtual machine management works, and they're both using KubeVirt. So having used both oVirt and OKD, if I wanted to move from oVirt to OKD, I think a prerequisite would be some kind of UX migration path, so that people who work that way can continue to function in an OpenShift world, and I don't think we're there yet. Maybe we will be someday, I don't know, but that's really the big gap I see: people who work in the VMware or RHV style of virtual machine management basically have no home in OpenShift Virtualization. Okay, let's see if we have any questions. No, mostly discussion in the chat. I just, yeah, go ahead. Sorry, Jimmy. I just want to add a side note about some of the tech behind this too, if people are curious. In OpenShift now we have a KubeVirt cloud provider, a machine API provider, that we've been working on for a while. I'm not sure how up to date that is, but recently Apple has come to the cluster API community, and they've put a lot of people behind trying to create a full cluster API provider for KubeVirt. And I know that our Red Hat team that's been working on KubeVirt is now linking up with some of those Apple folks to see if we can bridge the gap between the machine API provider that we have now for KubeVirt and what Apple wants to do with KubeVirt upstream. In the cluster API project, it seems that Apple is very interested in building and automating Kubernetes clusters on top of KubeVirt; they want to put a lot into that. And I know they had been looking at the OpenShift machine API provider, because that was the only thing out there for a while, but it's very OpenShift-specific and doesn't work with the cluster API pieces.
So that's another interesting tidbit that I think will probably bear a lot of fruit over the next year or so. Do you think, I can't believe what I'm saying, do you think the Apple folks might be interested in joining us in OKD to try these kinds of things? Because if we're heading down this direction, it would be interesting to have them involved, to sync up and collaborate, working with the Red Hat OCP folks and the KubeVirt folks on the Red Hat teams, and generally making OKD and OpenShift work this way for them. That's a great question. I talked with some of the Apple folks and some of the Red Hat folks and basically made introductions and tried to hook them up, because I don't work on KubeVirt specifically. My impression is that Apple is really keen to get this cluster API provider running, and they would love to have Red Hat's help working on it. So I know there's some communication happening between those two groups. I don't have my nose in the middle of it to know whether Apple could be convinced to do this all on OKD or something; that would be really cool. Do you still have those contacts you could reach out to? Yeah, yeah. Apple's still coming to the cluster API meetings; I could continue this conversation with them and see if we can maybe convince them to come here. Let's do that, as much as my brain hurts a little bit from thinking about this. All right, we've got three minutes left, so let's do a sub-working group on this. I think it would be great: we'll get the Red Hat folks, Sandro, anyone else that's interested, and then the OKD folks here. I'll send something out over the working group's Google Group email list, and then we'll get volunteers and actually solidify this as a project. It's a great offering.
And then Mike will reach out to the Apple folks, let them know we're starting this as a subproject, and invite them; maybe they'll be interested. We have three minutes left. Someone asked, and I didn't want to skip this because we want to be fair to everyone in terms of access, about changing the meeting time to better support folks in Europe, the Middle East, and Africa, for whom this is a little bit late. It starts at 1pm for me, and we've got other folks for whom it starts at 9am; I think it's 9am for Diane. I don't want to direct this conversation, but what are people's thoughts about going earlier? I don't think I can go any earlier. My days are already sandwiched hard as it is: I front-load most of my community meetings into the first half of the week, and the latter half is when I actually get other things done. Some of y'all have seen my calendar; my Mondays and Tuesdays are basically chock-full, and I basically don't have any free time left. So I don't think I can really move it. And as for the West Coast folks, I don't think Diane can get up earlier; I don't think she wants to. I'm in the same time zone as Diane, we're actually not that far from each other, and it's currently at 10 because of daylight saving time. I guess in October, whenever we flip back to standard time, it'll go to 9, and I wouldn't be able to make it any earlier. Diane said quite a while ago that negotiating a time on the Red Hat calendar was nearly impossible, which makes it really hard to move. And for folks that don't know, Vadim, this is going on 6pm for you now, is that right? It's more like eight. It's acceptable; moving it later would be very bad, and moving it earlier might conflict with my other meetings.
I'm fine with the time, and I can understand that some folks might not be able to make it. Make sure someone from your team can make it to one of these meetings, or ping me or Christian beforehand so we can pass the information along, and then watch the recording. That's probably the only solution to the stupid time zones we have. What I'd suggest is, if there is enough interest, to have another meeting. Yeah, but that sort of fragments things; I'm hoping we can avoid that. There are more than 24 time zones, so it's impossible to find any time that works for everyone; we're really spread out. All right, I think we're going to stick with where we are for now. I understand that for some folks this might be difficult; I'll just point out that it's 11:30 there. We'll keep it here for now, and we'll come up with a communication mechanism to make sure that people get the recordings very quickly after the fact, and that people who can't make it can still get their input into meetings via point people. I think that's our best compromise. Yes, yes, that would be very helpful. Okay, awesome. Okay, folks, we're over time right now, so I guess it's a good time to end. Thank you so much; this was a hugely productive meeting. I'll get the video up as soon as possible, and I'll talk to you all soon. Thank you. Thank you. Bye.