People join. Hey, Joseph, thank you again. No camera — well, no camera here because I've got too many people in the house and privacy issues, so today you're not going to see my smiling face. All right, let's get started. Let me share my screen to do that. If you can add your name to the meeting group file, that would be great, and then I'll go back. So, welcome everybody to the working group meeting this week. We've had some really good progress on the site that we'll talk about a little later, but first I'd like to have the team talk about the latest release, and maybe take over the screen share as well and walk through the open issues too. So, if you want to take it away, that would be great.

Sure — not certain the screen sharing will work for me, so let me do it by voice. I usually create a tracking issue where we list all the issues we fixed and some unresolved issues we're hitting. The biggest problem fixed in this release was mirrorable payloads. We finally have the fix deployed to all the build farms. I still had to do a few manual checks, but we finally have a way to push the images properly with schema 2 manifests, so the payload will be mirrorable. That was probably the largest problem fixed. We also picked up quite a few OCP fixes — most notably, the Thanos component had been using way too much memory, and that was resolved — and we have a payload with sudo and kernel CVE fixes coming from Fedora. I think the most problematic parts remaining are — oh wait, we also finally fixed installation on vSphere, oVirt and OpenStack, at least it passes in CI. The remaining part we needed was the systemd-resolved fixes. Of course, we need additional confirmation on that, because we still don't trust CI entirely.

Sorry, my audio was broken — did you say mirroring works for the current version?

This fix applies to the current version; we cannot do it retroactively. We cannot change the previous releases, unfortunately, because that would change the manifest digests, we would have to do the whole signing again, and we would effectively have to release a new payload for the previous version.

Maybe you remember that I wrote a few scripts, and it works. I cleaned them up, and I'm currently upgrading an air-gapped cluster in my company — not my production one — and I only had to disable the repositories on 4.5. I use my scripts to mirror and fix the images on Quay. It should be feasible for everybody who has this problem; you then have to force-apply the fixed release payload, and it seems to work. I'm not at the stage of the machine config operator yet, but almost. It's not possible to install a fresh cluster with this fixed payload, though, because the installer has — you know what I mean — the wrong hashes.

That's acceptable: all of your manifest digests are different, so you have to override the signature check, and it's not the latest stable release, so it's definitely not recommended for fresh installs anyway. Right, but going ahead we probably won't have to change a lot and things should be relatively simple; either way, we now have a way to fix it manually, so that's the way to go.
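For reference, here is a minimal sketch of the mirror-and-force-apply flow described above. It is not the speaker's actual scripts; the mirror registry (registry.example.com:5000) and the release tag are placeholder assumptions, and exact pull specs will differ per release.

```bash
# Sketch only: mirror a fixed OKD release into a local registry, then force-apply it.
# <TAG> is a placeholder; registry.example.com:5000 is an assumed mirror registry.
OKD_TAG="<TAG>"
MIRROR="registry.example.com:5000/okd"

# Mirror the release payload and its referenced images into the disconnected registry.
oc adm release mirror \
  --from="quay.io/openshift/okd:${OKD_TAG}" \
  --to="${MIRROR}" \
  --to-release-image="${MIRROR}:${OKD_TAG}"

# Point the cluster at the mirrored payload. --force skips signature verification,
# which is needed here because the manifest digests no longer match the signed release.
oc adm upgrade \
  --to-image="${MIRROR}:${OKD_TAG}" \
  --allow-explicit-upgrade \
  --force
```

On a restricted network you would additionally apply the ImageContentSourcePolicy that `oc adm release mirror` prints, so nodes pull component images from the mirror instead of Quay.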
I haven't closed that issue, because I want to make sure that our build farms are consistently producing valid images, but we can count it as half fixed, basically. The biggest unresolved issues are probably two very long-standing problems with OVN and OpenShift SDN. Various people keep commenting on them, and I think we're mixing several bugs there, so if anybody understands SDNs or anything else — both of those GitHub issues should have Bugzilla counterparts, and we should be commenting there and providing the necessary information. That's pretty much it.

Vadim, you mentioned yesterday that there is a NetworkManager problem with current releases — fixed, or?

NetworkManager in Fedora has been updated, and due to the way we deliver packages — we deliver Fedora CoreOS stable, and we have to add the matching version of the NetworkManager-ovs package. On 4.6 upgrades it is noticeable, but it breaks the 4.5 to 4.6 upgrade. Hopefully we will release another 4.6 stable this week, maybe next week, with the fix included, but direct 4.5 to 4.6 upgrades are not yet possible for this release. Hopefully that will be fixed next time. And speaking of the future, we are preparing a 4.7 release candidate — not yet in stable, but something it would be great to look into. It's not done yet, because we need to fix the SSH authentication problem in MCO first; otherwise installations would be undebuggable — you would not be able to SSH in. Once that's resolved, we'll have an out-of-channel 4.7 release candidate deployed, and hopefully by the time OCP switches its stable releases to 4.7, we will go to 4.7 as well. Yeah, that's pretty much all I've got for now.

So, I know I've been talking with Joseph a lot, because he's been doing a lot of work on the okd.io site for us — everybody should take a look at that. But one of the issues, and I know Joseph's been struggling with this too, is the fast cadence of the 4.6 releases — which I hilariously think is amazing, to say the word "fast", after all the delays to get to 4.0. We'd like to see a little bit more stability in the releases, because each time there is a release he's having issues — and maybe Joseph can say this better than I can. How can we get better stability in each of these releases, so that they don't break people as they try to update? Joseph, maybe you want to add a few more words here, because I know this was a big concern of yours.

There is no big news, but I think the thing OKD, or OpenShift, brings with so many of its features is that you don't have to care about the host system. That's one of the top features for us, and if exactly that is a problem in OKD, I think we should get a workaround for it. Test more, spread the testing among the community members if possible — I don't know, because in 4.5 upgrading was almost fun to do, because it always worked, and I would love to get back into that situation. I know that Fedora had a few problems that were not synchronised with OKD; maybe that's a possible fix for the situation. In 4.7 — I don't know, you laugh, but maybe you can say something about that.

I see the situation with 4.5 as exactly the reverse, because we had to do a lot of legwork just to get it working. We had several components forked, and we had to rebase them often. Now we have only one fork left, at least — MCO — and the patch to the installer for 4.7 is much smaller and already in review. Oh, and I forgot to mention the most important news ever.
Our enhancement has been merged, so we are officially part of OpenShift, and folks in 4.8 will start reviewing our forks for the installer. Why do we suffer so much instability? Fedora CoreOS 33 has definitely added some of it, but the main reason is that we lack tests. As in, we have CI runs for OpenStack, and I don't know if I can trust them, because they show that things are passing and I don't see bug reports after a couple of weeks. We should probably just trust that they are valid and that this is the configuration our users are using. As for vSphere, we have plenty of vSphere users and we get results there, so I trust vSphere CI, because it's pretty close to what we're seeing. As for the cadence, OpenShift releases weekly on four streams — three on good days — and promotes in two or three days from candidate to stable. That's roughly four times faster than OKD, and the instability doesn't come from a release just lying there, with nobody using it, until after a couple of weeks it's "ready". It comes as bugs get fixed, so if we slow the cadence, bugs fixed in OCP will take much longer to reach stable — like the Thanos thing. With weekly releases we would have it fixed sooner; without them we would be saying, sorry, we have a cadence of one month, so you all have to wait or use nightlies. So testing nightlies more often would probably be the best solution right now.

Yeah, and I think that's what Joseph and I were talking about yesterday — it's all about the testing and trying to figure out a game plan so that we can get more community support for it. And I can see Jamie — maybe I have Jamie on mute by accident, sorry guys, because we had a bit of noise going on; everybody, again, just self-mute if you're not talking. So one of the things we could do, I think, is get better documentation around what it takes to test on the different platforms, because I know, Joseph, you were talking about testing on Azure, because that was —

vSphere, because I think vSphere is very, very common in companies — sorry, I have a delay and I hear myself. All these developer features of OpenShift and OKD are mostly used on-premises, because that's where the source code is, and on-premises you normally have vSphere. That may change, but nowadays I think vSphere is the most commonly used platform on-premises. That's why I think we do ourselves a favor by pushing exactly that one the most. On Azure, in production, I don't need all these developer features, because I have normally already tested my stuff on-premises. I don't know what you think about that — am I completely wrong on this?

That's perfectly aligned with what we see in OCP, and judging by the amount of OKD bugs, it's pretty much the same. The problem is that the CI for vSphere has much less capacity, and there is not much we can do about that quickly. We've added vSphere tests for almost all critical OKD components, but we're not totally sure we can trust them yet, because right now a couple of conformance tests are failing and we need to find out whether it's random noise — noisy neighbors — or some OKD instability. It's very likely just noise, but we need to figure that out before we make it a blocking job. Luckily, with John, the last two weeks worked perfectly — John has been able to test quite a lot of changes very, very rapidly. That was incredibly fruitful.
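As a concrete starting point for the "test nightlies and candidate releases more often" suggestion above, here is a rough sketch of how a community member could pull a given OKD release, extract the matching tools, and kick off a test install. The pull spec and tag are placeholders, and where candidate builds are published may vary.

```bash
# Sketch: grab the client/installer that match a specific OKD release and inspect it.
RELEASE_IMAGE="quay.io/openshift/okd:<TAG>"   # placeholder tag from the release page

# Show what went into the payload (component versions, included images).
oc adm release info "${RELEASE_IMAGE}"

# Extract the openshift-install and oc binaries that match this exact release.
oc adm release extract --tools "${RELEASE_IMAGE}" --to=./tools
tar -xzf ./tools/openshift-install-linux-*.tar.gz -C ./tools

# Then run a normal install with a prepared install-config.yaml for your platform.
./tools/openshift-install create cluster --dir=./test-cluster --log-level=info
```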
I'm a bit worried about other platforms, and the whole bare metal UPI thing, because there is no recipe that fits everyone — meaning almost every bare metal UPI bug is unactionable, because it depends on the local setup. Either it's a documentation issue or a common bug that affects all the platforms, so we would need to figure out something there. Something like an OKD-recommended setup which we test, or — I don't even know, it's a really tough situation.

Let me jump in here very quickly. I think that is actually the issue with most UPI setups, not just bare metal, but also vSphere and the other platforms. Because UPI means the user provides the infrastructure, we can't really have one piece of code account for all the possibilities there. So I think it would be much easier for us if people were to go with the IPI installation, because we know what we're getting there. Obviously we have folks whose vSphere systems are too old — I think, Neil, that was the problem with your company — and other folks who just don't have that infrastructure, so it's not feasible for everybody. But as a recommendation, I think it should be IPI, because we can debug that much more easily.

I can speak for my company: we also must use UPI, because we have an external load balancer and lots of specialities. But I think if we can manage to define a core setup for UPI that is always tested — and you have to ensure that your load balancer works and so on, you have to take care of that alone, and DNS and so on — then maybe we can encapsulate the core for UPI and have that one tested very well. I don't know if it's...

We have a vSphere UPI test, but the problem is that we don't know if we should trust it. Is it the shared configuration across most of our users or not? We can do a survey amongst people on this call and so on, but that might not represent the whole community. In OCP we have a bit more information about what's happening, and we would have to reuse what they have there.

We still have one incompatibility issue with the vSphere UPI test flow that we currently have for OCP: it actually still uses the ifcfg files for defining the network config, and that is not supported in Fedora CoreOS. I actually have a PR open on the release repository, or one of those — I think it's on the installer — to change that, and then we should be able to run the vSphere UPI test for OKD as well. We should probably just ping the installer folks again on that, because it's a tiny PR and it would enable us to run that test for OKD too.

This is great. Do you think it's possible that we can run your tests on our local setups, or is it too complicated to set that up as a test environment? At the moment I just run a cluster here and throw manifests at it, and that's about as far as my setup goes.

That part is very easy. We run a subset of the Kubernetes conformance tests, and at this stage we don't look too deeply into why they are failing, honestly. I'm mostly concerned about install, because if some important part breaks during install, it most likely won't let it finish at all. The conformance tests usually verify the Kubernetes parts — that the API server can respond and so on — and all of that is shared with OCP anyway. It is a bit involved, though: I would need to find a way for you to run the tests. Obviously it's all public; it's just a bit tricky to actually make it run, because depending on the platform you might need a bastion host if your nodes need to be SSHed into, and so on.
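For anyone curious what "running a subset of the conformance tests" looks like in practice, a hedged sketch follows. It assumes the openshift-tests binary can be extracted from the release payload, and that the suite name matches your version; suite names and flags have changed between releases.

```bash
# Sketch: extract the test binary matching a release and run a conformance suite
# against an existing cluster. Tag, paths, and suite name are assumptions to adapt.
RELEASE_IMAGE="quay.io/openshift/okd:<TAG>"
oc adm release extract --command=openshift-tests "${RELEASE_IMAGE}" --to=./bin

export KUBECONFIG=~/clusters/test-cluster/auth/kubeconfig
./bin/openshift-tests run openshift/conformance/parallel --junit-dir=./junit
```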
What I'm mostly concerned about is install and upgrade verification. We have tests for upgrade, which are effectively running `oc adm upgrade`, but as a Go application, because we need to watch for disruptions of the API server. Verifying that on nightlies and getting fast feedback on whether we broke something, or whether some important fix landed in OCP and we need to do a release now — that would benefit OKD the most, I think.

So you mean, if we could find an automated setup that installs OKD and upgrades it with nightlies — is that all, or do we also run tests on the installation?

No — the automated setup we have in CI. We might want to extend it, and since our enhancement is merged, we now have full rights to do that. The thing is, we cannot trust CI if it does not match what our community runs. So if you have some throwaway cluster, you would use your own configuration, the way you want it to look, and we would check that our CI results are actually valid and show the same failures as an actual user setup. Without establishing that trust, all of that CI is useless. Take, for instance, that SDN issue we keep carrying from release to release. It was originally reported for GCP, but folks on bare metal jumped in and said, I also have this. We contacted the SDN folks and they said it's a GCP-specific setup problem — some health checks were not set correctly — and it was fixed later in the installer. And now I don't know what to do: should we close it because it was originally reported for GCP and apparently fixed, or is it actually a long-standing bug somewhere else that GCP just exposes? Closing that kind of loop and having actionable bugs would benefit us the most.

So what is the next step here? Because we talk about this a lot — trying to get the community to step in and do some of this testing on a regular basis for each of the releases. And Jamie — I don't know if you want to speak up — has written some automation for UPI on vSphere. Is it helpful if someone like Joseph or Neil or Jamie or Bruce sets up a testing pipeline on their platform and, every time we do a release, does a run of the testing? Is that something we should be aspiring to?

I would love that. I bought a Ryzen system — a nice one — and it's always idling around, so I could do that. I use the scripts provided in the guides in the OKD repo; I adjusted them a little bit, and I always do all my installations with that setup. It's almost completely automated.

That's perfect, that sounds like exactly what we need. I don't think this needs to happen for every single nightly, but if we could check installs and, ideally, upgrades, that would be perfect.

That would be great. So — and Joseph and I have been talking, I'm just going to be honest, Joseph and I have been talking a lot on the side — we do the OKD site upgrade, Jamie gets his automated setup and UPI stuff documented and available. Would it help if we did — I have often talked about hosting a hackathon for building out the operators and such, but instead, what if we did a hackathon in which the morning was: walk through this automated setup for testing UPI, or whatever, and how to do it; and then the rest of the afternoon was coaching sessions, where everybody could have their breakout room and try to do it on their own system, whatever it is, so that they could get some one-on-one help if they were crashing and burning or had questions.
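To make the "install, then upgrade, and watch for disruption" idea above concrete, here is a very rough shell skeleton of the kind of per-release pipeline run being discussed. It is a sketch, not the CI job the speaker mentioned (which is a Go application); pull specs, timeouts, and the polling logic are crude placeholder assumptions.

```bash
#!/usr/bin/env bash
# Sketch of an install + upgrade smoke test. All pull specs are placeholders.
set -euo pipefail

WORKDIR="$(mktemp -d)"
cp install-config.yaml "${WORKDIR}/"          # pre-written config for your platform

openshift-install create cluster --dir="${WORKDIR}" --log-level=info
export KUBECONFIG="${WORKDIR}/auth/kubeconfig"
oc wait clusteroperator --all --for=condition=Available --timeout=45m

# Request the upgrade to the release under test (add --force for unsigned payloads).
oc adm upgrade --to-image="${TARGET_RELEASE:?set TARGET_RELEASE}" --allow-explicit-upgrade
sleep 120   # give the CVO time to pick up the new target before polling

# Crude stand-in for the CI disruption check: poll the API server while the upgrade runs.
for _ in $(seq 1 360); do                     # up to ~1 hour, 10s apart
  oc get --raw /readyz >/dev/null 2>&1 || echo "$(date -u +%FT%TZ) API server unreachable"
  state="$(oc get clusterversion version -o jsonpath='{.status.history[0].state}' || true)"
  [ "${state}" = "Completed" ] && break
  sleep 10
done

openshift-install destroy cluster --dir="${WORKDIR}"
```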
But is that something — because I can set up a date somewhere out there in the next month or so to do that, and ask everybody who's interested in doing these nightly or per-release setups, and we could just get that going so that we have those tests in place. Is that something people would like to see happen? Neil, Bruce, Joseph, community members, non-Red Hatters — would you use that if, between Jamie and me, we got a date together that worked for everybody and hosted a morning session where Jamie explains the setup and Vadim and Christian are available, and then in the afternoon a two-hour session with breakout rooms where people could coach you if you were having problems?

I think that'd be fantastic.

Okay. That's a generally really nice idea. It would help make OKD and OpenShift appear a lot less daunting to people.

Yeah. And as Bruce is saying in the chat, we need to get the documentation done first. So, again, next Tuesday we'll have another docs session, because we were so successful with it last week, and maybe each week we can work through a different set of docs that we need to get done — that's what I was thinking. And I can use either BlueJeans, which everybody obviously can use here, with the Primetime version so we can have breakout rooms in it, or set up a Hopin where we have a main stage and a bunch of breakout tracks, and figure that out.

Use Hopin, it's better.

Okay. All right, everybody says plus one to Hopin; I like Hopin too. So, yeah, that's what we'll do — I'm not going to try anything else new and different these days. And I have lots of prepaid seats still, so that's great. Yeah, I'm thinking that's it, because we talk a good game about it, and I think we're now at the tipping point where we really need to get the stability and these testing cycles in, so that it's not always on Vadim and Christian, and there are folks to do this. And I would really love to get those 63 bugs, or issues, that are out there down to something more reasonable sooner rather than later. So that would be very helpful. So I will work with you, Jamie. Jamie, are you available next Tuesday? I know you were on vacation last week.

I will be back, and back in action and ready to help, in the next couple of days.

So that's my goal. And, you know, for all of you non-Red Hatters who are here: if we can do some sort of calendar thing and find a date that works for everybody — and Jamie, once we have the docs done. It will probably be a weekend day, I'm thinking, just because then everybody won't have the distractions of their work, if that's okay with folks. But I'd really like to get through this issue, get this set up, and then we can focus on operators or whatever else comes up next. Does that work for folks?

One day where we talk about our environments and set up test environments, maybe ideally a common one — this would be great. Because right now this is a black box to me. If OKD is running, it runs very, very smoothly, very stable, but sometimes you are searching for the missing link, and I'm always pestering Vadim — yeah, sometimes in the night — to get one hint. I think we can help each other much more if we have these sessions, and also relieve the stress on Vadim and Christian. That would be great.
And then anyone who doesn't want to work on that can hang out in another chat room and work on documentation or something even more fun during the day. So, yeah — and Josh, I might tap you to give me a hand with that as well, because that would be fun. And Josh is still working without sound — or maybe I muted him. If you guys don't know Josh, Josh is on the open source team here at Red Hat in the CTO office — Mr. Kubernetes these days, among other things. So it's wonderful to have you here, even if you're not fully on camera.

Yeah, thanks. That would be great.

So, yeah, we've beaten that horse, and he's not dead yet, so we'll get there. I was going to reiterate — and I put the tweet in there — that our session at DevConf.cz is on Thursday. Vadim and Christian, you're on tap with me to do that; it's 5:30 Central European Time. Just a reminder, and I'll send a note out to the mailing list shortly reminding everybody who wants to come. And if you see my tweet, retweet it if you're going to come — then we'll get more people at the party. That would be wonderful. All right, let's see what else is on our agenda today. Did I promise anybody their five minutes of fame to talk about something? I know every once in a while I mess up — is someone sitting there with a presentation they're expecting to give today? OKD.io — oh, yes. Thank you. I'm going to stop sharing again, and I'm going to let Joseph drive, if you would like to. You can show it, please, and then comment on it.

Okay. So, based on last Tuesday's working group on docs, we've made some changes — "we" being Joseph — and I just put them into place this morning. We've tried to simplify the navigation here: What is OKD, Installation, Documentation, Community. There's still more work to be done on the FAQ and so on. What I'd like everybody to do is test it. We did put the Surgeon General's warning for 3.11 in, thank you very much, and if you're still looking for OKD 3.11, it takes you to this page, which looks very similar to what it looked like before. So we try to keep people from using 3.11 if we can help it, and then there's the OKD 4 section.

And that looks very nice — I see what you did there.

We took out the roadmap video that was here, for now, and put in the impressions slider. I would like to ask that we update the "What is OKD 4" video on YouTube, because we need one for KubeCon in May and Red Hat Summit. So I will probably tap Vadim and Charo and Christian to set up a time to re-record a little video for here, as well as the slider, and work through updating some of the verbiage. So if you have comments on the verbiage — this element is floating over there, I have to fix that; Ctrl+F5 — it's in the cache. Okay. The jumping images are gone; they all have the same size now. We've changed this a little bit here — there's still some more tweaking to do, but we've tried to clean up this section so that people come straight to the community. We've changed this section on contribution — we're going to try to get a little more verbiage over here, but the structure is pretty much there. And I have for some reason lost all of my images here, so I think I probably did something in the build, but there should be images here that click through to the different projects.
These two pieces are going to be merged. One of them — somehow I've lost all of the images very recently, so...

No, they are there — on my PC it looks great.

Yeah, it does. Maybe it's just my internet connection. So, we've rejigged the structure, and now it's about rejigging some of the content as well. One thing I've noticed on other sites, around this end-user section: they have a section in the GitHub repo where folks can add their names if they are using OpenShift or OKD or the open source project. I'd like to do that, rather than relying on metadata tags here, so that people can add themselves as OKD users. So I'm going to work through that in the GitHub repo on the community side and pull it from there instead. But this is the structure, and everything is looking better. The one other thing that we were going to add, maybe — Joseph — is a blog.

Yes. And the one trepidation I have about the blog is that I'm a big opponent of documenting by blogging, so I will try to coach people, when things look like they should be pieces of documentation, to turn them into real documentation and maintain them. And speaking of documentation: when you select a version, it says "latest" and 3.11 is there. So I'm thinking we need to ask the docs team to have a "4" listed below here, because "latest" takes you to the latest version of 4, but it doesn't reference 4 in here. So there's a little jig we have to do for that to look better. So, you know — feedback, anyone, right now or otherwise on the mailing list?

The thing with the blog is, I think we need a central place, ideally on okd.io, where we can write things like migration hints — for example, for people who are sitting on 4.5 and want to migrate to 4.6. There are a few easy steps they must do, and they're out there, say, in one issue or another, but you have to search through several issues and be very brave to do that. I think a little bit more documentation in a central place, maybe even only with links to issues, would be great. I'm missing that a lot.

Sorry, should we also call out that the openshift/okd GitHub repo exists? I know in the beginning we were trying to make that the place to get OKD-specific documentation, whereas the actual docs site was just adapted from OpenShift, just as a reference. Is that changing with this new effort, or —

No. I mean, okd.io has links to both the official documentation and our GitHub repo, where we can easily push things like, as Joseph suggested, blog updates or some kind of micro-blog with fresh changes. And there should be a GitHub icon which leads you to openshift/okd.

We could also use the OKD GitHub repo, because GitHub has GitHub Pages — we could use that. Then you'd have to write, I think, okd — no, not okd; openshift — that's not good, because you need your company or your name dot github.io, and then you have a page where you can set up a blog, a micro-blog.

I'm not sure if we have GitHub Pages enabled on the openshift organization on GitHub. We would have to jump through a few hoops if we really want this.

I think it's easier to do it in the repo where okd.io lives than to try to jump through those hoops, to be quite honest.

It's a better place, yes.

Yeah. So — if people don't know — Commons' GitHub: Commons and OKD and Project Quay and a few others live over here.
They live here with the "-cs" after it, and that way I don't have to jump through those hoops with the engineering team. That's where we're at. So — feedback, thoughts, folks? We'll keep editing it and revising it, and you can also make a pull request against this repo if you have something you'd like to add to the burning issues. And if we set up the blog in here, then you can contribute there as well — it's an easy pull request to push it. I'll stop sharing again, and let me see if I covered off the other part of the conversation. I think we've covered this off, Joseph — sorry if I short-changed you on what the most valuable place to test is. I think we covered that: it would be vSphere, over bare metal and AWS, and we'll go with that. And the other pieces — looking at the agenda here, we've hit everything that was on the agenda. So, is there anything else that Vadim, Christian or anyone else wants to bring up today? I'll stop sharing my screen.

I can tease some of the things that are coming to OKD in the coming months, but I think I first want to hear from Neil and Sri about their enhancement proposal for multi-arch clusters.

Okay, let's do that, if Neil is unmuted.

Yes. Yes, it's working — I was also trying. It's not hard. Okay, screen sharing — different buttons. Y'all see my screen?

Yep. Yep.

All right, so Sri and I have been talking about this for a few weeks or so now, and we started writing this yesterday and then realized we have no idea what we're doing. We talked a few weeks back about the idea of being able to use OKD with mixed architectures in the cluster, where you optimize by having most of your things be one architecture and some be another, or an even balance, or whatever. In this case, specifically, we were interested in the idea of having arm64 for most of the nodes and then x86 where it is wanted or needed, or vice versa — for cost-optimization and performance-optimization reasons. We looked around and didn't see anything in particular about multi-arch, other than "multi-arch is a key initiative of OpenShift", which is a wonderful statement with nothing to back it. So we wanted to try to come up with something to get the ball rolling.

Exactly. So we kind of started trying to write this, and then it turned out we have no idea what we're doing. And I think, Sri, you had some questions in particular — better questions than I did — about what we're supposed to do here with this enhancement proposal stuff, because I have no idea what we're doing.

Yeah — oh, Christian, I'm sorry, go ahead first.

No — yeah, there's a lot of stuff that's very specific to the workflow. Like, if you scroll down: what are the risks and mitigations, what are the design details. And for a proposal like this, looking at some of the other enhancements, it felt to me that it would need to be rather more involved than either of us really has the knowledge to speak to.

Yeah, I think the best way to do this is to just put something up and then, during the review, fill in the missing parts as they come up.

Perfect — from the beginning.
The one thing where — you know, you were just explaining how you were thinking about this as having an ARM cluster and then adding x86 nodes to it. I would start from the status quo, where the standard cluster is x86 and you want to add a node with a different architecture to that. And I would formulate it in an agnostic way, where you just say: we want to add worker nodes that have a different CPU architecture to the cluster.

No, I did specifically want to generalize this to not specifically calling out arm64. But one of the principal motivators for wanting to write this is specifically that running an OpenShift cluster is too freaking expensive. And one of the things — you know, I did a back-of-the-napkin analysis, so nothing particularly concrete or useful to put out there — but running most of the nodes as arm64, and only having the things that absolutely need to be x86 use that, like for example virtualization nodes or edge nodes or something like that, cuts the cost by more than 60%.

I don't doubt that. I do think that is a different goal, though — cutting the costs. So I think this has to be agnostic, so it can be used both ways: in the end you can essentially say, I'm going to install my ARM cluster and then add a few x86 or Power nodes or whatever. But we don't yet even have ARM here — and this is actually one of the pieces I was going to follow up with. The multi-arch effort is going on, and I don't have any specific dates yet, but we are going to be starting an ARM effort, and that effort is going to be OKD-first.

Okay. I think that is great, and I think that work will go hand in hand with this proposal. But I don't think you should — I think you want too much here, when you say you want this to be ARM-based master nodes, which isn't really, or at least shouldn't be, the concern of this. This should just be a heterogeneous cluster architecture, where some of the worker nodes have a different architecture, whereas in the normal, homogeneous setup all masters and workers share the same architecture. And it should be agnostic enough that you can say: whatever architecture we have for the master nodes, we can still add a machine config pool, essentially, and machine sets for worker nodes of a different architecture. For the implementation details, this is mostly something where the MCO and the machine API operator will have to be adapted to account for it. But I don't think you should be saying we want this to run on ARM masters, because that is the ARM multi-arch effort, which I think is much broader than what this enhancement proposal should be.

Yeah, and that makes total sense. We initially did have it as: be able to run workers of multiple different architectures paired to the same set of masters. But then Neil and I were doing a little bit of digging, and it seems like Red Hat CoreOS has ARM-based builds, so we were wondering if OCP already had ARM versions of all the necessary containers ready to go, so that you could run a whole ARM cluster — because we couldn't find any information one way or the other, apart from the aforementioned vague statements.

No — we only have the base operating system, RHCOS, and we also have FCOS builds.
But we don't have any containers yet.

Great. I wasn't even sure we had FCOS builds, because I couldn't find them. I vaguely recall that there were arm64 builds of FCOS, but I couldn't find them anywhere. Additionally, the Fedora CoreOS build system does not publicly exist — I can't go to builds.coreos.fedoraproject.org and actually see what's happening. It's not recorded in Koji, so I have no idea how anything works, how it's being released, what is being built or what is being released.

They're being built, but I don't think they're being distributed anywhere yet — we don't upload them to any of the clouds yet. That is probably the thing that will be done quite soon, because this effort is now starting to roll. It will be one of the first things we do, because we already build those images, and we'll start distributing them as soon as we start the actual work on it. We're planning this right now and it's not too far out.

Cool.

Yeah, I have to mention: if you're planning to submit this enhancement — which is very ambitious, I have to say — it needs to be renamed from multi-arch to mixed-arch, because multi-arch means you can have an x86_64 OpenShift cluster and an arm64 one, or whatever other architecture, but these are separate and don't touch each other. What you're proposing is to have mixed-arch workers, or nodes, in the cluster, meaning all the workloads here need to be available in both versions and work with each other perfectly — which is very complex, let me just say that ahead of time.

Wait, why does that distinction exist? Because multi-arch has traditionally meant you're in a heterogeneous architecture environment — that's been the case with OpenStack, that's been the case even with VMware when they introduced ESXi on Arm and things like that. So why is this different with OpenShift? Even Kubernetes upstream calls multi-arch clusters having mixed architectures in the same cluster.

Oh, I mean, of course you can do that in Kubernetes — you would join an arm64 node to an x86 or whatever other cluster — but you still have to bring your own network, you still have to bring your own CNI plugins, you have to bring your own ingresses and make sure that these run on particular nodes. That's Kubernetes: those are your problems. In OpenShift we deploy, for instance, the network using a DaemonSet, meaning it runs on all nodes — so some pods would actually be ARM and some x86_64, and while they're built from the same source they're still different, meaning they might show different bugs. And in OpenShift we don't allow customizing this: we cannot set two different pod specs, they have to point to one single image. That gets fixed by introducing manifest lists, and `oc adm release` has to understand those manifest lists, build them properly, and embed two copies of every single image. So your payload grows from five gigs to at least ten if you just want to support two architectures. There are tons and tons of problems like that.

I'm not actually disagreeing with any of that. What I was mainly saying is, I was confused why we're using the term mixed architecture instead of multi-arch for this kind of cluster.

I think it's just historical — multi-arch is already taken, already implemented, and it would just be confusing for the reviewer.

Okay. That's fine.
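One small, concrete piece of the mixed-arch problem described above can be shown with plain Kubernetes scheduling: until every component ships as a manifest list, single-arch images have to be pinned to nodes of the matching architecture via the standard kubernetes.io/arch node label. A minimal sketch, with an assumed image and name:

```bash
# Sketch: pin a workload built only for x86_64 to amd64 nodes in a mixed-arch cluster.
oc apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: x86-only-app                               # illustrative name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: x86-only-app
  template:
    metadata:
      labels:
        app: x86-only-app
    spec:
      nodeSelector:
        kubernetes.io/arch: amd64                  # well-known node label set by the kubelet
      containers:
      - name: app
        image: registry.example.com/team/app:latest   # assumed single-arch image
EOF
```

Cluster components delivered as DaemonSets cannot be pinned this way, which is exactly why the release payload itself would have to move to manifest lists.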
I just wanted to make sure — I do understand the complexity of actually doing this; my question was mainly the terminology, because upstream Kubernetes, OpenStack, Linux, all of these refer to the idea of heterogeneous setups as multi-arch rather than mixed-arch, so I went with the term everyone else was using.

Neil — naming and cache invalidation. Three things: naming, cache invalidation, and DNS. DNS, always DNS. I've got a story about DNS.

I have a quick question for Christian. The folks who are working on the multi-arch work internally at Red Hat — is anyone from that team currently coming to the OKD working group, or is that something we could ask someone to come and give us an update on?

I don't think anybody is currently joining these meetings. So, yeah, let's talk about this afterwards, quickly, and I think we'll get some updates from them soon.

All right. I'm here for the Power side, obviously, rather than the ARM side. I love Power — it's just that I can't afford it.

Yeah, I prefer Power honestly, if it were more affordable — an architecture that actually makes sense, where computers actually work the way they're supposed to.

Computers don't work; that's a myth.

All right. I wanted to say about the enhancement, you know, Sri — I think it is actually going to be very useful, or it could be very useful, to have an ARM worker and use that to build our containers on it, or the same for Power, so we wouldn't need an entire Power or ARM cluster for building these images. We could reuse our CI system, Prow, and build our containers — or multi-arch containers — with that same system, possibly.

So that would be great. That was some of the motivation for me as well, because at work we have an Open Build Service instance for building packages and stuff, and it already does this kind of mixed-architecture scheduling. We started exploring ARM-based stuff for some workloads and it turned out to be really nice, but we have no actual way of running containers and applications on that architecture right now. So I wanted to try to get ahead of that and see if we can get things in place for it.

I would definitely suggest you just open that PR as a work-in-progress draft, tag all the architects on it — and me, everybody you know — and we'll work on it as a group and also get the attention and input from the architects. I think that's very important here.

Yeah. So, we're at the end of the hour with two minutes to spare, and Joseph asked: can you spend two minutes talking about the future of OKD — which I think is a much longer conversation than two minutes.

I can actually try to make it short, in 30 seconds. The biggest part is that multi-arch effort — the most interesting part for us. And obviously we have the OpenShift roadmap out; all of that is going to be included in OKD. Now, specific to OKD: we've been missing the IPI bare metal platform — we haven't been supporting it — and we've made some progress towards that. There's one PR missing that has to go in, and then we can start building Ironic — yeah, the bare metal IPI installs with Ironic. And that might be another thing that folks might actually be able to use.
If you have an IPMI-supported machine — or a machine that supports IPMI, or three machines that support it — you'll be able to install your own cluster on bare metal hardware. So we hope to merge that last outstanding PR soon. Those installs will start first in 4.8, but we'll try to backport that at least to 4.7, which is also going to be released soon. So, yeah, that's my update.

That was good — in less than two minutes, well done. And then I'm going to use the last minute. As Josh asked: Red Hat Summit and KubeCon are coming, so we need to refresh the demos that we have and do an updated "What is OKD" — I think I had a 60-second one, or just under 60, and a two-minute one. So I'm going to tap Charo and Vadim and Christian to help me with that. Folks who have short demos of things they'd like to show, reach out to me, and I can always host a session like this on BlueJeans, record it, and edit it into something we can use in the demos as well. We can have a thread on that in the mailing list on Google Groups. So that's my ask — and I think Josh was here because he wants them all by Friday, because they always want them six months ahead of every event. So I will work on the "What is OKD" and getting the site content up to snuff so that it syncs, and we can talk about how to get a roadmap and a two-minute "What is OKD" video done sometime. Two minutes isn't hard to do; it's finding the one hour in everyone's schedule to record the two minutes that's hard. I'll work on that with you guys in Slack. And then, Jamie, go to the beach with your kid, go back on vacation — I don't know where you are, but I wish I was there, because the beach would be nice right now.

So, thank you all — really good conversation today — and I will work with Jamie and everybody else to find a time for a testing hackathon and to get the documentation done. We will have another meeting next week on docs, so if you're interested in that, please come. And any feedback on okd.io, or typos, or things that should be there that aren't — send a note, maybe with "docs" in the title, to the mailing list on the Google Group, or send it directly to me, or get it to me somehow. All right. Any final words, Vadim, Christian?

All right. Good. Let me just thank Vadim for his work on OKD. I think it's been a lot lately, especially on his shoulders, so I hugely appreciate your work, and I think we all do. So, yeah, thank you, Vadim.

The unsung hero of this release is actually John Fortin — he's been helping with testing things a lot.

Yeah, I think this is where we can push and try to take some of it off your shoulders in the next little while, now that we've gotten through this, as 4.7 is coming soon and it's going to be another game changer. So I'm looking forward to working with you all on that. All right. Vadim, you're the best. Take care, everybody. Bye. Bye.