started first with an agenda review. If you could look at the agenda real quick (I posted it in the chat, and it's also available in the calendar invite), take a quick look at it and let me know if there's anything you want to add or move around. We'll take about 20 seconds to do that. So let me know if there's anything that needs to be changed. All right, going down the agenda. Last call, any changes? All right, looks like folks are good. Let's start out with introductions. I'll go just across my screen, starting with Bruce.

Yeah, I'm Bruce Link. I'm an instructor at BCIT. Among other things, we teach enterprise Java, containers, Kubernetes, all those good things, and so OKD is really handy in covering a lot of that stuff.

Diane.

I'm Diane Mueller. I've been a long-time OKD working group co-chair with everybody else here, and I'm also the community development person for the OpenShift Cloud Platform BU. So I've been here for a while.

Hi, I'm Vadim Rutkovsky. I work for Red Hat and I'm a technical co-lead for the OKD project.

John Fortin.

Yeah, I'm John Fortin. I work for Market America here in North Carolina as a senior systems architect, and we're doing a lot with OpenShift OKD in our environment. So every week's a little different.

Timothy.

Hi, I work on the CoreOS team at Red Hat, so I'm mostly working on Fedora CoreOS.

Yeah, Michael McCune. You might see my handle, elmiko; people call me that online. I am an individual contributor from Red Hat. I'm an engineer working on cloud infrastructure stuff. And yeah, just love the OKD community.

Next one, Brian.

Hey, I'm Brian Innes. I work for IBM, but I'm here as sort of a hobbyist, home-lab user of OKD.

Rita.

Hi, I'm at Red Hat, and I work on tooling for OKD.

Christian. He was here. Oh, sorry.

Hey, everybody. I'm Christian Glombek. I'm an engineer on OpenShift at Red Hat, and I work on ARM enablement.

Excellent. He's enabling arms. I'm Jaime Magiera, and I am a co-chair of these meetings, at the University of Michigan, where we have multiple OKD clusters doing various tasks. All right, let's now move on to release updates with Vadim.

Again, we didn't have any new stable this week, mostly because we've been pushing out 4.8 internally and watching for problems related to 4.7-to-4.8 upgrades. A couple of rather small ones were found, so those fixes will land in OKD 4.7 at least. Our major blocker now is that OVN is disrupting services, and our CI is very unhappy about this. We need new fixes merged into 4.8 to prevent that from happening. We merged the 4.9 fixes, so we're waiting for them to be cherry-picked back. There has also been an oVirt issue related to workers not joining. It has also been fixed in 4.9, and we're waiting for confirmation so that it can be cherry-picked back to 4.8. So helping with this would really help. Most likely this weekend I will release a new stable based on 4.7, to get all the things we get from kubelet and from Fedora CoreOS, because we haven't had a release for quite a while. We're also gathering folks from engineering internally to get more volunteers, so hopefully we'll get more fresh faces here. And I believe that's all related to the release updates from me.

Thank you, Vadim. And if you could, particularly in the notes, note the things that you mentioned you'd like the community to help on; underline or star those in the meeting notes.

We have the good-first-issue label in GitHub. That's probably the best place to start for newcomers. The other tasks are very complex to start with.
For instance, we want NetworkManager 1.32 in Fedora 34, or to migrate to Fedora 35 sooner, because NetworkManager 1.32 has critical fixes for us, and currently we pull them from a Copr instead of having a tested release from Fedora. The other tasks are pretty large, mostly related to our internal infrastructure, like fixing the release controller to have proper channels and so on. I'm not sure how to structure all those to-dos. We've published them internally, so I'm hoping engineers could hop into that, and I'm not sure how to involve the community in that. Should we just dump this list and find some assignees, or carefully curate it and give out more easy-to-start jobs? That's something that would be interesting to discuss.

Let's put that as a discussion for the next meeting, because this one's pretty packed, but I would like to define a path forward for that. I think that that would be... Okay, let's move now to FCOS updates.

Right, so one big item is that we've made new releases of FCOS to pull in the fresh kernel and fresh systemd releases, to fix some critical bugs there. I've pasted the links into the notes. We currently have one known regression with the new systemd version. I don't know if they made it into OKD releases yet, but that's the status on the Fedora CoreOS side. And the second point is mostly a continuation from last week and the week before: we have ongoing work to bring aarch64 support to Fedora CoreOS, and we're also still looking into whether we ship by default with Kubernetes-focused or standalone-focused defaults. But that should not really directly impact the OKD community; it's much more a Fedora CoreOS-side discussion, but essentially it will involve docs changes and some configuration changes. And that's it for me. Are there any questions on that? Actually, I should ask: are there any questions from the folks here on what was just mentioned? Okay.

The only question I have, maybe for Vadim: is there any impact on the release from these?

I don't think so. We cherry-pick whatever is in stable Fedora CoreOS whenever we want to. It's actually manual now; we previously had an automatic import, but with the new system we have to do this manually. So we can do this anytime. I'm planning to do this on Friday, so it shouldn't be an issue really.

Then, Christian, you had something?

Not really a question, but for OKD on ARM, the AWS AMI, or rather the aarch64 AMI, for FCOS will also be required. There is actually a JIRA card, and I'm not sure if it's public, but this should be the one where we can follow and track that work. I'm just going to link it; if it's public, I'm going to put it in the notes as well. I just posted it in the chat.

Yeah, I can read it.

Okay, perfect. I'll put it in the notes. Great. Thank you, Christian.

Okay. Yeah, just one minor comment: there's a test AMI that I've posted a link to in the notes from the previous meetings, so you can get that there if you want to try out Fedora CoreOS on AWS. And about the systemd regression: the regressed systemd is in stable right now because it comes with the fixes for the CVE, so that's why we currently have it in stable. The next time we promote testing, so in about one week or something, we'll be back to a fixed version of systemd and a new version of the kernel as well. Any other questions or comments on that? Right.
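For anyone who wants to verify what FCOS content their nodes are actually running (for example, whether the NetworkManager or systemd fixes mentioned above have landed, or whether a package is coming from a Copr override rather than the Fedora release repos), a minimal sketch using `oc debug`; the node name is a placeholder:

```bash
# Pick a node name from `oc get nodes`, then inspect the FCOS deployment
# on it, including the base version and any layered/overridden packages:
oc debug node/<node-name> -- chroot /host rpm-ostree status
```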
Now, moving over to our issues section. There's only one new issue that's been added since then; this was opened nine days ago by ProtoSAM, and this is updating the OKD-based CRC build. Who's our point person for CRC right now?

Well, it has been Charro Gruver for the most part, but I think he's gotten busy.

Yeah. We may need to find someone in the actual CRC product management and engineering loop, as opposed to a working group person, to do that. So, which is the link in the notes? I'm looking at the notes; hold on one second here.

You could... I'll put it in the chat first and then I'll put it in.

Okay, cool. And now looking at discussion items. Looks like we've had (and Brian, we will be getting to that overall topic in the discussion sort of later) "bootkube service not starting." Anything related to that that we can summarize? Vadim, you were just waiting for the log. Did you actually get a log bundle on that? Yeah, looks like you did. Or no, you didn't.

No, I don't think it has it.

All right. "Updating OKD broken": yes, broken due to ndots. Yeah, that was the Red Hat support one. So Vadim, do you want to close 790, since it's OCP, or do you want them to close it once they've gotten support from Red Hat?

I'm fine with leaving this open. It most likely affects OKD as well. So once we reach some conclusion, we can.

And missing worker node?

Yeah, that was that oVirt bug I mentioned in my status. We figured out that, for some bizarre reason, oVirt checks which interfaces you have on your node and finds the IP from them. It doesn't take into account that you can have OVN installed and so on. We fixed this in 4.9, but we never managed to get it verified; CI has been pretty rude to us. I'll give it a couple more tries, and once we ensure that this is fixed in CI, we'll cherry-pick this back to 4.8 and 4.7. But if we could make sure manually that this works, that would be even better.

Vadim, I'm going to try that on my oVirt system after the meeting. Is it in any of the dailies? Because I noticed they're failing on some of the other platforms. Is there a specific nightly that you want to test, or just the latest one?

I think it's been around for like a week, so the latest would be great. If it fails on vSphere, it's most likely due to docker.io pull rate limits, so feel free to pick whichever is passing on AWS. We'll be looking at this later.

Thanks. This discussion has a link to Bugzilla. Oh, it's already set to VERIFIED. Perfect. Cool. So we can cherry-pick right away. But some additional confirmation on anything we're missing would be great. Thanks.

Could you provide a little bit of a synopsis of 788 and pruning, and sort of the angle you're trying to get at in asking for a bug report, and what you're thinking?

That's about a release not present in stable, right? So that's one of the tasks we wanted to pass to the release controller owners. Currently, everything which matches the regex lands in a particular channel. So if you use stable-4, you get anything we tagged into it. But in Red Hat's system, called OSUS, we have real channels where we can manually promote releases. That's something we want in the release controller as well. And the result of this is that a nightly is no different from a stable from the way the release controller sees them. So we can have them landing in different channels, and it's very easy to confuse them.
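As a quick reference for the kind of testing John just offered, a minimal sketch for checking what a cluster is running and inspecting a candidate payload; the pullspec and tag are placeholders:

```bash
# Show the version the cluster believes it is running:
oc get clusterversion version \
  -o jsonpath='{.status.desired.version}{"\n"}'

# Inspect a payload (nightly or stable) before installing or upgrading to it:
oc adm release info quay.io/openshift/okd:<version-tag>
```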
And if you install a nightly, you can still choose a stable release, and eventually it would be pruned by the release controller, and there's no way of identifying that you've installed a nightly. So what we want is to make sure that you get a proper notification that it's a nightly and it can be pruned. Then users could decide on their own if it's a testing thing, or if they want to migrate to something stable, or if they have mirrored the release; some more information that this release may go away. And that would be the ultimate fix for this issue. But at this point, you would probably have to either reinstall the cluster, or hope that the images are not gone from the registry and that this hasn't been pruned on the image side. So most likely upgrading to stable would work, but it's not really guaranteed. I'm not sure how we phrase this in a discussion, because it's a very, very long topic and a pretty complex one. We probably should have...

If you want, we can save it for the next meeting, but I think it'd be good to cover it in a little more depth at another time.

Yeah, that was my idea. We should cover the whole channel system and how things get promoted in a clear fashion, at least in a meeting, so that later on we can document it.

Yeah. Okay, let's do that. Let's shoot for maybe two meetings from now, so a month from now. All right, that's about it for the discussions that are up there. All of the other ones are pretty straightforward, or we've talked about them previously. There are a lot of questions on certificates. I wonder if we should somehow document the certificate process better, or certificate management and process better, just because I'm noticing there's been a lot of stuff coming in. Well, some of that was one person hitting multiple places of communication, but there does seem to be a lot of stuff on certificates. It might be helpful to shed some light on that process at some point, and on how folks can manipulate certificates better. Just an observation. I'm wondering if there's something in the OpenShift blog from the past that we can use.

We can search for that, for sure.

Yeah, we could reuse it and update it, and if you find something, I can tap that person to update it if it's out of date.

One interesting thing here, I think: first of all, OKD manages its own certificates itself, but I've also seen the question of whether you can kind of do that yourself, or trigger a renewal before OpenShift would do it itself. And I don't have an answer to that. The second thing I think noteworthy is that there is a cert-manager operator now on OperatorHub. I mean, cert-manager is probably the almost-default way to manage certificates on Kubernetes in general. You could already just install the standard cert-manager on an OKD cluster, but there is now also the operator, which will do that for you and keep that cert-manager deployment up to date for you. I'm not sure which catalog it is in, but I think it should be in one of the public OperatorHub catalogs.

Yeah, it's certainly in community operators. The problem is that cert-manager creates and takes care of end-user-level certificates, like things you would use for your ingress and things like that. But this issue discussed the internals: where and which certificates the kube-apiserver has, what the kube-apiserver operator does with them, how they pass them on to each other, and things like that. That's very hardcore technical stuff.
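Two reference sketches related to the certificate discussion above. First, installing the cert-manager operator from the community catalog via OLM, as Christian describes; the package and channel names here are assumptions, so check what OperatorHub in your cluster actually lists:

```bash
oc apply -f - <<'EOF'
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: cert-manager
  namespace: openshift-operators
spec:
  # Package/channel names are assumptions -- confirm them with:
  #   oc get packagemanifests -n openshift-marketplace | grep cert-manager
  name: cert-manager
  channel: stable
  source: community-operators
  sourceNamespace: openshift-marketplace
EOF
```

And for the internal, cluster-managed certificates, one way to at least see what the operators are rotating and when things expire, assuming the `auth.openshift.io` annotations that OpenShift operators stamp on managed TLS secrets:

```bash
# List cluster-managed TLS secrets with their expiry timestamps.
oc get secrets -A -o json | jq -r '
  .items[]
  | select(.metadata.annotations["auth.openshift.io/certificate-not-after"] != null)
  | [.metadata.namespace, .metadata.name,
     .metadata.annotations["auth.openshift.io/certificate-not-after"]]
  | @tsv'
```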
And I believe it's already described (at least the expected result is already described) in the enhancements repo, but it's very, very technical. So I don't think we have a real short gist of how to trigger their renewal sooner, but that's a starting point at least. I'm pretty sure the API server folks should have a shorter description, so that other teams would be able to understand what's going on.

Absolutely, I agree. I should probably make that clear. The distinction is between certificates owned by OpenShift itself, which OpenShift itself manages, and then service certificates, which can be handled by cert-manager. But that is for user-deployed services on top of OpenShift, just like it would work on top of any Kubernetes cluster. But yeah, I agree. We should definitely find out what the case is for those OpenShift-owned, OKD-owned certificates, and how one could trigger renewal and manage those, or change how OpenShift manages those itself.

Excellent. All right. We're almost at the halfway point, so let's jump into new business. So this came up in the documentation meeting, and it came up in a discussion item that Brian posted, and I wanted to foster a discussion with the group about the fact that there's kind of a mix of working group activity and, for lack of a better term, support activity in the various channels of communication. Should those maybe be separated? Should there be a boundary of some kind? An example would be the Google group. The Google group, I think, was originally intended for working group communications, but now it's more used for support stuff. And at the docs meeting, there was discussion about creating a new Google channel, basically, or subgroup, right? And so I wanted to run that by folks. What are folks' thoughts about this? Brian, maybe you can lead the discussion, because I know you did a lot of thinking on it.

Yeah. I mean, this really came out of the confusion I had when I started following the community and wanting to get involved. Just looking at the various threads that are there, you end up with quite a list of places where you're told to go look for information. And I know that we've had a few frustrated people that have just posted the same question everywhere, hoping to get a response. And for me, it would be easier if you said: if you're looking for somebody to help you out, go here. And then have a place where, if you're looking to join the community, get involved, and learn about what you can do and what help is needed, go here. Whereas at the minute we sort of have to filter through; everywhere we go seems to be a mixture of people asking for support and discussions about work in progress, or work that we'd like to get done. And so I think from a new user's point of view, or a casual user of the community, having a place and being specific, this place is for this activity, would help the general community.

Anyone else have thoughts on that?

Well, I would love to see a separation in the Google group between the working group administrative and release kind of updates, conversations, and bug fixes, versus a separate place for people to post the (we're avoiding the term) technical-support kind of questions. So some sort of separation, without giving the idea that it is a supported release of OpenShift by Red Hat. So it's that kind of conflation issue.
But I am also wary of creating yet another channel to monitor, for all of us out there in the universe paying attention to the Kubernetes, OpenShift dev, and OpenShift users channels. But I think if the instructions were clearer on the OKD working group site about that, we could do that, and leverage the issues list or the discussions list better. That's just, I think, left over from the docs meeting.

And I guess the question is: do we need all the channels? Because, I mean, I know Discussions are a new thing on GitHub. Have they replaced the need for the Google group, for example, or the mailing list? Do we still need all of the various places? Or can we actually reduce the number of places that we actually use?

Yeah. Diane actually linked to (and I'll have to look back at the notes for the site that you linked to) a project whose community link (and Brian, you were at the docs meeting) goes to the Discussions on GitHub. If we did that, that would resolve this, and it allows people to link directly to tickets and other discussion items. And it's one clear place for linking to code parts. It seems like it would be a good help.

Yeah, and then we could just keep the working group Google group for other things.

Administrative working group stuff.

Yeah. We would have to gently say to folks who have been posting to the Google group, "hey, just so you know, we've shifted," but that's fine.

That would make sense, and it would certainly make it a little less cluttered when it comes to the working group mail that I get now. Because it's like, hey, this is tagged for the working group, but it's asking about OpenShift deployment stuff, and I don't know what I'm supposed to do here.

Neal, could you give a three-second who-you-are, just because you came in late and missed the intros?

I tried to come in earlier, but my internet refused to cooperate.

It's all good. Just so people know who you are.

So, hey, I'm Neal. I'm a senior DevOps engineer at Datto, working on software engineering and release engineering, focused on packages and containers and that sort of stuff. And I run one of the OKD deployments internally at Datto. I'm here mostly with my Datto hat off and my Fedora hat on, where I work on technology and development in the Fedora community and provide a bridge between the Fedora community and the OpenShift community.

Excellent. Thank you, sir. Okay, so if we were to do a straw poll vote right now: raise of hands for switching the community, like just having a community link that goes to the Discussions, and making the working group Google group just for working group communications. Show of hands. Okay, that looks like everybody. All right, almost everybody.

If I may just jump in here very quickly: I think Brian suggested removing the Google group entirely, even. And I think we created it specifically for notifications around our meetings, and maybe even announcements of new releases. I do think for that we should keep using it, but we should make it clear that user-related issues, and just anything usage-related, shouldn't be posted there. That it's really only for the working group announcements. We can update the description of the group to reflect that, and then keep nudging people to the right places.

I do think, as the person behind the screen on the Google group thing, we need that for administrative stuff. This would not be to push anyone out who might be interested in participating in working group stuff.
We'll probably want to make that clear: like, hey, this is just for working group stuff, but if you are interested in working group activities, please join us. We don't want it to seem like it's a walled space only for a select few special people or anything.

Absolutely. We could even make it like the Fedora devel list, where if you're new to the working group or you want to join the working group, you can send your own self-introduction to that list. Just say who you are, what you do, why you're interested. But just not have it as a problem-solving channel, because that just gets very spammy, and we should really focus that onto GitHub. And for ad hoc questions you can still use Slack as well, but on GitHub we'll be able to track it and link to it, whereas it gets very unwieldy very quickly on those Google groups.

Yeah. So I think if we had sort of a boilerplate that everybody could cut and paste, or their own variation of: "oh, this is a great question, please go and post it over here in Discussions; feel free to just cut and paste your email and place it there, but do check and see if someone else has a similar thread already going, so we're not repeating ourselves." So that's the only caveat, but I think that's perfect. Actually, for me, it's a sign of maturity of the working group that we've hit this tipping point, so I'm pleased that we got here. This also simplifies the workflow for Vadim and other folks working on issues, because now it's right in the Discussions, and then Vadim can say, "oh, this is really something that needs to have an issue opened; just go right here to this other tab and open up an issue." So it simplifies that process as well.

There's another angle on this I wanted to mention just quickly, now that we're bringing it up. I think in the future we're probably going to have to think about more of these subdivisions of the community that's growing here. And I think about this almost in the same reference as Fedora: when we get to the point where we have people who want to start contributing code and want to get into the developer space around OKD, we're going to have to set up separate spaces for those discussions to happen. Because, as Vadim kind of hinted at earlier, syncing up with the engineering effort that's happening inside of Red Hat is no small task, and if community members want to start contributing to the separate pieces, then this working group meeting is probably not the place to have those discussions. We're going to need developer-oriented spaces where we'll be able to connect engineers from within Red Hat who know the specific components with community members who would like to contribute to those components. So that's just another angle we're not talking about. Maybe we're not at that tipping point yet, but I'm hopeful.

I don't think we're there yet. It'll be a little while.

You know, one of the bigger challenges right now is that when it comes to contributing specifically to the OpenShift platform, there is no straightforward way for someone to figure out if they even can yet. And that's a starting problem. Once that is more directly resolved, it becomes easier to say, "hey, you want to help make OKD better from a code perspective?" Then we can start directing people towards that and do that sort of thing.
Like, I know that right now, whenever I've tried to build even a small portion of OKD, it's just a rabbit hole of crazy, because figuring out how to actually do it and getting it to work is not trivial at all.

You're absolutely right. That's why I wanted to bring it up now.

You know, in various parts, a couple of different times... I mean, the OKD-specific stuff is pretty easy. But if you try to contribute to the non-OKD, common stuff, I'll tell you, outsiders don't seem to be very welcome, unless you're a Red Hatter.

Yeah, outsiders are not very welcome.

Yeah, I think I figured that out back in the OpenShift 3 days, which is why I haven't done too much of it this time around.

I mean, I think there's pushback associated with this, though, right? Like, I know for a fact that my team would be happy to accept a patch from anyone in the community. But the problem is, aside from our CI testing, it's unclear that anyone approaching one of our projects would even understand how to set up the test environment to build it properly. And, you know, Neal, you're absolutely right. We're not at this point yet. I wanted to bring it up, though, because I want to make sure we're ready for it, because part of my selfish desire here is that I would like to help build that bridge between Red Hat engineering and community engineering.

I mean, for sure. Like, I have certain things that I'm interested in that I'd like to see built into OKD and the OpenShift platform, for admittedly somewhat selfish reasons, but also because I want to be able to do that kind of stuff. But right now, I don't even know how I would do it.

So this came up at the docs meeting. The problem is that this is a commercial product, and we're getting the open source part of it, but we can't necessarily add to the open source part of it without also affecting the commercial product. And that's where I think part of the pushback is: you put something in that's going to affect the commercial product, and it's going to be like, "well, wait, that's an outside person; I don't know whether we can really do that or not."

Well, that would be... so, if anyone on Red Hat's teams had that particular attitude, I think that would be a problem, because that's pretty much all of Red Hat's stuff. And that's definitely not what came to my mind. What came to my mind is that just the mechanics of actually making a change to OpenShift are unbelievably undocumented, to a level that someone who even has some Kubernetes expertise, like Kubernetes contribution expertise, is probably going to have a hard time working through OpenShift.

And there's another level to this, too: Kubernetes. From a Red Hat engineering perspective, we contribute a lot into the upstream of Kubernetes itself. So if some of the things you're looking for are Kubernetes things, contributing into the Kubernetes work streams is probably where you should be making those contributions. It depends on what you're trying to accomplish, too. So there are multiple levels. But as you pointed out, Neal, and probably Mike too, everybody here is looking for patches and pieces, but it's the build process, I think, and the testing of it that get incredibly complicated. And really, the reality is, OKD is a sibling stream, rather than an upstream, to OpenShift.
So that's, you know, that's the caveat. But creating an issue or making a patch to OpenShift, or any of the pieces that are under the hood of what is now called OpenShift, whether it's something that came through with the Knative stuff: a lot of it happens in the very, very upstream of this and then gets pulled into OpenShift. So, not that I want you all to disappear and go into the Kubernetes working groups, or the Knative or Envoy or Istio folks, but that's where a lot of it happens, especially if there are core changes to something that you're looking for, or support.

So one thing that came out of the last docs working group meeting was that it would be helpful if Vadim did a walkthrough of the build process. So I'll work with Vadim on setting that up as something to be scheduled during one of our full working group meetings, to just walk us through what documentation there is and what process there is for building. And that, I think, will be something where, if we set aside a whole meeting for it, we can then have that recording, and the documents that come out of it or are linked from it can be used to get people started. So, Christian, go ahead.

Yeah, just reading the chat here. Vadim said, "make a PR and CI will do the rest." That is unfortunately the way it is. It's the only way to really test your changes, because, and I don't want to sugarcoat this, setting up your own Prow deployment, which is the build system we use, would be an incredible lot of work. And that, I think, is the real issue here. Some of the builder base images we use aren't really distributable, because they are RHEL or have some RHEL parts in them. So we'd probably have to provide a pure CentOS- or Fedora-based alternative builder base image, so we can actually do those builds locally, even for just single components. Because once you have a single component, you can take any release payload, replace that specific image with the custom build you made, and then deploy that to test it (a sketch of that flow follows below). But even for single components, it's not always possible for non-Red Hat folks to do those builds, because some of those builder base images just aren't available to them. And I do think that is a real barrier for our community developers. And I think a thing we definitely have to do at some point is provide a freely distributable CentOS- or Fedora-based builder base, so we can actually build those components locally, everywhere, so anybody could do it, not just Red Hat folks who have access to the internal Red Hat infrastructure.

Okay, I want to wrap this thread up. Thank you, Christian. I do want to wrap this up, because we do have a fair amount of items in the last 16 minutes to get through. Vadim, okay, the operator catalogs, still work in progress. Do you have anything to say about that? And actually, there was a question: Brian, what operator were we talking about at the docs meeting?

Pipelines.

Pipelines, yeah, the status of Pipelines.

All of that is in the works. We haven't seen anything really happening, but we draw attention to this every couple of weeks, I think. It's almost daily at this point. It's also related to the status of getting volunteers and so on.
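A minimal sketch of the single-component swap Christian describes; the component name, registry org, and tags are placeholders, and the exact flow can differ per component:

```bash
# Build a new release payload from an existing one, swapping in a custom
# build of a single component (machine-config-operator is just an example):
oc adm release new \
  --from-release=quay.io/openshift/okd:<base-version> \
  machine-config-operator=quay.io/<your-org>/machine-config-operator:<test-tag> \
  --to-image=quay.io/<your-org>/okd-release:<test-tag>

# Verify what ended up in the custom payload:
oc adm release info quay.io/<your-org>/okd-release:<test-tag>

# Then install a test cluster from the custom payload:
OPENSHIFT_INSTALL_RELEASE_IMAGE_OVERRIDE=quay.io/<your-org>/okd-release:<test-tag> \
  openshift-install create cluster --dir=<install-dir>
```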
Hopefully we'll have it soon. Some particular teams are ready themselves; we just need the catalog and we can start filling it in. But the ball is on the OLM dev side, and we're doing our best to push them. No estimates so far.

And another one that came up (this came out of the docs group; maybe I should have put this a little bit earlier): the name and scope of install.md, and clarification of the documentation. This sort of came out of that same discussion and what Brian brought forward in his observations and wrote out for us. Vadim, is there a better way that we could do the install.md and separate stuff out that's install versus building versus testing versus etc.? It seems like install.md is kind of a mix of everything right now, isn't it?

Yeah. I think I would rather pass this to the docs team, or rather the docs working group, because we cram a lot of things into one single readme, and that's probably the worst way imaginable. I can't really think of how to structure this properly. There are tons of information we want to cover there, and we'd at least want some links we could refer people straight to. So I don't know how to properly structure this. I'm hoping the docs team would have better insight.

Okay. So the reason I brought it here was because we didn't want to go and start suggesting and working on things, and suggesting changes for things, if you were particularly wedded to it for a technical reason. It sounds like you're not.

Yeah, I don't have any great thoughts on this, really. I would rather have some options to choose from, like different layers, and I'm thinking the more different links we have the better, so that we could link folks to particular parts. But they could be parts of one single document; it could be headlines in one single markdown, or it could be different markdowns. So I don't really have any preference on how to structure this. The way we have it on docs.okd.io is probably a good choice, so we should probably kind of mirror that structure, in a sense. But I would rather have some options.

We'll talk about it. Yeah, we'll talk about it at the docs meeting, for sure. We just didn't want to make your life harder, or anyone else's from Red Hat who ultimately gets the brunt of a lot of questions; it impacts you a lot.

Okay, so next thing. This one Mike wanted to bring in. Mike, take it away: migration from in-tree to out-of-tree cloud controller managers.

Yeah, so this is a big change that's coming in the Kubernetes community, and I might be skating way ahead of the puck here, considering that we're kind of working on the 4.8 release. But in 4.9 OCP we're going to start releasing a tech preview (and I guess for Azure Stack Hub it will be GA) of these out-of-tree cloud controller managers. Now, this is a big migration that's happening in upstream Kubernetes. They think they'll finish the migration around 1.25. These are the controllers that talk between the kubelet and the cloud and do things like set up services and routes and handle node lifecycle, terminations, and whatnot. Right now all this code is handled in-tree, in the kubelet, and it's all being moved out-of-tree to separate repositories, which means that there will be separate deployments for the controllers, and these will be done per cloud.
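A minimal sketch of how the out-of-tree pattern shows up on a cluster, assuming the namespace name used in the OpenShift enhancement (it may differ by release): the external cloud controller manager runs as its own deployment, the kubelet runs with --cloud-provider=external, and new nodes stay tainted until the CCM initializes them.

```bash
# The external cloud controller manager runs as ordinary workloads:
oc get pods -n openshift-cloud-controller-manager

# With --cloud-provider=external on the kubelet, new nodes carry the
# "uninitialized" taint until the CCM initializes them and removes it:
oc get nodes -o json | jq -r '
  .items[]
  | select(.spec.taints[]?.key == "node.cloudprovider.kubernetes.io/uninitialized")
  | .metadata.name'
```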
As part of my effort to bring what's happening here back to engineering inside Red Hat, I share a lot of details of these meetings with my team, and my team asked me, "hey, you should bring this up in the OKD group," because this will be something that will be hitting OKD in 4.9. And I know how much everyone here loves to tinker and play around, and this is one of those areas where we'll probably need lots of soak time in terms of testing these things out. Now, to begin with, it's just going to be AWS, and I think OpenStack and Azure Stack Hub, that will be released in 4.9. But then in 4.10 we'll see IBM Cloud, potentially Alibaba, and others, and there will be more and more of these coming out, vSphere as well. So as these start to get rolled out, this is going to be a change in the way that OpenShift or OKD is deployed, and probably by 4.11 or 4.12 we're going to start looking at this as the standard way. The in-tree stuff will go away and be deprecated at some point, but that deprecation probably won't happen until like 4.13 or 4.14. So anyway, I wanted to bring this up. Yeah, go ahead, sorry.

What is this again? I think I missed a bit of this. What's changing?

So, if you've been following upstream Kube, there's been this talk of in-tree to out-of-tree cloud controller managers.

Oh yeah, yeah, that.

Yeah, so that's what's happening here. The in-tree cloud controller managers will go away. We're going to start releasing dev preview and tech preview in 4.9, and then that'll kind of roll forward. So I just wanted to bring it up here. I offered to do a demo about some of this stuff at some point, so hopefully we can show it off in OKD. Ideally it won't make a big difference, but since people here are working at the cloud layer, I imagine you're going to run into problems with it. So yeah, that's it.

That's very helpful.

Oh, and just by way of mentioning: I put two links in the document. One is the out-of-tree migration enhancement, which describes the real nitty-gritty technical details of how we're going to handle this in OpenShift, and the other one is a link to a new operator that we've created, which will manage the deployment of these cloud controller managers.

Pretty cool. Well, thanks for helping us get ahead of the game, as it were, and we'll check in with you periodically on where things are with that, like maybe quarterly or something like that.

Yeah, no, that'd be awesome. I think, you know, part of the thing here too (and this is kind of speaking to the awesome conversation happening in the comments here, and to something that John's getting at) is that, at least from my team's perspective, the cloud infrastructure team, we see the value that this community brings to the work that we're all doing. And what we would like to do is figure out how we as engineers inside Red Hat can make this more successful. How can we get the community more involved in what we're doing? How can we get these components that we're working on closer to upstream, closer to what's actually happening at the head of development? So really, I would like to help solve exactly this kind of disturbance that John is talking about. We don't want it to be seen as a hostile community for changes to come back from outside in. We would like to accept those, especially for the work that our team does, where we might have bugs at the provider layer,
whether it's these cloud controller managers or the machine API controllers. Our team is working on both of these things, and there is no way that we'll be able to handle the number of clouds that are coming in and that we're planning to bring in. And so what we want to do is help build out the community side of this. So, yeah, I would love to get to the bottom of what John's talking about, and if there are projects that are giving off hostile vibes when you try to bring a PR back, we should mention those. We should figure out what's going on.

Yeah, exactly. Let us know if you're getting pushback, right?

I know for a fact, on what John's saying, that if John showed up to the machine-api-operator repo, or one of our providers, and opened a PR, it would get attention. We would not just be like, "no, you're from outside." Now, granted, if you're trying to put a feature in, that might take longer to get merged. But anyway, soapbox disabled.

So I see, with this in-tree to out-of-tree thing, I noticed that the feature in the openshift/enhancements repo specifically says that CSI migration is not included. Is that happening somewhere else, in something else, or whatever?

Yeah. So, the in-tree volume API, the in-tree volume drivers, have been deprecated for, what, the last three Kubernetes releases now. The CSI stuff is separate from these cloud controller managers. It's being handled separately, but in parallel. So that is also happening. I think some of it has happened, but it will continue to happen.

Okay, excellent. Okay, we have five minutes left, and I want to make sure Diane has time to talk about... what do we have next on our list? It's KubeCon.

Well, and I also wanted to put in: John, the more we can teach stuff... If you want to work with me and do a briefing or a video on how to teach that, or a workshop on teaching that, I would be totally on board with that: giving you the space, creating a Hopin event or whatever it is we need to do, recording it, and breaking it down into the steps. Because the more people we can get doing it, the better, and this is the constant ask that I have from product management and engineering: what can we use OKD to do? And most of it is testing and deployment. Testing and deployment is what I keep hearing. But, you know, ARM is coming soon, so we'll make Christian talk about that next week, or the week after, when he comes back from PTO and has it all done and working.

So that brings me to KubeCon. I will put an email out on the sort-of-administrative Google group asking as well, but if anyone is planning on being at KubeCon North America in person, I would love to know. Because even at Red Hat, it's severely limited who can travel, so having people who are OKD-savvy who are going to be there, that would be great to know, and I can use you, either in the booth, maybe with an exchange of more than t-shirts or something this time. But it's going to be very limited attendance, I think, at KubeCon North America for OpenShift expertise. So I am definitely interested. Yeah, Amy, I think, didn't you get a talk accepted? And I'm pretty sure having a talk accepted gets you up a notch in permission to travel if you're a Red Hatter.

I'm behind a border, up in Canada, and I don't even have permission to travel to the US yet. And my talk will be on O3DE, the new gaming open source project, so I don't know if that'll
push me over or not.

Don't worry, I'm pushing for you, Amy, and so is Chris Morgan, your boss. We have a come-to-Jesus meeting tomorrow about who's going to get to go. But I'm looking more for external folks from the OKD working group. Or Fedora: Timothy, in your community, if you hear of someone who's going... We just want to make sure we have experts in the booth, as well as in some of the upstream working group meetings too. We may do, we will do, an OKD working group office hours again, virtually, so look for that. But right now it's severely limited who's going to the KubeCon in Los Angeles. So if you are, and you're watching this recording, reach out to me and let me know, because I will use you and give you swag.

Excellent. All right, we have like a minute and a half left. Any last-minute thoughts or comments? We're actually just right on time, Diane.

Well, one more: I put in the link. Upstream Without a Paddle, upstreamwithoutapaddle.com, launched with Charro Gruver, who's a working group member, and some great tutorials are on there: as he says, all the home-lab and CRC stuff. And in parallel with this meeting, I've been chatting with him while he's been supposedly working on something else and not available to attend. He will attempt to do a rebuild of the CRC with the current release, to address that issue that we talked about earlier. So he's out there in the ether; he just can't show his face. But if you get a chance, take a look at his new blog.

And it might be helpful at some point for us to go through CRC as a thing, so that other folks are familiar with it.

Yeah, and he said he would write up the process to do the build and create some documentation. It's in his own personal repo right now, but he'll make a .md file in the actual OKD repos, so that someone else can build it if he gets hit by a bus. Or wins the lottery. Let's go with wins the lottery, much better. Charro, if you're watching this: you're winning the lottery soon.

There we go. All right, folks, thank you so much.

Sooner or later we'll actually automate those builds, hopefully sooner rather than later, and have some CI and build automation for CRC as well. We've definitely brought that up internally now.

Yeah, we'll let you know. All right, thanks everybody. Have a good one. Stay safe. Bye-bye.