Welcome to the OKD Working Group meeting for May 10th, 2022. There's a link to the meeting notes in the chat — I'll put it again for any folks that have just joined. Take a quick look at it and let me know if there's anything you feel you want to add or change, and don't forget to put your name in the attendees list. That helps us know who's here and who we might need to reach out to if they weren't here for something. Let's go ahead and start with the first agenda item, which is always our updates in terms of releases. I believe we have Christian here today, so Christian, take it away. Yeah, sure. I don't really have a big update, but I saw that Vadim has another release out. I haven't been able to check any of the comments that might be there for this release, but I'm happy to gather feedback here now, and I will check on the forums and on the discussions as well. The biggest thing for that release is probably the FCD fix that is supposedly in there, which is supposed to fix the possible data issues. That looks to be the biggest thing, and then the installer fixes for VMware — those are probably the two biggest things. Christian, do we have a sense — and I don't want to pin anyone to anything — but will it be you, or Vadim, or a combination of you and Vadim cutting releases in the near future? Do we know who's going to be cutting the releases? Is there a way for us to know? I think there is currently no way for you to know. Essentially, what we're trying to do is — we already know which team internally will pick up responsibility for doing that, and the workload will just be shared better. We don't want to have it all on Vadim's shoulders; we want an entire team that is responsible, and they will take turns doing the releases, which is also a great onboarding experience for new engineers that come into the OpenShift organization. So we will use that as something they can gather experience with, and as a side benefit there will be less work on the core maintainers for the actual releases. We're currently in the process of working out the details there, so I can't name anybody — it would be too early for that — but there is going to be an entire team that will take on this responsibility. Excellent, fantastic. Does anyone else — go ahead. I couldn't have said it any better. We've got a bunch of people lined up, but we haven't named the players yet. It's kind of like the NFL draft. Excellent. Any other questions or comments in terms of OKD releases, anything for Christian? Maybe just a quick heads-up, and I mentioned that earlier: we will introduce some changes to the way we build OKD. One of the things I'm going to do, or trying to do, as soon as possible is moving the OKD OS builds back into our Prow system from the external Cirrus CI that we have. We want to move that back into Prow so we have more ease of maintenance, especially now that we want to give the responsibility for cutting releases to another team, and it will also enable us to do more things internally than just that one build we currently do. Just as a quick heads-up — there's no deadline set for this yet, but it'll probably be formalized and written up as Jira cards sometime soon. There's more as well, but I don't want to mention too much of the work we're going to do.
So a lot of positive unknowns on the horizon, it sounds like. I would say so, yes. All right, anything else from anybody in terms of the releases? Right, moving on now to the Fedora CoreOS updates. Yeah — okay, can you hear me correctly? Yes, I think we can hear you, thank you. All right, so for today, the main thing for Fedora CoreOS is that we are moving the testing release to Fedora 36 today, which has been released today too. So we're moving forward to Fedora 36, and this will be stable in two weeks. I don't remember exactly which version is shipped in OKD right now, but I think it's 34 something. So we're going to stop using 35 pretty soon, in two weeks basically. And apart from that, I've also put a link into the agenda — we've started something, maybe we have mentioned it before, but we're sending out monthly updates for Fedora CoreOS, some sort of summary that you can find on the Fedora Discussion forum. That helps you get a bit of an overview of what's happening in Fedora CoreOS across the month and the changes along the line. So do check them out; we try to publish them regularly every month. And that's it for me. Any comments or questions, folks, in terms of CoreOS stuff? Yeah, go ahead. Just — Timothy, could you paste a link to that latest monthly summary? Awesome, thank you very much. Any other comments or questions on the Fedora CoreOS aspects? Well, moving right along — let's go to the docs updates with Brian. Okay, so we had a meeting last week, and a few things came out of it. We've published the style updates that Brandon did, so you should see, particularly in the light rendering, that it's easier to read with better contrast — not so much in dark mode, but in light mode there are a number of changes. We've got the community repo archived, and we're beginning to try and tidy up the repos ready for moving to the new org, and coming up with the process of actually moving into the okd-project org, out of the OpenShift one. What's new — okay, so Twitter: slight snag on the Twitter account, we needed to change the email address, and that is in progress. And we finally got access to the survey, so if you're interested, go to the HackMD meeting notes and you'll get a link to the survey, if anybody wants to look at it or update it before we finally send it out. We're planning to send the community survey out fairly soon. And then I did move one issue into a discussion, and that is the good old topic of communication channels. There was a request to actually publicize the Matrix channel, but no one seems to be using it. So it's really a discussion as to whether that is something we want to actively promote, and if we do, are people actually going to use it? Because if we invite people to use a channel, then some of the community does need to be there and respond to questions and comments. Obviously, we rationalized the Slack channels down to the one user channel, and we've started using the discussion forum in GitHub as the primary mechanism for getting questions answered. Do we want to add yet another channel? Folks, comments? I'd be open to using the Matrix channel, but yeah, I haven't seen a lot of engagement. I'm trying to check whether I'm already on the channel — Matrix is a bit slow for me today. But in general I do like Matrix and I think it's a good alternative to Slack.
But we should probably agree on one avenue and not have both simultaneously, or at least agree on one main communication channel. Someone chimed in on the discussion that he thought it would be helpful to have multiple places, particularly if they could be bridged. I think Brian makes a good point that if we're going to have places, they have to be staffed, so to speak, in the sense that there have to be people there in the community actively engaging and helping and whatnot, and I think people are going to be asking questions in there. So from my point of view, honestly, I'm busy enough watching Slack and the OKD discussions and issues; I'm not sure I necessarily want to look at yet another place where content could be coming in. In my mind, if we're going to move, then let's move and not use Slack at all. But like I said in the chat, the advantage we get with Slack currently is that we also get the OCP stuff — if somebody has questions about OCP that kind of relate to OKD, we see that also, and a lot of the OCP folks are on there too. Just my thoughts. I mean, I'm not against it, but I'm not sure if I would be sitting on it, you know? I guess one of the challenges we have is that the more channels we have, the more we encourage cross-posting, like we used to have where everybody just posts to every possible channel to get a question answered. We got quite a lot of that in the past. Yeah, I'm trying to let everybody else weigh in on this. I mean, I have an opinion, but I'm along the lines of John Fortin — I'm watching enough channels. And even every time we ask, I haven't been able to log into Matrix — not because there's anything wrong with it, it's just that I haven't found the time to go and retest it, because I tested it ages ago, I got in, and I'm sure I've forgotten it since. So I guess I'm for opening yet another one, but... Yeah, we're missing a few people who are very pro-Matrix on this call right now. Yeah, I think Neil was the driving force behind it, so I'm sure he's going to come into the conversation at some point. And yeah, I agree — if we're opening Matrix, then the question is, do we want to keep Slack going? Or is it possible with Matrix to actually consolidate? Chris or Vadim seemed to say that you could use one channel to consolidate them all. Right, that is Matrix — you can bridge it with all kinds of other platforms. I'm not sure how well the Slack integration or bridging works, but you could certainly cook up a bot that would cross-post everything that goes into Slack onto Matrix as well. I do think — we don't have to use it as our main channel, but we could, because it's already there and a few folks are in there, and especially the Fedora community is very active on Matrix, because it's essentially the IRC replacement there, and Matrix also bridges through to IRC. I would suggest we just keep it; maybe in the room description we add a link to the Slack channel, and maybe we can also have something like team tags on both Slack and Matrix where we could point people towards "tag this team", and then we'll have individual contributors getting notified and hopefully answering whatever the inquiry is. Well, I just don't want to lose the Fedora community side if we say we're closing or shuttering the Matrix channel again.
I fear we might be losing out on good contributions from the Fedora side. I think it's been there a month to six weeks, and I don't think anyone's actually found it or used it yet. Yeah — I mean, if somebody is a Matrix guru and wants to work out whether it's possible to do that cross-posting integration, which means we can still just keep watching one channel, I think that would be the best solution. Fedora community folks who like using Matrix can come in the Matrix door, people who like using Slack come in the Slack door, and everybody sees everything else. Then we only have to track one — I think that's the best of both worlds. Yeah, Brian? I think John had an excellent point, and I totally agree that less is better, but what you're saying is that basically Red Hat itself is half using Slack and half using Matrix. So if you're only using one, you're going to lose half, and both Fedora and OCP are big feeders into OKD, so I don't know how you could throw away one of those communities. Maybe we will just have to live with both and try to make it work. And a lot of the OpenShift developers aren't on the upstream Kubernetes Slack, because we have a separate, internal instance. And while the Matrix channel really is open to everybody, of course — and so is the Slack channel that we have — there are many developers who aren't on the Kubernetes Slack who might then be on Matrix instead. So does it sound like, if we wanted to rope in more people from sort of a participatory side, Matrix might be the place? Because there are more sort of active technical developers and admins and DevOps folks and whatnot there. Does that sound about right? It's also that Matrix is the preferred way for the Fedora community because it's open source, obviously, as opposed to Slack. And it's very featureful, I would say. We'll probably get a lot more people flocking to Matrix in the future as it grows, and I wouldn't necessarily expect that from our Slack channel alone. But I definitely wouldn't close either of those two channels — I think it's good to be present on both platforms. I'm happy to hang around in Matrix; sometimes I forget to log back in after I close all my tabs, but there's a Flatpak you can install, and you can also save the IRC credentials on the bridge so it logs you back in automatically. If you're interested in that, I have a write-up for it. Well, what do folks think? I'll look at it. It sounds like everybody wants fewer channels, but we think it's important to maintain a Matrix presence to capture the Fedora community. So I am looking for a volunteer that can do that integration between Slack and Matrix. I think we volunteer Neil — I'll try and reach him next week while we're at KubeCon and we're in the right time zones, and see if we can get it working. So I'll volunteer, and I'll reach out to Neil because he seems to know the most about it. And then I'm buying dinner for Christian at some point so we can all test it together over a glass of sangria or something. Right — plying people with alcohol for testing sounds like a good plan. Alcohol and paella. All right, Brian, have you got anything else in terms of documentation stuff? I think that's it. We did open the discussion — there is a discussion item on the transition to the repos.
That's where we're going to be discussing the plan to transition to the new repos, so expect conversations starting there shortly. Brian and I have both been busy, respectively, but I think we'll start chiming in on that and hammer out a plan pretty quickly to move to the new repo. All right, moving on. Next up is — I'm a bit discombobulated today. We talked about the survey, talked about the documentation. Rook-Ceph status: is that taken care of now, John, do you know? I know that it was put into the upstream for the kernel, but I don't know when that's going to hit anything that we're actually using, and I don't know how to find that out — that would probably have to be a Fedora CoreOS type of thing. John? Yeah, I was looking at that whole chain of things, and it looks like it's actually a Ceph project patch in the kernel. Looking at the patch that was put in, it was a change for Ceph, but somebody on the kernel side is who applied it, so it'll come in on a kernel change. Right. Although, as I added, the Bugzilla report that contains the comment with the patch is still open, so I would assume that will be closed when it makes it in. What wasn't clear to me is whether or not that patch actually does us any short-term good. Because if you're on 4.9, the only way you can get to 4.10 at the moment is by going to the earliest 4.10, which is the only upgrade path, and presumably that wouldn't have the patch in it — which would mean that your Ceph would be turfed, and then you'd have to recreate it, sort of patch it by hand and recreate all of the PVs, which doesn't sound like a fun experience, really. Yeah, unfortunately, I don't think we're going to see that go into a 4.9, even a nightly release. I mean, I could be wrong, but I don't see us doing that. Christian, what do you think? It might land in a nightly, but I'm definitely not going to work on backporting that, because we focus on 4.10 now, unfortunately. It's not out of the question that you could upgrade directly, but obviously we haven't tested it. So maybe we can just test that before the next release — test the direct upgrade strategy where we just skip a release. There isn't an obvious reason it shouldn't work, but obviously there might be something. I'll just be curious to know whether it's going to get into 4.10. I don't know how long it takes for a kernel patch to make it from upstream to Red Hat, to Fedora, and then forward to our releases. So if it lands in Fedora 36, we should be getting it in. Yeah, I think — well, we talked about this — we do major upgrades of Fedora within one minor version. I think, as we know, in the past that does create issues for us, but sometimes it doesn't. So we will definitely test it; if it upgrades fine, great. If it doesn't, we'll have to see whether that kernel gets backported to Fedora 35, which we're currently on — that's a possibility as well. Fedora 35 is going to be maintained for another six months — not Fedora CoreOS, but the Fedora packages themselves, the RPMs, the kernel, everything. So if we can't make it work with 36, we can definitely open a Bugzilla so it gets backported to the Fedora 35 kernel. It's sharing — and you have it already... I don't know how I did that.
But anyway, there's a link in the notes to the kernel patch, but I'm not sure who can track that to see where it is in the process. I guess that's kind of what we need to know. Yeah, we'll have to ask the Fedora kernel maintainers. Do you want to take that, or Timothy, do you want to take that? If there's a Bugzilla already, there should be an assigned person, and you can add a needinfo or something. Jeff Layton — I think it's Jeff Layton; he should be the guy here. Okay. What do you want me to do? I'm not sure. So there's a patch that is to fix a Ceph bug — if you look in the meeting notes, there's a link to it — and we're just curious when that might hit Fedora. Is it going to hit Fedora 36? Is it going to be backported to 35? Because it's a significant bug for multiple people; Bruce can't update to 4.10 because of it. That's just one of those things we've been dealing with for the last month or so. So this is upstream of Fedora, right, and the kernel team — I'm trying to find the bug, and yeah, essentially we should ask them to backport it to 35. The question is whether the patch has even hit Fedora yet, because it's up at kernel.org and has to reach Fedora; we need it upstream first, probably, and then we can backport. I'm currently looking it up. Yeah, I guess we can follow that on the Bugzilla; we'll make sure to ask there for a backport if they don't already plan on doing it. Sounds like a plan. Yeah. And I guess it would be useful to know what the outcome is, because if there's basically no way to upgrade to a working version of 4.10 from 4.9, then I might as well just do the update: throw away all my PVs — after of course backing them up — and then recreate everything from scratch on the Ceph side. It's definitely worth trying the upgrade; there isn't really a reason it shouldn't work, it's just that we don't recommend it because we don't test it — we don't have the capacity to test more than one upgrade path at the moment. But it's certainly theoretically possible that you could upgrade from 4.9 directly to the 4.10 that includes that fix once it comes out. Right, by forcing it, you mean? Right, yeah, exactly — you'd have to use the force flag, because it's not in our upgrade graph, so it will complain. And we can test that too — I can make a note for the release that includes this fix that we test upgrades from 4.9 directly; it might not succeed, and then we might have to think of a contingency. And I've actually got a test platform that is updated to the latest 4.10 version, and that doesn't have Ceph on it, and it works like a charm. So that's good. But yeah, I guess I can wait a month and see what happens — what's a month of waiting. Yeah, unfortunately. Oh, go ahead. Yeah, unfortunately — obviously you could override the kernel manually on each node, but that is a lot of manual work; I wouldn't really recommend anybody doing that. So yeah, if you can wait, I think that might be the best option. The other issue, by the way, is that right now we have a pinned kernel because of the issue with the kernel crapping out, so we have to make sure that gets fixed before we can even look at updating to a kernel with the fix. There's a variety of things that have to happen before that becomes live. We have to have a working kernel.
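For reference, the forced out-of-graph upgrade discussed above would look roughly like the following. This is a minimal sketch, not a recommendation: the release tag is a placeholder (the real one would come from the published OKD releases), and skipping the graph means the cluster version operator does no path validation for you.

```bash
# Hedged sketch of a direct 4.9 -> 4.10 jump outside the update graph.
# The image tag below is a placeholder, not a real published release.
oc adm upgrade \
  --to-image=quay.io/openshift/okd:4.10.0-0.okd-YYYY-MM-DD-hhmmss \
  --allow-explicit-upgrade \
  --force
# --allow-explicit-upgrade: accept an image that is not a recommended update
# --force: proceed even though the path is not in the upgrade graph
#          (otherwise it will complain, as mentioned above)
```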
Absolutely. Anything else on this topic? Diana, have you reached out to the Operate First folks, or do we even need to do that anymore, given changes happening backstage? I have reached out — I've had a couple of conversations with them. They are coming to KubeCon next week, so I was going to coerce them over paella and wine, sit down with Christian and Vadim, make them talk to each other, and see if it's even viable. It's the Boston University Mass Open Cloud that has some hosting resources, for people who aren't aware, and there are some resources inside of Red Hat on the Operate First side. I was hoping to get them to do the CodeReady Containers piece, and we'll see if we can force them to that stage, and then maybe even a community build process, if we get lucky, for Fedora CoreOS. Well, let me ask first, with the elephant in the room: Christian, if the changes that you're talking about in the background happen, is there still going to be a need for automated community build testing? Or do you think that the expanded testing in this new situation will cover a lot more territory than the currently automated testing does? I think both. We shouldn't see the internal reorganization as solving what we want to solve with Operate First. I think we still want to have those additional resources and community builds available, because what we're going to change internally — there are going to be changes, but not all of them are really end-user facing. The first thing, just pulling the build back into Prow from Cirrus, isn't really going to change anything on the outside; it's just that we internally have a much more streamlined process that isn't shelling out to a third-party platform we don't control ourselves. So we still want to enable true community builds and rebuilds from folks outside of Red Hat, which currently they can't do, because nobody outside can access Prow and use Prow. So if we have this community builds project on Operate First, that would still be a huge benefit. I think we're just going to do both. And for example, CRC builds aren't going to be pulled into what I'm doing with Prow now — that still has to be solved. If Operate First has the resources to enable community members to rebuild parts or all of OpenShift themselves, then that's out of scope for what I'm doing with Prow internally. Christian, the Prow internal work — that's going to allow us to do pull requests and things as normal, right, which we can't do today? Exactly. Especially on the OKD machine OS side: currently you can create a pull request, but the CI is only going to run if the branch is from the same repository, and only Red Hat folks can create new branches on that repo. That is going to change, so it's going to be much easier for community members to open a PR and actually have it tested without us first moving that branch into the repo. That is one of the main reasons to do it. Yeah, fantastic — that will be nice. All right. Let's see — I'm still working on gathering info; Charo is not here. Daniel: I have not seen serial output for installs — this came up. Someone give me the context, because actually Timothy's here and can probably answer this. Remind me, or remind the group, what the context was — we had someone that was asking.
Oh, it was the person that was having the issue with the RHCOS images versus the FCOS images. Right — Eric, you're here. Right. And what was your question about the serial console stuff? Yeah, I sort of figured it out, so I wrote it in the notes. During the bootstrap of the bare metal IPI install I was interested in getting serial output from the machines that are getting booted, and the way I did it is: on the bootstrap node, I go into the directory where the Ironic service generates the configuration files and just add whatever I needed. But I don't know if there's a process where you can do that without doing it manually like this. At least I figured out something where I actually get serial output, which was quite handy. Yeah, I'm not sure there is a more streamlined way of doing it. But have you documented what you did? Could you share a link — is it in the agenda? It's in the agenda; there's a note. Yeah — manually editing the configuration. All right, thank you very much. I will follow up on that to see if there's a better way to do it. And I think the other patch John had, for passing down the args, will solve the problem with the Fedora CoreOS — the Red Hat CoreOS — that the nodes are getting installed with. Yeah, I'm kind of curious — I'm not sure if that patch is going to change that, because when I looked at the bare metal installer, it looked like everything in it was referencing FCOS images. So I'm wondering if there's another piece buried deep that has RHCOS images built into it for bare metal, and I'm not sure where to look for that. But I can't reproduce it, so I can't deep-dive into it. The last problem I'm still having with the bare metal installation is my networking setup, but I think I can actually take that up with OpenShift itself, because it doesn't work with the OpenShift installer either. And what were the specifics of your networking setup that you were running into? Yeah, so what I'm trying to do is: I'm not providing two separate NICs, I'm providing one NIC with two VLANs — the native one where the provisioning can happen, and then a different VLAN, like 10 or something, that the bare metal network can continue on — and it's not one NIC, it's a bond of multiple NICs. And the thing I read was that in, like, 4.10 you can actually specify network config with NMState, I think they call it, where you supply the configuration so the nodes actually come up with the full networking and everything. But during the Ironic Python Agent part of it, they lose the network, because it starts probing every single network interface for LLDP and figures out the VLANs and such, and then, yeah, I lose the network completely. So that's where the serial console comes in. When I've done it on VMware and I've rebooted a node — I mean, stopped it in single user mode — you can go in there; there are a couple of console pieces you can delete, and then on reboot it will come up in single user mode. But I'm not sure if that will help in your case, because you want to see it while it's actually booting for real. Yeah. Well, just editing the files where you get the serial output helps a lot, and then you can pass another thing that tells the serial getty service that whenever something makes a connection on the serial port, you're automatically logged in as root. Great — and Timothy left a link in the chat there for serial console config for FCOS. Some helpful info there.
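For anyone following along later, here is a minimal sketch of the kind of serial-getty auto-login tweak Eric describes, assuming the serial device is ttyS0 and that you already have a shell on the machine. (For output during the very first boot you would instead need console= kernel arguments baked in at install or provisioning time.)

```bash
# Hedged sketch: auto-login root on the serial console of a node you can already
# reach, so you can watch output and poke around when networking is gone.
# Device name (ttyS0) and baud rates are assumptions; adjust for your hardware.
mkdir -p /etc/systemd/system/serial-getty@ttyS0.service.d
cat <<'EOF' > /etc/systemd/system/serial-getty@ttyS0.service.d/autologin.conf
[Service]
ExecStart=
ExecStart=-/sbin/agetty --autologin root --keep-baud 115200,38400,9600 %I $TERM
EOF
systemctl daemon-reload
systemctl restart serial-getty@ttyS0.service
```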
Thanks, I'll have a look at this. I think this is also essentially configuring it after the fact; what we really want is to provide that kernel argument up front and have the machine come up with it immediately. So — Ignition does have support for setting kernel arguments, so you might be able to just set it in Ignition. That is not going to be respected or understood by the Machine Config Operator; it has a shim API in the MachineConfig object. There is a Jira card open to move the MCO to the Ignition-native API, but that hasn't happened yet. But if you specify it in the Ignition config that gets downloaded by the nodes at provisioning time, then that might already work. I'm not sure if you then also have to create a MachineConfig object to reflect it — I don't think so, because nothing is going to check whether there's a difference. But if you only add the MachineConfig object, that is essentially a day-two operation, because the MCO doesn't do it through Ignition kernel args; it has a separate process for setting those args later on. So the node won't just come up with the right kernel arguments right away. All right, I'll have a look into that as well, I suppose — let's see if we can get it running first. Excellent. All right. Next up is to find out about bare metal IPI installing RHCOS nodes — we talked about that. And that's about it. So, anything else that folks want to bring to the table at this meeting? I guess I have a question, if we have time. I've been wrestling for an interminable amount of time with an issue that happened in upgrading from, I guess, 4.7 to 4.8, when one of the operators that I had installed on 4.7 wasn't supported on 4.8. In the upgrade, it turned out that some stuff was left over from the old operator, and that turned out to cause an internal error with "oc get", which then prevented pods and packages and God knows what else from being deleted — because when you ran oc to try and find out what the problem was, you got an internal server error. And that was sort of annoying. After a long comedy of errors, I finally had time to track it down, and it turned out to be a relatively trivial fix. But then I noticed, in the path of chasing it down, I uninstalled the Strimzi operator, and even after the Strimzi operator was uninstalled, I had all these Strimzi CRDs hanging around. So then I started to wonder: okay, how much other stuff is there that just gets left behind when we upgrade, that's no longer useful and can still cause problems, as it did in this one case? I left sort of a set of breadcrumbs in a discussion on that, which you can find in the OKD discussions, and I think nobody's commented on the alleged bugs that I put there, so I haven't yet created any bugs. But it seems to me that if "oc get" returns an internal server error, that ought to be a bug somewhere, probably in OCP. What I don't know is, philosophically, how much stuff should be hanging around if you uninstall an operator, or if you get rid of an operator when you upgrade. Because without an operator, you might still have objects that are still functioning, so you can't necessarily eliminate all of the CRDs. But anyway, thinking about it for about five seconds, it seemed like a non-trivial issue — you couldn't just do one thing and have it work in all cases.
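As an aside on the kernel-arguments exchange above: the MCO's "shim" for kernel arguments is the kernelArguments list on a MachineConfig, which it rolls out after provisioning (a day-two change, with a reboot per node). A rough sketch, where the object name, role label, and console device are illustrative assumptions:

```bash
# Hedged sketch of the day-two MachineConfig route discussed above. The MCO
# applies this after provisioning, so it will not help with the very first boot.
cat <<'EOF' | oc apply -f -
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  name: 99-worker-serial-console
  labels:
    machineconfiguration.openshift.io/role: worker
spec:
  kernelArguments:
    - console=ttyS0,115200n8
EOF
```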
But I don't know if people have thought about that in upgrades. I have thought of it, and I've run into the issue as well when removing operators — yeah, there is some cruft left around that will prevent things from updating. I got bit — I don't remember which operator it was, but basically I had to go around manually deleting stuff and then killing pods to sort of let things refresh and whatnot. It'd be interesting to document those. Bruce, can you put links to your breadcrumbs? If you have any discussions or things, put them in the meeting notes, and that way we can bring them to the attention of the larger community. Yes, sure, I will do that. Excellent. I mean, it might be part of just how the operators were built — either they don't clean up by design, or they didn't think about it, because there are some operators that are designed to be removed but also to be reinstalled without losing your configuration. Right, so it's probably really operator-dependent how well they clean up. It would be interesting to look at some operators and get a sense of which operators fall into which category. But yeah, in the case that caused the issue, it was the GitLab Runner operator — and I guess I didn't uninstall it, it disappeared in the upgrade to 4.8. I don't remember the details, but Vadim had some good reason why it was no longer supported, so I didn't think any further on it. So it was the upgrade process that — well, I don't know exactly what it did; it just disappeared, so I don't know if it was even cleanly uninstalled. And the CRD that was left had a bad field in it, which then caused the chain of other errors. So you just fix that one bad field — once you've tracked down which CRD it is that's causing everything else to fail — and magically it all works, like self-repairing, as it should. But let me just see where the heck it can be. So another one that I ran into recently in 4.9 — and it's a shame that this isn't going to be fixed, because it looks like it was fixed upstream — is an issue where basically a lot of pods get created, to the point where you can either run out of pods or run out of networking: the collect-profiles cron job ends up creating thousands of pods, and depending on your configuration, you're either going to run out of IPs or run out of pods first. Right, that happened to me too. Right — and basically every day I would go through (strangely, it would work from the console) and every day delete all the pods that succeeded and the pods that failed, and that would clean it up until the next day. Right. But it turned out that was also basically caused by — I couldn't delete a pod from the console either — and all of that was fixed by removing this rogue CRD. And of course, initially when I deleted it, it got stuck in terminating on deletion and wouldn't delete. So, as I say, it's very humorous — after you know what the cause was. It's humorous after you've banged your head against the wall trying to figure out what it is. I've got this issue on 4.9 with this cron job creating all of these collect-profiles pods, and they did fix it — the Bugzilla shows that it's been fixed — but obviously, since we're not doing 4.9 releases, we're not going to get any of that.
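The daily cleanup being described — pruning the completed and failed collect-profiles pods, plus dealing with leftover or stuck CRDs — might look roughly like the sketch below. This is a hedged illustration, not an official procedure: the namespace is the usual OLM one, the "strimzi" filter is just an example, and clearing finalizers should only be done once you are sure nothing still needs the CRD.

```bash
# Prune the piles of finished collect-profiles pods (succeeded and failed).
oc -n openshift-operator-lifecycle-manager delete pod \
  --field-selector=status.phase==Succeeded
oc -n openshift-operator-lifecycle-manager delete pod \
  --field-selector=status.phase==Failed

# Spot leftover CRDs from an operator that was removed (Strimzi here is just
# an example), and, only if nothing needs them, clear a stuck finalizer so a
# CRD wedged in "Terminating" can actually go away.
oc get crd | grep -i strimzi
oc patch crd/example-leftover-crd --type=merge \
  -p '{"metadata":{"finalizers":[]}}'   # hypothetical CRD name
```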
So I'm going to have to upgrade that particular cluster to 4.10 just to get it to stop, or run a script that deletes those all the time. Well, there's probably the same underlying issue that I ran into. I was following a knowledge base article I found from Red Hat on packages that won't delete, although that didn't fix it because of the internal server error. And sure enough, even though I wasn't asking for Red Hat support, I did get a lecture from the guy that owned the knowledge base article saying that OKD isn't supported, please contact the OKD community. Did you say "I am the OKD community"? Well, no, I wasn't that hubristic. I did say — yeah, no, that's what we tell people. Yes. All right. Well, in the last few minutes, is there anything else that folks want to talk about? Well, I actually have a question about that, because one of the things that was said, you know, six, seven, eight months ago was that bugs we find in OKD will be looked at by whatever team — that this is, I mean, not a supported product per se, but we wouldn't get the runaround of being told this is OKD versus OCP. We can open bugs for OKD and they will be fixed, because they'll probably exist in OCP also. So that seems like a weird response based on what we've gone through. Yeah, that's probably more due to lack of involvement or knowledge on that person's part. I do think in general, especially if it's something that is also an issue or a potential issue in the product, the developers are supposed to look into that. We are trying to promote this effort more internally and raise awareness that we are part of OCP, essentially, and that everybody has to do their part. That's a process, unfortunately. Not the ideal response I would have given, but yeah — if it's a real issue, it'll be looked at eventually; just keep pressing in that case. Don't be too surprised if people don't know. I've had conversations with Red Hat folks — salespeople and other engineers doing sales support — who don't know what OKD is. And they're like, "Well, here, let me tell you about OpenShift and all the great things it does," and I'm like, "Yeah, I'm co-chair of the OKD working group, I get it," and they're like, "The what?" So we need to do some work, and I think the Red Hat folks need to do some work internally to help promote it. Yeah, I agree with that — it's ongoing. Yeah, sorry. So now we've got something like 27,000 people in Red Hat, so there are bound to be folks that don't know about a specific product or community or anything like that. But if you're an OpenShift salesperson, you probably should know about OKD, I think. Or an OpenShift engineer — probably the engineers more than the salespeople, because salespeople can't sell OKD. That's true. So engineers, yes. And I think that is also part of offloading the release engineering work to this other team, because that will be used as onboarding: lots of new engineers get to know the whole ecosystem, because a lot of engineers come into a team with a very specific focus — and obviously that's enough to be effective on those teams — so they don't need to understand or know the entire ecosystem. But it is there, and awareness should also be there. So we are working on it.
It's an ongoing thing every time we onboard new people; it's really just part and parcel of it. We're aware. Salespeople are interesting. Fantastic, very cool. All right, folks, we're just about at time, so thank you so much. It was a great conversation — lots of technical stuff, lots of detailed stuff — and looking forward to more. So if you haven't upgraded, I highly recommend the new release, the new 4.10. Yeah, I upgraded this morning and it went amazingly smoothly, so knock on wood. Excellent. I'll probably be doing a vSphere one of it tomorrow. That's great to hear. All right, folks, let's call it today and we'll see you next time — same bat channel, same bat time. And feel free to do some asynchronous work, because we can always use some asynchronous work on some of these discussion issues and such. And Mohammed, I see you in there — we've got to talk security stuff. So, all right, folks, talk to you soon. Bye. Thanks, everybody.