Welcome to the OKD working group meeting for April 12th. Let's take a quick look at the agenda and let me know if there's anything you want to add, change, or remove. — It's not 2015, though. — What's that? — You said 2015. — Oh, 2022, yes. Sorry, April 12th, 2022. Where did 2015 come from? I don't know. — We're doing the time warp. — Yeah, right. Exactly.

Folks, I may have to jump off in a hurry — I'm having some real-world production issues — but they haven't told me I have to join yet. — Okay. Is there anything that you want us to bump up so that you can talk about it now, in case you do have to jet? — There's a couple of things that I've been working on with Christian and Vadim, trying to get a couple of things fixed. That's pretty much it: the installer issue. I think it's a blocker for the next release, at least on vSphere; Christian and Vadim can indicate whether that's actually the issue or not. Other than that, I think we've gotten a lot of the bugs squashed that we've been looking at. I did send some pull requests to Brian with some build information, so they can take a look and see about building it into a release and stuff like that — which I said I was going to do, so I did. I think that's it for me. So, Christian, do you have anything on that?

Yeah, regarding that. I think it's all the DNS search domain, right? It pops up in various places. So I opened a systemd RFE to make that configurable; the idea was rejected, but there's now another way we could do it: a kernel argument that systemd would read, which would let you define the DNS search domain at provisioning time, essentially passing it in directly. That would help. The other PR you opened is dealing with the same problem, but doing it later: that's the DNS search domain as a NetworkManager dispatcher script, which should also work. So I'm fine with doing it that way — we'll have to see what the installer folks say there — but I'm fine with that approach too. I think we should definitely also do the systemd thing, and I've actually gotten my team to allow me to work on that directly, so I'm going to spend some cycles with the systemd people, which I haven't done too much before. Those changes obviously aren't going to be landing immediately, so they'll take time.

In the meantime, I'm not sure whether the workaround service unit we have works now. I think the latest iteration isn't in the current payload — I'm not sure about that, though — so we will see. I am planning on doing an OKD release for the first time this week; Vadim is currently taking a break from OKD, so that is on me. We did fix the issue that we had regarding upgrades, so hopefully that is all in and we can cut a new release. But I'm not sure about the specific issue; I do hope our workaround works sufficiently now. I think the installer issue for vSphere IPI is a blocker for a new release. So if we have a way to push that with the installer team — it doesn't affect OCP, it only affects OKD — maybe we can use the fact that it's a blocker as a push.
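[Note for the minutes: a minimal sketch of the NetworkManager dispatcher-script approach mentioned above, assuming the script simply appends a search domain to resolv.conf. The file name, path, and domain below are illustrative placeholders, not the contents of the actual PR.]

```sh
#!/bin/bash
# Hypothetical sketch (not the actual PR): a NetworkManager dispatcher
# script that adds a DNS search domain when an interface comes up.
# Dispatcher scripts receive the interface name and action as $1/$2.
# Installed as e.g. /etc/NetworkManager/dispatcher.d/40-dns-search
IFACE="$1"
ACTION="$2"
DOMAIN="example.internal"   # placeholder search domain

if [ "$ACTION" = "up" ] && ! grep -q "^search.*${DOMAIN}" /etc/resolv.conf; then
    # Append the search domain if nothing has set it yet.
    echo "search ${DOMAIN}" >> /etc/resolv.conf
fi
```

[On a cluster, something like this would typically be delivered via a MachineConfig rather than written onto nodes by hand.]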
There's a lot of process involved, so we will obviously push that into master first, and then we'll need to backport it to the 4.10 branch, and that definitely requires a Bugzilla connected to it. We can note on the Bugzilla that this is OKD-specific and it'll pass, but we still need the engineers that own the installer to actually say "this looks good to me." I'll be following through on that, though. So if that really ends up blocking us, I doubt there will be a release this week; if it doesn't, we will hopefully have a release soon. And yeah, I'll definitely follow up with you on that. — Cool. That's all I got. Can one of you put the links to the two issues in the meeting minutes, under the release-updates section? — Yes. John, if you could paste the link to your PR there, and I'll put one in for the systemd RFE — yeah, the one that was rejected.

All right, Christian, what else do you have in terms of release news? — Yeah, not a lot there. In terms of release news, really, we have fixed — I think — the blocker that we had, so a new release should be possible now. I have never done a release, but Vadim thankfully wrote up this release standard operating procedure that I'll be following, and I will tackle that first thing tomorrow. Hopefully by the end of tomorrow there's going to be another release — but, you know, we'll see. — We support that, and we understand that you're helping give Vadim a break, so whatever you can do is appreciated.

And then — this isn't really specific to this next release — there has been some talk internally about how we could make OKD fit in better with the whole OpenShift development model. I don't really want to talk too much about it, because it was internal and there haven't been any decisions made as of today, but there will probably be some changes coming, and hopefully they will make OKD fit in much better with how OpenShift as a whole is being developed. It will make the path from the OKD operating system to RHEL CoreOS — it will make that feedback loop much shorter and just much better, and it will enable folks external to Red Hat to contribute in a meaningful manner, much more than all of you have been able to, because I know it's been hard to contribute or even just to rebuild things. I think that will be easier in the future, but I can't really tell you any details yet — there will be more on this. — Awesome. All right, great. Up next is Fedora CoreOS news.

Yes. Right. So most of the details and the links are in the notes. On the Fedora CoreOS side, we are moving to Fedora 36 well in advance — OKD hasn't moved to 35 yet and won't be moving to 36 all that soon — but still, we don't stand still, and we're looking at the Fedora 36 release, which is now in beta and will be released in a couple of weeks. We had a test week starting this day last week, but if you're interested, it's still possible to take part in the testing phase: testing the Fedora CoreOS "next" stream, which is based on Fedora 36. So do feel free to chime in; the timeline for the rebase is in the third link. Also, we are planning to remove libvarlink-util from Fedora CoreOS. Essentially, if you don't know what that is, then you probably don't need to care.
Unless you're explicitly using it, you probably won't mind us removing it — so, just a heads-up. It was used in the past by Podman, but it's not used anymore.

We also have a small change coming to the version compatibility that we specify for our VMware images. Some VMware platforms themselves are going end-of-life — I think it's this year; I don't remember the exact dates, but some versions are soon to be end-of-life. Those versions were preventing us from increasing the hardware version in our VMware images. So once those platforms are end-of-life, we will update the hardware version, and the default images will then require a newer VMware version. It's not completely blocking: if you're still using an older version, you can still use those images; you just have to make a small manual change to the images. Everything is linked in the documentation. It's not the end of the world — it's just that, by default, it might not work on older versions. Not now, but soon.

And finally, we have VirtualBox images coming soon. They are not fully visible yet in all the interfaces — probably they will be in the next release or the one right after — but you can have a look and give them a try if you want to try VirtualBox images of Fedora CoreOS. And that's it for me for this week. — Are there any questions on the Fedora CoreOS stuff that was just covered, any questions or comments? All right, we have a comment in the chat about the VirtualBox images — that is good news. All right, let's move on now to docs updates with Brian.

Okay, so we had a docs meeting last week. The first item is to do with the community repo. We have a community repo in the OpenShift GitHub org, which is just called "community," and we've moved some of the pieces across. The proposal is that we do not continue the membership list: currently there is a reasonably out-of-date membership list of the working group in that community repo, and at the minute we're not at a stage where we actually have official membership of the working group, so we thought we should drop the membership list as it stands. We do need to actually make sure that we list the officers for each working group, because that is a requirement in the charter. So in each working group section on okd.io, we will list the working group officers. Yeah, I think that was it — so if there are any thoughts on that, or any discussion or disagreement: that's the proposal. — And just for a little more context: Timothy, you should be on one of the lists. — Are we going to have any membership? I don't think I'm on any of the membership lists either. — Yeah, so we'll take care of that.

The other thing — and we'll talk about this later in terms of the officers — is that chairs of the subgroups will be denoted, and we have to vote on that. We're going to take a vote; ultimately it's up to the chairs of the main group, but we'll take a vote from amongst the chairs and get feedback from the greater group at large about who should be chairs of these subgroups. It's pretty straightforward, I think, and we'll talk about that a little bit later.

One of the other things that came out of the docs meeting was that the meeting minutes are going to go onto the website. I ran it by Brian — that's the best place — and it seems like breaking each meeting off of the HackMD and creating individual pages
going into the site is good, because then you can point people directly to a particular meeting instead of one really long page. I'm looking to automate this. I actually started messing around a little with pulling from the HackMD — having delimiters, pulling the content out, and then putting in merge requests — and I'll be polishing that automation a bit. Hopefully it'll be functional within the next week or two, so we'll have an automated way of getting our meeting minutes up onto the website.

Okay, so Brandon has been looking at the styling; I'm waiting on a pull request to actually update that. There is a link in the docs HackMD, posted in chat — if you go to the HackMD for the docs working group, you will actually see it there. That is the prototype of the new styling. The dark theme hasn't changed that much; the light theme has had bigger changes. It looks and feels pretty much the same, but it's a lot easier to read, a lot more accessibility-verified, and I think it looks quite nice. So I'm waiting for that to go live.

Okay. Then we've actually started pulling some technical documentation together, really following on from the discussion we had two weeks ago in this meeting. I'm doing it on a fork in my repo just to get things going, and that's where John's put the pull request — again, the link is in the docs meeting notes; I'll post it here as well. One of the challenges we're finding is that there are a lot of undocumented steps. When I tried to go through, just pulling from the links that Almeco provided and looking at what was in the various places, one of the big challenges we hit is that most of the repos have Dockerfiles starting from an internal image that isn't in the public domain. John's actually done a pull request where he said, in effect: use registry.ci.openshift.org/origin/4.10:base — we should replace the internal image with that one. Again, I can't see that documented anywhere, so it'd be good if we put it on our site. We obviously need a way to keep that up to date — as the version of Golang changes, or any updates to the OKD base happen — we just really need to work out how we keep this current.

Yeah, I just wanted to respond a little bit on the images thing. So our team — the cloud team — if you look at, say, the machine-api-operator, we've tried to maintain two sets of Dockerfiles: one Dockerfile that we use for the actual release builds, and then our regular Dockerfile is the public one that anyone in the community can build. We've tried to share that pattern with other teams internally. But John, you opened up an issue, I think, on the MCO repo, and that actually caused a discussion internally, because Kirsten, the person who was looking at it on the MCO repo, was like, "are we even supposed to be doing this?" So there is a question internally about whether teams should be creating these Dockerfiles; although our team has taken it upon themselves to do this, it is not guidance across the board. So I would just say: be a little cautious if you're going to go open up a bunch of issues on the project repos, because many teams aren't aware that this is even something they should be doing. Just FYI. — Is there any route by which we could actually raise that as a strategy question?
Could we go to the Red Hat strategy team and say that this would be a good strategy for an open source community? Because OCP is meant to be an open source project as well, but you can't build it, because of all the internal sources. Is there a route to actually raise this with the strategy team at Red Hat? Christian?

I think that's a great idea. And to add to what Mike just said: these Dockerfiles — the FROM directives in them, the actual image references — the Dockerfiles aren't the canonical place where those are stored. Whenever CI changes to a newer version, some CI bot will open a PR to update that image reference. So the references in the files aren't canonical; they're kind of being replaced on the fly by CI, and just kept in sync on a best-effort basis, I'd say. Given that they're non-canonical, I think we have a good argument to say: look, we don't want these weird internal names popping up there; we might as well just have some externally available images there, because we always replace those image references in our CI anyway. I think that would be a good argument to make, and this is already on my list — as I think I told you, John, there might be a possibility to do this, but we'll have to discuss it internally. I'll have to bring it up on the mailing list; there will be discussion and stuff. But if you could also — and I think that's a great idea — approach it from the outside and say: look, we already have an engineer internally who says these aren't canonical image names, they're being replaced, and it would just be so much easier for everybody on the outside if they were publicly pullable image references. So could we just make the default be those public images instead of the internal ones? I think that would be a great argument to make, with us pushing here on the inside and you doing it from the outside, through whatever channel.

That was going to be my question: where do we actually put that? We can put up pull requests on individual repos, but is there a public forum where the strategists talk about OpenShift? Or is there not really a channel for us to actually make that case? — Right, that's what I wanted to get back to — your original question, which is: how do we make that communication happen? And I don't think there's actually a good place to do that right now. I think that's something that probably Christian and myself should raise internally: okay, we've got this great meeting, we've got this great community that's growing and growing and growing — how do we make that connection? I see Neil's recommending the Matrix room; yeah, if there are Red Hatters who want to hang out there and be part of that, sure, but I think we probably need an official forum or something, where if you want to make a request, it's there, it's public, everyone can go look at it and vote on it. And I just don't think we have that currently. — I mean, is this something that we need to get Diane engaged on, as community manager, to actually push that internally? We definitely need to pull Diane in as well. — Okay, so, for sure.
Yeah, her voice would definitely help out internally, getting this kind of thing moved through the process. Christian and I can bark about it all day, and people will just be like, "yeah, yeah, go fix some bugs or something." — Right. And I'd rather have that than have, you know, 35 different pull requests put into each sub-project. — Well, right — the other side of this is that you could go through and just open up pull requests and create a Dockerfile.okd in every repo. But that's not really going to solve the issue, because what Christian is talking about is using the public images and allowing CI to automatically update them, so they're always fresh and it's not a maintenance burden. And then we can all just use the same images to build from. I think that's actually the best option: it means you should just be able to build these, no matter who you are. — I agree. My workaround now is to replace them, but then you have to make sure that when you put up your pull request, you undo those Dockerfile changes so they don't end up in the pull request. — Right. And like I said, our team maintains two sets of Dockerfiles, but we've had issues in the past where we rev a version and someone forgets to update the community Dockerfile, and now it's got an old version of Golang in it or whatever. We don't want to create that burden for ourselves.

So those kinds of community-specific Dockerfiles already exist in a couple of places. I think yours is the only team that just made them on their own, but there are other teams where I pushed them — I pushed a community one because we needed an ironic OKD-specific file, for example. They are mostly — obviously in some cases there are specific things, but really what we're using in those places is just a CentOS Stream base, and that should really just work for our CI as well. So I think eventually we want to improve our CI to just use the same images. We have this internal second build system for building the actual release payloads of OCP, of OpenShift, while in OKD land we just use the builds from our CI system. So really, I think moving these image bases to just a CentOS Stream base might even work for the CI that we already have. Because right now these images can't be publicly distributed: they aren't just the UBI base, which is publicly available — they also contain some RPMs that come from a RHEL repository, so we can't just make them publicly available without a subscription. Because everybody loves subscriptions. — Everybody loves Subscription Manager and subscriptions, no? Don't we all. — So we just have to make the defaults open, I think, and I think that's a valid and good argument, and I think our management will also understand it, because it's much easier for external people to help us with our work if they can actually build the stuff themselves, without having to figure out: "okay, this image reference I can't pull — what else do I use that has very similar, if not the same, contents?" — I think that's exactly the problem, Christian: we can't even see what the spec of that image is, so there's no way for a community member to work that out.
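[Note for the minutes: a hedged sketch of the publicly buildable pattern under discussion. The runtime base matches the public CI reference from John's PR; the builder tag and build steps are illustrative assumptions, not any repo's actual contents.]

```sh
# Hypothetical two-stage community Dockerfile that uses only publicly
# pullable CI images (written via a heredoc for illustration).
cat > Dockerfile.okd <<'EOF'
# Builder: a public golang toolchain image from the CI registry
# (tag is an assumption for illustration)
FROM registry.ci.openshift.org/openshift/release:golang-1.17 AS builder
WORKDIR /go/src/example.com/component
COPY . .
RUN go build -o /tmp/component ./cmd/component

# Runtime: the public OKD/origin base image John's PR points at
FROM registry.ci.openshift.org/origin/4.10:base
COPY --from=builder /tmp/component /usr/bin/component
ENTRYPOINT ["/usr/bin/component"]
EOF

podman build -f Dockerfile.okd -t component:okd .
```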
We can probably get it working, but we might be building on totally different versions of the underlying language libraries, which means any pull request we make may then fail, or behave differently, when you do an official build. That's the biggest issue. If we can build on the same underlying base image, or a very similar one, then it makes the pull request a little bit more valid. — Absolutely, and that's why I would argue this should be CentOS Stream for everything; then we can always relate our internal RHEL builds to a specific CentOS Stream state, or just compare the versions, and on the outside you would be able to build it with CentOS Stream. — And we're talking about the containers, right — the ones that actually make up the stuff inside? Yeah, that makes sense to me. Let's not hurt ourselves trying to use RHEL UBI unless the RHEL UBI and RHEL container group decide to make it not hurt. I know Scott McCarty has mentioned a bunch of times that we may see a large expansion of the content available in RHEL UBI, to cover all of user space and just not include things like the kernel, bootloaders, and the other pieces that make it useful as a real operating system. If that actually happens, then full steam ahead, RHEL UBI all the things; but otherwise I think it's super reasonable for us to just do CentOS Stream. It also provides us an avenue to do something we don't currently do: we don't currently make it possible, for the containers that are built, to pre-qualify what changes are coming into the platform. Using CentOS Stream gives us an avenue to do that and provide some added value in the OpenShift pipeline, which ideally makes them care a little bit more about us. So I think that makes sense as well.

Yeah, all of this makes sense to me. I think the real issue, again, is that there's a communication that needs to happen internally. When I talk to developers inside Red Hat, there's kind of a bifurcation: some developers are totally on board — we talk about open source, and it's like, "oh, we're building this open source stuff, but the community can't build it," and they're like, "oh, that's a big problem" — and others just aren't even aware this is happening. There needs to be a shift in mentality in the way the development teams internal to Red Hat look at this; they need to accept the notion of "we're going to build these things in a community-centric way so that anyone can build them," and get out of the notion of "what is subscription-based and what is not." Obviously Christian, myself, and Diane will have to take that message forward, but I think a big part of what's going on here is that people just don't know — which is kind of funny for Red Hat, being an open source company. I just had this conversation the other day with someone: I'm like, "you know, this Dockerfile can't be built unless you're logged in," and they're like, "is that a problem?" and I was like, "yeah, kind of." — Anybody else want to weigh in on this one before we move on? Okay.
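[Note for the minutes: both of the bases mentioned above are publicly pullable without a subscription, which is the property being argued for — tags are illustrative. The CI base images are not in this category today, because they layer RHEL-repository RPMs on top of UBI.]

```sh
# Publicly pullable bases referenced in the discussion (tags illustrative).
podman pull quay.io/centos/centos:stream9
podman pull registry.access.redhat.com/ubi8/ubi:latest
```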
Then, just the last thing on the technical documentation: we also discussed creating automated test scripts and frameworks. One of the big challenges, ever since I've joined the community, is that we've been trying to get the community to do more testing. So we thought we could either document it or provide some automated scripting. Jamie gave us a link to a library that he worked on a while ago; I'll put that in the chat — again, it's in the HackMD for the documentation working group from the last meeting, which was dated the fifth of April. So that's something else we want to work on: actually documenting how you can test releases, and then hopefully more of the community will be able to help us out there. And there's been a request especially for bare metal — and obviously VMware as well — since community members have access to resources there. That was the last piece, and I think that's all we covered at the meeting.

Just to riff on what Brian said, one of the things that I've been thinking about lately is upgrades. A lot of folks had issues with upgrades from 4.8 to 4.9 because of that change in the cloud operator — Bruce, I think you had an issue with this, right? It went from being the cloud controller to the cloud-controller-operator namespace, or something like that, and there were some other issues with a replica set that wasn't actually spinning up the pods correctly. That got me thinking: we don't actually have any documentation anywhere for someone performing an upgrade that starts to fail. Where do they look? They don't know the various namespaces and operators to look at when an upgrade fails. I think that documentation would be helpful, because a lot of times we get tickets coming in like, "yeah, I did this upgrade from 4.8 to 4.9, or 4.7 to 4.9, or whatever, and I'm stuck; it's just not doing this," and there are some basic troubleshooting steps that we could share with people — and showing folks how to pivot onto a new release and stuff like that, if we actually documented some of it. I think it would lighten the burden on folks in the chat and in the discussion messages we get, and it might build up more people willing to contribute, because they'd start getting into some of the details of how OKD actually operates. So, just a thought on that. Yes — Shree, go ahead.

Yeah, sure. I think that's a really great idea. The first thing I'm thinking of is that OKD and OCP are fairly identical in that regard, so OCP ought to have something, right? I'm sure somebody's written basic troubleshooting steps by now, in a KB article somewhere we could pull from as a start. — I think there are knowledge base articles; it's not in the product docs. But honestly, it's interesting: this is a problem that affects both OCP and OKD, because the nature of the questions I see come across the OKD channel in some ways parallels the questions we get from customers who get stuck in the same upgrade positions. Now, to your question — unfortunately, we don't have a great piece of public documentation that I've seen. I've just seen it come up on a case-by-case basis, where they recommend KCS articles, like the one that comes to mind for something that seems to happen a lot:
people change vCenters and then want to migrate their OpenShift from one to the other, and all the VM names change or something like that, and it becomes a massive pain in the butt. This is something that I know customers have dealt with, and also community members, but I've only seen a KCS article about it; I haven't seen an official "here's a massive workbook for update issues." So I think having something like that, especially if it came from the community, would be tremendous — I think it'd be amazing. — And it doesn't have to be very detailed; just something that says, "hey, go to the cluster-version-operator and look at the logs," because so many times there are logs in there that say it can't spin up such-and-such pods or whatever, and that's a good place to start.

Maybe we should just create a doc — the docs crew can take this up — that just starts to collate links to various places, and maybe small snippets. — I was going to say: do you want to create a discussion thread on the OpenShift forum? I'm happy to pull it together and collate it into documentation, if people are then willing to review it. — So, on the OKD repo, on the new repo — where do you want to do it? — We haven't shifted yet, so the discussion forum is still currently openshift/okd. — Okay, sure, so let's go there. — At some point we do have to have a migration strategy, because we've got the new OKD project organization: the okd.io site has to move across, and the support mechanism has to move across with the discussion forums. So we probably need a planned migration rather than an ad hoc one. — Yeah, I think we'll tackle that, or at least begin to tackle it, at the next docs meeting. Yep. Okay.

All right, anything else on the — oh, sorry, go ahead. — I was just going to make one more comment on this, talking about PRs people could open and stuff like that. I know some of the project teams have begun work on creating troubleshooting docs inside the various component repositories — our team has been trying to do this, and I've seen a couple of others doing it. That's probably another area where PRs would be welcome: if we have community members who have figured out how to troubleshoot something on, say, a networking component or whatever, opening a PR to suggest changes to the troubleshooting doc — or even to start a troubleshooting doc — is tremendous value that could be added. So if people have figured that stuff out and want to make a PR but aren't sure where to go, I'm certainly happy to help direct people, and if we need guidance on how to put that together, I'm happy to get involved as well. — There are kind of two layers here, right? One is: it's broken — where do I go to find out which component is broken? And then, once I know which component is broken, it's great to link to those component-specific docs. — Exactly, exactly. — I also just want to point out that there's a lot of overlap between upgrade troubleshooting and initial install troubleshooting, in that the cluster is in a state, and the question is: how do I find out which component is failing, so that I can get to the next steps of troubleshooting that component? I suspect a lot of the docs will apply to both.
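[Note for the minutes: a first-pass triage along the lines suggested above might look like the following — these are standard oc commands, and the namespaces shown are the stock OpenShift ones.]

```sh
# Is the upgrade progressing or stuck, and which operator is unhappy?
oc get clusterversion
oc get clusteroperators          # look for Degraded / not-Available operators

# The cluster-version-operator logs usually say what it is waiting on.
oc -n openshift-cluster-version logs deployment/cluster-version-operator --tail=50

# Then drill into whichever operator is failing, e.g.:
oc describe clusteroperator machine-config
oc -n openshift-machine-config-operator get pods
```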
But maybe I'm wrong about that. Basically: yes, maybe you look in a different initial log to figure out what's broken, but then everything after that is "oh, this thing fails in this way," and that's the same for both. — I think you're right, Daniel; there probably will be a lot of very similar advice. I agree that we don't currently have great debugging documentation — how do I debug things? It's mostly, as I think Mike said, handled on a case-by-case basis: somebody will find the right knowledge base article and link it out to the customer, or whoever requested the info. But it's never really "let's round up all the necessary info." Even when you debug something yourself, it's like: there's the must-gather — what do I look at first? There is a lot of documentation internally, but we don't have one canonical place for "this is how you would debug OpenShift" — at least, I'm not aware of one. I would very much like to participate in creating that information. And I know that Mike has been absolutely instrumental in creating some of the best documentation I've seen, which is the provider onboarding docs — which kind of leads into this, though that's more the development side — and we're now working on a continuation of that internally. So yeah, if I can help with any of this, I am very happy to, and I will try to raise that topic specifically — documentation for debugging, specifically — and maybe make it an open place, instead of the knowledge base articles, which are always behind a login. I don't think it's a paywall — it might be in some cases, I'm not sure — so, not a paywall, but you do need your account. — Yeah. I just look at the repo and not the actual website; that's what I prefer.

So, you touched on something interesting that I had suggested a while back, but we never followed up on it, which is a guide to must-gather. What is a must-gather? Where would you start looking in a must-gather to follow the process of an install, or a boot, or anything like that? That is something I've not seen anywhere externally, and I think it would be beneficial, because we always ask people to provide one — so they're there, and multiple people could look at them — but we don't help people figure out what to do with one if they wanted to help troubleshoot when someone posts their must-gather. — I think that's a really poignant point there, Jamie. The must-gather is just kind of a collection of data, and there's this tribal knowledge about what's in there; depending on what component you're looking at, you kind of know where to go. But there are also a couple of tools that we should highlight. There's one called o-must-gather — there's another version of that tool as well — and I've also got a tool that I've been working on. o-must-gather gives you an oc-like interface to a must-gather: you untar a must-gather, you point the tool at it, and all of a sudden you can do "omg get pods" and it'll show you — so you can interact with the must-gather as if it were a cluster. And the tool that I've been working on is a web interface: it creates a static web page for you from a must-gather that highlights where problem areas are happening, so you can go directly to those records and look at them immediately.
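[Note for the minutes: a quick sketch of that omg workflow. The package and command names are taken from the o-must-gather project's README as best we recall — treat them as assumptions and verify against the project docs.]

```sh
# Install the omg CLI (PyPI package name per the project README).
pip3 install o-mg

# Point omg at an extracted must-gather, then query it like a live cluster.
omg use ./must-gather.local.1234567890/    # hypothetical directory name
omg get clusterversion
omg get pods -n openshift-image-registry   # e.g. chasing a registry issue
```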
So I think, yeah, sharing some of these tools is probably helpful as well, because those are the main ways that we interact with these things. — This tool that you're creating — you've been holding out on us! — Yeah, well, I've actually been working to get it into our CI system so that it would be available everywhere. The first version of the tool I wrote is a Python application, but I'm rewriting it in Rust now so that I can bundle it as a binary that can get included in the CI package. So that's where the new version is at, and I'm almost done with my rewrite. If anybody is into Rust and wants to help out, that's where the new version is going to be. But yeah, I think looking at must-gathers through the lens of the tooling we've created to understand them is probably the biggest step up in figuring out what you want out of a must-gather. — This is cool, and I'm putting all these links in the meeting minutes as well.

If you don't mind me sharing one more link — the testing topic came up before, and there was one more thing I wanted to share. This is going to be kind of out there, but I know this group likes to get kind of out there sometimes. This is a tool that one of our engineers, Richard Vanderpool, created to help with some of our partner interactions, where people are trying to use the release repository — which controls our Prow-based testing and everything — to do mock-ups of tests and whatnot. This tool will allow you to do some of those release-based Prow mock-ups on a local cluster. So I know this is really far out there, but if you're getting into thinking about how to create CI infrastructures, or how to use the current Red Hat CI infrastructure to replicate tests, I think this is an interesting tool to look at, because it will allow you to take the release repository and run specific tests out of it against a local cluster — so you can build almost a mini version of your own CI infrastructure. It doesn't actually run Prow and all those other things, but it shortcuts some of that process for you. John, I know you're deep into some of this stuff, so this might be something that's interesting for the work — especially as people start to think about putting up PRs that might change the Prow configurations and whatnot. This is another tool that gives a window into how we're doing things.

Excellent, this is great. — I just want to second that this has been super useful. Richard is my team lead — we're on the specialized platforms team; he's the team lead there — and we've been using that tool from time to time, and it's proven very useful to us. Especially if you don't want to deal with all this complexity and probably just want to consume whatever is in the release repository: it's essentially a runner for those jobs. It will create the job that you want and run it locally in a kind cluster. It's super useful, and great that you brought it up — thank you so much for that. Because our CI system is super complex — we don't have to lie about that; it's, you know, not trivial.
And this tool makes it very easy to just do one thing and focus on one thing, and then you can upstream that into the proper Prow config after testing your changes locally. So it really helps, even for trying out concepts in our CI system — which is also the OKD build system: we essentially reuse our CI system for OKD as a build system, or, turned the other way around, our build system is also the CI system for our product. But yeah, this is super helpful, and we've used it on our team, and it's really great.

Excellent. Well, I want to move on, because we've got about nine minutes left and a couple more things. If folks have any more comments or suggestions: Brian's going to create the discussion thread, folks can chip in on that, and Christian, we'll make sure you know where it is so you can add any additional stuff. So, moving on, we've got two issues that folks wanted to talk about. Who put up the CephFS/Rook one — John, was that you? — I did, but John and I both ran into it. — Oh yeah, go ahead. — No, go ahead, take it. — Yeah. So John and I both ran into this issue where — it looks to me to be an SELinux thing, but I can't say for sure — for whatever reason, with the release of FCOS (Fedora CoreOS) 35, which is underneath OKD 4.10, this latest release, CephFS mounts just do not work anymore. That seems to be impacting anybody and anything running Ceph within their cluster, or trying to mount CephFS into their cluster from an external place. I personally am running Rook in my cluster, and all of my CephFS mounts are basically broken at this point — notably including the image registry, which is how I noticed, because my builds weren't working anymore, and that was a pain. Block mounts still work, which makes it really weird. I'm not an SELinux expert; John very kindly figured it out before me and filed a Bugzilla with FCOS upstream, but I don't think anyone has looked at it. So I wanted to take advantage of being on the same call as Timothy, if he's still around, to bring that up, and also to raise awareness with everyone else, just in case they see issues like that. — He's taking a look right now. — Yeah, that's on my list to look at more closely next week. I agree, I think it probably is an SELinux thing, but it's been lower on my priority list than the build stoppers.

Well, while Timothy is looking at this, I want to quickly get to the next one — we've got like seven minutes left. A user posted this in the chat, and also as an issue, and I don't know that it was ever resolved: a network-policy deny-all policy does not correctly restrict traffic to a pod when using NodePorts. Does anyone know if that's correct? Is that a known bug, or is that expected behavior? There was some discussion about how a deny-all policy works. Anyone have any thoughts on that? — I've heard weird things about the NodePorts. This sounds like something that might fit in. We just went through an issue about restricting traffic to metadata services and whatnot, and I remember there was an angle to it where a pod that has a hostPort path was not getting restricted. Well, let me back that up: if a pod has a hostPort path on the node, the node wasn't getting restricted, so traffic from the node would go out.
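[Note for the minutes: for reference, the kind of deny-all policy in question — standard NetworkPolicy syntax; the namespace is a hypothetical placeholder. The reported problem is that traffic arriving via a NodePort still reaches pods despite a policy like this.]

```sh
# Apply a deny-all-ingress policy to every pod in a namespace.
cat <<'EOF' | oc apply -f -
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all
  namespace: my-app      # hypothetical namespace
spec:
  podSelector: {}         # selects every pod in the namespace
  policyTypes:
  - Ingress               # no ingress rules listed => deny all ingress
EOF
```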
But other nodes that had previously restricted the hostPort path stopped doing it. — Yeah, so I think that's actually what we're seeing; that's what the comment from this user described. — Okay, so it's basically the same thing. — All right, one of those weird things. I've noticed OVN-Kubernetes has a couple of weird corners you wouldn't expect. I ran into a thing a few months ago with an external traffic policy — one of the policies just wasn't implemented — and I tracked it down to a bug and was just like, well, all right, I'll wait. — Is it the default now in OCP and OKD? — It's the default in OKD now, but is OVN-Kubernetes the default in OCP now? I'm not sure if it is.

Okay. Timothy, did you ever respond? Real quick — we've got like five minutes left. — My response is that if it's not an issue in the Fedora CoreOS tracker, the Fedora CoreOS developers won't see it. I would say the first step is to make an issue there. — Oh, it was not made there. Here's my — I'm going to call it a beef, but I mean that tongue-in-cheek: we've been told over and over that we're supposed to be using Bugzilla for reporting pretty much any type of bug, and I do, most of the time, for anything significant. So I opened a Bugzilla — shouldn't the Fedora folks be getting stuff from Bugzilla in order to see this? Because having to do two things — having to do Bugzilla and then having to go do something someplace else... What is the right way? We seem to have multiple different ways of reporting bugs. With the MCO team, you open up an issue on the MCO repo and they're like, "well, you're supposed to open up a Bugzilla." Like, okay — where are we supposed to do it?

Yes. So that's the thing, and that's what's difficult: we have a mix of products and community, and so there really are different places to report different things, depending on which side of the contract you're on. Essentially, Fedora CoreOS is a community project, and unlike the rest of Fedora we do not use Bugzilla; we use a GitHub issue tracker. So anything that is related to Fedora CoreOS itself is best reported to the Fedora CoreOS tracker. Now, if you have something that is OCP-based — OCP is a product — then if you actually want the thing fixed, you need to report it in Bugzilla, because that's where OCP bugs go. But don't worry, we are in the process of changing that too, so this is changing soon. I don't know how much of this is public, but... — It's all bad. You should all feel bad. This is pretty terrible. — I don't care; I'm trying to explain to you what the state is — I'm not responsible for it! — So, yeah: Bugzilla is probably going away at some point and everything will be Jira-based, but yeah. — I'm going to learn Jira just in time; I have to use Jira at work anyway. — If you want to get an issue in front of the Fedora CoreOS folks right now, put it in the GitHub issues. — Yeah, I mean, it's actually probably a Fedora issue rather than an FCOS one. — Yeah, somebody will figure it out. It's easier if you get it somewhere we can see it, and then usually we can track the fixes landing and things like that. — All right, I'll put that in today. — I think this issue is particularly difficult because it was reported on the OKD tracker, and we usually say Bugzilla, but then FCOS requires the special handling:
let's move it to GitHub, to the Fedora CoreOS tracker repo — which is a bit of a special process for us. So this is, I think, entirely our fault internally in OKD, not the FCOS folks'. And I think the Bugzilla is perfectly well assigned, to Tev in Fedora, but that's against the Fedora project release, so they might not be looking at it quickly. And yeah, I think opening it on the Fedora CoreOS tracker is best — the FCOS folks really do a great job of reminding people on their respective teams to look at things when they require those changes, much better than we do in OKD. Maybe we just have the OpenShift org as our focus, whereas everything else is just "put it on Fedora and they'll fix it." If we can reproduce this bug on a Fedora CoreOS node by itself — not on a full OKD cluster — it makes it even easier for us to debug, and even more likely that you will get a fix. — Excellent. — Let's see if we can actually do that, because I think it is SELinux, but I haven't had a chance to debug into it further. — All right, I'll be mindful of people's time. So, yeah, if the three of you could work together to get it to the right place, that would be awesome.

All right, three last things. CRC: we've gotten a slew of OKD-on-CRC questions over the past couple of days. CRC is still somewhat viable, I think, until MicroShift covers more ground. So we can put out a call for folks to build CRC and maybe play with it a little so we can improve the documentation — that would be helpful. The documentation group is going to take this up, so if you know anyone who wants to build CRC, Charles left some great instructions on how to do it; it's not that hard, and we've talked about automating it. The survey: I've been reaching out to Driti and she has not responded, so our survey is still in limbo. We might have to recreate it, because I don't have a link to the materials she was working on, but I do think we should do the survey to get a sense of OKD usage. And the last thing: there will be an email sent out to vote for the subcommittee co-chairs, so look for that. I think one person has already thrown his name in for the OKD virtualization subgroup, and folks in the various other subgroups can throw their hats in as well. So look for that email.

Any last-minute things before we close up here? And that's a few minutes over. Thanks, folks. Look for this meeting video to be up relatively soon, and the notes to be up as a web page. All right — later. Thanks for that, Jamie.