We're at three minutes after and we've got 10 people in here, so let's go ahead and get started. Hey, let's do a quick review of the agenda. Take a look at the agenda and let me know if there's anything I've missed before we get started, anything you want to modify, change, et cetera. Put it in the chat as well. Are we good on the agenda? All right, let's start out with release updates with the team. Oh, sorry, and folks, don't forget to put your name in the attendees section. That lets us know who is here and maybe who might have missed some information; it's important for some of this stuff. Okay, everybody, go for it.

Right. So the main highlight of the last two weeks of work has been the release of OKD 4.8. It took us quite a while to push it through, mostly because of our strict upgrade scenario: there is no way for us to manually add an upgrade edge, we have to pass it through CI, and CI runs disruption tests, meaning the test would fail if we disrupt the API or end-user workloads for more than 1% of the upgrade. Due to a bug, we were disrupting at around 10, 13, 15% in some cases. We tried to push a fix through and prioritize it, but unfortunately it hasn't been fixed. So what we now do is run the very same test suite, but without the disruption tests. Now we can add an edge from OKD 4.7 to 4.8. Any further upgrades, like from one 4.8 to another, don't need that hack, and disruption there is within the acceptable limits. Moving forward we'll be more strict about this, but this was a one-time hack we had to do. And thus we now finally have a more or less updated kubelet, version 1.21, I think it's 1.21.4, and we'll have kubelet bumps monthly from now on, same as OCP does.

In other news, Christian is probably not here, he's attending KubeCon, and I don't want to steal his thunder, but apparently he's got quite a lot of progress on arm64 images, so we'll wait for him to tell us all about that. We're also working on bare metal IPI progress. We have, I think, all the bits finally in place for 4.10 nightlies. Unfortunately we don't have 4.10 nightlies yet; we need to push a few more changes to CI and we'll have them available soon.

Another topic we need to discuss, not necessarily today but soon, is cgroups v2. In 4.9 we have all the pieces in place, and we just need to decide how exactly we want to enable it. Probably the safest way would be publishing a guide on how to do it; today, all you need is just one MachineConfig (see the sketch a bit further down). Then we can enable it for fresh installations, and probably in 4.10 we can automatically add this manifest during an upgrade. But again, it all depends on the testing feedback we get. And I believe that's all we have for today. There is some work to rebase 4.10 onto Fedora 35, but we haven't really started; it's just in the plan, so not much to report on.

Yeah, the 4.9 nightlies have just started, right? So we should consider making 4.9 nightlies just do cgroups v2 by default. Before we start getting to our RC phases, while we're just doing nightlies for 4.9, let's just do it.
It's not like cgroups v2 hasn't been beaten the crap out of in Fedora, and since we run on Fedora CoreOS, as opposed to RHEL CoreOS, this becomes a lot easier, because we already know that for the vast majority of cases around cgroups v2, things are in okay shape now. So then it's really a question of making sure the rest of the integration points between OpenShift and CoreOS are okay for this kind of thing. And, you know, they're nightlies; people don't just get auto-switched to nightlies. If we have people trying nightlies, let's just straight up do cgroups v2, upgrades to nightlies, new deploys to nightlies. Let's just do it and see how it goes, because I don't know how else we will get the requisite feedback to make sure we're getting this right in time for deciding: do we want cgroups v2 for new installs on stable, do we want it on upgrades, or do we want to defer upgrades to 4.10? I think the best way to be able to answer that is to just start doing it in the nightlies and see how it goes. Well, I'm excited, and I want cgroups v2 now.

The main difference is OCP is late in its 4.9 cycle. So we can enable cgroups v2, but if we find some bug in Builds, Buildah, or something like that... the kernel's implementation is very good now, it's been around for what, 10 years or something, but all the other places are not that well prepared for cgroups v2. So if we find a bug there, it would take us a while to actually get it landed back into 4.9, because we have to wait out the freeze before the GA and things like that, so that would take a couple of weeks. Next, how exactly to enable this. We can differentiate, so fresh 4.9 installs get cgroups v2 and the rest remain on cgroups v1; we can have everyone get cgroups v2 unconditionally; we can publish a guide on how to enable it; and we can have a dedicated CI job for it. I'm thinking fresh installs is probably the safest way, since we would automatically get it tested in CI. Yeah, my main concern is that existing things like Builds would probably take the biggest hit. The kubelet is probably well tuned to work with containers and cgroups v2, but things like Builds are probably the riskiest part, and limiting this to new installs would be the best solution. Okay, so I think we have a tracking ticket somewhere; I will post an implementation guide there and we'll see where it takes us.

Okay, I mean, I'm fine with us doing it for new installs with 4.9 now if we want to. I just want the button to be pushed for cgroups v2 basically as soon as possible, because part of the problem I've seen so far with the cgroups v2 stuff is that we're stuck in this catch-22 cycle of pain where we can't shake out the remaining issues with cgroups v2 until people just straight up start using it. That's what happened when Fedora switched to cgroups v2 by default: literally zero of the container tooling, virtualization tooling, et cetera supported cgroups v2 until the forcing function happened. And because CoreOS reverted that when they made their releases, we never got that back pressure into the Kubernetes world to fix this, because nobody was really doing anything with it.
And I just really don't like the fact that this has been stalled out for so long, and a lot of the underlying runtime stuff has been fixed over the years. Kubernetes 1.22, which I think is what the 4.9 release is actually based on, has support for cgroups v2. So at this point I just want us to be a proper forcing function here, because otherwise I don't know how it's actually going to get over the hump. Basically, I think we understand at this point, given the amount of UPI problems, that our CI is not a good substitute for real-world interactions. So while the CI definitely covers a strong subset of it, I want us to just start having users follow the normal steps as if they were doing an OpenShift deploy and get OKD 4.9 on a fresh install with cgroups v2. That's the only way to get through it.

So I think one of the things, and actually Timothy is going to be talking about this in a second, is that if you look at the FCOS group, they have a testing day, or testing week as the case may be. It might be helpful to organize that for something like this: call out to the community and say, hey, here's something we particularly want to test. If you click on the link Timothy is going to talk about, there's basically a matrix where people can check boxes for installation, et cetera. So that might be one way to approach this: call to the community and provide them something they can easily fill in to let us know how it went. Right. Yeah, for sure. Yeah, who wants to be in charge of that effort? Who wants to take the initiative of creating a little grid or penning something we can send out to the community? Anyone?

Yeah, I get the idea. I think it's a great idea. I just don't have time right now to do that; I'm already stretched super thin. It's a great idea, and I just hope my enthusiasm will make someone else enthusiastic about it too. Like, I just got my first proof-of-concept OKD 4.8 deployment working, just this past day, in our OpenStack, so it's been a rough ride so far. My colleague is going to start filing issues and submitting documentation PRs for some of the stuff where the wording turned out, I would say, not quite right, because we encountered some interesting quirks in our OpenStack-based deployment trying to use IPI. Mostly in the realm of odd wording, incomplete information in some cases, and just some missing coverage and stuff.

We've got some hands raised, and I want to make sure we get to the folks that have their hands raised. Mike, go ahead. Yeah, in general I totally agree with Neil's optimism and enthusiasm here. I like the idea of getting cgroups v2 out the door and getting people using it. The hesitancy, or I guess the only thing that would be concerning to me, is that we get it out there, people start installing it, and then we get a glut of people who start to have issues around something, which is what we want. But then how do we manage the other side of it from the OKD community side? Is there someone who's going to be helping to open Bugzillas? Having someone in the community to help organize this effort in both directions would, I think, be really useful.
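For anyone who wants to poke at this on a throwaway cluster before that guide lands, here is a minimal sketch of the "one MachineConfig" route mentioned above. The kernel argument and object name are assumptions for illustration, not the forthcoming guide:

```shell
# Hypothetical sketch: enable the unified cgroup hierarchy (cgroups v2)
# on the worker pool via a single MachineConfig. The exact kernel
# arguments are an assumption, not the published guide.
cat <<'EOF' | oc apply -f -
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  name: 99-worker-cgroups-v2
  labels:
    machineconfiguration.openshift.io/role: worker
spec:
  kernelArguments:
    - systemd.unified_cgroup_hierarchy=1
EOF

# After the MCO reboots the pool, "cgroup2fs" here confirms that the
# node actually booted with cgroups v2:
oc debug node/<node-name> -- chroot /host stat -fc %T /sys/fs/cgroup
```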
All right, why don't we do this: I'll do the guide, and Vadim has volunteered to be the point person on this, so Vadim and I will do the setup for it. And since I actually did a little bit of work on the FCOS working group's testing day stuff, I can duplicate the same thing for our group. We'll go from there and check in at our next meeting to let you know where efforts are. Okay, Neil, we're going to get you at some point to do something, though. We will rope you in.

I'm sure. I mean, there are already a couple of things I'm working on, and the fact that I'm finally getting an OKD deployment working in our internal OpenStack, you know, with IPI, that's been the starting blocker we've just struggled to get over. So now that I have this, I need to make the deployment reproducible so I can blow it away and make it again and again. And once that's all shaken out, I'm hoping I can actually start doing some more interesting stuff, because part of the blocker for me has been that it's a little difficult to do more than a cursory look at things, or try to guide people about this kind of stuff, when I can't actually get the bloody thing running. And now I have it running, sort of, maybe, I don't know; it just started working last night.

So keep us posted on your efforts for sure, so that we know you've got that. Well, ideally we'd have a matrix of who has access to such-and-such resources; that's kind of where I think we want to go here. Yeah, I haven't had time to bask in the glory; it literally started working at like eight o'clock last night, and then my colleague and I were like, okay, we're done, we can finally log in. We spent a couple of days trying to figure out how to fix the networking so we could actually access the OpenShift cluster we'd just stood up; that was just a whole different set of issues. So yeah, I think it's actually a great idea if we can get community resources, a community matrix of some kind: people are using OKD on this particular platform and configuration, so that when there are things we need to test, we have a fast track to make sure they can be evaluated. For example, my hope is that with the new OpenStack deployment we're running OKD on internally, a proof-of-concept setup right now that I'm hoping to productionize eventually, I can go in the future and say, hey, we need to test this thing, can I just YOLO a few resources in our internal OpenStack to do some testing, and I'll blow it away afterwards, and they'll be like, sure, why not. That's what I'm trying to move towards, because that way I can say, hey, for OpenStack IPI stuff, I can help. Right. And that's really where I want to get to next, because it's a little hard for me to do much else without having the first part in place. I haven't actually said this all that much, but it's been one of the problems with trying to do stuff up until now, so I'm super excited that I finally have something, because it's huge. Also, OKD 4.8 looks super cool.
Like, playing around with the UI, it's very, very nice. Anyway, all right, keep us posted and let us know when you can do more than log in, and automate that; that'll be helpful. In similar news, I actually have access to vSphere again, so I'll be doing my vSphere-based testing again, using my stuff which pretty much automates vSphere UPI. Let's see, who else had a hand up? Anyone? Okay.

Okay, Vadim, did you have anything else? Maybe, as we transition into FCOS, did you want to talk about issue 210 in okd-machine-os, the conversation that's coming up between J11 and yourself, and what that points to?

Yeah. We circled back to this because the Fedora CoreOS team reached out and told us that rebuilding Fedora CoreOS is a bad idea. Sometimes we must have different package versions, which is certainly not ideal, so we need a way to overlay the necessary RPMs, and we don't want to go back to unpacking RPMs and layering them as files; that was a terrible decision. So now what we do is take the list of packages the Fedora CoreOS manifest installs, batch them, add additional ones, and make our own Fedora-CoreOS-like operating system. That's also a terrible solution. On a global scale, a better fix would be for OpenShift, or rather the machine-config operator, to one day learn how to layer different images containing ostree commits and pack them together, so you'd have a properly functioning system, but we're pretty far away from that. The short-term goal is to reuse artifacts built by the Fedora CoreOS project as much as possible, so that we could simply add a couple of our packages and some configuration files, but there are lots of hurdles in the way. I would appreciate some eyes on the discussion; we might want to move it to the okd repo. We also want a failsafe way to pin particular packages, like we have to do now due to Kubernetes issues, but at the same time I'm not very excited about rebuilding the operating system on every pull request. So that's going to be a long discussion, but it feels like it would be productive, and it would help us shape the whole OpenShift approach in a new fashion. It was mostly the question of why on earth we are building an OS on every pull request, which is us and RHEL CoreOS both, and of changing how OpenShift would eventually start overlaying different ostree commits onto one system dynamically, just building them from a bunch of images.

So, I recall, I think it was like six months ago, rpm-ostree gained support for being able to enable and layer modules, so the modular CRI-O can now be layered per matching version and stuff like that. And I think a few weeks ago, I want to say with the last rpm-ostree release, you now have the ability to replace base packages with layered packages. So if you want to delete a package, or replace or swap something out, it's under `rpm-ostree ex`, I think it's `replace` or something like that, I forget the subcommand, but basically the API now exists for doing mutations like that without having to rebuild the entire base ostree image. Which I think is necessary for certain configurations with OpenShift, you know, switching from a non-modular to a modular version of a component in Fedora CoreOS, which may be necessary for something like using particular versions of runc or crun or whatever, which have both modular and non-modular variants.
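Concretely, the mutations being described look roughly like this today. A sketch with illustrative package names and versions; note the replace/remove verbs actually live under `rpm-ostree override`, while module support is the part under the experimental `ex` namespace:

```shell
# Layer a modular package stream (experimental "ex" namespace):
rpm-ostree ex module install cri-o:1.21

# Replace a base package with a locally supplied build, without
# rebuilding the entire base ostree image:
rpm-ostree override replace ./crun-1.0-1.fc34.x86_64.rpm

# Or drop a base package entirely:
rpm-ostree override remove moby-engine

# The changes land in a new deployment and take effect on reboot:
systemctl reboot
```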
So that should actually be in place now. I guess the remaining effort would be to wire it up in the MCO, if I'm right, Vadim?

Yeah, but it still doesn't... it's great and it might find its uses, but that's not the problem we're facing right now. We initially hit the modularity problem, and we kind of worked around it by pinning upstream CRI-O builds on build.opensuse.org; all of them are plain repos, and we just mix the repo in there. So switching back to modularity might be a solution we could pick, but that just feels like additional steps, because the very same people build it upstream and the very same people build it in Fedora. Why would they bother running two builds instead of just one? When it comes to replacing a package, that might find its uses too, but again, some packages are not that easy to replace. For instance, we're now hitting a problem where we have to roll back the SELinux policy; that also brings in a bunch of dependencies, so we're probably missing some updates because of it, and a simple swap of the RPM won't fix it. The same problem would come up if we wanted to swap something bigger, on a much larger scale of course. So these are the problems we need to discuss in this ticket and find a decent solution for. Perhaps we'll have to fall back to rebuilding in some cases, with the usual happy path being to use the FCOS artifacts. So all the cards are on the table; we just need to pick which features we want, based on quite an extensive period: during the last two years we had all kinds of failures, so I think we have a really good set of features we want from the system. So how do we move forward with this, then? How do we tackle it? Do we dedicate a period of 15-20 minutes at every meeting and start discussing it, or how do we approach this?

So, yes, this ties into the future roadmap for Fedora CoreOS, so let me frame it correctly first. The main idea is that we're trying to move to a model where we have a base image and we allow people to have customizations as layers that you ship, just like we ship container images with a base image and layers on top. The idea is that rpm-ostree would be able to pull the base image and then apply layers on top of the OS directly, which fits really well with the MCO model, where we essentially ship the OS as a container. We would just ship OKD-specific layers on top, so one layer would be, for example, CRI-O and everything like that, or potentially it could be any replaced packages or things that would be of use for OKD. We're doing that both for Fedora CoreOS and for RHCOS, Red Hat CoreOS, and OpenShift in general. The idea is to make this generic, but not fully generic: you won't be able to make every change you want to the OS, but things like replacing packages would actually be possible.

So yeah, the basic idea is not that we don't like people rebuilding Fedora CoreOS. It's that when people do so, they lose all the testing that we do, and essentially we lose all the testing that everybody else does. It splits testing in two, and that's not great for us. It means we essentially never test this; well, we never ship anything to OKD directly, and OKD never uses Fedora CoreOS directly, so we can't really use OKD to test Fedora CoreOS in CI. So the basic idea is that we're trying very hard to bring the two closer together.
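As a sketch of where that could land, assuming the container-build workflow being described here (the base image reference and the commit step are illustrative, not a committed design):

```shell
# Hypothetical derived build: start from the published Fedora CoreOS
# base image and add one OKD-specific layer. "ostree container commit"
# is part of the experimental container-native ostree tooling.
cat > Containerfile <<'EOF'
FROM quay.io/fedora/fedora-coreos:stable
RUN rpm-ostree install cri-o && ostree container commit
EOF
podman build -t localhost/okd-machine-os:sketch .
```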
In the end, that means we run OKD end-to-end testing in CI for Fedora CoreOS, at least for the releases, maybe not for PRs. And I think that, at least for releases, making sure that when we do a release it passes end-to-end on a single OKD install would be the baseline.

So yeah, that turns nicely into the notes for this meeting, which are mostly around testing. We have a testing week, right, which is happening right now, where we're focusing on the Fedora 35 changes. We're rebasing onto Fedora 35, and this time we're going to try to rebase really, really close to the Fedora 35 release, so it's going to happen something like the day of, or that week. The details are loose, but it's going to happen really close to the release. Right now the Fedora CoreOS next stream is based on Fedora 35, and the testing stream will be rebased sometime later. So any help testing here would be great, and hopefully it won't break too many things in OKD; that's where we need the most testing, probably. And we have full aarch64 support right now; you can find the artifacts on the download page and everything like that.

In the same vein, on the testing theme, we're trying to bring upstream Kubernetes end-to-end testing, running on a Fedora CoreOS node with CRI-O, into our own CI, so that we can, at least for releases, maybe at the beginning, verify that the upstream Kubernetes tests pass with CRI-O on Fedora CoreOS. All of that should bring us much closer to making sure that most of these things just work in a default OKD installation on Fedora CoreOS, where we don't need changes, of course. And then, potentially, having full OKD testing in Fedora CoreOS CI. So yeah, the goal of this is very much not to shame anybody for rebuilding Fedora CoreOS; at this point it's more about making sure we don't duplicate testing.

Excellent. So what I'd like to do is keep this ticket, again, issue 210 in okd-machine-os, it's in the notes, in our meeting regularly, so we talk about it regularly, it doesn't slip by, and we can continuously address it. I'll also make sure that folks have ample warning when the FCOS testing is coming up, so that we can leverage our resources to test FCOS underneath the various OKD releases and try those together. Timothy, do you want to move on to the rest of what you have? I think I've covered most of what I have, so if there are any specific questions... otherwise we can go ahead.

Right, let's move on now to docs. Brian? Okay, so we've done most of the work to switch over the MkDocs-based okd.io site; I think we're waiting for Red Hat to change the DNS over. Most of it has gone to main, the GitHub automations have kicked in, that's all in place now, and the CNAME is being added. So it's ready, just waiting on the switchover. As it says in the notes, we've incorporated a section on how to change the official docs, that's docs.okd.io, the product documentation, and we've also got a section on how to update the okd.io site within the new content. If you want to have a look, go to the repo and then to the GitHub Pages site being served by GitHub; you can actually see the new site there. Hopefully soon we'll get that switched over in DNS. Not really much more to add to that. Another thing that came up: I think Michael was on the call?
Yeah, Michael's still here. So, Michael is helping us with an issue where there are 4.9 references in the OKD docs. Michael, is there any update on that? Yes, okay: we updated the main OKD docs, so the docs.okd.io page now has a version selector for the 4.x docs, and we dropped some of the earlier 3.x versions. Awesome, right. So let me know if that's what you all expected, or if there are other improvements we can make; let me know and we can take care of it. Awesome, thank you so much, sir. We'll take a look at that, and folks, if you run into any issues or want to see something different, let the docs team know and we'll bring it up at the docs meeting and talk about it there.

And Diane grabbed a code of conduct. This is pilfered from the Ansible folks, because they actually ran it through a bunch of legal review. So check that out, and we'll take a vote on it at the next full group meeting. Basically, the idea is that we'd have this code of conduct in place on the website and also announce it, just like CNCF does, at the beginning of our meetings, to make sure people are aware that any event or activity related to the group adheres to the code of conduct. So take a look at it; if there are changes you want to make, suggestions, or anything unclear, be prepared with those, maybe write them out and send them to the larger group, and we can talk about them at the next meeting. Any questions? I did create a docs PR, so if you want to make comments, you can make them directly in the docs PR, or in the issue, either way. Okay, excellent, we'll put a link to that up there. Oh yeah, I should have done that, sorry; it's in the issue, issue 244. Okay.

And Sandro is not here, but you can see the updates there, and I'll read these off really quick. Okay, so for the folks that don't know, this is the OKD Virtualization special interest group of OKD. Why don't you go ahead and read these off, since a lot of this deals with stuff you're dealing with in terms of the website and everything. Okay, so they're moving their docs to okd.io. They initially set up their own site, so they've raised a pull request to actually move that over. That raised a couple of issues around conversations we had at this meeting two weeks ago around social media: they created their own social media communities, so there was a question of, as this group, are we okay with that? We sort of decided not to do social media for OKD, so is having a work group with a social media presence something we want to support? That's an open issue. Initially they actually altered the doc page and put in a whole bunch of custom CSS and some HTML to embed a Twitter tracker on their page. I didn't like the way they implemented that, so I asked them to do it as a template so the page stays pure Markdown. They've since decided to remove the tracker. So the open question on the pull request is: are we okay with their social media links? Other than that, within the work they're doing, they've successfully tested a 4.8 installation on bare metal UPI, they've got a guide for it, and they're going to put it onto a page on the site. And they're working with the rook.io community to get the Rook Ceph operator into the community OKD operators, which I think is going to be goodness all around.
And they're working with the Assisted Installer project to support OKD Virtualization as well; I don't know if anyone knows more about that. They're also adding automation for testing the hyperconverged cluster operator with OKD, and I've got a link to that in the HackMD notes. Any questions coming out of that? Ideally we want an answer: do we want to support the social media piece? Are there any objections, or how do we feel about it as a group? I mean, does it make sense for us to promote it? Basically it's their own social media, right, that they want to have for the subgroup, or special interest group as they're calling it.

So the docs group sort of decided against doing one of our own, and instead having things go through the OpenShift Twitter and whatnot, just because we don't really have people to staff something like that ourselves right now, plus there's a wider audience if we go through the OpenShift Twitter and Commons and things like that. A lot of this depends on whether they're already using that Twitter: is it an established communication channel that their users expect? Because if that's the case, I don't see the harm in having a section on their docs that says, you know, for updated information go here. But I agree with Brian; we don't necessarily need the embedded iframe showing scrolling tweets in real time or anything. Yeah, their Twitter is established; they have 62 followers, all of whom sort of came at once, and it looks like they post once a week or a couple of times a week since they started it. But there's not a lot there. So yeah, I don't know that we'd want a scroller, because a social media scroll that doesn't really have any updates actually doesn't look so good. Which was Diane's concern about ours, right, if we did one. And I agree with that concern: if you have a social media presence and it doesn't do anything, it kind of makes you look more dead than if you didn't have one. Right. Yeah. Brian, and then Mike.

Okay, I also think that, in a way, it's disconnected that group from the rest of us, because they don't use any of the communication channels the other working groups use: this main group, the documentation group. I think several people in this meeting wanted to be involved in the project, and nothing comes through any of the OKD channels; you have to go use their social media to even find out there are meetings going on. They're not plugging into any of the established communication channels the community would know about. We can fix some of those things. For example, I have access to the Fedora calendar, as do a couple of other folks here, and we could add a Fedora calendar event for them, which would post to the working group mailing list and automatically go out to the working group. I don't know, Diane hasn't gotten back to me on how timely forwarding stuff through the OpenShift Twitter would be, but we could theoretically forward things either way: a fresh message could be posted directly from the OpenShift Twitter, or the OpenShift Twitter could forward what they have.
It definitely does feel a little splintered, but some of that might be on our part, being overwhelmed and not doing enough to pull them in. Once their website is within ours, that changes things a little bit. Yeah, it just feels like we obviously want to support them, and we want people using, testing, and playing with their stuff; I just don't know how people find their stuff if they're on the Slack channel or listening to the Google Group mailing list or anything. Well, I'll touch base with them. Diane was talking about setting up another meeting with them, so let's invite them again; I think Diane already invited them once, so let's try again to get some representation here. And this is another thing: all of their members are from one particular region, so it's kind of the inverse of the situation we have right now. For Timothy it's a bit later, it's like what, 8 PM or something like that for you? So it's a little later for some other folks here, but generally we can all sort of attend with them; it's just very different availability. So let's invite them again and go from there. Does that seem like a plan, just to get more conversation going?

I was just going to say, given what Brian's talking about and the discussion here, I think it's right that there's less value in including their Twitter stuff in the official docs. It's perfectly fine if that's the way they want to go, they want to have their Twitter and do stuff there, but my preference would be to figure out how we do outreach to that virtualization group so that we can get the support of the full OKD community behind them. So rather than saying, well, you guys have your Twitter and whatnot, how do we be more inclusive, so that we can get your message out along the channels we're preferring to use? Yeah, well, let's do that. Is anyone else interested in being in on a meeting with them, if we have to do something outside the bounds of this meeting to accommodate their time? It'll probably be Diane, myself, and Brian; anyone else? All right, well, we'll head in that direction, Mike. Yeah, I mean, I'm happy to join and help facilitate however I can. I don't have a lot of specific technical knowledge going into that virtualization group, but I'm happy to help from a community perspective. I think from a community perspective you're good at facilitating. So let's do that.

Before we move on, I had my hand up earlier; I wanted to ask Brian a technical question about the docs. Are the workflows there working properly? I tried to clone something out of the workflow and one of the directories was not looking quite right for me. I didn't open a bug or anything, because the toolchain I was using was a bit different, so I just didn't know. As far as I'm aware, yes. The automation is fully on GitHub; it uses GitHub Actions to do all the builds on PRs, and if you find a problem, please let me know.
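For contributors who want to reproduce that build locally before the contributor docs land, a hedged sketch, assuming the site is MkDocs-based as discussed earlier; the container image is an assumption about the toolchain, not the documented workflow:

```shell
# From a checkout of the docs repo (image choice is an assumption;
# check the repo's own contributor docs once they are published):
podman run --rm -v "$PWD":/docs:Z docker.io/squidfunk/mkdocs-material build

# Live preview; the image's default command serves on port 8000:
podman run --rm -p 8000:8000 -v "$PWD":/docs:Z docker.io/squidfunk/mkdocs-material
```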
Obviously I'm aware people use different operating systems, so we may have to update the instructions; just ping me. No, no, if it's working coming out of that repo, then it was probably just a configuration issue on my end. That's all good. Okay. And this actually came up at the docs meeting: how do we document how folks can contribute to the documentation, being able to use a container with Podman, say, to run the software that generates the site, something like the sketch above. So docs on that are forthcoming, so everyone can participate. Yeah, I'm also thinking of adding a devfile, so anyone that wants to use CodeReady Workspaces or Che can also do it on-cluster. Now that would be awesome; that would be super awesome. And a big round of applause for Brian and the awesome work he's done; this has been very, very helpful in fixing our web presence.

Okay, moving on to the next piece of business: the release changelogs. Vadim, you weren't at the last meeting, but this came up. We've received several issues related to the changelogs, where the commits look like they've disappeared. Can you explain a little about what happened?

Yeah, they did disappear, in fact. The problem is we have two forks, the installer and the MCO. Every time we make a new release, we pull in changes, rebase ours on top, and push. So the old commit, unless it gets tagged or referenced somewhere, eventually gets pruned by GitHub. And the changelogs are dynamic; they're kept in a cache, so when I check them they look fine, because GitHub hasn't pruned the commit yet. When I look back after a week, GitHub has pruned the commit, the changelog has been evicted from the cache, it tries to fetch it, GitHub doesn't find the commit, and we get literally nothing. The current solution... well, partly we got away: the installer fork is merged in 4.9, so that part of the problem eventually goes away. The MCO issue still remains. My current workflow is that I tag with some nonsense tag, like 4.7 plus the current date, say October 12, push the tags, and so on. So the solution is manual, just making sure GitHub keeps all the commits. Going forward, what we need is to either maintain the fork properly, but that would prevent us from rebasing our changes on top, since we couldn't force-push commits there and would have to merge all the incoming changes between our commit and the previous one we were based on, which is pretty tricky; or stick to, well, being careful and tagging the existing commits. There is also a CLI command which can build you the changelog between two images, `oc adm release info` with a changelog flag or something like that, so the changelog is never truly gone, but it might be unavailable for some time in the release controller. There's not much we can do about that, because of the hacky way we'll be building okd-machine-os and the machine-config pieces for some time yet.

Can we add something to known issues so that people are aware of this, so if we keep getting tickets on it we can just point them at something without having to repeat ourselves? Yeah, it would probably be a good idea to file a PR which mentions this in the known issues, so we won't have duplicates. We can also finally codify the manual workaround properly. Okay. Good.
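Putting Vadim's tagging workaround and the CLI fallback side by side, a rough sketch; the tag name is illustrative and the flag usage is from memory, so check `oc adm release info --help`:

```shell
# Manual workaround: pin the fork's current HEAD with a dated tag so
# GitHub never prunes the commits the changelog points at (tag name
# is illustrative):
git tag okd-4.7-2021-10-12
git push origin okd-4.7-2021-10-12

# CLI fallback: regenerate the changelog between two release images
# locally instead of relying on the release controller's cache
# (flag usage from memory; <...> values are placeholders):
oc adm release info --changelog=/tmp/changelog-git-cache \
  <older-release-image> <newer-release-image>
```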
All right, did anyone want to pull anything out of the discussion section of the repo? Did we have any discussions come in? I didn't see anything to pull out, but if folks saw something they want to discuss real quick, we've got a few minutes. Okay.

New business, and this brings up the location of the main community repo. Vadim, we talked about this at the docs group and at the main group meeting two weeks ago. The idea is that there's a lot of push to get a set of repos that we can all participate in, as opposed to the current one; basically it's you, Diane, some people who aren't really involved much, and Christian, I think. Is there any downside to just creating a new Git repo and moving this stuff over? Obviously Diane's going to do some looking in terms of any legal stuff, she said she would. But do you see any downside to just moving the issues, the discussions, and the other sections into a repo that more people can access and participate in?

No, that would work, but the problem is that we cannot be part of the openshift organization, because everyone there has to be a Red Hat employee and it's centrally controlled by CI. Moving out of the org, maybe a new org. Yeah, it's just a matter of naming. Maybe we should move to GitLab or something, finding a nice name. Yeah, an openshift-cs organization. Diane can find us somebody who can create a repo; it's mostly logistics there. Well, in the docs meeting we talked about that a little, and apparently with the openshift-cs stuff, it's two people who gave her access and ability, and those people have moved on; other people have moved on too, and they've changed their name multiple times. So the docs group discussion, and the discussion the main group had two weeks ago, pointed at a completely fresh org and repo, not connected. Do you see any downside to that? That should be fine. If okd is available as an organization name, that would be nice, but we will still have quite a long confusion period where people don't know which repo to file issues against and things like that. Other than that, I think it should be fine, yeah.

Does anyone else have any thoughts on that? Is anyone opposed to the idea? Is there a downside that you think makes the benefit smaller than the costs? Okay, I guess everyone is enthusiastically... oh yeah, go ahead. Did I mishear? I thought I heard Vadim say GitLab. No, he did say GitLab. Yes, and I seconded that; I'm using GitLab as well, the self-hosted version, but the world seems to be against us. So maybe Vadim could say a little more about what was in his thoughts. I was just thinking we could have a nice URL, gitlab.com/okd or maybe /openshift, whatever's there, so that we would not be bound by the CI control that prevents us from using the GitHub org. My only concern is naming; most other problems I'm pretty sure we can tackle if we can use a different Git hosting for our repos. That sounds good. I'm pretty much open to suggestions; it's just a matter of getting a cute name, so that we can list them all on our pages and people don't get confused about where to file issues and where to discuss different stuff. And I'll be honest, I like GitLab a little better just from a technical perspective. There's a little more there, because they also have a Trello-like task board. Yeah, which I haven't found on GitHub yet.
Pretty much. They have one; we used it for the meetings early on. The GitHub one is just bad, and that's the only problem with it. It's super hard to connect it to issues and things like that; it's not easy to connect work items, task items, to schedules. It's a very rudimentary and frankly quite annoying implementation, as opposed to both GitLab and Pagure, which tie cards on the board to the actual issues themselves, so the tracking of work items is synchronized with the tracking of issues. Yeah. All right, I don't want to spend too much more time on this; we've got four minutes and a couple more things to do, but it sounds like everyone's on board. At the next meeting we'll devise a plan for starting to move things to a new place, and at that point Diane can chime in with what she learned from legal.

CRC subgroup. Sorry, Timothy? Yeah, we can't hear you. We can't hear you. Okay, sorry. My only concern with moving to GitLab is how much Prow support we have there; can we still use the current CI for everything, or is this just for the non-production repos? I think the idea is that we are not using Prow anyway for any of our stuff as the OKD working group to begin with, so it's a non-factor. Nobody in the working group can use Prow at all anyway; it's controlled by Red Hat employees, so it's not useful to the rest of us. Okay, so we would move essentially all the non-build repos to this org on GitLab? Yeah, exactly. It's that weird thing with Red Hat where they have open source projects and they support community, except you can't get access to the resources for those projects. You can make the PRs, and we can approve them as safe to test, because you could make a malicious PR which exposes some of our secrets; that's the reasoning behind it. But you cannot be the owner of the repo. And I'm not feeling excited about having to okay-to-test every single PR we make for our working group, so Prow is probably just not a good fit for us. But anything code-wise, anything which lands in the OKD release image, absolutely must come from openshift repos and must use Prow. Yeah. But the management, the website, the documentation stuff, none of our working group items, none of our scripts, none of our tools have to. Right, exactly.

Okay, CRC subgroup. Neil, do you have an update on that? No. That's fine, that's fine. Bare metal testing CI group: there was discussion about that, and we'll talk about it next meeting. The office hours are tomorrow at 5 PM Eastern; promote the link. Oh, I don't have the link here, but I'll put it in the document that's shared out, and Diane has shared it out via Twitter and whatnot. It's myself and Vadim, you're going to be there? Who all is going to be there? It's Charo, myself, Timothy... I think there are four or five of us going, so it'll be lots of fun. It's only half an hour, but hey, it's something during KubeCon, which is cool. This will be the last time, I think, that we have Chris Short narrating for us, since Chris is moving on to greener pastures. Okay, I think that's it. Anyone have any last-minute things? All right, well, cool. Yes, please promote the event tomorrow, and Vadim and I will talk about the things we have on our task list.
And there will be a new task list written up based on this that will be added to the notes and then I'll send an email out with the tasks and who's responsible for them. So that we can be a little more timely with our tasks and a little more on top of them. Awesome. Thanks folks. Talk to you next time.