Hello everybody, welcome to the working group. We're going to get started here. If you haven't yet, please add your name to the working group meeting notes; the link is in the chat, and we'll get rocking and rolling. Today we have a couple of things. Thank you all for participating in Red Hat Summit last week. The Summit content is online at the registration page, and I'll throw that in the chat. Christian and Vadim did an awesome "State of OKD" talk. We had a little bit of traffic in the chat room. It wasn't as wonderful as I'd hoped, but I think they tried to reproduce the booth experience and had one chat room for pretty much every single booth. So it was not as effective as in person, but it was still an interesting learning experience for everybody. I don't know, Vadim, I saw you in the chat room a few times, and Neil was in there at one point. Do people have thoughts about how Red Hat Summit went?

It was pretty good, actually. I was pleasantly surprised at how well that worked out, Diane. I thought it was pretty amazing from an OpenShift Commons perspective: at one point we had 3,600-and-something folks logged into one of the presentations we did in the morning. I haven't got all the details on the total counts of everything, but it was pretty amazing to get that many eyeballs on some of the stuff that's going on here. I heard there were 81,000 total attendees for the Red Hat virtual Summit, which was just mind-bogglingly huge.

Yeah, it was. I think the first day there were 8,000 people logged in for Monday, which was day zero and when we hosted the OpenShift Commons gathering. A lot of them were probably just making sure they were set up for the following days, but the next day it was insane. At one point I saw 70,000, and it may have gotten up to 80,000 people registered. So it was pretty crazy.
For me, the content was amazing, and getting to see all the talks that I don't normally get to see was pretty great, but I wasn't so thrilled with the chat experience, the interactivity, mostly because it was just hard to find people. So I'm looking for any feedback on that.

Oh, the chat. That was my low point as well. It was difficult to get in touch with people, to be notified when people were trying to talk to me, things like that. That was the low point, but other than that I was impressed at how well this went.

I was too. There was a lot of upfront work getting everything there, but overall I was pretty impressed. Well, it remains to be seen. I've got two more Commons gatherings coming up that are going to be standalone virtual events using the same platform, so any feedback people have, I'd love to hear it. I'm still getting access to the dashboard behind the scenes to see what actually happened, so I'll keep you all posted.

So today, I know Vadim and Christian are both here. If you guys could give us a quick update on OKD and where we're at right now.

Yeah, sure. Hi everybody. I think we're getting really close to merging the MCO branches. 4.6 development will open next Monday, and by then I'll have all the needed PRs lined up; hopefully they'll get reviewed quickly and merged very soon after the unfreezing of the master branch next week. There's the dual-support PR I'm working on right now, for Ignition spec 2 and spec 3 dual support in the MCO, and it's quite large, so it may take a few days to get reviewed. But eventually it'll be merged into the MCO, and then we can rid ourselves of the FCOS branch fork in the MCO. As for the installer,
we may need to carry the fork a little bit longer, but we've come to the conclusion that we can actually release OKD GA with the installer still being forked, because it's easier to maintain one fork instead of two, and there shouldn't be too many large breaking changes in the installer in that timeframe anyway. So that's good.

So does that mean everything gets merged in, all the FCOS branches, except for the installer? Is that what you're saying?

Yeah, we only have two FCOS forks; only the MCO and the installer are forked for FCOS. The MCO will be able to support both operating systems from the master branch, and the installer will eventually be merged as well. But OCP still defaults to spec 2, and it's not quite clear when that switchover will happen, so until then we may have to carry the installer fork. We may even get it merged sooner and have dual support in the installer in one master branch, introducing a build flag or something so we have two different binary builds for the installer. But for now OCP still defaults to spec 2, even though we'll have dual support in the MCO already.

I think I also heard, or even if I didn't hear it I saw it in chat, that we have a beta 5 coming out shortly. Is that out the door? I think Vadim can answer that; I think it's in the works. Yes, Vadim?

Yeah, sure. I tagged the latest nightly as beta 5. It's running upgrade tests now; there were issues that made the previous ones fail, and I'm looking into the details of that. If it passes, it will officially become beta 5. If the upgrade fails, we'll scrap it and promote a nightly with the fix. So it's almost ready, but not officially ready yet. Also, another fun thing we're working on is the ability to install a single-node cluster.
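For context, the "spec 2 and spec 3 dual support" under discussion refers to the two Ignition config formats: RHCOS-based OCP still consumes spec 2, while Fedora CoreOS consumes spec 3. The same file-write stanza looks slightly different in each; a minimal illustration (spec 3 dropped the `filesystem` field, among other changes):

```
# Ignition spec 2 (consumed by RHCOS-based OCP):
{"ignition": {"version": "2.2.0"},
 "storage": {"files": [{"filesystem": "root", "path": "/etc/hostname",
                        "mode": 420, "contents": {"source": "data:,okd-node"}}]}}

# Ignition spec 3 (consumed by Fedora CoreOS):
{"ignition": {"version": "3.1.0"},
 "storage": {"files": [{"path": "/etc/hostname",
                        "mode": 420, "contents": {"source": "data:,okd-node"}}]}}
```

Dual support in the MCO means generating and serving whichever of these two shapes the node's OS expects.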
Unlike CRC, it would be a proper, full-blown OKD cluster with all cluster operators enabled. It would use a bootstrap node, of course, but you'd be able to fit all the parts onto one single machine. Of course, you won't be able to upgrade it, but other than that it's as good as any other OKD cluster. That pull request is not yet merged, but I think it should be merged eventually. Cool.

There are a few other issues we've had in the past that are also getting close to being resolved, including the missing OpenStack support we have right now. We're still waiting for the new Ignition 2.3 release, I think it is, for spec 3.1, and the PR to finish that release is open right now and going to be merged this week. So the next Fedora CoreOS base image will have that support, and we'll re-enable the OpenShift-on-OpenStack support along with it. That's something many people have been waiting for, so that should be in beta 6. I don't know if we want to make another beta release next week or wait two weeks and get all of that in with beta 6, but very soon as well.

I don't expect a lot of changes in 4.4 right now, since it has just gone out the door, so we can delay this particular beta by two weeks and we won't miss much, I guess. Yeah, I agree. And should we base the next beta release after today's beta 5 on top of 4.5 already?

That's a good question. If we could, that would be great, but I'm concerned about the stability. Yeah, I agree. We should maybe wait for the 4.5 freeze, which is planned for the end of this month, and then the next beta after that, maybe beta 6 or beta 7 at that point, would be based on 4.5 with the dual-support MCO, where we don't have the forked MCO anymore.

Yeah, I think that might be a better plan, because 4.4 right now is so stable as it is.
And I say that with quotes around it, but it is pretty darn stable, and I'd hate for us to take it backwards by introducing something funky, with some instability, in 4.5.

Well, the 4.4 builds are not going away; we would still have them as nightlies. It's just a matter of choosing which nightly to promote. That's true. Anybody that was aware and wanted to could stay on the 4.4 nightlies.

Honestly, I'm not sure what we're trying to argue. What are we arguing against? Are we arguing against doing more integration work on the 4.4 line to get OKD out, or are we saying we should move the goalposts again to 4.5?

It's rather: on the road to GA, should it be beta 5 through beta 9 and then eventually GA, or should we release less often and then GA? It's just choosing the frequency; I don't have any particular preference. Well, I think before we do GA we should have at least one, if not two or three, beta releases off of that release branch, essentially.

So right now the plan is to get all the dual-support stuff into the new master, which will be 4.6. From there we will do a backport into an FCOS 4.5 branch, which will be release-4.5 plus the dual-support things, and that will be the last forked branch we have to maintain on the MCO side. From there on out, any release branches branched off 4.6 will be the same for OCP and OKD. But we will probably go GA with the FCOS 4.5 branch, which will include a few bits from the 4.6 master. It'll be much easier to maintain just those few dual-support bits in 4.5, and then after 4.6 it'll just be one branch.

Yeah, okay. So our GA would be on 4.5, with still a little bit of the FCOS branch left in it, but our next alpha release, looking toward the next GA, would be off of the now-unified master. Exactly, exactly. So why are we making this distinction? Because we don't have the capacity to backport everything.
No, no, into FCOS, that's not what I mean; I'm sorry, I didn't quite say that right. What I'm actually asking, I guess, is why do we care about the release trains? Why don't we just switch everything forward as soon as it's merged and be done with it?

It's a stability thing. Things may break other components in master because another component might not be ready. It'll build in CI, but it's not a released thing, so there's no stability promise we can give. That's why we want to build OKD off of the release branches as well.

Okay, that's fine. I just wanted to understand why we're talking about release-branch stuff, because we haven't actually formally made a release of OKD yet. That's why I'm asking. All of this conversation about stable branches and whatnot is confusing to me, because we still don't have an OKD 4 release. So from my perspective, the way I see it is: if we can get the MCO merged, what stops us from just shipping the MCO from the future with everything else being a stable tree?

That's essentially what the plan is. We plan to release OKD together with OCP 4.5. Once OCP 4.5 goes GA, we can be sure that's stable; on top of that we will just backport the very few things that are already in master at that point into FCOS 4.5, and that'll be the GA release. So yeah, I'm expecting to release OKD together with OCP 4.5.

And what is the timing on that? Yeah, I was going to ask. You get to ask all the other questions; I get the timing one. I think the development freeze for 4.5 is at the end of this month, the end of May, and the release will be a little bit after that, so probably in June or July, I would expect. So we're looking at two more months of not having an OKD release. Yeah. Right.
Well, here's the other thing: we still don't have something on okd.io that says, hey, you want to do this now? Go do it. I don't care if it's GA or beta or whatever, but we still list `oc cluster up` because we don't have a CRC that works for OKD. We still list OKD 3.11 in a container, and we still list Minishift. At this point, I don't care whether it's beta or GA: we need to do something.

And that's what I'm working on. We do have a link on the downloads page that says "try the OKD 4 beta." But I do agree with Neil here that having `oc cluster up` and all the old stuff on the front page is not a good look. And that's another thing: we don't have CRC yet. That's what Vadim mentioned earlier; the PR that allows for single-host, single-node clusters will essentially unblock CRC builds of OKD as well.

Is that PR actually merged, or is it open somewhere? I don't know which PR we're talking about. It's open against the MCO, and it'll probably just get merged into the FCOS branch, because that's sort of the 4.4 branch. Vadim, if you have a link to that; otherwise, I'll paste it in a minute. Yeah, he can pop it into the chat. Chat doesn't work for me, so I think I'll create cards on the community board.

Okay. So for me, two months from now having GA? Okay, fine, whatever. My more immediate concern is that I want a beta people can use via every mechanism we support for OCP, and we have all of them minus CRC. Honestly, I don't care if it's a little janky. What can we do to help make this happen is what I want to ask.

Yeah, so Neil: Vadim's got a single-node cluster deployment that you can run in AWS, and I've got a repeatable one you can do either on bare metal or in libvirt. I haven't finished writing up the instructions on how you do it.
But even with the current beta, you can get a full single-node cluster up and running. There are a couple of slightly wonky things you have to do during the install process, but you can get one up and running. It's barfing a few things until that PR Vadim was referring to goes in, but it's perfectly functional and you can try everything out. It won't run on your 8- or 12-gigabyte ThinkPad; you need a fair amount of RAM, but it's doable.

Okay. So, Charo, you guys are hitting a lot of nails on the head today for me, because one of my asks was going to be: help me, help everybody, get the okd.io site to some semblance of usability so it's not all 3.11-focused. I put the link to the okd.io site here, and if anyone wants to work with me on cleaning it up, I'm thrilled to have your help. But I wanted to get Charo to talk a little more about the work he's doing with the single-node cluster and working towards getting a CRC up, because for me that's the thing that's going to change this page. If we have a CodeReady Containers equivalent, then we can replace this Minishift stuff, or just shift the Minishift container stuff over to the downloads page, and emphasize CRC and the single-node cluster stuff here. That's my game plan. I've been a bit overtaken by Summit, but I've also been waiting for something to give people to replace Minishift.

Yeah, and I think if we can get some clean instructions written up for bare metal, libvirt, and AWS, once you're there it's a more repeatable pattern. When I first started trying to get this ready, what was it, last weekend, two weekends ago? I actually got the full single-node cluster up and running using the GitHub code-ready/snc scripts.
I hacked up the script that Praveen uses a little bit, and that's how I got the first one running. What failed at that point was actually building the CRC bundles, and judging from the errors, which I didn't dig into too far, I think that's because I was running on CentOS 7.6 and it was expecting some things that are only available on RHEL 8. I think Praveen was probably doing his builds on a RHEL 8 machine, or maybe upstream Fedora. But the single-node cluster came up and ran.

What I did from there is realize that what it was doing was not that far different from how I was building my bare-metal UPI clusters on libvirt using VBMC. So I hacked up the tutorial I had put together and used it to build a single-node cluster, then realized I wasn't thinking it all the way through: I didn't need a load balancer for the bootstrap, I could actually use DNS. So I've redone it again, just using DNS A records to poor-man's load-balance between the bootstrap and the master while they're coming up. Once the master is up and bootstrap is complete, you just remove those DNS records and destroy the bootstrap node.

If you're using libvirt, once you get to that point you can shut it down, `virsh edit` your host, and add the RAM you were using for the bootstrap to that master. So if you're doing this on a little box with 32 gig of RAM, you can easily create a single-node cluster that has 24 gig of RAM, and that's actually pretty usable.

So the next steps, I guess, and my question here: are you going to do some documentation of that, so it's a reusable process for other people? And the building of the CRC bundles failed; that's not the same thing as having, excuse me if I'm wrong, a CRC? No, it's not. What it effectively would do is allow somebody to create for themselves
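The DNS-record approach described above can be sketched as a zone-file fragment. Everything here is hypothetical (domain, hostnames, addresses); the point is the round-robin A records that stand in for a load balancer during bootstrap:

```
; While bootstrapping: point the API names at BOTH the bootstrap and
; the master node. Round-robin A records act as a poor-man's load balancer.
api.okd.example.com.      IN A 192.168.122.10   ; bootstrap (hypothetical IP)
api.okd.example.com.      IN A 192.168.122.11   ; master
api-int.okd.example.com.  IN A 192.168.122.10
api-int.okd.example.com.  IN A 192.168.122.11
*.apps.okd.example.com.   IN A 192.168.122.11   ; routes only ever live on the master

; After "bootstrap complete": delete the two bootstrap records and
; destroy the bootstrap VM; the master then answers everything.
```

Clients retry against the remaining record once the bootstrap entries are removed, which is why no real load balancer is needed for a single-node install.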
what you actually get with CRC. Because really, what CRC is, and I may be speaking a little out of turn because I've only tinkered around the edges with it, is basically a pre-built virtual machine that's bundled up so you can pull it down and run a script that configures your local environment. So whether your local environment is Hyper-V or libvirt or, I forget at the moment the one that's native to the Mac, it configures this bundle to run that virtual machine there, and you get a locally running instance.

For anybody that wants to do significant work with it, I'm not sure running it on a laptop, even a nicely beefy laptop, is something someone's going to want to do. Running it on a little sidecar server is probably a better option, and that was the approach I was taking. The problem is that this stuff eventually has to, in some form or fashion, work on people's laptops. I think the changes to the MCO and all these other things for CRC have been working towards shrinking the minimum viable OpenShift so that people can run it there.

Yeah, and that's some surgical work we could really start doing after the fact. At least personally, one of the challenges I've observed is that we have to go back into the OpenShift code itself and undo things that were written into it for a three-node environment. I think the pull request we've got open around the etcd quorum guard is a good example. Because it was built for a data center, it says: if I don't have three members, I'm not healthy. That was coded into it, so we have to go back and make these things configurable to run in a non-data-center-ready environment. The other upside is that you could do clever things like have three of them running in containers and have them form a quorum on one node.
It's stupid, and you shouldn't do it in production, but those are the kinds of things I did to make OpenShift 3.x work on a single machine when I wanted it to pretend to be production. That's some of the stuff I think we'll have to work through to get it there and get the size down. Prometheus is another example: it runs two pods with like seven containers in each pod, and that's probably overkill for a single node. Yes, it's way past overkill. There'll be other things like that. Well, if you want to test out monitoring... Okay, we're going to have to surgically pare down the size of some of these operators and what they expect to be there to call themselves healthy. Right.

So, in my quest to get better material for getting people started on this page, the okd.io site: do we put in "try out the latest OKD 4 beta release as a single-node cluster" and "get started over here"? Is that even the right link? Yeah, that's the old `oc cluster up`. I think we would need to put together some docs around building a single-node cluster. And I'd love for someone to take what I put together and polish it up. I'll make it as solid as I can and try to get some work done on it this next weekend.

Yeah, so at the very least this line here should point to the "try the beta release" page. Yeah. Okay, so that's wrong. I'm just trying to establish what's wrong here, because this stuff I'm pointing to, the `oc cluster up` and "get started," is all 3.11 stuff. Right, nothing here... The page is whacked, let's just put it that way. It's post-Summit; we can fix this, but I was hoping we would fix it with either the single-node cluster install process or CRC, and then announce the rebooting of this page or something. Yes. And I am happy to take a look here. I think everybody can see this. If you see broken links,
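One concrete shape the quorum-guard workaround takes is the etcd cluster operator's unsupported-override mechanism. This is a sketch, not a supported procedure; verify the field name against the operator version you actually run:

```
{
  "spec": {
    "unsupportedConfigOverrides": {
      "useUnsupportedUnsafeNonHANonProductionUnstableEtcd": true
    }
  }
}
```

Applied with something like `oc patch etcd cluster --type=merge -p "$(cat override.json)"`. As the field name loudly advertises, this is unsafe and non-HA, and only makes sense for single-node experiments like the ones discussed here.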
this is where this page lives. Throw that in the chat if you haven't done so already. One of the things I like about the single-node cluster versus CRC is that you can share the single-node cluster without setting up some funky HAProxy routing or something. Yeah, with CRC you can only get to it from whatever machine you're running it on, because it hooks into dnsmasq on your computer. Right, whereas... You can use HAProxy to set up CRC nodes on remote machines; we've done that on servers in our infrastructure here. Oh my God, that's horrifying. It wants to own that crc.testing domain name, though, so it becomes really confusing. Right. What I'm doing: if you have access to DNS, you can create A records and give it your own cluster name. Right, that's what I do at home. But yeah, that's not good; you can't do that inside of a large organization.

All right. So I'm trying to parse where we're at right now: getting the documentation for the single-node cluster somewhere in the repo where it can be linked to, whether or not it's on that first page.

If possible, I just want to revisit something Neil said a few minutes ago about deploying OpenShift as a fully containerized installation. This is something I'm really curious about, and I've been looking at it from the perspective of the Cluster API work that we do, because there is a way to make Cluster API look at a provider that's a container-based provider. There's a lot of work that would need to be done here, but I'm curious to see if there's a way we could make the OpenShift installer use this container-based provider to get us to an endpoint of having a single-machine architecture, with containers as the backing for what would normally be nodes in a cloud situation.
Or are you thinking about this more simply, like the bootstrap process could be simplified in that manner as well? The bootstrap container, yeah. And for OpenShift, theoretically that's possible; the issue is that you cannot create a container from an Ignition file. You're asking for a challenge; you can totally create one. The problem is that you need to init it, and that would mean you'd either have to use systemd-nspawn or somehow trick Podman into doing enough to make systemd run a boot sequence. But if you do a boot sequence, that's definitely not a container. Oh, it is a container; it's not running a kernel. It just does enough to boot up the init and start Ignition, and that's it. I mean, if it works, I'm super fine with that. I'm not saying anything about whether it works or not right now; I'm just saying that is a way to do it if we wanted to do something.

Any resolution there? I think the single-node cluster is a topic that will follow us along a little bit more, because there are a few different proposals for upstream OpenShift going on right now about how to solve it, and we don't really know how that's going to end up. Of course we'll have to follow whatever upstream decides eventually, so whatever we do right now may change in the future, but I think it's definitely good to get it working on the 4.4 code already.

And, sorry, if we could add it into the getting-started section, is that the appropriate place to put it? You know, how to run as a single node, on the getting-started page? Yeah, that would make sense. Yeah. I'm popping around too many places. But if you could just put basically, Charo, what you know so far in there as a section, that would be helpful, and that would be something I could link to from the okd.io page.
Then everybody could help correct your grammar and fix things. Oh, there'll be plenty of that. I've also created some helper scripts and files, like for your DNS and things like that. Do you want me to just link to where I've been keeping that in my GitHub, or do we want to do a pull request and create a section in OKD for the single-node cluster? I would like it to be in OKD if we could, but I would settle for it in your home directory for now, there first, just to get us started. Then, if folks help with simplifications or fixing issues, we can pull it over and make it part of OKD. That would be a great thing, in my humble opinion. So let's see if we can get that going in the next couple of weeks, and don't worry too much about your grammar. And then, if we can, have a list of any variations on the theme that we have to do to make it work on other platforms.

I live with a former teacher, and she is very much onto my grammar. So Wendy will take a look at anything; she'll fix it. All right, and everybody else will test it.

I did actually have one more comment on something I forgot to mention about building the single-node cluster using the CRC things that Praveen has done. The FCOS instances it spins up for both the bootstrap and the master node by default only have 8 gig of space allocated for sysroot, which is grossly undersized to fit everything that goes into a single-node cluster. So I actually had to modify the Terraform config and build a custom installer; I had to modify the Terraform code in the installer to create a 34- or 32-gigabyte disk, and then it successfully ran. Doing it the way I'm doing it with the UPI install stuff, I didn't have to do that, because I'm building the libvirt VM and telling it how big a disk it has. I don't know if anybody here has insight into how, with the IPI method that CRC is using, it knows how big to make the sysroot.
I don't think it's possible, but we could ask the Fedora CoreOS people to extend the size of their images. It would still... Yeah, basically I would rather solve this on the Fedora CoreOS image side. Okay, I'll get with you offline then on where we would need to open an issue. It was an easy change, but again, it's hard-coding the size, so I wouldn't necessarily want to plant that in the installer.

All right, so we've beaten that dead horse. I can see that next time we need to ask Zvonko Kaiser to come talk about GPUs on OKD 4. Azure still sounds like the outlier: we still don't have a Fedora CoreOS image officially up there; is that a correct statement? I think nobody has cared so far; I've heard nobody has carried it forward to make it so we could get an official image up there, so as far as I know this is a Red Hat-side problem. I will look into it and see if I can figure it out. It's not like we aren't making the images; we just can't do anything with them. No, we can't upload them ourselves.

All right. And the beta is available, but there's still a blocker on OpenStack; I'm just going through the list here, and that's still true. And we haven't done anything yet around documenting how an OKD 4 release is built. That was one of the things we wanted to do. I haven't done anything with it since Dean created some initial material around hacking on OKD 4. Okay, I will follow up and see.

So I think that's essentially maybe two different things. The main OKD releases and all the CI builds are defined in the openshift/release repository. That's where all the CI jobs in the OpenShift organization live, and that's also where OKD builds come from; at the moment they then get promoted to beta releases from there.
And then we could also have documentation about rebuilding everything on your own somehow, but maybe we should already link to the release repository, because you can just check out the files there and dig through all the jobs. It's not super easy to see what lives where, but you can definitely check it out and dig in there already. So the release repository is really where everything lives. Rather than asking people individually to dig in, I think what we need is some sort of short documentation about what all the pieces and parts are. Personally, I'd rather get this single-node stuff done in the coming weeks. Yeah. And selfishly, that was my priority too, because I wanted to have one running.

So I hadn't looked at it; it's the CONTRIBUTING.md that's in the OKD repo. That's the document Dean started working on. Yeah. Okay, well, I will find that and see what we're doing. But let's just keep in the loop over the next two weeks and make sure we get to a decent single-node install. And if people can take a look at okd.io and give me feedback, even if you just want to print a picture of the page, put X's and O's on it, or whatever, and send it to me, I'll try to reshape the page as we go along and fix things up. You can also make pull requests against it, too.

Is there anything else we should be covering? I was going to mention that KubeCon EU went virtual on us, so we're not going to host an OKD working group meeting physically at that place and time. But we may do something in the background; I'll try to figure out what the possibilities are when I meet with the CNCF folks later this week. Maybe they'll make a chat room available for us to lather on about wonderful OKD stuff. But I'm not sure yet.
Did any of us get OKD-related talks, or Fedora CoreOS-related talks, accepted to this virtual KubeCon that we should be promoting? If you think of any, send me a link and I will turn that engine on. There are several talks which are related to OKD, but not directly. For instance, the cgroups v2 talk from Joseph covers what we're planning to have in OKD once we rebase to Kubernetes 1.19. I'll find a few more slightly related to OKD as well. Maybe an OKD guide, too.

I think I've got a talk scheduled on Friday to discuss the virtual KubeCon with The New Stack, so if there are things we want to promote, I can sneak them into that conversation as well and get the word out.

Is there anything else we should be talking about today? We've got 10 more minutes left. I'm not going to show anybody my COVID haircut that I let an 18-year-old kid cut, so I'm not going to be on video for a few days. I was like, I could go back to my long-haired 90s days. I already lived the long-haired 90s days, so I'm good. I'm slowly getting there, actually. Barbershops reopened this weekend here in Germany, but I haven't gotten around to going. We need your long-haired self to show up at least once. I went back to the 80s with my haircut. It's neatly hidden under a red hat right now. I got a Red Hat cap that I've been wearing the past couple of days. It doesn't look bad, it's just very short.

So, the next meeting we have: Christian, what do you have in the calendar? It should be in two weeks; the next meeting should be two weeks from now. I think we shifted the cadence by one week because of the virtual summit, at least that was my understanding. I thought that was done as a one-time thing. Okay, so I'm fine with either.
We could do the next meeting next week already, then, whatever you say. I'd really prefer not screwing up the cadence if we don't have to, because it's hard enough getting all the scheduling done right to begin with. Yeah, that's fine. So let's do it next week, same time, and then two weeks after that again. Yeah, cool.

So by next week, just to answer Joseph's question about the procedure that leads to GA: I've written up an enhancement proposal that I will put up in the OpenShift enhancements repository very shortly, and that will be the living document where everybody from the engineering and architecture side in OpenShift chimes in to nail down the GA definition for OKD. Of course, we'll have our recommendations in there. So that should be up tomorrow; I just want to get some code ready first so I can say, you know, it's not a lot of work, it's already done, I just merged this PR and we're there. I will post a link to it in the OpenShift dev channel and all the Slack channels where we usually communicate.

And I'll ask you about it again next week, so if you can throw me the link, we'll add it in here. Oh yeah, definitely. If it's at all possible, by next Tuesday, have that write-up added into the README.md. That would be great, and then we could look at it. Just focusing on getting the single-node installation done and okd.io updated over the next week would be my priority list, everybody. So please make pull requests against the okd.io website; if you see anything wrong or have suggestions, just throw them in the issues for okd.io and I'll try to adjust and address them.

It looks like a cold and rainy weekend in Roanoke this weekend, so yeah, that'll happen. We'll just have to wait a week. I thank you all for joining in, and we will see you in one week. All right, have a great rest of your day, guys.
Yep, you too. Y'all too. Bye. Bye.