Hello, everybody, and please, if you could add your names to the attendee list. The link is in the chat, and we'll get started with this. Okay, the working group meeting today: Christian Glombek, one of our co-chairs, has the flu. I'm not sure which flu it is, but we're wishing him well. He is staying home and not available for today's call. So, if you could add your name to the attendee list, that would be great, so we can keep track of who's here and where you're from. And we'll just drive through today's meeting. It may not take the full hour, but we'll see where we go. All right. So Vadim, you're here, which is great, because Christian tapped you on the shoulder to give the OKD 4 update, and Vadim also has an OKD on GCP tour, so we'll use that today to drive some conversations. And just a quick update on upcoming events. There still is not a date posted for KubeCon EU. If it does happen, we will have a face-to-face OKD working group meeting there. We were going to hold a virtual OpenShift Commons gathering to replace the Amsterdam one, but we have now cancelled that; we are, however, having a Red Hat Summit OpenShift Commons gathering, and it will be virtual, on April 27th. I'm going to ask Vadim and Christian to pre-record, and probably get Dusty to pre-record, an OKD 4 / Fedora CoreOS talk. It won't be part of the agenda for the day. The agenda is very curated, down to mostly customer talks, one State of OpenShift 4 talk, and a Kubernetes release talk, just because we don't think people are going to watch an entire day-long event, but we'll see. But they will watch stuff. So we're going to take all of the talks that aren't getting broadcast as part of that day, pre-record all of those, and have them available. So we'll be doing that. And as far as I know, almost all of the other things over the next few months have been cancelled or postponed.
But my offer to anyone here: if you've got a topic, whether it's OKD-related or Fedora CoreOS, that you want to record a briefing on, we're basically taking all the content from the KubeCons and Summit that isn't getting broadcast, pre-recording it, and making it available on our YouTube channel, which is RH OpenShift. So that's sort of the state of events here. Any questions? Comments about that? OK, cool. Pretty much the state of the universe. So maybe Vadim, if you could give us a little update on where we're at right now with OKD 4 and what, if any, the blockers are. If you want to share your screen. So here are the notes. It's been quite an active two weeks for us. The biggest news is that we finally have proper documentation on docs.okd.io. It's pretty much a carbon copy of the OCP 4.3 documentation, so it has a lot of inaccuracies, a lot of mentions of RHCOS. What we would like you to do is skim through it, find the parts which are irrelevant to Fedora CoreOS, like IBM Z, and file bugs to the OKD repo. We will arrange them together and ask the OpenShift docs team to put a priority on those. Later on (I'm not too familiar with the documentation and the way they template things, but after a couple of tries) we would be able to submit quick fixes ourselves manually. Status of the platform support: AWS, vSphere UPI, and bare metal are known to work. We've fixed quite a few bugs during this cycle, but those are pretty stable. We've finished the support for GCP IPI. In my case, the bootstrap node did not get destroyed, but we need more information; if that happens for everybody, that's a bug we'll fix. Note that there is no official image uploaded to a Fedora CoreOS-controlled bucket yet, so for now you would have to upload it yourself manually and override the path to it using an environment variable. I'll also be showing the install process later today. Next, oVirt, RHV, and the like should also work.
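The manual override mentioned above might look something like the following sketch. The variable name follows the installer's documented override convention, but the bucket URL is a made-up placeholder; check your installer version before relying on it.

```shell
# Sketch: point openshift-install at a manually uploaded Fedora CoreOS image.
# The URL below is a placeholder; upload your own image and substitute its path.
export OPENSHIFT_INSTALL_OS_IMAGE_OVERRIDE="https://storage.googleapis.com/my-bucket/fedora-coreos-gcp.tar.gz"

# With the override exported, the installer is invoked as usual, e.g.:
#   openshift-install create cluster --dir=my-cluster
echo "OS image override set to: $OPENSHIFT_INSTALL_OS_IMAGE_OVERRIDE"
```

Once an official image lands in a Fedora CoreOS-controlled bucket, this step should no longer be necessary.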
There is one bug folks have hit; it has a workaround, but we need to confirm whether every other installation hits it as well, because I didn't hit it in my install. Hey, Vadim, when you say it... oh, hello, sorry about that feedback. That's oVirt UPI, right? Or is that oVirt IPI? That's oVirt IPI. Oh, okay. Okay. So the IPI works. I might give that a try. That would be great to know. So next, OpenStack IPI also works. I didn't try the UPI, but since UPI is pretty much what IPI does with modifications, that should also work, of course; it's just more complex. The biggest pain point right now is Azure, because we're still blocked on tickets from Fedora. Hopefully those will get resolved soon. On the beta status, we have a beta milestone, currently with six open issues. Two of them will be tackled by upcoming MCO rebases. Three tickets are basically tracking issues in OKD for the Azure problems. And the oVirt credential issue, as mentioned previously, is what we need confirmation on, and hopefully it will be tackled soon. Next, on the official release, the main criteria are that we don't use a forked installer and MCO, and that we test upgrades. The upgrades are fairly easy to enable; we don't do that yet because we want to push to Quay first. Once that's done, we will enable upgrades and have them properly tested. The forks are a bit more complex; we need to work with the installer team on the approach, but we've started working on installer and MCO integration now, so hopefully it will be tackled soon. Okay, and by no forks, you mean that if we're building the installer from source code, we don't have to use the FCOS branch anymore, but we'll be able to build from one of the release-numbered branches or the master branch? Yes, correct. Also, you would file issues straight to either Bugzilla or the installer GitHub repo, because that would affect the code that the installer team supports. So let's go with the questions on the status. No questions, do we? No, I'm very happy.
I have a fully repeatable process now using bare metal UPI where I can create and destroy clusters, and it is consistently successful. On what platform? I'm actually using libvirt with VBMC to emulate bare metal. I'm getting ready to publish fairly comprehensive docs on how to do it on my GitHub page. It's targeted at my internal team here, so it'll have a lot more instruction than you guys would need, because it'll tell you how to install DNS and set up your BIND server and everything. That's pretty good, actually, for just about everyone. Yeah, so probably this weekend I'll have it done. If you trawl around in the project on my GitHub site, the docs are all in there, but the readme doesn't link to them yet because they're still kind of a work in progress. And I'm working now on making it work with fixed IP addresses, but we may have run into an FCOS bug or something, because it's not keeping the fixed IP addresses; it works great with DHCP reservations, though. Yeah, persisting the static IPs is a bit tricky, because dracut is not saving them as NetworkManager keyfiles but saving them as ifcfg files, and those are getting ignored. Yeah, that's basically what Fedora should fix, but we have a workaround for that. Yeah, yeah, you could just read the ifcfg files. I don't know why we're not. What's stopping us from doing so? Those are legacy: the ifcfg files, NetworkManager reading and writing them. Why are those legacy? Yeah, ditch those bash files. Those must die in a fire. At least that's what Fedora 31 wants us to do. Yep. But they don't have to be bash files. That's why they're declarative. It could just be, you know, read-config things. They're not like /etc/network/interfaces, which is actually a shell script that has to be executed to run. Which, by the way, if you never knew that, today you learned. That's not our call to make. We're following upstream and we're sticking to that. In any case, let's switch to the demo.
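For contrast with the legacy ifcfg scripts discussed above, this is roughly what the declarative NetworkManager keyfile format looks like for a static IP. The interface name and addresses here are made up for illustration.

```ini
; Hypothetical NetworkManager keyfile for a static IP, e.g.
; /etc/NetworkManager/system-connections/ens3.nmconnection
; (interface name and addresses are placeholders)
[connection]
id=ens3
type=ethernet
interface-name=ens3

[ipv4]
method=manual
addresses=192.168.100.10/24
gateway=192.168.100.1
dns=192.168.100.1
```

Unlike /etc/network/interfaces, nothing here is executed; NetworkManager just reads the key-value pairs.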
Yeah, can we pause for a second, Vadim? Can you go back to your notes for a second here, your notes file there? So for the release: how are we in terms of getting an official beta release out the door? Where are we at? I would like to wait for resolution of the bugs we are hitting right now, the oVirt bug being probably the most critical. We can live with the bootstrap node not being destroyed; that's pretty easy to tackle. And we'll give Azure a couple of days. I will contact the Fedora CoreOS guys about their progress on Azure. If we aren't able to tackle those in a couple of days, we will have to push them past the release date and go to beta without them. So hopefully a week, maybe. Okay. So what I would like to see is, if you can flag the whole group, and especially me, when the oVirt thing gets resolved properly, then we do some sort of beta announcement, with or without Azure. We seem to have some consensus here that we've got good cross-platform support, and if we could aim for next week, that would be great. And I'm wondering if I can coerce... is it Charro who was talking, or was that Neil, about having success there a minute ago? I think that was... into taking some of the experience that you had and turning that into a public blog post and some content we can use, in Fedora Magazine or just a regular old OpenShift.com blog. Oh, yeah. Actually, you could probably coerce me into doing that. My friend Matt Bowling, he works on the consulting side of Red Hat, has been trying to get me to do something like that for a while. Yeah, because it would be nice to have the announcement coincide with an external person writing a blog post about it. I'm not sure where we would publish it, but yeah. And Chris is offering some editorial help as well. So I think...
I know part of my problem with getting the Fedora Magazine article out is just time, and having at least a minimal amount of content. I'm good at editing things and shifting things and getting the messages right, but if I don't have some technical content and links to it, it's just more blah, blah, blah from a talking head. And I haven't actually managed to set up an OKD anywhere, so I don't know what to say other than the generic "hey, you can now do this, and FCOS goes with this", and then... What I'd like to do is, if we can get to next Friday. So I'm looking at the calendar here now: are we talking beta on the 24th of next week, or aiming for next Friday, the 27th, for having at least the oVirt thing resolved? I would rather go to Friday, because we also need to do a lot of churning before that, like publishing releases to Quay and discussing further steps with Clayton. We have tons of documentation to update, so I'm not sure. Yeah, so let's aim for that. Let's put a bit of a stake in the ground here, excuse me, and say the 27th. And if I can ask everybody on the call to look at the docs this week and next week and file as many bugs, point out as many problems as possible, we'll try and clean up the docs for the 27th as well. So a little light reading over the next little while; get those bugs in, because it would help immensely to have some more eyeballs on that. And then maybe we can get someone from the docs team on the next call, which would be the 31st, to talk about the way forward with docs and moving that forward. Does that work for everybody? I would really like to get this out the door before the end of March, at least to beta. Yeah, I think I could have a blog post for you guys to kick around by the 27th, because I should have the documentation on what I've done in the lab (so that my team here can reproduce it) done this weekend. That would be great.
And then even that is a baseline. And then, between Chris, if he's willing, and myself, and hopefully Christian gets a little bit better, we can get everybody else to add in their two cents about it too, and have some content to get out there. That would be a really nice thing, because with so many crazy things happening, we'd be able to give all the folks out there in virtual self-isolation something to play with over the next few weeks. So how about that, team: that tour of OKD 4 on GCP? That would be a wonderful thing to see. Sure. That's the final result, but the first step is here. I didn't bother with a proper video; I've got a bunch of screenshots and a lot of talking, but here's what we have. We will show an install of OKD 4 on GCP using the IPI flow, show a few known issues, and take a quick peek at what the installer is actually doing. The UPI flow would be pretty different, because there you have to set up things yourself, and in our case we let the installer take care of that. So here's the install-config template I'm using. There is not much different from other providers, actually. The difference is, of course, platform GCP, your project ID, and the region you're using. The rest is pretty much the same. It's, in fact, a Jinja template, because I'm templating those using Ansible and creating clusters on demand. I wrote my own wrapper around the installer, where I can run make gcp or make aws, and I have multiple clusters running at once. Note that it's not designed to keep a long-term cluster alive; it's just to quickly spin them up and destroy them. So I just run make gcp, and it does the thing. And here is the link to that; you can dig into my terrible Makefile skills later. So what it does, in fact, is pull the latest installer (we're pulling the OKD 4 installer from origin) and verify that it's the correct version and that the release image matches the desired one.
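For readers following along, a trimmed-down sketch of what such a GCP install-config might look like. The field names follow the OpenShift installer's install-config schema, but the domain, cluster name, project ID, and credentials below are placeholders.

```yaml
# Sketch of an install-config.yaml for GCP IPI (values are placeholders).
apiVersion: v1
baseDomain: example.com          # placeholder base domain
metadata:
  name: my-okd-cluster           # placeholder cluster name
platform:
  gcp:
    projectID: my-gcp-project    # placeholder GCP project ID
    region: us-east1
controlPlane:
  name: master
  replicas: 3
compute:
- name: worker
  replicas: 3
pullSecret: '{"auths": {}}'      # elided; supply your real pull secret
sshKey: ssh-rsa AAAA...          # elided
```

As noted above, only the `platform` stanza really differs from the other providers.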
Next, what it does is template an install-config from the GCP template, using the base domain for our GCP account. It's literally calling an Ansible command to template it. We're also copying my file for the cluster to a temporary folder, and we're saving a copy because the installer consumes the install-config.yaml. Then later comes this huge, terrible command, which starts the installer from the image we have just pulled. Let me walk through it. First of all, we mount the folder with our cluster, because you can have several of them running at a time, and we make the installer output to this particular directory. Next, we mount the Google credentials, and finally, we run the create cluster command, because the entrypoint in this image is openshift-install. We also override the release image, just to be sure, and currently you have to override the OS image used by the installer, which points to our local copy of Fedora CoreOS. Here's the output from the installer, and things start running. Basically, it notifies us that the OS image has been overridden, and so has the release image, but that's what we expect. So after about five to ten minutes to create the necessary resources, we would see that instances in our console started creating, and the bootstrap instance got assigned an external IP. If we SSH to that IP, we would see that the initial Fedora CoreOS image has started, and there is a bootkube service there which does all of the jobs, and we can watch it using the journalctl command. So the first thing the bootkube service does is upgrade us to the latest image from our release payload. It extracts the machine-os-content, pulls it, applies it as OSTree content, and then finally reboots. After the reboot, we would see that the Fedora CoreOS version has changed to the latest (the one from the 6th of March, when I ran this), and the bootkube service continues.
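The "huge, terrible command" described above might look roughly like this sketch. The image names and mount paths are placeholders, and the command is only printed here (a dry run) rather than executed; the override variable names follow the installer's convention.

```shell
# Hypothetical containerized invocation of openshift-install.
# Dry run: we only print the command; pipe it to `sh` or drop the echo to run it.
CLUSTER_DIR="$HOME/clusters/gcp-demo"                 # placeholder output dir
INSTALLER_IMAGE="quay.io/example/installer:latest"    # placeholder installer image
RELEASE_IMAGE="quay.io/example/okd-release:4.4"       # placeholder release image

CMD="podman run --rm -it \
  -v $CLUSTER_DIR:/output \
  -v $HOME/.gcp:/root/.gcp \
  --env OPENSHIFT_INSTALL_RELEASE_IMAGE_OVERRIDE=$RELEASE_IMAGE \
  --env GOOGLE_APPLICATION_CREDENTIALS=/root/.gcp/credentials.json \
  $INSTALLER_IMAGE create cluster --dir=/output"

echo "$CMD"
```

The key pieces match the walkthrough: a mounted per-cluster output directory, mounted Google credentials, the release-image override, and `create cluster` passed to the image's openshift-install entrypoint.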
That's basically a huge difference from RHCOS, where the initial image does not pivot onto itself. During the process, on the bootstrap node, you can already export the kubeconfig from this directory, /opt/openshift, and start watching what's happening in the cluster once the API server assembles. Oh, this is pretty small. Let me fix that. Sweet. So the first thing we see is... like this. That's much better. Can you all see that? Yeah. Yeah. Great. So what we see here is that the first operator, called version (it's the cluster-version operator), started progressing, and it has started the network operator, which is also progressing. This is why three of our masters are not yet ready: no network configuration has been installed on them yet, and we have tons of pods hanging in Pending, because the nodes are not yet ready. And when the network is finally installed, those masters report that they are now ready, other operators like machine-config start progressing, pods are in a creating state, and so on. The difference from the UPI flow here is that there are no workers yet; they physically don't exist. That's expected, because we use the Machine API to create them dynamically. We define three machine sets, each of them in a different availability zone, and we want one machine in each. After some time, when the machine-api operator starts progressing, it creates machines. They get a provisioning status (that one has probably not been processed yet), and later we would see that new workers have started appearing in our GCP console. Eventually, the network configuration would be installed on them, the necessary files copied, they would become ready, and more operators would start their progress. The most critical ones are, of course, authentication and the kube-apiserver, basically. There are lots of CrashLoopBackOffs.
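To illustrate the one-machine-set-per-zone idea, here is a heavily trimmed sketch of what one such MachineSet might look like. The cluster name, zone, and machine type are placeholders, and most of the required GCP provider fields are elided.

```yaml
# Sketch of one of three per-zone MachineSets (values are placeholders).
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  name: my-cluster-worker-us-east1-b   # one MachineSet per availability zone
  namespace: openshift-machine-api
spec:
  replicas: 1                          # one machine wanted in each zone
  selector:
    matchLabels:
      machine.openshift.io/cluster-api-machineset: my-cluster-worker-us-east1-b
  template:
    metadata:
      labels:
        machine.openshift.io/cluster-api-machineset: my-cluster-worker-us-east1-b
    spec:
      providerSpec:
        value:
          machineType: n1-standard-4   # placeholder instance size
          region: us-east1
          zone: us-east1-b
          # ...remaining GCP provider fields elided...
```

Scaling `replicas` up or down is how the cluster later grows or shrinks without manual provisioning.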
There are operators that aren't ready yet, but that's pretty much expected, because authentication has not yet created all the necessary certificates, and so on. But in the end, we would see that the install has completed. The OOM-killed pods are irrelevant; that's actually a bug and should be fixed. All machines are ready, and we're done. We can run oc status, and yeah, that's pretty much it. On to known issues: again, you would have to use the openshift-install OS image override, because the image is not yet uploaded. Once it's done, we will update the installer to point to the correct location. In my case, bootstrapping never actually completed. Rather, it did complete, but openshift-install on my side never noticed that and never offered to destroy the bootstrap node. I'm pretty sure the problem is on my end, because everything looks like it should work; if we get this confirmed, we'll dig into it a bit deeper. Hey, Vadim, go back to the slides. Yeah, so those out-of-memory-killed pods: I don't know if I see it on etcd, but I see the same thing when I'm doing the UPI install in my lab. It doesn't seem to adversely affect anything, because the correct pods do appear to be running, but if that's a bug, I might be able to provide some additional information, because it's happening on the UPI side as well. Yeah, it also affects OCP. There is a Bugzilla for that. Those pods update the kube API schema, and they end up using way too much memory; because we're limiting them, they get OOM-killed once they have used all of their limits. Okay. It shouldn't affect anything. It can, though, so it's a bug which needs to be fixed, of course.
Yes, but since OKD just forks the installer and MCO, the rest is the very same as what we have in OCP. So every single bug that's outside of the installer's field or the MCO's field of putting the proper files in place goes straight to OCP, of course, but we would like you to notify us that you found something and file a bug on the OKD repo as well, so that we know how bad things are. Any other questions? Hey, Vadim, it's Danny. I just have a quick one. I might be off a bit, but I think a while ago we were discussing moving to the etcd operator being used as part of bootkube and all this kind of stuff. Has that been done, or is it still on the plan, or what? Yeah, it is. I barely know how it works, honestly, but it's there. So bootkube is basically using the operator nowadays, is it? Versus when we used to be on, I don't know, 4.0 or whatever, 4.1? Yes. Previously, what we did was ask MCO to template the etcd member static pod definitions. I think the etcd operator is doing that for us nowadays, but before the official OCP 4.4 documentation goes out, I don't think we will have a proper description of the process. Okay, yes. Let's find out what's happening. Any other questions? How long did it take for you to set this up? The whole process, about 30 minutes. At maximum, the installer will allow 20 minutes for the bootstrap and 40 minutes to set up the cluster, and I think it's about 20 minutes for the infrastructure. So it cannot take more than an hour and a half; it will fail in the middle if it takes that long. And this is set up IPI style, right? So that means that in the console, you can do things like grow the cluster and whatnot, and everything just kind of magically happens correctly? Or are there caveats there too? Once I run create cluster, I'm not touching it at all. I'm just watching it fail or eventually succeed. And... wait, how do I do that? Oh, yeah.
And yes, due to the Machine API being supported here, I can scale machines and things, and I won't have to provision them manually myself. That is the cool part. Any chance you might remember, Vadim, whether we support the Machine API on VMware? Because I remember in the past it was not ready for that. Yes, we have a Machine API on VMware. And I think there is work in progress to make a VMware IPI. I don't think we have this in the Fedora CoreOS installer yet, because we didn't rebase, and I'm not sure about the code. But yeah, it's totally possible. Thanks. Yeah. And Neil, in the lab work that I've been doing, even though it's UPI with libvirt, if you provision, just with virt-install, another machine pointed at booting off of the worker Ignition config, the only thing you have to do is approve the CSR. Oh, cool. Yeah. And so before I left for the weekend, I've got a cluster running at home with the usual three masters and six workers. And you can just keep adding workers to it until you run out of CPUs and RAM. That's cool and terrifying. But it's awesome. Yeah, you need this as well. So for libvirt, what you can do is install the Cluster API libvirt actuator, I think, and you would be able to create machines in a similar fashion, using machine sets and machines, without approving CSRs, because that's taken care of by the Machine API. And yeah, you would get the very same experience, basically. That's nice. That would be very cool. All right. I guess that's all. Yeah, thanks for sharing that, Vadim. That is awesome. That was fantastic. Yes, that was really great. I think that's awesome work. I just wanted to say, I think you said earlier in the meeting that the OKD content on the documentation site was under OKD Preview. I believe it's actually listed as OKD Latest. I just wanted to double-check, as opposed to the word Preview. But this is the latest documentation, and this is what we're asking people to take a look at.
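The manual CSR approval step mentioned in the UPI discussion above is typically a pair of `oc` commands. A hedged sketch, shown as a dry run that only prints the approve commands (the Go template is one common way to list unapproved CSRs; verify it against your `oc` version):

```shell
# Sketch: list pending kubelet CSRs and approve them.
# Requires `oc` and a valid KUBECONFIG; drop the `echo` to actually approve.
pending_csrs() {
  oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}'
}

for csr in $(pending_csrs 2>/dev/null); do
  echo oc adm certificate approve "$csr"
done
```

With IPI and the Machine API, as noted below, this approval is handled automatically.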
And then, if they find verbiage that's wrong or instructions that are wrong, just log a bug. And you're asking us to log the bug in the OKD issues, is that correct, Vadim? Yes. You can go straight to openshift-docs first, but don't expect the issue to get a lot of attention there. Also, we would like to review the changes first and advise on better wording and the most effective approach. As for Latest versus Preview: I don't mind changing it. Latest does not look like OpenShift 4. It has things like Minishift in here; it still mentions Ansible playbooks and stuff, so that doesn't seem right. Yeah, that's kind of what I was pushing at here: is that the right stuff to be there? Okay. Hi, Dan, it's Michael Burke. I'm the docs point person. Yeah. Hi. Hi, docs person. Hello. I've been lurking here; I'm a little scared to jump in. Yeah, the latest docs were pulled off of our master, which, yes, is 4.3. Do we want to wait until 4.4 merges down to master? No. Having a Latest section of docs that actually is OKD 4 is really, really good, but it is very, very weird that I'm still seeing stuff from OpenShift 3 and OKD 3 in the Latest tab. Minishift is not a thing on OKD 4 at all; we don't have an equivalent to Minishift right now. Yeah, so those are the issues that I would ask you to actually log. Okay. If you find them, then we'll have a list of things that are odd and need to be removed. So I think this is the one we should be looking at, the OKD Latest one that's up there. And I think the right way to do this, and I keep trying to bump to this, is to put an issue in the OpenShift OKD repo that says, for example, "drop section on adding RHEL computes", those kinds of things. And then the documentation team and the other folks can review it and either make the corrections or take your cut-and-paste. If you have a sentence where we did bad grammar, cut and paste that as well, with the correction. That will be helpful.
Okay. This might be my ignorance, but if we've already caught a lot of these issues in our 4.4 branch that's pending a merge, are we wasting people's time pointing out issues that we already know about? Well, people have to have something to do. So far, we haven't been able to give anybody anything to do. All right. If the documentation is complete enough, and this documentation that we're looking at is just a copy of the 4.3 OCP documentation, could we go ahead and just present the work-in-progress 4.4 documentation here? Yeah. I would much rather go through a GitHub issue flow than finding something in a guide over here and then adding an issue over there kind of thing. Right. Honestly, I'm confused about how this workflow is supposed to work, because none of this makes a whole lot of sense for making either OKD or OCP better in terms of documentation. That's encouraging. Yeah. The documentation person who is on the call, do you want to speak up about what works better for you guys? Exactly. Could you list the options again? We don't have any options at this point; that's the problem. Right now, we're being told that if we file issues on the openshift-docs repo, they'll get ignored because they're about okd.io. So we're filing them on the OKD repo, but that doesn't mean a whole lot, because I'm pretty sure the docs people aren't really looking at those. Somebody has to go and prune them and then forward them back to the docs people. And this feels like a whole lot of busy work for no particularly good gain. So let's pause for a minute, Neil. Michael, who is the documentation connection here, the liaison for our group: Michael, could you tell us what would be the best process for getting this feedback to you? In terms of it getting into our scope of attention, I think we'd do it against the documentation repo. Can you put a link to that? Is it public? Yeah, yeah, it is. Okay, so can you put the link to that in the chat?
Sure, I need to look it up, because I'd rather not give bad advice. Hey, Michael. Can we put OKD... right, go ahead. Can I make a suggestion? I'm not sure whether this is technically possible, but in the docs repo (I think when you switch from 3.x to 4.x, you switch to a different way of working, with modules and everything), if folks log an issue or anything like that, they could flag it as OKD versus OCP, and then obviously we would have labels marking "this is OKD" and "this is OCP". Then you can differentiate, and maybe you guys can pay more attention to OCP and less to OKD, or vice versa, obviously, based on your workload and such. Yeah, I believe there is an OKD-only label in there. Right. But folks who don't have write permission on the repo can't set a label, can they? They cannot. Only people with commit access or greater on a repo can actually set labels. Right. Yeah, this is why I wanted to start with the OKD repo first: we can aggregate a bunch of issues and do one huge PR to openshift-docs and get more attention for it, instead of multiple small PRs from contributors, which we don't have a way to track, because they can affect both OCP and OKD. So that was the process we came up with. It might not be the most effective, but let's discuss it. Well, Vadim, do you want us to work from the Enterprise 4.4 branch in openshift-docs but post the issues to the OKD project? I think we should follow the Latest method (that's why it's Latest here), and I'm hoping you're posting the latest code you have to master. Once OKD derives from that, it automatically gets all the changes, and contributors would update master, and eventually that would make it into OpenShift 4.4. But the process is different; we can post against the 4.4 branch, yes. So, but 4.4 is not what's on OKD Latest; that's 4.3, correct? Correct. Yeah, posting against 4.4... excuse me, posting against master.
Okay, so maybe it would help me, Michael, if you took over the screen for a few minutes and showed us where we can view master, and walked us through how to look at it and review it. Okay, you want to look at master in our repo? In your repo, yeah. Just drive us through where we should be looking, so I can get it in the recording, then edit out some of the blah, blah, blah here and create a little guide on how to view master. And is this something where, if we get fixes in over the next couple of weeks, they might surface before the 27th? What's the 27th? That is when we've been driving people toward a beta 2 Friday release. Okay, this is our master. What would you like to see in here? So, if I wanted to view the docs in a visually readable way, to see where there were mistakes, how would I do that? Click on... is there a way to view a rendered version of the master branch? Everything in master should be on docs.okd.io. Is it? Oh, okay. I thought okd.io was a copy of the 4.3 docs. That's what I thought as well. Forgive my ignorance; I'm new to this game here. It's okay; we're all new to this. I would say, for folks who want to see master, they can build it locally, and when they build it locally, they can see it rendered in real time as they make changes. That's how you normally contribute to the docs; that's how I've been doing it for 3.x. Whether the current master is pointing to docs.okd.io latest, that's a different topic. Because, ideally (and personally, one of the frustrations I had in 3.x is that when you say "latest", it doesn't mean anything, because if you switch to OCP, they don't have "latest") it would be nice to follow exactly what OCP does: when we say 4.3, it's 4.3, and that is the latest; 4.4 is whatever, rather than saying "latest" and having this magic stuff being done, which points to either 4.3, 4.2, 4.1, etc.
But that's maybe a different conversation to have. Well, I mean, one of the key differences between OKD and OCP is that OKD is essentially rolling forward on the latest code anyway. So, ideally, the documentation on master could correspond to the active rolling development of OKD as it stands, because then at some point they'll branch it, and that'll become OCP and a quote-unquote stable OKD sub-release or whatever. That's how I thought it was supposed to work; I could be wrong. I also assumed that's the workflow. So, Michael, if you could merge the 4.4 preview branch into master before March 27, you will have a lot of testers and a lot of feedback. I think that would be a win-win. 4.4 preview is master; he said that a couple of times, Vadim. I'm sorry. Okay. So what we need to do is, right: any changes will be rendered on docs.okd.io, and we can go straight to filing bugs; it's just a matter of how to track them properly. Should we go straight to openshift-docs and label them with OKD, or add some kind of tag in the title? Which process would you prefer? We could just put OKD in the subject line of the title of the thing. If you did "OKD:" and then whatever the issue was you found, going back to where he was showing earlier, where you log an issue, I think that's the proper way. Yeah, I think that would work. Can you go back to that issues page? Maybe just go into our repo and pick Issues. And what I'm saying here is, if you could just put "OKD:" or "[OKD]" in brackets, and then whatever the issue is that's wrong. I will talk to my documentation project manager and let him know we're doing this. He generally will review the issues; we've been trying to get better about reviewing the issues, and we can then start to schedule them as we go. I'm going to ask one more time, just because I'm still confused: are the docs that are on docs.okd.io latest OKD 4.4 or OKD 4.3?
They're the docs in their current state, which is based on 4.3 plus whatever we may have added for 4.4. Any issues that we have fixed for 4.4 we will likely have merged to master at a certain point in time. So we'd have to do another merge with any fixes that you have for the 27th. Yes. Okay. So, is that clear as mud for everybody on the call? All right. I think the basic thing here is: if you find a mistake, a misstatement, grammar, whatever, in the OKD documentation, go to openshift-docs in the openshift repo, log an issue with [OKD] in brackets (that'll flag it for us), give as much detail as you can about where you found the mistake and the correction that you want to have made, and submit it. And, you know, Michael, if we need to come back and have another call... what I'd like to do is capture this and make a very short video, maybe with Michael and myself, restating all of what we've just done, put it up, circulate it, and give people homework for the next two weeks. So, Michael, I may reach out to you to record that later this week, if you don't mind, especially once you run it by your management team to make sure they're okay with it. Sounds good. Okay, cool, thanks. All right, is there anything else we should hash out today? Anyone else make any new discoveries, new platform things? Okay, then I am going to call this meeting. Thank you, Vadim, for taking on the GCP demo; that was great, and I'll probably try and snip it out as a short video as well and share it. And Craig, for your Medium blog post; that's wonderful. And everybody else, for all your efforts to make this happen. I'm really looking forward to getting beta out the door. So enjoy your week and read some docs. Sounds good. Yes, thanks in advance, everyone, for any changes. Cool, thanks, guys. Bye.