Grab the notes document here so we can get some attendance going. I'll throw that in chat. You can add your names to the attendance document. Hopefully I got the right file there. There goes another email. Seems like you got it. There we go. Good. Let me close my mailbox, stop editing videos for today, and let's go to our working group page here. Cool. Community, and the working group page. Has Vadim joined? Hello. Hey, Vadim. Good, we have one of the co-chairs. Is Danny on the call by any chance? Haven't heard from Danny in a long time. Our external chair. Nope, I don't see him. Haven't seen him here in a while, actually. He sent an email a while back that he had gotten swamped. So I'm just checking in to see who has joined us. If you don't know, Red Hat Summit has gone virtual. I've said this multiple times on previous calls. On April 27th I'm holding an OpenShift Commons gathering. There will be live chat, and there is a prerecorded State of OKD talk that's going to be one of the on-demand videos. If you would like to join, you just go to the link that's here for Red Hat Summit, register for Summit, and you'll get an email reminding you that the gathering exists. It's kind of hidden in the agenda, but it's there. It runs on the 27th — not all day, but from 9 a.m. till 2 p.m. ish, depending on how long the AMA session goes. So if you'd like to join in and hang out in the chat during the day, that would be great, as then you could answer questions or ask questions. Just go to the page. Well, Diane, I at least plan to be there. So yeah, you'll see the structure of the day is much shorter, and it is getting shorter all the time, because people tell me people will not stay the whole day. Yeah, well, it's unfortunate, but everything is available on demand as well. And so I recorded with Christian and Daniel and a bunch of other people.
Sort of what we normally would do for technical talks — those are all going to be prerecorded and loaded. I was going to try and get an FCOS one done too, with Dusty or Benjamin, if I could coerce them into it. And Paul Cormier is supposed to be coming and doing the fireside chat with me, but so far his schedule is not permitting him to prerecord that with me yet. But there are some really good case studies and lots of good stuff going on, and all the updates are coming in for the future of OpenShift 4. So that's my pitch for you guys to hang out with us on the 27th. Now I'm going to check and see if Christian has joined. Not yet. Christian may not be here, but Vadim is here. All right, there you go. So I'll just motor on a little bit with my part. I've fixed a couple of broken links on OKD.io. When we bumped it to the latest version of the documentation, all the links to the 3.11-specific stuff broke, because obviously that's not the latest anymore. But I haven't gotten through all of them yet, so bear with me, and I will get that State of OKD talk that Christian and Vadim recorded embedded on that site as well. I promise it will all be done before April 27th, and sooner if I can find some time. So how about Vadim and Christian give us an update — I thought I heard that Vadim got a beta two out the door. So if I stop talking, hopefully Vadim or Christian can give us an update on that. That's right. I think I'll hand it off to Vadim directly, because I've been pulled off to work on something else this sprint. I'll be back full time on OKD in a week, but right now I'm working on something else. So Vadim, if you could tell us about beta two, that'd be great. Sure. Beta two is out the door. We're trying to keep a weekly schedule of fresh betas. Basically, those are promoted nightlies; there is nothing fancy about them.
The only significant difference is that these are being properly uploaded to Quay now, and there is an upgrade path from the first beta to the second one. So if you installed beta one using the Quay images, you should be able to upgrade to beta two, and we'll be adding more upgrade paths every time we upload a new beta release. I'm hoping to prepare beta three today or tomorrow, and we'll keep a schedule of making fresh betas at the beginning of the week, probably Monday or Tuesday before our working group meetings, with fresh snapshots of things, and getting them uploaded. Hey, Vadim, is there a weekly — sorry, are you breaking up? Oh, sorry. Can you hear me OK now? Yeah, we can hear you now, but we didn't hear your entire question. Oh, yeah, I was asking if there's anything that distinguishes the beta from any of the other nightly releases that we've been building from, or are we just picking one of those, pinning it, and saying this one's going to be a beta release? Yeah, just picking a nightly and promoting it as a beta. OK. There is nothing fancy about them except being uploaded to Quay. So they're going to be there probably forever. But we'll talk about that — I don't want them to be there forever; it sounds bad put that way. I would like them to be there for a month, maybe two, but we want to encourage people to update often. I see, yeah. Which means at some point we will have to force people to update, and we don't have any other mechanism other than removing old betas. Right. So, other news: a few more fixes have landed. The vSphere node agent thing should now work in the fresh nightly from today. I didn't get a chance to test it, and I don't even know quite what to expect from it, but it should announce the correct address to vSphere. It doesn't affect functionality; it just properly shows the IP address and other information from the node in vSphere.
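Checking whether a new beta shows up as an available update is just a couple of CLI calls. A minimal sketch (assumes a running OKD 4.4 cluster and a logged-in `oc` session):

```shell
# Show the cluster's current version and update status
oc get clusterversion

# List updates the CVO currently considers available
# (a freshly promoted beta should appear here once its upgrade
#  path has been published)
oc adm upgrade
```

If the beta doesn't appear, it usually means the upgrade edge from your installed version hasn't been added to the graph yet.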
So if we could get some eyes on that, that would be lovely. Fedora CoreOS has been updated to the latest stable from March 23rd. And we now have a proper image for GCP uploaded, so the installer in today's nightly should be able to install on GCP without any additional replacements. We'll add testing for that soon. At this moment there is one test which is flaking; once we sort that out, we'll additionally verify in CI that nightlies and betas can be installed on GCP. Vadim, is that different from having the FCOS image available on GCP? That means it's in Quay, but still not on GCP? No, that's all we need for proper GCP support. It is on GCP now properly — Fedora has uploaded it to GCP. Perfect. That's what I was trying to articulate. Thank you. Right. We're also working internally on additional tests, upgrades, and improving our release controller infrastructure. That doesn't really affect installations of OKD, but we will soon be able to add arbitrary upgrade paths. At this point it's only to the next nightly, and if that fails for some reason, we currently have to do a lot of manual work to add an upgrade path around it. Once we land a few fixes, we will be able to run arbitrary tests, and once the private keys are uploaded to a location, our nightlies will be properly signed with a GPG key which is trusted by the CVO. At this point you have to add a force parameter to skip that check, but once we land this, you will be able to upgrade from the web console, just like standard OCP. And in 4.5 we will also document how to run your own upgrade infrastructure, how to properly mirror releases, and how to create arbitrary upgrade paths. That's awesome. Right — I mean, it's totally possible today, but it's just poorly documented. I think that's pretty much all from our side.
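The release mirroring Vadim mentions for 4.5 is already possible with the stock tooling. A rough sketch — the release tag and `registry.example.com` mirror host here are placeholders, not real endpoints:

```shell
# Mirror a release payload into your own registry for a
# disconnected environment. Substitute a real OKD release tag
# and your own registry host.
oc adm release mirror \
  --from=quay.io/openshift/okd:4.4.0-0.okd-2020-04-07 \
  --to=registry.example.com/okd/release \
  --to-release-image=registry.example.com/okd/release:4.4.0-0.okd-2020-04-07
```

The command prints the `imageContentSources` snippet to drop into `install-config.yaml` so installs pull from the mirror.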
We are preparing an enhancement about the official status of OKD in OCP, so that other teams would be aware that it will soon be an officially supported thing. The main point is to get some of the OKD bits into the master branch of the installer and the MCO. Once that's done, we will prepare 4.5 snapshots and have them rolling out the same as 4.4 currently. I don't think there is much the community could help with at this point, except probably chiming in on what you would like to see in OKD in general — how it would be different from OCP, which features should be distinct, and so on. Let me find the enhancement where we are discussing that. Why don't you share the screen, that would be good. We are going to essentially reuse the existing OKD enhancement proposal, make it a bit broader, and include all the things we think OKD needs to merge with master and undo the current fork situation. From there on, we should be in a pretty good place to run everything in an automated manner and essentially reuse everything that lands in master. I will be updating this enhancement proposal sometime this week. As I said, I'm working on something else right now, but that should be done by tomorrow or maybe Thursday. Then the next thing for me is to update the proposal. As soon as that's done, OKD truly becomes upstream of OCP, right? Well, it's not really upstream as much as a sibling distribution, because we will share the same master branch and maybe even some release branching. Yeah, it's not really upstream anymore. It's exactly the same code base as OCP, but just with a different OS base. So it's a sibling brand? Yeah, makes sense. Actually, I like that even better. As a customer of OCP, it makes it easy to transition back and forth from the lab to the data center. Yeah, I'm looking forward to that kind of synergy too, because that will make life a lot easier for planning, scaling, and prioritizing stuff.
And definitely on the service level, it should really be able to do the same things. Later on, we definitely want to enable people to upgrade from OKD to a paid subscription of OCP. With the mechanism we have, you can just switch out the base OS by pulling a different machine-os-content that would be RHCOS-based, and thus upgrade — or rather migrate — from OKD to OCP. So we won't have to do it the hard way? Exactly, it should be pretty easy to do. We just have to document it and do some testing, so we can be more sure it works for the specific version of OKD you're on. That's probably a thing our salespeople will love. But it's nothing we need to focus on right now; the first thing is still the OKD GA release we want to get done, and after that we'll get to those bits. Yeah, our short-term goal is to land the OKD installer in the upstream branches, so that the installer folks aren't confused by people filing random issues and showing random bits of the OKD installer — right now they're not very aware of what's happening there. And once they see that the code changes are pretty small, they'll be able to experiment with new things a bit easier. So I think that's pretty much all from my side. Let's move on. Yeah, I'd like to thank you, Vadim, for preparing the beta two release and also for preparing the PRs to upgrade the forked branches. Thanks for all the work you put in. I second that. We're getting there, guys. It's really happening, so I'm pretty thrilled with the whole thing. Thanks again, Vadim and everybody else. So, Charo, not to put you on the spot, but we've been talking on and off in this meeting about documentation — documenting how an OKD release is built was something we had talked about, as well as getting better documentation overall. And I'll just ask, because I haven't looked in two weeks; I apologize, I've been busy with other stuff as well.
The documentation issues that we've been logging — how is docs.okd.io looking? Have we gotten the FCOS references in, and the RHCOS references — or however I'm supposed to say that acronym — out? Or are we still in limbo land with the baseline documentation? I think the first chunk has landed. There are still several small pieces we need to fix; I can prepare a pull request this week, probably. Yeah, I've been using the OKD.io docs as I've been working in my lab, and they're much better in the last week. There may still be some issues — somebody had posted an issue; there were still some references to Red Hat CoreOS — but I'm seeing less and less of those. Yeah, because I haven't looked. Have you all filed many issues? So far I haven't found any actual issues with the documentation. I'm taking notes on some things that might need to be added or clarified as I'm adding more capabilities to my lab, some of them around adding persistent storage. I've got Ceph running now in the lab, hyperconverged in the cluster, and that's a very repeatable process, and I have the registry using a persistent volume. So I've got some notes I can share with somebody around that that would help the documentation. And now I'm working on pipelines; I need to get Tekton working so that in the lab at work I can get developers actually functional on OKD 4. Yeah, it does look good to me right now. The thing I was worried about — that it was still referencing the wrong CoreOS — got fixed, so that was my big red flag from the other bits. As long as it's working and you keep adding issues, I'm happy with that progress. But the other thing we were talking about was the idea of creating some documentation on how the release itself is built. I don't think that's even — have you done any writing or drafting on that at all, Charo?
No, I haven't even started working with either Christian or Vadim on getting up to speed on how the new environment works so I can build one. I need to get the lab fully functional for a developer, and then I'll be able to rewind and spin back through the whole thing — basically start with an empty code base and build it from there. Not that I'm picking on you, but the other thing we talked about was auto-updates for the beta, which you were going to test. Did that happen? It did, and it works. You have to add the --force flag, at least for the nightlies. I haven't done it yet from beta one to beta two, but my assumption is that since they're just relabeled nightlies, they're also missing the signatures. Is that correct, Vadim — that you still need --force on the command line to execute an update? Yeah, right. That is part of a larger effort around how to run your own fully disconnected environment. Once we sort out a few issues, the OKD release controller would be the first one we'd experiment with. Yeah, and that's actually a good call-out he just made. My environment runs as though it were disconnected, because I'm running mine as though I were actually in a data center and my CISO didn't allow machines in my data center to talk directly to the outside world. Right. The thing is, all of that is happening on the infrastructure side, and at some point the nightlies will be able to upgrade without --force. I'm working with Clayton on that. I don't have a time frame for when that happens — really hoping by the end of the month, but I'll be pushing for it to happen sooner. Now, the good news is that with --force, it works. And if you look at the releases in the nightlies, the ones connected with the little green dots, you can just walk up that tree.
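The forced update being described looks roughly like this on the command line. A sketch with a placeholder release image — use the tag of the actual nightly or beta you're targeting:

```shell
# Point the CVO at an explicit release image. --force skips the
# GPG signature check, which is required for now because nightlies
# and betas aren't signed yet; --allow-explicit-upgrade acknowledges
# that this edge isn't in the published upgrade graph.
oc adm upgrade \
  --to-image=quay.io/openshift/okd:4.4.0-0.okd-beta2 \
  --allow-explicit-upgrade --force
```

Once signed releases land, the `--force` flag should no longer be needed and the same update becomes available from the web console.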
You can run an update, go make yourself some coffee or pour a dram of scotch, and come back when it's done and run it again. Yep. I did something similar as well, and it worked as well as could be expected, so I was very pleased. I did open a couple of issues that I came across. One of them may just be my environment, but I cannot get the console to work with the newest version of Safari. Works fine with Chrome, worked okay with Firefox. Interesting — I have the inverse problem. It doesn't work in Chrome for me, but it works fine in Safari. Whoa. Check your extensions, I think; there's some extension I have that's not cooperating. Can you just swap your consoles for me? So both of these are bugs, and you should file them sooner rather than later. Pretty odd, though, but definitely a console bug. While we're on the topic of updates: I'm also running beta two in my lab, and stable works fine, but I did notice the update channels now have options — there's stable-4.4, fast-4.4, and candidate-4.4. Are fast and candidate a real thing yet, or are they just placeholders for now? So the channels in the console are hard-coded. That's also one of the pieces of the disconnected update structure — having a server which advertises which channels exist. And in OKD we have entirely different channels. This is why OKD will be the first place where we experiment with automatically fetching the available channels, because really you can set any names for those. But the console hard-codes them for now; once we have that in OCP — hopefully very soon — we will remove the hard-coding, and you'd see at least stable-4 and probably nightly. So the stable-4 that's currently on the CI release website has nothing to do with the stable-4 channel I see right now? No, the stable-4 channel is what you see on origin-release — that stable-4 is correct. And then fast and candidate we don't have; we have a nightly channel. Gotcha.
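The channel lives on the ClusterVersion resource, so you can inspect or change it from the CLI as well as the console. A sketch (channel names here mirror the ones shown in the console; only channels actually served by the upstream update graph will yield updates):

```shell
# Show which channel the cluster is currently configured for
oc get clusterversion version -o jsonpath='{.spec.channel}{"\n"}'

# Switch channels by patching the ClusterVersion object
oc patch clusterversion version --type merge \
  -p '{"spec":{"channel":"stable-4"}}'
```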
It's basically a coincidence that it all works, but there are bugs, which we will fix very soon. Yeah, and I haven't tried running an upgrade from the console; I've been doing it from the command line. Yeah, I initiated mine from the command line and then watched its progress on the console, and that all seemed to work totally fine — it showed things upgrading, and that was very cool. Yep, it was indeed. Something else that I opened up — and I've seen this across beta one and then several of the nightlies above that — is that etcd is still logging errors trying to talk to the bootstrap node, which is gone. Oh, I have not seen that. My etcd operator is currently complaining that my etcd members are degraded, probably because my SSD is too crappy, but they're not trying to talk to the bootstrap. That's interesting. Dig into the logs and see if maybe you see it there. You won't see it by hostname — I dropped the logs in an issue that I opened — you'll see a failed-to-connect error with the IP address and port 2379 of your bootstrap node, which is long gone. I'll take a look. The only reason I saw it was that I was investigating the same thing you were seeing: every once in a while it reports the etcd members as unhealthy. Yeah, exactly. Mine just constantly cycled in and out of unhealthy. I figured it was because of some latency thing, and it was tuned for enterprise-level NVMe drives or something like that. I'm running mine on fast SSDs, the masters all have 20 GB of RAM and 8 vCPUs, and the nodes themselves aren't showing any resource constraints. The nodes report as healthy; the operator is complaining. And I came across these errors in the logs while I was looking to see if there was any reason they might be flagged unhealthy, because otherwise everything is working fine. Exactly. My entire cluster is great. Nothing else has any problems. The API server has no trouble doing its job. Ooh.
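A rough way to look for the symptoms Neil describes. The namespace and label below are the usual ones for the OpenShift 4 etcd static pods, but verify them on your own cluster:

```shell
# Check whether the etcd cluster operator is reporting degraded
oc get clusteroperator etcd

# Grep recent etcd pod logs for connection failures on the etcd
# client port; an IP matching your long-deleted bootstrap node
# is the stale-member symptom discussed here
oc -n openshift-etcd logs -l app=etcd -c etcd --tail=500 | grep 2379
```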
Interesting. Yeah, I don't know. We would need to file that — it seems like both bugs should go to the operator. I know we are tweaking health checks on Azure because their disks are usually slower, so we allow them to reply with a bigger latency. I'm pretty sure this setting should be available externally, so that you'd be able to shoot yourself in the foot and increase that timeout if you have slower machines. As for the bootstrap node, it probably doesn't remove itself after the bootstrap is dead, which is a bug that should be long gone. One other update: I saw there's now a working workaround from Provence for the CRC blocker. I haven't tried it yet. Like I said, I've been fighting with the new hell that is Tekton, which I like much better than Jenkins. I hate Jenkins, but it's a whole new ballgame. Ah, man. We just finally retired our Jenkins system — I am so incredibly happy about that. It finally got shut off on Monday. Yeah. Well, everything goes in cycles. It does indeed. Yeah, three years from now you're going to be telling me about whatever the new Tekton is. So yeah, just get used to it. I'm already used to it. So we don't have any update on CRC yet, still. It would be a wonderful thing to get working before April 27th — I know that's coming soon, like 14 days from now — so that we might have a viable link to a CRC for the OpenShift Commons gathering. That may or may not be possible, but if it is, let me know and just ping me directly, Charo, and I'll work with you to get whatever documentation we need up on okd.io, so we can replace that link to the 3.11 Minishift and move on. That would be great. We also talked a little bit about reordering the repos in the GitHub org to be more 4.x-centric. I don't think I saw any work done on that, but if you have a suggestion for reordering, Charo, could you float that by us as well, maybe as an issue? Yeah — actually, the pinned repositories did get updated after you and Christian and I had that conversation.
Somebody changed the pinned repositories to reflect what we had talked about. Okay, so maybe it did get done. Yeah, I'm looking at it right now — it was. Yeah, I talked to Clayton afterwards and he changed it up for us. It's done, then. Yeah, it looks good. All right. So what I don't have here are all the issues linked in. The update on the blocker for CRC we've done; we've added the etcd operator issues, and the console bugs in different browsers that everybody talked about, Neil — if you log an issue, let me know. If you do encounter those bugs in the web console, for example, I would ask you to actually file a bug on the respective repository. We're supposed to do that now? Yeah, well, you can also just file it in the OKD one, but then we'd probably just move it over there — triage it ourselves. And if you want it handled directly, you can open it in the repository it belongs to, and the team that's responsible should take a look at it. Neil, what I've been doing is opening the issues in the OKD issues. Okay. But then Vadim or Christian or some of the other folks monitoring that have been really good about saying, oh, this is really an FCOS tracker issue, and so then I'll open an issue in the FCOS tracker. Okay. And that way we've kind of got them all under OKD for meetings like this, where we've got a single place to reference them, but the folks who are actually working on those particular repos aren't necessarily monitoring OKD. Yeah. And with us really merging with master again, essentially all of the bugs we encounter in OKD will also be bugs in OCP — or at least most of them, unless they're really specific to the platform, to the base OS. They should be exactly the same as in the OCP product.
So it just makes sense to file them at the component where they show up directly, because we don't really want to use this meeting as a Bugzilla triaging session — there are heaps of open issues, obviously. But of course, if there is one thing that really consistently annoys OKD users, we'll definitely look more closely; otherwise we'll treat it just as any other bug that has a team assigned to it. Yeah. And the more hands we get in the pot, the more bugs we're going to find. Yeah. Actually, there was one bug filed by Joseph — I'm not sure if he's here today — and that was master code, and I think Vadim already filed a PR to fix it today as well. So it was really great to see how that feedback cycle already works for OKD. And it'll be even better when we get to the real release. Captured that one. All right, let's see, what else do we have here, in case I missed anything for today. The update on updates — I think you gave that already, Vadim. Or is that more Charo's writing? It still probably hasn't been done yet. Yeah, we would need a new hacking document; I can work on that — how to replace bits of OKD. Yeah. And I think Charo was volunteering to help with that as well. Yeah, absolutely, Vadim. I would love to start building some of these cluster operators directly from source. Okay, by the next meeting I'll prepare a few basic things, and then we'll see how to expand that. Yeah, that sounds great. If you want to start kicking some things my way, I have some spare time in the evenings and I'll try them out. The instructions don't need to be super explicit, because I'll dig around as best I can and ask you questions — that way you don't have to create a massive, comprehensive document before we start testing something. Right then.
Anything else people want to talk about? I always want to talk about documentation, but that's just me, and I think with Charo's help and other people's feedback we can get that up and running. And I don't think Dusty or Benjamin Gilbert are on the call — I was going to try and tap them to do an FCOS update. Anybody on the call directly from FCOS? Speak up now or forever hold your peace, or jump off the call now. There have been quite a few changes in FCOS, but I'm not really familiar enough with it to speak about it. Hopefully we'll get some more visibility soon into what's going to happen and what changes we should be aware of. Micah, I don't mean to put you on the spot, but I was hoping to get Dusty or Benjamin to do a State of Fedora CoreOS talk that I could prerecord and have on demand for the OpenShift Commons gathering on the 27th. I know everybody's busy, and it's all crazy out there, but if you have a connection to them and could connect us a little bit, that would be great. Yeah, I think Dusty might be working on something for Summit outside of the community track, but I think it intersects with FCOS, so he could probably massage something for that. Yeah — in Summit there's the community central stuff, and a whole bunch of us — I did an OKD talk and a whole bunch of shorts — and I think there might be a Fedora CoreOS update in there. So if he's already done it, fine, as long as the content's there and I can cross-list to it. That's kind of what I'm looking for, and I can always edit something he's done and turn it into a State-of talk. I'm not trying to make more work for people; I just want to get that content. I can follow up with him to see what he's working on there. Yeah, whatever he's got.
If he's recorded already, and I can get the URL to it or the video file, I can always upload it to the on-demand section for the gathering at Summit on the 27th. That would be good, because that's what I did with one of the other talks — I think it was the platform services guys who recorded something for Summit, their roadmap, and I just shape-shifted it into a State-of talk for them without having to rerecord it. But that's all I have on my crazy agenda that I use to run this meeting, which seems to work, at least for me, and hopefully for you guys as well. The other questions I had — two more questions, actually. Is Azure the only place where the images are not available in the marketplace, or did that get updated? No, they're still not on Azure. Okay, I'll leave that there. And then there was a blocker for OpenStack. Has that been resolved? We're still waiting for the next Ignition release. Okay, that's the note that I have in here somewhere — due to Ignition. And GCP — does GCP still need manually uploaded images, or did I hear you earlier in the meeting say that they were okay now, that they had them? That should be fixed. Okay, good. What about CRC? Has anything happened there, Christian? So, Provence's fix has been merged, but I think it's still a manual process to override those configs to allow an unsafe etcd with only one host. So yeah, I have to follow up with him when I get some time to talk about how we could build an OKD-based CRC from that — ideally automated. Yeah. Neil, I'll send you a link. We've got an issue open in OKD about that, which Provence recently responded to. I haven't had a chance to try it yet, because he just responded a couple of days ago.
As soon as the bootstrap API is up, there's a command you run from another terminal that sets that unsafe parameter for etcd, and from there on it's supposed to proceed building the single-node cluster. And then, presumably, the rest of the directions for building the bundle for CRC should work. At that point we would have something folks could download and run CRC against to get a single-node cluster running. Yep, that's it right there — what you've got on the screen. Yeah, so now we need to follow up on that, reach out to him, and see how we can automate this and build CRC with those instructions. The other question I have — Neil, I think you were going to test oVirt, and maybe we've talked about this already, but I think it does work. Can I take this out of the notes? "oVirt has not been tested yet but should work." Neil, I think you attempted to test it and it passed? I have not attempted yet. I will leave it in. That is something I hope to do in the coming weeks — it depends on when I can get power back to my servers. All right. We have quite a few reports about oVirt, and we fixed several bugs. I think it should be working now; we just need a contact person for it. The number of bugs open for oVirt on the OKD repo does not inspire confidence. Well, you inspire confidence, Neil. Go out and test it. As soon as I have a box that isn't dead, or that I can actually IPMI into, I will try. I am still trying to get my servers back online after the power outage in our data center area weeks ago. Well, we'll wait. Tangential question: has anyone tried KubeVirt yet? And theoretically, will it work on OKD 4.4? It should work, yes. We're not really testing it regularly, though. Okay. Somebody tested it. It's on my list to eventually give it a try. That's really what I'm interested in seeing: a bare-metal OpenShift environment that is also managing my virtual machines.
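The "unsafe parameter for etcd" mentioned above is, as far as I know, the `unsupportedConfigOverrides` knob on the etcd operator. A sketch of the manual step — explicitly unsupported, for single-node experiments only, and the exact key should be verified against the operator version you're running:

```shell
# From a second terminal, once the bootstrap API is answering:
# tell the etcd operator to tolerate a single, non-HA member
# so bootstrapping can complete on one node (unsupported!)
oc patch etcd cluster --type=merge -p \
  '{"spec":{"unsupportedConfigOverrides":{"useUnsupportedUnsafeNonHANonProductionUnstableEtcd":true}}}'
```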
It's not that I don't like oVirt and RHV, but it'd be nice to just run it on bare metal. Yes. Christian, does the OpenStack IPI still require Swift, or is that gone now? That's for the OpenStack install, right? Yep. So we removed that, but in removing it we broke OpenStack, because it now requires HTTP headers in the Ignition spec, and that's going to land in Ignition spec 3.1. It's already in the code for Ignition spec 3.1 experimental, but we don't have a binary that accepts that. So that's why OpenStack is currently not working: we're not on Swift anymore, but we don't have the Ignition binary that includes the replacement functionality. So OpenStack is now broken. Cool. Yeah, OpenStack is still broken. Okay. And yeah, it used to work before, and we broke it with the last rebase. That's okay. Did we break it for all of OpenShift or just OKD? Just for OpenStack on OKD. Okay, so OCP is not broken? No — in OCP this feature has already landed in Ignition spec 2. It was kind of a minor regression with spec 3: it landed in spec 2 first, and it has just yet to get built for FCOS. Yeah. At this point, what are we looking at time-wise before the next Ignition version lands? Are we just waiting indefinitely? Yeah, essentially, yes. Internally I asked whether we could get a new build or a new release, and I was told it shouldn't be too long — it's just one commit missing — and that was two or three weeks ago. So yeah, I'll follow up again on that. Maybe we can get a new experimental build, or just a really minor new release that includes the experimental code; that would already be enough for us to unblock OpenStack. I think the actual aim is to release Ignition 2 with spec 3.1 stable, but since we don't really need to wait for that, I'll just ask for another release to be cut as is. Excellent. Yeah. Otherwise, it might be a few more weeks.
I'm really not sure how long that's going to take. But yeah, I'll follow up on that. Okay. All right. We've got 10 more minutes left. Are there other things we should be talking about? Questions people have? I'm just checking the BlueJeans chat. Chris has to go. Thanks for joining us, Chris. I had a follow-up question about the gathering at Red Hat Summit. If I'm registered for Summit, can I access the Commons gathering, or is there a separate registration I have to do? It's all the same registration. I will send an email out to this mailing list and to the Commons mailing list and to every mailing list that I own explaining that. It's not wicked clear on the Summit page, but there are some mentions of the gathering on the 27th, sort of a day-zero event. The reason I keep asking people from this group to join is that at last count, I think close to 19,000 people had signed up. We normally have at most about 500 people show up for a gathering. And a lot of them aren't even aware that the gathering is happening, so they're going to get an email from me shortly, if they registered already, letting them know. My biggest concern, and my happiest thought, is that we have a flood of people who have never been to a gathering show up for this one. And my other big fear is that it's just me and five of my closest friends in the chat answering their questions while all the talks are raging on. So, you know, if you can come, that would be wonderful. It's pretty much an open chat, so you can answer questions, talk to each other. I have not used this platform before; if anyone's used it before, you've got more experience than me with it. All I've been doing is recording all of the briefings and talks and getting them uploaded. So really, on the 27th, it's a huge experiment for us. And we are the guinea pigs for the virtual Summit, which will be happening the next two days.
And I'm not wicked worried about it, because it's a good problem to have. It would just be nice to have more voices in the chat than, you know, the usual suspects and a lot of product managers. Yeah, I'm looking at Micah. So, Red Hat Summit is the 28th? Yes, the 28th and the 29th, I think, is what they said. Well, the scheduling is really weird to look at. It is, but you've got to realize the virtual experience is the 27th, and when you register... Like, if I go in and look at the agenda... I might have it on the agenda now. Actually... Sorry, I didn't want to do that, I wanted to hit the agenda. I think it might mention us as a pre-event. Yep, there we go. That was not there when I looked yesterday. Diane's been working a lot in the background here. Well, thank you, Diane. So hopefully we can get that there. The agenda for that day will look, surprisingly, a lot like a Commons gathering. Hey, Diane, send me a reminder link about that, either in Slack or email. Yep. And I'll try to spend a good bit of time on the 27th hanging out there. Yeah. There's some great talks. This one here, I'll just admit it up front, is a replay from the London event. Francesco did a great talk about using open source to fight pandemics, and in London in January it was very prescient, I think; I think they at Public Health England were already watching what was going on now. So it's a really interesting talk. And then there's a whole bunch of other customers and end users talking. I didn't put the names here. Okay, now I can see what I'm missing here. But some really interesting use cases. And then there is a good OpenStack one at BBVA that's pretty interesting; they're running OpenShift on OpenStack. So you can be in the background saying what we broke in OKD on OpenStack for that one.
Well, I mean, I know that myself and Dan from data will definitely be present for the Commons gathering. Sree, do you plan to show up as well? I do, yeah. Okay, cool. Yeah, so at least you can count on us, the data people, to show up. Yeah, you and 18,000 others, hopefully. And it could be nobody shows up, so it could be just you and me. I'm really excited about it, as you could probably tell. And because I think this is the way of the future, that we're going to be doing a lot of these virtual things, like we do these working group meetings, I'm very curious for people's feedback on it. So maybe at the next OKD working group meeting, if we can have a little talk about what worked and what didn't, that would be great too. Sure. Okay, so if we're registered already for Commons or for the Summit, there's nothing we need to do for the 27th? Nope, just remember to show up. And I don't know what corporate marketing is doing in the background, because I don't normally work with corporate marketing, to update people that it's coming or send a reminder. But I'm slowly getting to be a cog in their wheel, so hopefully by the 27th there'll be some little reminder note that goes out in the morning to everybody. So, yeah. Yeah, I'm sure all of the emails from them probably go straight to my spam box too. Yeah, that's like the Red Hat partner emails. Yeah, I know, I know. But pay attention to anything that says Summit in it for a little while, because they're doing a very good job, I have to say. Summit normally has 300 to 400 talks in it over the two days, and obviously we're not going to be running 300 to 400 virtual talks, but they're recording a huge amount of them in advance to be available on demand. So I've got a lot of respect for the people who have managed to herd those cats, because I'm only herding 10 talks out of 300. They're doing a lot more than I'm doing.
I just get to be the guinea pig for the software interface. So, that's my fun. But anyways, that brings us to the end of the hour, and we'll meet again in two weeks' time. What's the date in two weeks? Well, that's the 28th, so we'll not meet in two weeks' time. Let's say we meet on the 5th. Christian, can you adjust the calendar for that? I can do that; I'm probably going to do that tomorrow, though. That's fine, it's not urgent. So we just shift the cadence by one week? Yeah, that's what we're saying. Just because there's just no way. No, there is no way. Yeah, I totally agree. All right. Thanks for hosting again, and thank you everybody for joining in today. All right. Take care, guys. Thank you. Bye. Bye. And James, we'll figure out the shift, but it's permanently shifted to the fifth, and then two weeks after that we meet again. We'll see.