Okay, the working group meeting for March 15th, and we are on the new time change. So I'll give everybody a few minutes to see if they all get in. In the chat is the HackMD agenda, which I'm going to share on the screen for a minute, if I can figure out how to share a screen on this screen. There we go. Up here, too many systems. So if you could add your names in here, I'd appreciate that. And we do have a guest speaker today, but I have pinged them in a couple of channels and have not gotten a response. That is the MicroShift talk, which was rescheduled from two weeks ago for this week. But if Sally makes it... you're here. All right. There we go. Wonderful. Oh, man. Are we recording right now? Yes, we are. Hang on. I'm very tired. I've been banging my head against the wall on something, you know how you just can't sleep until you get something working. Yeah. As long as it's not MicroShift. No. Okay. Then we're good to go. Oh, actually, I don't have it running, but I do have a demo where I show it running. So I'm just going to run through some slides and talk about it. Okay. So if everybody's okay with that, I'd like to have Sally go first so that we can get the presentation on MicroShift out of the way. And Sally, how long do you think you'll need for this? Oh, I don't know, probably 15 minutes. Okay. Everybody's okay with that? While you're listening to her, make sure you add your name to the HackMD, and in the HackMD are the links to all of the things that she's showing off. So check out the HackMD. Sally, I'm going to let you take it away and share your screen, and we'll hear everything wonderful about MicroShift. It's been a while since I've shared a screen with BlueJeans. So I have one tab, and BlueJeans shares one tab. Yep. All right. Oh yeah, Chrome tab, I see now. Here we go. And I am not muted. Nope, we can hear you loud and clear.
And where are my slides now? Can I not see my slides? We can see your slide; it's on the link slide. I cannot see my slides, so that's going to be a problem. Let me stop sharing and put them in presenter mode. Right. Is that what I have to do? Oh wait, they were there. Hold on. There's a little lag sometimes. All right, let me get back to the first one. I think I'm good now. Hold on, we don't see your screen right now. Yep, gotcha. Okay, now we're all together. You good? It's loading, and now we see it. Thank you. Cool. I am not ready yet. There we go. All right. So, MicroShift. I don't know if anyone has heard of MicroShift, or how many people are on this meeting, but I'm going to skip down to the slide that says what MicroShift is. Basically, it's for managing workloads on the disconnected far edge. I'll come back to these, don't worry. MicroShift is an exploratory project created by the edge computing team in the Office of the CTO, the emerging tech group. And what it does, what you'll all be interested to know, is it repackages the OpenShift core components, or actually the OKD core components, into a single binary. That is what MicroShift is. But I'll go back up and start my spiel now. So: edge computing, what MicroShift is, the pieces, and then a few different deployment models. And the one I think you all will be interested in trying is the MicroShift AIO, or all-in-one. So, edge computing. Think 5G, IoT, delivery drones, satellites, smart cars. There are micro data centers, embedded systems, and field devices: things running with sketchy internet in the middle of an oil rig, or a ship, or a satellite. That's where MicroShift was designed to run. These field-deployed devices, you know, Pis, things that look like this. So edge computing brings the processing and storage closer to the user.
So you can run AI algorithms in a smart car, or monitor the oil and gas out where it's being collected, or run a program on a satellite. These aren't your regular highly available six-node clusters; edge computing is for devices that can't run OpenShift or Kubernetes, basically. But many times you'd still love to have the experience of a cloud-native Kubernetes deployment, all the things you're used to. Maybe a program might run on the edge device and also in a cluster, and it would be nice if you didn't have to reconfigure or rewrite it. That's the problem MicroShift was designed to solve. Also: how to manage, update, and transfer data to and from the remote edge, and still use the tried-and-true, familiar cloud-native deployment patterns that you have in Kubernetes. These devices are low-resource and disconnected. You might need the application and workload management separate from the operating system, as opposed to how we run OKD and OpenShift, where everything is a bundle. Or you might run an rpm-ostree operating system, but not something that can run OpenShift. MicroShift can be deployed as an RPM embedded in RHEL for Edge, which is rpm-ostree based. So yeah, the world is being instrumented. That's basically the era we're in, and we haven't quite figured out how to use the data. We can't even imagine what benefits might be waiting for us when we can harness that data: things like health, the environment, business opportunities. Again, autonomous cars, space travel, I don't know. So that's the problem space, the era that we're in; edge computing is becoming more and more real, important, and critical. And MicroShift gives the best of both worlds.
Again, for device management you can manage the operating system, and as I'll show you, MicroShift can be managed with systemd, and you also get the Kubernetes cloud-native side. So on one end you've got OpenShift, Kubernetes, OKD, the highly available, stable data centers, and on the other end you might just be running a single Podman container on a tiny device, just Podman on RHEL for Edge. MicroShift is somewhere in between: you might want to run a deployment and some services, but you don't have enough resources to go all the way over to the right. So again, what it does is bring OpenShift back to a monolith, although it's a very tiny monolith, packaging everything together. Let me break that down and show you. It provides an all-or-nothing start and stop. systemd can wrap a Podman command, which I'll show you, and it starts and stops within a few seconds. You can keep a Podman volume around to hold the state of everything, so it can start right back up where you left off. And let's see what this slide is trying to show. Oh yeah, in OpenShift you've got your operators and everything is highly available and fully managed. That's what you lose with MicroShift, but it's a trade-off for being able to run on these edge devices. Let me make sure I didn't forget anything. Yeah: OpenShift is meant to scale, and MicroShift is meant to not scale. So here's the architecture; this is what's kind of interesting. In the MicroShift binary we have put etcd, the kube API server, the controller manager, and the OpenShift API server, which makes it a step up, not really a step up, it's different, from something like kind or minikube, because you can still really use those OpenShift resources, like SCCs and routes. And in the kubelet, all of these things are just embedded controllers. The MicroShift designers have also added a few OpenShift-specific components, like the service CA pod and the OpenShift router.
Things that, you know, it's very opinionated, they felt developers would want, so they put them in there. Again, MicroShift is usually meant to run with Podman, so you pass a Podman volume, and that's what holds your state. It also runs in concert with CRI-O on a host. Later I'll show you that MicroShift AIO actually even embeds CRI-O, so you don't even need CRI-O installed on your host to run it. But that wouldn't be anything near production; if you were going to run MicroShift in production, you'd want CRI-O on the host as a systemd service, and then MicroShift on the host as another systemd service. And here's where OKD comes in. MicroShift references the digests of these core components, and it also references the manifests and digests of these extra components. These, I think, are called the core controllers, and these are the add-on components, but they're added on by default. The actual code is vendored in from a specific OKD release, and the manifests are added and referenced from a specific OKD release. So a host machine that runs CRI-O and Podman is all that's required. And here's a little flow chart that decides for you whether you want to run RHEL for Edge with a Podman container, RHEL for Edge with MicroShift, or go over to the OpenShift side. I can share these slides; it's pretty interesting to look at which is meant for where. So there are different deployment models for MicroShift. With an rpm-ostree system like RHEL for Edge, you might just embed the RPM in the operating system and run it like that. Or you can run Podman, on any operating system that can run Podman, with systemd. And with systemd and Podman, what's interesting is that start, stop, and restart are very easy, but Podman also has an auto-update feature.
So in the Podman command, which I'll show you in a bit, you can set the auto-update label to registry, and then whenever there is a new image digest in the registry, it will automatically kick off a new MicroShift pod. That's how updating or rolling back is very easy. We won't watch this now, but I do have a link for it later. Can you guys hear that, by the way? I'm just wondering. No? Oh yeah, okay. We can hear you. Yeah, I didn't hear anything else. I'll show a few minutes of a demo at the end, but here, this is what's very interesting for all of us developers: MicroShift all-in-one. It's a super convenient way to test, say, an OpenShift deployment that you might be developing, just a really quick and easy way to get up and running with a kube environment. It can run anywhere, and some people have started to use it in their CI pipelines because it is so resource-unintensive. What's the opposite of resource-intensive? I don't know. So yeah, this is my side campaign to spread the word about MicroShift AIO. Okay, that's it. Again, I'm going to share these slides with you, and I really recommend that you check out each of these links if you're interested, especially the AI at the Edge with MicroShift DevConf recording. It's right here, and it's very good. But this is an AIO demo that I put together that I will share. Let me see if there's talking in it; that'll probably be the test. What I'm going to do is stop sharing here and just share a new tab, because I have the other tab open. Well, we can help you get the word out about this. Yeah, cool. Okay, so it's six minutes long. Do you want to watch the whole thing, or should I just go to the end where I show my terminal? I think I'll go to the end; it was about 4:30. I can stop it and kind of explain what's going on.
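For reference, the auto-update mechanism described here works through a container label that Podman's `podman auto-update` command looks for. A minimal sketch, where the image reference is an assumption rather than the actual MicroShift image:

```shell
# Run the container with the auto-update label set to "registry";
# Podman will then compare the local image digest against the registry.
# The image name below is an assumed placeholder.
podman run -d --name microshift \
  --label "io.containers.autoupdate=registry" \
  quay.io/microshift/microshift-aio:latest

# Periodically (e.g. via the podman-auto-update systemd timer), pull any
# newer digest and restart the unit with the new image.
podman auto-update
```

Note that `podman auto-update` only acts on containers that are managed by systemd units, which is one reason the demo wraps the `podman run` command in a unit file.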
Let me just show you the Podman container. Hold on. Is that it? Yeah. Okay. So, let me full screen. You can see this would be what's in the systemd unit file, and I can pull that up after I show this. But this is basically what you would do. The port forward of 6443 is so that you can access your cluster from your localhost, because otherwise it's only living inside of a container; MicroShift is running inside of a container. If you port-forward 6443, then localhost inside the container is the same thing as localhost:6443 outside the container, and you can just run your oc or kubectl commands against the cluster running containerized. That makes sense? For the AIO, you definitely have to run privileged, and you might have to turn off SELinux. That's why AIO is not meant for production; it's meant for developers who don't really care about SELinux running on their local system. Maybe you do, but I usually turn it off. Again, there's a link to it, and you can watch the whole thing, it's six minutes, but I will show you just MicroShift running. First of all, can you all see that okay, or is it way too small? No, I can see it with my old eyes. Okay. All right. So you can start MicroShift with the podman run command, or wrap that podman run command in a systemd unit. That's where we're starting, and I'm going to keep pausing. Okay, so I'm going to copy the systemd unit file from the MicroShift repo; it's just on GitHub, redhat-et/microshift. There it is; you can find it in the repo. I'm just copying that to my local system. That's it; I don't have anything cloned, I have nothing. I'm just copying the unit file to my local system, and now I'm just going to start the service. And you can see that I had a local registry running, so just ignore this registry container.
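The run command being described looks roughly like the following sketch. The image name, volume name, and mount path here are assumptions, not the exact contents of the unit file:

```shell
# Sketch of an AIO-style run command, with assumed names and paths.
# --privileged is required for AIO; the named volume persists cluster
# state; publishing 6443 lets oc/kubectl on the host reach the embedded
# API server as if it were local.
podman run -d --name microshift-aio \
  --privileged \
  -v microshift-data:/var/lib \
  -p 127.0.0.1:6443:6443 \
  quay.io/microshift/microshift-aio:latest
```

In the demo this command is wrapped in a systemd unit, so starting and stopping the cluster is just `systemctl start`/`systemctl stop` on that service.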
But you can see that MicroShift AIO, and this registry has nothing to do with MicroShift, you can see I have AIO running. So let's see what it does. I have a Podman volume that's saving the state, and there's the Podman volume, if you want to look at all your stuff on your local system. Okay, so now I'm going to exec into that MicroShift container, and let's see what's in there. Inside the container I'm root, and what's very convenient about MicroShift AIO is that oc and kubectl are baked into the image. So you don't need oc on your system, you don't need CRI-O on your system, it's just all-in-one. And the kubeconfig inside the container lives here; this is also the mount point for the Podman volume. You can go back and watch this demo, because it shows you everything. All right, you can see it was only 10 seconds. I keep stopping it, but so far, 10 seconds in, things are already coming up, and this was starting from scratch. You can see that already, under one minute, you've got the embedded components such as the kube API server, the OpenShift API server, and etcd. Those are all embedded in the binary, so you're not going to see them separately; when you do oc get pods, they're not separate pods, they're part of the MicroShift binary. What you do see is the ingress and those add-on components: the OpenShift service CA, and host path. We just set up hostpath provisioning; there are other options. So now you can see, at one minute, it's all ready. I'm still inside the container, and there's also crictl inside the container, so I just did a quick check of what containers are running. And here, it uses Flannel for networking, these are all the containers underlying the pods. Okay, so now I'm showing you that I'm outside the container, and this is how you connect to the cluster from outside the container.
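The exec-and-inspect steps narrated here can be sketched like this. The container name and the in-container kubeconfig path are assumptions based on the narration, not verbatim from the demo:

```shell
# Get a root shell inside the running AIO container (name assumed).
podman exec -ti microshift-aio bash

# Inside the container, oc and kubectl are baked into the image.
# The kubeconfig path below is an assumed location at the volume mount.
export KUBECONFIG=/var/lib/microshift/resources/kubeadmin/kubeconfig
oc get pods -A   # only add-on pods appear; etcd/apiservers are in-binary
crictl ps        # the raw CRI-O containers underneath those pods
```

This is why `oc get pods` never shows etcd or the API servers: they run inside the MicroShift binary itself, not as pods.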
This is the Podman command to copy the kubeconfig from the MicroShift container, and I copy it to my localhost, right there. And that's why you port-forward 6443: because now it doesn't know that you're not inside the container. You do have to fix the permissions, but then it's basically, you have a kubeconfig and you're accessing a cluster, so you can see that. So I just wanted to prove that you can create deployments; you can see my deployment running, and I can scale it. Now, this is what I wanted to show you: I'm stopping the service, and you can see there are no containers. Again, I just left that registry there by accident, but the MicroShift AIO container is gone. And now I'm going to start it back up, and there are all my pods, still running. Oh, and I want to show the test deployment. I started and stopped the service, but as soon as I restarted it, it just picks up from the Podman volume where I left off. And to clean it up: if you are running it as a systemd service, you do need to stop the service, that's what this is showing, rather than just stopping the container, because the container will just keep restarting; that's what a service does. Is that it? I believe so. Yes, so to clean up fully, you can remove the volume, and then your state won't be there the next time you start. So that's it. It's just super convenient. Try it out; it literally takes two minutes to run. And let the edge team know what you think, because it's still very new and we're still gathering feedback. So I can tell you here, I'll stop sharing now. So where do you want us to give you feedback? Where's the best place? Let me go back to the slides, actually. You know what, no. There's a Slack channel, and I don't know if that's on there, so let me see if it is.
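The copy-out, connect, and cleanup flow described here can be sketched as follows. The container name, volume name, service name, and in-container kubeconfig path are all assumptions for illustration:

```shell
# Copy the kubeconfig out of the AIO container to the host
# (source path inside the container is an assumed location).
podman cp microshift-aio:/var/lib/microshift/resources/kubeadmin/kubeconfig \
  ~/.kube/microshift-aio
sudo chown "$USER" ~/.kube/microshift-aio   # fix the permissions

# Thanks to the 6443 port-forward, localhost works from outside the container.
oc --kubeconfig ~/.kube/microshift-aio get pods -A

# Cleanup: stop the *service*, not just the container (a service restarts
# its container), then remove the volume to drop the saved state.
sudo systemctl stop microshift-aio
podman volume rm microshift-data
```
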
Is there a Slack channel on there? There's a blog. Oh, okay. I know, here: microshift.io, the docs. If you go to the community page. microshift.io, yep. There's the community page. We're not seeing you navigate over there, but I'm sure we can... Oh, you're not? Why not? Because you're only sharing the one Chrome tab, I think. I thought that Chrome tab was the same thing. Hold on. Someone's posted the link in the chat, so I think we're good. There are a couple of questions. Before I answer them, I just want to say that I was working on MicroShift for a few months and have moved on to other projects, but the core edge team is still very much involved with MicroShift. That would be Miguel and Ricky, and you can find them on the Slack channel very easily. Okay, questions. And they are both talking about MicroShift at OpenShift Commons at KubeCon, so we'll get the word out there. So there are a couple of questions. Neil was asking if it's possible to have the OpenShift web console on a MicroShift deployment, particularly the MicroShift AIO. Is it there, or is it too much? Was it one of the non-core things? Sorry, can you say that question one more time? I can't see the chat for some reason. Yes, if you stop sharing your screen, then it'll let you see the chat. So basically, we have the OpenShift web console, and they're wondering about sneaking that into the AIO version of MicroShift. And Leroy is asking: are the add-on component images baked into the binary too, or are they pulled at runtime? So, a couple of questions. Yeah, those add-on components are the ones that the team thought... they're not really add-ons, because they're there by default, but you can experiment and add on other things.
The philosophy is to keep the footprint as small as possible, so unless there's a really good reason, they're not going to add anything on by default. So Sally, I think we have a lot of good information here, and we also have a few other things. Vadim, is there anything we should be asking that we're not, that you want to make sure gets covered? I wanted to... you can hear me, right? You can. A lot of people have been asking: does OpenShift in any form work on the Raspberry Pi 4 in particular? If it's MicroShift, then perfect. Yes, MicroShift runs on the Pi 4. There's a very big group of people working on it, and there are some really cool demos coming out on the Pi, GPU-enabled also. All right, lovely. Another question Leroy has asked: do we pull images at runtime, or is everything included in those 160 megs? That's a really good question. The references are included; the images are not. But for fully disconnected, there is, last I checked they were just finishing up how to do it, a way where the images come tarred up, and all you do is unpack them with a Podman command. It's super cool. If I knew right where it was, I would find it. So it is possible: there is a way to mirror them, or run them disconnected. And it's a super cool way; I wish I could explain it. It's with Podman, and it's not like the mirroring OpenShift does, it's way better. You can also, I suppose, save them as tar files and then just podman load them. And those, I think, are included as a separate RPM. It doesn't bloat the actual image, but if you wanted to run like that, you can install that RPM, which is just the images. And for those interested in the Raspberry Pi, is that also on the microshift.io website? Is there a link somewhere to the Raspberry Pi group? I think it's definitely included in the how-tos. Let me just see... developers... Where does that conversation take place? Because I know that's the one that... yep, if you go on the Slack...
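The save-as-tar-and-load approach mentioned here can be sketched with standard Podman commands. The image name is an assumed placeholder, and the exact disconnected workflow was still being finished at the time of the talk:

```shell
# On a connected machine, serialize the image to a tarball.
podman save -o microshift-aio.tar quay.io/microshift/microshift-aio:latest

# Move the tarball to the disconnected device (USB, scp, etc.),
# then load it into the local image store. No registry required.
podman load -i microshift-aio.tar
```
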
So MicroShift has its own Slack org, and it's there in the docs. If you go there, there's a whole channel for ARM support. Okay. And it's very active, and if people aren't answering you, just ping Ricky or Frank or Miguel, and they'll get right to you. Cool. Awesome. Well, this is something that we've waited for a long time. I hope I did an all-right overview, but definitely look at Ricky and Miguel's DevConf presentation, because it's great. Diana, I have a quick question if we still have time. Yep, go for it. Yeah. So this sounds pretty awesome. With Fedora CoreOS, we kind of float everywhere from single-node Podman to running OKD on top, but like you said, this is a use case very much in between: people like some of the features of the Kubernetes platform, but they don't necessarily want, or have the resources for, a full OKD installation. I think bringing this to the Fedora CoreOS community would be something really popular. My only question is, I know you mentioned this is experimental, which a lot of these projects are, but Fedora CoreOS is typically more of a set-it-and-forget-it type of thing. So if people were to run the MicroShift all-in-one configuration, we would want to give them some way to automatically update it, so they don't just run it once and it stays on that old software forever, in case MicroShift has security updates or feature or bug fixes. Is there an easy way that you know of to keep things up to date, just re-pulling the container or anything like that? So it's provided as an RPM; that's what Fedora CoreOS would consume, right? And I think you would just make your own Fedora CoreOS with MicroShift, wouldn't you? Gotcha.
So when you say it's provided as an RPM, is it the systemd unit that's provided as an RPM, which still pulls from a registry, or is the MicroShift binary itself provided as an RPM too? The MicroShift binary is provided as an RPM. Gotcha. And that includes the unit file for MicroShift, so you can run MicroShift bare on your host. But you can also run the systemd service that wraps the Podman command; inside that Podman container, you're running systemd inside the container, so inside the container it's just the same as if you were running it on the host: you're running the MicroShift service inside the container. Or, with the RPM, you can just run it without Podman on the host. Perfect, thank you. I won't ask any more questions because we've got to wrap up, but thank you so much for this. Yep. All the unit files you'll find in the MicroShift repository under packaging/systemd, and that kind of explains stuff too. All right. And I see that Jamie has joined us, our other co-chair. In the interest of time, Sally, thank you ever so much for this, and please thank the entire team for getting this work to a place that really is huge; I think I've waited nine years for this. So this is pretty cool. I'll do whatever I can to help you build this community out. Yeah, but it doesn't have a real place yet. It's new, and we don't know where it's going to end up. I'm having a conversation with Erin Boyd about what the next step is, too, so I think it's going to incorporate a lot of enthusiastic OKD people. We're pretty happy about this. So I'm going to go back to the regular agenda, and the next thing on it, for those of you who are following, is the OKD release updates. Vadim and Christian are both here.
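The bare-on-the-host RPM path described in this exchange can be sketched in two commands. The package and service names are assumptions; on an rpm-ostree system you would use `rpm-ostree install` instead of `dnf`:

```shell
# Install the MicroShift binary plus its bundled systemd unit
# (package name assumed), then enable and start it on the host.
# No Podman involved: CRI-O on the host is the container runtime.
sudo dnf install -y microshift
sudo systemctl enable --now microshift
```
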
So tell us about Dirty Pipe and anything else we need to worry about, Vadim. Sure. A week ago we released OKD 4.10, which is roughly similar to the GA version of OCP. There are still a few patches waiting to be merged. We've seen a problem with the cluster etcd operator, which mistakenly marks nodes as having insufficient disk speed. There are workarounds for all of these; we haven't seen any catastrophic failures. John Fortin has reported a problem with a storage mount in the registry; we're still investigating what's up with that. And this weekend I'm going to release another version with the upgrades. Importantly, it contains a kernel update with the fix for the Dirty Pipe vulnerability, which allows users to rewrite read-only files. Apart from that, it also has a regression related to NFS, which is hugely popular, so there is a workaround posted by the Fedora CoreOS folks. We will add, as part of the announcement, that you need to make sure your NFS won't break, and to read more details on the Bugzilla before the next release, and we'll include other fixes coming from Fedora. Another important thing about the 4.10 release is that we have finally upgraded to Fedora 35, and we'll be following the Fedora CoreOS stable stream. We also updated the assisted installer config maps to make sure that you can install 4.10 with that. And now we need to update our documentation to mention a new way to run single-node clusters using the assisted installer. This unfortunately has to be self-hosted on your side; we cannot use the hosted version of the assisted installer on console.redhat.com, but it's fairly simple, really. We will also update the documentation to mention that bare metal IPI is finally supported. I haven't seen any reports that it's actually working, so if anyone has the luxury of Redfish, iDRAC, and things like that, and would like to test OKD, that would be very, very appreciated. I believe that's all we have for the actual release updates.
We're also working on CoreOS layering, which will simplify our builds and would enable people to layer their own configuration settings and RPMs onto their custom images, as if it was a standard build config, but all of that work is still highly experimental. We're waiting for a critical piece in the MCO to land before we can start rolling this out. Yeah, I believe that's it. Just to quickly add to that, and maybe Dusty has some more info on this too: the CoreOS layering is planned for 4.11, so it won't land in the 4.10 timeframe of OKD. But I think that's going to bring a few big improvements with it, both on the usability side as well as on our CI and testing side, because we'll be trying to move an OKD end-to-end test into Fedora CoreOS, essentially, to have OKD end-to-end tested for each Fedora CoreOS change. We currently only do this sporadically and don't have continuous testing for that, so hopefully that'll improve how we test things a lot, and hopefully we can avoid breaking updates in the future. Oh yeah, my router has enough antennae. Yeah, real quick, on the NFS issue that Vadim mentioned: if I understand it correctly, it only happens for certain NFS server NAS products, so I don't think it is something that affects everybody who uses NFS. I think it's only if you happen to have one of these QNAP NAS devices that you would be affected; hopefully not everybody who uses NFS would hit the issue. I mean, otherwise there's no way people are running a newer kernel and not complaining about this. All right, I don't see Timothy here, and the next thing up is usually the Fedora CoreOS update, so Dusty, if there's anything, go for it. Yeah, I don't have anything too specific, other than obviously that new kernel is coming to stable, as Vadim mentioned earlier.
We're going to start rebasing our next stream onto Fedora 36 here in the next week or so, and that coincides with the Fedora 36 beta, so you might be able to use that as an opportunity to get some early testing in on the Fedora 36 bits. So, next up: do I see Brian Innis on for doc updates? Brian is not able to make it today, but I can jump in with a couple of things we have from that. The Code of Conduct is now up; congratulations and thanks to all the people that helped contribute to it. And we're going to start filling in details for operator wants and wishes on Christian's original issue in the repo, which dates back, I think, to January or February of last year; it's been around for a while. The idea is that folks should start having conversations within that issue about your wants, needs, and desires for operators, using the forthcoming operator catalog. For reworking the OKD docs repository, we're going to start just creating PRs, to align with Vadim and the other folks who are signing off on the proposed changes as we move to the new repo. Multiple people have been added as owners of the new okd-project repo, so there's now probably a dozen people: Brian, myself, Bruce, Diane, the usual suspects. We want to make this accessible to folks and start moving there. At the next docs meeting, we're going to get very specific about our transition plan. Another thing that came up is website styling. Brian is working with Brandon on improving the visibility and accessibility of the new OKD website, getting the color scheme properly tweaked, and then we're also going to talk about Matrix and make a decision, because folks are having a hard time registering on Matrix, so we may actually scrap the idea.
If anyone in this group has feedback on Matrix, has been able to successfully register, has found it useful, and thinks OKD should be using it, please share your experiences with us in the documentation group, because three people in the documentation group have had issues trying to register with Matrix. Sorry, Jamie, quick question: have they registered with the Fedora Matrix instance, or which one? Because I have multiple Matrix accounts, like the original matrix.org one, and then there's one from Mozilla. Right, so this is supposed to be a room hosted on the Fedora one, and apparently, so Brian, for example, when he tries to use his Gmail account, gets "your organization is not approved" or some message along those lines. I wish Brian were here to provide more info, but maybe we need to clarify the instructions a little. I don't know, we'll see, but if you want, we can have a side conversation about how to utilize it, and whether it is the best tool for group communication and group chat. I'll get back to you on that, because for me it works, and I can even use my Mozilla Matrix account to access the Fedora rooms, so I don't know. Okay, I'll figure out what's going on with that, and that's it in terms of communications. You want to walk through the issues now? Yeah, sure, let's go through issues real quick and see what's new and fun and exciting. The OVN-Kubernetes bug with external IPs: I think there was a response on that. Was there a Bugzilla filed for that? It doesn't look like it. There's no response, so we don't know if there was. The simple content access alert: Vadim, you want to fill us in a little bit on that, instead of us going bit by bit? On the simple content access error, we're still discussing this. There is an OCP feature to easily deliver subscriptions to RHEL nodes. That's great, and absolutely useless for OKD.
There is a setting to disable it, but we're still discussing whether we should own the whole config map and what the implications of us setting this are. Once we clear this with the Insights operator folks, we can easily add this as a config map during the next update. It would be automatically applied and the alert would be gone, so it's fairly easy to fix. The problem is that we don't know all the consequences of this yet. As for other discussions, I can't think of anything which is immediately raising itself as a problem we need to fix. There are quite a lot of problems which are shared with OCP, and those just need to be reported to OCP. Okay, and speaking of discussion, in the discussion section there is that conversation with John about NFS, and there's an error there, and we should probably mention this in the meeting because other folks might have this issue; this is in discussion 1153. John interpreted the error as an RBAC issue initially, but actually it's from the storage file system, it's an NFS issue, and it was causing the builder to fail, basically. So folks should check that out to get clarification that this has to do with the storage and not with the builder service account permissions. Let's see, that's it for discussion items and issues. So, operator status, did we talk about that yet? Do either of you want to provide an update if you haven't already? We don't have anything, unfortunately. We have all the infrastructure in place. Now the hard part is to get buy-in from the teams so that they will start deciding whether their operator should be in the community repo, whether it should move to an OKD-specific repo, or whether it should just be created in the OKD-specific repo.
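Vadim's proposed fix, shipping a config map that disables the subscription-delivery feature so the alert goes away, can be sketched roughly as below. This is a minimal sketch only: the exact config map name, namespace, and keys were still being cleared with the Insights operator folks at the time of the meeting, so every name in this snippet is hypothetical.

```python
# Sketch: build a Kubernetes ConfigMap manifest of the kind discussed for
# disabling the simple content access (SCA) feature on OKD clusters.
# NOTE: the namespace, name, and config keys below are HYPOTHETICAL --
# the real values depend on what the Insights operator actually reads.
import json


def sca_disable_manifest():
    """Return a ConfigMap manifest (as a dict) that would disable SCA delivery."""
    return {
        "apiVersion": "v1",
        "kind": "ConfigMap",
        "metadata": {
            "name": "insights-config",          # hypothetical name
            "namespace": "openshift-insights",  # hypothetical namespace
        },
        "data": {
            # hypothetical key and value, shown only as the kind of
            # toggle described in the meeting
            "config.yaml": "sca:\n  disabled: true\n",
        },
    }


if __name__ == "__main__":
    # Print the manifest as JSON; it could then be applied via `oc apply -f -`.
    print(json.dumps(sca_disable_manifest(), indent=2))
```

As described in the meeting, once the real equivalent of this manifest ships with an OKD update it would be applied automatically and the alert would clear on its own.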
Since 4.10 has been released, they should have more time to actually get this done, and we'll start pinging them, but at this point there hasn't been anything actually happening, apart from enthusiastic responses from the internal teams, basically. I think we will see some movement on that soon now that 4.10 is out. The stress level is a bit lower for everybody, so I think we'll now have better chances of the teams looking at actually building as part of OKD. Is there any outreach, Vadim and Christian, that we should be doing, or that I should be doing, maybe? I'll revive that internal chat we have with the operator folks to see how we can kind of get them to start the discussion with the teams. If we could get something set up, then at KubeCon EU, I know both you and Vadim are going to be there, and Daniel Messier will be there, and maybe some of the other operator folks, maybe we can just put our heads together and get a roadmap put together at that point, hopefully before then, but it would be nice to move that forward as well. It would be nice to be able to give users a sense that there's a plan moving forward, rather than just an amorphous announcement. Okay, and Christian, did you want to touch briefly on the provider onboarding stuff? Yeah, just very briefly. I thought it might be interesting. We have internally created documentation. At first it was just geared towards our developers within Red Hat, but we've opened that up and it's public now. It's kind of an onboarding guide on how to get new platforms to run OpenShift, and it is actually kind of geared towards the platform providers, so they can onboard OpenShift themselves and add support themselves without having to involve Red Hat, at least at first.
There's kind of a tiered support model in the works, and all of that applies to OKD as well. Externally, we've had two folks enable OpenShift, or get OpenShift to run, on the Vultr cloud as well as on, I think, DigitalOcean, kind of just following the steps there. Both of those obviously aren't supported in any way, but for folks interested in running OKD on infrastructure that isn't already supported officially, that is kind of the place to look for what to do and how to proceed. It'll kind of be geared towards OCP, towards the providers themselves, to then add official support along the way, but all of that applies to OKD as well on the technical level. So if you want to try to install OKD on another platform, that repository is where a lot of the information on this lives. This actually connects to a conversation from last year about reaching out to different providers that aren't supported yet. OKD had sort of talked about, hey, let's reach out to these folks and see if they would be willing to donate some resources to get things working on their platforms. This could help OKD, basically, you know what I mean? We could leverage that to get some more platforms supported, for sure. Diane looks like she agrees, absolutely, yeah. So I just had a quick question, or two questions. What is this Vultr cloud that you speak of? I haven't heard of it before, and can you put a link to that in the chat? I haven't seen them, and there's a bazillion clouds out there, so it's good to know a new one's there. And I had also heard that Alibaba Cloud and Azure Stack were now documented in the OpenShift docs, from Michael Burke, there was a little thread on that, Vadim, you were on it. And thank you. That's how you spell Vultr, not the full word: V-U-L-T-R. Yeah, that was a new cloud for me as well. A colleague of mine did that in our hack week and got it to run.
So the other thing, I'm just curious: Alibaba and the Azure Stack Hub, are those going to magically appear in the OKD docs on okd.io, or is there something we have to do, like reach out in the docs group with Michael Burke, to make sure they show up? They should just magically appear. We have a setting to remove, or rather override, things for OKD, but everything else just comes straight from the OCP docs. Do we have Fedora CoreOS images on both of those, on Alibaba and Azure Stack Hub? Yes, we definitely have Alibaba, and last time I checked there were Azure Stack Hub images. I don't have access to the accounts there, so I never checked that OKD is actually installable, but I don't see a reason for them to break, basically. We have both of the images; unfortunately, we don't have access, at least for Alibaba. We have Azure access, I don't know about Azure Stack to be honest with you, but we don't have community access right now for Alibaba, so we're not uploading or testing every release. That's kind of a manpower problem and an access problem; with enough manpower we could probably talk to the right people to get proper access, but we're just a little thin. We could use OpenShift CI to actually test this. I'll try to do this next week, but nobody has requested Alibaba and Azure Stack Hub for OKD specifically, so that's what we get for free. OK, so if anyone's listening to this recording and wants either of those things, reach out and let us know. The other thing that I just wanted to drop in the last 60 seconds is that over a year ago, we did sort of a configuration and deployment summit for OKD where everybody walked through it, and I think at this juncture, with maybe the Vultr folks and the DigitalOcean folks, it's time to revisit that as something to bring the OKD community together again and to expose these new docs. I can see a few thumbs up there in the chat.
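One quick way to check the claim that Fedora CoreOS images exist for Alibaba (aliyun) and Azure Stack Hub is to look at the published FCOS stream metadata, which lists one artifacts entry per platform. The sketch below parses a trimmed-down, illustrative sample of that metadata; in practice you would fetch the live document from builds.coreos.fedoraproject.org/streams/stable.json. The release numbers in the sample are placeholders, not real values.

```python
# Sketch: list which platforms ship a Fedora CoreOS image by inspecting
# stream metadata. SAMPLE_STREAM is a trimmed stand-in for the real
# document published at builds.coreos.fedoraproject.org/streams/stable.json;
# its release strings are placeholders.
import json

SAMPLE_STREAM = json.loads("""
{
  "stream": "stable",
  "architectures": {
    "x86_64": {
      "artifacts": {
        "aliyun":     {"release": "0.0.0.0"},
        "azurestack": {"release": "0.0.0.0"},
        "aws":        {"release": "0.0.0.0"},
        "vultr":      {"release": "0.0.0.0"}
      }
    }
  }
}
""")


def available_platforms(stream, arch="x86_64"):
    """Return the sorted platform names that ship an FCOS artifact for `arch`."""
    return sorted(stream["architectures"][arch]["artifacts"])


if __name__ == "__main__":
    platforms = available_platforms(SAMPLE_STREAM)
    print(platforms)
```

With the live stream document substituted for the sample, the same lookup answers the "do we have images for Alibaba and Azure Stack Hub" question without needing account access to either cloud.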
If we could organize something for that, probably post-KubeCon EU, in June-ish, I think that would be a lovely time to do a push around that. And again, that would be, I think, virtual, but a way to do that, and then find the Panda logo, the latest one, and give it away, you know, t-shirts and swag or something like that for people, and do it in conjunction with the docs group's push to get updated guides. I think the timing is right now with these new docs being out there. So Dusty, we can work with you to make sure that, maybe in the June-July timeframe, we have the images where we need them, or at least access to them, and that I think would be a great way to come into the new releases. Yep, fortunately we already have the images available with every release that we do, so it's all wiring after that point, right? Just wiring everything up. So we can work on that, but yeah, cool. And my last question for Vadim, and I haven't checked because I've been offline for three days: were you able to use the Twitter handle to do a quick announcement on the latest release? No, it's been a week already, so we'll probably go with the next one. I definitely have the access, but it just felt very wrong to post the release announcement a week after. Yep, your judgment is good. All right, final words, Jamie? Go forth and conquer. Live long and prosper. Oh no, that's the other one. Take care, guys. Thanks everybody for all your hard work.