Hi, everybody. Hi, Christian, thank you for joining us. And I'm just going to unmute everybody so you can all do this. Hey, Maya, there you are. Hey, Mike. We'll see Christian. Do we see Christian? And there's Neil. All right, so we'll just do Hollywood Squares here. Today we have hopefully a short-ish meeting, I don't know, maybe. I'm going to share my screen and we'll start sharing my screen. Vadim has said he's going to be a little bit late. And I will find my working group notes. This is right in the way, so. Oh, welcome, everybody. Happy Tuesday. This is the OKD working group meeting. If you're in the wrong place, stay anyway, because we like the company. Today we have a proposed agenda. I put the link to the attendee meeting notes in; if you could add your names into that, that would be helpful. Also, Maya is here; she's the person I mentioned last week who had the IoT ARM64 use case. So I was going to use this time to also ask her to explain a little bit about what she's looking to do and what she needs. But first, do you want to walk us through an update on OKD 4, Christian, where we're at, and then the road to the release? And I'm apologizing for saying July 9th to you and making you panic the other day. I've been watching. Watching but not paying close enough attention. So cool. So take it away.

So, yeah, I think Vadim isn't here yet. But what Vadim has done is we've updated the nightly builds to be based off of 4.5. So new nightlies are actually built from the master code now. And that's on the way, ramping up to the actual OCP 4.5 release, which OKD GA will be using as a base. So everything should work, all the platforms should work. I think we have an issue with GCP currently, but that will be resolved; it's a known issue and it got lost in the latest rebase. But what's new is the vSphere IPI install path. So I'd like to ask anybody who has access to a vSphere environment to test that out with the new nightly builds. I think the next beta, which would be beta 6, is also on its way; it's either already been released or is going to be released very shortly. Yeah, and we're still on our way to releasing OKD GA very soon. So we'll have to wait for the OCP 4.5 release, which is, I'm not sure if that's publicly available, but I'll just say it: I think we're aiming for a release on July 13th, and OKD GA is expected to be released a few days after that. So, yeah, not far from now. And that is, I think, it for the update from my side.

Wow, we have a real date. Well, that's as real as it gets, okay? So we've been here before. I have a lot of faith this time around because there's a lot of other people saying that same date, but we'll see. And I'm pretty sure OKD GA is not going to happen on the 13th of July, I think. Why not? Okay, that's OCP 4.5 GA, and then we have to backport a few commits from master onto the release-4.5 branch, making that the fcos-4.5 branch. So, yeah, it's going to be July 13th plus a few days, a few very short days, hopefully. Yeah, I think there's also some requirements on Fedora CoreOS that we just, you know, realized were super important. So it might take into account some Fedora CoreOS release schedule as well. Fedora CoreOS doesn't have a release schedule; it just makes snapshots every two weeks. Yeah, I mean, that's the goal every two weeks, but we have a change that we need to land.
That we, in other words, there's a change that we need to make that we don't want OKD to have to release GA first and then make for their users, right? It would be smoother for OKD if the GA included the change. I can link to it in the BlueJeans chat as well. That would be great. Yeah, just to put a topic to that issue: it's the naming of the Ethernet interfaces. We've been using the old schema in FCOS so far, with eth0 and so forth, while RHCOS and the new scheme would actually be ens192, and we'll be releasing OKD GA hopefully with that new naming scheme without actually breaking old installs. That's at least our goal. Yeah, I hope we can promise, well, I hope we can actually do that, but yeah, we wanna get that fixed in FCOS before we do GA.

All right, so as we said, it's all relative dates, so don't worry too much; don't worry, I was gonna do that. So I'm just gonna put that in. I'm not sharing my screen again, I apologize, trying to see who was talking, because Dusty, I didn't recognize your voice right off the bat. Yep, that was me, Dusty. Dusty. I just wanted to highlight that Bruce had a question in chat here: is there a pointer to the vSphere instructions, or are those just part of the installer instructions? And Mika, could you say that again? Oh, sorry, I just wanted to highlight Bruce's question from a few minutes ago, make sure we don't lose it. Is there a pointer to the vSphere instructions? Can someone dig that up and share it in the notes? My connection seems to be a bit wonky; I didn't get the question again. I'll just, yeah, please. Is there documentation on docs.okd.io about doing IPI vSphere? That's a good question. So let's just go take a look. Doesn't look like it, actually. I looked on the, I don't see. That might be, let's go and look. So I think we're actually sharing the documentation with the OCP latest, at least on the source side, and there's only a few things like the OS naming that are different. So even if they're not there, the vSphere IPI should work exactly as the OCP vSphere IPI install. It's just that you'd have to reference or use the different installer binary, or build it from the FCOS branch, and, well, the actual image references in the JSON file in the installer should already be updated.

So Joseph, to answer your question on Azure, there hasn't been any progress that I'm aware of, and it's still sort of blocked on the Fedora side to get the images uploaded to Azure, and we won't be blocking the GA release on Azure availability. And Mike's question: yes, the aim, the goal is to get rid of the FCOS branches eventually; that will probably happen sometime after GA though. So in the 4.6 release, we hope we don't need the FCOS branches anymore. For 4.5, we will have our releases on the FCOS branch because we will actually use the release-4.5 branch, and then we still have to backport a few commits which by then will be merged into master already. So at least on the MCO side, in 4.6 we won't be needing the branch anymore; in 4.5, we will still need it. The installer is still a little bit unclear about when we can get rid of the branch, but we're not blocking GA on that, and it should also happen quite soon. I expect in the 4.6 timeframe sometime, or 4.7 at the latest. But that's not really, yeah, it's not a blocker. Any other questions? And who was the person who was asking for the vSphere link?
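Going back to the interface-naming change Christian described above: below is a minimal sketch, not the actual fix being landed in FCOS, of how a cluster admin could pin the legacy eth0-style names on worker nodes with a MachineConfig kernel argument while such a naming transition is in flight. The MachineConfig name is made up for illustration; net.ifnames=0 is the standard kernel parameter that keeps old-style interface names, and the example assumes the kubernetes Python client pointed at an OKD/OCP cluster.

    # Hedged sketch: pin legacy eth0-style interface naming on workers via a
    # MachineConfig kernel argument. Object name is hypothetical; this is not
    # the FCOS-side fix discussed for OKD GA, just an illustration of the knob.
    from kubernetes import client, config

    config.load_kube_config()  # or load_incluster_config() when run in a pod

    machine_config = {
        "apiVersion": "machineconfiguration.openshift.io/v1",
        "kind": "MachineConfig",
        "metadata": {
            "name": "99-worker-legacy-ifnames",  # illustrative name
            "labels": {"machineconfiguration.openshift.io/role": "worker"},
        },
        "spec": {
            # net.ifnames=0 tells the kernel/udev to keep old-style eth0 names.
            "kernelArguments": ["net.ifnames=0"],
        },
    }

    client.CustomObjectsApi().create_cluster_custom_object(
        group="machineconfiguration.openshift.io",
        version="v1",
        plural="machineconfigs",
        body=machine_config,
    )

The MCO would roll a change like this out and reboot the workers; whether anything of the sort is actually needed depends on how the FCOS naming change above lands.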
Not sure I gave you the right one, but it was the one that I found searching. And I apologize, everybody, I can't see the chat questions when I'm sharing my screen, which is why I keep popping back and forth. That's why I mentioned it. That was Bruce, by the way. Yeah, I was asking just because I do have a vSphere and I'm working with the IT services people to go through the native installation parts. I installed using bare metal, having created the virtual machines, so I know that works. Okay. All right, well, hopefully we can get some vSphere stuff going.

So I don't see Vadim. Is Vadim on here yet? I think he's still locked up in planning meetings for the day, so I'm gonna say he's a no-show for today. In light of the June 13th, and perhaps July 13th — I get ahead of myself all the time — perhaps July 15th release date of the GA, the following week, whatever that Monday is, I think maybe another one of the OKD AMA sessions if people are available for it. So I'll tentatively put us all back on the hook for a GA party on whatever the following Monday is. I'm just looking at my calendar; that would be July 20th. I don't think I've booked anybody there. On the 13th, we were supposed to have an FCOS one this week, but we had a snafu in the matrix on Monday with live streaming. For the AMAs, I like to make sure everything's live streamed, so we're gonna reschedule the FCOS one; I have them booked for July 13th. So we may be able to go ta-da or something on the 13th, but I'm not counting on that. So you all, or at least the primaries on that, should have invites in your inbox for the 13th, including Christian and Vadim, who I want on the call if you can, so we can say how wonderful FCOS is and how dependent we are on them. And Dusty too, who's on the call today. Yeah, Dusty's on the FCOS one; Ben Breard and Colin Walters have been invited too. So we'll see if they all make it. And that's a pretty loose format: Dusty's just gonna give an overview on what FCOS is and why, and maybe a little bit about the release cycle. I'll see if I can get Ben and Colin to do a little song and dance as well, and then we'll just open up for Q and A. And that's the format for whatever we do on the 20th, besides throwing up some balloons and announcing, using Joseph's reworked logo for us, and figuring out some stickers or something. So that's the great news, and a blog post of course. So there's all that.

So today I did manage to get one person to join us who had the use case I talked about last time, the ARM64 use case. So Maya, if you could, if we're okay with that, Christian, I'd love to have Maya explain the use case and what she's looking for for her IoT project. Maya, if you're game, take it away. Yeah, yeah, no worries. So, a little background: I've been working with a company that does retail analytics for two years. So who's in the stores, what are they paying attention to, how do we grab their attention? Controlling content on digital signage. So you get 18-to-24-year-olds walking up, you show them content relevant to them, and so on and so forth. We've seen increases in sales and so on. The main problem is that it's running on Android, which is absolutely awful for doing anything. And I've finally gotten my client off of Android and onto the idea of Linux, and taking the monolithic Android app, which was a pain to maintain, and breaking it into all the little bits and pieces.
So we have a camera input, and then we have age detection, gender detection, attention span, and so on and so forth. So we have a whole bunch of inputs being processed and a bunch of outputs talking to the CMS. And it just seems like, oh, right, this is a microservices, containerized sort of architecture thing. And it's a bit more powerful than your typical IoT thing. This is the device that we run on that I've spent two years designing. It's got a six-core ARM64 on it with four gigabytes of RAM, so it's no slouch in terms of power. We can also add Google's TPU module to it, or an FPGA if you need a specific bit of processing for a very specific app.

And, I mean, one of the problems with Android — well, it exists in both cases — is how do you manage a fleet of not just 10 of these, but say 10,000 of them? We do have one client customer that has 38,000 retail locations worldwide. We're rolling out 25 of them in the next month, and making sure that we can handle a large-scale, very wide network of devices is something that's really important to us. One of the reasons we're on ARM is simply that Intel and AMD are too expensive. And one of our direct competitors is now actually one of our customers, because their device costs 10 times ours, and we spent a lot of money doing the cost optimization. So, getting it on ARM, getting it lower powered, still being able to provide all the services and things that we want.

This is like, okay, we take an architecture that is containerizable, designable, scriptable and all the rest of it, we drop it on each unit, and then we drop it on a thousand units or whatever, and have a server-side management console: right, all of the devices in Washington State — we're gonna run a campaign in Washington State. So you click on all the devices in Washington State, you update their configuration, it pushes out functionality or content or whatever, and you can do national campaigns or regional stuff. The other thing that we've discovered by designing this architecture is that now, instead of just being a retail-focused infrastructure, we can apply it to a whole bunch of other businesses where security is important, privacy is important. We're subject to GDPR rules in the UK; I can't afford 20 million per data breach. So, keeping everything properly locked down — I've been chatting with Peter Robinson, who does a lot of the ARM work for Fedora, and we've got a trusted platform module in there, so we can completely encrypt and own the boot chain. We only run signed images; only images that we sign can run on the device, our images can't run on anybody else's device, and so on and so forth. So it's designed with security and privacy from the outset, and to be as flexible as we can possibly make it.

So, the camera is just one input. We've also been prototyping and playing with some millimeter-wave radar. In the retail space, if you have shop displays up, you can't see through them, or the camera can't see through them, but the radar system can. So you have a better idea of occupancy count based on different types of sensors and things like that. And yeah, we just started with ARM because, well, it runs Android better too, and it's less expensive. So we're at a cost per unit of about $2,250, and if you look at NUCs or even AMD's offering, their devices start at that. We've integrated a 4G modem, we've got Wi-Fi, we've got Ethernet, we've got Power over Ethernet.
So we've really made this thing as easy as: stick it to a wall, plug it in, turn it on however you can, and it just goes. So to have that just-turn-it-on-and-go plainness requires a lot of backend coordination. But, you know, having the device register itself on the network the first time, do all of its bootstrapping, go to its default configuration — all of that fun stuff is stuff that I never want to do manually ever again. So that's basically where we're at. Looking at the suite of tools and following OKD, OpenShift, all the operator frameworks, also the sort of let's-ditch-Docker-in-favor-of-something-more-secure — these are all the things that have been drawing me to OpenShift and OKD, and Diane and I have been loosely talking about this for a couple of years, and now we're finally at a point where I was like, right, I need some infrastructure set up. I need some people who can set up the cloud side of it that will, you know, set up and execute and deploy hundreds or thousands of units at once. Some added bonus problems: VPNs for, you know, 10,000 remote systems don't work very well. So communications and things like that. So there's a few challenges left to be had, but it looks like, from a starting point, OKD is the sort of collection of tools that should make it as easy as possible. Apart from the fact that it mainly runs on x86 and not ARM64, the underlying OS, Container Linux, looks like the best thing, but the last time I saw anything ARM64-related, it dated back to 2015. So it's really getting an ARM64 build of it, and then we're ready to hoover it up and go.

Also, in terms of the weight of it, we don't need the full K8s distribution and all the extra features. K3s looks far more attractive because it's just what we need to run, and because we are building walled silos and walled gardens, we can control what does and doesn't need to be supported in that. So, OKD for IoT, which is sort of a dangerous name — I think OKD Lite is probably safer. I love OKD, it's okay. I mean, I want the t-shirt, so I'm just saying. I'm happy to wear the t-shirt too, but some people may not get it. So, just having a lighter-weight version of all the stuff that runs in big heavyweight data center instances, able to run on ARM64 devices with a couple of gigabytes of RAM, maybe 16 gigabytes of storage, and still manage to do everything that we want to do. That's it in a large nutshell. Any questions?

So, there are lots of questions, I think, from my perspective. One, getting a compiled OKD that runs on ARM64 architecture is a blocker — and it does sound like fun — as well as really understanding better what it is about K3s, what we would have to slice off of OKD to make a K3s-like thing. So I'm gonna unmute, because I can see that EAM has joined us. So, if people have opinions — I have opinions, I'm tired of mine. I asked Maya to come and share it here so that some more technical insights might be able to join this conversation. So, please, Christian. So, maybe just because Container Linux was mentioned: the successor to Container Linux is Fedora CoreOS, which we are basing OKD on. So that's great. And then, for the use case, I'm not sure whether Kubernetes is really needed; it does sound to me that maybe it isn't, if it's like sensors and stuff that run on all the machines anyway. It's maybe more of a provisioning thing.
For me, Kubernetes is needed when you need this scheduling of workloads across many machines. If all the machines run the same workload anyway, you don't really need it, maybe. And then Fedora IoT, which Peter Robinson does, may be a great alternative there, because they also have a provisioning system in place with the Ignition configuration, which we also use for OKD and OCP, and they have a service called Zezere to do that — to sort of have one server that can provision machines with a given config. Because, well, I think we're a bit further off from a real OpenShift on ARM right now, just because of the resource constraints we have. Most ARM devices won't support it, and we can't just turn OpenShift into a K3s by ripping out pieces. At least not that easily — not that it's never gonna happen, but that's not a thing we can achieve in the short term, I think. But yeah, it does sound like a very interesting project, and I'm definitely willing to help with anything there. And I think the first step we can actually do is get all the parts built on the ARM architecture. And then, even if there are no machines that can really run them, you could at least try to run them, or virtualize that, or stuff like that. And Vadim may know more here as well. Vadim, I think I have you unmuted, so if you wanna try speaking.

Yeah, that sounds like an incredibly interesting project. If I understood correctly, Fedora IoT works on that device, so we could use it instead of Fedora CoreOS. Later on, it would be just a matter of building a payload, and we can reuse OCP binaries — well, the majority of them, because due to license constraints some packages still have to be rebuilt. But given a large build farm, that can be done. And once we got there, mass installation would be the tricky part. If the devices have IPMI or any kind of remote control interface supported by Ironic, we could use the bare metal IPI scheme to massively provision a lot of instances and make them join the cluster like we do with the standard machines. That would be very impressive. And eventually, for all of the clusters we create, we could use Open Cluster Management, or whatever the thing is now called, from IBM to control various clusters, and use Kube federation to move workloads between them and tweak them. So that pretty much covers the issues — except it all needs to be done. The tricky part would be getting Fedora IoT on those devices. So if that works, that would be a very huge boost. Once we're done, we could prepare a payload of OKD stripped of several operators, for instance. You probably don't need telemetry. You probably don't need OperatorHub. You can get away with a single Prometheus per cluster, and so on and so forth. We have a set of instructions for how to do that, but I don't think it's actually being tried right now.

So I think there's a few things that we could do. Probably the first is we actually publish some sort of lightweight guide: like, hey, if you happen to have a resource-constrained hardware setup, what are the things that are optional, right? I think that's valuable for anybody, not even just Maya's case of trying to do this on ARM boards with four gigabytes of memory. So I think that'd be valuable. Regarding Fedora CoreOS, we do have a plan to actually support ARM64 hardware. We don't have a plan to support 32-bit ARM, but it sounds like you're already on ARM64, so you're good there, right?
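Since both Christian and Vadim point at Ignition-based provisioning (Zezere in Fedora IoT uses the same Ignition format that FCOS, OKD, and OCP consume on first boot), here is a minimal sketch of what such a first-boot config looks like. The SSH key and hostname are placeholders, and the structure follows the Ignition spec 3.x layout; this is an illustration, not a config taken from the project.

    # Hedged sketch: a minimal Ignition (spec 3.x) config of the kind a
    # provisioning service like Zezere could hand a device on first boot.
    # The SSH key and hostname below are placeholders, not real values.
    import json

    ignition_config = {
        "ignition": {"version": "3.1.0"},
        "passwd": {
            "users": [
                {
                    "name": "core",
                    "sshAuthorizedKeys": ["ssh-ed25519 AAAA... admin@example.com"],
                }
            ]
        },
        "storage": {
            "files": [
                {
                    "path": "/etc/hostname",
                    "mode": 420,  # 0644
                    "contents": {"source": "data:,signage-node-001"},
                }
            ]
        },
    }

    with open("first-boot.ign", "w") as f:
        json.dump(ignition_config, f, indent=2)

A device booting an Ignition-enabled OS would fetch a config like this from the provisioning server on its first boot and apply it before the system comes up, which is the "register itself and go to its default configuration" flow Maya described.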
Yeah, and we have unofficial builds right now, but we obviously want to bring the other architectures under the official build pipeline and produce those at the same cadence that we do the others. So we have a plan to get there; I'm just not sure what your time horizon is on it. Well, I'm a customer — I want it yesterday. Of course.

I have a question for Maya, and I think we shouldn't overlook what Christian said about whether Kubernetes is the right tool for the job here. Do you predict needing the features that Kubernetes is adding? Because, like, I'm struggling to think — I wouldn't think you'd want to put all these devices into a single cluster, which means, would they be single-node clusters? And then the question is, well, what are you really getting out of that? Like, could you just use a container runtime and a nice secure operating system to achieve the same thing, basically?

I think one of the core problems here is that without OpenShift today, we don't actually have a story for provisioning any of these systems at scale, like at all, period. The only tool we kind of had for this is being EOLed right now, because once Pulp 2 is EOLed, we actually have no way of mirroring our rpm-ostree or OSTree repos at all. We have no way of pushing them out at scale. We have no way of replicating them. We have no way of easily provisioning them. And we have no way of tracking that provisioning. Right now, all of that is built into the MCO, which essentially traps you into using OpenShift to do this, even though it's the wrong tool for this whole workload case. Like — Neil, just — yeah, stop for a second. Just, yeah, I think Christian already mentioned the — I'm gonna say it wrong — Fedora IoT's Zezere. The weird name that I can't say. That thing does limited provisioning for it. It's not quite to, I think, the extent that Maya's asking for, but it could be evolved to do so. We still don't have the replication or the locality things that are required to make that actually efficient. Right now, this actually automatically happens when you have OpenShift clusters and you're deploying them, because it will replicate the OSTree payloads within the infra nodes and then deploy them to all of the worker nodes and then reschedule them and bring them up and stuff like that. There is no equivalent to this for the non-OpenShift case right now, because the only tool that did this is now EOL. Which you're referring to is — Pulp? Pulp 2. Pulp 2. Pulp 2 is the only implementation of a mirroring, replicating, mass-whatever tool for OSTrees. Nobody wrote anything else, and Pulp 3 doesn't have support for OSTrees, so we've got nothing.

Okay, so those are all good points, but I'd like to go back to Dusty's suggestion that we do some documentation about what you can strip out for resource-constrained deployments of OKD. I think that might be a good first step — not that I'm volunteering to write that, but as we get to GA, I'm just looking for new things to do. And I'm also looking for something that can be leveraged to compete against the Rancher K3s project as well — something that is not tied to Docker and some other things along that line. So I'm also trying to think about what we do next as a working group and what the use cases are. Which is — Maya, you're the guinea pig, face it, for this topic. That's okay. You've heard all this; tell us what you're thinking now that you've gotten the brain dump, or a dump.
So I have been playing a little bit with Fedora IoT, and played with the unpronounceable provisioning tool. And it is still very limited. You can add the root SSH key and set a couple of basic parameters, but you can't really push out any of the OSTree stuff. I've been playing with Silverblue and looking at that mechanism in Fedora IoT, with respect to Fedora IoT on ARM. I've got a developer who's working right now on U-Boot and the primary bootloader, or the first-stage bootloader, that we can configure from the processor. So it actually lives external to main storage, and we're adding functionality to it to be able to scan the system image and confirm that it's a properly signed image. And if it isn't, it has the ability to call home over the modem or over Ethernet to download an image, which would really help for the first provisioning and bootstrapping process, if all of the base boards can say, oh, right, I don't have an OS, I'll go get one. So we're looking at that.

The provisioning and the management are really the biggest problem. So whether or not we need a full-featured K3s or K8s is probably heavily debatable. But we do want to be able to change and add things to the single-node cluster, or the single device — add services, delete services. And the management and the monitoring of those services is also important, because that determines billing. If you're doing loads of age detection stuff because that's important to you, we're going to charge you for that; and we're not going to charge you if you don't care about gender detection. So whatever the business cases are for the particular modules that we're running. So knowing how much compute that takes, whether it's using the TPU or not, whether you're using an accelerated version. The configurations won't change dramatically; they'll change maybe once a week or a couple of times a month. And it's not like a giant web app — there's no need for horizontal scaling or vertical scaling. It does what it does. So the IoT and the OSTree model may be perfectly sufficient, so long as we can add containerized services where we don't have to worry about dependency clashes. I hate PHP for all the different versions that have ever come out, and there's other stuff I'm not really enjoying much more. And wherever little bits of obscure source appear from and disappear to, I want those in a container so I don't ever have to worry about rebuilding them. So I see some heads nodding, and it's a familiar story that I've heard at every tech conference about apps and their evolution and/or devolution and self-destruction.

So: the provisioning, the scaling, the resource monitoring, and the management across the fleet. It's what Neil was describing about the replication and getting the base-level OS installed across everything. If that tool is now disappearing, then that's slightly worrying. Charles joined us. So you walked into an interesting conversation, Charles. Everyone who joins late is gonna go, what the heck are we talking about here? So... Oh, no, I know, you think I'll just, well, be prepared for what you're talking about. Garo, do you have a magical do-it-yourself solution for mirroring OSTrees? Ask me next week. I mean, come on. I'll just throw in some more context here to really make it difficult to follow. So Fedora IoT and Fedora CoreOS are related, and we wanna move them closer together.
Right now, there are some subtle differences though, which I'm not sure how they would affect using Fedora IoT as an OKD base, for example, because in Fedora IoT we run the Ignition stage in the real root on the first boot, while in Fedora CoreOS and Red Hat CoreOS we actually run it in the initramfs. So we wanna move all of that closer together and make it a coherent story, also with Silverblue, but that is unfortunately a little bit further out still as well, because of priorities. But yeah, I do expect that it'll be easy in the future to sort of interchangeably use Fedora IoT and FCOS, and maybe Fedora IoT will kind of become the ARM spin of FCOS — even though I think Peter Robinson won't like that, and we will have to change things in FCOS. So yeah, and there will be some changes in the Fedora CoreOS world as well to sort of align that in the future, but that's nothing we can do today or tomorrow; it's a long-term thing. And I don't wanna promise too much here, but it's definitely on our radar. It's just not something that's a super high priority right now. So I'm not sure how easily it can be done to sort of switch out FCOS with Fedora IoT. If we have, or once we have, the FCOS ARM builds, it will be much easier, of course, to just use FCOS for that platform instead of using IoT. I think eventually it's a goal for us to not have separate ARM distributions, but just have one — a Fedora IoT that is also Fedora CoreOS — but yeah, we're not there yet. So, just throwing that in there to make confusion complete.

Yeah, so I think, well, on the 13th we'll have another AMA session with the FCOS folks, and Dusty threw his name in; if you wanna reach out directly to him, he's the community manager for FCOS. So it might be a good connection for you, Maya. I'm still, like, gonna keep going back to, circling the drain on, Dusty's suggestion about creating some documentation around what is a minimal viable OKD deployment, or what things can you remove. One, because — I'm always gonna put my cards on the table — it gets me closer to a K3s, or is it K3S, K38 or whatever, K3s, competing thing maybe. But it also starts to inform us, and maybe what we can ask of Maya too is to look at that and see if we've dropped anything she might need for this kind of IoT deployment, or whether there are even deeper cuts we could make. I think that once we get to GA — which I know is not until the 13th, the 15th, the 20th, whenever. Somebody's saying dates now. You missed it. You missed it. Christian said a date. I'm like, nah, you're not talking dates. Christian said July 13th. Oh my. He did not, he said the 15th. And then I said we'd do another OKD 4 GA AMA on the 20th of July, because, I bet, things will slide again. So I'll pencil us in, and Cheryl, I'll invite you as well, and everybody here, to come and join me in that. Amy makes an excellent point though: Christian never specified what year that was. Right, it could be the year 3000. That's true. Exactly. All we are is dust in the wind. Best wait for it. Yeah, just wait for it. It's coming, right. So anyways, we might get a t-shirt that says achievement unlocked, I suppose. I think at this point it's probably well-deserved, one of those. If I can finally figure out how to order t-shirts. So that's like just asking other folks for advice. Well, back to your original point, a minimally viable OKD would also make it more approachable for folks that are doing like CodeReady Containers, right?
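On the "what can you strip out of OKD" thread that Vadim and Dusty raised: one knob that already exists is the ClusterVersion spec.overrides list, which tells the cluster-version operator to stop reconciling a component so it can be scaled down or removed on a resource-constrained cluster. Below is a minimal sketch; the component picked here (the monitoring operator's deployment) is only an example, and overriding payload components is known to leave a cluster in an unsupported configuration, so this is not the promised lightweight guide, just one of the mechanisms such a guide could describe.

    # Hedged sketch: mark a payload component unmanaged via ClusterVersion
    # spec.overrides so it can be scaled down on a resource-constrained cluster.
    # The chosen component (cluster-monitoring-operator) is illustrative only.
    from kubernetes import client, config

    config.load_kube_config()

    patch = {
        "spec": {
            "overrides": [
                {
                    "kind": "Deployment",
                    "group": "apps",
                    "namespace": "openshift-monitoring",
                    "name": "cluster-monitoring-operator",
                    "unmanaged": True,
                }
            ]
        }
    }

    # The ClusterVersion object is cluster-scoped and is named "version".
    client.CustomObjectsApi().patch_cluster_custom_object(
        group="config.openshift.io",
        version="v1",
        plural="clusterversions",
        name="version",
        body=patch,
    )

After an override like this, the cluster-version operator leaves the component alone, and an admin can scale the deployment to zero or replace it with something lighter, which is roughly the "single Prometheus per cluster, no telemetry, no OperatorHub" direction Vadim sketched.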
If we could build a CodeReady Containers OKD version that was even more compact, so that you didn't have to have, you know, an Alienware workstation to run it — that would be nice. Yeah, I mean, there's also a real good, you know, community slash outreach opportunity here, which is very close to what Maya's doing, which is: there's a whole lot of very cost-friendly, fun hobbyist-type boards, like the Raspberry Pi or all of the stuff from the Pine64 folks. You know, if we could effectively deliver something ARM64-based that could run on something with four gigabytes of RAM, there's an opportunity there to bring in a lot of people that might not otherwise have been able to try out OKD.

Yeah, I kind of like the way we're talking about this now. And when Diane was first mentioning, you know, like, what can we cut away — my first thought was almost, can we invert this? And I feel like this is something that's missing in OpenShift Container Platform as well. Can we show documentation or an architecture that says, these are the core components you need to make it work, and this is how everything fits together as you build it? Because even looking at OpenShift Container Platform, it's really difficult to figure out, like, how do these pieces fit together, what is this operator doing there. So I think starting to build that map, so that someone could build the core pieces and then start to figure out how do I plug this in and how do I plug that in, to me would be really valuable. Yeah, it's unfortunate that we don't know how OpenShift is actually put together. We're getting... I'm in the middle of this and I barely understand how it's put together. What are you talking about? Yeah, no, that's the putting-it-together part.

And we have a few constraints that will probably crop up in sort of more IoT or distributed applications. We need to speak to the underlying hardware. You know, we need to be able to talk to the CSI camera interfaces and be able to put stuff out on the HDMI output. And with the TPU module or an FPGA, those are connected via a PCIe channel. So we need to be able to talk to the underlying hardware and have support. I know that NVIDIA Docker's been around for ages, so you can run Docker apps on NVIDIA GPUs. So it's not new, but it is something that needs to be built up. And, you know, the ability to run container apps, coordinate them, update them, add and delete them, and have them talk to native hardware — that's my starting point. And then something, and then a really nice web app for the customer to say, right, I want this, I want this, I want this; they'll compile it down, deploy it onto a device, and then, you know, watch all the statuses of all those devices. And for the accountants, make sure that all the billing is done appropriately.

Christian just popped an operator into the chat. That's going to be our answer to everything: there's an operator for that. Is SRO going mainstream, though? Because I had heard that was kind of like an experiment. So is that actually going to graduate, or... What? What? Who wrote it? I thought that was... I have no idea, actually. It's called the Special Resource Operator. It's a way to manage things like kernel modules and stuff like that that you might need on the host. It's been developed in collaboration with NVIDIA, I think. That seems like the scary part.
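As an aside to the hardware-access requirements Maya lists above (camera, HDMI output, PCIe-attached TPU or FPGA): here is a hedged sketch of how a containerized workload can reach a host device today, using a hostPath mount for the camera node and an assumed device-plugin resource for the accelerator. The image, namespace, and the example.com/edgetpu resource name are hypothetical, and on OpenShift/OKD the privileged hostPath access would additionally need an SCC that permits it; the Special Resource Operator discussion that follows is about managing exactly this kind of host-level enablement.

    # Hedged sketch: a pod that mounts a host camera device and requests one
    # accelerator from a device plugin. Image, namespace, and the
    # "example.com/edgetpu" resource name are placeholders, not real values.
    from kubernetes import client, config

    config.load_kube_config()

    pod = client.V1Pod(
        metadata=client.V1ObjectMeta(name="camera-detector", namespace="signage"),
        spec=client.V1PodSpec(
            containers=[
                client.V1Container(
                    name="detector",
                    image="quay.io/example/age-detector:latest",  # placeholder image
                    resources=client.V1ResourceRequirements(
                        limits={"example.com/edgetpu": "1"}  # assumed device-plugin resource
                    ),
                    volume_mounts=[
                        client.V1VolumeMount(name="video0", mount_path="/dev/video0")
                    ],
                    # Privileged so the container can open the character device.
                    security_context=client.V1SecurityContext(privileged=True),
                )
            ],
            volumes=[
                client.V1Volume(
                    name="video0",
                    host_path=client.V1HostPathVolumeSource(
                        path="/dev/video0", type="CharDevice"
                    ),
                )
            ],
        ),
    )

    client.CoreV1Api().create_namespaced_pod(namespace="signage", body=pod)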
Yeah, my understanding, though, Dusty, was that Zvonko had originally put together the SRO and that we were going to eventually transition over to NVIDIA's operator. Are we going back to SRO now? I think SRO is going to be more generic, so, like, technically anybody could take it and make an operator based on this and manage things like kernel modules. There's an OpenShift enhancement for it; let me see if I can grab a link. My cache is probably just stale at this point as well. There was a time when SRO was just looking at NVIDIA stuff, but now it's like, yeah, it identifies any special hardware and then runs the operator. Yeah. So, honestly, that's the freakiest thing I've seen, and I don't know how comfortable I am with the fact that there's an operator that just, you know, kind of messes around with kernel modules and does weird things with hardware initialization. It's already bad enough when you're working with the kernel directly. Like, I don't know if you want to add operators to this. It's freaky. It's freaky. But this is how we're enabling, like, GPU workloads and NVIDIA, other types of... NVIDIA. That would be so nice if they were just open source. Yeah. Yeah. In your dreams, mine too.

The other news today that I was going to share with the group was that the Operator Framework finally got the number of votes, and as of this morning it is officially going to be an incubated project in CNCF. So we got one last vote in, and I think it was announced on the TOC mailing list. It hasn't been publicly announced anywhere, but as you know. Is that a good thing? That is a really effing good thing. We have been trying. Why? Why? Because, one, it's a pattern for operators that is becoming sort of a standard, and the OLM and the SDKs are in wide use. And we need — and this is me with my Red Hat on — we need to get more external eyeballs on that workload so that it's not, you know, so that it's not a Red Hat-only effort to maintain and resource it. And people have been asking us to do this for a long time. There were a lot of roadblocks in the way, but with a lot of patience, we've been at this effort to get it incubated for — what is it, June? Probably last October, I think, is the first time I touched the CNCF TOC and pushed the request out to be incubated. But it's been a long haul. And it's good. It is definitely a good thing for operators, period. It's a pretty... This also means that that transitions to the CNCF's governance model and contribution model, with the CLAs and all that crap. Well, yeah, I think you exchange that for the number of more eyeballs and sunlight on what we're doing. And yeah, right now it's very Red Hat-heavy, shall we say. That'll change, of course, now that it's in CNCF, but you certainly don't want the extra paperwork involved. Yeah, but you wouldn't want it to become like Apache Cassandra and DataStax, right? You know, there was an opportunity for Cassandra to be a much bigger ecosystem than it ended up being, because it really was controlled by a single entity. Yeah, I'm not saying — and that sort of proves that moving it to the CNCF doesn't necessarily imply that that's what'll happen, right? True. You just literally gave me the counterpoint. It's like, just moving it from DataStax to the ASF didn't fix Cassandra; they used the ASF policies to control it even further.
Now, I expect that Red Hat is not so stupid as DataStax and will not make the same mistake, but just because the paperwork says it belongs to somebody else doesn't mean it actually does. And sometimes the paperwork is also enough to make people a little more hesitant than they were before. So those are all things that you have to be careful about. Like, I've seen a lot of recent TOC submissions to the CNCF — it's not like I don't sit there and watch all the things happening there; you kind of have to. But one of the bigger problems is that if a project transitions to CNCF, things have to be reverse validated for all the contributions. And that makes things very complicated for projects where people don't want to sign legal agreements, or can't. And so that's ugly. I totally understand that. I mean, it's a good thing in general for the ecosystem, but paperwork sucks, and adding more of it does not necessarily make it better. But proprietary appearances, or us being control freaks about something — or being labeled that — about Operator Framework have, I think, hindered even further adoption. So, you know, what can I say — I'm just happy that, at like a KubeCon and other things, we can have an OperatorCon, you know, event, set up like they have PromCon and things like that, so we can start building a bigger, more open community around that. The CLAs and CAs and all those things are, you know, part of the CNCF governance; I can't do much about that. So, on that cheerful note. And thank you for making a downer out of a good thing there, Neil — I'm just going to say that out loud. No, no, no, it's a good thing. It's a really good thing, but it just means that the contribution model, you know, revolves around going through the CNCF model rather than the normal, more free-spirited Red Hat model of doing things.

Okay. So before we go, because we've got four more minutes: Joseph, you have an issue that you were talking about — I'm trying to track it down here — that you're going to pop in there. Do you want to just mention it so you get on the record with the recording, which I'll post, what it was? I'm going back in here. I'm just writing up a problem with mirroring OperatorHub, but I will open an issue for that. I'm not sure if it's a bug or just a usage mistake on my side; I will clarify that offline. Okay, cool.

All right. Well, thank you, everybody. And thank you, Maya, for coming. What I'd like to do is put in the issue tracker the idea of creating documentation on what an MVP of OKD would look like. I think it's a good conversation to have. I'll bring it up, I'll mention it on the FCOS AMA, so that people outside of the universe that we live in — yeah, thanks, Christian — will get word of it, and we'll see what we can do to move this forward. Because, you know, we always need a challenge, and GA is next, and theoretically everything is just automated after this: the builds just happen, the feedback comes back, it's all wonderful. So we need something new to do in the OKD working group — not that there isn't more work. So anyways, thank you. Thank you, everybody, for joining us today. I'm going to let you all go before my network goes down, because it has been wonky today. So thanks again, and we'll talk to you all soon. And Maya, make sure you look up Dusty on Freenode IRC. Good, good connections made. Perfect. Take care, guys. Thank you. Bye bye.