All right, so again, welcome to this joint meeting of the Fedora CoreOS and Podman teams. First things first: of course, this is recorded. We'll try to use the raise-hand feature from Google Meet, so please use that to speak, and we'll keep the conversation moving. Remember that this meeting should be a place for high-bandwidth discussion; we are not planning on taking final decisions here. The Fedora CoreOS decision process usually takes place during Fedora CoreOS meetings. So if there are things that we cannot discuss here, or if we don't have time or anything, feel free to file them as tickets in the Fedora CoreOS tracker. As we have a lot of topics on the agenda, we'll try to keep each one to a 10-to-15-minute slot, and if we don't cover some, maybe we'll do this again; we'll see. That should cover most of it. Thanks for joining; I'll let Dan speak first. Okay, so the Podman team has been working a lot on a solution where what we've really been concentrating on is Podman on Mac. We've basically got most of the features that Docker has at this point, and the last big advantage Docker has over Podman is their Mac support. We've understood that for a while, and we've had Podman for Mac for a while. But when you use Podman on Mac, you have to have a Linux box somewhere, similar to what Docker does: a Linux box that you connect to. So one of the goals was to basically allow a user to get on a Mac and be able to execute Podman commands. You come up and say, you need a Linux box, and now what does the user do? Our solution to that has been basically looking around for a standard VM to pull on behalf of the user. We've gone back and forth: we looked at CRC, some community members have done things like podman-machine built on top of minimal Linux distributions, and different solutions have come up like that.
But what we really want to do, because we're basically part of the Fedora project, is use the container platform for Fedora, which is Fedora CoreOS. So we've actually wrapped this in Podman. Podman now has a command called podman machine: you do a podman machine create and it goes out, grabs the Fedora CoreOS image, brings it down to the Mac, and starts running it. We also want podman machine to work on top of a Linux box, and maybe eventually on top of a Windows box as well. So the basic idea is to pull the Fedora CoreOS image and then, for those that don't know, Podman has a fairly new thing we call APIv2, which is a remote API that allows us to run containers remotely; it's a REST API. So Podman on the Mac would be communicating with Podman inside of Fedora CoreOS to actually launch all of the containers. As we've worked with Fedora CoreOS, we've had some decent problems with it. Probably the biggest one right now is that Fedora CoreOS does not default to cgroups v2. We think that's a colossal problem at this point; I desperately want cgroups v1 to die as fast as RHEL 8 goes away. So that would be our issue number one: getting Fedora CoreOS to switch to cgroups v2 by default. The second issue is that we want to support running both on top of x86 and on the new Macs that run on ARM. There is a Fedora CoreOS for ARM, but right now it's not compressed. So we want to know: is there a reason that version of Fedora CoreOS is not compressed by default? And the third issue is that we want Fedora CoreOS to go on a diet. We think at this point that having Fedora CoreOS ship with both Podman and Moby Engine is really duplicating code, and just removing Moby Engine from Fedora CoreOS removes about 200 megabytes of storage. We believe that Podman can implement just about everything that Docker can do on a platform.
And if you install podman-docker, which just sets Podman up in Docker mode, it's able to support the remote API, so if anybody wants to run Docker commands, it would work. But the problem right now is that Fedora CoreOS is, I think, nearly two gigabytes in size. So when we're pulling it down, the initial user experience on the Mac is not great if it takes, you know, minutes to pull down the image. So, Ashley, actually: Brent Baude and Ashley are working on this, and Brent wasn't able to attend, but anything else? I think that basically covers what Brent said. But yeah. So I guess at this point we have a discussion, unless everybody just says "what you want is great and we'll just do it." That is really what I want. Yeah. Yeah, you brought up a lot of points. I think one thing around messaging: cgroups v1 is, you know, just the default. And certainly for this use case it would be pretty easy; you can create an Ignition config today that changes that default. That would mean an additional reboot the first time the machine is started, which I can't imagine would be too bad of a hit; it's a little bit of latency. But, you know, if we want to mitigate that... Fedora has defaulted to cgroups v2 since Fedora 31, which is a year and something ago, and Fedora 31 is about to go away. Fedora CoreOS is supposed to be, you know, the future of containers, so why is it stuck here? This one, forgetting about the Podman issue, just drives me crazy: that you guys are not on cgroups v2 at this point. Right. Well, right: all your issues are interconnected, and the reason for that default is that we ship Moby, which doesn't support this, right? Yeah. And by the way, Docker upstream does support it, so I don't know why Moby doesn't. So. All right, Benjamin, go ahead.
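The "Ignition config that changes that default" mentioned above could be sketched roughly like the following Butane config. This is purely illustrative: the unit name is invented, and it assumes FCOS at the time pinned cgroups v1 via a `systemd.unified_cgroup_hierarchy=0` kernel argument that can be swapped with `rpm-ostree kargs`, at the cost of the one extra first-boot reboot discussed.

```yaml
# Hypothetical Butane sketch: flip a fresh FCOS node to cgroups v2 on
# first boot by rewriting the kernel argument, then rebooting once.
variant: fcos
version: 1.3.0
systemd:
  units:
    - name: switch-to-cgroups-v2.service
      enabled: true
      contents: |
        [Unit]
        Description=Switch to cgroups v2 and reboot once
        # Skip once the karg is already set, so we don't reboot-loop.
        ConditionKernelCommandLine=!systemd.unified_cgroup_hierarchy=1

        [Service]
        Type=oneshot
        ExecStart=/usr/bin/rpm-ostree kargs \
          --delete=systemd.unified_cgroup_hierarchy=0 \
          --append=systemd.unified_cgroup_hierarchy=1
        ExecStart=/usr/bin/systemctl reboot

        [Install]
        WantedBy=multi-user.target
```

The kernel-argument support for Ignition discussed below would make this first-boot reboot nearly invisible, since it would happen from inside the initrd instead.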
So on the cgroups v1 issue, I'm not up to date on the current state. I know that there's a plan to switch to v2, and maybe one of the other FCOS folks here can speak to that. There are a couple of things. The reason we were on cgroups v1 is that Fedora switched over to v2 intentionally, if you read the change proposal, before everything was ready. And we have users to support, right? So we wanted to make sure to hold back on v1 until all the software that cared was updated. That's why we're there. So, as I said, someone else can speak to the timing, but one other option is that we are working on adding kernel argument support to Ignition. If that happens to land before the cgroups v2 change lands, you could use it when provisioning your Podman Mac VM to switch FCOS over to v2, even if that's not the default. So without a reboot; that would be great. But I still, forgetting about the Podman issue, what project doesn't support cgroups v2? Kubernetes supports it at this point. A properly maintained version of Docker CE... Yes, yes, Dusty. Yeah, I think what we were waiting on was essentially Kubernetes to have support, which it does now, and then also Moby Engine to have support, which it does now in Fedora 34. So I think we were hoping to switch over in Fedora 34, but I've also been away for over two months and I'm not sure of the current status on that. But yeah, we were hoping to move to it as soon as we could. So I think we're approaching your goal on that one. Steven, go ahead. I was gonna say, on the Ignition kargs one, there technically is a reboot, but it happens so early in the initrd that it's effectively not a reboot. It would happen right after Ignition fetches its config and applies the kargs, and then it would immediately reboot. So you never leave the initrd before the reboot happens; you'd hardly notice, is the answer on that aspect.
I think the cgroups v2 switch is going to happen for Fedora 34 unless we find any major issue, and Fedora 34 is coming in a really short time frame, so that should be fixed really, really soon. So I guess I'll jump in real quick: I know that you mentioned size as a reason to potentially drop Moby. Is there a world where we should basically drop all container runtimes and just have an extension mechanism built in that pulls in Docker (or in this case, I guess, Moby, since we can't use the official one) or Podman or CRI-O at a specific version? And we trim the image by that much more, like three times. Sorry, Benjamin. I think before we switch to that: are we happy with cgroups v2? I think the final word was that it's going to happen in the F34 timeframe, with the switch to F34. Okay, so the switch to cgroups v2 for Fedora CoreOS will ship simultaneously with Fedora 34? No, in a close time frame. We don't rebase on the same day, but we will test the Fedora 34 content and we'll try to rebase; we have a stream that rebases on it early, and we'll do the switch hopefully shortly after the release. We're actually rebasing the next stream right now. So in the next set of releases, you should be able to use the next stream, which, well, I guess doesn't automatically give us cgroups v2, but we could make that happen so that it lands in the next stream first. The summary there, Dan, is basically: we don't switch our stable stream on the exact day that Fedora 34 is released, but there is another, less quote-unquote stable stream that you can use that will have the Fedora 34 content, and if we're switching to cgroups v2, that will have cgroups v2 as the default. So that would be available on Fedora 34 release day. Does that make sense? Brent has joined us. Brent, where are you grabbing the Fedora CoreOS image from?
For Intel, I'm grabbing it directly from the download page, but I am using the next stream; and for aarch64, I'm grabbing it from, I think, the only place it's available, which is a side location maintained by probably one of the folks on this call. Those are the two locations. All right. And there, I think it's already the next stream; like, I think the aarch64 build is already following the next stream. So this next topic is aarch64 support, and as you've discovered, we don't have full aarch64 support right now. I think we have Jakub here, who can maybe give us an update on the status of that work for Fedora CoreOS. From my side, I'm still in the same spot: I'm building it locally on hardware that I have available and pushing it to fedorapeople. I got a bit distracted by other issues in the past weeks, and I will also be on PTO next week, so I will not be looking into it directly; don't expect much progress next week, unfortunately. But I plan to continue working at it, as I'm currently onboarding onto the CoreOS infra and trying to look into all the work that would need to be done there, hopefully making it official if others don't object. Yep. And I'll be back next week as well, so me and Jakub can kind of work on it together. As a consumer of it, I'll just let you guys know that I've had no problems with the images, other than the downloads being uncompressed and the download metadata not being present. I'm working on the compression. Brent, before you got here, Steven pointed out that you can actually reboot into cgroups v2 fairly seamlessly, hidden from the user, via the Ignition config, so we might want to investigate further. I did hear that, and I heard that it would be soon. Okay, very well. And the Fedora 34 timeframe lines up with probably when we'd really be pushing podman machine out. So. Correct. And frankly, I'm fine with the arch image being where it is.
I mean, Dan, I don't know how you feel, but if for two months the arch image is in a side location, as long as it's available, I can't say that's necessarily a bad thing. The only problem there is that we have to have that coded into Podman, right? If that location goes away, then we're in trouble, and I don't know if they want to ship in that location forever. Forever; fair enough. There's probably also the other issue: it's not a CDN, right? So people around the world may have wildly varying download experiences. But for Podman on Mac, I think we're gonna see this image become much more popular as time goes by, because obviously Mac is switching its default to ARM. Sure. So in the short term, getting those images compressed would help unblock you. Jakub, that should just be a fairly small change to the pipeline; is that something that we could do? And then the other thing I wanted to check, Brent: when you say you're getting the x86 image from the download page, is Podman actually touching stream metadata and going through that whole process? Correct. For me, I was blocked on the infra issues that were mostly blocking me from doing a regular push. I can most probably push everything except aarch64; for whatever reason I have issues with the builder at the moment, an issue with the storage. I'm working on it; hopefully it will be fixed. I hope it will be done by Friday, but unfortunately there are more issues than I had expected. Okay, Dalton. I was wondering if this covers the AWS AMIs. I've been publishing AMIs publicly for a couple months now, but I would love to stop doing that, since I'm not in any way affiliated. Dalton, are you referring to aarch64 AWS AMIs? Okay. Yeah, I've been publishing those and using them, but would love a more official version; is that the same pipeline? Yeah, I think once we had official aarch64 images, we'd probably want to create appropriate AMIs for them as well.
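Since "touching stream metadata" comes up above: the stream documents that FCOS publishes nest artifacts by architecture, platform, and format. A sketch of the lookup a client like podman machine might do is below. The field layout follows the published stream-metadata shape, but the embedded document here is abridged and the values are made up for illustration:

```python
import json

# Abridged stand-in for a real stream document (the real ones are served
# at builds.coreos.fedoraproject.org/streams/<stream>.json); the URL and
# sha256 below are fake.
STREAM = json.loads("""
{
  "stream": "next",
  "architectures": {
    "x86_64": {
      "artifacts": {
        "qemu": {
          "formats": {
            "qcow2.xz": {
              "disk": {
                "location": "https://example.com/fcos-x86_64.qcow2.xz",
                "sha256": "abc123"
              }
            }
          }
        }
      }
    }
  }
}
""")

def disk_artifact(stream, arch="x86_64", platform="qemu", fmt="qcow2.xz"):
    """Walk the stream document down to one disk artifact's URL and digest."""
    entry = stream["architectures"][arch]["artifacts"][platform]["formats"][fmt]
    return entry["disk"]["location"], entry["disk"]["sha256"]
```

The point of the discussion is that once the aarch64 builds are official, a client only has to change `arch="aarch64"` rather than hard-coding a side location.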
Dalton? Yeah, in the short term (I'm talking to Jakub) you could push the aarch64 bits to the same bucket that we use. We have like a slash-develop top directory. So, you know, it can be semi-stable, and at least it's on a CDN, so Podman could hook into that. Sure, but I don't think I have access yet; I have not yet requested access for the Fedora side of the infrastructure. Yeah, we can do that offline. Okay, thanks. Okay, Dalton again; you still have your hand up. Yeah. Sorry. Okay, no worries. All right, so aarch64 is in progress. Do we have anything else to talk about there? Colin, yeah. I was actually gonna move on to a different topic, but this is kind of related, so okay, real quick. I know we talked about this before, but are in-place updates in scope for this? Like, is the podman tool today injecting Ignition to disable Zincati, or are we planning that basically the user experience would be, you know, they use podman machine and spin up the machine, but then instead of having a command to re-download the image, they instead just do the in-place OSTree updates and, you know, pull container images in place? Or are you thinking it'd be more like the latter? Like, would it be stateless, or kind of a hybrid? Last I looked at the Docker one, they explicitly created a separate writable data store but then just blew away the OS image. Is there a plan for which of those approaches? My plan was to talk to Colin about it. Exactly; we want you to decide that, not us. I mean, I think we give you some basic ideas and then take your advice. Yeah, yeah, I think. One of our goals is to show the power of Fedora CoreOS. So it's not, yeah. We don't want to over-engineer what podman does, right? So if CoreOS can manage its own lifecycle, that would be great. Right.
The reason I bring this up is that it's kind of related to your topics around shrinking the image and other stuff. To me, one of the most valuable things about Fedora CoreOS is that stream metadata JSON. It says: we have tested this OS, we booted it in a bunch of different places, we've simulated installing it on bare metal, we've tested this whole thing together, right? And we want that to apply to a variety of use cases. Whereas today, you could just take our configs, build your own FCOS, and compose whatever you want, right? But then you own the build, testing, and delivery infrastructure. Right. And if you do that, then you also need to think about how you handle updates. So a valuable point of Fedora CoreOS, definitely in a scenario like this as well as in a production cloud scenario, is that you can do in-place updates pretty cheaply and keep your existing data. But yeah, it's an important aspect of this, right? Because, I mean, definitely one thing you'll notice if you use Fedora CoreOS for a little bit is that right now, sometimes it'll just reboot while you're, like, SSHed in or something. So you have to think about that. To me, this is good, but, you know, do you want the podman command to at least have basic controls on Zincati or something like that, maybe? Because it can be a non-trivial amount of data to download, and some users may be on metered connections. So whatever you do, basically, I would file an issue about this and just come up with a stub plan, which could be "we'll do in-place updates," right? Work it out. Yeah, I would like, I mean, in my opinion, we deal with that when we have people complaining about it. But yes, your earlier comment about us managing an operating system: that's a definite no. You know, we're not going to be in the operating system...
The Podman team is not going to be in the operating system business, so. Yeah, I mean, I feel like this is a little bit of a special corner use case of Fedora CoreOS, in that you're not necessarily running a server application that needs to be up all the time with people connecting to it. Basically you're running a development workflow for people, right? And while you're up and developing and interacting with it, you don't want it to go down. So the fact that we have automated updates that happen, which you can schedule or whatnot, seems like it's not necessarily something you want here: you don't want somebody to be in the middle of development and have their machine, their podman backend machine, go down. So it seems like you would want to treat it slightly differently, and you could even take an approach like, I don't know what you'd call it at the top level, but you could say podman os update, and it actually talks to the backend machine and runs an update; or you could say podman os replace, and it basically reaches out, sees what the latest version is, grabs the brand-new image, and gives you a brand-new VM rather than updating in place. Yes, I don't like your suggestion there, because you're figuring a human being knows what the hell he's doing. And I don't believe, you know, anybody's gonna realize that it needs to be updated, right? I don't want the thing to be three years old and they're still using it. Right. So, in a perfect world, I would have the thing update itself but not reboot, and then when podman shuts it down, you know, anytime podman shuts it down, which the users are gonna do, right? They're gonna have a podman down command to take the thing down, and when it boots back up, it would boot up with the new version.
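The "update itself but don't reboot out from under the user" idea above could start from something as small as a Zincati drop-in shipped via Ignition. This is a hedged sketch: the file name is illustrative, though `updates.enabled` is Zincati's documented master switch; with it off, the podman front end would be the thing that decides when updates and reboots happen.

```toml
# /etc/zincati/config.d/90-disable-auto-updates.toml (illustrative path)
# Stop Zincati from applying updates and rebooting on its own schedule;
# a front end like podman machine would drive updates at shutdown instead.
[updates]
enabled = false
```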
I mean, to me that would be... that way the user is out of the update business, but it's not randomly shutting down on them. Go ahead, Jeff. So, as the user who's probably gonna complain to both teams about this down the road, I'd like to plant the flag of: it would be really nice to either have Zincati send a message to podman, or have podman pick up a status from Zincati, that says "hey, there is a new update available," and show that as a line when you run a podman command, right? I don't know that that exists today, and I don't want to presuppose that this has all been thought through, I kind of doubt it, but my use case is going to be that once every six months I'm gonna try and spin up some containers on my FCOS VM on my Mac, and I'm going to hope that it succeeds. But if it doesn't, I would like it to tell me "hey, your machine could use updating," and a podman machine update is something I'm perfectly willing to run if something tells me to, and I would prefer to find out from podman in that case. So when we get an issue together and start iterating on it, I'd love to provide some feedback. Yep, thanks. Yeah, and I think Stephen even suggested we could propose tying a particular FCOS version to the version of podman that's running, right? So on the host, on the Mac, you know that this version of podman was tested with FCOS version XYZ, and if it doesn't match, you maybe give the user a message and say "hey, this version was tested with that; you might want to consider updating," or something along those lines. But like Jeff said, there's a lot of options here; we should definitely talk about that user experience when we get there. I think over time we're gonna figure those out. And the podman team is gonna have to be real good about version control, so that older versions of podman can work with newer versions and vice versa. All right, do we have anything else on the update front? Update topic. I don't see any hands raised.
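The version-pairing idea Stephen floated above could be as simple as a lookup table baked into the client. Everything in this sketch is hypothetical: the table, the version strings, and the `podman machine update` suggestion text are all invented for illustration.

```python
from typing import Optional

# Hypothetical pairing of podman releases with the FCOS build each was
# tested against; every version string here is made up.
TESTED_FCOS = {
    "3.1": "33.20210301.2.0",
    "3.2": "34.20210501.2.0",
}

def update_hint(podman_version: str, fcos_version: str) -> Optional[str]:
    """Return a warning when the backend VM drifts from the FCOS build
    this podman release was tested with, or None when they match."""
    expected = TESTED_FCOS.get(podman_version)
    if expected is None:
        return f"podman {podman_version} has no tested FCOS pairing on record"
    if fcos_version != expected:
        return (f"this podman was tested with FCOS {expected}; "
                f"consider running 'podman machine update'")
    return None
```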
So should we move to the topic of reducing the size of Fedora CoreOS? Benjamin. Yes, I just want to frame this a bit so that everyone's on the same page about what our overall goals are for FCOS. We're concerned about the OKD use case. We're concerned about the Kubernetes use case for people who want to run other distros of Kubernetes. We're concerned about people who don't want to run Kubernetes. We want to support people who want to run podman. We want to support people who want to run Docker. We originally had a model where everything would sort of be unified with the OS, and unfortunately that didn't work for CRI-O anyway. We've been adding some infrastructure to allow pulling in additional packages in fairly restricted cases, and I think that's smoother than it used to be, but it's not clear to me, anyway, that we should move further in that direction than we need to, because it's nice to have an out-of-the-box experience that works for as many container use cases as possible. We want people who want to run containers to use FCOS no matter what ecosystem they're accustomed to and how they want to do it. I think that's where the team's thinking has been up to this point. As far as size goes, therefore, I think, at least in the back of our minds, we've been thinking more in terms of reducing unnecessary dependencies being pulled in, rather than the size of the core components that people actually want to run if they're running FCOS. So that's my perspective, and if other folks on the team have a different view, please jump in. I would just say that would work if Golang wasn't the fattest thing on the planet, and the more Golang container things we shove in there, the fatter and fatter it gets. Brent did an experiment yesterday and removed just Docker, just Moby Engine and runc, and he eliminated about 20% of the size of Fedora CoreOS.
Not just Moby Engine itself: Moby brings in the Docker command, containerd, and runc, and each one of those is a massive Go program of at least 20 to 50 megabytes. Yeah. I mean, yeah, unfortunately it is definitely large. I think when you consider the original goal of Fedora CoreOS, which was to try to maintain some continuity for existing Container Linux users who had Docker slash Moby Engine and rkt... rkt was not the most commonly used, so we ended up just pulling forward the Docker piece of it, but that's a huge piece. And if we were to try to remove it now, or make it something that's just a secondary add-on, I don't think it'd be very good for those people who decided to take the journey with us and move over to Fedora CoreOS. A lot of people who do try Podman like it, which is good, but I don't think we should necessarily force them in that direction by removing Docker. Now, other options we could possibly have are, you know, actually making a specific podman machine version of FCOS that just removes some things. So it's basically FCOS, but without a few packages; it would be a separate thing to download, which is more overhead, right? But is it worth it to reduce the size so that you don't get those pieces you don't want? It's all a balancing act, but that's another option we have. You know, one example is that I use Silverblue, and, I don't know if this is still the case, I remove Firefox from the base layer because I want to run it as a Flatpak. So we could do something like that if we thought it was worth it to remove Moby Engine from a podman machine version of FCOS, and it would actually show that these packages have been removed when you do an rpm-ostree status or something like that.
But if they're concerned about the initial download size and the update download size, that sort of intentionally doesn't help today, because the idea is that you can still reset to that base image. I mean, the other thing: my arguments against Moby Engine are twofold. First, I think that Podman can provide the Docker CLI, the d-o-c-k-e-r executable, and it can provide an API to satisfy Compose and other Docker calls. Now, will it be 100% correct? Probably not, but, you know, we will be responsive to fixing those, as opposed to taking two years to get off of cgroups v1. The second thing that happened is that Kubernetes has dropped support for the dockershim, so that's changed the need for Kubernetes to have at least Docker as a container engine on the system. Now, maybe we can argue that containerd still has to be there, but I'm surprised that containerd doesn't have the same issues that CRI-O has with different versions of Kubernetes. And right now, how do I get the kubelet on top of Fedora CoreOS? So, just one thing to note here, and I'll leave it at that: we have existing users using Docker on Fedora CoreOS. So if we remove that from the image at some point, we need something to migrate them, and the current storage of all the containers and everything they have on there, and that's something big. I don't think we have that right now. That's one; go ahead. Yeah, I think maybe the other side of that coin is that it doesn't even seem plausible to remove the Docker CLI while the dockershim is in use, until 1.22. So there's at least a time requirement: the absolute earliest would be when 1.22 ships, or when 1.21 is no longer in use. There's substantial usage of the Docker CLI until viable alternative container runtimes are available. And I don't necessarily care whether things are bundled in or easily able to be layered in.
But it seems like the track record so far is that the layering approach is pretty hacky at the moment. So I would be super wary: if you're gonna replace a major component that is the reason people are using Fedora CoreOS, that has to be a really good experience before any change like that is made. Yeah, right. Yeah, and first of all, we would want to get all of our technical ducks in a row and make it a really good experience. And even if we did that, it's still a hot-button topic. Container wars are a thing, and trying to replace one with the other is a little bit of a lightning rod. So yeah, it gets hard. That's why I was suggesting, you know, maybe we make a separate deliverable specifically for podman, if that is like a deal-breaker, right? I don't think it's a deal-breaker. Again, the deal-breaker is that you guys aren't on cgroups v2 because of Moby Engine, and that drives me out of my frigging mind. So we can fix that. We can fix that. You can't fix his mind. Not at all. I just have a quick question, because I'm not sure in my own mind: how does Fedora relate to RHEL? I mean, RHEL's gone and dropped Docker, right? Are we trying to stay in lockstep between Fedora and RHEL as far as what's being delivered on both platforms, or not? I can speak to how Fedora CoreOS relates to RHEL CoreOS, which is a somewhat different question. RHEL CoreOS is completely tied to the needs of OpenShift. So if OpenShift needs some glue that only makes sense in an OpenShift context, we'll ship it in RHCOS. If OpenShift doesn't need a component like Moby Engine, we won't ship it in RHCOS. Whereas FCOS is trying to fit a broader set of use cases. OKD is one of them, and even there it's not a one-to-one mapping onto what OCP is doing with RHCOS, but there are others as well.
Another benefit for RHEL CoreOS versus Fedora CoreOS, even when running Kubernetes or OKD, is that a specific version of RHCOS is only for a specific version of OCP. So they know the kubelet version, the container runtime version for CRI-O; all of these pain points of having all this genericness to be able to run, like, four versions of Kubernetes, it can just ignore. It can say: this is it, you get this and only this, this is my only purpose. Right. So, bottom line: cgroups v2, to me, is the biggest thing. The other one is the ARM stuff. We can live with you keeping Moby Engine around, even if we think it's a waste of time. But the problem I think you guys are gonna have in the future is a huge upgrade problem. If you ever do want to drop Moby, you brought up a good point: you have this database sitting there, and you might be running Moby Engine for the next 20 years. Yeah, unfortunately. And the time to drop Moby Engine or Docker, if we were gonna do it, was when we created Fedora CoreOS. Right. But yeah, we decided, obviously, that we wanted to keep those users who would have chosen not to come with us if we had done that. But it might be interesting. I mean, we could do somewhat of a conversion with Podman from Docker storage, but we couldn't convert the containers, I don't believe. We could move the images over fairly easily, but we couldn't move the containers over. Go ahead. Brent, is there anything else we talked about that you wanted to cover? Before that, we have Benjamin, Colin, and Steven with hands raised. So Benjamin first. Yeah, just a couple of minor things.
There's been some discussion in terms of people upgrading from Container Linux, and that's true, but I think there's also the broader question of, if someone's looking for a container operating system and they're bought into the Docker ecosystem, whether we could sell them on trying FCOS and switching container runtimes at the same time. I don't know that we're that established yet in the container operating system space. So it's not clear to me that even for new users we could start phasing out Docker. The other thing, as a practical matter, is just that, empirically, we haven't really had the cycles, the time, to work on reducing the size of the OS. We're concerned about functionality; we're still working on some networking use cases; we're still working on improving the installation flow to some extent. So as an argument for prioritizing this, I think it would fall flat on that basis. Obviously size is a thing we want to get to, but having a solid distro for users has so far, empirically, been more of a priority. Yep. All right, Colin first, I think, and then Steven. Yeah, so I know there was a conversational thread at one point, like, why don't we bake in everything? And one pushback I have on that, and I'll put this in a ticket, is that our current container runtimes are definitely not the end of this space, right? There's Kata Containers; there are people out there who are like, okay, I want to fully isolate my containers. And maybe WASM makes a lot of progress, and there are people who want to do a lot of WASM stuff. And if we get to this world where it's like, okay, well, we need to bake in Kata and QEMU, and then WASM comes along and we need that on the host too... I feel like that's a long-term losing strategy.
So clearly, yeah, the reasons we have Docker have been described; removing Podman by default, removing all of them by default, seems like a step too far. I guess I find myself leaning a little bit more towards: let's polish a mechanism to extend, and make sure we can lifecycle CRI-O and all that stuff, and cover testing of those too, because a huge gap in our whole testing matrix is actually testing that extensions work. So that's kind of where my head's at. Steven. Yeah, I wanted to follow up on Dusty's comment about it potentially being too late to remove things from the initial image. I don't think so. I think a fair amount of it will still work on upgrades. So if you had an existing machine, it will keep working, and as long as we provide a solid enough deprecation window and good enough messaging, I think it's fair to cut things out of the base image, provided there's some not-terrible workaround to get yourself back to that state if you need it. And if we can build this UX around extensions, or just improve the package layering UX so it's not "write a systemd unit that runs some bash script and then reboots," I think at that point it's fair to put out the messaging: hey, in the next major version of Fedora, when the stable stream or the testing stream gets there, we're going to yank Moby, or we're going to yank package X; here's how you get it back. And then we provide documentation on that. I think that's sufficient, personally. Yeah, and especially in the case of upgrades: for example, if you're upgrading a system from one that had Moby Engine in it to one that didn't, and the system detected that, then it could convert it into an extension, right? And now it's just an extension that lives with that OS. So that user did not experience downtime or whatever; it was just converted into an extension and it just worked.
Yeah, it definitely needs some thought and some communication. And first of all, we have to decide if it's something that we even want to do, right? And then we can try to execute on it. All right, Christian. Yeah, hi everybody. Just because that came up, I want to... You're a bit loud, Christian. Oh, sorry. Yeah, just because it came up, I want to say that I'm strongly opposed to removing Podman from FCOS, because that would make installing OKD much more difficult, since we boot FCOS and then we use Podman to actually extract stuff. So we do need Podman in FCOS for OKD right now, and I'm opposed to removing that at least. I'm also in favor of removing Moby, and I think if we can find a way to convert that into an extension, that would be great. So Christian, in the proposed idea that I threw out in the chat about removing all three, essentially there would be a mechanism in Ignition to say "I want Podman," and it would just install it during Ignition time. So by the time you get to the real root, where the user would actually go and say "I want OKD," Podman exists at that point. It's just not shipped in there; it's pulled seamlessly. Okay, that would actually work for us then. Yeah, there are a lot of options here. We probably don't need to go down too far; I've got three or four that I could discuss right now, and we don't have time for that. Having that option also would solve the problem Colin talked about, where someone wants gVisor, or, you know, I would like to get rid of runc off the platform and just use crun, again because it's 20 megabytes and they basically do the same thing, and we default to crun now. And Docker runs fine with crun. But yeah, libkrun in the future; there are lots of new OCI runtimes coming up. All right, Jeff and then Peter.
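The Ignition-level "I want Podman" switch described above is hypothetical, but the shape it would replace exists today: a Butane config that layers the package via a one-shot systemd unit at first boot. A rough sketch, with the unit name made up for illustration:

```yaml
# Butane sketch (hypothetical unit name): layer Podman onto the host at
# first boot, since Ignition itself has no "install package" primitive.
variant: fcos
version: 1.3.0
systemd:
  units:
    - name: layer-podman.service
      enabled: true
      contents: |
        [Unit]
        Description=Layer Podman onto the host
        Wants=network-online.target
        After=network-online.target
        # Skip once the binary is already present
        ConditionPathExists=!/usr/bin/podman

        [Service]
        Type=oneshot
        ExecStart=/usr/bin/rpm-ostree install --reboot podman

        [Install]
        WantedBy=multi-user.target
```

A first-class Ignition mechanism would essentially fold this boilerplate into a single declarative field.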
All right, so it sounds like we've got a person in the community who would like to get this change plumbed through Fedora CoreOS, and they're approaching us and asking us for it. And I'm hearing a lot of ways that we potentially could do it, and reasons for doing it and not doing it. One of the real crisp takeaways I'd like to get from this meeting is: are we willing to, at some point in the future, actually do something like this, or should we just set a baseline of "we don't ever intend to do this"? Can we reach that level of agreement on this call? So I'll just speak here, because I said this at the beginning: the idea is just to jumpstart discussion here, and if we want to reach specific agreements or specific decisions on topics, that should happen in the tracker or in the Fedora CoreOS meeting. So feel free to file issues and we'll discuss there, and we'll convert the current notes into issues. Peter, go ahead. Hello. Yeah, this is kind of related to the topics before. I maintain CRI-O and package it. And I was just going to say that the option that's been floated, of dropping all or most of the container engines in favor of a good extensions mechanism, seems fairer, and also optically less dangerous, than just saying "okay, we only support Podman and CRI-O." But yeah, I just wanted to mention that. All right, I see Christian with your hand raised, yep. Yeah, thinking about the optics, I'm not as worried about that, because we've kind of already ripped the bandaid off with the Docker thing, I would say. But yeah, maybe some community members really want that and rely on it. I think as long as we can keep it working, even as an extension, that would be enough. I wouldn't see a problem with preferring Podman over Docker as a distribution. But that's just my opinion. Brent, was there anything else?
You didn't hear my intro, where I basically laid out the three key issues I knew about: cgroups v2, the size of the image, and the ARM image. Was there anything else that you wanted to talk about? I think everything looked good. There were a couple of surprises in how the Ignition file works, which we can debate whether that's a bug or working as designed, but we have a decent workaround. As Colin correctly pointed out when we were looking at it, if we were to auto-start the Podman socket for the users and for root, then we wouldn't have this weird Ignition file that has to go through and do all that. And I think that's a valid point. I don't know if Fedora would allow us to auto-start for just FCOS and not Fedora, but that's certainly the other thing. But again, we haven't really stressed it other than downloading, getting the metadata, parsing it out, checking the SHA against something local, and booting it. That's about as far as we are right now. And can we connect to the socket? Our socket. Colin's given me great links; we need a mechanism to say that we're up and running, and they do something similar for their testing, so he's given me a link on how to implement that. I think right now we're sitting reasonably pretty on that. This is as usual. A quick question: is there a link to the source code for your stuff? Yeah? It's been proposed against main, and there was some convoluted issue with the auto-completion stuff that I've got to go work on, but it should be merged in the next day or two. Colin, just look for the podman machine pull request; it's all in there. It's a fairly large pull request, so it's going to take a while with people having comments on it. Yeah, thank you. Okay, we have one last topic that we wanted to discuss, and I'll defer to Dalton for this one, on the container runtime, if you would like to take it. Yeah, sure. So I filed this, I think, two weeks ago or so.
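The socket auto-start idea above amounts to something like the following, sketched with a placeholder host name (`core@fcos-vm`); the commands are standard Podman 2.x+ remote-API plumbing, but whether FCOS would ship them pre-enabled is exactly the open question:

```shell
# On the FCOS guest: expose Podman's REST API over a systemd
# socket-activated unix socket (rootful here; use `systemctl --user`
# for the rootless per-user socket).
sudo systemctl enable --now podman.socket

# On the Mac client: register the remote connection over ssh and
# talk to the guest's Podman through it.
podman system connection add fcos \
    ssh://core@fcos-vm/run/podman/podman.sock
podman --remote info
```

If the socket units were enabled by default, the Ignition file podman machine generates wouldn't need to do any of this itself.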
So basically there's this timeline scenario where, obviously, the Docker shim is going away. I know this group is preaching all about using all these other tools, but the reality right now is that there are no materials on which route to actually go. So this issue was trying to highlight: what container runtime or runtimes are officially blessed? Because at this point, community people are just making it up. Do you sideload this thing? Do you configure this thing this other way? Do you sideload this other thing? It's kind of, I don't know; we're super eager on getting rid of Moby, it sounds like, but we're also sort of tossing our hands up in the air about how to get other container runtimes to work. I've vaguely poked at containerd and can maybe find time to also poke at CRI-O, but it doesn't feel like the best scenario for this to be done by random people poking at things in the community. So I was really hoping we could come to more answers about container runtimes and officially supporting and testing some of them. And I think that ties into some of the layering stuff, maybe, or some of the ship-by-default types of discussions as well. Dalton, you're talking about this in the context of Kubernetes, right? Yeah, the Kubernetes container runtime and the Docker shim deprecation window coming up. Well, from my perspective, we have a bunch of engineers who are paid to maintain two container runtimes, and they will contribute to Fedora and help fix issues that come up. And those are CRI-O and Podman. Community support for the others has to be done through the community. Okay, so with all those paid folks, we currently have a random blog post, actually Dusty's random comment, describing how to install it, which, thank you, very helpful. And containerd just happens to be there by chance. So that's kind of the current scenario for actually using those two.
Right, containerd is just sucked in because it's part of Moby Engine. Yeah, coincidental. Christian, go ahead. Yeah, I think this is more of a user experience problem, really, because we do have CRI-O and it is supported, but we can't easily install it on Fedora CoreOS yet, because it's only released as an RPM module, and rpm-ostree doesn't understand that metadata and can't pull it. You can actually download it and install it manually with rpm-ostree, but obviously that's not a great user experience. So maybe that is where we should improve the user experience: either make rpm-ostree understand modules, or release CRI-O as a standard RPM. Because we have the module, because we have different release streams of CRI-O, that may not be feasible, since you may want to be able to choose between versions. But yeah, if rpm-ostree understood the module metadata and were able to pull from those yum repositories, or module repositories, I think that would alleviate the issue, right? Then we could just have CRI-O as an extension, and most people should be happy with that, I'd say. Okay, Colin and then Peter, and we are almost at the top of the hour, so I'll keep the recording going for something like another 10 minutes and then we'll call it off. Colin. Yeah, I'd just say, Dalton, I think you're absolutely right to call this out as an enormous gap. And one thing I am hopeful we can do at some point is, I'd love to at least be able to have informing CI from Typhoon and other groups using Fedora CoreOS. Because, yeah, we definitely just have a general gap between FCOS and upstream Kubernetes in general, right? And I actually haven't even looked at what the Typhoon CI is, but I think solving this problem is part of solving that problem too, and it would really make a lot of sense to do that. Yeah, I'll leave it there. Go ahead, Peter. Thank you. Yeah, definitely, thank you for bringing this up.
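The manual workaround mentioned above could look roughly like this: resolve and download the RPMs with dnf somewhere it exists (a toolbox or any Fedora container, since FCOS itself doesn't ship dnf), then layer the local files with rpm-ostree, which can install local RPMs even though it isn't modularity-aware. The stream number is illustrative:

```shell
# In a Fedora container/toolbox: enable a CRI-O module stream and
# download the RPMs plus their dependencies (stream "1.20" is just
# an example).
dnf module enable -y cri-o:1.20
dnf download --resolve cri-o cri-tools

# On the FCOS host: layer the downloaded local RPMs directly and
# reboot into the new deployment.
sudo rpm-ostree install ./cri-o-*.rpm ./cri-tools-*.rpm
sudo systemctl reboot
```

This is the "not a great user experience" being described; module awareness in rpm-ostree would collapse it to a single install command.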
I wasn't really thinking about the interaction with Fedora CoreOS, so this is definitely a useful perspective. From my perspective, and from the perspective of the CRI-O team, I think we need a signal from the Fedora CoreOS community about what the eventual path of installation is going to look like. I'm happy to configure the CRI-O package in whatever way is needed, or morph what currently exists in such a way that it works for Fedora CoreOS, but I definitely need some signal on the sanctified path for getting a package that, definitionally, has three supported releases at any given time. So it can't be a sequential package; there have to be three versions that are downloadable at any given time. I'm also happy to work with my team on this, and we also have someone who works on the Kubernetes release team, so I'm happy to work with him on grouping together the kubelet and CRI-O and cri-tools, all of these things, so that there's a sanctified "let's make this Fedora CoreOS node Kubernetes-capable" path. I'd love to see that. But yeah, I need the direction before I can make any of those steps. All right, Daniel. Basically, I was following up on exactly what Peter just said: could we just put the latest supported Kubernetes and the latest supported CRI-O in the Fedora mainline, and any time you update, you get whatever it is; if that changes, it just gets updated. Why don't we have those in the mainline? One point that I will use as a counter to that is that the majority of consumers I've seen consuming CRI-O outside of OpenShift use the last supported Kubernetes release, not the most recently released one, because the project generally moves pretty quickly.
So it would be much simpler to just package the latest one, but my guess is that many users won't be pleased with that being the only sanctified path of installation. Yeah, as much as I'm not a fan of the modularity effort in Fedora, it's basically made for this. That's the exact problem it's trying to solve, the one we're having. So I think it makes sense to use modules for this. And yeah, we need to improve the UX there. And as Colin said, we need to have good CI around this and be clear about which versions of the module we support, and sanity-check that every time we do a release. So if there's agreement on the specific streams we want to support, we can hook that stuff up. The rpm-ostree issue, I think we could work around, or otherwise we'll just have to tackle it. But I don't think it's necessarily a hard blocker, because rpm-ostree can fetch things from the modular repos; it's just that, like you said, Christian, it's not modularity-aware, but it can still read the yum metadata. Yeah, so my ideal in this situation, and I agree, Jonathan, that the modularity UX is not great in a lot of ways, but that's why we adopted it despite all of the difficulties we have with it: it's literally for this purpose. So ideally, there would be first-class support for Fedora modules in rpm-ostree, and then we can work together on configuring the packages per Kubernetes release within that module. I think that is the most idiomatic path from my perspective. Christian, go ahead. Yeah, I agree with that, because if we were to pin one version and say "this is the version that FCOS ships," that might create problems downstream for Typhoon or OKD if those versions aren't the ones that are baked in.
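For contrast, on a traditional (non-ostree) Fedora host, modularity already delivers the three-streams model being discussed; the stream number below is illustrative:

```shell
# List the available CRI-O module streams (one per supported
# Kubernetes minor release).
dnf module list cri-o

# Opt into one stream and install from it; updates then track
# that stream rather than the latest release.
sudo dnf module enable cri-o:1.20
sudo dnf install cri-o
```

The gap under discussion is that rpm-ostree has no equivalent of the `module enable` step, so FCOS users can't express a stream choice this way.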
So it's great to have modules to be able to choose from, you know, three different release streams that are supported. And yeah, I do agree, this is exactly what modularity is supposed to solve. So making that UX better in rpm-ostree, that's what I would go for here. It doesn't have to be full support, but if we can just make it a little bit easier, I think that would be great. Go ahead. So, a little bit of ignorance on my part: how bad is it to use the latest CRI-O with a random Kubernetes version? I know when the whole CRI was being churned out and all that, I'm assuming there was stuff that needed to be version-locked, but I guess my question is, how much does it really need to be version-locked today? It's difficult, because the CRI has not moved a lot in the past two years, but it has moved some. And part of the CRI-O value proposition is that there doesn't have to be any ambiguity between differing versions of the CRI implementation and their kubelet; it's a clear one-to-one relationship. So we have had situations where consumers have had to use an N-minus-one CRI-O version because we've been slow about releasing, and the opposite as well, but we can't really make claims about supporting that, because that arbitrarily increases the support burden on us. Well, not arbitrarily, because there are only three upstream supported releases, but if we have to backport CRI changes that far back, or maintain backward compatibility with three releases, that's a higher support burden. And it also goes against, philosophically, what CRI-O was initially created for. Yeah, I don't really know the answer to that either. I think it's going to be really interesting, because for all the problems of the Docker shim, however terrible it is, it also hasn't changed in forever.
So no one has had to worry about the Docker version in forever, whereas we're about to see the ecosystem move towards containerd and CRI-O and things like that, and maybe have to start caring about this. I'm vaguely hopeful that if things are up to the CRI 1.0 standard, we can all just say, okay, it's good enough, Kubernetes is happy enough with it, but I don't know if that's actually going to pan out, like the thing you're worried about, Peter. We would probably be trying to run, of course, the latest Kubernetes as soon as it's released, with whatever the latest CRI-O is, and hoping for the best. And containerd is going to be doing the same thing on Flatcar, and I guess we'll see which ones work well. I don't know how much mitigating effort will be needed to handle that, so it's hard to say. Yeah, and part of the trouble with this conversation is that the world didn't have to worry about this version problem with the Docker shim because the Docker shim was never updated. It has been kind of abandoned upstream since the CRI existed, and we never made that explicit. So there is this awkward interaction now, where that intention is being made clear, and all these end users are like, oh no, now I have to think about this. And yeah, the CRI is not even at version one yet; I think that's going to happen in 1.22. But it's in no way slowing down in development just because we're calling it v1; we're just calling it v1 because it was published three years ago and should be on v3 by now, or something. So, yeah, I don't wish to make any claim about CRI-O's ability to support N minus three. I guess I'm much more interested in the N-plus rather than the N-minus. Well, that's even harder, like backporting all of that. Yeah, actually, that's a good point. And yeah, I don't know.
Well, yeah. Actually, maybe we're going backwards. I mean, CRI-O supporting a future version of Kubernetes that hasn't been released yet is hard to make guarantees about, but it's the thing that would be needed. Well, I would hope that if we're updating Kubernetes at a single release, like, okay, 1.23 is released, so we're moving the kubelet forward to that, I would hope that CRI-O would be there also. So if we're only worried about the most recent release, I don't expect it to take that long to upgrade CRI-O just to catch up to where the kubelet is ahead of us. I'm worried more about looking backward. Right. Well, thanks for talking about the container runtime stuff. All right, do we have anything else to bring up? I think we've covered a lot of topics, but we still have a couple of minutes if there's a last thing to discuss. Otherwise, in any case, I will share the notes, and I'll try to write as much as I can into issue tickets, but of course, if you can create ones with the specific stories, or add your comments in the tracker, that would be helpful. And yeah, Dalton, I saw you had your hand up; do you want to say something? No, no, it's fine. No worries. All right. Thanks, guys. Great seeing everybody again, well, the ones that I know anyway. Thanks a lot. Welcome back, Dalton. Thank you, everyone. Bye-bye. Thank you. Thanks, everybody.