Okay, cool. I'll go ahead and get started. First off, I want to say welcome and thank you to everybody for coming to our session. Today we're going to talk about what's new and what's next in Fedora CoreOS. My name is Dusty Mabe. I'm an engineer at Red Hat; I started at Red Hat in 2013 in consulting, then switched over to get my hands dirty in engineering in 2015, and I've been working on Atomic Host and CoreOS stuff ever since. Before that, I was at a telecom company working on their CentOS Linux platform. I'm joined today by Timothée Ravier. Timothée? Yeah, hi everyone. I'm Timothée Ravier and I work at Red Hat too, on the CoreOS team, essentially on Fedora CoreOS, and I also take part in the development of the other ostree-based systems in the Fedora Project, so Fedora Silverblue and Fedora Kinoite. Cool. Thanks, Timothée.

Okay, we'll get started. What is our agenda for today? First, we're going to talk a little bit about what Fedora CoreOS is. If you've been to a Fedora CoreOS talk before, that part might be a little repetitive, but bear with us. We'll then talk about what's new since the last Flock/Nest, so lots of stuff that we've been working on. We'll also talk about what is coming up in the next few months, and then we'll talk a little bit about how we want to become a better Fedora Project citizen and integrate more with the rest of Fedora.

First, what is Fedora CoreOS? We've been in this state for a little while, but we call ourselves an emerging Fedora edition. Fedora CoreOS at its inception came from two separate communities. On one side we had CoreOS, the company, with its Container Linux project, and on the other we had Project Atomic from Red Hat and the Atomic Host derivatives from that: RHEL Atomic Host, CentOS Atomic Host and Fedora Atomic Host. Fedora CoreOS is the merging of Container Linux and Fedora Atomic Host. From those two projects, we picked and chose what we wanted to keep from each. From Container Linux, we kept the philosophy behind Container Linux, the provisioning stack and the cloud-native expertise; from Atomic Host, we kept the Fedora foundation, the update stack (rpm-ostree), and also enhanced security with SELinux, which was not enabled in Container Linux.

Talking a little bit about the philosophy behind Fedora CoreOS: a lot of this came from Container Linux, so it might not necessarily be what you're used to in Fedora land. One of the primary things we try to highlight and emphasize is that Fedora CoreOS has automatic updates, and we try to get people to leave them on. They are on by default. What this means is that no interaction is needed for administrators running Fedora CoreOS systems to get security updates and bug fixes. By default, if they do nothing, their systems will update. They don't need to continuously check whether a new CVE affects them, whether a fix came out, or whether the OS vendor has shipped it yet. It comes to them when it's ready.

Another thing we like to emphasize is automated provisioning. All nodes start from approximately the same starting point. Whether you are in AWS, GCP or OpenStack, whether you're on bare metal, whether you're running on a Raspberry Pi, the image you are starting with for Fedora CoreOS is going to be 99 percent the same everywhere. The only real difference is that we bake in some platform-specific IDs, because different platforms behave slightly differently; other than that, it's the same image. What this means is that we have a provisioning system called Ignition that we use to provision a node. You craft an Ignition config — and we have a helper that makes writing one easier — and you essentially tell the node what you want it to do in life. If you want it to chew on data, that's fine. If you want it to host a web server, that's fine. If you want it to do something else, be a timer, that's fine. You use Ignition to provision a node on first boot. All nodes start from the same place, so whether you're running on a Raspberry Pi or in AWS, you can use that same Ignition config.
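To make that concrete, here is a minimal sketch of that workflow: a tiny config written in Butane (the helper mentioned above) that gets converted into the Ignition config the node consumes on first boot. The SSH key, hostname and spec version below are placeholders, not anything shipped by the project.

```bash
# Minimal example of the provisioning workflow described above.
# The SSH key and hostname are placeholders -- substitute your own.
cat > example.bu <<'EOF'
variant: fcos
version: 1.4.0
passwd:
  users:
    - name: core
      ssh_authorized_keys:
        - ssh-ed25519 AAAA...example-key...
storage:
  files:
    - path: /etc/hostname
      mode: 0644
      contents:
        inline: my-fcos-node
EOF

# Convert the Butane config into the Ignition config that the node reads at
# first boot (the exact spec version available depends on your Butane release).
butane --pretty --strict example.bu > example.ign
```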
This leads into a buzzword: immutable infrastructure. Our spin on it here is that you can automate your deployment and system configuration using Ignition, and then if you ever want to tweak something, you don't have to log into the node. You can, but you don't have to. If it's cheap for you to blow away a node and reprovision, then you can just blow that node away, update your configuration, commit it to Git, push it somewhere, and reprovision. That's really nice if you like knowing that you can always start from scratch and get exactly what you want.

The last point on this slide is that we try to encourage our users to run everything in containers. They get the OS and add their software on top, with that software running in containers. If they deliver all their library dependencies in a container, which is essentially what containers are, then host updates become more reliable. We don't ship an update to a library on the host that breaks their application, which would lead them to say, "well, the host update broke me, so I'm going to disable automatic updates," leaving them more susceptible to being out of date and open to security issues.

As for supported platforms for Fedora CoreOS, we have a lot: there are ten-plus on this page, and we're always trying to add new ones. We are directly launchable in AWS and GCP, and we'd like to add to that in the future so that you can just click and start Fedora CoreOS elsewhere too. We also support bare metal. If you want to install on bare metal, you can install via an ISO: that's a live environment, and you can do an automated or interactive installation to the hard drive from it, or you can choose to just run from RAM and run workloads that way if you'd like. Same thing for PXE/network boot: you can run live or install to disk. And if you have a fancy new drive, we support 4K-native disks as well. Currently we only ship for 64-bit x86, but we are adding aarch64 support to our release pipelines as well. It already works; we just don't build and ship it regularly yet. We're working on adding that, hopefully within the next month. And I'll let Timothée talk about what's new in Fedora CoreOS.
All right, thanks. So here I'm going to talk about what happened in Fedora CoreOS land since approximately August 2020, so about a year ago. Next slide, please. The first big change we made is that we finally moved to cgroups v2 by default for Fedora CoreOS. We switched with the version shown on the slide, which means all new nodes starting from that version run cgroups v2 by default. We were able to make that change with the move to Fedora 34, essentially because we now have full support in Podman and Docker: whether you're running Podman or Docker, it works with cgroups v2. The main thing to remember about this switch is that we do not migrate existing nodes automatically. There is no automatic move from v1 to v2 on existing nodes, because you need to recreate your containers; the container runtimes are not able to move from one to the other on their own. So you have to make the switch manually. To switch a system you can use the command shown below, and all the examples are in the docs. On each slide you'll sometimes get some comments here, and at the bottom I'm linking to the specific documentation page related to the content of the slide. Next slide, please.

All right, so on top of the cgroups v2 change, we added new features to rpm-ostree which let you make reliable live changes to the system. Sometimes that's useful. We call Fedora CoreOS an immutable operating system, but it's not immutable in the sense that you cannot change it; it's immutable in the sense that you control how it gets changed and when it gets changed. And that's the idea here: we now have several commands in rpm-ostree which let you make changes to the system live, in a reliable and safe way. The first one, which you may already be familiar with, is rpm-ostree usroverlay. What this does is create an overlay, a temporary file system mounted on top of /usr, and it lets you make any modification you want under /usr, which is normally mounted read-only on Fedora CoreOS. The overlay is mounted read-write, so you're free to make any changes (you have to be root, of course), and it's non-persistent. So that's the basic one. If you want to go a little further, we've added a new command which essentially installs packages live on a system. The idea is that you call rpm-ostree install — for example for strace, if you want to debug something on your system — and you pass the --apply-live option to apply the change directly. What this does is create a new deployment, just like every rpm-ostree install does when you overlay a package on the system; normally that doesn't touch the running system, it just creates a new deployment for the next boot. But with the --apply-live option, rpm-ostree will also atomically switch the running system over to that new deployment, and you can use the new package immediately — you can run strace to debug your issue. The main benefit is that it's perfectly safe: your system stays fully read-only, /usr stays read-only, and on the next boot you get the new image with strace still applied. This was introduced across two versions of rpm-ostree, which are linked below along with the full documentation. Next slide, please.
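As a rough sketch of the commands just described (strace here is only an example package, and the exact procedure for the cgroups switch is in the Fedora CoreOS docs linked on the slide):

```bash
# Temporary, non-persistent overlay: /usr becomes writable until the next reboot.
sudo rpm-ostree usroverlay

# Or: overlay a package and apply it to the running system atomically.
# A new deployment is still created, so the package is also there on the next boot.
sudo rpm-ostree install --apply-live strace

# For the cgroups v2 switch mentioned above, existing nodes are not migrated
# automatically; one approach is to drop the kernel argument that pins v1 and
# reboot (double-check the documented procedure for your version first).
sudo rpm-ostree kargs --delete=systemd.unified_cgroup_hierarchy
sudo systemctl reboot
```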
All right, so on top of that we've added some new features to Ignition. The first one I want to talk about is kernel arguments: we added support in Ignition to directly set, change or remove kernel arguments on Fedora CoreOS nodes. Usually, to do that on first boot, you would either have to do it manually or do slightly hacky things in your configs, and it wasn't fully supported in Ignition. Now, with a single option, you can change kernel arguments on all platforms, in the same way, in the Ignition config. This change is of course applied on first boot, and since kernel argument changes affect how the kernel behaves, the node will reboot so the new arguments are applied to the kernel. Below I've added two examples. These examples are Butane configs; if you don't know about Butane, the short version is that it's a nicer way to write configs which then get converted to Ignition, which is what the Fedora CoreOS node actually consumes — it's much nicer to write Butane configs and generate the Ignition configs from them. So these are two Butane configs showing the kinds of kernel arguments you might change. The first one removes some of the mitigations for CPU vulnerabilities — Spectre and Meltdown, you might have heard of them — which you might do if you want full performance on your node. The second one: if you're stuck on cgroups v1 for any reason, you can still do that; we have the option to stay on v1. Next slide.
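A sketch of what those two Butane examples might look like; the field names follow the Butane spec revision that added kernel argument support, so double-check the spec version available on your release:

```bash
# Example 1: disable CPU vulnerability mitigations for maximum performance.
cat > perf.bu <<'EOF'
variant: fcos
version: 1.4.0
kernel_arguments:
  should_exist:
    - mitigations=off
EOF

# Example 2: stay on cgroups v1 if your workload is not ready for v2.
cat > cgroupsv1.bu <<'EOF'
variant: fcos
version: 1.4.0
kernel_arguments:
  should_exist:
    - systemd.unified_cgroup_hierarchy=0
EOF

# As before, convert with Butane and pass the resulting Ignition config at first boot.
butane --pretty --strict perf.bu > perf.ign
butane --pretty --strict cgroupsv1.bu > cgroupsv1.ign
```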
All right. The next thing I wanted to cover is bootupd. What is bootupd? We created a new piece of software to perform updates to the bootloaders on our rpm-ostree based systems. Right now it's only used in Fedora CoreOS and it only supports UEFI systems, but we're planning for BIOS systems too. Why did we have to do that? Well, making actual bootloader updates transactional and doing them in a safe way is really, really hard. It's not something we can easily solve, and that's why it wasn't solved by ostree and rpm-ostree: they don't touch the bootloaders. You install a system, and rpm-ostree will not update the bootloader; it won't touch it. So we needed something else. The idea is that we don't know when it is actually safe to perform a bootloader update on a system — the power might go out, you might reboot your system, things like that — and it's extremely hard to know. So we let you tell bootupd when it's safe to actually perform the update, and users can manually trigger it and update the bootloaders. The command is very simple — bootupctl update — and you instantly get the new version. All right, next slide.

Another change we've made, a little bit related to bootloaders: we've now made /boot read-only on Fedora CoreOS nodes by default. The idea is that modifying content in /boot is discouraged, because it's already handled automatically by other players — rpm-ostree mostly, and bootupd for some things. Usually, when you go into /boot wanting to change things, it's for one of two reasons. Either you want to add or remove kernel arguments for the deployed kernels — the safe way to do that on Fedora CoreOS is with rpm-ostree kargs and its options to add or remove arguments — or you want to change which version of Fedora CoreOS boots by default: either you want to go back to the previous version, you want to deploy a specific version to test or bisect an issue, or you want to change the boot order. That is also safely done via rpm-ostree. All right, next slide, please.

One of the other features we've added recently in Ignition is full support for encrypted storage using LUKS. LUKS is essentially a way to take a device and say that the contents of this device are fully encrypted and nothing can be read from it without the correct key. We support a couple of mechanisms to unlock LUKS devices. The first one is the classic key file: you write a string, a long string, or binary data into a file and use that to unlock the device. Or you can use a Tang server or TPM2 unlocking mechanisms, and both of those are enabled through Clevis, which is another piece of software we now include in Fedora CoreOS. On top of that, we've added support for running the root partition encrypted with LUKS. This requires that you use either TPM2 or Tang to unlock the partition, because the setup has to be automated: your node boots up directly and has to unlock the partition by itself. Below are two example configs: the first one encrypts / with LUKS using TPM2, and the other one sets up LUKS for another partition on the system. Next slide, please.

Another feature we've added in Ignition is RAID support. You can set up any kind of RAID array on first boot via Ignition. It will manage all the setup of the boot partition, the ESP and so on, and set up the RAID arrays for your system. One side effect of this change is that we no longer mount the EFI System Partition by default, so you won't see anything under /boot/efi, because we don't mount that anymore. The two examples here are about RAID 1 and RAID 0. The first one mirrors the full boot disk: if you've got two devices and you want your system to survive losing one, you can go with RAID 1. On the other side, if you want more performance, you can go with RAID 0 and stripe the data across the two devices. Next slide, please.
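A sketch combining the two storage features just described — a TPM2-bound LUKS-encrypted root and a mirrored boot disk. The device names are placeholders, and the boot_device section follows the Butane spec revision that introduced it, so check the spec version on your release:

```bash
cat > secure-boot-disk.bu <<'EOF'
variant: fcos
version: 1.4.0
boot_device:
  luks:
    # Bind the root LUKS volume to the machine's TPM2 via Clevis,
    # so it can be unlocked automatically at boot.
    tpm2: true
  mirror:
    # RAID 1 across two disks so the node survives losing either one.
    devices:
      - /dev/sda
      - /dev/sdb
EOF
butane --pretty --strict secure-boot-disk.bu > secure-boot-disk.ign
```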
And finally, we've added some options to give you more flexibility when booting with the PXE or iPXE mechanism. This is for booting nodes over the network: either booting nodes that are transient — we say they don't persist — or installing onto systems you're PXE-booting; you can use it for that too. The idea is that when you boot a system via PXE, you give the target system a kernel, an initramfs, and a root filesystem (rootfs) image to use, and the final system needs all of those. Previously, the initramfs and the rootfs had to be shipped in the same file. That's no longer the case, because sometimes it's useful to split them up for performance reasons: PXE by itself can be slow, and you might want to fetch things in separate steps to make it faster. So this gives you flexibility, because now you can do it in three different ways. The first one is to say: I'm providing the kernel and the initramfs, which are rather small, via PXE, and the system itself will then download the rootfs over the network; you specify the URL where the rootfs lives via a kernel argument in your PXE config. The second option is essentially going back to what it was before: instead of providing just one initrd, you provide two — the initramfs and the rootfs — and both are handed directly to the booted system. And the final one: if you can only specify one initrd, you can concatenate the rootfs onto the initramfs and provide that directly to the nodes. All right, and that's it for PXE booting.
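For the first of those three options, a rough sketch of what a PXE menu entry might look like, with the rootfs fetched separately over HTTP. All file names, URLs and addresses here are placeholders:

```bash
# Hypothetical pxelinux.cfg entry: kernel + initramfs served over TFTP,
# rootfs fetched separately over HTTP via the coreos.live.rootfs_url argument.
cat > /var/lib/tftpboot/pxelinux.cfg/default <<'EOF'
DEFAULT fedora-coreos
LABEL fedora-coreos
  KERNEL fedora-coreos-live-kernel-x86_64
  APPEND initrd=fedora-coreos-live-initramfs.x86_64.img ignition.firstboot ignition.platform.id=metal coreos.live.rootfs_url=http://192.168.1.10/fedora-coreos-live-rootfs.x86_64.img ignition.config.url=http://192.168.1.10/config.ign
EOF
```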
And that's about it for the main highlights of what we've done over the last year on Fedora CoreOS. Now we're going to talk a little bit about the things we're focusing on that are probably coming in the next few months. Of course, we don't have specific timelines, but these features are all fairly far along — mostly done or in later stages of development — and are really likely to land in Fedora CoreOS soon.

The first one is countme support. This one is mostly done; we're going to enable it in the coming weeks. Essentially, it's a very privacy-preserving way to count systems. It gives us a view of how many Fedora CoreOS nodes have been started and are running in the world. This mechanism takes great care to preserve user privacy, and we make sure we don't send any specific data about any system. What it actually sends is only a very broad approximation of how long a node has been running — whether it's somewhere between one week and one month, or between six months and one year, things like that; it's very, very coarse. The counting mechanism only reaches out to the official Fedora repository servers; we're not introducing anything new, it's the existing official infrastructure, and we don't send anything else. And of course you can always disable it if you must. Yep, and if you were wondering where CoreOS was on the slides Matthew Miller had earlier today where he reported usage information, CoreOS wasn't there because we don't have this enabled by default yet. We sent an email earlier this year and published an announcement on Fedora Magazine about when we're going to enable it, and we're almost there: we're enabling it later this month, and it's the exact same metrics system used by the rest of Fedora. Definitely.

One other thing that has been bugging us for a while: unfortunately, we've kept iptables stuck, you could say, on the legacy backend instead of the new nftables one on Fedora CoreOS nodes. It wasn't something we did intentionally; it's a consequence of a bug in the way the alternatives command works and how it's set up on our systems. So by default we're still on the legacy backend, but we're planning to switch to the new nftables backend so that everybody can benefit from it, just like the rest of Fedora. The details of why this doesn't work are a little boring; they come down to incompatibilities between the alternatives command — which is an old command from the classic Unix days — and the way rpm-ostree itself functions. The nice thing is that it's really easy to work around: essentially it's writing some symlinks and you're done. We of course want something nicer, but that's still a work in progress. All right, next slide, please.

The third item is about systemd-resolved. We enabled systemd-resolved by default in Fedora CoreOS with the move to Fedora 33, just like the rest of Fedora. But we found some issues with its use in some contexts, so we had to disable part of it: we had to disable the stub listener, because we had issues with reverse DNS lookups and, well, cascading issues from that. The main thing here is that the issue itself is resolved: we worked with the NetworkManager team to make sure this works correctly on Fedora CoreOS for the use cases we found, but the fix will only be available in Fedora 35, so we have to wait until we rebase onto Fedora 35 to get the full fix and enable systemd-resolved fully on the nodes.

Then we've got two ideas floating around that we're working on, to change a little bit how rpm-ostree works on the system. The first one is about ostree commits in container images. The main idea is that now, with rpm-ostree and some ostree extensions, you can export ostree commits in a format suited to being put inside container images. This essentially gives you slightly tweaked container images that you can rebase your system to: from a Fedora CoreOS node, you can rebase to a new version from a container image that carries the ostree commit. It also enables you to run, essentially, a version of Fedora CoreOS just like you would run a container: you just provide the command to run inside the node — well, not really a node, now it's a container — and it's really great for debugging, testing, and figuring out what's in a specific version. This is a kind of feature preview of what's coming up: we are not yet publishing those container images, but we've added support for it and we're figuring out how to enable it. The last thing about this is that it's not meant to be a base image for building application containers. You should still use the classic Fedora RPM base image, which is much better suited and much smaller, because these images are something like 700 megabytes: they contain a full Fedora CoreOS system, essentially.
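Since the images weren't published yet at the time of the talk, the image reference below is purely hypothetical, but it sketches the two uses just described: poking around inside a Fedora CoreOS version as a container, and rebasing a node to a container image carrying an ostree commit. The rebase syntax was still experimental, so flags and prefixes may differ on current releases:

```bash
# Hypothetical image reference -- the real location wasn't announced yet.
IMAGE=quay.io/example/fedora-coreos:testing-devel

# Inspect a given Fedora CoreOS version as if it were a container,
# e.g. to see exactly which package versions it contains.
podman run --rm "$IMAGE" rpm -qa

# Rebase a running node to the ostree commit shipped in that container image
# (experimental at the time; check current rpm-ostree docs for the exact form).
sudo rpm-ostree rebase --experimental ostree-unverified-registry:"$IMAGE"
```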
And finally, the last option we've added is cliwrap, which is essentially a wrapper for commands that you might run on a node. The idea is that you might forget whether you're on a Fedora CoreOS node or in a container or a toolbox, and you might try to install an RPM, for example. If you do that on a Fedora CoreOS node right now, you get a cryptic error message, because RPM itself isn't aware that you're running on a locked-down system. The idea with cliwrap is that rpm-ostree wraps those commands and gives you hints instead of error messages, to push you in the right direction. So if you try to install a package on the system, it will explain how to do that with rpm-ostree install instead. It's a helper mechanism to ease the transition from classic Fedora to these systems; you can enable it optionally via rpm-ostree, and it's currently an experimental option. And that's it for me, and for the features that are coming really soon. Now for the part about: okay, what do we do with the rest of Fedora, and how do we interact with it?

Yeah, I'll talk a little bit about becoming a better Fedora Project citizen, as I've titled this. First I'll dig into a bit of context for where we are today and what we've been up to. The background: I mentioned earlier that Fedora CoreOS was essentially the combination of Container Linux and Atomic Host. Container Linux had an installer, Atomic Host had an installer; Container Linux had a network stack, Atomic Host had a network stack; Container Linux had a container runtime, Atomic Host had a container runtime. So there were a lot of things that needed to be reconciled between the lessons learned from each project — what we liked about each one and what we didn't. We liked Container Linux's Ignition; we liked Atomic Host's rpm-ostree; things like that. There was a lot of work that went into deciding what we wanted Fedora CoreOS to look like, and there was a lot of pressure to get something out the door early. So we basically made all those decisions up front, shipped something, and then worked to knock down all of the technical debt associated with getting it out early.

Another thing: Fedora CoreOS is also the basis for upstream and downstream OpenShift, which is a very fast-moving project in its own right. There are a lot of requirements, needs and new features that come in, especially in the container runtime space, for things in OpenShift. So we need to be conscious all the time of shipping new features for upstream OKD and for OCP. Another thing is that Fedora CoreOS follows a different release model from what you might be used to in the rest of Fedora. We release three different streams — stable, testing and next — every two weeks. So we do three releases every two weeks, and sometimes more than that if we find problems or security issues come out. We're pretty much always releasing, which has demanded that we develop some custom tooling for ourselves to achieve those goals. In that same vein, Fedora CoreOS has a heavy reliance on CI and speed. Releasing multiple streams every two weeks means we pretty much have to rely on automated tests. There is no way we could release three different streams every two weeks — and possibly more often if issues arise — if we didn't trust our automated tests. So we run CI on pretty much every PR that comes in and every pipeline run that goes through. This lets robots do a lot of the work that people would previously have had to do manually. The other point here is that the OpenShift release cadence is much faster than RHEL's.
That means we need to be able to get features into Fedora CoreOS — and downstream into Red Hat CoreOS — a lot faster, a lot of the time, than you might be used to elsewhere in the Fedora Project. All of this boils down to: we needed custom release tooling to help us achieve those goals. What we've done is put a lot of work into building pipelines that can run many times a day and also run all of those tests I mentioned earlier, which exercise essentially every little piece of Fedora CoreOS — the installer, the various hardware platforms, the clouds, all of that — and tell us what's failing and what's not. We also wanted to be able to develop Fedora CoreOS quickly so that we could get feedback very fast. So we built a containerized development environment called CoreOS Assembler, and it allows anybody with Linux, Podman and KVM to quickly and easily build, run and test any Fedora CoreOS artifact that we ship. You don't need special access to infrastructure, you don't need a complicated setup; you should just be able to run it with Podman and a nice little bash alias that runs CoreOS Assembler for you.

All of that background was to say: that's what we've been up to. We've been building out the infrastructure and features, and now we're getting to the point where we can run a little bit. All of that stuff is working like a well-oiled machine, and we can start to be a little more proactive in the Fedora community, talk about the changes that are coming in, and have those discussions in the appropriate places. One of the first things we've started doing recently is actively reviewing Fedora change requests during the development release cycle and having conversations within our community about what we think affects Fedora CoreOS, what doesn't, and how we address those things. Some things we need to go back and discuss on the change request or on the devel discussion list; some are things we need to do internally; and for some we don't need to do anything, we just automatically absorb the change. For this release cycle we have a "Fedora 35 changes" label in our issue tracker, and for every change where we decided, oh, this needs more investigation, we created an individual issue and we carry out the discussion and investigation in that ticket.

The other thing we started doing recently is building and testing against a Rawhide stream of Fedora CoreOS. The Rawhide stream is not intended to be a stream that people actually use; it's strictly for finding problems and fixing issues. But what this means is that our suite of automated tests now complements Rawhide. Every time Pungi runs and a new Rawhide repo is created with new content — new RPMs in the content set — we build Fedora CoreOS based on it, and if things break we investigate, we report issues upstream and/or just fix them, and we participate a little more closely with the upstream maintainers and developers of those packages. What's really nice about this as well is that we can pin: if a package comes in and breaks us, we don't just stop there; we pin to the older version of that package and continue to test against new versions of Rawhide until the issue gets fixed for us. So we keep moving instead of staying broken on a single package.
Another thing we'd like to do is take a more active role in discussion and participation in FESCo. Participating in FESCo discussions will give us advance knowledge of future changes and let us help influence, and add perspective on, how changes coming down the line might affect Fedora CoreOS users. Taking part in that discussion means we get ahead of the ball rather than behind it. Another, optional, thing I've been thinking about is having a Fedora CoreOS representative run for FESCo and be a member of that body.

The other thing we've been discussing lately is changing our default policy for configuration. The background here is that there's been some friction between leading-edge Fedora changes and the defaults Kubernetes requires. If you don't know, one of the target use cases for Fedora CoreOS is running Kubernetes on top. Fedora being leading edge is nice, very good, but sometimes Kubernetes conflicts with that a little, in that it's not necessarily ready for a change yet. cgroups v2 was an example. Another example, which I have on this slide, is swap-on-zram, where a Fedora change came in, so now we have swap-on-zram, which means that even if you don't configure a swap device, you still get swap. Kubernetes just doesn't support swap: it sees that swap exists and simply refuses to run unless you set a flag that says, yes, please run anyway. There's a feature request upstream that's been accepted and it's going to happen eventually, but today it just doesn't support it. So that was an example where a Fedora change came in but we weren't able to absorb it, because of one of our target platforms. What we're doing now is changing our policy so that we apply those Fedora changes as they come in and add good documentation for Kubernetes distributors — who are going to be writing an Ignition config anyway to provision Kubernetes on the platform — saying: hey, you're provisioning Kubernetes anyway, add this little bit, because, for example, Kubernetes doesn't support swap yet, et cetera. In the future we'd like to gate these changes behind something like a feature-flags implementation, so that rather than copying and pasting a lot of configuration, users can just copy a small snippet that enables a set of feature flags and be good to go. So this is an area where in the past we were a little less flexible with regard to some Fedora changes; we're getting to the point where we're more flexible and can stay in line with Fedora a little better.
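To give an idea of the kind of documented "little bit" meant for the swap-on-zram case: Kubernetes distributors are writing an Ignition config anyway, so a snippet like the following could opt a node back out. This is a hedged sketch that assumes swap-on-zram is implemented via systemd's zram-generator, where an override config in /etc with no device sections disables the packaged default:

```bash
# Sketch of a Butane snippet a Kubernetes distributor could merge into the
# Ignition config they already provision with, to opt back out of swap-on-zram.
cat > no-zram-swap.bu <<'EOF'
variant: fcos
version: 1.4.0
storage:
  files:
    - path: /etc/systemd/zram-generator.conf
      mode: 0644
      contents:
        inline: |
          # Intentionally empty: do not create any zram swap devices.
EOF
butane --pretty --strict no-zram-swap.bu > no-zram-swap.ign
```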
The other point I wanted to mention here is having closer proximity to Fedora major releases; I've separated this into two slides. On this first one, I'm going to talk a little bit about our stream process and how we do promotions, specifically for testing and stable. Basically, what we do with our testing stream is snapshot RPM content on a particular day, build a testing stream release out of it, and release it. Users following testing get that testing release on that date. Then, if everything is good with that content set, two weeks later our stable stream gets the same content. The takeaway from this slide is essentially: testing comes out, and if everything is good, two weeks later stable is promoted to that same content set.

And that kind of leads me into how we behave during Fedora's major rebases. We've gotten a bit of criticism along the lines of: Fedora 34 is GA, so why isn't Fedora CoreOS stable on Fedora 34 content yet? It has to do with automatic updates. We want to make sure we break people as little as possible so that they stay on automatic updates. We don't want people manually updating their systems, because then they get out of date and our model kind of goes out the door. What we've got right now for Fedora GA and switching over our streams is what's on the screen here. I'll go over it in a second, but I want to emphasize that this is just what we have right now; we want to get stable closer to the GA release, but we're tweaking this process as we go. At the time of the Fedora Beta release, our next stream switches over to the new Fedora release; so in this coming cycle, at the Fedora 35 Beta, the next stream will be on Fedora 35 content. At the point of the Fedora final freeze, we start updating our next stream every week instead of every two weeks, so it more closely tracks the GA content. Every week where there's a no-go decision and the date slips a bit, we'll still do a release of the next stream to keep closely tracking that GA content set. On the GA date, Fedora CoreOS reorients itself and its release schedule starting that day. It's usually a Tuesday, and we start over: week zero, that day, we do a triple release, but the important part is that our next stream will carry that latest Fedora 35 content. That's what gets released at GA. One week later, assuming nobody reports "hey, this broke everything in the world for me," our testing release is promoted from that previous next content, so the GA content is in testing; and then, the standard two weeks after that, the stable release gets the GA content. So stable, three weeks after GA, will be fully rebased to Fedora 35. That's where we are right now. We'd like to improve on it in the future, but we're going to tweak it slowly and stay a little conservative.

Okay, so that's it from us for now. Obviously there are more discussions to be had about becoming better Fedora citizens. I'll stop and check what we have as far as questions. If we have a lot of questions, we'll just go through them; if we don't have many, we'll dig in and maybe do a demo or something. We have a couple of questions. Okay, cool. Some were answered partially in the chat, I think, but we can give them a short try. One of the first from the community was how to handle persistent data in Fedora CoreOS. Do you want to take it? Sure, yeah. I guess it depends a little bit on exactly what you're talking about, but when you boot Fedora CoreOS, obviously there are parts of the operating system that are read-only, which means you can't write anything to them. A lot of the time your applications will have data or state they want to store, so a lot of people write things under /var, which is read-write. If you want that to persist across a reboot, that's fine; /var does that automatically.
If you want that data to persist across a reprovision — meaning you've got a system up and you want to reprovision it but keep your data — what you can do is make /var, or really any path, a separate file system and write things there instead. Then, on the reprovision, you tell Ignition: I want a file system at this location. When it runs, it'll see that a file system already exists at that location, and since you didn't tell it to wipe it, it will leave it alone. So that's how you handle persistent data: if you want it to survive reprovisioning, you just make it a separate file system.
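A sketch of what that looks like in Butane, assuming a second disk dedicated to /var (the device path is a placeholder); on a reprovision, `wipe_filesystem: false` is what tells Ignition to keep the existing data:

```bash
cat > var-disk.bu <<'EOF'
variant: fcos
version: 1.4.0
storage:
  disks:
    - device: /dev/vdb        # placeholder: the disk that should hold /var
      wipe_table: false
      partitions:
        - label: var
  filesystems:
    - device: /dev/disk/by-partlabel/var
      path: /var
      format: xfs
      wipe_filesystem: false  # reuse the filesystem (and its data) if it already exists
      with_mount_unit: true
EOF
butane --pretty --strict var-disk.bu > var-disk.ign
```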
Great, thanks Dusty. The second question was about whether Fedora CoreOS has a graphical interface, and I'll take this one because it's on my side, I would say. No, we don't have a graphical interface, because it's mostly a server-oriented operating system: you run it on servers or in the cloud. If you want a graphical, desktop-oriented variant, you should probably look at Fedora Silverblue or Fedora Kinoite, which are desktop oriented but very similar, because they use the same kind of technology underneath.

All right, the next question: are there any plans to integrate the apply-live functionality with DNF, so that people can operate with their existing muscle memory? Do you want to take that one, Timothée? That's kind of like the cliwrap thing. Yeah, all right. The idea there is essentially to provide hints. Integrating with DNF completely — I don't know, because DNF operates directly on the file system, and the idea here is more about integration into rpm-ostree. What we do with cliwrap is always a bit delicate, because we'll never have the full set of DNF functionality; if we pretend to be the full DNF command but don't actually support all the options, it makes for a weird user experience. So far we've taken the position of saying: we don't support all the options, we just give you hints if you run those commands interactively; if you run them in a script, you just get an error.

All right, the next question is about the size of the QEMU/KVM image for Fedora CoreOS, which is rather large — around 1.5 gigabytes — compared to Fedora Cloud. Yeah, so one thing about Fedora CoreOS is that we really do try to deliver the same image everywhere. That helps a lot from the perspective of avoiding confusion, for us as well. But the thing is, Fedora Cloud can get away with targeting only a cloud environment — a VM-type situation — so they don't have to support that many different types of hardware. One thing we have to do is ship all the kernel modules and a lot of firmware that they don't. I'll also note that Fedora Cloud doesn't ship container runtimes; container runtimes are kind of heavy, they're written in Go and statically compiled for the most part. So we have a couple of things in Fedora CoreOS that are quite large, unfortunately. I'd love to get the size down, but I don't know of any plans right now to do that. I don't think you do either, right, Timothée? Yeah, that's essentially what I would have answered too. Okay, but we're always open to ideas — if there are creative ways that don't necessarily remove functionality. Because if we remove all the hardware support, then we've got several different images to ship, and it really keeps the mental model simple when you ship the same image everywhere. So there are some trade-offs there.

All right, we're at the 15-minute mark, so I don't know if we should go on; we can still take one or two questions. Yeah, let's just keep going with questions for the next five. The next question is about the container management tools available in Fedora CoreOS. I don't know if you could make the question more specific, because we ship Podman, Skopeo and Docker by default with Fedora CoreOS; if you need something else, feel free to file a suggestion, because I don't know exactly what the question is getting at. I think I might have an idea, because I have a similar problem. He said, for example, automatic rebuilds of Podman containers. I've solved this problem by just using systemd units with timers. A lot of people will say: set your container up so it's built in a registry periodically, and then when you run it, set it to pull periodically, or pull always. I, for some reason, want to build my containers on that system and then run them there, not store them in a registry — sometimes I have secrets in there. Anyway, I see your use case, and what I've done so far is implement systemd timers and services to rebuild periodically. I don't know if Podman has an inherent ability to say, oh, this image hasn't been built recently, rebuild it; that's something to look into. It's nice, because I do the other side: I build my containers in Quay and have them rebuilt automatically in Quay when I commit to a repo, and then I pull them with podman pull on the nodes. And you can have Podman use the auto-update feature, which checks regularly whether there's a new container image in the registry; if there is, it pulls it and updates your container. Yeah, and if you don't know Quay: it's pretty much free, you can sign up and they'll host your containers for you, as long as there's nothing secret in them — no limits for public repositories. So that's an option.
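Two hedged sketches of what was just described: a systemd service plus timer that periodically rebuilds a locally built container (all names and paths are made up), and the podman auto-update route for images hosted in a registry.

```bash
# Option 1: rebuild a locally built image on a schedule.
# Assumes a separate myapp.service exists that runs a container from localhost/myapp:latest.
sudo tee /etc/systemd/system/rebuild-myapp.service > /dev/null <<'EOF'
[Unit]
Description=Rebuild and restart the myapp container

[Service]
Type=oneshot
ExecStart=/usr/bin/podman build -t localhost/myapp:latest /var/srv/myapp
ExecStart=/usr/bin/systemctl restart myapp.service
EOF

sudo tee /etc/systemd/system/rebuild-myapp.timer > /dev/null <<'EOF'
[Unit]
Description=Nightly rebuild of the myapp container

[Timer]
OnCalendar=daily
Persistent=true

[Install]
WantedBy=timers.target
EOF

sudo systemctl daemon-reload
sudo systemctl enable --now rebuild-myapp.timer

# Option 2: images built in a registry (e.g. Quay). Label the container so
# podman auto-update pulls and restarts it when the registry image changes.
# For the restart to work, the container should itself be managed by a systemd
# unit (for example one generated with `podman generate systemd`).
podman run -d --name myapp --label io.containers.autoupdate=registry \
  quay.io/example/myapp:latest
sudo systemctl enable --now podman-auto-update.timer
```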
All right, the next question is about Fedora conflicting with Kubernetes. I think that refers to part of what I said earlier. Basically, I was saying that Fedora is leading edge, and sometimes applications or projects aren't ready yet for the brand-new defaults that come in; it takes them a little while to say, okay, this change in systemd upstream, let me take advantage of it, or at least not break when I see it. That's what I was getting at. Sometimes Fedora is a little too early for some applications out there; I wasn't trying to imply that Fedora inherently conflicts with Kubernetes. Yeah. And the last question we have is about whether the new feature for exporting ostree commits to container images can be used with the other desktop variants, such as Silverblue and Kinoite, especially to rebuild the base image from scratch.

So yeah, we're still working through all the issues with that, but we could potentially move to it. There are two sides to this. The first point is that it's purely a distribution mechanism, in a sense: it ships the same content you would usually ship via an ostree repo, but instead of shipping it via a classic file server, you ship it via a container image. It's still the same content. Yeah, so the takeaway there is that putting them in a container image isn't really that interesting unless you're trying to mirror infrastructure and deliver ostree updates yourself. As a user, you probably shouldn't care whether we're using a traditional ostree backend or a container registry as the backend for Silverblue or for your OS updates. However, if you're somebody who's trying to set up an offline, disconnected environment and you want to ship Fedora CoreOS or Fedora Silverblue to your users, then you might be interested: you may already have a container registry with containers in it, and putting these payloads into that registry is easier for you than setting up an ostree server just to do the ostree part. Yeah, definitely. So it's more focused on how we ship things. As for the rebuild part: this image, in a sense, doesn't really have DNF in it, so you can't use it like a classic Fedora container image — you can't really install packages in it right now, that's not supported. It's not meant to be used like a classic container image; it's more a way to store and ship things.

And I think that's the last of the questions we have right now. If you have any other questions, go ahead and ask them; we still have something like four minutes to go, and otherwise we'll close the session. Yep, someone in chat said CoreOS Assembler seems like a great way to test your Ignition changes before applying them on a server. Yep, CoreOS Assembler has a nice feature where you can do cosa run, and with the image you just built it'll start a virtual machine and drop you right into it — you don't even have to SSH, it SSHes for you. You can provide your own Ignition config and test it that way if you want, or you can take the image you built and run it anywhere else and test it that way too. So it's a nice little development and test environment.
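A sketch of that workflow, loosely following the alias documented in the CoreOS Assembler README (the alias here is simplified and the repository URL is the public Fedora CoreOS config):

```bash
# Simplified version of the documented cosa alias: run CoreOS Assembler from a
# container, with KVM access and the current directory as the working directory.
alias cosa='podman run --rm -ti --security-opt=label=disable --privileged \
  --device=/dev/kvm --device=/dev/fuse --tmpfs=/tmp \
  -v "${PWD}":/srv/ --workdir=/srv/ \
  quay.io/coreos-assembler/coreos-assembler:latest'

# Typical loop: fetch the Fedora CoreOS config, build an image, then boot it in
# a local VM. cosa run drops you straight into a shell on the built image; it
# can also be pointed at your own Ignition config (see cosa run --help).
cosa init https://github.com/coreos/fedora-coreos-config
cosa fetch
cosa build
cosa run
```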
Okay, so I'll say thanks everyone for joining us today. We'll hopefully post these slides somewhere — we'll share the links either on the Fedora CoreOS issue tracker or maybe on the session's schedule page, I'm not sure exactly where we can put them, but you should be able to find them shortly, and we'll link them in the usual Fedora CoreOS spaces. I also had a slide at the very end which I managed not to show, which I'll try to show now. Oh yeah — it's basically how to get involved. We have a website where you can go download Fedora CoreOS, an issue tracker where you can report issues or have design discussions, a forum where you can ask "I'm new to Fedora CoreOS" or "I'm hitting this problem," and a mailing list for the same kind of thing. We're also on IRC. And there are links at the bottom — assuming these slides get posted somewhere eventually — to previous talks where we go into more detail or do demos and things like that; there are YouTube links at the bottom. So yeah, thanks everybody for coming, and I appreciate it. Hopefully you're interested in Fedora CoreOS — come chat with us, come get involved, and enjoy the project. Thank you all. Thank you. See you.