OK, I'm going to assume everybody can hear me. First off, my name's Dusty Mabe, and I'm going to share my screen for a little presentation. Hopefully I don't get too long-winded today and actually have time for some questions. OK, it seems like everybody can see the presentation, so I'll go ahead and get started. We're here today to talk about Fedora CoreOS at the Fedora 34 release party. My name's Dusty Mabe, I'm a software engineer for Red Hat, and you can email me, hit me up on Freenode, or check me out on Twitter if you'd like to ask me any questions afterwards, in case we don't get around to them. So what are we going to talk about today? We're going to talk about what Fedora CoreOS is, because even though we've been around for a little while, there are a lot of people who still don't know. We're also going to talk about how Fedora CoreOS relates to the rest of Fedora, and then some recent developments: what have we been up to lately? And then if we have time, we'll do a demo and answer some questions. So first off, what is Fedora CoreOS? I'll talk a little bit about how Fedora CoreOS came to be, which kind of answers what it is and why it's different from the rest of Fedora. Fedora CoreOS is an emerging Fedora edition. It came from the merging of Project Atomic's Atomic Host and CoreOS Inc.'s Container Linux offering, and it took some things from each. From Container Linux, it took the philosophy behind Container Linux, the provisioning stack, and the cloud-native expertise. From Atomic Host, it took the Fedora foundation, the update stack, and the enhanced security with SELinux.
Digging a little more into the philosophy behind Container Linux: Container Linux had automatic updates, so no interaction for administrators, which means things are more likely to stay up to date because admins don't have to consciously think about updating their systems. Security fixes get applied, because new updates come in with CVE fixes all the time. All nodes in a Container Linux cluster start from approximately the same starting point, and they use Ignition to provision a node when it first boots. So whether you're on bare metal or cloud, you share the provisioning stack. They were also big on immutable infrastructure: if you need to change something, you update your Ignition configs and reprovision. Obviously this is a lot easier to do in a cloud-native environment than, for example, on bare metal, but there are tools that make it easier there as well. And then user software runs in containers, so host updates are more reliable because you keep your applications and your host separated. That's the philosophy behind Container Linux before we merged, and you'll see a lot of the same things applied to Fedora CoreOS. So, first feature: automatic updates. Fedora CoreOS features automatic updates by default. If we have automatic updates, we want them to be reliable: if somebody has a node that's updating automatically and every other month it breaks, they're going to disable automatic updates. So we have extensive tests in our automated CI pipelines that try to make sure basic functionality doesn't break. Another thing, which I'll talk about more later, is that we have several update streams. That allows users to preview what's coming and helps make sure a bug in one of our testing streams doesn't make it into stable and affect the nodes they really care about.
We also have managed upgrade rollouts over several days. That means if somebody early in the rollout finds a problem, we can stop it before it hits everyone else who would have gotten the rollout. So the rollout happens over a roughly two-day period, and if somebody in the first three hours finds an issue, we can stop the rollout and avoid affecting everybody else. But sometimes things go wrong, so we have rpm-ostree rollback, which can be used to roll back in case we don't catch something in time. The next feature is automated provisioning. Fedora CoreOS uses a tool called Ignition to automate provisioning. It runs in the initramfs, so before anything in the real root has started up. Pretty much all the logic for a machine's lifetime is encoded in that config, which makes it very easy to automatically reprovision nodes: if something goes wrong, you just update your config and start a new node, and it picks up and applies whatever configuration you have, maybe joins a cluster, and goes from there. And we have the same starting point whether we're on bare metal or cloud, because we use Ignition everywhere. So whether you're booting a bare metal node via PXE or starting a node in Google's or Amazon's cloud, you're using Ignition, which helps you use the same stack. The next feature: we're cloud native and container focused. Software runs in containers, and we ship the Podman and Moby Engine container runtimes (Moby Engine is Docker). It's ready for cluster deployments: you can spin up a hundred nodes, have the Ignition configs make the nodes join the cluster, and then spin them down when they're no longer needed. So if you don't need a lot of extra capacity, you spin it down and you don't pay for it. We're also offered on a plethora of cloud and virtualization platforms, and we're trying to add more all the time.
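As a concrete illustration of that provisioning flow, here's a minimal sketch of the kind of Butane config such a node might start from. This is not a config from the talk: the SSH key is a placeholder and the service is a made-up example.

```yaml
# Minimal Butane config sketch (compiles to an Ignition config).
# The key and unit below are illustrative placeholders.
variant: fcos
version: 1.3.0
passwd:
  users:
    - name: core
      ssh_authorized_keys:
        - ssh-ed25519 AAAA...placeholder-key
systemd:
  units:
    - name: hello.service
      enabled: true
      contents: |
        [Unit]
        Description=Example one-shot unit provisioned via Ignition

        [Service]
        Type=oneshot
        ExecStart=/usr/bin/echo "provisioned by Ignition"

        [Install]
        WantedBy=multi-user.target
```

Because everything the node needs lives in a config like this, "reprovision" really does mean: delete the machine, boot a fresh one with the same Ignition config.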
The last feature I'll talk about is OS versioning and security. Fedora CoreOS uses rpm-ostree, similar to Fedora IoT and Silverblue. I like to describe it as git for your operating system: more or less, you have a version identifier that corresponds to a commit hash, very much like git. So if you know the version, kind of like a tag, you know exactly what software was installed and running on that version of Fedora CoreOS. You can tell us, "hey, I'm seeing an issue with Podman on this version," and we can easily try to reproduce it. It also uses read-only filesystem mounts, which prevent accidental OS corruption and also some unsophisticated attacks if somebody happened to gain access to the system. Probably not sophisticated attacks, but unsophisticated ones, yes. We also have SELinux to prevent compromised applications from gaining further access to the host. So how does all of this relate to Fedora? Well, the biggest thing is that we're built from Fedora components: we're built from RPMs, just like every other edition of Fedora, which deliver hardware support, basic administration tools, and the container engines. But sometimes we do make different policy decisions, based on our target user base. One good example: Fedora moved over to cgroups v2 by default a few releases ago, and we're just now getting around to that, because Kubernetes and the Docker/Moby Engine container runtimes weren't really ready for it, and those are among our target use cases. So in some cases we decide to lag Fedora. We try not to let that happen, because obviously it's more for us to manage, but sometimes it just makes sense. I mentioned update streams earlier; this is another thing that's different from what the rest of Fedora does. We have three update streams that people essentially follow like a rolling release.
Next is an update stream that has experimental features and Fedora major rebases. For example, right now our next stream is based on Fedora 34 and also has cgroups v2 enabled by default: an "experimental" feature and a major rebase, both in our next stream. Testing is a preview of what's coming to stable; it's essentially a point-in-time snapshot of Fedora's Bodhi stable RPM content. Hopefully, if we don't hear of any issues over a two-week period, we promote that testing release into stable during the next cycle. So the goals here are to publish new releases into our update streams every two weeks, and to find issues in our next and testing streams before they hit stable, which is where the workloads people really care about are running. What does this look like? I'll walk through a typical version identifier of a Fedora CoreOS release. This one starts with 31, which means it was based on Fedora 31 content; then a date, March 23rd, 2020; then a stream identifier (1 is next, 2 is testing, 3 is stable); and then a revision. More or less, the date represents when the content was snapshotted from Bodhi; it makes it into a testing release, and then two weeks later it makes it into stable, assuming we don't find any issues, like I said. I'll walk through this a little more in just a second when I talk about the Fedora 34 rebase. So Fedora 34 is out, yay! But we haven't switched stable over just yet. We have a plan for doing that, and it's going to happen in a few weeks. So how do we relate to the rest of Fedora's processes? This is what we have today, and we're going to work on getting closer to Fedora GA for future releases, but this is just what we have today. Around the Fedora Beta release, we switch our next stream over to Fedora 34; that's what happened this time.
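To make the version scheme concrete, here's a small, hypothetical parser for identifiers of the form `<fedora-major>.<YYYYMMDD>.<stream>.<revision>`. The stream-number mapping used here (1 = next, 2 = testing, 3 = stable) is taken from the Fedora CoreOS documentation, not from this talk; the function itself is just an illustration.

```python
# Illustrative parser for Fedora CoreOS version identifiers,
# e.g. "31.20200323.3.2" -> Fedora 31 content, snapshotted 2020-03-23,
# stream 3, revision 2.
from datetime import date

# Assumed stream-number mapping per the Fedora CoreOS docs.
STREAMS = {1: "next", 2: "testing", 3: "stable"}

def parse_fcos_version(version: str) -> dict:
    """Split an FCOS version string into its four fields."""
    major, snap, stream, revision = version.split(".")
    return {
        "fedora_major": int(major),
        "snapshot_date": date(int(snap[:4]), int(snap[4:6]), int(snap[6:8])),
        "stream": STREAMS[int(stream)],
        "revision": int(revision),
    }
```

For example, `parse_fcos_version("31.20200323.3.2")` reports Fedora 31 content snapshotted on March 23rd, 2020, on the stable stream, revision 2.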
Once the Fedora final freeze happens, we switch that next stream to weekly releases rather than bi-weekly releases. This is to closely track the GA content set and bring in any fixes; there are usually a lot of critical things that get fixed during this time. Then around GA, what we're doing this time is that Fedora CoreOS reorients its release schedule based on GA. So in the week of GA, which was this week, our next stream has the very latest Fedora 34 content; there was a next stream release this week. Next week, which is week one, our testing release gets promoted from the next release that happened this week. Then in week three, our stable release gets promoted from that testing release, so by week three our stable stream has been fully rebased to Fedora 34. To illustrate a little further: right now there's a next stream release; one week later it goes to testing; two weeks after that it goes to stable. So: week zero, week one, and week three. OK, so what have we been up to recently? Quite a lot of different things. I'll go over these individually, and I'll try to do it fast. One point I want to make at a high level is that this isn't all stuff that rolled into Fedora 34 specifically. These are features we've shipped over time in Fedora CoreOS, because with our automated testing, and with people running our testing streams, we feel a bit more confident actually making changes midstream. So, some of the things we've been up to recently: we improved the package layering experience. Previously, if somebody had a base content set that was a few weeks out of date and tried to layer a package on top, they would get an error. Now they're much less likely to hit an error, because we have an updates archive repo that allows the dependency resolution to succeed. The boot partition is now mounted read-only, which prevents accidental damage to files critical for boot.
It's just another example of more of your operating system being mounted read-only so you can't accidentally mess it up. It's now possible to use RAID 1 for your boot disk, and we also have better support for disk encryption. Some of this was driven by requirements in our Red Hat CoreOS product, but we like to have it in Fedora too, because people like that stuff as well. We also have better introspection into the state of Zincati from the rpm-ostree status output. We have initial support for updating the bootloader via bootupd; if anybody's heard of the GRUB CVE known as BootHole, you'll know why that's necessary. We're also moving to cgroups v2 by default: it's already deployed on our next stream, and it'll be coming to testing and stable soon. We have DNF countme support for Fedora CoreOS rolling out in the coming months, so that mattdm (Matthew Miller) can include Fedora CoreOS, and also Fedora Silverblue, in his nice pretty graphs. And one last thing: we renamed the Fedora CoreOS Config Transpiler to Butane, because it's going to be used for other things, like Red Hat CoreOS and machine configs. One thing I will mention: we don't have DNF countme support yet, but we do have a Cincinnati server that essentially tells nodes, "hey, here's where it's safe to upgrade to." This is just a snapshot in time taken earlier today: in the past month, almost 50,000 unique nodes checked in to ask what the update graph looks like. I think it's close to 48,000. It's nice to see a consistent number of nodes checking in; it's a data point. OK, quick demo; I'll do this real fast. If somebody wants to queue up questions, that would be nice, so we can answer them while we're doing the demo. Let me see if I can share. OK, so this is basically a demo of running a WeeChat IRC client on a Fedora CoreOS machine.
This is just a small example of an application you can run on top of Fedora CoreOS; it's more or less just a container. But it shows how you can have an Ignition config that deploys the entire node without you having to touch it. And if you want to throw it away and start again, you just delete the machine and start it again. So in this case, we have a systemd service that essentially runs tmux and starts WeeChat inside of it. That runs inside a container, and the container is defined in this Dockerfile: we install WeeChat, we install tmux, we start tmux with systemd, et cetera. We have a startup script that tells WeeChat what channels to connect to and things like that. So essentially all of our configuration is defined here, and we can throw away this container, or this whole machine, and we don't really care, because it's all defined there. So what I'll do is take this Butane config (Butane was previously known as FCCT) and create an Ignition config out of it. The next thing I'll do is start up a virtual machine based on that Ignition config. Now I've got a virtual machine starting, and once it gets up and running, it will actually build the container and then run it. In this case I decided not to push my container to a registry; I'm the only user, so I decided that was a bit overkill. OK, I'm going to log into the machine. Now I'm in the machine, and I'll run rpm-ostree status. You can see I'm on the next stream of Fedora CoreOS, and this is the version I'm on; that's the next release that came out earlier this week, and it tells you, "hey, I was snapshotted on April 27th." Let's see what's going on with my user units. This is a user unit that builds the container, and it's all running unprivileged: once it's done building the container, it just runs as rootless Podman. And I can watch it as it runs.
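The build-then-run user unit described here could look roughly like the following sketch. The unit name, image name, and build path are hypothetical; they are not the exact files from the demo.

```ini
; ~/.config/systemd/user/weechat.service (hypothetical recreation)
[Unit]
Description=Build and run a WeeChat container rootlessly

[Service]
; Build the image locally instead of pulling from a registry.
ExecStartPre=/usr/bin/podman build -t localhost/weechat /var/home/core/weechat
; Run it rootless; the container runs systemd, which starts tmux + WeeChat.
ExecStart=/usr/bin/podman run --rm --name weechat localhost/weechat
ExecStop=/usr/bin/podman stop weechat

[Install]
WantedBy=default.target
```

Attaching to the running client is then just a `podman exec -it weechat tmux attach` away, which is what the little wrapper script in the demo does.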
While we watch this, I can see if there are any questions. Does anybody have any questions? Oh, can I increase the font? Is that better? I don't know if that's better. OK, it finished building the WeeChat container. So now the container is up and running; it runs systemd in the container and starts tmux with WeeChat in it. I have a nice little script that basically runs podman exec into the WeeChat container and then runs tmux attach. If I run podman ps, I see the WeeChat container. So let me just run that /usr/local/bin/weechat command. And I am in #fedora-devel on IRC as fedora-demo, and I'm just going to reach out and say hi to Marie. And over here: yes, the demo works. The real me says that guy's an imposter. Anyway, that is the demo. Let me go back to my presentation real quick. If you want to get involved, there's a bunch of links right here. If you're interested in just learning more about Fedora CoreOS, go check out the tutorials. We have pretty comprehensive tutorials that walk you through "here's how you run a container, here's how you build an Ignition config." Those are quite useful. All right, questions. Let's see if anybody has any questions. Q&A tab, OK. Man, there are a lot of them. Let's see. OK: "Fedora CoreOS is a pretty integral part of OKD. Are you aware of other active use cases for Fedora CoreOS outside of that context? You can also extrapolate to Red Hat CoreOS." So OKD is a big user of Fedora CoreOS. There's also an upstream project called Typhoon, which is an upstream-Kubernetes-style platform, and you can choose either Fedora CoreOS or Flatcar Container Linux as the host for that. So that's an option. Most other cases are individual container clusters, so people running Docker Swarm; and there's an OpenStack project for Kubernetes, I think it's called Magnum, that also runs on top of Fedora CoreOS.
So there are a few different things that integrate on top, but we should probably start building out a comprehensive list. Let's see. Peter asks, "why isn't the next stream moved to Fedora 34 at beta time and then promoted to stable at Fedora 34 GA?" So the next stream does move to Fedora 34 at beta time, but right now we keep the rebase on next until after GA. Because of the automatic updates, we've been more conservative about switching stable over on the GA date: if somebody's automatically updating node fails, it's an opportunity for them to reevaluate and go somewhere else. So we've been a little more conservative about switching it over at GA. Over time we've gotten closer and closer, and I think that will continue. So maybe next time we actually switch testing over before GA, not a week after. It's a continuous evolution toward where we want to be in the future. "What is the best deployment using CoreOS on a home server running multiple services, having some base OS and running CoreOS containers?" I'm not sure exactly what you mean. Jonathan, do you understand that question? OK, I'm not 100% sure either, to be honest. Maybe clarify in the chat. There's another question right after that: "Is there an offline time when applying updates, when they are applied automatically?" I'll try to interpret that one. With our update agent, Zincati, you can configure when updates get applied: say, only on Monday between 8 a.m. and 9 a.m., so that nodes don't go down during a critical time for you. There are also different strategies that Luca has baked into it, so definitely check out the Zincati update strategies. As far as fully offline operation goes, we're not quite there yet; right now you still need access to the Fedora update servers.
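The update-window behavior mentioned here is configured on the node through Zincati's periodic strategy. A sketch of such a drop-in might look like this; the file name and the Monday 8-9 a.m. window are just examples matching the answer above, so check the Zincati documentation for the authoritative keys:

```toml
# /etc/zincati/config.d/55-updates-strategy.toml (example drop-in)
[updates]
strategy = "periodic"

# Only finalize updates on Mondays between 08:00 and 09:00 UTC.
[[updates.periodic.window]]
days = [ "Mon" ]
start_time = "08:00"
length_minutes = 60
```

Outside the configured window, Zincati stages the update but waits to reboot into it, so nodes don't go down at a bad time.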
Let's see: "Would it be good practice to set up CoreOS on bare metal as a dual boot alongside Fedora Workstation, or would it be better to run Fedora CoreOS on a server?" I would not use Fedora CoreOS as a dual boot, because when you're installing it, it assumes it has access to the full disk. So I wouldn't dual boot on the same disk as your Workstation install; it would be better to run it on an individual server, or in a virtual machine if you want to try it out. "Is there a plan for official support of CoreOS on Raspberry Pi?" I don't have a specific answer for you on that. I do know we want to have ARM64 builds soon, which could include the Raspberry Pi, but if there's extra work needed because of that specific platform, it might not land at the same time as the general ARM64 support. OK, I think I'm at time. Marie, do I have more time, or are we done? Well, thanks everybody, I appreciate you coming. We're in #fedora-coreos on Freenode, so come ask us questions. Bye-bye.