I'll get started. First off, I'd like to say hello. My name's Dusty Mabe. I work with the Fedora project quite a bit, I'm employed by Red Hat, and I focus mostly on Fedora CoreOS and Fedora Cloud. I'm going to talk about Fedora CoreOS: what's now and what's next.

In today's talk, I'm going to briefly cover what was, which was Container Linux and Atomic Host, then what's now, and then what's coming next. I'm also going to try to answer some questions. Unfortunately, I don't think I have a lot of time for a demo, but if people want to stick around afterwards and we don't get kicked out of the room, I can do a brief demo and take questions as I go.

So first of all, what was? What used to exist was Container Linux and Atomic Host. They were both container-focused operating systems. Container Linux was based on Gentoo, while Atomic Host was based on Fedora/RHEL and used RPMs as input. Container Linux had an A/B partition scheme for updates, so an image-based update strategy, while Atomic Host used rpm-ostree as its technology. Container Linux used Ignition for provisioning; Atomic Host had Anaconda and cloud-init: Anaconda for the bare-metal install workflow, and cloud-init for anything in the cloud that started from an image.

That's what existed. But what do we have now? Now we have Fedora CoreOS, which is an emerging Fedora edition. Fedora CoreOS came from putting these two communities together: CoreOS the company's Container Linux, and Project Atomic's Atomic Host. This was mostly facilitated by Red Hat's purchase of CoreOS, the company, and we've been harmonizing ever since.

But what does Fedora CoreOS do? What is it comprised of? Fedora CoreOS incorporates the Container Linux philosophy, the Container Linux provisioning stack, and the Container Linux cloud-native expertise. It also incorporates the Atomic Host foundation on top of Fedora, the update stack from Atomic Host, and enhanced security with SELinux.

I want to take a brief moment to talk about the philosophy behind Container Linux, because you'll see a lot of it ring true in a second when I talk about the high-level features of Fedora CoreOS. First, Container Linux had a philosophy of automatic updates by default, which means that by default there's no administrator interaction needed to stay up to date. That typically means systems get security fixes applied in a more timely manner and, in general, don't get forgotten about and left open to CVEs for a very long time. Another philosophy was that all nodes start from roughly the same starting point: they used Ignition to provision a node wherever it started, whether bare metal or in the cloud, which is interesting. They also took immutable infrastructure to heart. In general, if you need a change, update your configuration and reprovision. That's a lot easier in a cloud-native environment than on bare metal, but that was the direction they were coming from, and it tends to make a lot of sense in today's world. And then, from the perspective of host updates: have your users run software in containers. That way they depend less on the host, and host updates are less likely to break their applications.

So now let's talk about Fedora CoreOS features.
You'll see a lot of that Container Linux philosophy come through here, the first one being automatic updates. Fedora CoreOS features automatic updates by default. In order to deliver automatic updates, they need to be reliable updates. We can't have systems randomly breaking and sysadmins getting paged in the middle of the night, or whenever the update decides to come through. In the old model, without automatic updates, there would be some sort of plan involved: "I'm going to update these systems, and I'll be around in case something goes bad." With automatic updates, they need to be reliable; we don't want to break people.

So we have extensive tests in our automated CI pipelines. Every time a build gets created, every time a PR gets opened, we test things. But there are things we can't catch with tests; sometimes that happens. So we also have several update streams to preview what's coming. Users can run those streams, preview what's coming, and know ahead of time if something is going to break them.

The other thing we have in place to make updates more reliable is managed upgrade rollouts. When we start an upgrade, it won't go to everybody immediately; it goes out to everybody over a period of time, call it 24 or 48 hours, three days, four days. We can control how long the update takes to roll out. Different people get the update at different points in that window, and if some of the early people who win the lottery get the update and it breaks them, they can let us know, and we can stop the rollout and investigate before the rest of the people in the window update. (There's also a node-side knob for this; see the sketch just below.)

But sometimes things will go wrong, and when they do, we have rpm-ostree rollback, which can be used to go back. In the future, we want to have automated rollbacks, so a user can specify: when the system boots into a new update, run these three checks, and if they don't pass, go back and maybe send me an email or something. The idea is that if the application is somehow affected by the update, we can automatically go back, and it's not panic mode.

Okay, I want to talk a little more about multiple update streams. We offer three update streams to users: next, testing, and stable. Next is for experimental features and Fedora major rebases; so when the Fedora 33 beta comes out, we'd try to rebase to it pretty soon and get feedback on anything that needs looking at. Testing is a preview of what's coming to stable: basically a point-in-time snapshot of the Fedora stable RPM content. After a period of time, we promote that to the stable stream, which is the most reliable stream we offer. The goals here are to publish new releases into the update streams at approximately a two-week cadence, and to find issues in the next and testing streams before they hit stable. We don't want to break people in stable.

This is kind of a complicated slide, and in the interest of time I'm not going to go into too much detail, but I'll run through how our update promotion works. Picture the yum repositories updating over time. At a point in time, we take a snapshot of what's in the stable repositories and build a testing-stream release from it; call it 31.20200323, for March 23rd.
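A quick aside on rollouts: the pacing isn't only controlled on our side. Each node can hint how eager or cautious it wants to be within the rollout window. Here's a minimal sketch, assuming the Zincati update agent's rollout-wariness knob; the drop-in file name here is just a convention:

```toml
# /etc/zincati/config.d/51-rollout-wariness.toml
# 0.0 = take updates as early as possible in the rollout window,
# 1.0 = wait as long as possible before taking them
[identity]
rollout_wariness = 0.5
```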
So we release that testing build, and there's a two-week period where hopefully nobody finds issues. If they do, we basically bump testing and go again. If nobody finds issues, then after two weeks we promote that content to stable. So we have a lot of control here, which is very useful.

Okay, the next feature I want to talk about is automated provisioning. Fedora CoreOS uses Ignition, like Container Linux did, to automate provisioning. All the logic for a machine's lifetime is encoded in the Ignition config, so it's very easy to automatically reprovision nodes. I actually had something like this hit me earlier this year: I lost connectivity to a box sitting on my desk in the Red Hat office, and because of COVID we were not allowed into the office. So I took the same Ignition config I had used to provision that node, spun up a VM locally on my desktop here at home, and I was back up and running. And also, with Ignition we start from the same starting point whether we're on bare metal or in the cloud. You can really use Ignition everywhere, as opposed to needing two different provisioning strategies for bare metal and cloud.

A little more detail on Ignition: it's a declarative JSON file, essentially, and during the boot process Ignition runs exactly once. It can write files; it can do systemd units, users, groups, partitioning disks; it can do some complicated things. In general, if you provide an Ignition config that is either invalid or partially fails for whatever reason, the boot will not continue. That means you don't get any half-provisioned systems, which can be confusing: a system that's sort of working, so it doesn't fail health checks, but part of it didn't actually get configured correctly, and you find out much later.

And because Ignition configs are machine-friendly and not really pretty to look at or edit, we have another tool called the Fedora CoreOS Config Transpiler (FCCT), which translates a human-friendly YAML format into Ignition JSON. But it's not just a YAML-to-JSON converter: there are quite a few little helpers in there that will generate valid Ignition for you from a much smaller syntax. It also allows for distribution-specific stuff. For example, if openSUSE were to take Ignition and run it, they might have more distribution-specific things they want to add in their own config transpiler; Ignition itself is meant to be distro-agnostic. I'll show a small example of this in a moment.

The next feature I want to talk about is cloud-native and container-focused. Software runs in containers, and we ship both the Podman and Moby Engine container runtimes, Moby Engine being the open-source upstream of Docker. And it's ready for cluster deployments: because we're using Ignition, you can spin up 100 nodes and have them join a cluster, that bursty, cloud-style capacity, and then spin them down when they're no longer needed. As far as being available across many platforms: right now we have Alibaba Cloud, AWS, Azure, DigitalOcean, Exoscale, GCP, OpenStack, Vultr, and VMware images, plus QEMU/KVM images. We're also adding IBM Cloud, VEXXHOST, and a number of others. We are looking to be everywhere.
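Since I just mentioned FCCT, here's roughly what that workflow looks like. This is a minimal sketch in the FCOS config format from around this time; the SSH key is a placeholder, and fcct's exact flags have varied a bit across versions:

```yaml
# example.fcc: human-friendly config that FCCT translates to Ignition JSON
variant: fcos
version: 1.0.0
passwd:
  users:
    - name: core
      ssh_authorized_keys:
        - ssh-ed25519 AAAA...  # placeholder key
systemd:
  units:
    - name: hello.service
      enabled: true
      contents: |
        [Unit]
        Description=Log a greeting once at boot

        [Service]
        Type=oneshot
        ExecStart=/usr/bin/echo "hello from Ignition"

        [Install]
        WantedBy=multi-user.target
```

You'd then run something like `fcct --pretty --strict example.fcc > example.ign` and hand the resulting Ignition config to the platform for first boot.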
The next feature is OS versioning and security. This one I like quite a bit, especially as somebody who looks at a lot of bug reports. Fedora CoreOS uses rpm-ostree, which I like to describe as Git for your operating system. There's a single content hash, like a Git hash, that defines every build you've done: it defines the set of content. So in this example, 86c0246 is a short hash that basically says "this is a set of content," and we also have a slightly more meaningful version number associated with it, which I like to think of as a tag. That single identifier tells you all of the software that was in the release. So you as a user can share that information with me or with one of the Fedora CoreOS community members and say, "hey, I'm seeing this behavior when I boot this release, is this a bug?" And we can take that exact same version, deploy it, and try it out to see if we see the bug, which is very powerful. (You can poke at this on a booted machine with rpm-ostree itself; there's a short example at the end of this section.)

With rpm-ostree we also have read-only filesystem mounts, which prevent accidental OS corruption. If you rm -rf with a glob and accidentally try to delete things you shouldn't, it will prevent that, and it also stops non-sophisticated attacks from modifying the system. And then we have SELinux for other things: if you have an application in a container that gets compromised, hopefully it can't access anything else on the host and can only do things inside the container.

Okay, so what's in the OS? Basically the latest and greatest in Fedora, with hardware support being pretty much whatever the kernel supports. We have basic administration tools, Podman, and Moby. We do not have Python. Part of the goal here was to encourage people to use containers for their applications. Python is a really good escape hatch for spinning something up really fast, but it also means the user then depends on the host's version of Python and the libraries on the host to run that application. We want the flexibility to remove things from the host, or add them, without worrying about every user that exists for a particular library. It's just much more reliable for us to do updates if things are running in containers.

Okay, so Fedora CoreOS is already used by several other projects that build on top of it. One of them is OKD, the upstream of OpenShift (OCP). OKD is kind of interesting because the cluster controls the OS upgrades with the Machine Config Operator; the cluster knows a lot about the operating system, and it's tied to Fedora CoreOS. So when you spin up a new OpenShift node, the cluster can actually manage that: it selects the right Fedora CoreOS image for that version of OKD, and if you're using a cloud like GCP, it knows which image to use. If you're bringing your own, you spin up a version of Fedora CoreOS yourself, and if it happens to be the wrong version, the cluster will rebase it to the correct version for OKD. That's kind of interesting. We also have a community Kubernetes distribution called Typhoon that offers Fedora CoreOS as a base option; Dalton Hubble and that community have done a great job with it. And there's the OpenStack Magnum project, which used Atomic Host and now uses Fedora CoreOS as the base OS for delivering Kubernetes to OpenStack users.
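To make the versioning point concrete, this is roughly how you inspect and flip between deployments on a booted node (commands only, output omitted):

```
$ rpm-ostree status            # lists deployments, each with its version and checksum
$ sudo rpm-ostree rollback -r  # makes the previous deployment the default and reboots
```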
I want to talk briefly about Fedora CoreOS and its relationship to RHEL CoreOS. Basically: common tooling and components, but different scope and purpose. RHEL CoreOS is not intended to be used as a standalone OS, where Fedora CoreOS is. Container Linux could basically be used as a standalone OS or in a cluster, and Fedora CoreOS supports both very nicely. RHEL CoreOS is much more targeted at OpenShift; for example, some OpenShift components are delivered as part of the base OS, so it's much more focused in scope. The other differences: RHEL CoreOS is based on a RHEL package set, Fedora CoreOS on a Fedora package set. RHEL CoreOS is updated and configured by the cluster operators from OpenShift, similar to how I described OKD just a minute ago, while Fedora CoreOS obviously has the update flow I described earlier. Other than that, there's really not much difference between them: very similar content sets, aside from the OpenShift/Kubernetes pieces that are baked into the OS for RHEL CoreOS. They're both built using CoreOS Assembler, and we have very similar pipelines, although we're looking to make those even more similar in the future. But that's it for Fedora CoreOS and RHEL CoreOS.

Now I want to talk briefly about what we have coming down the pipe. We want more cloud platforms, like I mentioned earlier. IBM Cloud is the one we have the most support for, but we're not yet producing cloud images for it; we're adding the few bits that are missing, and then we want to start putting those on the website as well. And then platforms like Packet and other things missing from that list. Multi-arch support: we have a proof of concept right now for adding aarch64 to our pipeline, and hopefully that will prove out the model for getting ppc64le and s390x added to the platform as well.

We also want to add more human-friendly helper functions to the Fedora CoreOS Config Transpiler. One example: today, for some reason, it's really not ergonomic to change the kernel command line for one of your nodes. You kind of have to write a service that runs an rpm-ostree command; it's just not very human-friendly. What we'd like is to hook into our Fedora CoreOS config and have a user very easily say, "hey, I want this kernel argument to be different from the default," and have that plumbed all the way through so the running node comes up with it applied. (I'll show a rough sketch of today's workaround in a moment.) Little things like that are where we really want to improve the user experience.

Another one is the user experience around host extensions. I mentioned earlier that we want to guide people toward containers rather than package layering, but package layering is great for things that aren't easy to containerize, or for little tiny host utilities. The experience there is great when it works, but sometimes it doesn't, because of differences between our package set and what's in the latest Fedora stable. We're working with releng and infra on making that better. And then more and improved documentation: we had a hackfest yesterday that a lot of people joined, with translations, added documentation, fixing bugs in our documentation, things like that. And tighter integration with OKD and other upstream projects as well.

This is my call to get involved: use Fedora CoreOS, go grab it from the website, and open issues on the issue tracker. We also have our discussion forum; if you have questions and aren't sure how to do something, feel free to ask us there. And we have a mailing list and our IRC channel.
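Before the questions, here's that kernel-argument workaround I mentioned, as a rough sketch only: the unit name and the example argument are made up, and this shows the general shape rather than the exact unit we document.

```yaml
# FCC snippet: a oneshot unit appends a kernel argument on first boot, then
# reboots; the ConditionKernelCommandLine guard stops it from looping forever.
variant: fcos
version: 1.0.0
systemd:
  units:
    - name: set-kernel-args.service
      enabled: true
      contents: |
        [Unit]
        Description=Append an example kernel argument, then reboot
        ConditionKernelCommandLine=!mitigations=off

        [Service]
        Type=oneshot
        ExecStart=/usr/bin/rpm-ostree kargs --append=mitigations=off
        ExecStart=/usr/bin/systemctl reboot

        [Install]
        WantedBy=multi-user.target
```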
Cool, I'll open it up for questions. And if people want to stick around after questions, I know I'm running out of time, but I've got demo content too. So if enough people are around and want to hang out, I'll do a demo. I guess I'll just look at the chat to see if there are questions.

So James, you had a question about stats for the number of users. What we have right now, and I don't know if "metrics" is the right word, is a proof of concept of a service that gathers some non-user-specific information about a machine: the version it's currently on, and maybe the architecture or something like that, I can't remember. We don't have that service deployed yet, but we definitely would like to. One of the goals with that service is to get some insight into how many nodes are up to date and how many nodes are old. For example, if we're going through a rollout window, we'll actually be able to see if nodes failed to update. So rather than waiting for users to give us feedback that "hey, this failed, and in this way," we can see within the rollout window that, say, 10% of our systems are failing to upgrade for some reason, pause the rollout, and start to figure out what the problem with that update is. Because we don't want that level of failure at all. So we don't have anything right now, but we have plans for that.

Yep, Fedora CoreOS is definitely intended to be used for a home server, even a single container if you want. I actually have a demo I can give for that specific case here in just a minute, if they don't kick us out in 20 seconds; I don't know how this works, really. If they kick us out, grab me and I'll send you a link to another time I did the demo. Okay, Neil says that we have the room as long as we want.

And it looks like Jim is looking for example Ignition configs. So Jim, we're actually doing a workshop tomorrow, and the workshop content is on our documentation page. If you can't come to the workshop, you can still go to the tutorial section of our documentation and run through it yourself. It has a lot of example Ignition configs that you can use. For the most part, we define things in the Fedora CoreOS Config Transpiler config format; typically you'll start with the more human-friendly config format and then run a tool that basically spits out Ignition for you. So there are a lot of examples in that documentation, and I'd encourage you to check it out. And if you have any specific questions or something's not working for you, the discussion forum is a great place to engage the community.

So Will says the CoreOS updates don't actually go through MirrorManager. Well, I'll take that back; it's slightly more nuanced than that. I think we do have some sort of metalink-type URL, and MirrorManager may actually handle that, but no content is on the mirrors; our clients still talk to the main server. So we might be able to do something there as well.

Regarding Btrfs: we don't really have any plans there.
We've been mostly getting Fedora CoreOS off the ground, but even if we don't make Btrfs the default that is shipped, one of the things we've been doing with Ignition is adding support for root on complex root devices, so RAID, and doing things like encryption. As part of that, you essentially need to be able to do whatever you want to the root filesystem, so you can switch that out and change the filesystem if you want. It's not there yet, but you should be able to run on Btrfs in the future if you want to; it just might not be the default. And Jonathan has a link to an example of doing that, but some of the code enablement, I think, is still in the works.

Okay, does anybody want to see a demo? Yes, demo. Okay, let's try this out. All right, I think this is the one we want. Okay, can everybody see, is the text big enough? I think I tested this earlier and people were saying it was okay. Okay, cool.

All right, so I mentioned briefly in the presentation that earlier this year I lost access to a machine. I think the office lost power briefly, and the little box sitting on my desk did not have the BIOS configured to automatically turn back on when it gets power. So it was powered down and I had no way to get to it. That box is actually my WeeChat (IRC) client; I just SSH in and attach to tmux. I had previously put all the configuration for that client in an Ignition config and a container, so I was able to spin it up on my desktop about 10 minutes after I figured out I didn't have access to my other box. So no downtime; well, short downtime.

Anyway, I've recreated an example of this to show people one thing you can do with Fedora CoreOS, which is kind of a focused use case. What I have here is Dusty Mabe's WeeChat config, running in tmux, via systemd, in a container. That's kind of convoluted, but it's an interesting way to do things. What I have is instructions for creating an Ignition config based on a Fedora CoreOS config, and then instructions for how to test it using a virtual machine.

The Fedora CoreOS config basically adds an authorized SSH key for the core user. One thing it does is run a service called setsebool.service, which enables the container_manage_cgroup SELinux boolean; this is needed because I run systemd inside my container. It has another service, weechat.service, which essentially runs the container that starts systemd inside the container, which then runs tmux. And if you'll notice, this particular service runs after another service called build-weechat.service. I didn't want to build this image and host it in a container registry; it's not that complicated and it's not going to get pulled that much. So I just build it locally on the node: build-weechat.service pulls Fedora 32 and runs a build against a set of files that are local to the node, which I populate a little further down. So here, let's see: the build context, /home/core/weechat-build, is defined inside the container directory within this repository. If I look at the container directory, there's a Dockerfile. In here, I install WeeChat along with tmux and systemd.
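If you're curious, the Dockerfile amounts to something like this. This is a hypothetical reconstruction, not my exact file; the script and unit file names are placeholders:

```dockerfile
FROM registry.fedoraproject.org/fedora:32
RUN dnf -y install weechat tmux systemd && dnf clean all
# placeholder names standing in for the real start script and unit file
COPY start-weechat.sh /usr/local/bin/start-weechat.sh
COPY weechat.service /etc/systemd/system/weechat.service
RUN systemctl enable weechat.service
# run systemd as PID 1; this is why the host needs the
# container_manage_cgroup SELinux boolean turned on
CMD [ "/sbin/init" ]
```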
And I have a start script that I hand to WeeChat that tells it to configure everything. Usually with WeeChat, you start it up, you configure a bunch of stuff, and that configuration is saved in your home directory or something like that. But this is a container, and I want it to automatically come back up wherever I am if I blow it away and start it new. So I crafted a short start script; this one is an example, obviously mine is a lot more complicated. It adds Freenode as a server, sets my nickname to dustydemo, joins #fedora-meeting-1 (one of the IRC channels), and then connects. That's pretty much all it does. Oh, and there's a weechat.service that starts inside the container. It's started via systemd, and all it does is run tmux, which calls WeeChat with that start script.

So let's see it go. This is the README, and I'm going to run through it. If we take that Fedora CoreOS config and pipe it through FCCT, we get this Ignition config, and you can see Ignition is much less easy to read than your FCC configs. For one, it encodes some things, and it's just not as fun. So we've got our WeeChat config, and now I can essentially start a virtual machine if I just want to test it before I actually put it on that bare-metal node. So let me spin it up. For some reason it always takes longer when I'm doing a demo than it does otherwise.

But if you'll notice, there are some things we try to do with Fedora CoreOS to help the user along with information. One thing we do is output information on the console, like your IP address. This might be useful in a cloud environment where, for some reason, you might not know what the IP is. Also the SSH host keys, if you want to validate that you're talking to the right machine. We also output whether Ignition was actually able to fetch a config, and whether it had any SSH keys that were added. If it happened to not have any SSH keys added, you might have trouble getting into the node, right? So that's another hint as to whether the deployment or provisioning went as you thought it would.

So let's go and get into this node. Okay, I'm in the node. One thing you'll notice is that we are on Fedora CoreOS, the latest stable release right now, and there's information about our issue tracker and our discussion board. Let's see: right now the build-weechat service is still running, so that container build is still going. If we want to look at that, we should be able to see output coming to the screen from the container build. So it's progressing through its build, but while we wait on it, we can look at some other files that were laid down on the system. This file, /usr/local/bin/weechat, is one that I wrote so that I can log into this system, just run weechat, and get into the environment I expect, which is to get into the WeeChat container and just tmux attach to the existing session that's already running. (There's a sketch of it just after this demo.)

So let's get into it now. And we are in the container. Looks like it's still connecting... yep. So now I can say something in the channel, and that's pretty much it. So that is an example of having all of your configuration baked into your node, into your Ignition config, and just having your system come up and do everything you want it to.
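For anyone recreating this pattern, that /usr/local/bin/weechat wrapper boils down to a one-liner. A sketch, assuming the container is named weechat and is run as root via systemd:

```bash
#!/bin/bash
# exec into the running container and attach to the tmux session inside it
exec sudo podman exec -it weechat tmux attach
```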
The only other demo I can really show right now is Fedora CoreOS and its relationship with OKD. So I brought up a cluster earlier; it takes a little while to bring one up, so I wasn't going to do it during the demo. You can see I have three control-plane nodes and two worker nodes, and I'm going to use a tool called k9s, which is kind of a text user interface for viewing Kubernetes clusters. So this is an OKD cluster. I've got a bunch of cluster operators; looks like one of them is degraded for some reason, but I don't think that one's important. One thing that's nice about OKD is that I can look at the nodes that exist in the cluster and pull up information about them. And OKD itself knows about the OS image that's in the cluster, so it knows it's running Fedora CoreOS, right? It also knows how to bring up new nodes in the cluster. So if I go look at the machine sets for this cluster, which I only brought up with two worker nodes, I can do something like "spin up a new worker node for me in this zone." So I'll go in and edit that, search for replicas, and bump it up by one. We can see the count changed, and we can also see there's a new node being spun up right now in us-east1-d on GCP, so a different availability zone. Eventually that node will join the cluster and show up right here in the list of nodes. So that's pretty much it for the demo.

Do we have any other questions that I can answer? So we have a question about Ignition: does it mean that every configuration change means a reboot of the server? I think it depends; you can evaluate how valuable the reprovision would be. For example, if you make a change to your application, then no, not unless your application's configuration is delivered via Ignition and not updated any other way. In some cases you might reach out to source control and grab a new version of your application or a new config setting, and you obviously don't need to rerun Ignition for that. If you're changing networking or something like that, you could want to rerun Ignition. Because what if you applied the change to the node using NetworkManager and nmcli, and at the same time tried to capture that configuration via Ignition, but didn't test it? Then three months later, when that node has trouble and you try to spin up a new one, whatever configuration you applied and didn't test doesn't work, and you have to figure everything out again. So it really depends on you and how valuable you think that is. Obviously you can do things to systems in place, but if there's low cost to spinning up a brand-new system, and cloud environments make that really easy; bare metal, not as easy, right, though maybe you have PXE and that makes things a lot easier, I would recommend that you reprovision.

Let's see. Okay: how do I deal with auto-updating the containers running in this demo? Like the WeeChat one that I did, I'm going to assume. So for the WeeChat one, the demo I did was a little simplified, but in the one I actually have running, I have it periodically try to pull a new Fedora 32 container and then also build from a Git repository.
So, for example, once a week it'll try to do a new build. If there's nothing new to build, it won't; but if there is something new, it will build it, and then the next time my Fedora CoreOS box gets updated, which happens roughly every two weeks, I'll get the new WeeChat container that'll be used to start it.

Okay, cool. I'm going to stop presenting, but I'll hang out in the chat for a little bit. Thanks to everybody for coming, and I appreciate you joining the talk. We have a workshop tomorrow, so if you're interested in getting hands-on with Fedora CoreOS, feel free to come to that. If not, it's on the website, on the docs page, so you can run through it whenever you like. Thanks, everybody.