Welcome back to another OpenShift Commons briefing. As we like to do on Mondays, we are going to have an Ask Me Anything session with one of the most wonderful upstream projects, Fedora CoreOS. We have a number of the members of the Fedora CoreOS team as well as a few of the OKD working group folks who are also leveraging Fedora CoreOS in a big way. Today we have Dusty Mabe, who is going to kick us off with an intro to Fedora CoreOS and a bit of a demo, and after that we will do live Q&A. We are streaming on Twitch, Facebook, and YouTube, so we will aggregate all the questions and feed them to our lovely contestants today. So with that, Dusty, take it away.

Hey everybody, thanks for the intro, Diane. My name is Dusty Mabe, I work for Red Hat, and I'm here to talk about Fedora CoreOS today. Briefly, I'm going to talk about what Fedora CoreOS is, some of its features, how it relates to RHEL CoreOS and how it relates to OKD, and then I'm going to give a short demo and hopefully dig into a lot of questions from people.

Okay, so here I had planned to start the demo. The demo is actually an install of OKD on top of Fedora CoreOS, but I realized that the install would take longer than my talk, so I started it early today and we'll get to it in just a little bit.

Okay, Fedora CoreOS, what is it? It's an emerging Fedora edition. It came from the merging of two communities: CoreOS Inc.'s Container Linux community and Project Atomic's Atomic Host community. It incorporates what we've been referring to as the Container Linux philosophy, the provisioning stack and the cloud native expertise of Container Linux, and it also incorporates Atomic Host's Fedora foundation, its update stack, and its enhanced security with SELinux.

First I want to talk a little bit more about the philosophy behind Container Linux, because it really has driven what we've done with Fedora CoreOS. Container Linux focused primarily on automatic updates, which means that by default the administrator has no interaction with the system in order to keep it up to date. The goal is that staying up to date in a default, automated manner means security fixes get applied automatically and your systems stay more secure; you're more secure by default.

Container Linux also had all nodes start from approximately the same starting point, and it used Ignition to achieve this. You would use Ignition to provision a node wherever it started, whether on bare metal or in the cloud, and they all essentially start from the same starting point.

Container Linux also focused on immutable infrastructure. For example, if you need a change, you're encouraged to update your configuration and reprovision. This more or less guarantees that your changes make it back into your configuration, and it's tested, because guess what, you tested that provisioning when you brought up a new node. Also, user software runs in containers, which means that applications don't depend on the host and host updates are more reliable. When you have automatic updates, you need them to be reliable.

So now we're going to talk about Fedora CoreOS, and you're going to hear a lot about the philosophy I just mentioned. The first feature is automatic updates. Fedora CoreOS features automatic updates by default, and if you have automatic updates, you need them to be reliable.
How do we achieve this goal? We achieve it by having extensive tests in automated CI pipelines. We also have several update streams, which I'll touch on in a minute, that allow users to preview what's coming. Users run the various streams so that they know when changes are coming that they need to either address or report issues for. We also have managed upgrade rollouts: upgrades happen over a rollout window of several days. If people hit issues early in the rollout window, they report them, and we can address the issue and pause the rollout if we need to.

And then of course, things will go wrong at some point. For when things go wrong, we have rpm-ostree rollback, which can be used to go back to the previous version. In the future we plan to have some sort of automated rollback functionality, where a user can specify: hey, if this particular health check doesn't pass, I want you to automatically go back to the version from before this update. That's a future feature, not something we have just yet.

Okay, I mentioned multiple update streams. Right now we offer three update streams that have automatic updates. One is next. That stream is focused more on experimental features or Fedora major release rebases. For example, right now we're on cgroups v1 in Fedora CoreOS; when we switch to cgroups v2, we'll land that in the next stream first. It will have some soak time there, hopefully people report any issues, we get them fixed, and then eventually it goes into testing and stable. Also, when we switch from Fedora 32 to Fedora 33, that will happen in next first. So it's an opportunity for us to put possibly breaking changes in there and get them tested.

The testing stream is basically a preview of what's coming to stable. It's a point-in-time snapshot of Fedora stable RPM content, and that content goes directly into stable in two weeks if we don't find issues. So the goals of these update streams are to publish new releases every two weeks, and to find issues and get them fixed before they hit the stable stream.

Fedora CoreOS release promotion, I touched on this briefly. We have a version number that incorporates the Fedora major version; the date at which content was snapshotted from the Fedora stable repos; a number that indicates which release stream we're on (we have three release streams, next, testing, and stable, and that number corresponds to a stream); and then a revision. If we do an ad hoc fix to a content set, we bump that revision number and re-release.

So for the yum repositories, if you picture the repository moving in time, we snapshot it on, in this case, the 23rd of March. That becomes the testing stream, we do a testing release, and then hopefully we don't find any issues. Two weeks later, that gets promoted to stable, and people on the stable stream get that content.
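To make the version scheme and the rollback story concrete, here is a rough sketch of what this looks like from a shell on a Fedora CoreOS node (the example version is the June 29th stable release mentioned later in the demo; exact output and stream ref names follow the Fedora CoreOS docs and may vary by release):

    # Version scheme: <Fedora major>.<content snapshot date>.<stream>.<revision>,
    # where the third field identifies the stream: 1 = next, 2 = testing, 3 = stable.
    # Example: 32.20200629.3.0 is a stable release built from Fedora 32 content
    # snapshotted on 2020-06-29, revision 0.

    # Show the booted deployment and the previous one kept around for rollback:
    rpm-ostree status

    # If an automatic update misbehaves, boot back into the previous deployment:
    sudo rpm-ostree rollback --reboot

    # Nodes can also be rebased between streams, e.g. to help test what's coming:
    sudo rpm-ostree rebase fedora/x86_64/coreos/next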
Okay, the next feature is automated provisioning. Fedora CoreOS uses Ignition to automate provisioning, just like Container Linux did. Any logic for the machine's lifetime is encoded in the config, so it's very easy to automatically reprovision nodes.

An example of this: earlier this year, I had a node sitting on my desk in the office that runs Fedora CoreOS; it happens to be an IRC client and server setup that I have. I lost power to it at some point and it didn't turn itself back on. I wasn't able to get to it, and I also wasn't able to get into the office because of COVID. So I took the configs that I used to bring up that server, spun up a VM at home, and I was back up and running in 10 minutes. Because everything is baked into these Ignition configs, it's really easy to get a new node up and running with the same profile as the one you had.

And because we're using Ignition, we have the same starting point whether we're on bare metal or in the cloud; we use approximately the same image everywhere. Because we can use Ignition everywhere, as opposed to Kickstart for bare metal and cloud-init for cloud, it unifies the whole experience.

Okay, the details of Ignition. Ignition is a declarative JSON document. It's machine friendly, so not very user friendly unless you like to read JSON. It runs exactly once, during the initramfs stage on boot. It can write files, systemd units, users, and groups, partition disks, etc. The key point here is: if provisioning fails, the boot fails, so you don't end up with a half-provisioned system. Sometimes with cloud-init you could have part of it succeed and part of it not succeed, and then you've got an application up that's half running. With Ignition, it's good because you know the machine didn't provision correctly, but it can also be bad because it's hard to debug in the initramfs. So there's good and bad there, but we like it overall.

Since Ignition is not very human friendly, we have a tool called the Fedora CoreOS Config Transpiler (FCCT) that is a lot more human friendly. Its configs are written in YAML, and it has what we call sugar on top that will automatically write out some of the more tedious parts of Ignition for you.
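As a small illustration of that workflow, a minimal Fedora CoreOS Config and its transpilation might look like the following (the file name and SSH key are placeholders; the containerized FCCT invocation is one documented way to run the transpiler):

    # example.fcc: human-friendly YAML with FCCT "sugar" on top of Ignition.
    cat > example.fcc <<'EOF'
    variant: fcos
    version: 1.0.0
    passwd:
      users:
        - name: core
          ssh_authorized_keys:
            - ssh-rsa AAAA... user@example.com
    EOF

    # Transpile the YAML into the machine-readable Ignition JSON:
    podman run -i --rm quay.io/coreos/fcct:release --pretty --strict \
      < example.fcc > example.ign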
The next feature I'd like to talk about is being cloud native and container focused. Software runs in containers, and users have two options: Podman or Moby Engine, which is also known as Docker. If you're coming from Container Linux, you still have Docker if you want to use it, or if you want to try out Podman, that's there for you. It's ready for cluster deployments: you can spin up 100 nodes and have them join a cluster, because Ignition configs are used to automate the cluster join; it takes care of everything for you. And then you can spin down nodes and spin them up again as you need. So that's the cloud native piece of it. It's also offered on a plethora of clouds and virtualization platforms; we're trying to be on every major cloud platform, and we're adding new ones all the time.

OS versioning and security. This is one I'll dig into a little bit. Fedora CoreOS uses rpm-ostree technology, which I like to describe as Git for your operating system. So we have, for example, a particular version of Fedora CoreOS, which is kind of like a tag. You have a version and also a Git commit, or sorry, an OSTree commit hash. This single identifier tells you all the software that's in a particular release: all the RPM content and all the config default settings that are delivered. That is important when you are trying to report issues or share information. As a user, you can report an issue and say: hey, I'm on this specific commit of Fedora CoreOS, I run these steps, and I see this problem. And that's really powerful.

Because it uses rpm-ostree, it also has read-only file system mounts, which prevent accidental OS corruption if you happen to rm -rf the wrong directory, and also prevent some novice attacks from modifying the system. We also have SELinux enforcing by default, which prevents compromised apps from gaining further access; that's not something Container Linux had.

Okay, so what's in the OS? We have the latest Fedora-based components built from RPMs. We have hardware support, so hopefully anything that Fedora supports, we can support with Fedora CoreOS. We have basic administration tools, we have container engines, and not much else. We don't have Python. The goal here is that we encourage users to run their applications in containers and not directly on the host. That makes our updates more reliable: we don't want to update the host and break your application.

Coming soon: more cloud platforms; multi-arch support; more human-friendly helper functions in the Fedora CoreOS Config Transpiler; more reliable package layering for cases that might need it; more (and improved) documentation; and tighter integration with OKD.

When you talk about Fedora CoreOS and RHEL CoreOS, you want to know the difference. At a high level, the biggest difference is obviously being based on a RHEL package set versus a Fedora package set. But probably the larger thing is that RHEL CoreOS is only designed to be used directly with OpenShift itself; it's not meant to be used standalone. The updates for RHEL CoreOS are delivered and controlled by the cluster itself, not independently of it. With Fedora CoreOS, you can use it either standalone or as part of a cluster, for example with OKD. In the standalone case, you get updates directly from Fedora CoreOS's release servers. In the case of OKD, you get updates similar to how RHEL CoreOS gets them: from containers in a registry. So you can use Fedora CoreOS standalone, with other cluster orchestration technologies, or with OKD.

Specifically, OKD is installable with OKD's installer, the same openshift-install that OCP uses. The cluster controls the OS upgrades, as I mentioned a minute ago. Upgrades are provided as machine-os-content containers. And the cluster can manage and bring up new machines automatically, which I think is the coolest point, right at the end.

OK, so let me go into the demo. Diane, do we have any questions before I do? Do your demo first and we'll queue up the questions afterwards. OK, perfect. Let me switch my screen share, see if this is the right one. You can see my terminal window? OK, perfect.

So I was going to start this at the beginning of the talk, but I realized that it takes a little while, 35 minutes, to bring up a cluster, so I started it a while back. What I've done is bring up an OKD cluster running on Fedora CoreOS, and I'll hop over to another window and show that running. I'm using a terminal user interface called K9s, which is pretty awesome; I like it a lot. OK, so this is my cluster. In this case I have one, two, three control plane nodes and then two worker nodes. One thing that's really nice is I can dig into each of our nodes, and I can actually see Fedora CoreOS.
And the cluster knows what operating system is running underneath, which I think is pretty darn cool. This is the Fedora CoreOS stable release that we just shipped on Friday. It's from a content set from June 29th: that's when we froze the content set, we did a testing release, and then we did the subsequent stable release two weeks later. So this is the node, and I can dig into each node and see what's running on it.

I think one of the coolest things you can do with OKD these days, and it represents the tight integration with the operating system, is machine sets. Looking at the machine sets for this particular cluster, I only brought up two worker nodes, so there's a third machine set here which has no nodes currently in it. If I edit that machine set and change the replicas to one, it will start to bring up a node. And if I want, I can edit another one and bring up even more nodes; let's make this one two. Now I can look at the machines in the cluster, and we can see we have two that are provisioning right now.

Regarding OKD itself, as those machines provision, if you want to look at the health of the cluster, I look at the cluster operators; these give you a sense of the health of the cluster. In this case, it looks like we have one cluster operator that is in a degraded state, which is not good; I'll need to look into that one. But in general, as long as the Degraded column says False for all of these, you're good. Let's see... those haven't come up as nodes yet. But yeah, this is the demo. I can run oc debug node to get into any particular one of these machines if I want to. This is just a representation of the integration we have between Fedora CoreOS and OKD that makes things like this possible. I can scale out a cluster depending on my workload, and because Ignition is used to serve these new worker nodes essentially their role in the cluster, they come up automatically and join the cluster. And that's the end of the story. So I think that's all for me, unless anybody wants me to poke around here and show anything else off.

Jeff was asking: are you able to show off some of the features of your Ignition config? Like a specific Ignition config? Perhaps, let's see what I have. The one you used to bootstrap the cluster. Okay, let's see if there's anything in here. I don't know if I have those specific Ignition configs, but... And Christian is saying that those are typically created by the installer automatically. I'm just going to unmute a few folks here so other people can chime in, like Christian.

Yep, so here I just highlighted one Ignition config in the terminal, which is more or less the one that's used to bring up the bootstrap node, which is essentially what kicks off the entire install of the cluster. This Ignition config has some user information where it sets SSH keys. It writes in different registry configurations. It pretty much defines everything that needs to be set for this cluster in order for it to do everything it needs to do. We've got CA information, we've got services that start; it's just a whole host of different things.
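For readers following along at home, the operations in the demo map roughly onto commands like these (the machine set and node names are invented; substitute your own):

    # List nodes; the OS-IMAGE column shows the Fedora CoreOS version each runs:
    oc get nodes -o wide

    # Machine sets live in the openshift-machine-api namespace; scaling one up
    # makes the cluster provision and join a new worker automatically:
    oc get machinesets -n openshift-machine-api
    oc scale machineset demo-worker-c --replicas=1 -n openshift-machine-api

    # Gauge cluster health from the cluster operators
    # (healthy operators report Available=True and Degraded=False):
    oc get clusteroperators

    # Open a debug shell on a particular node:
    oc debug node/ip-10-0-12-34.ec2.internal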
Does anybody know of a particular thing that might be good to show off? Well, one thing I would add to this: part of what came from CoreOS Inc. was Container Linux, but also Tectonic. A lot of OpenShift 4 and OKD now is based on an evolution of the Tectonic design. For OpenShift 4, the Ignition configs are actually cluster objects themselves; Dusty, if you want, do oc get machineconfig. So as an addition to the Container Linux model, the integrated machine config operator (MCO) supports day-two changes. As was mentioned, the Ignition config is provided at install time, and OpenShift and OKD work the same way as any other workload on top of Fedora CoreOS. But in addition to that, the MCO takes over for day two: you can create a machine config object with oc, and that change will roll out to the cluster.

Someone is asking: what happens when you create a machine config that breaks the host, for example a mistake in a systemd unit? Will it automatically roll back? Yeah, I'll just echo what Vadim said: basically, the MCO today only rolls out a change to one node at a time. You can adjust that, and you may want to in larger clusters. So if you create a systemd unit file that breaks boot-up, then only one node will go down. He's asking again: so the node will stay disabled for scheduling? Yes. It depends where you break it, but yes.
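A minimal sketch of that day-two flow, assuming an OKD cluster that accepts Ignition spec 3 machine configs (the unit here is a harmless example, and "worker" targets the default worker pool):

    # Apply a MachineConfig that lays down a small systemd unit on all workers;
    # the MCO renders it to Ignition and rolls it out one node at a time.
    oc apply -f - <<'EOF'
    apiVersion: machineconfiguration.openshift.io/v1
    kind: MachineConfig
    metadata:
      name: 99-worker-hello
      labels:
        machineconfiguration.openshift.io/role: worker
    spec:
      config:
        ignition:
          version: 3.1.0
        systemd:
          units:
            - name: hello.service
              enabled: true
              contents: |
                [Unit]
                Description=Log a greeting once at boot
                [Service]
                Type=oneshot
                ExecStart=/usr/bin/echo hello from the MCO
                [Install]
                WantedBy=multi-user.target
    EOF

    # Watch the change roll through the pool:
    oc get machineconfigpool worker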
From YouTube, someone's asking: what about support for hardware RAID controllers for bare metal servers, like HP Gen10 and Supermicro servers? Also VLAN and bond support for network cards? Yeah, we should have support for complex networking configurations; anything that NetworkManager supports should be just fine. As far as hardware RAID goes... did they say hardware RAID? Hardware RAID, yep. Assuming you don't need a special driver to bring it up, hardware RAID just shows a disk to the OS, right? Whatever is configured in the hardware RAID usually shows up as a single device to the OS, so that should work. Is this specifically a question about modules and drivers? Vivian hasn't said yet, so we'll see if she clarifies. Okay, gotcha. Software RAID is a slightly different story: we're working on complex root device stuff, and we're not quite there yet, at least for the root partition. We're shipping a regular Fedora kernel in Fedora CoreOS and a regular RHEL kernel in RHEL CoreOS, so in terms of strictly drivers, anything that works for those distros should work here. Good to know, thanks.

Someone asked a question, and I think it might have been answered in the chat: would it be possible to report back to Red Hat Satellite or Foreman for overall inventory and security stance? He said it came up in an internal audit: the RHEL CoreOS nodes were not part of the Satellite host inventory. Yeah, I can take this one. For all of OpenShift 4, a huge design point was that OpenShift, and OKD, should be self-driving by default. It shouldn't require you to run external infrastructure to, say, spin up a cluster on AWS or GCP or a public cloud. But if you do want to integrate with an external system, I think the short answer is: you can always run a daemon set, or a systemd unit that runs on each node, that does a web request to Foreman and asserts the machine health. That's probably the shortest answer.

But it's actually interesting, because with the machine API that Dusty demoed, the cluster itself can spin up new machines using oc; your user experience for controlling the inventory of things is oc.

One other question: what is the proper way of accessing Fedora CoreOS when the network is not working or the SSH private key gets lost? Appending "single" to the kernel parameters didn't work for him. So I guess it depends on how long ago this person tried that, because we did some work maybe a month ago to make sure that appending "single" to the kernel command line drops you appropriately into an admin shell, where you should be able to set a password or change an SSH key. If they didn't set a password, maybe there was an SELinux problem when they got into that shell. But I would say open a bug and we can try to work through it. We do have a doc page that says what to do when you're trying to recover your system, so hopefully those steps work now; if they don't, please do open a bug. Could you pull up and share the docs page for that? That might be a good segue here. Sure.

And then there's a bigger question; there's always a bigger, less technical question, but I'll throw it out there now because one of our guests asked it earlier. And here we go, perfect, thank you. It's about the relationship between RHEL, CentOS, and Fedora, and you knew we would get one of these questions. He knows that Fedora is the upstream with the latest features, whereas RHEL is the stable enterprise version and CentOS is the stable free version, which leads many enterprises to appreciate CentOS for stability. How does Fedora CoreOS fit into this pattern? You want to take that, Dusty? And we can have a conversation about it, and maybe also incorporate a little bit about CentOS Stream and what that might be; they might not be familiar with that.

Okay. I'm trying to think of the best way to answer this question; obviously there are a lot of different ways to answer it. In general, we have Fedora CoreOS, and we try to do everything upstream there. RHEL CoreOS actually follows very closely the development going on in Fedora CoreOS, with a slight package set tweak that includes things specific to OpenShift, like CRI-O as the container runtime. So it's a little bit different from the normal Fedora-to-RHEL flow: the package set mostly follows the traditional Fedora-to-RHEL path, except for some of the packages we focus on, like Ignition, which get updated a little faster.

As far as CentOS goes, right now we don't have any plans to make a CentOS CoreOS, mainly because it's taking all of our effort just to work on Fedora CoreOS. But I do know some people have expressed interest in that, especially in the OKD working group: essentially a CentOS version that OKD can run on top of. But I don't have any plans for that right now. Was the question specifically about plans, or about how it fits in? Our guest is on a mobile device, so he might not be able to follow up on that. But I know we've had a lot of conversations about this in the OKD working group, and we have Christian and Vadim with us from that as well. A lot of it has to do with resourcing. And there is, in the CentOS world, a PaaS working group where that conversation has been taking place.
So Christian, if you want to chime in a little bit... I can see you now. Yeah, I can hear you now. Can you hear me too? Okay, perfect. So the reason we chose Fedora as a starting point for OKD is because we get that feedback loop; we actually complete a feedback loop by having changes trickle down naturally, because Fedora is the upstream, so everything below it will get those fixes eventually. CentOS, on the other hand, is the most downstream thing, and anything that gets fixed in CentOS doesn't land anywhere else. CentOS is essentially a repackaging of RHEL, so even in standard CentOS, if anything is fixed in CentOS first, it doesn't naturally land back in RHEL. And that was kind of broken in the OKD 3.x releases, because those were also just a repackaging of OpenShift. We really wanted to make sure we get a feedback cycle working where any bugs we hit upstream will also be fixed downstream. So from a Red Hat perspective it may make sense to also create a CentOS Stream-based CoreOS that would run OKD, but I'm not sure we will invest resources, at least not ourselves, into creating a CentOS CoreOS for OKD, which would again be downstream, and none of the other products would benefit from it.

Yeah, I think... go ahead, Colin. Just one aspect of this I think is really interesting: containers fundamentally change how we think of the operating system. In this whole conversation we've been talking about the host, and for some people that's important, but you can run the RHEL UBI container on Fedora CoreOS, right? And not only can you, you should; that's a totally normal and expected thing to do. So as an application developer, you still get the benefit of RHEL and CentOS stability for your application stack, while newer hardware enablement and all that stuff comes in via Fedora CoreOS and only impacts the system administration side. So I think that split is really important.
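To Colin's point, mixing the two is just an ordinary container pull; for example, on a Fedora CoreOS host (using the publicly available UBI 8 image):

    # Run a RHEL UBI userland as a container on a Fedora CoreOS host:
    podman run -it --rm registry.access.redhat.com/ubi8/ubi bash

    # ...then, inside the container, /etc/os-release reports RHEL 8,
    # while the host underneath stays Fedora CoreOS:
    cat /etc/os-release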
I think one of the very early slides Dusty put up was about where the Container Linux and Project Atomic projects came to live, which was the Fedora world. So collaborating with them made sense for the OKD working group, in terms of stability and in resourcing the effort. From the working group's perspective, it's been a really nice relationship-building period.

Yeah, another thing on the resourcing front: with Atomic Host, it was a lot more passive relationship with Fedora, where we would build the same packages that were in Fedora, and we essentially only had one stream, compared to what we're doing with Fedora CoreOS. So it was a lot more passive and required a lot fewer resources. Now we have three streams. We need to keep up with features that are in next versus the other two; we need to triage bugs filed against one versus the others; and when we do automatic update rollouts, we need to watch whether an update broke anything and whether it needs to be rolled back. That's why it was a lot easier to do a CentOS Atomic Host alongside Fedora Atomic Host: it was much less resource intensive. But because we wanted to focus on automatic updates, we needed to put this other machinery in place. And that's why, when people talk about doing a CentOS CoreOS, it's like: oh man, that's a lot of work. But we want to enable people to do it if they want to step up, and we would love for it to happen.

It's just a lot of work. Yeah, it's a lot of work, but I think the pattern has been established and a lot of the difficult hoops have been figured out; it truly is a matter of resourcing. If the community wants to step up, we're happy to help drive that as well. And as Colin so aptly put it, containers changed everything.

Let's see, I'm going to go back up. There was another question, from Paris Lucas on YouTube: what is the status of the FCOS-based OpenShift CodeReady Containers? Boy, we've talked about that one a lot in the working group. Christian, you want to give it a try? So I think Vadim may know more about this, but I think we've been able to build it, and there is a preview or proof of concept that it works. We haven't been able to actually change the project to build on top of OKD by default, as opposed to OCP. There's still some discussion about whether we want that: whether we want to keep it RHCOS- and OCP-based, switch it over to OKD, or maybe do both, which is, again, work we haven't tackled yet. So technically it's possible, and we already have some guides for similar things, like one-node installations of OKD. So there are definitely discussions going on.

Yeah, so this is an AMA around Fedora CoreOS, and we had one the week before with the OKD working group. But if you want to join the OKD working group, there's a whole thread about the single-node cluster, with a whole bunch of people who've done some homelab stuff and some really great stuff, but it has not filtered down into an official CodeReady Containers. Vadim, I think you should be unmuted now if you have anything to add. Right, from the technical side, we built an OKD-based CRC bundle, and now it's the CRC team deciding which way they want to go. And it's a really hard choice, because OKD's features are not that important in CodeReady Containers; the difference is really just the base, Fedora CoreOS or RHEL CoreOS, and it's up to them to decide what's best for their product. So there's nothing that keeps us from creating something similar on the community side, other than, I'm always going to say the word, resources, and maintaining it once you build it. We've all been there before, creating VMs for the early Origin releases that we had to maintain with every release as a community thing. It was fun, and keeping up to date is a thing.

Also, we've been having a conversation in the OKD working group about ARM64 architectures, and Ryan's asking: any news about Fedora IoT? The new provisioning site looks really nice and interesting, but maybe you could talk a little bit about the roadmap or the vision for Fedora IoT and Fedora CoreOS. Everybody just nods. Yeah, as I mentioned earlier in the slides, we do have a plan to have an aarch64 stream, or set of streams, for that platform; we don't have it just yet. I don't think we have any official plans to merge Fedora CoreOS and Fedora IoT together. There are some slightly different goals there, one being that Fedora IoT obviously plans to run on some 32-bit ARM hardware, and that was not really a goal we had from the outset, because we're focused a little more on servers; aarch64 just seemed like a good line in the sand to draw.
But yeah, I don't think we have any official plans for merging the efforts, though that is something to talk about and explore; we're always interested in how to make things better. Somebody else might be able to speak to this better than me. That would probably be Peter Robinson, so we'll have an AMA on Fedora IoT some time soon and make him come and talk. There's also a little confusion out there in the marketplace, or in the community space, around that, but as others have said, ARM is becoming ubiquitous out in IoT land, and it comes up often.

From YouTube, back to that hardware question from Vivian: she means not only installing onto a preconfigured RAID volume, but also configuring RAID levels from the Ignition configuration. At the moment it's possible to do that, and I'm going to say this acronym wrong, for mdadm only, not for hardware RAID. Yeah, that might be something where we need a specific issue to dig into, but basically they're asking for support in Ignition for configuring hardware RAID controllers. That seems slightly out of scope; there's typically a vendor tool that is shipped to talk to them, or maybe a kernel interface that already exists. But a specific issue from Vivian would be good, and we can hash out the details there. I can link to the issue tracker for Fedora CoreOS and we can take it from there. If you could share your screen and go there, that would be great, because that person is not in the BlueJeans. Okay, perfect.

Well, maybe we'll see where the issue tracker is. Let's go back to the presentation; I have a slide for getting involved. If you want to grab Fedora CoreOS or view our releases, we have the top-level getfedora.org/coreos. For any issues or design-discussion-related stuff, we usually open tickets in our Fedora CoreOS tracker at github.com/coreos/fedora-coreos-tracker. You can open an issue there, and that's where we can dig into the details of the hardware RAID controller support. We also have a forum if you have more of a user question, like how do you set a password, things like that; the forum's a good place for that. We have a mailing list, and we also have the #fedora-coreos channel on Freenode. Should I go to the issue tracker, Diane, or is this good enough? Just go for a quick ride over there to Fedora land.

All right. So this is our issue tracker, where we try to triage things, and it's got everything from actual bugs to let's-discuss topics. Here's something about how we want to integrate with... We're still just seeing your slides, so maybe you have... Oh, okay, hold on one second; split-screen personality. Does this look better? That's it, there you go. Okay, sorry, I popped open a new window and clicked on a link that was not shared. So this is the issue tracker, and we have everything in here from actual bugs to discussion topics. This particular one is where we're chatting about how we want the OKD release schedule and the Fedora CoreOS release schedule to relate to one another. Should they be related? Should they not? How do we want to go about it? So, a little bit of everything in there.
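For reference while the hardware RAID discussion moves to the tracker, software (mdadm) RAID is already expressible in a Fedora CoreOS Config. A rough sketch, assuming FCC spec 1.1.0 and illustrative device paths:

    # raid.fcc: mirror two disks with mdraid and mount the array at /var/lib/data.
    # (Hardware RAID controllers are configured outside Ignition, as discussed.)
    cat > raid.fcc <<'EOF'
    variant: fcos
    version: 1.1.0
    storage:
      raid:
        - name: data
          level: raid1
          devices:
            - /dev/sdb
            - /dev/sdc
      filesystems:
        - device: /dev/md/data
          path: /var/lib/data
          format: xfs
          with_mount_unit: true
    EOF
    podman run -i --rm quay.io/coreos/fcct:release --pretty --strict \
      < raid.fcc > raid.ign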
And one of the guests is asking, and I think you had a slide on it already: is Fedora CoreOS already usable in Azure, AWS, GCP, and elsewhere? There was a slide; the answer is yes. If you go to our download page, there are two clouds to which we automatically upload images right now, AWS and GCP, and we're working on others. However, we also have images created for a lot of different clouds: Alibaba, AWS, Azure, DigitalOcean, Exoscale, GCP, OpenStack, and a few others, plus VMware images. In general, if you go to our doc site and click on the provisioning-machines breakout, you can see how to provision a machine on a lot of different cloud providers. For example, you asked about Azure: because we're not currently uploading directly to Azure, or at least not able to share an image that others can use, we show you how to download the image and upload it yourself before you launch an instance. So if we don't have it automatically uploading, we hopefully have steps to show you how to do it.
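The download page is backed by per-stream metadata that can also be queried directly for scripting; for example (the jq path reflects the stream metadata layout at the time, and us-east-1 is just an example region):

    # Look up the stable-stream AWS AMI for one region from the stream metadata:
    curl -s https://builds.coreos.fedoraproject.org/streams/stable.json \
      | jq -r '.architectures.x86_64.images.aws.regions["us-east-1"].image'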
Thank you for that, and thank you for all the effort that you guys obviously put into making this happen everywhere, because it's making a lot of OKD people very happy. Hopefully when we get to GA, we'll have a lot more feedback on some of this. Let's see if we've gotten everybody's questions answered.

Can you talk a little bit about Ignition spec 3 support versus spec 2 support? I know that's been something we've had conversations about in the working group. Where is Ignition at right now in terms of Fedora CoreOS? Oh, I can take a stab at it. No, I'll let you take it, Christian. The TL;DR is it's very close, and Christian's been working on it very hard. So in Fedora CoreOS, we started with Ignition spec v3 from the get-go; it has always been v3 in Fedora CoreOS. In RHEL CoreOS and OpenShift Container Platform, we started out with Ignition configs at spec v2, and in the upcoming release, 4.6, we will switch OCP to Ignition spec v3 as well, so it'll be the same. If you're a user of OKD, just use spec v3.1 and you'll be good, and OCP will switch to v3.1 as well in the future.

And Vadim is pointing out that the machine config pool screen had shown both Ignition spec 2 and spec 3 machine configs. Yeah, in the MCO we added dual support, so we can accept either one and it'll be translated to spec 3.1 for OKD and 2.2 for OCP, just to facilitate the migration for OCP from spec 2 to v3. But that's really an implementation detail; OKD users should always be using spec 3 or 3.1, and that should be fine.

I'll just add on top of this: we're building on top of a whole lot of infrastructure that we spent the last two years or so building, and there are a whole lot of new features coming in Ignition, around managing the root file system, which I think Dusty briefly mentioned, plus encryption and a whole bunch of other stuff that a lot of people will really appreciate. This is all coming together now that we have spec 3 supported by OKD, and it will be used by RHCOS too. It's interesting to watch the people who are working on all the different projects here; Christian seems to have his finger in just about every patch out there.

So it's quite useful having you on the OKD team, as well as watching all of this other stuff. Mike Rogford asked a question, and I'm not sure we answered it, in terms of the OSes in use: how, or would you, create a split between RHEL's UBI-based images and builder templates and those derived from Fedora? He's trying to figure out the best way to phrase the question so it makes sense to folks. Could you repeat the question? It's in the chat as well, and I think he's still with us, so maybe he can clarify if he wants to unmute. Not to put you on the spot... but yeah, to put you on the spot. But: are Fedora container images a goal for OKD?

So I think it's a secondary goal for us to build out that Fedora container ecosystem. In the future the OKD working group, and we already talked about this last week, will meet up with the Fedora container SIG, and obviously we're interested in delivering all of our operators on Fedora containers as well. It also works the way it is now, with UBI-based containers or CentOS-based containers. But obviously, yeah, we do want to encourage people to build Fedora containers and use Fedora containers for everything.

Now, someone asked me earlier today in Slack about the OCP operators that aren't on OperatorHub.io yet. There are a few operators that are specific to the OperatorHub that comes with OpenShift Container Platform, and they asked whether the working group was going to rebuild all of those on Fedora. I think that's a significant effort, and there are also some other questions around it, like which ones; specifically, they were talking about service mesh and Istio, making sure there was one-to-one parity with what's available on OperatorHub.io.

Yeah, as far as rebuilding things on Fedora images goes, it would be great to have, but a question to think about is: is there a trade-off for doing that work? Is there other work we should do instead? If, for example, that was going to push the OKD GA out to January of next year, would that be something we'd want to do? The other thing is, if things are able to use a UBI base and we have only one version of them, it's a lot easier to maintain. So there are a lot of trade-offs to think about when you start down that road, which is the exact same discussion we were having earlier about Fedora CoreOS and CentOS CoreOS, right? Resources often are the limiting factor, but we want to do things, and we want more involvement from the community to help us achieve those goals.

I think maybe related to Istio is this concept of host-dependent containers, because I think Istio ends up programming iptables and things like that, and so that is a tricky topic to handle. We have some discussions around how you best have a container that partially executes on, or manages, the host, but hopefully we can get to the point where even these host-dependent containers like Istio work in both cases without needing to rebuild. I think the first step we're going to take here is just rebuilding, on Fedora, those RHEL-based containers that aren't currently publicly available without a subscription, just to have feature parity with the community version. I don't think we have to rebuild all the UBI8 containers, because there's just no benefit to it, really.
Yeah, I think that'll be good, and that might be a good way of doing it. Getting parity is what we want, and then if third parties want to go build something out there and host it on OperatorHub.io, god bless them. If the working group, either OKD or Fedora CoreOS or some other group like the Fedora container SIG, decides to take that on, then bless them, and find a resource. Really, what we're trying to do today, and I think Dusty's got another talk on Community Central this week, and we're going to be doing a lot of talking around OKD 4, is get the word out about how to engage with these communities, how to test the work that we're doing, and give us your feedback on Fedora CoreOS. Make sure that the cadence of release cycles and stable releases is working for everybody, that we all stay in sync and connected on these things, and that we recruit new bodies to continue to do more work.

In terms of the roadmap for Fedora CoreOS, Dusty and Colin, what's next? So I would say we've got a few issues that periodically come up that we're trying to work on next. I mentioned one earlier, which is multi-arch: we have some very motivated people in the community who maintain a secondary pipeline for different architectures, and we want to make that part of our official pipeline, so that we release stable for all architectures at the same time, have the artifacts show up on our download page, and have them signed by Fedora Release Engineering like our current 64-bit Intel artifacts are. That's something we're going to try to work on next.

We mentioned the complex root device stuff; that's in the works right now. We're always doing something around networking, it feels like. For example, right now we default to DHCP when you first bring up a node, which is a sane default, but in some cases it can be problematic, so we're going to try to change it so that if you don't actually need DHCP on that first boot, we don't bring it up.

Let's see, what else. Oh: even though we discourage package layering on Fedora CoreOS, sometimes it is advantageous for some small host-level feature that is just not easy to containerize, or would be a very big maintenance burden to containerize. So we're going to try to make package layering more reliable.
We've had this problem for a long time with our OSTree-based distros: Silverblue, Atomic Host, Fedora CoreOS. So we're going to try to tackle it on the Fedora Release Engineering and infrastructure side and see if we can help that out a little bit. So, a lot of small things. Cloud providers: if you're a cloud provider and you want Fedora CoreOS running on your platform, we would love to make that happen. So, more of the same for right now.

Obviously OKD is just now getting to GA, but we want the relationship with OKD to be tighter. We've been doing a lot of foundational work in Fedora CoreOS and haven't had a lot of time; Christian and Vadim have done an amazing job without a lot of help from us, and we want to make sure that integration gets better. We also want to look at other Kubernetes distros, like Typhoon, that are also using Fedora CoreOS underneath: how can we be a better platform for them? So those are the types of things we're looking at in the short to mid-term.

Perfect. Well, I think it's a great relationship, and I think we're just going to have to foster more of them in the coming months and weeks and days and move it forward. If you want to throw back up your final slide with how to get hold of everybody, that might be a great way to end. We'll definitely have you back on and continue to collaborate with you across lots of different communities. So thank you all for joining, and we look forward to many more releases in tandem with OKD and others. Thank you all. All right, thank you all for joining; take care today.