All right, hi everybody. I'm just going to get started, I think. So welcome to my presentation on OKD4: OpenShift Kubernetes on Fedora CoreOS. I'm Christian Glombek, a software engineer on the CoreOS team within the OpenShift organization at Red Hat. We've been working on OKD, the OpenShift community distribution of Kubernetes, which runs on top of Fedora CoreOS. So let's get started. Just a quick look at the agenda: we'll talk about what OKD4 is, what Fedora CoreOS is — you may have heard a little bit about Fedora CoreOS already — then how to install, then the OKD working group. We'll have a look at the road ahead, and then an ask-me-anything session. So, what is OKD4? OKD is not really an abbreviation, but you can read it as the Origin Community Distribution of Kubernetes. What that means is it's the OpenShift code base — the same as the OpenShift product we have at Red Hat — running on top of Fedora CoreOS. And the special thing about this is that we have just one lifecycle: we manage the operating system upgrades through the cluster. So the operating system — Fedora CoreOS, in the OKD case — really becomes just an implementation detail, and the administrator of the cluster doesn't have to worry much about it at all. We have a little graph here: we have an automated installation and one lifecycle, as I said. The Linux host runs Fedora CoreOS; on top of that is the Kubernetes and OpenShift code; and on top of that you can install any Kubernetes application. We're looking to make this very easy with operators from OperatorHub, which sit at the topmost level as services the cluster admin or the developer can install on top of the cluster. And we run on almost all the clouds, so it's really the same experience in all environments, which is really cool, I think.
So you can install on bare metal, virtual machines, OpenStack, AWS, and Google Cloud Platform. Azure isn't supported out of the box yet — we don't publish images on Azure. It works, but you'll have to upload the images yourself before you can proceed. So let's have a look at what Fedora CoreOS is. You may know a little about it already, but I'll say it again: it's an automatically updating Linux OS, aimed specifically at containerized workloads, based on rpm-ostree and Ignition — I'll circle back to those in a little bit. It's built with CoreOS Assembler, our build system for composing the OSTree commits and creating all the artifacts for the different platforms. And it's made out of Fedora RPM packages. That is really the only difference between OKD and the OpenShift product (OCP): there we have Red Hat CoreOS, where those are RHEL packages; here, obviously, we use Fedora packages. Just to say a little bit more about rpm-ostree: it's a really cool technology where images are essentially composed — it's like Git for operating systems. You have one commit describing the entire contents of the OS disk. You may know a few other projects that use OSTree, like Flatpak, or rpm-ostree specifically, like Fedora Silverblue, Fedora IoT, and of course Fedora CoreOS. Ignition is our tool to describe the configuration you want to get. So we don't use cloud-init; we have Ignition, which is declarative, and I think that makes it superior to cloud-init. You can describe any custom config you want in an Ignition configuration, and it is applied at first boot — once the machine comes up, it'll have the config you described. In the OKD and OpenShift case, we continue to manage that Ignition configuration as a day-two operation: after the initial install, Ignition itself won't run again, but the cluster will still be able to react to changes in that config. So, how to install?
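To make the Ignition part concrete, here is a minimal sketch of what an Ignition config looks like. This is not from the talk — the file name, hostname, and SSH key are made-up placeholders — but the structure (versioned, declarative JSON applied at first boot) is the point:

```shell
# A minimal, hypothetical Ignition config: declares an authorized SSH key
# and a file that should exist on disk after first boot.
# All values here are placeholders for illustration.
cat > example.ign <<'EOF'
{
  "ignition": { "version": "3.1.0" },
  "passwd": {
    "users": [
      { "name": "core", "sshAuthorizedKeys": ["ssh-ed25519 AAAA... example-key"] }
    ]
  },
  "storage": {
    "files": [
      {
        "path": "/etc/hostname",
        "mode": 420,
        "contents": { "source": "data:,okd-node-0" }
      }
    ]
  }
}
EOF
# Ignition configs are plain JSON, so they are easy to validate and generate:
python3 -m json.tool example.ign > /dev/null && echo "valid JSON"
```

In practice you would usually write this in the friendlier Butane/FCC YAML format and transpile it to Ignition JSON, rather than writing the JSON by hand.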
It's super simple if you use IPI, the installer-provisioned infrastructure path. You just head to okd.io, go to downloads, download the installer, and then you'll need an account on a public cloud, and it'll install everything automatically — that is, if you want to spend that money on a public cloud. If you have your own infrastructure, you'll use the UPI (user-provisioned infrastructure) install flow, for which you'll have to set up a few things before you can run the installer and get everything started. All of that is documented on docs.okd.io. And there are a few more links here: we have the GitHub repositories openshift/okd and openshift/community, where there are guides for setups on various clouds, and also some minimal setups if you don't need a full-blown cluster. So, the OKD working group. We've been working hard on getting OKD out — OKD4 is GA now — and we've done that with the OKD working group. There are a few engineers from Red Hat, Vadim Rutkovsky and Charro Gruver, and we have Diane Mueller, our community director, there. We have bi-weekly meetings on a BlueJeans video chat; you can find the dates for those on the Fedora calendar (FedoCal) under OKD, at apps.fedoraproject.org/calendar/okd. We also hang out on Slack all the time, on the #openshift-dev and #openshift-users channels on the Kubernetes Slack, and on almost all the channels in the OpenShift Commons Slack if you're a member there. Again, there are the two repositories as well, and we have a Google group that we also use as a mailing list — everything important we discuss is usually sent out on that, and you can start a discussion there as well if you don't use Slack. And I think I missed the road ahead section here — yes, the road ahead; I think I misplaced the slide somewhere. So, as part of the OKD working group, we created a roadmap together with the community.
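To give a rough feel for the IPI flow described above, here is a sketch of the kind of `install-config.yaml` the installer works from. The domain, cluster name, region, and pull secret are all placeholders, not values from the talk; the installer can also generate this interactively:

```shell
# Hypothetical install-config.yaml for an AWS IPI install.
# Every value below is a placeholder for illustration.
cat > install-config.yaml <<'EOF'
apiVersion: v1
baseDomain: example.com        # public DNS zone the cluster will live under
metadata:
  name: my-okd-cluster
platform:
  aws:
    region: us-east-1
compute:
- name: worker
  replicas: 3
controlPlane:
  name: master
  replicas: 3
pullSecret: '{"auths":{}}'     # placeholder; supply your real pull secret
EOF

# With cloud credentials configured, the actual install is one command
# (not run here, since it provisions real infrastructure):
# openshift-install create cluster --dir=.
echo "wrote install-config.yaml"
```

The UPI flow uses the same installer but stops earlier: it emits the Ignition configs and manifests, and you bring your own machines and DNS.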
The first part, phase 0, which we've just finished, was releasing OKD4 GA. We're very proud we got that out the door, and now we're looking into the future. There are many things still to do and things we can try to develop in the longer run. One of the main reasons we wanted OpenShift running on Fedora CoreOS as a prime target is not only that the community benefits from it — obviously we want that — but also that we now have a feedback cycle, essentially, where we can test things out on the Fedora kernel and on all the things that will end up in the next RHEL release. So our product will benefit from it as well, which I think is really cool. We never really had that with OpenShift Origin or OKD 3.x, but now we really have this ability to test things before they land in the product. So I really want to invite everybody to the working group. If you want new features, if you think there's an operator that needs to run on OKD, please join us. We have an operator wish list, and we're going to work on enabling all those operators to run on OKD as well. Right now we've focused on the core operators — on core OpenShift — making that run on Fedora CoreOS. There are a few gaps, because Fedora CoreOS is not the same as RHEL CoreOS; for example, we don't have Python on Fedora CoreOS, so there are a few limitations there. That is because we really want people to run their applications in containers on Fedora CoreOS — everything should be packaged as a container and not run on the host directly. Some operators on OperatorHub still rely on host dependencies like that, so there is some work needed, and I'd like to invite everybody to join us and help out with this effort. The more people we can get — the more testers, the more volunteers who actually contribute — the better it'll be and the quicker we'll get there.
And that, I think, is the great thing about the OKD working group: we have now enabled the community to actually contribute to OpenShift upstream development and operator upstream development. That's the road ahead — we really want to enable and build out that ecosystem. We also want to use OKD for new features or new technologies that haven't landed in the product yet. For example, if there's anybody in the community who'd like to use Cilium or something, we really want to help out with that effort and discuss design and options to make it work, because I think that is beneficial for everybody. So once again, please join the OKD working group if you're interested — we have those meetings there on the calendar. And if you hit any issues, if it's a technical one, like an RFE or just a bug, please open an issue on the OKD repository. If you have questions about the working group itself, the process, or ideas on how to improve it, the community repository is the right place. And with that, I'll answer any questions. I'm not sure if people can activate their microphones, but otherwise I'll just look in the chat. If you have any questions, please go ahead — I'm happy to answer. OK, so there's one question; actually, let me scroll up a little bit. Support for libvirt VM installation: we don't officially support it, but it is possible. We just don't test it very often, because it needs a huge VM, essentially — we still have quite a large footprint. That's another thing we want to work on: reducing the size of an OKD/OpenShift installation. But it is possible to run it in libvirt. You have to build the installer yourself, because that platform is usually disabled in the release binaries, but you can do that from the openshift/installer repository — there's an fcos branch. Essentially we still have the exact same code base, except for, right now, two repositories; with the next release it'll just be the installer that is different.
And that has to be different right now because we have to reference the Fedora CoreOS images and not the RHEL CoreOS images. So if you go to that repository, openshift/installer on GitHub, go to the fcos branch, check that out, and build the installer from there, you will be able to install on libvirt. Let me check a few more questions. Wouldn't libvirt work via something like KubeVirt? So, there is an operator for KubeVirt, and you can run that — but that is kind of the other way around. Installing on libvirt, you install the cluster in libvirt virtual machines; with KubeVirt, you run virtual machines on your cluster. You could probably nest that as well, but that's definitely a thing we haven't really tested. But there's the KubeVirt operator, which will enable you to run virtual machines in containers on OpenShift and OKD. And that was one of the things that had a Python host dependency up until two weeks ago. I'm not sure if the new release is out yet, but the commit that drops that dependency has merged, so the KubeVirt operator should now work on OKD as well. We haven't had time to test that, though — if you want to give it a try, any feedback would be great. Why not add libvirt as well? Well, we've disabled it in the main binary because it's not something you want to use in production, really. It doesn't make too much sense: OpenShift is made for high availability, and if you run it all in one VM, you get none of that. So it's not a production-grade setup, and I wouldn't recommend anybody use it for actual production — that's also why we've disabled it; it's not the use case we want to serve with OKD. After all, this is an enterprise-grade cluster. You get all the goodness of OpenShift, the product, with all the security and all the developer tools — it's really Kubernetes on steroids. And it's highly automated; I don't think I've stressed that enough.
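The build-it-yourself steps described above can be sketched roughly as follows. This is an assumption-laden sketch, not an official recipe: it's saved into a script rather than executed, since the real build needs network access and a Go toolchain, and the `TAGS=libvirt hack/build.sh` invocation is how the installer's dev builds have historically enabled the libvirt platform:

```shell
# Sketch: build the OKD installer from the fcos branch with libvirt enabled.
# Written to a script instead of run directly (needs network + Go toolchain).
cat > build-okd-installer.sh <<'EOF'
#!/bin/sh
set -e
git clone https://github.com/openshift/installer.git
cd installer
git checkout fcos            # branch referencing Fedora CoreOS images
TAGS=libvirt hack/build.sh   # libvirt is disabled in release binaries
./bin/openshift-install version
EOF
chmod +x build-okd-installer.sh
echo "wrote build-okd-installer.sh"
```

Check the branch's own README before running this; the exact build flags may have changed between releases.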
It is highly automated — you essentially have a cluster on autopilot. It'll update itself; right now you still have to click a button, but it's really super easy to maintain. OK, Matthew's question: how closely tied is FCOS going to be with OKD going forward? Will FCOS be primarily recommended for use with OKD, or will it serve other use cases? Conversely, is OKD supported on other distros? So, we are one use case for Fedora CoreOS. Fedora CoreOS is still something you can use in a single-node setup, where you just want to run podman containers, or even use Docker if you still want to do that. And OKD is the cluster use case for Fedora CoreOS. We'll actually have a release schedule that lags behind Fedora CoreOS by, I think, one week: Fedora CoreOS has bi-weekly releases, and we are doing bi-weekly OKD releases in the alternating weeks in between those Fedora CoreOS releases, so we can test the changes in the new Fedora CoreOS images for a week before we put out another OKD release. So it's not super strictly coupled, but obviously it's one big use case for Fedora CoreOS. Still, they are separate projects — Fedora CoreOS wants to support other use cases as well. And OKD right now only supports Fedora CoreOS as the base operating system. But we've said from the beginning that we'd do Fedora CoreOS first, because it makes the most sense for us as a community, and also for the company, to get that feedback cycle. The community has already expressed interest in, for example, CentOS as a base operating system. For that, somebody has to build a kind of CentOS CoreOS with rpm-ostree and create the artifacts from it. That is certainly possible — we just haven't had the time to look into it, and we'd appreciate any help from the community there as well. It would also be possible to take Debian packages, compose them into an OSTree, and use that as the base operating system.
Obviously, on the cluster side we have the machine config operator, which maintains the Ignition config after installation and maintains the disk state. That has a few assumptions about the operating system, but then again, it works with both the RHEL and Fedora package sets right now. So it's certainly possible to also use Debian or Gentoo or any other system — you just have to compose it into an OSTree, and then you can get started with testing that. Primarily we've focused on Fedora CoreOS for now, but now that we've gone GA, we'll be open to adding more OSes in the future as well. What's the minimal required footprint for OKD clusters? It is quite large. I think the recommendations are the same as for the official OpenShift product (OCP): you'd want six nodes — three masters, three workers — each with 32 gigs of RAM. Well, maybe not each, but it's definitely a lot. We've been able to cut that down and even install on 8-gig machines, though that's not officially supported. We're definitely working on making that officially supported — it works right now, and I think Charro, and also Vadim if he's here, have done quite a bit of testing on that already. But right now it's really recommended to have a beefy machine, or multiple beefy machines. You can also make the masters schedulable, so you only need the three masters — but three is the minimum required for a proper install because of etcd, which needs to maintain quorum for high availability at all times. What is the interest in making OKD available for all architectures Fedora supports? That interest is big — that was Dennis Gilmore's question — and we definitely want to do that; it's a thing we will be focusing on. I'll definitely approach you, Dennis, to talk about that. For Fedora CoreOS, I'm not sure we're releasing for all those platforms, or even building for all of them, yet.
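The machine config operator mentioned above consumes MachineConfig objects that wrap Ignition fragments and rolls them out to a node pool. A hypothetical example — the object name, file path, and contents are invented for illustration, but the overall shape matches the MachineConfig API:

```shell
# Hypothetical MachineConfig: the machine-config-operator merges the
# embedded Ignition fragment into each worker's config and applies it
# as a day-two change (rebooting nodes as needed).
cat > 99-worker-custom.yaml <<'EOF'
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  name: 99-worker-custom
  labels:
    machineconfiguration.openshift.io/role: worker   # target the worker pool
spec:
  config:
    ignition:
      version: 3.1.0
    storage:
      files:
      - path: /etc/custom-motd
        mode: 420
        contents:
          source: data:,Managed%20by%20the%20MCO
EOF
# Applied to a running cluster with: oc apply -f 99-worker-custom.yaml
echo "wrote 99-worker-custom.yaml"
```

This is how the "Ignition as a day-two operation" idea from earlier works in practice: you never edit node disks directly, you declare the change and let the operator reconcile it.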
But we already have the infrastructure to build them. With OpenShift, we just have to rebuild all the containers, and then it should run — we just have to set up a few infrastructure bits to get all that built. I think we'll get there; that's definitely something we want to do. And on Glenn's comment that there's a new OS builder you can use to roll your own: yes, we're also looking into how we can leverage that. I think there are quite a few interesting bits we can have a look at there. OK, next question — not really a question, but Charro just reiterated that we're going to start working on minimizing the footprint. I think that's also something the community has requested quite a few times, because not everybody has a huge enterprise setup ready to roll their own cluster out. That's definitely a thing we'll also look at. The minimization objective is something I've also peeked in on from time to time, and I think minimizing the entire system is just important to do — it's a good long-term goal. Right now, we're super happy to have just released GA. And as you can see, there are still a lot of things to do. So if you want to join — I'm just going to say it again — if you want to join the working group, please do. That is really the opportunity for everybody to contribute, to voice their opinions and their problems, and to request new features or even offer to contribute some code, because that is what we've enabled now: you can really contribute to the upstream development. So yeah, I think my time is up. I'm not sure if there are more questions, but thank you all so much for joining, and please continue to enjoy Nest with Fedora.