So, I think we can start our session. Welcome, everyone, to our audience; we are in the application development with serverless and containerization track. We are here for the next session, which is OKD4: OpenShift Kubernetes on Fedora CoreOS, and we have Christian Glombek, who will be presenting this lightning talk. We can start now, or if you like, we can wait a couple of minutes for other participants to join; let us know. Well, hello everybody, welcome to DevConf.US and welcome to the OKD4: OpenShift Kubernetes on Fedora CoreOS session. Today I'm happy to have with me Christian Glombek and Antonio Murdaca to help fill out the details on this topic, and we're really glad you came. We're really proud of this latest release of OKD, and I'm going to tell you a little more about what we're going to talk about today. So, next slide. On today's agenda, first we're going to cover what OKD is, then we're going to jump into an overview of operators and the Operator Framework, and we're going to dive a little deeper into one of the operators that's very important to OKD, the machine config operator. Then we'll take you a little further down the stack to Fedora CoreOS, and we may have a demo or two slid in there. We'll leave some time at the end for questions, which we know will probably come through the chat or Slack or whatever facility DevConf.US gives us, so look for Christian, Antonio, and myself in the chat afterwards. Without further ado, next slide. So what is OKD? We're going to talk a little bit about it from a historical point of view. 
First, you probably remember OKD being called Origin back in the day, when it was a Ruby on Rails and MongoDB platform-as-a-service offering. Then, about four or maybe five years ago now, we shifted and rebased on Kubernetes, and OpenShift went through a significant evolution going from OpenShift 3 to 4 as well, rebasing and leveraging operators; you'll hear us talk a lot more about that. So we basically take the OpenShift Container Platform (OCP) code base and combine it with Fedora CoreOS, and that's an interesting distinction, giving us a pure open source play all the way down. The code base for OpenShift is naturally all open source, but there are some things about OpenShift the product, some of the images and such, that are based on RHEL CoreOS. We at Red Hat are committed to having a pure open source offering of each of our products, so we have collaborated with the Fedora community and Fedora CoreOS and come out with a distribution, which you'll hear us refer to as OKD4, and it allows us to distribute everything as open source. So we're remaining true to that commitment. Now, this shift really went from being a platform as a service built on Kubernetes to something more of a self-contained ecosystem. Go to the next slide, and let me explain what I mean by that. It's a really important distinction for us that we want to deliver something that everybody can use freely as open source, but that also has all the functionality that comes with OCP, just running on Fedora CoreOS. One example of the difference between OCP and OKD is that OKD delivers releases at a much faster cadence: Fedora CoreOS comes out at a faster cadence than RHEL CoreOS. So you get to try out all the new features, and sometimes the bugs, a little sooner than everybody else. And if you're really brave, you can even build and update your clusters from our nightly stream. That can be a lot of fun. 
And really, it's basically a new, highly opinionated (as we sometimes are at Red Hat) but highly flexible Kubernetes-based ecosystem. It's built around this new concept of operators, and we'll get more into that. Through the Operator Framework, OKD manages your entire platform, automating the installation, patching, updates, and maintenance of the entire environment, and, very significantly, even the updates for the operating system itself. This is kind of key, and we'll go into more detail about Fedora CoreOS in a bit. The full lifecycle is managed by OKD through a specific set of operators. And it's deployable on many, many different infrastructures. Hit the button one more time; there you go, it pops in. It's deployable on many different infrastructures and platforms, from bare metal to cloud; there's just a small sampling in the little icons here. You can do small edge clusters, and you can build out massive compute workloads with OKD. You can deploy a single node if you're willing to forego high availability; you can deploy a highly available three-node cluster, where your worker nodes and control plane nodes share roles; or you can deploy a full enterprise-grade cluster with dedicated control plane infrastructure and worker nodes. If you want to know about these specifically, you can go look at the installer project, which you can find on GitHub under openshift/installer, where you'll see a list of all the available deployment platforms and configurations. At the end of this talk, there'll be a slide with more links and resources, so don't panic. That's really, at a very high level, what we're doing with OKD. I don't know, Christian, if you want to add any more to that. I think we'll go more in depth in a bit. All right, well, then I think we're handing it over to Antonio now. Yes, so the next point on the agenda is talking about operators. 
And as Diane mentioned, OpenShift had a huge shift from 3.11 to 4, where in 4 we introduced the concept of operators at the cluster level itself. So before diving into the actual OpenShift 4 architecture, and later into the MCO, we need to talk about what an operator is, what it does, and the operator pattern as well. Next slide; I'll do it myself. Operators are a way of packaging, deploying, and managing Kubernetes applications. To put that into a real-world example, think about a MySQL database. We can think of a MySQL operator as something that is responsible for packaging, installing, and managing the MySQL database itself on a cluster; that's the key difference. And we can think of the possible management tasks this operator can do: rescheduling if a node fails, or automatic data replication, again in the case that a node fails. All this kind of stuff that we usually do manually, the operator can do automatically, without calling for a human. Some tweaks and some configuration are still needed on the administration side, but operators can do all of this, and they're pretty powerful. We'll see why when we dive more into how OpenShift is actually architected with operators. I talked about configuring operators: if you're familiar with Kubernetes, that part is handled by custom resource definitions. You can think of those as just configuration files, but in this world they are effectively Kubernetes objects stored in etcd. OpenShift 4 has been built leveraging operators, and we can see that they can manage not only applications like MySQL, but also things that are key to the cluster itself. 
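To make the MySQL example concrete, here is what a custom resource for such an operator might look like. This is purely illustrative: the group, kind, and every field are invented for the example, not the API of any real MySQL operator.

```yaml
# Hypothetical custom resource for an imaginary MySQL operator.
# An administrator writes this "desired state"; the operator's controller
# watches for it in etcd and does the packaging, installing, and managing.
apiVersion: example.com/v1alpha1
kind: MySQLCluster
metadata:
  name: my-database
spec:
  replicas: 3                   # operator keeps three instances running,
                                # rescheduling them if a node fails
  version: "8.0"                # operator installs and upgrades this version
  backupSchedule: "0 2 * * *"   # operator runs nightly backups automatically
```

The point is that the configuration file is the whole interface: everything below it (scheduling, replication, upgrades) is the operator's job.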
So what we did in OpenShift 4 is introduce the operator pattern for the components that make up a Kubernetes cluster. The analogy would be having the cluster on autopilot, because many of the things an administrator would usually do by hand, like scaling up a node, with all the manual steps to bring up the node and configure it, are done automatically by an operator. We'll look at that specific operator in the next few slides. On this slide, we're going to have a look at the key components, in the form of operators, that make up OpenShift 4. One of the very first operators, the main one responsible for the overall health of the cluster, is the cluster version operator. OpenShift 4 basically has operators for everything that really makes up the cluster: just below the cluster version operator, you can see the kube-apiserver, the kube-controller-manager, the scheduler, and so on. Those are all operators. What the cluster version operator does is make sure that those components, which are themselves operators within the cluster, are at the right version. This really opens the door for things like automatic cluster upgrades: you just hit the button and sync to whatever latest OpenShift or OKD release there is. There are many other operators that are core to the platform, like the network one, as you can see, which makes sure the CNI plugins are there and the SDN is installed. There is the image registry; anybody coming from 3.11 knows about this. In OpenShift 4, we now have an operator that takes care of all of it, and it takes care of the image registry from the very beginning: it sets up the registry, the route, initial storage, and things like that. 
So you can see that all the manual steps we used to do before are now handled by the operator itself. Other examples of operators in the 4 release are the monitoring one, which, as the name suggests, is responsible for collecting the metrics and displaying them on the console, or in any aggregator you also install; the ingress operator, which ensures the router is set up; and the storage one, which makes sure the CSI plugins are installed and the storage classes exist. All these operators are the core of the platform, and as I said before, what we did in OpenShift 4 was leverage the operator pattern and use it at the core of the cluster. So it's something like the cluster managing itself, because all these components, in the form of operators, can handle their own life cycle in an ordered way, and you'll always have the latest version, automatically synced. The concept itself is really powerful, and OpenShift 4 does a great job of leveraging it. Then we have this other thing, which is still operator-related: the OperatorHub. The operators I talked about before are core to the cluster itself; they manage the cluster life cycle. But at some point there will be somebody using the cluster. So OpenShift has this concept of the OperatorHub, which is a community-sourced index of optional operators; you can see some of them, like Grafana or Argo CD. If you want to install them, the OperatorHub is integrated with the OpenShift console, so any admin can just go there and install an additional operator. Those are usually application-focused, whereas the ones I talked about before are core to the platform. And guess what? 
There is an operator for that too, which we call the Operator Lifecycle Manager, and it takes care of the life cycle of those additional operators. It does things like managing an operator's scope, whether it's cluster-wide or namespace-only; it ensures the operator can be updated; it manages permissions; and so on and so forth. You can think of almost anything life-cycle-related for an application like Argo CD or Grafana. And all of this brings us to the MCO, which is one of the core components that OpenShift 4 uses, and it's related to the nodes that you have in a cluster. I mentioned earlier that before OpenShift 4, in order to onboard new nodes, you needed many manual steps: bring up the actual instance, then configure it, then get the kubelet up and running, join the fleet, and so on. All of that isn't necessary anymore, thanks to the operator pattern and, specifically, thanks to the machine config operator, or MCO for short. The machine config operator is, again, a core operator; that means it's managed by the cluster version operator, which I mentioned earlier. The cluster version operator ensures that the machine config operator is always at the latest version and healthy as well. The MCO (I'm going to just say MCO from now on) is the operator that manages the machine configuration. It does just two things: it manages the machine configuration, and it applies OS updates on the nodes. In our case, since we're on OKD, we're going to apply these OS updates with rpm-ostree. So the MCO really does just these two things. Say you want to configure, I don't know, the time zone setting on the whole fleet of nodes in your cluster: you would use the MCO to actually ship that config to the nodes in your cluster. 
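As a sketch of what shipping a config to the fleet looks like, here is a minimal MachineConfig that writes one file to every worker node. The file path, name, and contents are illustrative examples, not anything OKD ships by default.

```yaml
# A minimal MachineConfig that ships one file to every worker node.
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  name: 99-worker-custom-file
  labels:
    # the role label tells the MCO which pool of nodes this applies to
    machineconfiguration.openshift.io/role: worker
spec:
  config:
    # the embedded payload is an Ignition config
    ignition:
      version: 3.1.0
    storage:
      files:
        - path: /etc/example.conf
          mode: 0644
          contents:
            # file contents travel inline as a data URL
            source: data:,example%20setting%3Dtrue
```

Once this object is created in the cluster, the MCO notices it, rolls it into the worker pool's configuration, and the nodes converge on it, as described next.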
And again, the other super important thing the MCO does is make sure that your host is always updated. The way the MCO works is really easy to follow if you're coming from the Kubernetes world, as we leverage custom resources and live by the concept of current versus desired, or spec versus status. What the MCO does is basically compute a diff between what it has and what the admin wants, and after it computes the diff, it just applies it. So it continuously reconciles itself toward the latest spec that the administrator wants. It's like a finite state machine at the end of the day: there is a continuous loop, like in any Kubernetes controller, and it just watches for any changes. In our case, again, those would be customizations, or OS updates coming from wherever the OS updates come from. So this is the machine config operator in a nutshell, and hopefully that clarifies what it does. The machine config operator leverages mainly one custom resource definition; there are many, but the most important one is the one on this slide. You can see it's a super common Kubernetes object: it has the type and object metadata, and just a spec where an administrator can go and tweak all the fields. The most important field in this custom resource is probably the config field, and I'm going to explain the others as well. The config field, which as you can see is just a runtime raw extension, nowadays just contains the Ignition config, as we're leveraging Ignition to bring up new machines and install the cluster. So the MCO still leverages the Ignition config to customize the node in a way that is familiar to most cluster administrators. With Ignition, you can of course do the usual things you would otherwise do manually: creating a systemd unit or a timer, disabling a service, changing configuration files, things like that. 
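The diff-and-apply loop described above can be sketched in a few lines. This is a toy model of the reconcile idea, not the MCO's real code; the dictionaries standing in for node state, and the function names, are invented for illustration.

```python
# Toy sketch of the "current versus desired" reconcile loop the MCO follows:
# compute a diff between what the node has and what the admin wants,
# apply it, and keep looping until there is nothing left to change.

def reconcile(current: dict, desired: dict) -> dict:
    """Return the set of changes needed to move current toward desired."""
    return {
        key: value
        for key, value in desired.items()
        if current.get(key) != value
    }

def apply_diff(current: dict, diff: dict) -> dict:
    """Apply the computed diff, producing the new current state."""
    updated = dict(current)
    updated.update(diff)
    return updated

# Illustrative node state: the admin changes the time zone setting.
current = {"timezone": "UTC", "chrony": "default"}
desired = {"timezone": "America/New_York", "chrony": "default"}

diff = reconcile(current, desired)      # only the changed key is in the diff
current = apply_diff(current, diff)
# After applying, the loop converges: reconciling again yields no changes.
assert reconcile(current, desired) == {}
```

The real controller does this continuously, re-running the loop whenever the watched resources change, which is why the cluster keeps converging on the administrator's spec without manual intervention.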
And so the config field is probably the most important one in the MachineConfig CR, as it allows the administrator full control over the node. Another important thing you'll find in a machine config is the osImageURL; that's the second point from the previous slide, where the MCO does configuration and OS updates. The osImageURL is nothing more than a pullable container image that contains the actual diff of the OS update, but I think Christian is going to talk more about that later on. The rest of the fields you can see all relate to the customization side of the MCO; to some extent, they're still related to the config, but we split those out so that we can control them better. And I'll finish with a sub-component of the MCO itself, the one responsible for moving the state from current to desired. Say you want to ship a new file to every host in the fleet, masters and workers: you would create a machine config, do everything you would normally do with it, and create it so the cluster has it. Once the MCO notices, it will go and render its new view of the host, and then there is a component that actually takes care of applying that diff: the machine config daemon. The machine config daemon is just a DaemonSet that runs on every node in the cluster, and again, what it does is watch for changes in what the administrator requested and apply them. As I said before, the machine config daemon understands the config field of the machine config, or rather a subset of the Ignition configuration, like I said before: files, systemd units, and perhaps a few others. And again, what it does is apply this new view of the system on the node itself and continuously reconcile. 
And again, in the Fedora CoreOS case, the daemon pulls this container image, which we call machine-os-content, and then uses rpm-ostree to actually update the system and trigger a reboot. With that, I guess Christian can take over. Thanks, Antonio. So yeah, next we'll dive deeper into Fedora CoreOS. What is Fedora CoreOS? Next slide, please. Fedora CoreOS in one sentence: Fedora CoreOS is an automatically updating, minimal, monolithic, container-focused operating system, designed for clusters but also operable standalone, optimized for Kubernetes but also great without it. So let's dig in a little bit. Next slide, please. Let's try it in two shorter sentences: Fedora CoreOS is an auto-updating container OS, and you can run it with Kubernetes or without it. Next slide, please. There is a lot to unpack in how we've come to Fedora CoreOS as it is now. A few streams have flowed together here. It was two communities, the Container Linux community and the Project Atomic community, which you may know for delivering Red Hat Enterprise Linux Atomic Host, CoreOS Container Linux, Fedora Atomic Host, and CentOS Atomic Host in the past. These two communities have merged, and we've taken the best of both worlds here. Most importantly, from Container Linux, the philosophy with which they really pioneered the container-focused operating system, their provisioning stack, and the cloud-native expertise; and from Atomic Host, the very solid Fedora package foundation, which we build on, and the update stack. And obviously, we run with SELinux enabled. Yeah, next slide, please. Let's have a look at the features. OS versioning and security is a major part of the goal of Fedora CoreOS: providing a secure platform for containerized workloads. Fedora CoreOS uses rpm-ostree to create images, which are composed out of RPMs. And OSTree is like a Git repository for your operating system. You may know rpm-ostree, or OSTree in general, from a few other projects. 
For example, Flatpak uses it. It really just commits a file system to a repository and writes a hash, so it's very easy to follow back through the stack and see what came from where. And if we compose a new image, we have a very clear delta of all the files within the file system that have changed from one commit to the next, which also allows for functionality like rolling back a commit or rebasing onto an entirely different operating system tree. In the case of OKD, we use the machine-os-content container, which Antonio mentioned, to deliver an OSTree commit encapsulated in a container; we unpack that, write it to disk, and reboot. So you get a single identifier for each version of the entire operating system, which makes it very monolithic and very secure. Another very important feature is that most of the file system is mounted read-only, so you can only write files in specific places that are enabled for it. That general protection on most directories in the file system prevents accidental OS corruption and also other kinds of attacks. Additionally, SELinux, as I mentioned, is enforcing by default, to prevent compromised apps from breaking out of the sandbox. Next slide, please. All right, automated provisioning. Automated provisioning is a big feature. Fedora CoreOS uses Ignition to automate provisioning. You've already heard a little bit about Ignition, because the machine config operator and the MachineConfig resource actually encapsulate and manage Ignition configuration. Within an Ignition configuration, you can encode any logic for the machine's lifetime, really anything. The machine config operator only supports a subset, I think Antonio mentioned that as well, which is files and systemd units. But the Ignition config specification actually has many more features, which the Ignition binary applies at the very first boot of the machine. 
So you can reformat your drives, repartition, and really do a lot of things there. That happens the very first time the machine is provisioned. The machine config operator then takes over that config and enforces changes on a subset of it. So if you have a pure Fedora CoreOS system, you can provision it with Ignition and really do anything; we automate that with the OpenShift installer for OKD and then manage it with the machine config operator. And very importantly, it is the same config on any platform. That's really important, and it's supposed to give you no headache whatsoever when doing the configuration, because a Fedora CoreOS machine, once it runs, notices where it is running and applies the defaults for that platform. So we have just one release artifact, or a few release artifacts, but fewer than there are clouds, because we don't need a release artifact for each cloud. Fedora CoreOS is very smart about this in using Ignition, and that makes it very easy. OK, next slide, please. Let's have a look at the Ignition configuration in detail. Ignition is a declarative JSON format. Ignition runs only once, as I just mentioned, at the very beginning of your provisioning, when the first boot happens. And Ignition actually doesn't run on the root file system you later have: it runs within the initramfs. It sets up the new file system from there, writes it to disk, and then the machine boots into the file system you just configured the way you wanted with that Ignition configuration. It can write files and systemd units, create users and groups, partition disks, create RAID arrays, and format file systems. There are really no limits when configuring your machine with an Ignition configuration, which is also why we've chosen Ignition as our on-disk state representation for the nodes in the cluster, which are then managed by the MCO. 
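For a sense of the declarative JSON format just described, here is a minimal Ignition config. The hostname, user setup, and spec version are illustrative; the SSH key is a placeholder you would replace with your own.

```json
{
  "ignition": { "version": "3.0.0" },
  "passwd": {
    "users": [
      {
        "name": "core",
        "sshAuthorizedKeys": ["ssh-ed25519 AAAA... (placeholder key)"]
      }
    ]
  },
  "storage": {
    "files": [
      {
        "path": "/etc/hostname",
        "mode": 420,
        "contents": { "source": "data:,my-fcos-host" }
      }
    ]
  }
}
```

Note that file modes are plain integers in the JSON (420 decimal is 0644 octal), and file contents are carried as data URLs, which is one reason the format is meant to be generated by tooling rather than written by hand.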
To make writing Ignition configuration a little more human-friendly, we've created a tool called FCCT, the Fedora CoreOS Config Transpiler, which lets you write your configuration in a human-friendly YAML format. It has some shorthands for generating Ignition that do slightly more complex tasks, like generating the systemd units for those tasks. You can have a look at the spec; it's very similar. And you can transpile a Fedora CoreOS Config to an Ignition configuration. Yeah, next slide, please. These are the features in use in OpenShift and OKD. We have automated provisioning: openshift-install generates Ignition configs, and when each node is started, that Ignition config is applied. Subsequent processes join the node to the cluster, so it is picked up automatically, with no human interaction necessary. A single bootstrap node configuration is about 300 kilobytes, so there's a lot of data conveyed in that Ignition configuration. With OS versioning and security, we include Fedora CoreOS in each OKD release, so we have a very clear pointer to the version of Fedora CoreOS that is encapsulated within the machine-os-content container, and we know exactly what we're delivering every time. It's cloud-native and container-focused, obviously: this is a Kubernetes distribution, and Fedora CoreOS is aimed at running containerized workloads; we automate that with the machine API and Ignition. Automatic updates: we leverage the OpenShift update mechanism, and all you really need to do to update your OKD cluster is click a button once an update is available. Next slide, please. A quick recap: what is Fedora CoreOS? It's an automatically updating Linux OS aimed at containerized workloads, based on rpm-ostree and Ignition. It's composed of Fedora RPM packages, and it's great for running Kubernetes, or OKD, clusters on top. Next slide, please. 
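The FCCT workflow mentioned above looks something like this. The unit name, its contents, and the spec version are illustrative, and the SSH key is a placeholder:

```yaml
# example.fcc -- a Fedora CoreOS Config in human-friendly YAML
variant: fcos
version: 1.1.0
passwd:
  users:
    - name: core
      ssh_authorized_keys:
        - ssh-ed25519 AAAA...   # placeholder, use your own key
systemd:
  units:
    - name: hello.service
      enabled: true
      contents: |
        [Unit]
        Description=Say hello once at boot
        [Service]
        Type=oneshot
        ExecStart=/usr/bin/echo hello
        [Install]
        WantedBy=multi-user.target
```

You then transpile it with something like `fcct --pretty --strict example.fcc --output example.ign` (check the flags for your FCCT version) and feed the resulting Ignition JSON to the machine at first boot.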
If you want to join the Fedora CoreOS working group, you can find us at any of these places: on IRC, it's the Fedora CoreOS channel on Freenode; we have an issue tracker on GitHub; we have a discussion forum on the Fedora Project forum; we have a mailing list; and we also have weekly meetings on IRC, which you can find on the CoreOS calendar. Next slide, please. So this is why we're here today. If you want to join the OKD working group and help participate in releasing and improving OpenShift and OKD4, then please talk to us on Slack. We're in the openshift-dev channel on the Kubernetes Slack, and we're on the OpenShift Commons Slack; if you're a member there, you can find us on any of the channels, really, but definitely on the general channel. We have our own Google group, which we use as a mailing list: the okd-wg Google group. We have bi-weekly video conference meetings, which you can find on the OKD Fedora calendar. And we have two repositories on GitHub, where most of what we do happens and is documented: the community and OKD repositories in the OpenShift organization on GitHub. Next slide, please. More links: have a look at okd.io, our main homepage; you can find everything somewhere in there. The documentation is at docs.okd.io. And then there's the OKD repository, again, which we use as an issue tracker for technical things, and the community repository, which we use as a tracker for meetings, group tasks, and related things. And with that, if we have time for questions after this, I'll be in the chat; we'll all be in the chat with you, and we'll try to answer your questions. And please do come to the OKD working group meetings, especially if you're interested in deploying in any interesting configurations. We're always listening and looking for feedback, and happy to help answer any questions, too. 
So look for us all, Antonio, Christian, and myself, and others from the working group, in the Slack channels here and at other DevConf sessions. Thanks, Christian and Antonio, for taking the time today to record this. Hopefully we gave you enough depth to get you started and interested in participating in this collaboration between the OKD and Fedora CoreOS communities, and in keeping the open source pursuit of happiness alive. So take care, and we'll see you all again soon. Thank you. Thanks a lot, everyone, for attending this session. This was quite an interesting session, learning about OpenShift and the Kubernetes platform. Once again, thanks to Christian, Antonio, and Diane for sharing their time and presenting this session to the open source community across the world. And we are open now for any questions in the Q&A session; you can just post your questions in the chat box right now. Here we go, I just popped in again. I think we had a great session, and I don't see any questions popping up in the chat box, so I think everyone is clear on the session and got their answers from the presentation. So thanks, everyone, for attending this session.