So, I've been hacking on free and open source software for a long time now, almost 20 years going back. If you dig up my old posts on debian-devel: I helped out with the PowerPC port, I hacked on Emacs, that sort of thing. I've been at this a long time, I love it, and I'm going to keep doing it. I've been in the OS space for a long time, and now I'm in the CoreOS group at Red Hat, which I think is just a very cool thing to say.

It kind of boggles my mind that it was only a year ago that I was flying back to Boston from the last DevConf. My plane landed, I turned on my phone, and holy cow, my phone just exploded, because Red Hat had announced we were acquiring CoreOS. Now, I'll be honest with you about that time: I helped create Atomic Host for Red Hat, and it was a direct competitor, a direct response to CoreOS Container Linux, because Container Linux was getting an immense amount of traction, even from paying RHEL subscribers. They said: we love this model, and we like Container Linux. When I was on calls with customers, they'd ask, why don't you do this? So when Red Hat acquired CoreOS, there was a lot of uncertainty for me. What does this mean? What's going to happen? There's so much overlap, right? And obviously there are other products involved too. So I'm going to tell you a lot about what we've been doing for the past year, and I think it's pretty cool.

Before I dive into that, though, I want to talk a little about why I'm here, which, as I mentioned at the start, is specifically to work on free and open source software. I think it's very important to our society. Software today is like electricity: electricity laid a foundation for our society, and everything depends on it now. The same is true of software. Every industry now has to hire software engineers and get involved in software; maybe you're doing machine learning for your crops and crunching data. It's pervasive in our society. Nearly everyone has a cell phone; it touches everyone. And the thing is, proprietary software takes away a lot of control over your life, because you just can't change it. I read a news article a while ago about how some TV manufacturers are now gathering data on what you watch and sending it off, and that's actually allowed them to reduce the cost of the TV a lot. Stuff like that, I think, is pretty evil. We should be in control of our software and devices. Free and open source software is a counterbalance to that.

Now, I'm in the OpenShift group; the CoreOS group is part of OpenShift. Why OpenShift specifically? How do I carry that passion for free and open source software into this? It turns out the same point about control applies not just to people but to businesses. If you're a business and you outsource, say, your database to a third-party service, you're outsourcing a lot of your business. If that database goes down and the vendor isn't responsive, your business goes down with it.
Especially in the public cloud space, there's been a rise of proprietary services, and some of them are very good, very compelling. I think the future is hybrid: free and open source software is never going to outright defeat proprietary software, but we have to be there to counterbalance it, to give you free and open source options. Kubernetes has gained a lot of traction as an abstraction layer, so that you can take the same containers and deploy them to the public cloud or run them on premise: the exact same app. We owe a lot of credit to Solomon Hykes, who invented Docker, for the original Docker manifesto of a container you can carry around. That's a lot of why we're here. And we're not just making an infrastructure layer: you can also get a build of Postgres, for example, from our registry. That's a big deal. A couple of years ago I might not have believed that Red Hat would ship a Postgres container that runs exactly the same way inside Kubernetes across public and private cloud.

Now obviously, in order to run containers, you need an operating system. Red Hat Enterprise Linux is by far our biggest product, and while there are a lot of things you could say about it, I think one of the number one things in the software space is the length of its lifecycle. It's easy to create something; it's hard to maintain it over time. People want to deploy their software and then go on to do other, more valuable things; they don't want to spend a lot of time maintaining it. With Red Hat Enterprise Linux, if you installed RHEL 5, which came out around 2007, you can still buy Extended Lifecycle Support for it. That's a long time. And for a lot of businesses, say you deploy a server in your factory, that's a pretty compelling value.

But containers fundamentally change how we think about the operating system, in so many ways. I remember when Docker first came out, one of our most senior engineers at Red Hat pointed out: when we go to build software, we already build it in something like a container. A lot of our software, the RPMs, are built inside this tool called mock. It's like a chroot, and it has some container features; it's kind of old school because it long predates Docker. And he said: wait, we build software in containers, why don't we run it that way? And he was right. Containers also change a lot about how you manage that software. It took the rise of Kubernetes to get beyond "okay, we have this filesystem tree" to all the infrastructure built around it. And part of it is the Docker layering model: the idea that you derive from a base image is actually very clever, and not something you get from mock (see the sketch below). So containers change how we think about the operating system, and the rise of CoreOS, whose OS was renamed Container Linux, and of Atomic Host, as I mentioned, was largely a response to the question: how do we think about the operating system now that containers exist?
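Just to make that layering idea concrete, here is a toy sketch. The base image and package are only illustrative; any base works, and the point is simply that the build starts from an existing image instead of assembling a chroot from scratch the way mock does:

```sh
# each instruction adds a layer on top of the base image rather than
# rebuilding a root filesystem from scratch
cat > Dockerfile <<'EOF'
FROM registry.access.redhat.com/rhel7
RUN yum -y install postgresql-server && yum clean all
EOF
podman build -t my-postgres .
```

Derived images share the base layers, which is exactly what you do not get from a mock-style chroot build.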
And just for those who don't know the background: the history is interesting, because ChromeOS used some Gentoo technology, and then CoreOS forked from ChromeOS, so it has that ChromeOS heritage. It had a dual-partition update system, and everything else ran in Docker. Docker was the thing embedded in and shipped with the OS, and the idea was that you run all your apps in Docker; they even had this toolbox command so that if you log into a host, you can still debug things. So: container-focused and auto-updating, which is a really big deal. On the theme of creating things versus maintaining them, auto-updates on by default is a bold move, and it changes how you think about the OS. I'll go into that a little.

CoreOS had not just the OS but also a distribution of Kubernetes called Tectonic. To summarize very briefly, Tectonic was much more focused on the operational side, whereas OpenShift has a lot of higher-level things. For example, OpenShift by default comes with the concept of a build, with triggering from Git and all of that, none of which is part of Kubernetes. If it were all designed today, it would probably be CRDs and custom operators. So OpenShift was very developer-focused, and it had an installer with what you could call an undercloud, written in Ansible, which created a lot of tension. A lot of the ideas in Tectonic were about the cluster being self-driving, and that's the DNA we've carried over to OpenShift 4.

Okay, that's all the background. So what is Red Hat CoreOS? Red Hat CoreOS is DNA from all of these things: Red Hat Enterprise Linux, CoreOS, Atomic, and it also inherits technology from Tectonic and OpenShift. If I say that to you, you'll probably ask if we're playing buzzword bingo; it's okay if you yell out "bingo". But honestly, it's hard to simplify it more than that. Oh, and it derives from Fedora CoreOS; I forgot to mention that.

So let's unpack this a little. I mentioned that Container Linux derives from Gentoo, and that there are RHEL customers who use it, as well as non-RHEL users we'd like to have as customers. Things could have gone very differently. I definitely wondered at times: are we just going to throw resources behind Container Linux as it is? That would have had profound implications for how Red Hat maintains things long term. Container Linux has always shipped a new kernel; RHEL 7 has long had a more stable, older kernel, but with backported features. So when, say, Spectre hits and we need to improve GCC to add retpolines or something like that to harden the kernel, now we'd have to patch GCC in yet another place. Huge implications. Red Hat CoreOS is RHEL content: the same kernel and the same userspace, supported by the same people, because realistically we didn't have any other option.
Now, I would love to take more of that DNA, the fast-updating model of Container Linux, into this, but that's a huge topic I'm not going to dive into much. There are two technology pillars here. The way I'd summarize our initial planning is that we realized we wanted to do a reset. We knew we wanted that on the OpenShift side, because the Tectonic installer solved a lot of practical problems with the openshift-ansible approach that I won't get into in this talk; we really liked the Tectonic approach. And it turned out Ignition was one of the technologies the CoreOS folks liked most, and Tectonic was designed for Ignition, so I'm going to go into that. And we took the rpm-ostree technology from Atomic to replace the dual-partition scheme.

One of the biggest changes, which I'll demo later, is that it's not just an OS: it comes out of the box with opinionated management tools. The one thing I'd like you to take away from this talk is that Red Hat CoreOS is an OS that comes with its own operator. Actually, when I say "operator", how many people know what that means, the concept from CoreOS? Some, not many. Okay, I'll cover it briefly before I continue. In Kubernetes you deploy containers as pods and so on; an operator is a way to describe the management of your application as custom resources, so your app itself appears as custom Kubernetes resources, and you can scale it and edit it the same way you do native Kubernetes resources. This is by far one of the biggest changes, and I'll get to it. And again, Red Hat CoreOS is designed for OpenShift and lifecycle-bound to it.

So let's dive in a little: why Ignition? If you go to that link (I'll post the slides afterwards), it's interesting, because Container Linux never had Kickstart. The first thing you'll see on that page is that Ignition runs very early in boot, which lets it do things like change the partition layout, which you can't do from cloud-init. cloud-init has a lot of problems that stem from running in the middle of the boot process: you may want to add systemd units, and then you get into this weird loop where you're booting halfway and then changing what you're booting into. systemd handles this mostly gracefully, but it turns out things just work a lot better if you do the initial setup from the initramfs: the system boots, and before the main system even really runs, it's configured. One of the things I find most compelling, coming from the Red Hat side, is that Ignition is a single language that, for us, replaces both Kickstart and cloud-init. You can take the same configs to bare metal on premise and run them in the cloud the same way, and we've built a lot on top of that.

The other thing I want to mention is that cloud-init, unlike Ignition, runs every time you boot, which means if the config changes it will sometimes re-apply it, and it's kind of a mess. Ignition was really designed for the immutable infrastructure model, although I don't like that terminology; I prefer to think of it as controlled mutability. You boot the system with the config, and then on Container Linux you get auto-updates of the OS. Immutable infrastructure doesn't mean you don't apply security updates, right?
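To make that concrete, here is a minimal sketch of an Ignition config. I'm hand-writing it for illustration; the hostname and unit are made up, and the spec version varies by OS release:

```sh
# a minimal Ignition config: declaratively write one file and enable one systemd unit;
# Ignition consumes this exactly once, from the initramfs, on first boot
cat > example.ign <<'EOF'
{
  "ignition": { "version": "2.2.0" },
  "storage": {
    "files": [{
      "filesystem": "root",
      "path": "/etc/hostname",
      "mode": 420,
      "contents": { "source": "data:,myhost" }
    }]
  },
  "systemd": {
    "units": [{
      "name": "hello.service",
      "enabled": true,
      "contents": "[Unit]\nDescription=Say hello\n[Service]\nExecStart=/usr/bin/echo hello\n[Install]\nWantedBy=multi-user.target\n"
    }]
  }
}
EOF
```

Note that file contents travel as data URLs, which is the URL-encoding I grumble about in the demo later.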
It just means that your config is fixed and doesn't change, it does what you need it to do, and it doesn't get applied halfway. If Ignition doesn't work, if for some reason your systemd units are broken, the machine won't boot at all. I'll get to the machine config operator, which extends this Ignition model. One more thing we did: Container Linux has SELinux, but in permissive mode. We did a lot of work here; Red Hat CoreOS will ship with SELinux enforcing by default, with Ignition supporting SELinux cleanly out of the box.

Okay, so why this rpm-ostree project? This was my project; I created it initially, and it's something I'm passionate about: you should be able to automatically update your OS, just like Container Linux. If your kernel crashes in the middle of an update, you should have either the old system or the new system. Applying updates shouldn't be a process that instills fear. If something goes wrong, you should be able to roll back, and that should always, always work. This is part of my passion for free and open source software: a problem we always have, and Container Linux had it too, is that we really want people to try the new stuff, report whether it works, and fall back if it doesn't. That's part of the idea behind rpm-ostree: you always have a known-good system to go back to, and it's transactional. There are a lot of details on why it's better than a dual-partition system; among other things it has overrides and layering, and you can build the initramfs on the client side if you want, in addition to on the server. OSTree is actually very popular in the embedded space, so it's a proven system. It's worked, we shipped it, and we're going to continue to use it.

One thing I do regret, where I think I missed part of the vision of Container Linux, is the automatic updates. We never did that for Atomic Host, and that's where a lot of the real value is. The transactional part is good, but it's only part of the implementation; the point is going all the way to automatic updates, because that really changes how you think about agency in updates. If I log into a system and type apt-get upgrade or yum update and something blows up, it's kind of my fault, right? I typed that command. But if you're doing automatic updates and something goes wrong, all of a sudden it's the fault of whoever is providing that update. I really feel that's true; it changes the sense of agency over the system, and it changes how you think about delivering updates. It's really important to get right. Part of what we're going to be doing is staging updates, making sure not everyone gets the same update at once. Container Linux has been doing this for a long time; they roll out updates slowly. We also didn't have opinionated reboot management; Container Linux had a couple of generations of tools there, like locksmith, for managing the reboots of your servers and making sure they don't all happen at once.

One way I like to describe the architecture now: think about the OS itself in /usr like a container image; think about your /etc directory, which is mostly configured via Ignition, like a Kubernetes ConfigMap; and think about your /var partition like a persistent volume. It's not actually implemented that way, but it's a good way to think about it.
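Coming back to that known-good-rollback model for a second: in shell terms, it looks roughly like this on any rpm-ostree system. A sketch; on Red Hat CoreOS the machine config operator drives these steps for you, as I demo later:

```sh
rpm-ostree status     # shows the booted deployment plus the previous/pending one
rpm-ostree upgrade    # stage a new filesystem tree; the running system is untouched
systemctl reboot      # atomically switch into the newly staged deployment
rpm-ostree rollback   # if the new tree is bad, set the known-good one as default again
```

If the machine dies mid-upgrade you simply boot the old deployment; there is no half-applied state.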
So Container Linux, like I said, derives from the Gentoo and ChromeOS dual-partition model, which has a custom update payload format. And OSTree was yet another thing on top, so admins had three kinds of versions to worry about: your OSTree versions, your container versions, and your RPMs. A lot of tension there. What we're doing for Red Hat CoreOS is embedding the OS update inside a container, so you don't have to care about OSTree: the system will just auto-update (I'll get into this), and the operator understands how to extract and apply it. One of the biggest things a lot of our customers hit is wanting to mirror updates on premise. Before, you had to learn how to mirror OSTree repositories too, which it supports and which is documented, but you also had to mirror the container images. By embedding the OS update in a container, we just solve the mirroring issue: if you want to mirror all the containers that comprise OpenShift 4 (I'll get to the release payload), you mirror the OS exactly as you mirror your containers.

As a brief aside: why don't we just make the OS itself a container? That would be a profound leap into the unknown; it would impact everything about how we manage the OS. There are people doing this, like Rancher, and they sort of ended up with a two-level system, a system Docker and then a separate Docker, because a lot of people just want to blow away all their container images, and you don't want that to remove your OS, right? You'd have to teach Podman how to handle this one special container. We're not going to go there.

This is probably one of the biggest changes, though: the lifecycle binding problem. We created huge issues for ourselves with RHEL. RHEL 7 came out, then containers happened, so we added RHEL Extras, and then we added OpenShift on top. All three of these run on different lifecycles; it's hard to even describe how much pain we inflicted on ourselves with this. And we added Atomic Host on the side, so you have this huge matrix. With Red Hat CoreOS, we're making an OS that's always tested to run Kubernetes and OpenShift, because the OS itself comes as a container in the release payload. The thing that you run, we've tested vertically, as a stack.

Now, one of the biggest changes in OpenShift 4, which I'm only partially covering here, is that concept of a release payload. There's one container that has references to all the other containers that comprise the platform, and a high-level operator that manages all the sub-operators and deploys the new containers on updates. The operating system is just another entry in that payload of containers.
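You can actually ask the CLI to dump what a release payload references. A sketch: the pullspec is a placeholder for whatever release you're pointed at, and machine-os-content is the name the OS image goes by in the 4.0-era payloads, as far as I recall:

```sh
# list every container image a release payload references
oc adm release info <release-pullspec> --pullspecs
# the operating system is just one more image in that list
oc adm release info <release-pullspec> --pullspecs | grep machine-os-content
```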
Again, a lot to cover here. Another huge change we made is that we decided to embed the kubelet and the container runtime in the host. One thing we had tried really hard to do before was to containerize the container runtime. The idea, which again kind of came from Container Linux, was about versions of Docker: I want to use a newer one, or this newer one has a bug and I want an older one. So we wanted to get to a model where there's this base we update, but then how do we handle the stuff in the middle? On the Atomic side we had system containers, which predate Docker-style ones; I won't go into detail. Container Linux was developing a thing called Torcx, designed to overlay additional pieces, like different versions of Docker, and it was actually in use by Tectonic: you'd boot the OS, but it would basically ignore the version of Docker in the OS, because Tectonic wanted a vertically tested stack. So this is really one of the biggest changes in how we deliver OpenShift; it's hard to even begin to describe how many problems we're solving with it.

Now, when I mention an installer, I'm not talking about Anaconda or Ubiquity or something that takes an OS and puts it on a disk. I'm talking about the OpenShift installer: when you use the installer, you get a cluster, not a single machine. It has an architecture derived from what the Tectonic developers called "track 2". Their initial installer didn't have everything under management, and some things were hard to upgrade, so it was redesigned, and we repurposed that and integrated it with OpenShift. So rather than Ansible, the cluster is self-driving. The installer has a bootstrap node: you boot this one node, the masters come up, download Ignition from it, and bootstrap into an etcd cluster. From there you can tear down the bootstrap node, and you have a cluster that drives itself.

If you look at other projects, OpenStack is a really good example: it has this concept of an undercloud. It's interesting, because they have this project Ironic, still one of my favorite project names, which is basically: let's use OpenStack technology to manage the bare metal. What if we think of our hardware machines as instances, like in Nova, and manage them the same way? Because if you have an undercloud, it creates a lot of tension over where you configure things. The OpenShift 3.x path had a lot of this tension: do you edit a Kubernetes object, or do you edit an Ansible playbook? Those are radically different things with radically different trade-offs. Among other issues, Ansible requires SSH'ing to each node, which is a huge problem at scale. So anyway: the installer is self-driving, which is an immense, immense change.

For bare metal, the path we're following for Red Hat CoreOS is very much the path Container Linux followed: you have a disk image, you copy it to disk, and that's it. Then on boot, Ignition runs and configures the machine. We don't want two ways to configure the OS, and this circles back to Kickstart versus cloud-init and how I configure my partitions and everything else: for us it's always Ignition. And furthermore, that Ignition is always under management.
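To go back to the installer for a second: driving that bootstrap flow looks roughly like this. A sketch from memory of the beta-era openshift-install; prompts and flags may differ by release:

```sh
# answer the prompts (platform, pull secret, SSH key) to produce install-config.yaml
openshift-install create install-config --dir=mycluster
# boots the bootstrap node, the masters pull Ignition from it and form etcd,
# then the bootstrap node is torn down and the cluster drives itself
openshift-install create cluster --dir=mycluster
```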
Okay, and I'm still not done with the biggest changes. I mentioned that with Atomic Host we never really built much management tooling around it; part of the Tectonic group had created operators to manage the Container Linux updates. And you really want the team developing the management tooling to be integrated with the software it manages; that's the ideal case. So with Red Hat CoreOS we have an integrated team: we build the OS, and we're building an operator to manage it as if it were a Kubernetes object, integrated with Kubernetes. This is a new code base, inspired by and derived from Tectonic technology, but expressly designed for Red Hat CoreOS. It's called the machine config operator, but you could really think of it as the Red Hat CoreOS operator; that's a valid way to think of it.

There are four components. There's a pod, the operator proper, which manages high-level status. Like a lot of Kubernetes machinery, there's a controller, a reconciliation loop that tries to synchronize the current state to the desired state; if I don't run out of time I'm going to demo all of this. There's also a component that serves Ignition: when you boot a new node, it talks to the cluster and says, give me my Ignition config. So all of that is managed by the cluster; when you bring up a machine, it's talking to the cluster itself for configuration, not to some undercloud that's managed a different way. And on each node there's a daemon, a Kubernetes daemonset, that talks to rpm-ostree to do updates and also reconciles Ignition configs.

Diving into the machine config operator, there are a few concepts. A machine config is something you can think of as a fragment of Ignition. If you want to configure anything on Red Hat CoreOS, which covers basically everything between the kernel and the kubelet, including the kubelet config (so not things that are actual containers, but everything in the OS itself), it's Ignition configs managed by the machine config operator. A machine config object is Ignition; we ship some pre-made ones, and you can create your own. A machine config pool, then, is how we manage rolling a new configuration out across the cluster. So again, rather than an undercloud, the machine config daemon, the operator, and the controller work together to reconcile the operating system from its current state to the desired state.

It manages reboots. When you apply a new config, say an OS update, it defaults to a max-unavailable of one, so you're only rebooting one node at a time, and it makes sure to drain the pods on each node so they get rescheduled elsewhere. So you get zero-downtime updates, all integrated. In terms of things that sit between the OS and the cluster, it manages SSH keys (the installer takes SSH keys; you provision them in the installer, they end up in the install config, and we can roll them out) and kubelet configuration. So we've basically unified config management and OS updates into this machine config daemon; it's all one thing. If you think about the state of your system, it's basically a two-tuple of the OS version and the config, and that's what we manage as a unit.
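As an illustration, here's what a machine config might look like; this one is hypothetical, writing a message-of-the-day file to every worker, and the Ignition spec version inside varies by release:

```sh
# a MachineConfig is a fragment of Ignition plus a role label selecting a pool
cat <<'EOF' | oc apply -f -
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  name: 99-worker-motd
  labels:
    machineconfiguration.openshift.io/role: worker
spec:
  config:
    ignition:
      version: 2.2.0
    storage:
      files:
        - path: /etc/motd
          filesystem: root
          mode: 420
          contents:
            source: data:,Managed%20by%20the%20machine%20config%20operator%0A
EOF
```

The worker pool then rolls this out one node at a time, draining and rebooting as it goes.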
Speaking of SSH keys: that's one of those places where the installer uses technology that isn't Kubernetes, since it bootstraps itself, but the SSH key you provide ends up as config in the cluster, and from there the machine config operator can take over. If you want to change your SSH keys, say an admin leaves the company, you edit that machine config and it just gets incrementally rolled out to your cluster.

So let me finally try to do a demo. Can you all see that okay? Okay, ten minutes. I'm logged into an OpenShift 4 cluster right now. I talked about the release payload: there's a cluster version object that describes everything in your cluster. You want to know what you're running? oc get clusterversion. There's also oc adm upgrade to find a new release payload and initiate an upgrade of your whole cluster. Again, it's all self-driving: OpenShift 4 is an operator-managed Kubernetes distribution. If I look at the cluster operators, there's a whole bunch: how we manage the networking plane is an operator, it's not Ansible or something else; the cluster drives itself. DNS, and so on. Two of the most interesting ones, in my opinion, are the machine API operator and the machine config operator I've been talking about for the OS; they work in concert.

Let's look around a bit. These are all the Kubernetes namespaces that make up an OpenShift 4 install; a lot of our operators live in their own project, or namespace. Now I want to look at the pods. This is the machine config operator: if you want to look at the state of the operating system, this is where you dive in. Let's look at the machine config objects; I'll open this config. Again, it's a Kubernetes object, and inside is Ignition JSON. I'm going to guess most of you have not looked at Ignition JSON before, but it's just a declarative way to say "create this file with this content". It's URL-encoded, which is something I want to fix. You can see that when a Red Hat CoreOS node boots, it's not configured; there's a lot of configuration to add, certificates and all that, and Ignition is managing all of it. So the machine config operator takes these fragments of Ignition, and these last two objects are rendered configuration, the final configuration.

Let's look at a machine config pool; there are two. If I want to roll out, say, a file to all my worker nodes, the machine config pool object is how the controller manages rolling that config, or an OS update, across my pool of worker nodes, and the same for the masters. If I edit the kubelet config, it makes sure, again, to synchronize the desired state with the current state. There's the machine config pool.

Oh, and I also want to show this: let's look at my Kubernetes nodes, this one being a master. You can see that a lot of this works through annotations on the node object: the machine config daemon and the controller work together to say "here's my current state", and they basically communicate through these annotations on the node object.
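If you want to poke at the same objects on your own cluster, these are the reads I'm doing. Resource names are from the 4.0-era machine config operator, and the node name is a placeholder:

```sh
oc get clusterversion          # the release payload the cluster is running
oc get clusteroperators        # all the operators driving the cluster
oc get machineconfigs          # Ignition fragments plus the rendered configs
oc get machineconfigpools      # rollout state for the worker and master pools
# the daemon/controller coordination is visible as node annotations
oc describe node/<master-node> | grep machineconfiguration.openshift.io
```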
And then finally, the operator itself. If we look at the machine config operator, it's basically providing a higher-level status, because in a lot of cases you want to know how the whole cluster is doing, masters and workers both; and we actually support creating other kinds of machine sets, like infrastructure nodes and that sort of thing. So the operator watches that whole status and reports up to the top-level cluster version operator. All of the state is very visible to you. OS upgrades, and I just can't emphasize this enough, are represented in Kubernetes itself.

Okay, how much time do I have? Five minutes. All right, I'm going to have to skip some stuff, but I do want to demo this because I think it's pretty cool. I talked about the machine config operator; another component, derived from the upstream Kubernetes cluster API work, is the machine API operator. In a public cloud, on infrastructure-as-a-service, it's what talks to that cloud when, say, I want to scale my number of nodes. It has a concept of a machine, which you can think of as an Azure, AWS, or OpenStack instance (not a Kubernetes node yet), and a machine set with a number of replicas; in this one the replica count said two. I'm actually doing this demo from a libvirt install on my laptop. If you want to scale the number of nodes in your cluster, and I think this is super cool, you just scale up the worker machine set, like any other Kubernetes object, just like scaling the replicas of a pod. And again, this is part of that abstraction across clouds: they all have different APIs, but by thinking of things this way you can take the same management tools and the same workloads across public clouds, between AWS and Azure, or to on-premise OpenStack. And you could definitely imagine implementing this on top of a bare-metal provisioning system. When I scaled that machine set, the new node came up, talked to the machine config operator, said "give me my Ignition config", and joined the cluster. It's under management from the start.

Let me see if I can demo one more thing: operating system updates. Right now, by default, updates aren't enabled if you try the beta, so this is not how we expect you to do updates; I'm basically overriding the system, but I want to show it at a low level. Here I'm SSH'ing into a node and running rpm-ostree status, which shows me my OS version. There isn't really a yum upgrade on this system. And we actually taught rpm-ostree to know it came from a container: in that status command, it shows the OS update was pulled from a container image. So here I have a machine config object, which is almost plain Ignition except it also has this osImageURL field. I created that machine config object and pointed it at a container that has a new OS version. What you can see here is that machine config object and then a new rendered config, the unified one; that string is a checksum of all the inputs. And what happens now is the machine config pool says: oh hey, there's a new machine config.
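Roughly what I applied looks like this; purely illustrative, not the supported update path, and the image pullspec is made up:

```sh
# a machine config that is mostly Ignition, plus the extra osImageURL field
# pointing at a container that carries the new OS tree
cat <<'EOF' | oc apply -f -
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  name: 99-worker-osimage
  labels:
    machineconfiguration.openshift.io/role: worker
spec:
  osImageURL: quay.io/example/machine-os-content:new-version
EOF
```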
Then it incrementally rolls that OS update out to the cluster. (I probably should have edited this bit out, but I was fumbling around trying to figure out how to edit asciicasts.) Anyway, you can see that the osImageURL ended up in the rendered config. Now we're looking at the machine config pool, and you can see the status changed to updating: the "updated" condition flipped to false, meaning it's in the process of reconciling this state. "Degraded" is what happens if a node somehow goes out of management; I'll get to that in a second. So now I'm SSH'ing into the node, and you can see OSTree running in the background, getting the new update ready. It hasn't drained the node yet: your cluster stays running, the node stays running, all the pods stay running until the very last moment, when the update is fully staged and all the config is ready, and then it initiates a reboot. This asciicast got a little messed up, sorry. What I'm showing here is that after the OS update was applied, I SSH back into the node, and you can see I've rebooted into the new OS version. Because of how OSTree works (it's not snapshots; it's versioned filesystem trees), I booted into the new one, and the old one is still there if you need to go back.

There's more I could demo, but let me cover what I haven't. We're on a dual-track path: we're introducing not just Red Hat CoreOS but also Fedora CoreOS, which is the upstream for Red Hat CoreOS. Part of it, anyway: not the OpenShift part, but the OS part. I've heard from a lot of people who like some of the CoreOS technologies; they like OSTree, or they like Ignition, or whatever. Our upstream project for building all this is called coreos-assembler, and like everything I work on, it's free and open source software. So if you want to do something custom, it's there, and we're going to keep maintaining it. It started out as some bash scripts gluing other things together, but I think it's turned out pretty well. And yes, Fedora CoreOS is our upstream: if you want to improve something, please join that project.
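If you want to play with coreos-assembler, the invocation looks something like this. From memory, so check the project README for the current incantation; building and testing images wants /dev/kvm:

```sh
# run coreos-assembler from its container image: init a config repo, then build
mkdir fcos && cd fcos
podman run --rm -ti --device /dev/kvm -v "$PWD":/srv \
  quay.io/coreos-assembler/coreos-assembler init https://github.com/coreos/fedora-coreos-config
podman run --rm -ti --device /dev/kvm -v "$PWD":/srv \
  quay.io/coreos-assembler/coreos-assembler build
```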
So here's the summary: Red Hat CoreOS, an operator-managed OS, designed for OpenShift and Kubernetes. And if you want to try it today, you can get it via try.openshift.com. That's it; I think we have ten minutes for questions. Yeah, over here.

Okay, so the question is: what's the boundary between Fedora CoreOS and Red Hat CoreOS? We definitely see the core technology pieces, the Ignition part, the rpm-ostree part, the bare-metal installation path, as shared between Fedora CoreOS and Red Hat CoreOS. Now, we're not baking Kubernetes into Fedora CoreOS, and that gets into an interesting topic we're actively discussing right now, Kubernetes in Fedora CoreOS. It gets tricky because we have a project/product split: Fedora CoreOS is our upstream, but a lot of the higher-level components live in OpenShift. I hope that makes sense.

From the audience: "It doesn't, really. We were happy that CoreOS was just a minimalist distribution to run containers; what will happen with it? To be honest, this came to us in an ugly way. Moreover, I completely disagree that rpm-ostree is more used in embedded than the two-partition scheme, because the two-partition scheme is what Android uses, and Android is definitely the most important embedded system. We use OSTree by default, but still: what is the future of CoreOS? Because to me it looks like CoreOS is dead and everything is moving to OpenShift. What's the benefit for us?"

Okay, I'll take that. The question was from someone who has built on top of CoreOS Container Linux: what's their future? They don't want to move to OpenShift. Absolutely, that's where this Fedora CoreOS thing comes in. It's going to be a free, open source project, and in a lot of ways it will resemble CoreOS Container Linux. Now, we've had a lot of discussions about exactly how it should work; for example, Container Linux ships with networkd, Fedora has never used networkd by default, and there are a lot of implications to all of that. But I think we want a relatively easy transition, especially because we process Ignition, which came from Container Linux. On the dual-partition point and embedded systems: when I say rpm-ostree, it's OSTree underneath by default. It's an image system, not a package system, so it has all those transactional properties; it doesn't go through libRPM to apply things. It basically takes RPMs as inputs, but it's OSTree that's managing things on disk; we've rewritten that part. So if you want to do something custom that's not OpenShift, that's where we'd like to hear from you, in the Fedora CoreOS project, if that makes sense. Okay, thank you.

Yep, all right. Okay, so the question is: is there a detailed migration path from v3 to v4? That's a very complicated question. OpenShift 4 is a fundamental rethink of how OpenShift is installed and managed. I think we all looked at the state of things and said: we need to reboot how we're doing this, and that comes with some powerful benefits. It would have been very difficult to do everything I've demoed here on a purely incremental path from what we were doing before. Now, honestly, I'm a low-level OS guy: talk to me about C code and opening files via file-descriptor-relative APIs; I live in that world of OS updates and all that. So you're going to get more communication about the v3-to-v4 transition over time. We've definitely been talking about it a lot; I can't give you the definitive answer right now, but we hope to make it an easy transition, I have to say.

Oh, and there are related talks: there are talks about Fedora CoreOS, about Ignition, and other things, so if you want to learn more, please go to those. And I'm about out of time. So thank you all again. Thank you.