All right, hey, welcome everyone. I'm going to talk about Fedora CoreOS and live PXE booting. I'm going to start pretty broad, because there's a lot here that's pretty new, and then do a deeper dive into PXE and Ignition, including some advanced Ignition. I'm actually taking over this talk from Andrew Jeddeloh, a former Ignition maintainer who has since moved on to other things. He did a lot of work on Ignition; I talked with him some about this talk, and hopefully the vision he wanted will come through here.

I work on CoreOS and OpenShift, and I've been contributing to free and open source software for a long time. I could start all my talks by saying why I do what I do: I love working on free and open source software. It feels like we're all collaboratively building something together, and software is a foundation of our society. What we're building now is going to power businesses, nonprofits, and all sorts of things for years and years to come. It's really fun to do, and it's important.

So let's start with Fedora, since Fedora CoreOS is a bunch of things; remember we're talking about Fedora CoreOS and PXE, so let's begin with the Fedora part. In the ecosystem of our distributions, Fedora is the leading edge: it's where new features land first (we have a number of those on the Fedora CoreOS side, and I'll give some examples), and it's where we track the latest Linux kernel. Crucially, it's also a great place to start if you want to contribute to the Red Hat ecosystem, because Fedora leads into CentOS and Red Hat Enterprise Linux. It's a very vibrant place; I love the activity on the development mailing list. And as I'll mention later, Fedora CoreOS is Fedora.
So we inherit everything that happens there. Obviously Fedora is the upstream for RHEL. But one thing that comes up a lot, and it's really important here, is that Fedora is not just a desktop system. I can't tell you how many people I've talked to who say "I run Fedora" and mean a desktop; they almost never mean a server, maybe one percent of the time. That's something we're trying to change, and it's actually a pretty profound question: how do we make Fedora into something you want to use as a server?

Okay, so before we get to the CoreOS part of Fedora CoreOS: the whole technology industry kind of evolves in layers. We added virtualization, and virtualization didn't really change how you manage the operating system; the role of the OS worked basically the same. We added things like virtio to optimize guest/host communication, but broadly speaking, if you ran Puppet on your bare metal machines, you ran Puppet inside your VMs too. Containerization, though, deeply impacts how we think about the operating system, and you see this tension over and over in Fedora CoreOS, because we're not getting rid of the way things worked before. You still have traditional Fedora; Fedora CoreOS is optimized for containers, but we can't just stop doing what we did before, because it's heavily, heavily used. A great example of this tension came up in the minimization talk: if we ship an Apache container today that includes the RPM, that RPM drags in systemd, because that's how you run Apache if you yum install it on a traditional system. So we have these interacting ecosystems, but containerization is deeply impacting how we think about the operating system, and this trend really accelerated with the introduction of Container Linux.
Red Hat acquired CoreOS about three years ago now, and with Fedora CoreOS we're trying to inherit the direction and momentum that Container Linux started, because Container Linux was, and still is, very widely used for a whole lot of different things. There are a couple of ingredients I think we're preserving; I'll get to that on the next slide. The important thing here is that Fedora CoreOS is a successor to both Container Linux and Fedora Atomic Host. There was a comment on LWN from someone... well, I guess I won't go into that. What I'm trying to say is that we're really combining these forces and putting a lot of effort behind Fedora CoreOS. It means a lot to our team, and we really want to inherit this mantle and carry the momentum of containerization forward.

Another aspect of Fedora CoreOS is that it's also the upstream for RHEL CoreOS. I'm not going to talk about RHEL CoreOS too much in this talk, other than to say that we have tied it closely to OpenShift currently; "a Kubernetes-native operating system" is the way I like to describe it. But the core ingredients of what we think of as CoreOS go into RHEL CoreOS too. So if there's something you want to land in RHEL CoreOS, please contribute to Fedora CoreOS first. Everything we do there is open source.
We have a public community, and I think we're pretty good at the rough-consensus-and-working-code model. Again, we really want to preserve and build on the huge community that Fedora is, while also interacting with it; for example, we're pushing back against Fedora in certain ways, because we're trying to make a server you can install and run for a while, and there are a lot of changes landing. A great example is cgroups v2: Fedora CoreOS today still uses cgroups v1, because we include Docker, which doesn't support v2. So we do take some divergence there, but it is the upstream for RHEL CoreOS. And I really have to emphasize here that the vision is that it is Fedora. It's a different way of managing a Fedora system, but it should feel very familiar to long-time Fedora admins, and we also want it to feel familiar to Container Linux admins, which is a tricky balancing act in some ways.

Fedora CoreOS just recently came out of preview, so check out the Fedora Magazine blog post if you want more details on that. One of the things I think is most interesting is that we're also tying it to OKD, which is our upstream for OpenShift. If you haven't tried OKD, you can get a Kubernetes cluster the same way the OpenShift product works, except it tracks Fedora CoreOS. I'm really excited about what we're going to do here, because it's going to provide a lot of additional testing for Fedora as a Kubernetes host. Fedora tracks the latest Linux kernel, so if we want to prototype cgroups v2 and how it works in Kubernetes, having that land in Fedora CoreOS and OKD first will be a great way for us to do it. It basically ties communities together in a new way.

So we've talked about Fedora, and we've talked about CoreOS, but when I say CoreOS, what
does that really mean? We've said we're emphasizing containers and that it's server-focused, but let's get into the details. Fedora CoreOS is a fusion of technologies from the original Container Linux (which was called CoreOS until it was renamed; naming is hard), from Atomic Host, and from the traditional Red Hat ecosystem, for example SELinux. I can't tell you how much effort it was to make Ignition work with SELinux; no one will ever know how much effort that was. It was a lot. But we're fusing these technologies.

So what is Ignition? If you come from the traditional Red Hat ecosystem, Ignition is a single provisioning tool that works across metal and cloud, replacing both Kickstart and cloud-init. Ignition runs in the initramfs, and one of its core principles is that it runs exactly once, and it either succeeds as a whole or it fails. So if your system boots, you provided it an Ignition config, and that succeeded, your system is in the state described by that config; it doesn't vary. The great contrast here is cloud-init, because cloud-init runs right in the middle of your boot process, and if it fails, your system can end up partially configured. It might still be accessible over SSH, but you have to log in and check. If Ignition fails, the boot fails fast, right there in the initramfs, and you can't talk to the machine over SSH, which does make debugging a pain; one of the things we're planning is support for dumping the failure to a remote host. The principle here (I really don't like this term, but the most popular name for it is "immutable infrastructure") is that you're not logging into the machines periodically to move them to a new state.
Instead, you start your system in its desired configuration, and then you have automatic updates.

So, getting to automatic updates: rpm-ostree is the ingredient we took from Fedora Atomic Host. I've been working on the OSTree and rpm-ostree side for a long time, and I'm really passionate about making sure that updates are fully transactional and offline. Applying updates is just a crucial aspect of maintaining software infrastructure; I think it's somewhat irresponsible to stand up an operating system and not have a procedure and mechanism to keep it up to date. I do believe that over time we are improving the security and reliability of Linux and the ecosystem on top of it, so enabling updates is crucial.

Automatic updates on by default is true today for Fedora CoreOS, and this is a large part, maybe roughly 20 percent, of what we're focusing on; there's new tooling here. What actually drives rpm-ostree, which I didn't mention on the slide, is a tool called Zincati. It has pluggable infrastructure for coordinating reboots for updates, because if you run more than one server, you don't want them all to update at the same time, right? Container Linux had a couple of iterations on that. In RHEL CoreOS this is completely replaced by the Machine Config Operator; I won't go into that too much, just an aside.

And obviously it's container-focused. There are a number of other related projects, like toolbox. When you log into a node to debug something, rather than thinking "I'll yum install strace on the host", you run toolbox and yum install strace in there, or do your BPF tracing there. Don't put it on the host; do it in your toolbox container.
That separation keeps the host small, so when the next kernel security erratum comes out, you have a smaller update to apply; your debugging tools can wait.

Okay, so we've talked about Fedora and about CoreOS; let's talk about PXE. PXE is bare metal provisioning infrastructure. With this slide I was going to start describing PXE, but I realized I really want to make an entirely different point first: Fedora CoreOS isn't just about clouds, because containers apply everywhere. Containers are orthogonal to virtualization; you can use containers on bare metal, and that actually makes a whole lot of sense. Actually, I basically think virtualization should be inside containers. We want Fedora CoreOS to go wherever you want to go, and that includes public infrastructure-as-a-service clouds, whether it's GCP or AWS, or on-premise OpenStack if you want to virtualize; that's great, and we have builds for all of those. But we also want to run on your bare metal machines, because for me, part of working on free and open source software is that it's your computer: you should have control over your infrastructure. That's actually an important reason why you'd want to use Fedora CoreOS live PXE.

When we do builds of Fedora CoreOS, we get an OSTree update, which nodes can apply in place, but we also generate a huge variety of image types, like virtual machine images and this live PXE media. So what does PXE actually mean?
When you power the system on, it's in the BIOS or UEFI firmware, and that firmware does a DHCP request. If you've set up the PXE server, it replies and provides a kernel and an initramfs. And here's where it gets interesting: normally when a system boots, the initramfs transitions into the real root filesystem, and if you have a persistent install it works basically the same way here. But in this live setup, we run Ignition in the initramfs, the same way as when you boot Fedora CoreOS in a cloud or for a persistent install; where things start to diverge is that we have the full root filesystem inside that initramfs. So you press power on your server, and the firmware goes out over the network and retrieves all of this.

This is what it actually looks like to set it up. There are a couple of ingredients here, and for Linux admins, hopefully about half of this is familiar if you're already maintaining PXE infrastructure. The ip=dhcp argument, for example, tells the booted system to do DHCP as well. The rd.neednet argument is part of dracut; it says "please wait for networking". There are console arguments that say where to put the console. The ignition.* arguments are new to Fedora CoreOS and derivatives like RHEL CoreOS. The Ignition platform ID says this is a bare metal system, because Ignition needs to know how to fetch its configuration depending on what platform it's running on. For example, if you're running in Amazon Web Services, we need to fetch from the metadata service, which is on a link-local IP, so the OS needs to know what platform it's on; that's basically what's going on with this kernel argument. And then here's where you start to configure things: we're going to look at some Ignition configs, and what you provide here on the kernel command line is a link to your Ignition config.
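As a concrete sketch, a PXELINUX entry for this kind of setup might look roughly like the following. The file names, the server IP, and the config URL are illustrative placeholders, and the exact artifact names and arguments vary by Fedora CoreOS release, so check the current docs rather than copying this verbatim:

```text
# pxelinux.cfg/default -- illustrative sketch, not copied from the talk's slide
DEFAULT fedora-coreos
LABEL fedora-coreos
    KERNEL fedora-coreos-live-kernel-x86_64
    APPEND initrd=fedora-coreos-live-initramfs.x86_64.img ip=dhcp rd.neednet=1 console=tty0 console=ttyS0 ignition.firstboot ignition.platform.id=metal ignition.config.url=http://192.168.1.1/ssh-basic.ign
    IPAPPEND 2
```

The `IPAPPEND 2` line is the option discussed a bit later in the talk: it tells the firmware to pass along which network interface the machine booted from.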
So if you think about Analogy here as if you're booting an instance in infrastructure as a service like Open stack then this would kind of be what you would put in the user data field It's it's basically the the same thing. It's the node boots Ignition runs and it fetches this config and the config can start doing a whole bunch of other things Okay, so let me So if you want to actually try this On your infrastructure and you can do this in a virtual machine too I'm gonna I'm gonna demo that if you're doing bare-motor deployment So it actually highly recommend like kind of having infrastructure for testing Changes in virtual machines before you roll out to your bare metal. It's just a generally useful thing So basically you go to get fedora click on bare metal and virtualized and Over here. There's downloads for the kernel and the initial RAM disk now I already did this download because I'm sure as you all know don't trust the Wi-Fi Yeah, so what I wanted to demo here So this is the manager and I'm actually sort of cheating here because You can set up Pixi in Invert manager, you know, like basically you need two VMs and then you know You run a pixie serve on one and you actually kind of really want to run an isolated network for this So I'm sort of cheating here because I'm basically telling Livered and QMU to just directly boot the kernel which is almost the same thing that happens when you pixie because basically It you know instead of having the kernel be provided over the network. It's just directly entered by QMU So cheating, but it's it's basically almost the same thing. The only detailed here is some if you go back to This thing there's this Magical option IP append in the pixie linux config and what that does is it tells the BIOS to set to tell the operating system Which network interface it's booted from because that's actually very important in some scenarios like you might have a management network and a You know like a sort of front-end network. 
So I'm basically ignoring that part; I don't have multiple NICs in this VM. I have all the same kernel arguments that were on the slide, but the most important thing to look at here is the one at the very end: this ssh-basic Ignition file. Can you all see this? Okay.

One thing I didn't mention is that Ignition is intended to be a low-level language. It's JSON, which is human-editable, ish; you always forget to add a comma, or leave a trailing comma, and all that. You can edit it by hand, but the idea is that we have higher-level languages that compile down to Ignition, which is also a big difference compared to cloud-init. Part of the idea is that these higher-level languages can have specific knowledge about the operating system that turns into a bunch of things like systemd units and scripts. I'm not using much of that here; this is about the most minimal Ignition config you could make. It just has my SSH public key, creates a user walters, and adds me to the sudo group. (Enabling the sshd service is not actually necessary; I don't know why I added that.) And as you can tell here, I'm bad at key rotation.

All right, so let's go ahead and boot this. It's very exciting to watch Linux boot, but give it a minute. What we're actually seeing right now: we're in the initramfs, we did a DHCP request, and Ignition ran and fetched that config. I basically have these Ignition configs on my host, stored in git, and I'm just running a web server to serve them.

Okay... well, it's actually cool seeing things fail, because then I can be very descriptive about what happened: I rearranged where this stuff lives, and my web server was running in the wrong directory; I had moved everything into the assets directory.
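For reference, a minimal Ignition config along the lines of the one in the demo might look roughly like this. I'm sketching the Ignition 3.x schema from memory, and the spec version and the key are placeholders, so treat this as an illustration rather than the exact file from the talk:

```json
{
  "ignition": { "version": "3.0.0" },
  "passwd": {
    "users": [
      {
        "name": "walters",
        "groups": ["sudo"],
        "sshAuthorizedKeys": ["ssh-ed25519 AAAA... walters@example"]
      }
    ]
  }
}
```

In practice you'd usually write the higher-level YAML form and compile it down to this JSON, which is the "higher-level languages that compile to Ignition" point above.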
So, looking here, I basically have a couple of Ignition configs that I'll demo. Let me run the web server again. What you saw there is actually an interesting case to go into: when it fetched the config, it got a hard 404, and that made it fail fatally. If Ignition fails to fetch the config with something like an internal server error or a transient network error, it will sit there and retry; but if it gets a response saying the config doesn't actually exist, or the config is malformed, it will fail.

Okay, let's try that again. Okay, cool, we found the Ignition config this time. One of the nice things we actually took from Container Linux is changing the console to show your IP addresses and your SSH key fingerprints. That actually also involved a non-trivial diversion into making it work with SELinux, because we kind of had to tie together a couple of different processes, and when you tie together a bunch of different things, you need to change the SELinux policy. It's just another example of where we're preserving that Container Linux experience.

So I have the IP address it shows here, which is basically this SSH session. Okay, cool, I'm logged into Fedora CoreOS. This is one of those "we are deriving from Fedora" things: you can run rpm -qa, and it does work. This is actually quite important: what is inside my system? What kernel am I running? What version of systemd? If you're maintaining systems, you've got to have infrastructure to track what you're running, and RPM
could be improved, but it works pretty well for this: tracking what's inside your system. So we're not replacing that, and it means our team can focus on what we need to and inherit the Fedora kernel. Container Linux was kind of a derivative of Gentoo, and part of what that team had to do was maintain packages; we're doing less of that, and focusing more on the containerization stuff.

On the other hand, though, some things are different. By default, and this is something that was common to both Container Linux and Fedora Atomic Host and derivatives, /usr is a read-only mount. In fact, on any OSTree-based system we also make the root directory immutable, and basically the only writable directories are /etc and /var.

Really briefly, with OSTree the idea is that you have configuration in /etc, which is very familiar, right? But a corollary of this is that you shouldn't have configuration outside of /etc. All your configuration should be in /etc, period; further, all of your state should be in /var; and the code is in /usr. You have this really clean separation: there's the code that we manage.
That's read-only, and its source of truth comes from Fedora; configuration and state are owned by you. The difference between configuration and state is actually pretty important here, because broadly speaking, the best practice is to use Ignition just to write to /etc, and your state gets written by programs into /var. One thing we changed with rpm-ostree: for those of you familiar with traditional Fedora systems, the RPM database has lived in /var for a long time, which is basically wrong in this model, because it should be immutable and owned by the OS. So we moved it to /usr. It's an example of the model being applied.

This separation is pretty important to understand, so let me run findmnt. When I'm booted in this live system, notice there are no disks in this system at all; I'm just living inside a transient system. I actually don't like the term "live", because the antonym is sort of confusing: it implies non-live systems are dead, which doesn't make sense. A better word is probably "ephemeral". That's kind of the idea: everything here goes away, it just lives in RAM. If you look at the filesystem mounts, we have a writable /etc that's an overlayfs, it's basically all backed by a tmpfs, and we also have a read-only squashfs, which is where the real root is.

So, booted into this system, things work; let me pull busybox with podman. Hopefully this one works over the Wi-Fi; I didn't pre-cache it, but at least it's small. One of the important things here is that all the container state is stored under /var: when I did that pull, podman wrote the image data there.
It hasn't created a root filesystem for a container yet; the point is basically that when you did that podman pull, it wrote to /var and not /etc, because it's not configuration, it's state.

Cool. Okay, so I mentioned it's live, it runs from RAM; but you can have disks, and that's what I'm going to demo next. This starts to get into a trade-off between the two setups. One difference from traditional Fedora and RHEL systems with Anaconda: Anaconda basically is a live system. It runs out of RAM, its role is to install the OS, do the partitioning and all that on whatever disks you want to install to, and then go away. Here, we're emphasizing the use case where you can just run the live system persistently: you provide an Ignition config and run from RAM. One thing I actually forgot to add to these slides: if you do want to do an install to disk, running the installer is just something you run via Ignition on Fedora CoreOS. You can write a systemd unit that runs our installer and points it at a disk. It's just one of the things you can do, so it's basically a generalization of the Anaconda model. Another difference from Anaconda is size: at least some builds of Anaconda include basically a full desktop, and there are limitations on initramfs size, so there's a "stage two"; you boot the initramfs and it fetches a whole other filesystem. We aren't doing that yet. We may have to eventually; we'll see.

So I've talked a lot and given you a lot of demos, but maybe some of you have already downloaded the live image, maybe you're logged into the console on one of your bare metal servers, and you're thinking: this is cool, but why am I doing this?
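As a sketch of that "run the installer via a systemd unit" idea mentioned above, the unit you'd deliver in your Ignition config might look something like this. The device path, URL, and unit details here are my assumptions, not taken from the talk, so check the coreos-installer documentation for the exact invocation:

```ini
# install-to-disk.service -- illustrative sketch, delivered via an Ignition config
[Unit]
Description=Install Fedora CoreOS to disk
After=network-online.target
Wants=network-online.target

[Service]
Type=oneshot
# Assumed invocation: install to /dev/sda, embedding an Ignition config
# for the installed (persistent) system
ExecStart=/usr/bin/coreos-installer install /dev/sda --ignition-url http://192.168.1.1/persistent.ign
ExecStartPost=/usr/bin/systemctl reboot

[Install]
WantedBy=multi-user.target
```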
Right. So a good example use case here is on-premise, diskless systems where you're really emphasizing compute. I've definitely heard from some real customers with large fleets of bare metal servers who really want them all to run exactly the same thing; I'll get to the stateless part in a second. So there's no persistent storage. In their case I think they were doing NFS root, which we don't really support; I think we really want to emphasize this live PXE model instead, because honestly it's just kind of better. It just works better; I won't go into a lot of detail on why.

For the compute portion: let's say you're doing a bunch of numerical simulations. A great thing to do is package those as containers, and then write Ignition configs that run podman, pull those containers, do your computation, and then, say, do an HTTP POST of the results to some server. That's a good use case for this live PXE setup. Now, in this scenario, we're providing some base-level infrastructure: we're invested in Ignition, in providing an OS update stream, and in testing it, all that stuff. But it's up to you how you do orchestration; that's a bring-your-own scenario, which can be good if you're doing something custom.

So why wouldn't you do this? Well, one reason is that we're not really using the OSTree aspect of Fedora CoreOS here, because that's basically about maintaining in-place updates, and since we're intentionally not doing persistent storage for the operating system, OSTree doesn't do much. It can tell you what version the OS is, but not more than that. It is still useful, though. Which reminds me, I need to demo something else if I don't run out of time. Okay.
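A sketch of that compute pattern, again as a systemd unit you'd ship in your Ignition config. The image name, result paths, and endpoint are made-up placeholders for illustration:

```ini
# simulation.service -- illustrative sketch; image and URLs are placeholders
[Unit]
Description=Run one batch of the numerical simulation
After=network-online.target
Wants=network-online.target

[Service]
Type=oneshot
# Pull and run the containerized workload; results land under /var (state, not config)
ExecStart=/usr/bin/podman run --rm -v /var/results:/results:z registry.example.com/sim:latest
# Upload the results when the run completes
ExecStartPost=/usr/bin/curl -X POST --data-binary @/var/results/output.json http://collector.example.com/results

[Install]
WantedBy=multi-user.target
```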
Yeah, bring your own orchestration. It's just not our primary path, but we're definitely supporting it, partially because it's now critical path for the install to disk: the persistent path is basically a subset use case of this live PXE path, so we're heavily testing it. It's just that, for example, the Zincati process... let me demo that. If you're logged into a Fedora CoreOS system, Zincati is one of our custom services, and what you see here is that its systemd unit basically says "I'm not going to run if this is a live system"; there's a /run/ostree-live stamp file. So Zincati doesn't run, and doesn't try to apply OS updates.

I've definitely talked to a lot of RHEL customers who really want to make sure their systems are in the desired state. You know the situation: an admin logged into a node as a one-off and maybe changed something as a hotfix, and then that change just sits there and is a time bomb for later. You don't want that. What some of them do is, roughly once a month at least, flush and reprovision each node regardless of what happened to it. It makes sure that you really have all your state stored in revision control, or managed somewhere else, and you do it on a rolling basis. Honestly, I think that's very much a best practice. If you're running, for example, OpenShift 4 today, one of the cool things introduced with it is the Machine API, which manages provisioning of the underlying virtual machines in the infrastructure-as-a-service scenario. So you could totally write a controller that just periodically looks at uptime and does an "oc delete machine/...", and that will kill the VM; OpenShift will actually react to this, make sure it drains the node beforehand, and maybe scale up a new node in response. Definitely a best practice.
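Going back to the Zincati detail for a moment: the "don't run on live systems" check is just a systemd condition on that stamp file. Roughly, sketched from the description in the talk rather than copied from the actual unit file, it looks like this:

```ini
# Excerpt (sketch) of zincati.service: skip update management on live systems
[Unit]
Description=Zincati Update Agent
# The live initramfs writes this stamp file; if it exists, don't manage updates
ConditionPathExists=!/run/ostree-live
```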
I'd actually like to see that uptime-based reprovisioning built around the Machine API, and it's the kind of thing you could script in your infrastructure even if you're doing persistent installs. It's not something we provide code for, but you could do it.

And obviously, if you don't have any disks but you do have a lot of data, it does mean that if for some reason all the machines power cycle at once, maybe there's an electrical blip or something, then when all those machines come back up, they're all going to be fetching all of their data over the network again, and that can be a non-trivial cost. You just have to be aware that it basically makes applying updates more painful. But again, going back to control: they're your computers, and you have total control. Once you've configured that PXE server to boot a specific version, that's what you're going to run; nothing is going to change that unless you change it.

Yeah, so there are a lot of good docs on Ignition. One thing that needs to be emphasized: one of the cool parts about running Ignition in the initramfs is that it can do disk partitioning, which is absolutely critical if you're doing installs on bare metal, right?
The disk partitioning path also works in the cloud, so we have a provisioning path that works symmetrically across bare metal and cloud. If you want, for example, to create a separate /var partition on your Fedora CoreOS cloud images, that works great today, because we run early enough in boot to do it. Something we're also actively working on is support for dm-crypt/LUKS, so that you can encrypt the root filesystem; doing that in the cloud makes a lot of sense too. Sure, your AMI or whatever may say it's encrypted, but that's invisible encryption you can't see; they could just not be doing it, and you wouldn't even know. Whereas if dm-crypt runs inside the OS, it again comes back to control.

So these are the sort of core pieces you'd craft together in your scenario. You may have private CA certificates; that's something you'd include in your Ignition config. And, as I mentioned before, you have your numerical simulation packaged as a container, and your Ignition config has a systemd unit that says "podman run ...", and that could be it.

So let's talk a little bit about mixing the trade-offs of these two setups. If you have large container images... it's not just the OS size; right now I think the live PXE initramfs is something like 600 MB. If you have hundreds of megabytes, maybe gigabytes, of container images, then every reboot where you're pulling all of those can be really expensive. One thing you can do is basically have a persistent disk just for /var. So, let me show you: that's the rendered Ignition config, which I haven't demoed yet; that's the JSON. This is the Fedora CoreOS Config. There are a couple of things going on here.
It's a little bit more advanced. I included my SSH key, and what you can see is a systemd mount unit that basically mounts the /var partition. And here's where Ignition comes in: this is the storage section of the Ignition config right here. We are creating the partition with the label "var", which is what the mount unit finds, we're giving it a size, and we're saying we want it to be XFS in this case. Now, you can do a lot more advanced things, and I just want to emphasize: you can pass this exact same Ignition config to a cloud instance or to bare metal, and it works just fine.

So what I'm going to do here is go ahead and add hardware. So let's say I add a disk. I really want it to be virtio, because if you notice, my Ignition config references /dev/vda, which is a virtio device. And this would be kind of like a bare metal machine that happened to have some NVMe drives or whatever. Cool, so now my system has a disk. If I booted it with this config, that disk would just be completely ignored; it should be untouched, because nothing is configured to touch it. But what I'm going to do here is change the Ignition config I'm providing to this "var persist" one. Yep... oh, I think it should go back when I shut down. virt-manager, let's see... yeah, okay, now we're good. Yeah, virt-manager is really confusing: it basically only shows your changes once you shut down. Cool. All right, so now we're booting, Ignition is running, we're doing a DHCP request.
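A sketch of what that "var persist" Fedora CoreOS Config might look like, combining the storage section and the hand-written mount unit he describes. The size is illustrative and the spec version is an assumption; the device path /dev/vda and the "var" label match the demo:

```yaml
variant: fcos
version: 1.0.0
storage:
  disks:
    - device: /dev/vda
      wipe_table: true            # block level: rewrite the partition table
      partitions:
        - label: var              # the mount unit finds the partition by label
          size_mib: 4096          # illustrative size
  filesystems:
    - device: /dev/disk/by-partlabel/var
      format: xfs
      wipe_filesystem: false      # reuse the filesystem if it already matches
systemd:
  units:
    - name: var.mount
      enabled: true
      contents: |
        [Unit]
        Before=local-fs.target

        [Mount]
        What=/dev/disk/by-partlabel/var
        Where=/var
        Type=xfs

        [Install]
        WantedBy=local-fs.target
```

Because nothing here is cloud- or metal-specific, the same config can be handed to a cloud instance or a bare metal machine, which is the symmetry he's emphasizing.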
Yeah, and what scrolled by pretty quickly there is Ignition creating that partition for /var. And as I'm demoing here: rebooted, same IP address, reconnect. Now I do an lsblk, and what you can see is I have a partition mounted on /var. So what's going to happen here is, if I touch /etc/foo, that's going to go away. So again, that's the scenario where you want this setup but you want to cache your container images, or cache other data. You could imagine any kind of on-disk cache: if your software is written in a way where, say, it checksums its data, like an object store where everything is identified by its checksum, so you're very careful about how you manage state, then you can do that with external disks even in a live PXE scenario.

So if I run podman images, busybox will be there. I'll reboot the system. Again, in an ordinary live PXE scenario, if you reboot, everything goes away, because it's in RAM. Just give it a second... we still have to do DHCP on the interface... Ignition did not run on this boot, though. Or, no, it did run on this boot, sorry: yeah, we re-fetch and re-apply the Ignition config. In a persistent scenario, Ignition would not. So if we go back here, remember I touched /etc/something... yeah, it's not there. But podman images: I have busybox, right? So it's still there. So you can kind of mix the trade-offs.

So, thinking of my time: one of the cool things I think Andrew wanted me to demo, which I didn't quite get to, is that if you want to test something in the cloud, you could again boot that same Ignition config. If you want to capture your state, all you need to do is serialize /var, and basically you have your system: Fedora CoreOS is uploaded everywhere, you have your configured Ignition config, you have your state in /var. You can put those things together anywhere you want to have them. Yeah, so one thing I should definitely mention is this is not yet shipped by OpenShift.
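To make the ephemeral-vs-persistent split concrete, the demo sequence goes roughly like this (a sketch of the interactive session, not a verbatim capture; it assumes an Ignition config that mounts a persistent partition on /var, as in the demo):

```console
$ lsblk                       # /dev/vda1 shows up mounted at /var
$ sudo touch /etc/foo         # /etc lives in the RAM-backed live rootfs
$ sudo podman pull busybox    # image layers land under /var/lib/containers
$ sudo systemctl reboot
# ...after the machine PXE-boots again:
$ ls /etc/foo                 # gone: the live rootfs was rebuilt in RAM
$ sudo podman images          # busybox is still listed, served from /var
```

Everything under /etc vanished with the RAM rootfs, while the container image cache under /var survived on the disk.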
Yeah, let me just finish real quick, then we'll do questions. We're almost certainly going to ship this, but running it persistently... we don't have a story for how this would work with the machine-config operator, because it breaks OS updates, one of the most important parts of OpenShift 4, where you don't need to think about them. But yeah, one thing we do like about the live image in this scenario is that you can use it just to play around and test things, like "what are my NIC names?", because right now that's not an ergonomic thing to do.

Yeah, cool. So that was a demo of Fedora CoreOS live PXE, kind of a small subset of what we're doing, but I think an interesting one that illustrates a number of the trade-offs involved. Yeah, it's available on the website now. That's it.

So, do we have any... yeah: how to do kernel modules. Yeah, no, that's a great question. So that's something that heavily interacts with how we do OS updates, and we've gotten a lot of requests for it. I think the most recent approach is actually maintained by Dusty: kmods-via-containers. And there's a Fedora CoreOS tracker issue where we've debated this a lot. Put it that way, but I think that's the most promising approach we've got so far, so definitely check out kmods-via-containers. So that should just work, right? Right. What if you have an NVIDIA GPU? I mean, it kind of falls into the numerical simulation case, right? You want to load the NVIDIA module, which I think should work. One of the goals of this, right, is that you don't have GCC and kernel headers on your host, because again, that's other stuff that a "yum update" would drag in; we keep the host small, and you keep all your build stuff in the container. Cool. Any other... yep. This one, right? Right, okay?
So the question, basically, I think, is around how the disk is identified, and what happens when it's not formatted. Identifying the disk is actually very simple in this scenario, because there's only one disk, so I've just kind of hard-coded it. I think if you were in a bare metal scenario, you might end up having to do something where you template your Ignition configs depending on the particular server; you know, "on this class of server, my newer ones, they have NVMe drives, and you need to write /dev/nvme...". You could probably write a script that detected things and generated those systemd units; that would work too, if a little ugly. So, hard-coding /dev/vda is cheating in this scenario.

Yeah, it's up here. So... well, okay, right, there's a couple of chained things here. The storage section operates at the block level. This section says: on /dev/vda, first wipe the partition table, and then make a partition with the label "var". That's what happens at the block level; that's telling Ignition to find this device and apply the var partition. And then, I think I only glossed over this: you can configure Ignition to create the filesystem if it doesn't exist, and if it does exist and matches exactly what the Ignition config says, then it gets reused. That's what I was demoing here. So that was the block level, and then this is the filesystem section, which says: create XFS on that partition. Yeah, that's the key one here.

So the question is around bare metal and what happens if you specify the drive on the kernel command line. Basically: don't do that. So if you want to do a persistent install... I mean, I can go to the Fedora CoreOS docs. Is that kind of what you're getting at, if you want to do an install to disk? Okay.
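The block-level vs filesystem-level split he's describing corresponds to two parts of the storage section in the rendered Ignition JSON. A minimal sketch (spec version illustrative; the device and label match the demo):

```json
{
  "ignition": { "version": "3.0.0" },
  "storage": {
    "disks": [
      {
        "device": "/dev/vda",
        "wipeTable": true,
        "partitions": [ { "label": "var" } ]
      }
    ],
    "filesystems": [
      {
        "device": "/dev/disk/by-partlabel/var",
        "format": "xfs",
        "wipeFilesystem": false
      }
    ]
  }
}
```

`disks` is the block level (partition table and the "var" partition); `filesystems` with `wipeFilesystem: false` is what lets an existing matching XFS filesystem be reused across boots instead of being recreated.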
Yeah, that's the docs. So if you go to the Fedora CoreOS docs, there are actually two sections here now. Up here is what it looks like... yeah, I probably should have mentioned this in the talk, because it's a really good example. You basically have magical kernel arguments that are interpreted by the operating system to run a systemd unit, which runs coreos-installer. So what you're talking about is this "coreos.inst.install_dev=sda"; that's what it looks like if you want to do a persistent install. We've debated how the installer works recently, because again, it's a general system, and running the installer is just one thing you can do. So kind of the ideal is that you have a systemd unit that runs it, and you're not using the kernel command line.

"You don't know what the device name is, right?" So the question is about how you handle scenarios where you don't know what the device name is. It's a complicated one; there's not one answer to that. But I think one scenario here is: if you have a class of servers, you boot one of them live, and the way the disks are identified should be the same across that class. Now, it gets tricky, because... you can actually find disks by their, I may get this wrong, WWID, I think. Most modern disks basically have a unique ID encoded in them, and you can find them that way, if that makes sense. There's not one solution to this, but if your servers only have one NVMe disk, then it's kind of simple. Yeah, I can only give you one answer to that, but basically I recommend booting it live first and seeing what it looks like, and then coming back and crafting your configs, if that makes sense. Well, but once you know what the config is, then you come back and automate it, right? Like, once you know what the disk is, in the general case... yeah, hopefully. Cool.

So the question is: why wouldn't we want to use this, live PXE, for OpenShift 4?
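For reference, the "magical kernel arguments" path he's pointing at in the docs looked roughly like this at the time; the URLs are placeholders:

```
coreos.inst=yes
coreos.inst.install_dev=sda
coreos.inst.image_url=https://example.com/fedora-coreos-metal.x86_64.raw.xz
coreos.inst.ignition_url=https://example.com/config.ign
```

And on the stable-device-name question: udev exposes those hardware-encoded identifiers as symlinks under `/dev/disk/by-id/` (for example `wwn-0x...` entries for disks with a WWID), so a config can reference one of those paths instead of a raw `/dev/sda` that might change between servers.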
Yeah, so, like, the MCO today... applying OS updates, applying updates period, is something we put a lot of effort into in OpenShift 4, and a lot of effort in the MCO, managing OS updates. So, I mean, it could make sense for workers, it's just... right, yeah, that's right, I think that's one of the... yeah, thanks. That's definitely one of the biggest issues, and what Dusty replies is that, basically, the kernel that's booted is not under the control of the cluster anymore in that scenario, which makes it a kind of partially managed scenario. We could support that; I think it's just not a primary target right now. Again, what I would recommend to people who are running OpenShift today is: do that periodic reprovisioning, and that's going to get you 90% of the benefit of this without all the drawbacks, like every reboot re-pulling all the containers, right? ... Cool. All right, I think I'm out of time, right? Oh, five minutes. Okay, any other questions? Okay. Okay, all right. Yeah, Dusty said there's a nicer way to do that in virt-manager, so I may update the talk. Cool. All right, thank you everyone.