Good morning, good afternoon, good evening, and welcome to another episode of OpenShift TV. I am Chris Short, Executive Producer of OpenShift TV, and I am here with two of my favorite Red Hatters. I just met them. Our topic today is Fedora CoreOS, or FCOS as we might refer to it. Clément and Timothée are here, and I'd like them to introduce themselves because I'm terrible at introducing people. So Clément, you want to go first, alphabetical order-wise? Yeah. So I'm Clément Verna, Engineering Manager at Red Hat with the CoreOS team, and I've been a Fedora contributor for five years now. I'm involved mostly in the infrastructure, and I'm also on the Fedora Engineering Steering Committee, looking after changes and the future of Fedora. And that's about me. Timothée, you want to go next? Sure. Hi, I'm Timothée Ravier and I'm part of the CoreOS team too. I work on Red Hat CoreOS and Fedora CoreOS, and I'm also the Fedora Kinoite maintainer — Kinoite is like a KDE variant of Fedora Silverblue. And yeah, I'm doing a lot of things in Fedora right now around that. Awesome. So what is Fedora CoreOS? What is the goal of the operating system? All those fun things — tell me more. Good question. So we've got a small presentation that I'll share with you, kind of a Fedora CoreOS basics 101, and hopefully at the end of it you'll have a good idea of what it is, what the goals are, and how you can start playing with it. So, the agenda: what is Fedora CoreOS, what are some of the features, how it relates to RHEL CoreOS, and also the relationship with OKD. And at the end we have a small demonstration of how you can use Fedora CoreOS to deploy a Matrix server. And we will have time, obviously, for questions. Obviously, yes. And please feel free to ask questions or make comments during the presentation.
And if there's a good spot for me to ask it, I will, or we'll ask it at the end. Regardless, your question will get answered. Yeah, don't hesitate to interrupt; I think it's nice to get the questions as they come, with the context. Yes, definitely. So Fedora CoreOS is currently an emerging Fedora edition — we're trying to become a fully-fledged Fedora edition, but currently it has the emerging status. It came from the merging of Container Linux, from the CoreOS company, and a project that was in Fedora, Project Atomic. And it's pretty much trying to answer what a specialized operating system dedicated to running containers would look like. Fedora CoreOS incorporates from Container Linux the philosophy, the provisioning stack, and also a lot of the cloud-native experience that was in the CoreOS company. From Atomic Host, it gets all the ecosystem from Fedora — the Fedora foundations, all the community aspects, the update stack — and it also comes with SELinux and all the experience around SELinux hardening and security. So if we look at the philosophy behind Container Linux, probably the most important piece is automatic updates. You want your operating system to update automatically, without any need for an administrator to log in and run commands. That's quite important from a security point of view: you're always up to date, with security fixes applied. You also want a way to provision your nodes and servers so that they all start from the same point, they all have a common definition and a common way to provision them. And you have the same method for bare metal or cloud-based, so you can have a mix of instances in your infrastructure. Yeah, everything is in the spirit of immutable infrastructure: if you need to change something, you update your config and you reprovision. Those hosts are very easy to just destroy and reprovision.
And the main point is that everything that is user software, or a workload, or anything that you want to run on the operating system, has to run in containers. That makes the host much smaller and also the updates much more reliable. Awesome. So let's go through some of the main features related to the philosophy we just saw. Automatic updates is probably a very important one. To achieve that, you obviously want your updates to be reliable: if you offer automatic updates and they break all the time, people will just disable them. So a real focus of the project is to make sure that we provide reliable updates and that when your operating system updates, it doesn't break. To do this, we rely extensively on CI and we're doing a lot of tests there. We're also testing on different platforms and different clouds — for example AWS and Google Cloud — and we're also testing with OpenStack. We have a system of release streams that enables us to push changes to different streams and catch regressions or breaking changes early, so they don't reach the stable stream. We have three streams — stable, testing, and next — and we'll go into more detail on those shortly. Nice. Another key point is that there are managed rollouts of updates. Updates are rolled out over several days, so if you have a fleet of Fedora CoreOS nodes, you can have a couple of nodes update early and the rest of your fleet update at the end of the rollout. That way you can do some early testing on one or two nodes and protect the rest of your infrastructure. This also enables us to stop or halt a rollout: if we get reports of something breaking, it's fairly easy to just stop the update rollout and not impact too many people. And if things go wrong, we rely on the rpm-ostree technology, which we'll talk about just after.
And it's very easy to roll back to the previous working state. So even if at one point your operating system is in a state that is not working or that broke, you can roll back to the previous working state. There is also some work that will be done in the future to try to automate that rollback: users would specify some health checks, and when the operating system reboots with the latest update, if those health checks are green, it just continues; if they are red, it rolls back to the previous state. Here's a bit more information on the update streams. As I was saying, we have those three streams — next, testing, and stable. Next is pretty much the latest and freshest; it's also where every major Fedora rebase happens. Testing is a preview of what's going to stable; it's pretty much where we try to catch any breaking changes that we didn't catch in the CI. And stable is where we want to offer a reliable experience, where you can just stay and be happy and let your nodes update themselves without worrying too much. How it works is that the content in testing stays in testing for two weeks, and after this two-week period, where we can gather feedback, it gets promoted to stable. So the content that is pushed to stable is usually very well tested. We have the goal of publishing new releases every two weeks, and we've been able to achieve that goal: we've released every two weeks on a regular basis for more than a year now. The next and testing streams are really where we try to catch any issues, get real-life testing, and rely on users to report problems with the updates. Awesome. So you can decode the version of Fedora CoreOS and get an idea of what you're running.
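Decoding such a version string can be sketched mechanically. A minimal shell sketch, assuming the four-field layout described on the slide (Fedora base, RPM snapshot date, stream id, revision) — the example value below is made up for illustration, not a real release:

```shell
# Decode a Fedora CoreOS version string into its parts.
# Assumed field layout: <fedora base>.<RPM snapshot date>.<stream id>.<revision>
version="32.20200715.3.0"   # illustrative value

IFS=. read -r base snapshot stream_id revision <<<"$version"

# Stream ids: 1 = next, 2 = testing, 3 = stable
case "$stream_id" in
  1) stream="next" ;;
  2) stream="testing" ;;
  3) stream="stable" ;;
  *) stream="unknown" ;;
esac

echo "Fedora base:  $base"      # → Fedora base:  32
echo "RPM snapshot: $snapshot"  # → RPM snapshot: 20200715
echo "Stream:       $stream"    # → Stream:       stable
```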
So, just a quick slide to explain that: at the beginning of the version you have 31, 32, or 33 — that is the major version of Fedora, the base. Then you have the date at which the snapshot of the RPMs was done: at that particular date, we just took whichever RPM versions were available in Fedora. And then you have the release stream: one for next, two for testing, and three for stable. And here you have the diagram that explains the two-week promotion and how the content is propagated. Do you have any questions on this yet? I mean, there are a lot of questions, like: what were the origins of Atomic? Where did that come from, how did that start — all the way up to: how is CentOS Stream related to Fedora CoreOS, if at all? Yeah, so — let's just run through the basics there. Yeah. In terms of Atomic, it was really linked to rpm-ostree, and this idea of having a very small operating system with all the core applications that you need, and being able to run any user application or any software using containers. So a lot of the philosophy between Atomic and CoreOS was shared, and when Red Hat acquired the CoreOS company, I think it made a lot of sense to bring those two projects together. So it's safe to say that Fedora CoreOS is pretty fast moving, right, with two-week releases and everything? Like you're pacing ahead of Fedora a little bit, maybe? I wouldn't say that it's fast moving in terms of features, because you want to have this stability. Because your operating system is immutable, you don't really go on your machine every day and run DNF update or whatever. So pretty much what you get every two weeks is a fresh set of packages with security fixes or just bug fixes and things like that.
A key point for Fedora CoreOS is stability and making sure that we don't break the stable stream — I think that's most projects, right, to be fair. It can also happen that, for example, when Fedora 33 was released, the stable stream was still on Fedora 32, because we wanted to make sure that when we switched from the Fedora 32 base to the Fedora 33 base, everything was rock solid and the update would not break people's hosts. All right, so another main feature is the provisioning, and this came from CoreOS. Fedora CoreOS uses Ignition, which is a provisioning system: you declare everything that you want to configure on your host, and this is run at the first boot of the machine, in the initramfs, and it configures the host — for example, the user SSH keys if you want, or how the partitioning is done. And for people that are a bit more used to the RPM world, where in the past you would have used Kickstart, or cloud-init for clouds, you now use Ignition. A bit more detail here: the Ignition configuration itself is a JSON document. It runs once, when you first boot your instances, and it runs in the initramfs when it configures the machine. I was giving the example of users, but you can create files, systemd units, configure users, partition disks, create RAID arrays, format file systems — you can do a lot of configuration there and just tune your host. And if the provisioning fails, the boot fails. This is something that has to work; otherwise there is no point booting a system that would be half-provisioned or half-configured. Another design goal of the Ignition configuration is to be easily consumed by machines.
And this is not necessarily a good thing for humans — JSON is not super easy for human beings to work with and modify. So there is a tool called the Fedora CoreOS Config Transpiler, FCCT, that allows you to write your Ignition configuration in YAML, in something that is a bit more human friendly. It will transpile the YAML to JSON, so you can keep your YAML file with all your configuration in Git, and just transpile it to JSON before applying it to your host. There was a question about why JSON, and I basically answered in chat: this is designed to be consumed by machines. Lightweight JSON is better for this than YAML or, God forbid, XML. So, if you're editing the Ignition files by hand, that's an anti-pattern, I feel like, sometimes, right? Yeah — you don't want to touch the JSON file; you really want to use FCCT. It's also a good way to enforce that every change you make gets committed to your Git repository. You don't want to quickly make a change in your JSON file, run the provisioning, and forget about it. You want to make the change in your YAML configuration, commit it to Git, and have some kind of CI — maybe your CI is actually doing the transpiling from YAML to JSON and even the provisioning. So it was made to be very GitOps oriented, making sure that every change you make is stored and tracked in your configuration in your Git repository. So FCCT is that human-to-machine interface, basically, right? Yeah. I think we had plans to have a web service that does that too; currently it's still just a CLI tool. If anyone is interested, or if we have the time, that might become a service. There's a container image somewhere to run it as a web service, but it's not on any official server yet.
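To give a feel for the YAML side, here is a minimal FCC (Fedora CoreOS Config) sketch — a hedged example, not a config from the show; the spec version, SSH key, and file contents are placeholders:

```yaml
variant: fcos
version: 1.1.0
passwd:
  users:
    - name: core
      ssh_authorized_keys:
        # placeholder key — replace with your own
        - ssh-ed25519 AAAA... user@example.com
storage:
  files:
    - path: /etc/hostname
      mode: 0644
      contents:
        inline: fcos-demo
```

Running FCCT over a file like this produces the JSON Ignition config that the machine consumes once, at first boot.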
Yeah, Timothée, if you want to take the next few slides. Yeah, sure. So Fedora CoreOS is definitely cloud native and container focused. That's where we are, and that's where we want to stay in the future. If you want to run anything on Fedora CoreOS, basically you will have to run it in containers — you have to run all the software in containers. We ship both Podman and Moby, the Moby Engine, which is usually known as Docker, to run containers. So we support both. And the OS itself is really made for cluster deployments. You can run just one instance of Fedora CoreOS by itself — that works perfectly — but if you want to spin up a hundred nodes at the same time and have them join together as a cluster, that's perfectly fine too. And that's where Ignition itself shows its power: you can automatically have every node that boots up join the cluster and form one big computer. So when you don't need nodes, you just shut them down and let them go away; and if you have a lot of user traffic and you need more power, more CPU and more memory, then you can go again and spin up new nodes that will be provisioned with Ignition. So for Fedora CoreOS itself, we support a lot of clouds and cloud providers and a lot of platforms. We have a list of the most popular ones, but essentially it's generic and compatible — you can run it on almost anything, provided that you can give it an Ignition config. So yeah, it's supported almost everywhere. Not everywhere yet, but we're working on it. Yeah, go ahead. So, versioning. We have a strong focus on security and on doing things correctly. The idea behind Fedora CoreOS: we use rpm-ostree, and the way rpm-ostree works is like Git for your operating system.
So it's as if you were going from one version of the operating system to another with rpm-ostree, just like you would go from one version of your code to another with Git. You still have hashes, but you also have version numbers, which are a little bit easier to manage. And essentially, when you get one specific version of Fedora CoreOS, it's one hash, and with this hash you are absolutely sure that what's in this image is the same as everybody else's — you just get the same version as everybody else. Since you cannot go and change everything on the system — because you still want to make sure that all your nodes have the same content — most of the file system is read-only. /usr, mostly, is read-only. And that's a really good thing, because you don't want things changing inadvertently or by mistake. You want to make sure that all the state of your system is stored in /var and that you just keep it there: all the containers, all the data, everything that you want to have on your Fedora CoreOS instance. And you still have /etc if you want some configuration for the software. And finally, the last piece of the security features is SELinux, which is enforcing by default on Fedora CoreOS, just like on every Fedora distribution, and which really offers strong isolation for containers on the system. That's the strongest isolation you can get by default. So what do we actually have in the OS shipped with Fedora CoreOS? Well, the main thing to remember is that we ship Fedora RPMs — the same content, or almost 99% of the same content, as classic Fedora. So if it works on Fedora, it will work on Fedora CoreOS. And the hardware support is the same as classic Fedora, and everything like that.
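The read-only/writable split described here is easy to see on a running host. An illustrative session sketch (output paraphrased from memory of how an rpm-ostree-based host behaves, not captured from the show):

```
$ touch /usr/test
touch: cannot touch '/usr/test': Read-only file system

$ touch /var/test      # /var is writable; persistent state lives here

$ getenforce           # SELinux is enforcing by default
Enforcing
```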
So yeah, it's Fedora. Fedora CoreOS is Fedora. And we ship a lot of common tools that you want as an admin: some hardware-specific tools, some user-management tools, and things like that. The two main container engines. And the only interpreter that we ship is Bash — we don't ship anything else. We don't ship Python, we don't ship Perl. That's a big difference from most other minimal container OSes. Is there even a compiler installed? There's no compiler installed. If you want to run a compiler, you need a container image to do that. Everything via containers. Everything via a container. Someone just commented: we don't even need busybox to run a container. Yeah, that's a good point. So just real quick — I don't know if either one of you knows the story behind it, but it used to be that on DigitalOcean you could go tinker with Fedora CoreOS. Now I see that it's gone, but then I went to the CoreOS page and I found an image for it. Is that kind of our strategy going forward — just to make images for certain clouds, as opposed to having this deeply nested, integrated console kind of deal? If you don't know, that's fine; I can figure out the answer from someone else, like a product manager or somebody. I think it's work in progress on DigitalOcean. We do ship images, and you should be able to easily spin them up on DigitalOcean. I don't remember exactly the status of the collaboration with them regarding that — I think it's still in progress, maybe for the interface to have support for Ignition and things like that. Sure. Yeah, I got it. I'll ask Conan Kudo in chat — I will actually text message a PM on the Fedora team right now. How about that? Yeah, this should work, because Dusty is running OKD on DigitalOcean, and OKD is Fedora CoreOS based. So there's no reason this doesn't work, but yeah, I don't know the specifics. No worries.
I will figure it out and I will get an answer. Please continue. Yeah, sure. So that's perfectly on topic, because what we have coming next is more cloud platforms — more collaborations with more cloud providers. We already have a bunch, but we're always adding more. Clément, you're frozen — oh, the joys of screen sharing. Yeah, you're back. So yeah, we're also working on better multi-arch support, because currently we only run on x86, which covers a lot of the workloads, but we want to move that forward, especially aarch64. I've got a box of Raspberry Pis just waiting to be put together for containers. Yeah — you'll still need something that has SBBR or some ARM standard for bring-up, because it's hard to support every single ARM SoC out there. But if you've got standard ARM hardware, this will work. So I don't know about the support for Raspberry Pi, but this is work in progress. Nice. Okay, cool. Yeah, I'm not too worried about Raspberry Pi as more and more things come about, right? The ARM architectures we should focus on are the server-based ones first, obviously — not necessarily for my pleasure, but yes. Yeah, the server ones are a little bit easier to work on because they boot in a standard way, and that's easier to support. And next, we also have some more FCC sugar, as we call it — more friendly helpers to help you write Ignition configs in a more straightforward way with FCC. FCC is the YAML; that's the one you should write, and we want it to be as friendly as possible, so that when you write a config you don't feel like you're slowed down by the format. Nice. And yeah, for a while we had issues — well, not issues, but limitations — with adding more packages on top. Most of those have been resolved, but we still want to move forward on that.
So Fedora CoreOS by itself is immutable: you don't change the host, you don't change the set of packages installed. But even if it's not the suggested way, you still can do it, because rpm-ostree is capable of it — it knows how to layer packages. So you can still have packages on top of that, and it's safe. We did a lot of work to make that work reliably, and we are still working on making it even more reliable and even more supported. So yeah, documentation too — we want to improve our documentation. We already have a good bunch: if you go on the Fedora docs, you can see a Fedora CoreOS section, with a lot of docs on how to deploy on each platform, how to work with Ignition, how to write configs, storage, and everything related. We'll maybe show some parts of it. And finally, some work on OKD — I think it's the next slide; we'll talk about that really soon. So yeah: we have Fedora CoreOS, and there's also RHEL CoreOS. So what's the difference? What are the relations? They share a lot of the tools that we use to build them, and they're mostly the same. The main difference is that RHEL CoreOS is not intended as something that you use alone, by itself. Instead of being based on Fedora RPMs, it's based on RHEL RPMs, and it's mostly a component of OpenShift. You don't run RHEL CoreOS by itself; you run it as part of an OpenShift cluster. And the difference here is that all the configuration and the updates are managed by the cluster itself, via operators. So that's the basics for OpenShift and its relationship with RHEL CoreOS. Fedora CoreOS is an OS by itself, based on Fedora packages.
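The package-layering escape hatch mentioned above looks roughly like this on a host — an illustrative session sketch, not from the show; htop is just an arbitrary example package:

```
$ sudo rpm-ostree install htop    # layers the package onto a new deployment
$ sudo systemctl reboot           # the layered deployment becomes active on reboot
$ rpm-ostree status               # the active deployment now lists the layered package
```

Because the package is layered on a new deployment rather than installed in place, rolling back to the previous, unlayered deployment remains possible.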
And even though we share all the tooling, the updates are standalone: each node updates by itself. You can still be part of a cluster and share an update mechanism, but it's not enforced by the system. So yeah, go ahead. So, OKD. OKD is basically OpenShift on Fedora CoreOS — well, close to being OpenShift on Fedora CoreOS. The idea is that you use the same kind of installer, openshift-install, and you will get all the bits of OpenShift running directly on Fedora CoreOS, and everything there is operator controlled — it's controlled by the cluster. You get something really similar to what OpenShift is, but based on Fedora CoreOS itself. The updates are managed the same way: they are cluster managed, and the nodes reboot one at a time, one after the other, not everybody at the same time. And the cluster is capable of bringing up machines automatically if you need more power, more CPU and more memory. So that's basically OKD. We'll have a short demo if people are interested. So yeah, that's where to get involved. If you're interested, you can get Fedora CoreOS on the getfedora.org website. We have a dedicated repo on GitHub to track issues, and of course we're also available on the forum; we have a mailing list and an IRC channel where we talk about Fedora CoreOS. We'll be giving a short hands-on workshop lab during DevConf at the end of the week, if you want to go over there and meet us. And yeah, it's gone virtual this year, so everybody can join in. Yeah, it's gonna be cool. I'll be helping run the lab, so you'll see me there. Awesome.
It's really a good opportunity to have a first experience with Fedora CoreOS and play a bit with it. And the slides are on our Speaker Deck page. Folks, I just dropped the link in chat, so if you want to pull up the slides and click the links individually, feel free. Right. Time for some demo. Yeah. Let's go. Go ahead, Clément. Yeah. So the small demonstration we have today is pretty much showing how to run a Matrix server on a Fedora CoreOS host. We have this simple diagram that shows you how the server and the other services are configured. As the base, you have Fedora CoreOS, which provides the kernel, the networking stack, and a container manager — in this case, we use Podman. Podman is running the different services, and all of those are configured as systemd services, so when the host boots, those systemd services will start the Podman pod and the different Podman containers. We're making use of the pod feature of Podman to have a shared network between the services, which makes it very easy to configure and very easy for the containers to talk to each other: they just assume they're on the same network and can use localhost. There are also some volumes to store the data and configuration. You don't have to be a big expert in how a Matrix server works, but pretty much you have Synapse, which is the server that implements the Matrix protocol, a database, Element, which is the web client, and then NGINX as a web server in front of it. And we have a small service that you can use to automate getting the certificates for HTTPS, having Let's Encrypt provide the certificates, so we can use HTTPS. I love Let's Encrypt. It's like the most unthought-of, most needed service ever, right? Yeah, it's an easy way to get everybody secure. Thank you — we really appreciate you.
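The "systemd starts the pod at boot" arrangement described above can be sketched as a unit like the following — a hedged sketch under assumptions: the unit name, pod name, ports, and flags are illustrative, not the actual units from the demo repository:

```ini
# /etc/systemd/system/matrix-pod.service (illustrative)
[Unit]
Description=Create the shared Podman pod for the Matrix services
Wants=network-online.target
After=network-online.target

[Service]
Type=oneshot
RemainAfterExit=yes
# Recreate the pod on each boot; the units for Synapse, Postgres,
# Element, and NGINX would then declare After= and Requires= on this one,
# so every container joins the pod's shared network namespace.
ExecStartPre=-/usr/bin/podman pod rm -f matrix
ExecStart=/usr/bin/podman pod create --name matrix --publish 80:80 --publish 443:443

[Install]
WantedBy=multi-user.target
```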
It gives you no excuses not to use HTTPS. Yeah — like my cluster here: every time I have to redo it, because I destroyed it during a demo or something, step one after getting it installed and up and running is getting rid of the certificate error. It's just an annoyance that I can get rid of, so why not. Yeah. So we talked a lot about FCCT — here is an example of this YAML file. You have a version and, for example, how to configure a user and give it some SSH keys. It's using some templating here, by the way. There is a link to the repo in the slides, and if you want to have a look, there is a README where you can follow the instructions and, pretty much from that, run your own Matrix server based on Fedora CoreOS. Yeah, I just dropped it in the chat here, so folks, please go kick the tires. Go ahead. Yeah. We also have a small service that enables cgroups v2, because it's not yet enabled by default in Fedora CoreOS — it still uses cgroups v1. Then we have what we were talking about: for example, creating a pod and starting the different services. If you're familiar with Podman, or even Docker or any container engine, it's fairly common — you give a name, share some variables, some volumes and things like that. So you pretty much declare all your services, and those will be provisioned, configured, and run at the first boot of the machine. There's a nice little service that will renew the Let's Encrypt certificates — it checks pretty much every week, on Sunday, to make sure that our certificate is still valid. That way you don't wake up one morning and say, oh, I forgot to update the certificates. And there are some examples of how you create directories on your host and how you can copy some configuration files — for example, all the NGINX configuration, or the configuration for Synapse or Postgres, and things like that.
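The weekly Sunday renewal check could be expressed as a systemd timer along these lines — again a sketch; the unit names and schedule details are assumptions, not the demo repo's actual units:

```ini
# certs-renew.timer (illustrative) — triggers a matching certs-renew.service
# that runs the Let's Encrypt renewal check in a container.
[Unit]
Description=Weekly Let's Encrypt certificate renewal check

[Timer]
OnCalendar=Sun *-*-* 03:00:00
Persistent=true

[Install]
WantedBy=timers.target
```

With `Persistent=true`, a check missed while the host was down is run at the next boot, so a reboot during the rollout window doesn't skip a renewal.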
So that gives you an idea of what FCCT is, and this is then transpiled into JSON. Timothée, maybe you want to show that part? I'll stop sharing my screen and you can share yours. Yeah, I'll do the next one. So, here: a terminal, my favorite place. Everything is simpler there. Yeah — so the idea is that everything is in the repo. Here I have a clone of the repo where I've just set some variables and some secrets to configure the instance that we are going to create, because essentially this configuration is enough to start an instance by yourself and run it. So you have the main config, which is the YAML, and we have a short Makefile to do some substitutions, replacing all the secret values with the ones in the config. If you just go and run that — I'm doing it inside my toolbox — that will be it, and what you get, essentially, is an Ignition config. That's the last part here: a call to FCCT that parses all the YAML and gives you an Ignition config that you can give to a virtual machine, a Fedora CoreOS instance. If I take a look at the resulting config — you cannot really read it; it's very messy, really big, and has a lot of compressed data, because all the configuration is inside it. What we actually want to read is this one, the YAML here, which is clearly readable, with all the data that we want. So yeah, once you have this configuration file, you can go ahead and use your preferred cloud provider, or your private cloud, spin up a VM, a virtual machine, and have it automatically start a Fedora CoreOS instance and everything needed to host a Matrix server. So yeah, that's what I did just this morning. And here, I've SSH'd into the machine — here we go, that's what I have here.
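The substitute-then-transpile step can be reproduced in miniature — a sketch, not the repo's actual Makefile; the config content and placeholder name are made up for illustration:

```shell
# Write a tiny FCC template containing a placeholder secret.
cat > config.fcc.in <<'EOF'
variant: fcos
version: 1.1.0
passwd:
  users:
    - name: core
      ssh_authorized_keys:
        - @@SSH_KEY@@
EOF

# Substitute the secret value (a stand-in for the demo Makefile's job).
sed 's|@@SSH_KEY@@|ssh-ed25519 AAAA... placeholder|' config.fcc.in > config.fcc

# The transpile step requires the fcct binary, so it is shown here
# as the command only:
#   fcct --pretty --strict config.fcc --output config.ign
grep 'ssh-ed25519' config.fcc
```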
So if I just run an rpm-ostree status here, we can see that I'm running Fedora CoreOS — the latest stable version. And that's it. Awesome. So what do we have running? Because having a system is good, but if you can't run anything on it, that's not great. That's not good, yeah. So here are the containers. They have been automatically deployed by Ignition: the configuration was written, and then the system just booted up and started everything. So I did almost nothing on the system after that. And here we have Synapse, we have NGINX, Postgres, and everything running, just like we showed before. So yeah, you can actually try that live — I've deployed this on something that is running. I'll just stop sharing this screen and share another one. If you go to chat.fcos.fr, you can go ahead and register yourself. It's a fully running instance of Matrix that is federated with the rest of the ecosystem, so if you already have a Matrix account somewhere on another server, you can reach out to me. You can create yourself an account here — of course, the instance will be destroyed at the end of the day, so don't make that your permanent account. And once you've created it, you can log in. I'll just log in with mine here, the one I just created, and you can send me a message. And that should work. Yeah, just forget about security; we'll do that later. Here we go. The nice thing is that having this FCCT config, or the Ignition configuration, and knowing that the host is able to be provisioned to run it, gives you a very easy way to ship software. You don't just ship your individual containers — you can start to think about releasing the OS with all the services running and everything configured, directly, as one configuration file. So that's quite powerful.
I think for services where you have a lot of moving components, yes, it's really high value. It takes a bit of effort at first, because you have to write everything down in the config. But once you've done that, it really pays off. This one here, we did a while back for an article, and it took us a while to write the config then. But this time I just deployed the machine this morning, and it took about five minutes, the time for the machine to boot up, and you're done. Nothing has changed in the config. So it's really valuable when you want to boot a lot of instances, or when you want to do this frequently, because you do the work once and then you reap the rewards every time. That's awesome. And you can forget about it, and the box will keep itself updated; you get the latest updates. It's pretty cool. Yeah. So that's another one of the features that's important for this deployment: you know it will keep the machine properly running and up to date with the latest fixes. Because as we auto-update the OS, we also auto-update all the containers on restart. So if you just leave it running, that's going to be fine: roughly every two weeks the host will reboot for updates, and then you get the latest versions of the containers. And if for whatever reason something happens, say some version of Postgres doesn't work anymore, that's perfectly fine: you just roll back to the previous one and move on with your life while people get time to fix the bugs. Nice. Then you check again in two weeks, and if the update works, things move on.
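A rough sketch of the update-and-rollback story described here, assuming the stock Fedora CoreOS tooling (Zincati schedules the automatic OS updates, rpm-ostree manages the deployments; the window values below are illustrative):

```shell
# Zincati drives automatic OS updates; a config snippet controls when
# reboots happen, e.g. only during a weekend maintenance window.
sudo tee /etc/zincati/config.d/55-updates-strategy.toml <<'EOF'
[updates]
strategy = "periodic"

[[updates.periodic.window]]
days = [ "Sat", "Sun" ]
start_time = "22:30"
length_minutes = 60
EOF

# If a freshly applied update misbehaves, boot back into the previous
# deployment and carry on while the bug gets fixed upstream.
sudo rpm-ostree rollback --reboot
```

Because rpm-ostree keeps the previous deployment on disk, the rollback is a single atomic switch rather than a reinstall.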
And that's the power of containers with auto-updates and everything plugged in at the same time. So Carlos Santana, who happens to work at IBM, is on the channel right now, and he's wondering if there's a Knative-style thing you could do with these systems, like an HTTP gateway to start containers based on requests, or scale to zero, that kind of thing. Any kind of serverless or functions we can do with FCOS out of the box? As long as we have the container, it should work, right? Maybe. I think you probably could, but you would have to write quite a lot of the glue yourself: the orchestration, spinning up more VMs if you need them, anything like that. Yeah, you would. There would have to be something on top. Right, you would want Kubernetes, essentially. So that leads me to my next question: how do you go from FCOS to OKD, or vanilla Kubernetes, however you want to describe it? Is it as simple as saying, I want OKD and I get FCOS as part of that, or is it something I can build up piece by piece, to kind of cut my teeth on it? You'll have to go straight to OKD; there's no real middle ground between Fedora CoreOS and OKD. Well, you can build your own Kubernetes on top of Fedora CoreOS, right, that's fine. You have distributions such as Typhoon, for example, that automate all the bring-up of a vanilla Kubernetes cluster on top of Fedora CoreOS, and that works great. It's using Terraform, so it's got support for a lot of the cloud providers. And on the other end of the spectrum you have OKD, where in a sense you don't see Fedora CoreOS, because it's hidden underneath. You see the cluster, and Fedora CoreOS is just underneath it.
I think the difference to me is really the scale and the number of VMs you want to deal with. If you have a relatively small shop with, I don't know, 20 or 30 VMs, and you don't really want the complexity that comes with Kubernetes, having Fedora CoreOS plus some way to orchestrate that small number of nodes is probably a good idea. If you start to go into something bigger, well, that's exactly what Kubernetes exists for. So if you really need to scale to big numbers, yeah, it's good to go directly to Kubernetes, OKD, or OpenShift. If you have a small infrastructure with some small services that you want to be low maintenance and not really have to care about, I think it's a very good fit. I just like FCOS for tinkering, right? I just want to throw a container on something and off I go. Yeah, and for me it also has a lot of value in a DevOps pipeline. Totally, throw it in your CI/CD pipeline and just let it rip. Yeah. And instead of thinking about just your application as a container, suddenly you can test all your services and the OS, and you have the whole stack together. You can develop your application, integrate everything on Fedora CoreOS, and deploy that directly to production with continuous delivery, running your tests and your CI on the OS that will be running in production. So someone's asking what the minimal hardware specs are, I guess, to run the OS out of the box. So the reasonable low requirements are something like two or three gigs of RAM, depending on how you do the installation. If you do it from the live ISO, you will need three.
If you do it on a cloud platform, you can go with two, and that should work. Okay. Of course, that's only if you want to run really small containers, because if you want to run a really big Postgres database, you will need more memory. Exactly. Yeah, that bare minimum line is good though: you need those two gigs, plus whatever you're running on top of it. So in a simplistic way you can say, okay, I know this container is going to be a database and it's going to hold about 16 gigs, so I need 24 gigs of RAM just to make sure everything works. Right, that's the basics. And on the disk footprint, you need at least five gigs. So let's say six gigs; eight gigs is good if you want to be conservative, and otherwise the more the better. Right. You can easily run this on your local machine, Podman kind of deal; as long as you can get a container going, you'll be good. All right, I think that wraps up all the questions. Yes, it is a smaller footprint than CRC, but it is just the OS. Yeah, just the tools to run containers; it's not the whole kit and caboodle with all the operators and everything. But if you want to get your hands on a developer sandbox for OpenShift, those are available now, and those are ephemeral; I think they rotate every two weeks right now. So you can definitely fully kick the tires on OpenShift for free. You just log in with a developers.redhat.com account and off you go. So yeah, thank you, Timothy. Thank you, Clement. This was great. A lot of people, a lot of questions, right? That's always the sign of a good show. So thank you very much for coming on and showing this off, and I would love to have more related things in the future if you can, sometime. How about that? Sounds good. That would be good. Awesome. Challenge accepted. Yeah, great.
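Those sizing numbers translate into something like this for a local test VM. The image path is illustrative (the qcow2 image comes from the Fedora CoreOS download page), and the `-fw_cfg` flag is how QEMU passes the Ignition config to the guest:

```shell
# ~2 GiB RAM is enough when Ignition comes from the platform; budget
# ~3 GiB for a live-ISO install, and at least 5-8 GiB of disk, plus
# whatever your containers themselves need.
qemu-system-x86_64 -m 2048 -smp 2 -accel kvm -cpu host \
  -drive if=virtio,file=fedora-coreos-qemu.x86_64.qcow2 \
  -fw_cfg name=opt/com.coreos/config,file=./config.ign \
  -nic user,model=virtio,hostfwd=tcp::2222-:22
```

With the port forward above, `ssh -p 2222 core@localhost` reaches the guest once it has booted and applied the Ignition config.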
We'll have to write a new example with almost everything. Yeah, whatever service comes next, right? Just keep adding on to it: Matrix, IRC, keep going. Right, some CI runners or things like that. Yeah, CI would be interesting. Exactly, a full lab infrastructure on Fedora CoreOS. There you go, why not? Come on. And then you put Quay on top of that, you know, everything else. All right, well, thank you so much. Thank you everybody out there for watching. We will be back on the air in about an hour with our developer experience office hours. We're going to be talking about Argo scenarios, like when to use Argo, that kind of thing. So please stay tuned. You know, it's an office hour, so feel free to bring your questions. And until then, stay safe out there, folks. Thank y'all. Thanks, guys.