I will start by saying I really miss going to DevConf in person. I see some familiar names on that attendee list, and I remember in prior years hanging out and talking to people, and it's tough, because every time I would leave the conference very energized, and it's just not the same talking like this. But I hope DevConf will happen in person again and I get to see some of you. So what I want to talk about today is what's new in rpm-ostree, as the title says. This is the 2022 edition. I've given related talks at prior DevConfs, and I know a lot of you come from a mix of backgrounds: some of you may not even know what this is, and some of you may already use rpm-ostree-based systems. So there's going to be a lot of ground covered, and I'm going to try to save some time for questions, because they're already coming in. So anyways, let's just dive in. I do like to start every talk with who I am. I'm Colin Walters. I work on Fedora and OpenShift, Red Hat Enterprise Linux, the CoreOS ingredients that go into those. But I also like to start every talk by talking about why I do what I do, because I think that's equally important. I think it's an uncontroversial fact that computers are essential to our society today, for better or worse. One of my favorite examples: if you're growing grapes for wine, if you're running a vineyard, today you might have a drone flying over your vineyard taking pictures, maybe even with a thermal imaging camera, assessing the health of your grapes. And then you take that data and do analysis on it. It's a perfect example of how pervasive computers are, right? On manufacturing floors, everywhere. Everyone knows this. And I think free and open source software is really essential to making sure that we have control of the computers and not the other way around.
There's the rise of proprietary cloud services, and we can't escape using some of those, but I like to work on free and open source software because it helps us make sure that we're in control of this foundation for our society. So how does that translate, though, into OSTree? How does it translate into what I do day to day? If your business or your personal life involves computers, you have data in there that's important to you, and you have things those computers do that need to keep running. And unfortunately, computers have flaws. Software has flaws. One example: there have been flaws in the Linux kernel Bluetooth stack. So you could even be vulnerable to a drive-by attack, where someone just has a device out there trying to exploit the Bluetooth stack, drives by your place of business, maybe that vineyard from earlier, and happens to exploit your kernel and take it over. And then maybe you're in a ransomware scenario, right? You weren't even directly targeted; it's just something that sort of happened, because computers have flaws. So keeping your OS up to date is the best thing we can do here. Obviously there are a lot of other things to do (you can turn off Bluetooth, for one thing), but applying OS updates is critical. I like to think that we're making computers better more than we're making them worse overall. And if you're running a computer, you want to apply an OS update in such a way that it's not going to take down what's running. So what OSTree does is construct a new root that doesn't affect what's currently running. It's very different from traditional package systems; it's more of an image system. That happens in the background, and then you take a reboot. I'll talk about live OS changes a little later. And there are some links in here; it's image-based.
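As a rough illustration of that image-style update flow on an rpm-ostree host (a sketch of the common commands, not a full session):

```shell
# Show the deployments on disk: the currently booted one and
# any staged update composed in the background
rpm-ostree status

# Download and stage an update; the running root is untouched
rpm-ostree upgrade

# The new root only takes effect when you boot into it
systemctl reboot
```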
So if you're running, especially, a CoreOS-based system, we build an image in our CI and we test it in the cloud. For example, we verify that we can run podman containers in Google Compute Engine and Azure. And only then, once that CI has passed, does that image ship to your system, so you're running exactly something that was tested. Something very similar is true of OpenShift 4, too. There's a lot else going on here. Separating applications into containers helps make sure that you can apply those kernel security updates faster. For example, getting GCC and developer tools out of your workstation's root filesystem is just a good idea, because it lets you update the OS faster. But we don't just live in a duality here between rpm-ostree and OSTree. OSTree is a little more like Docker or podman: it's agnostic to what you put in it. The idea of rpm-ostree is that it has really tight integration with the RPM ecosystem. One of my favorite examples: it's not a locked-down system by default. So if at some point you need to test a new kernel, you can take the same kernel RPM that comes out of the Fedora build system and override it locally. You can say rpm-ostree override replace kernel. That's a first-class operation, to test and replace parts of the base image, because it's essential to having you be in control of your computer. So, there are a bunch of places rpm-ostree is used today. The first two are products from my employer, Red Hat. RHEL for Edge is designed to make custom images in many different formats, and it supports outputting rpm-ostree-based systems; there's a lot we could talk about there, which I will mostly skip for now. In OpenShift 4, we basically encapsulate OS updates in a container. I'm not going to talk about that too much today either. Fedora Silverblue is a desktop system. And there are a bunch of derivatives.
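One way that kernel override might look in practice; the koji download step and the version number here are illustrative assumptions, not from the talk:

```shell
# Fetch a specific kernel build from the Fedora build system
# (hypothetical version number)
koji download-build --arch=x86_64 kernel-5.16.5-200.fc35

# Replace the base image's kernel with the local RPMs; this stages
# a new deployment without touching the running system
rpm-ostree override replace ./kernel-*.rpm

# Reboot to test the replacement kernel
systemctl reboot

# Later, return to the kernel shipped in the base image
rpm-ostree override reset kernel
```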
Yeah, and briefly touching on the OSTree aspect again: it's really hard to work on general tooling. OSTree is software that can be used outside of RPM-based distributions, right? Again, it's agnostic to what's put in it, and there's a pretty big user base for it. People are using it with Debian-based operating systems, OpenEmbedded/Yocto, a bunch of things I don't even know about. And actually, I've been slowly trying to drain some of the higher-level logic out of rpm-ostree and into OSTree. That's gotten easier with Rust. Cool, so some of you may have known most of that. So I want to talk about what's new. This OSTree native containers change is definitely one of the biggest things that's been happening in the last year, so that's what I'm going to talk about the most. I'm going to skip it for now and circle back to it in a minute. I'm very excited, though, to note that we've been increasing our usage of Rust, because applying and managing operating system updates really demands that your code base be fast and efficient, and you want it to be memory safe, so you're not subject to buffer overflows and all those classic kinds of bugs. Rust has really helped us out here. We've invested a lot in bridging between C and Rust and things like that, because we can't rewrite everything either. Another thing that happened in the last year, in case you missed it: rpm-ostree now has full support for modularity. That mostly fell out of the fact that a lot of the modularity logic drained from DNF into libdnf, which rpm-ostree uses, but there were a lot of other details there. One cool thing was that someone just showed up and did a very nice patch for making sure that the RPM database inside the build is reproducible. They basically went through and fixed all the embedded timestamps. It was someone who just showed up one day with a giant patch, and it was very good code.
That's one of the things I love about working on free and open source software: that moment when someone just shows up with a big, cool patch like that. What is that person doing with rpm-ostree? I have no idea. There's also the change to the SQLite RPM database. There are a lot of details there around supporting cross builds, where we run rpm-ostree in a Fedora container and generate RHEL content, so the new rpm-ostree needed to still be able to output the old Berkeley DB format. I think we spent at least a solid week talking on and off about systemd and rebooting, circling back to that automatic OS update scenario where you're automatically rebooting periodically. If you have a scenario where you need to hold an OS update briefly, how do you do that? What's the best pattern? Like, you need to debug something on that node or machine, or you suddenly need uptime for just the weekend because you got a sudden burst of traffic. What we ended up settling on is systemd inhibitors. So if you have rpm-ostree automatic upgrades on, you can type systemd-inhibit and that will block the upgrade. For historical reasons in systemd this is opt-in, so we had to do some work. DNF countme is another cool thing. I'm also pretty excited that in the 2021.1 release, the first release of 2021, we rewrote the apply-live code. When we talk about the duality of rpm-ostree supporting OSTree and RPMs, that's one of the dualities. It is an image-based system, but we also invested a lot in being able to cherry-pick and apply live changes to your running root filesystem in a safe way. So again, in case you missed it, you can now type rpm-ostree install -A some-package, and that should generally work. Another cool thing: the main function is now Rust code, and the rest of the logic that was there before is now a C++ library, C and C++ actually, which is its own story.
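A sketch of both of those mechanisms on a host with automatic updates enabled; the duration, reason string, and package name are just illustrative:

```shell
# Hold automatic reboots over the weekend, e.g. during a traffic spike;
# rpm-ostree's automatic updates honor systemd shutdown inhibitors
systemd-inhibit --what=shutdown --mode=block \
    --why="holding the update over the weekend" sleep 172800 &

# Cherry-pick a package and apply it live to the running root,
# in addition to staging it for the next boot
rpm-ostree install --apply-live strace
```

The `-A` short form from the talk is equivalent to `--apply-live`.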
There was also a lot of work reworking the build system, around autotools and a bunch of stuff. I hope eventually the build system is: you just type cargo build and it works. Okay. So that's a bunch of other stuff that's happened, but I know there has been a lot of interest in these OSTree native containers. The way I like to summarize this is what you see at the very top of the slide: you can write a Dockerfile that looks like this today, where you take our base image, add some content, add some binaries, maybe extra packages, maybe something else, add configuration, and then you can boot it. You build it as a container, the output is a container, but then you just type rpm-ostree rebase and it can pull that container. And we'll dive into this more. So why are we doing this? One answer is that I think we've been fairly successful so far with the strategy of keeping your container host small and running your workloads as containers. That's generally worked out well, but there are a lot of cases in between, where people have security scanners or agents and things like that. We want to keep supporting that: if you're a happy user of one of these rpm-ostree systems today, we're not going to break anything, and I'll touch on that too. But if you want to do non-trivial customization, we're giving you a lot more power with this, and more responsibility. So this is basically bridging the worlds of configurability, which is the way I like to think about it. We're giving you a new way to configure things. Another bridge that was created as part of this effort: in the container ecosystem, Go is obviously very popular, and Go really wants to be the only thing in your address space. Mixing Go with C and C++ is not the best.
And so I really struggled with this for a long time, because in the OSTree stack we've been investing a lot in Rust, and we have, for example, librpm, which is C. That's just not going to change tomorrow, right? And I didn't want to link in Go. So what we ended up doing, and if you click on this link you can find more: there's a new IPC mechanism for the containers/image Go library, basically as part of the skopeo binary. rpm-ostree now knows how to fork off skopeo and delegate a lot of the container-pulling work to it. Just as one example, in the containers ecosystem there's good support for GPG signatures on containers, and good support for mirroring. Basically, we want all of that. We don't want to rewrite the container stack; we want to leverage what exists while still being able to use it for OS updates. The current proxy is basically between Rust and Go, but nothing stops you from writing, say, Python code that talks to this proxy. Okay, so I sort of mentioned this before: we're rethinking rpm-ostree, and especially OSTree, as producing a base image that you can boot. And I really have to emphasize, we've built up a lot in the OSTree and especially the rpm-ostree stack. For example, you have client-side overrides; I touched on the rpm-ostree install stuff; I touched on the fact that you can replace your kernel when you need to. All of that is going to keep working. If you're using OSTree today, it's going to keep working. We're not dropping anything. If you're using OSTree for an embedded system and using OSTree's really efficient static deltas, that's going to keep working, okay? What we've done is add a whole other backend, basically. I am hopeful, though, that if this effort works out, we will probably switch Fedora and the derivatives to use containers on the wire instead of OSTree on the wire. And there's a lot of stuff involved in that.
One cool thing, and I'll demo this later, is that it now becomes much more first-class to add custom non-RPM content into your booted OS. I'll also talk about binding configuration and code. One thing that could be its own whole sub-talk, of course, if you haven't kept track: there's a lot of rpm-ostree stuff going on in Fedora. We could actually rethink how we build things, so that, for example, Fedora CoreOS, or maybe some part of Fedora CoreOS, was used as a layer and the desktops were built from it. Should the desktops have a common base image layer? There's a lot of interesting stuff we could do with this. Now, one thing you're probably wondering about; I have to emphasize that all this code exists and works today. There are links here to the tutorial; you can try it out. But there's stuff that's not implemented. For example, we don't garbage-collect container image layers yet, so container image layers will just continue to sit on your system unless you manually prune them. And we've discussed shifting some of the logic for what happens at build time to actually be inside the container. One example: today OSTree and rpm-ostree have first-class integration with SELinux, and we do the labeling on the client side, but that's really ugly; it involves running through thousands of regular expressions for every file. We basically want to move that to container build time, to shift the OSTree work to build time. That will probably just involve a command you need to run in your Dockerfile or equivalent. Okay, this is me trying really hard to express something very detailed in a single slide and probably failing, but let's dive in. If you're using Fedora CoreOS or OpenShift 4 today, there's this idea that we ship a tested image. That's the code aspect of this; that's the /usr filesystem.
And then what you do is take our tested image and configure it, right? And you have your data, which we have nothing to do with; we're not going to break your home directory or your container images, all that stuff. So typically you configure this image per machine, and I touch on that here. For example, you could use Anaconda to configure this image, or Ignition with Fedora CoreOS, that sort of thing. The thing that's new here, and I think it's going to be really interesting, is that now it's much easier to bind the binaries and the configuration together at an exact version. An example: let's say you want to add usbguard into your OS, and you want a specific configuration for usbguard, or anything else, right? Maybe it's your mirroring configuration for podman; maybe it's your TLS configuration for the host OS. Now, if you build a container with this, you can add configuration into /etc, and when you type rpm-ostree upgrade you get exactly that pairing of configuration and code together as an atomic unit, which just wasn't really possible before. Another way to think of this: it's now much easier to sort of transactionally update /etc, too, which a lot of people have wanted. And as I touch on here, this is a big change from how we're currently handling things with, say, Anaconda-based systems or Ignition-based systems. NixOS has a lot of cool things going on; this whole talk could be about what ideas we could steal from them. But I just want to mention that they also try to bind the configuration of your system together with the code. The OpenShift MCO also tries to do this, and people sort of try to do this with Ansible too. Okay, I sort of touched on this before. Here's a user story: you build your container and then you just boot it, right?
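A sketch of that build-then-boot flow; the image name quay.io/examplecorp/custom-os and the Containerfile contents are hypothetical, not taken from the slide:

```dockerfile
# Containerfile: start from a bootable OSTree native container base image
FROM quay.io/fedora/fedora-coreos:stable

# Layer extra packages on top of the base image
RUN rpm-ostree install usbguard && ostree container commit

# Ship configuration in /etc, versioned atomically with the binaries
COPY rules.conf /etc/usbguard/rules.conf
```

You build and push it like any other container, then point the host at it; the `ostree-unverified-registry` prefix makes explicit that no signature verification is happening yet:

```shell
podman build -t quay.io/examplecorp/custom-os:stable .
podman push quay.io/examplecorp/custom-os:stable

# On the target machine:
rpm-ostree rebase ostree-unverified-registry:quay.io/examplecorp/custom-os:stable
systemctl reboot
```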
So you run this command; this is what it looks like. What's going on with this ostree-unverified-registry thing: I'm very strongly of the opinion that the containers you boot should be signed, and this prefix is making clear that this one is not. One of the OSTree features we're carrying forward is first-class support for GPG signatures: today we embed that GPG signature inside the Fedora CoreOS image and can verify it client side. That's not something the container stack does. We're not using the container stack's signature support yet, but we will in the future as well. So the idea is, you boot that container on your system, or your fleet of systems, and you can enable automatic upgrades. And then let's say you want to make a change: you go into your Dockerfile. We're actually also trying to work on a declarative build system, which is its own whole subtopic I haven't touched on here. And then, yeah, you can just enable automatic upgrades. We'll probably also make it much easier to take that container and wrap a bootable disk image shell around it, like a VMware OVA, a bare metal ISO, an AMI, an Azure image, all the kinds of things that Image Builder does. I'll skip the alternative history, but one way I've been trying to describe this is: it's as if we made yum natively understand how to boot containers; that's what all this rpm-ostree stuff amounts to. I don't know whether that framing makes sense. So here's what's going to happen, and there's a lot that still needs to happen. I mentioned the garbage collection bug, but this is what I've been doing for a while and we're pretty committed to it. Another thing that definitely needs to happen: right now we have this embryonic implementation where we basically take the whole OSTree commit and turn it into a single giant tarball.
Container images are basically tarballs wrapped with JSON, and there's a pull request in flight that breaks that up into a series of content-addressed layers, which means that when there's a kernel security update you don't need to redownload everything. So that's going to happen, and I think it will be the number one thing that makes this practical for many more use cases, outside of, say, a data center where you have fast networking. I do want to get apply-live out of experimental, continuing our oxidation. Rethinking origins is actually another Nix-like idea, where we basically want to have a YAML file in /etc: instead of typing rpm-ostree install foo, you edit a file and then declaratively say "rebuild to my desired state". And that's actually happening as part of this container work. If you want to contribute to this stack, I'd be happy to help mentor people; there's a very friendly group of people behind all this. So we're hoping to make this happen in 2022. So, actually, where am I on time? My session chair will say. I definitely want to leave time for questions. If you want more links... Yeah, I think you're on time. Okay, still on time. All right, so there's a bunch more information; I've been trying to drop announcements into this Bugzilla. There's a lot more stuff. So just to reiterate, most of the container work is actually happening in the new OSTree Rust code. Again, we're still trying to support putting non-RPM content in there, or no RPMs at all, whether that's debs or OpenEmbedded or something else. But then rpm-ostree binds this together with the RPM ecosystem. So I know that was a big dump of information, and I'm hoping you have your questions. I will stop sharing my screen. Yeah, we have a bunch of questions actually. There was an amazing conversation going on in the chat as well. Okay.
I would like to begin with a question that popped in early: how much does the oxidation affect OSTree's binary size? Yeah, that's a great question. In the OSTree community, like I mentioned, there are a lot of embedded users, and they're very conservative: they've basically invested in supporting the C toolchain, C and C++. And there's a lot of cross compilation, which Rust supports as well, but it's a big thing. So the OSTree core, the OSTree C library, has mostly been unchanged. The /usr/bin/ostree binary and the C shared library do not include Rust code today. Okay? So you can use it without Rust. All the new logic basically links to it using the Rust bindings. And since we didn't want to duplicate all the Rust logic between OSTree and rpm-ostree, rpm-ostree has all the Rust stuff, on Fedora systems at least. There are a lot of details there, but I'm trying not to break existing OSTree users, if that makes sense. There was, and I think you can go through the chat as well, some interesting discussion going on. I'll pick up a question from the Q&A: are there any example scripts of how to create an rpm-ostree commit and wrap it as a container image? Yes, yes. Maybe I can just share my screen, but if you have an rpm-ostree commit, you basically type ostree container encapsulate. And I guess I could share my screen and demo this too, but it's in the docs; I'll dig it up in a bit. So yes, that's fully supported. And I should emphasize, we're trying to do this really carefully. We're not throwing away what works today, because a lot of stuff does work.
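The encapsulate flow could look roughly like this, assuming the `ostree container` subcommands referred to in the talk; the repo path, ref, and image name are hypothetical:

```shell
# Wrap an existing OSTree commit as an OCI container image
# and push it to a registry
ostree container encapsulate --repo=/srv/repo \
    fedora/x86_64/coreos/stable \
    registry:quay.io/examplecorp/custom-os:stable

# The reverse direction: pull the image back into a local repo
# as a plain OSTree commit
ostree container unencapsulate --repo=/srv/repo \
    registry:quay.io/examplecorp/custom-os:stable
```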
So if you have a build system that outputs an OSTree commit today, you can just use the tooling to convert it to a container, losslessly, and then you can unencapsulate it back into an OSTree commit that's GPG signed and all that. Great. Another question we have, from Martin Coleman: do you know about OSTree being used on mobile devices, or mobile OSes based on OSTree? It seems like a perfect fit for robust updates on mobile devices. For mobile, especially mobile phone type things, I think it's hard to overstate the 800-pound gorilla that Android is in that ecosystem. So no is the short answer. Android really is oriented toward things that have a user interface, and I think where people are using OSTree-type systems more is headless embedded, the IoT type space. Competing in the mobile phone market is a whole other challenge, I think. Okay, great. I think we're within a few minutes on questions, but the chat has been interesting. Cool. Yeah, sorry, I'm still trying to scroll through this. I think this is actually the last talk, so I'm happy to hang out. I know some of you in Europe may have been up for a while and want to go, but, oh, actually there is the wrap-up, but I'll hang out after. And if anyone wants to chat, you can catch me on Fedora Devel on Matrix, and also in the Fedora CoreOS room on Matrix, and I'll be around in DevConf. So that's about it. Thank you for the session. It was a whole lot of crazy information for us. Thanks again. Thanks, bye bye. Thanks everyone. Thanks to the audience here.