Hello, everyone. The next session is starting. Please welcome Colin Walters.

So, yeah, my name is Colin Walters. I work in the platform engineering group at Red Hat, with Dan and a bunch of other people, on our container efforts. The community side is called Project Atomic, and we ship a lot of this software in a supported enterprise release in Red Hat Enterprise Linux. My sense of identity is inextricably linked with two large prime numbers; that's the first link there, and then there are my email addresses. Honestly, if you have random questions that you think of afterwards, don't hesitate to email me. I think email is one of those things we just don't quite use enough sometimes. Obviously there are a bunch of public forums for questions, but I don't mind personal email.

One thing I like to start my talks with: I'm here because I like to work on free and open source software. I love working in an international community of really intelligent people all around the world on free and open source software. Let's not take it for granted: all the software that we've built and are talking about here is free and open source, and let's do what we can to support it. I think it's really important.

A quick note: the talks before this were really great, actually. We didn't coordinate much, but they set the stage well for this one, because this talk is going to be fairly advanced; I'm going to assume you know about pretty much everything that's been talked about before, particularly at the high level. I'm not going to explain what Kubernetes is, so if you didn't see Adam Miller's talk earlier, hopefully you at least know what Kubernetes is and that sort of thing.

So Project Atomic is a codename for a lot of our container efforts at Red Hat, and there are a lot of things now under the Project Atomic banner; I'll summarize them quickly. It obviously started with Atomic Host and our Docker efforts, and we brought in Kubernetes. There's also a new Atomic App effort, which allows you to more easily provision container images with answer files. We have a really slick new Vagrant box where you boot it up and you get OpenShift and all of this stuff together; the new version is really nice and definitely worth trying. Anyway, there's a bunch of stuff there, and I'm really happy with everything that's going on under the banner. And of course we're closely tied to our sister project, OpenShift version 3, which is also built on top of all the Project Atomic technologies.

For those of you who don't know, here's something interesting that crosses some of the threads we're talking about: OpenShift v2 and v3 are radically different. OpenShift v3 basically doesn't use any of the same technologies except the kernel. There are trade-offs with this; for example, people ask, "how do I port my cartridges?", and there's going to be some work on that. The benefit of rebasing OpenShift v3, though, is that it's a lot more general. Container technologies have matured enough now that you can run general-purpose workloads in our container environments, in Docker and Kubernetes, whereas before there was this perception, valid or not, that OpenShift v2 was more of a web-app PaaS. Would you run your Postgres instance in it?
You could, actually; but container technology has evolved to the point where it's a lot more general now, and we show that.

So, specifically, one of the things I'm really interested in is the concept of the traditional Linux distribution, and how that changes as we move into this container world. I've been in the distribution space for a long time: a long time ago, in a past life, I maintained build-essential in Debian, before I joined Red Hat. I've done a lot of this stuff for a long time. And I want to highlight that there are a lot of advantages to the way distributions are structured. There are a lot of reasons why Debian and Fedora (the original Red Hat Linux split into Fedora and Red Hat Enterprise Linux) have survived and thrived for so long. One of the biggest is that it's very easy to create something; it's usually a lot harder to maintain it over time. I've certainly learned that. In the demo I'm going to do later, there are honestly some hacks, and it's just the start of something; if I really want to make it work, it's going to take a lot of maintenance. The same is true of the way we're building applications: few people want to maintain OpenSSL inside their application. There's a lot of stuff that you don't want to maintain as an application author. So a lot of this model still applies.

Beyond just the shared maintenance, there are other interesting things that have happened in the distribution model. A good example is that in Fedora there's a drive to have a crypto policy: a standard configuration model across the different crypto libraries. So you could say, for example, "I don't want to speak TLS 1.0; any use of HTTPS should always be TLS 1.2 or greater," because there have been a lot of vulnerabilities in older protocols. That required driving a common change across a couple of different crypto libraries, across NSS and OpenSSL and things like that. So I think this concept of shared maintenance of a core is still very valid, and there are a lot of reasons to have it. And now there's license verification and things like that, too.

So one thing that wasn't initially clear to me for a long time is that the problem domain of how you build something is intimately tied to how you deliver it. People talk about continuous integration, and it's very easy to stand up Travis or something like that, which is basically just outputting a web page that says whether a thing built or not. That's not delivery; that's just giving you HTML. It's useful, but on its own it doesn't really matter. When you start to deliver something, though, all of a sudden how you build it becomes very closely tied to it. If you're releasing RPMs, you need to know a lot about how they're consumed: how RPM version numbering works, all of that. All these details become critically important. And once you've delivered something, it becomes very important how you manage it, and this is a thread that crosses a lot of different things. Once you've built that RPM, there's a bunch of stuff that knows how to consume RPMs, like Ansible, and there are the semantics of where the config files live: the fact that config files live in /etc. A lot of this stuff gets built on and accreted, so these problem domains are really tied together.
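To make that last point concrete, here's a rough sketch of the kind of tooling that accretes around RPM semantics and the /etc convention; a fragment of an Ansible play (the modules are real, the package and paths are just illustrative):

    # Ansible already knows how to consume RPMs and where config lives
    - name: install the web server from the distribution repositories
      yum:
        name: httpd
        state: latest
    - name: drop a config file where every tool expects it
      copy:
        src: myapp.conf
        dest: /etc/httpd/conf.d/myapp.conf

Change the delivery format, and every layer of tooling like this has to be reconsidered.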
So it's a challenge when we change any of these earlier layers: you have to think about, okay, how do I manage that? This is very true of Docker today. As we've introduced a new way to deliver software, there are whole new ways to think about how you manage things. For example, if you have secrets or keys, in Kubernetes we now provide a secret store. And I asked Adam Miller a question about config management: what's the role of classical config management in this new world of containers? Obviously you can still write to the /etc directory in containers; if you have some private CA certificates or something, you can still drop those in during your container build. But these things change. Whenever you're thinking about a new way to build something, or a new delivery format, you need to be thinking about how the management tools interact with it, because that's something that matters over time.

With virtualization, for the most part (there are unikernels and all these other new ways to use virtualization), the model that really won, as far as I can tell, was: take what you did on a physical box and put it in a virtual box. Classically, you run yum or apt-get or whatever inside your virtual box the same way you do on a physical box. We didn't change how we manage software when we moved to a virtual environment, for the most part. That has advantages, because all that tooling transitions; it also has disadvantages in inefficiency and things like that. And that's what I'm going to talk about.

So where we are with Docker today: yes, Docker has wild levels of adoption. And one of the reasons for that, I think, is that in a lot of cases we're just doing what we did before; in a lot of cases, we're treating Docker the way we treated virtualization. In other words, you still have yum or whatever inside that base image. And just like with virt, there are people doing different models; there's been a lot of prototype work on producing application-specific images and things like that. All that exists, but what I'm saying is that with Project Atomic we have, for the most part, just put what we did before into a new kind of box. And this has resulted in some interesting tensions. Dan Walsh did a really good presentation on Docker versus systemd, and that tension is actually a consequence of the fact that we haven't changed how we build software: we're still putting RPMs inside our base images. There's a great example: if you look at the Fedora Apache Dockerfile, it's basically just "yum -y install httpd", they change some config files, and they have a little shell script to start Apache. It's all pretty simple; see the sketch below.
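From memory, that Dockerfile is roughly the following; a sketch, not the exact file, and the wrapper script name is illustrative:

    FROM fedora
    # install Apache from the distribution repositories, then shrink the image
    RUN yum -y install httpd && yum clean all
    EXPOSE 80
    # a small wrapper script stands in for systemd as the service manager
    ADD run-apache.sh /run-apache.sh
    CMD ["/run-apache.sh"]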
One thing I will call out here, though: later today you'll hear that OpenShift v3 also has a different kind of Docker build system called Source-to-Image, and it has a lot of new advantages. One of them is that it runs as non-root, and that's primarily what I'm going to be talking about today: this role of RPM, and root versus non-root, in building. And in Dan's presentation on systemd, one of the things that came up at the end was exactly this: are you running as root or not? And user namespaces.

So, I want to make an assertion here. Again, we've mostly, up until now, been doing what we did before, just putting it in a new kind of box. What I want to argue is that containers are the right time to move to doing RPM as non-root. People have done this before, but once you take root out of the picture, a lot of the other technical parts become much clearer, I think. And there are a lot of good reasons to support RPM as non-root; I'm going to talk about that in the next slides.

The other thing I want to talk about is that right now we have yum (or DNF, or whatever) in our base images. But in a lot of cases, as Adam Miller was talking about, when you get to immutable infrastructure, you want to use a supported, commonly maintained base set of software when you're building something, but you don't want to go in and run "yum update" inside each one of your containers. You actually want the package tooling on the build side only, and thin images as the output. And that's a real challenge today with the Docker layering model.

I'm not the first one to do anything in this area. In fact, Adam also mentioned Linux-VServer, which I'm going to link to as well, and that's ancient now; in some ways we've failed to learn from some of the things they did. Another great project in this area is Richard Jones's tool supermin, which also runs as non-root. It wasn't designed to build containers; it was designed to generate virtual machine images, because you can run VMs as non-root, and that's one of the neat things about virtualization. He basically wrote a tool that just unpacks packages and makes VM images, and it has some hacks that are kind of similar to mine.

So let me talk more about non-root. The original Unix was a time-sharing system. You had this box, and, one of the things I find hilarious, there still exist multi-user environments: small groups of people on the Internet where you can ask for a shell account in a particular area. I find this hilarious in the world of Amazon Web Services and these other gigantic public clouds where you can get a slice of a VM for a small amount of time, but it's kind of cute; you can type "who" and see the other people logged in. Anyway, the point is that the original Unix was time-sharing. You could log in as non-root, and if I'm a scientific researcher, I could run some sort of cluster of math jobs in one process group, and another researcher could do something else on the same machine, the same kernel. Since then, we've kind of reverted in some ways, by having a lot of software require root privileges to run. There are reasons why, and this gets into centralized software management versus per-user, and the different ways we control that.

But what I want to hold up here, specifically as the security target, is this: in Atomic Enterprise and OpenShift v3, and this is trickling down into Kubernetes, there's a security constraint for Docker containers called MustRunAsRange. What it does is actually pretty interesting: you take your Docker image, and then the system picks the user to run it as. This is technologically similar to the way Android works: when you download an application to your phone, it obviously doesn't run as root, because that would be crazy; the system allocates a user ID for that application dynamically.
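From the outside, that model looks something like this; a minimal sketch, assuming an image whose software doesn't care which UID it runs as, and with an arbitrary UID standing in for one picked from the project's allocated range:

    # the platform, not the image, decides the identity at run time
    docker run --user 1000470000 my-httpd-image

The important part is that the UID is injected by the system at run time rather than baked into the image by a useradd.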
In a clustered environment, this is even more general. It's a powerful way to do things, because it ensures that you can isolate applications from each other using that same classic user-ID isolation mechanism. Now, there's also a reason the public cloud model has been built on virtualization: virtualization is more secure than a shared kernel, and it probably always will be. There have certainly been flaws in virtualization; there was at least one local exploit for QEMU. But the CVE linked here was a local root exploit in the shared kernel that any unprivileged user could trigger. It's unfortunate, and it'll probably always be the case, but we try to fix them; and it's not a reason, of course, not to do multi-tenant systems.

Okay, so the other thing I'm going to demo a little: in the Docker image model, you have a set of layers, whereas the OSTree model is different. I should back up. Before Docker existed, I was writing this program OSTree, which is what's used for the Atomic Host update side. There are technical reasons why we have two different formats; they have different advantages, and I want to talk about some of the advantages of the OSTree format. One is that OSTree basically checksums each file, a lot like git does. It's very much like git, except it's designed to ship binaries around, not source code. Along with that, it has some data-format changes: for example, it stores extended attributes and UID and GID, which git does not. That's why I created a new format. It also uses SHA-256 instead of SHA-1, which I think is the right choice, and some other details. What I'm going to demo later is how this model of checksummed trees is more powerful, in general, than layering.

Now, if we want to go this route, and I think we do: user namespaces are probably going to get more investment, because the appeal there is that you can just run yum or, again, apt-get or whatever, unmodified. But the problem with user namespaces is that they add a whole new attack surface to the kernel. What I'm trying to push for — again, the thesis here — is that if we change how we build and install RPMs to run as non-root, a lot of that complexity just drops away.

Okay, but if we want to do this, we also need a container runtime that's ready to run as an unprivileged user. Again, before Docker, I wrote one of these, called linux-user-chroot; the core of it is actually only about 500 lines of C code. But there are a lot of other container runtimes, too. One approach we could take is to mediate access to Docker or systemd-nspawn or runc or one of these other container runtimes. It's a challenge, but Kubernetes is kind of doing that by default now, so that's one thing to look at. And there are others; Zero Install actually has some pretty neat ideas that would be useful to look at, too.

So: installing RPMs as non-root. What I'm going to demo here is, in some ways, really similar to an improved version of what the Linux-VServer people were doing twelve or more years ago, a long time ago. If you follow the link to their wiki page: basically, it's the equivalent of yum --installroot (or I think the Debian one is debootstrap, something like that); you just unpack the packages into a root.
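That classic pattern is still around today, and looks roughly like this (paths illustrative); note that, as it stands, it requires root:

    # RPM world: unpack a package set into a chroot
    sudo yum -y --installroot=/srv/roots/web --releasever=7 install httpd
    # Debian world: the equivalent bootstrap
    sudo debootstrap stable /srv/roots/web http://deb.debian.org/debian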
And the problem is that those roots don't share storage, right? That's one of the things Docker changed, with this concept of a base image that you branch off of. But what the VServer people did was write this tool, vhashify, that scans all your files and checksums them, and if they're identical, turns them into hardlinks. OSTree is like a really overgrown version of that. And actually, for a long time in Fedora there's been this tool called hardlink: if you just run regular Fedora with yum and you have multiple kernels installed, it checksums the duplicate files and dedupes them. Again, OSTree is just an overgrown, very polished version of this. I'm certainly not going to claim it's innovative, just that it's a pretty good implementation.

Okay, so I want to actually demo something. I gave a similar talk to this at ContainerCon, but at the time I was just saying, "hey, we should do this," and I realized I basically needed to go do it. So, okay, let me go over here. Running as non-root, right? What I have here is a local mirror of CentOS 7, because I didn't trust the Wi-Fi to stay up while I was doing my demo. And what this command does — and there's no container runtime here; all I'm doing is managing hardlinked directory trees, like Linux-VServer was doing, again, just a little bit better — is create a number of directories. One is an OSTree repository, which, again, is owned by a non-root user, and is in what we call "bare-user" mode, where the files are stored unpacked; I'll show how that works in a minute. Then there's this rpmmd.repos.d directory, which is like yum.repos.d: if you put repo files in there, they get parsed. And finally, there's a roots directory.

So what I'm going to do now is make a container... oops. And I just tested this, too. Okay, rather than debug that, I'm going to go over here and clean this up. I made a Docker container of this. Hold on a sec. Oh, sorry. Anyway, it's not working; just give me a second, because my CentOS mirror is actually on this external hard drive, and I forgot to plug it in. Okay, and now I need to reload systemd for the mount. Okay, cool, I have my cache again. All right, let me get back in here. One of the things I was going to show later is that this is actually running inside a Docker container itself, just to make everything a little bit more meta. And then, oh yeah, something I forgot to do: I'm enabling this CentOS repo. Hopefully this will work.

I actually left this warning in, because it's useful to explain why it happens: I built this Docker container using the tools that I'm demoing right now, and there's no /etc/passwd entry inside the container assigned to this UID, because something has to create that.

Okay, so you can see that was pretty fast. So what happened here? Again, we did a depsolve on bash, the same way yum or any package manager would, and we downloaded the packages. Where things actually start to get different is that rather than unpacking the RPMs using RPM itself, OSTree takes over, parses them, and imports them into an OSTree repository. This is where each file in the RPM gets SHA-256-checksummed and that sort of thing.
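Conceptually, what happens per package is something like the following; a hedged sketch using the plain ostree command line (the real tool drives this through the OSTree API, and the branch naming here is just illustrative):

    # a repository a normal user can own; files are stored unpacked
    ostree --repo=repo init --mode=bare-user
    # explode one RPM into a scratch directory
    mkdir bash-tree
    rpm2cpio bash-*.el7.x86_64.rpm | (cd bash-tree && cpio -idmu)
    # commit it as its own branch, like a git branch per package
    ostree --repo=repo commit -b rpmostree/pkg/bash -s 'import bash' --tree=dir=bash-tree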
So if I look at the branches in the OSTree repository, you can see I have a separate branch for each RPM that was used as input. Imagine you did a git init, then unpacked each RPM and did a git branch and git commit for it; it's kind of like that, except that unlike git, this mode stores the files uncompressed. So if I look at this one... I don't have file(1) in this Docker image, but anyway, the point is it's just a regular file; hopefully it's text. If I cat it... yeah, it's just a file. Anyway.

So the OSTree repository stores these hardlinks, and inside this directory you can see I have a bash root. What this tool does is like what OSTree (or rpm-ostree) does for the host system: it uses symlinks to point to the different chroots on the system; that's how the root gets updated. And I'm just doing the same thing for these root directories. So up until now, all I've been doing is unpacking RPMs as non-root, and there's been prior art around that. What we actually need to do now is pick a container runtime. Again, I could use runc, I could use systemd-nspawn, I could use any of those tools for this; but those tools all require root privileges, whereas this tool does not. And I claim it's secure, in the sense that you couldn't affect the system's integrity with it. There are a lot of challenges to non-root containers around resource controls and things like that; I'm not solving those. I'm just arguing that once we get to the point of doing RPMs as non-root, the need for a non-root, secure container runtime becomes more important, too.

So now I can go in here. linux-user-chroot is actually kind of awkward and unfriendly to use directly; it's a little bit more of a building block. Yeah, okay. So now what happens is I've just entered the container, and I don't even have ls anymore, because the container just has bash. It's really tiny. But that's not that interesting, right? So let's do something a little bit more interesting: rpm-ostree container assemble httpd.

A couple of details here become pretty important. You can see that I have 111 packages going into this chroot, but I'm only downloading 94. Why is that? Well, that's because we have this concept of a shared store for all the exploded packages, so I don't have to download those or re-checksum them. It's using the fact that OSTree has a git-like branching model. It's kind of like if you took yum or apt-get or one of those other tools and just cut off the bottom half, the part that writes to the filesystem; that's basically what I've done, and replaced it.

So, okay, now we have an Apache root, and I'm going to run bash inside it. One of the things I'm doing here is moving all the config files to /usr/etc, because the idea is that you want to mount /usr read-only (that's one of the things I omitted here), and that works as non-root, too, because we want our containers to be immutable, even though I'm still using the Linux kernel container features as non-root here. So what I'm going to do is make a copy of those Apache config files. This is actually the same thing that OSTree on the host system does: we're just making a copy of /etc, which, again, all just works as non-root.
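The copy itself is nothing fancy; a minimal sketch, with illustrative paths for the generated root:

    # the packaged defaults live in /usr/etc inside the root; give the
    # container a writable /etc by copying them, all as a normal user
    cp -a roots/httpd/usr/etc roots/httpd/etc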
Oops. Okay, so we hit the next failure, and this is actually an interesting topic. The Apache RPM comes hard-coded to expect that there's a user named "apache". But if we're in a Kubernetes MustRunAsRange model, where the system picks the UID, or like Android, where the system picks the UID, then what I'm going to argue is that we should build on top of that and make this whole thing a lot more dynamic. So you don't have the RPM %post running useradd; you don't need that stuff. It should come from the system injecting it.

So let me actually... yeah. I did actually add vim to my demo container, but it looks like I was running a different one; I'm just going to hack this up. Let's try this. So, honestly, I actually switched away from nginx just before this conference: I didn't trust the Wi-Fi, so I made a mirror of CentOS, but then I realized CentOS core doesn't have nginx, so I switched to Apache and didn't fully test running it. Honestly, yeah, I could make this work, but the point is that Apache will run happily as non-root; we just have to change some things. We also have to change the port configuration: I'm not creating a new network namespace here, so I need it to bind to port 8080 or something like that rather than port 80. One of the important things is that a lot of the container runtimes do network control: in a Kubernetes environment, the system actually allocates a separate IP per pod, and there's some really fancy networking stuff you can do to make sure the different tenants, or your different containers, are isolated from each other. That's all stuff linux-user-chroot is never going to do, because it's not going to give you any more privileges than a classic Unix system had; in a classic Unix system, you can't just SSH in and create network interfaces, right? So that's, again, a problem I'm not solving, but one I think the higher-level container runtimes should handle.

All right, so I could get Apache to run, but let's take this one step further. When I assemble a Postgres container, you can see I'm only downloading two packages, because it shares almost everything else with that Apache container; just the Postgres bits differ. But unlike in the Docker layering model, each of these filesystem trees is as minimal as the RPM dependencies will let it be. Now, there's a lot of other stuff we could do to make the RPMs themselves more minimal. For example, the RPMs actually right now require systemd. If we take this the next step, it gets complicated, because systemd has a lot of useful features; but what I want to assert here is that in a multi-tenant environment, we don't want users uploading systemd unit files. We don't want to allow users to control the system boot process. By moving to a forced non-root model, I'm kind of solving that, because those users can't affect the boot of the system, and that changes how systemd fits in. Anyway, a lot of different details; that's the core of the demo.

Each of these roots, I have to emphasize, is just a hard-linked tree, and that means they all share storage, and in turn memory. One of the problems with the Docker layering model is that if you have multiple base images, they don't share memory and that sort of thing, because Docker just doesn't know; whereas if we use OSTree, we can make that work.
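You can check the sharing directly; a hedged sketch with illustrative paths:

    # identical files across roots are literally the same inode, so they
    # share disk blocks and, once mapped, the same page cache
    stat -Lc '%i  %n' roots/httpd/usr/lib64/libc.so.6 \
                      roots/postgres/usr/lib64/libc.so.6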
So I've got to fly through the rest of this stuff; that was the demo. (And yeah, dealing with %post scripts means we'll need some sort of runtime.)

One of the things I want to do with this technology is go in and replace the whole guts of the Mock RPM build tool, because that's not competing with anything else, and right now Mock is actually not secure: if you're in the mock group, it's basically just a glorified root shell, because it's really hard to write container tools that are accessible to non-privileged users by default and still be secure. Also, using OSTree hardlinking should take installing a build root from something like minutes down to ten seconds or less. It's really fast.

Another thing I want to do with this: you'd still use Docker, but this becomes a container that you run in your infrastructure and give inputs to. You say, "I want containers with these RPMs," and it knows when to rebuild them. Oh, and actually, that reminds me of an important part of the demo that I skipped; let me jump back, because I think it's cool. Going back to that first point: making things is easy, but how do you maintain them over time? So I have another repo here, demo-update.repo, and it basically just has a new version of OpenSSL. So the next Heartbleed comes out, right? What we have right now in Atomic Enterprise and OpenShift v3 is this concept of an image stream, where when a new base image comes in, it rebuilds all of your apps and things like that. I can actually do that a lot faster. So now, when I rerun the assemble, it sees only one new package: this whole machine only needs to download one new version of OpenSSL, and it knows which of my containers to upgrade. It made a new chroot; it's all hard-linked and clean. And again, if I run the upgrade again, it says: you're done, you are secure. It was that fast.
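The update repo is just another file dropped into rpmmd.repos.d; a hedged sketch of what mine looks like, with illustrative names and paths:

    # rpmmd.repos.d/demo-update.repo
    [demo-update]
    name=demo update: a rebuilt openssl only
    baseurl=file:///srv/mirror/demo-update
    enabled=1
    gpgcheck=0

Re-running the assemble with that repo enabled is what pulled in the single new package.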
Okay. So that's one of the things I want to do: have an infrastructure container that can basically just generate your Docker images, assuming you're using only RPMs. And then, finally, the other thing I'd really like to do with this is have a centralized server, and this goes all the way back to NFS roots: why unpack all of the software onto each machine? You can just have a centralized server and mount it read-only, and that meshes well with this whole model. So I'm pretty much, yeah, okay: that was the end of my slides. I only have five minutes for questions. So, questions.

So the question was: where do I see this being used, or most applicable, right now? Yeah, I was trying to answer that with the last couple of slides. I'm going to start with Mock, and it's actually secure, I believe, anyway. And again, for building Docker containers. I mentioned that this tool's container was actually built using itself: I used rpm-ostree container assemble to make a filesystem that just had rpm-ostree and linux-user-chroot, and then I exported it into a tarball that I ran docker import on. So that's part of how this infrastructure would work: you have the shared storage on the centralized server and then you export it to Docker, or you just mount it from Docker; the Docker daemon could learn how to mount it, I'm not sure.

How does it compare against what? Oh, NixOS. Yeah, there's a comparison there. So the executive summary of OSTree versus NixOS: NixOS is also a build system, and they have this fairly rigorous process where they checksum all the inputs, and if any of the inputs change, they rebuild everything. I basically don't think it's practical to rebuild your entire infrastructure for the next glibc security update. Otherwise, they share a lot of ideas. They have their own binary format; they could probably just use OSTree, but OSTree is not attempting to solve the build problem. OSTree replaces only the bad parts of RPM, and that's intentional: I'm not trying to make a new package manager, because that has all sorts of ramifications; I'm just changing how we write to the filesystem. That's all. They have some good ideas, too, but it's the rebuild-everything model that I think makes it very impractical.

Yeah, so the question was: can a system administrator provide a base tree and then users add stuff on top? Yes. So that gets to the point that if you have these really big apps, you probably do want some notion of layering. It's interesting, though: even though there are no layers here in actuality, in practice the roots all share storage, and it's pretty fast to assemble each one. So if you have a user who wants a different version of Apache, that root shares all the same storage with the rest of the stuff, transparently and automatically. But yes, we'll probably investigate something like that, and this will also help with rpm-ostree package layering.

Okay, it's a great question: how do you add stuff that's not packages? I would definitely assert that's one of the number-one most popular things about Docker: you can, you know, yum or apt-get some stuff, then pip install, and then maybe use cargo or go or something else to add more stuff, and glom it all together. The problem when you do all this is the updates: how do you know when to update it? There are a couple of answers to that. One is that you auto-generate RPMs. Another is that someone else could write a tool like this using OSTree; the problem is you have to port the tool that generates the artifacts to know a little bit about how OSTree works. OSTree is an API, not a daemon. But I guess auto-generating RPMs is probably the simplest way to start. You could certainly do other things. I thought I saw a question over here, but maybe not. Oh, in the back, yeah.

How do we make this friendly and usable with rkt or systemd-nspawn? It would probably be a two-line change. All this does is generate the roots. I mean, you do need some sort of management layer on top, and this is where Docker is actually pretty good, as far as the daemon providing an API and things like that. So there has to be some sort of management layer, whether that's Docker or at the Kubernetes level, but I'm not trying to do that part, because that has huge ramifications; if someone were going to do it, it would probably be at the Kubernetes level or something like that. But otherwise, you basically just point systemd-nspawn at one of these directories. It works the same way as if you'd used yum --installroot; this is just a better way to do yum --installroot, as non-root.
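For example, something like this; a sketch with illustrative paths (and note nspawn itself wants root, which is exactly the trade-off I was describing):

    # point an existing runtime at one of the generated roots
    sudo systemd-nspawn -D roots/httpd /usr/sbin/httpd -DFOREGROUND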
So the question was: what can RPM do to make this better? Okay. Yeah, so this approach definitely gets around the scriptlets, but it's not actually necessarily something at the RPM level. For the most part, especially if you take out stuff like useradd, all we need to do is make the root and then run all the %post work at the end; we need to move to that %posttrans model for pretty much all the packages, for things like ldconfig and stuff like that. Okay.

I'm out of time. Like I said, don't hesitate to email me if you have random other follow-up questions. Thanks, all. Thank you very much.