Hi everybody. I think I'm competing against food, so that's a very tough competition to be in, or at least my belly says so. But yeah, hi, I'm Lennart Poettering. I'm going to talk today about portable services, which is something we did in the systemd project over the last year or two. I'm going to introduce you to what this is, and if you have any questions, do not hesitate to interrupt me right away. I much prefer having an active discussion going on the whole time over questions only at the end. So don't hesitate to interrupt me if you have any questions whatsoever about what I'm saying.

Okay, let's jump in. So what are portable services? Portable services are ultimately the combination of system services, which of course is, as you all know, the thing that systemd is about (it's a service manager and a couple of other things, but primarily and originally a service manager), plus some container features. The idea is basically this: we're not living in a vacuum. System services are how services were traditionally managed on Linux, since System V times and before. And nowadays there are containers, which try to do something slightly different but are essentially still about managing services, just in slightly different ways. Portable services are one approach for taking some of the features of container management, not all of them, and bringing them to regular system services, simply because they're generally useful. You could also say it the other way around: it's kind of containers, but with more system service features.

What does that actually mean? First of all, what does "containers" even mean? Nobody really knows what containers are; there are various definitions. To me, at least, it's about three things. Resource bundles: you package everything up in a tarball, all the dependencies, all the shared libraries, all the resources it needs, into one unit. Isolation: you apply sandboxing to it; it might be better sandboxing or worse sandboxing, but it's definitely there. Plus delivery: you have these orchestration frameworks that distribute it onto your cluster so that everything can run. So under the assumption that this is what containers are about (I'm pretty sure some people have other opinions and think more is necessary), those are at least the three things I see in containers.

Portable services are hence the combination of resource bundles, exactly that idea from containers, but with the focus on integration rather than isolation, because that's what system services are. Traditionally, system services ran with full privileges, with full integration into the system; they could do anything they wanted because they ran as root and had full access. But then, also, sandboxing. So where do I draw the line between isolation and sandboxing? As mentioned, I put the isolation thing on the container side and the sandboxing thing on this side. Where do I see the difference? Isolation, for me, is a little bit stronger: isolation is really about creating your own world. Containers in many ways mostly communicate with the outside through the network, so they might as well be VMs in many regards, not in all regards, but in many, which is not traditionally how system services work. They just have access to IPC.
And yes, some of the local daemons tend to communicate via TCP/IP, but many of them communicate among each other through completely different things, like AF_UNIX sockets, FIFOs, whatever we have. So for portable services, the focus is less on isolation and more on sandboxing: allowing the specific service to integrate into the system exactly as much as it wants, but not more. Isolate where it's necessary, integrate where it's appropriate, and try to find the sweet spot between the two. I guess it's a different view of the world depending on where you come from: one side comes from complete isolation, the other from complete integration. With portable services, I want to put the dial in your hands: you decide how much integration and isolation you get.

Portable services are also very much modular. What do I mean by this? When you buy into containers, you usually buy into all the concepts at once: the bundling, the deployment logic, the isolation. It's not that you actually have to; you can take Docker, for example, and turn the various things off again. But with portable services, from the beginning, I want a modular system: you have the service and you pick exactly what you want. Do you want the resource bundling or not? Do you want the sandboxing, or the other way around, the bundling but not the isolation? This is supposed to be modular. You pick what you want, and it's up to you to pick the exact level of isolation, integration, bundling, and so on.

A different view on this: consider a range from isolation on the left-hand side to integration on the right-hand side (or, well, probably the other way around; I see the slides go in the other direction). If I were a better graphics artist, I probably would have painted something here, but I was too lazy and LaTeX made it too easy for me not to. If you have that range, then on one side you have classic system services: they're fully integrated. System V and systemd services have full privileges, see everything, can do everything that's happening on the system. Portable system services are where I position this new concept. Docker-style microservices are somewhere in the middle. Full OS containers are things like LXC; LXC is the original container manager on Linux, and it tries to put the focus on virtualization using containers, but more in the fashion of classic VM virtualization, except that it's not. And then there are VMs, KVM: true isolation, where you have a completely virtual machine and the communication with the outside is exclusively over networking. So if you consider that range, classic system services on one end, VMs on the other, Docker is somewhere in the middle, but nobody really knows where, and portable system services are what I'm trying to introduce somewhere towards the integration side. So, just to underline: this is not supposed to be a re-implementation of anything the containers do.
It's just supposed to be something that bridges the gap between these camps. Because, you know, we ultimately live in a world where pretty much every piece of software we have nowadays is already packaged as a system service, either a native systemd service or a System V init script. At least every piece of software that is older than five years or so has that. So I want to open up a bridge, so that people who have already done all that work, because everything is already packaged like that, can use some of the container features for their stuff without having to buy into the whole thing.

This range I pretend to be one-dimensional, but of course it's not; it's multi-dimensional. So in reality, consider what's shared and what's not shared. Networking, for example, tends to be shared between a classic system service and the host, obviously. If I run nginx on my RHEL machine or something, I configure the networking inside of RHEL and nginx just makes use of it. On the other hand, the networking for VMs is generally separate: you configure the networking for the VM inside the VM and the networking for the host on the host; it's fully isolated. Portable system services give you either, but usually you share the networking with the host. Docker-style generally tries to push you towards not sharing the network; they put network layers on top. LXC and KVM, of course, generally have full isolation of the network. But all of this is blurry: if I claim it's one way, you can totally make the point that in your particular setup it's not that way. But in general, this is how it goes.

When you think about sharing file systems: classic system services share the file system fully with the host. If I have nginx installed on my system and it opens /etc/fstab or any other file on the file system, it's the same /etc/fstab that the host sees, obviously. This is different with VMs: if I open /etc/fstab in a program that runs inside the VM, I see the one in the VM, not the one on the host. And for the others: Docker-style, you usually would not see the host's files. With portable system services, I want to give you the option. Generally you would not see them, because I suggest people use bundling, but if you want to see them, go ahead.

PID namespaces. You know PIDs, the process identifiers. If you live in a VM, you're fully isolated from the host: PID 1 or PID 4711 inside the VM is something completely different from the same PID on the host. Classic system services: no isolation at all in that regard. PID 1 or PID 4711, the thing the service sees there is exactly what actually exists on the host; they live in the same PID namespace. With portable system services, the idea is that PID namespaces are shared. Docker-style generally does not share them, and everything further towards the isolation side doesn't share them either.

Init system. For classic system services, well, that's what systemd is: it manages services.
So if you run as a service on a systemd system, you talk to the host systemd instance, the host init system. Again, at the other end of the extreme, VMs generally run their own init system, which can be the same as on the host but can also be something completely different; it can be System V init in the virtual machine while the host runs systemd. If you look at the ones in the middle: Docker usually tries to avoid an init system, but if you were to say it has one, it's probably its own stub init. Full OS containers do run their own init system, however, and it can be distinct from the host's. So they draw the line somewhere else.

Then device access. By device access I simply mean access to something like the sound card or a network device or a hard disk. If you do KVM, you're unlikely to get direct access to the real hardware. There are methods for that now, but you always have to do extra work, and then it's still not the real hardware but some paravirtualized weirdo thing. If you do classic system services, obviously you get full access: if you open /dev/sda, you get /dev/sda, the real thing from the host. For the things in the middle: LXC is generally like a VM, you probably won't even have /dev/sda. So is Docker. Portable system services are somewhere in the middle; I give you both options. You can open up device access, in which case you see the same /dev with the full access modes, as you like. Or you can choose not to, and then you see a reduced one which just contains /dev/null and /dev/random and so on, but not the real hardware.

Logging. With classic system services, the logging of your service goes to the system logs; logging is shared. Every native service running on a systemd system puts its logging through the journal, and wherever that ends up. In a VM, logging is entirely separate: VMs log internally, to their own logging daemon that writes to the hard disk of the virtual machine; nothing is shared with the host. For the things in between: LXC usually has its own logging; the stuff that happens inside that kind of containerization ends up inside of it, and you have to do extra work to pull it out of there. Docker usually shares it with the host, but not quite: everything Docker containers log ends up in the system logs eventually, even though they don't get direct access to the system logs. Portable system services share the logs with the host; the idea really is that everything ends up in the journal.
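Concretely, that means you read such a service's logs like any other unit's. A sketch; the unit name here is made up:

    # the service's logs, straight from the host journal
    journalctl -u foo.service
    # or follow them live while the service runs
    journalctl -u foo.service -f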
So I hope you got the idea of what I mean by integration and isolation, and that it's not a linear, one-dimensional thing. It's more like a couple of checkboxes, and the different solutions, from classic system services to VMs, tick different boxes in different areas. Okay, so much about where I want to position the portable system services concept. By the way, again, if anyone has any questions about this stuff, totally interrupt me.

One of the goals of portable system services, before we actually come to the technical stuff of how it's all implemented, is: leave no artifacts. This is something that container engines generally provide; classic system services generally do not. What do I mean by that? For example, if I want to start a system service, I first have to install it, say with an RPM. Installing the RPM will usually create a user, a system user that the stuff runs as. And on RHEL and Fedora and all the other Linux distributions out there right now, these system users stay around. Why do they stay around? Because if such a user ever created a file anywhere in the file system, those files will be owned by that UID. Since we cannot easily figure out which files those are, because not all file systems are accessible all the time and ownership isn't indexed that way around, we have this rule in all distributions: system users are never deleted. If you create a system user once, it stays around forever. That's an artifact. And there are a couple of other artifacts like that. If I install some weird daemon and it creates a System V IPC object, then I stop it and the object stays around; I delete the daemon and it still stays around. Or it creates files in /tmp: you install it, you run it, it creates some temporary files in /tmp, you stop it, you remove it, and those temp files stay around.

So one of the goals of portable services is to put the emphasis on leaving no artifacts: taking classic service management and adding the concept that we do not leave artifacts around, so that when a service is stopped, or at the latest when it's removed from the system, everything's gone. No users stay around, no files stay around, no IPC objects stay around, nothing stays around; you're basically back where you started. As mentioned, containers generally provide this out of the box; this is about adding something that world already had to the classic world. There's one exception regarding artifacts, though: the thing I don't want to remove is the logs, because those are generally useful. You actually do want to know that once upon a time there was a system user and the service ran, so logs are not removed.

What this effectively means is that we need to bind life cycles together, which is something we never did on Unix. The life cycle of a service, the time it's installed or the time it's running, is in no way bound to the life cycle of files, and in no way bound to the life cycle of system users. The goal here is to bind these together, so that when the service goes down, we remove everything else, and vice versa.

Another goal is everything in one place. On Unix we traditionally have it so that if you install an RPM of nginx, it puts files everywhere, all over the place: /usr, /usr/lib, /etc, /var, wherever it wants to. With portable services, the idea is to focus on classic services but enable people to isolate their stuff from the rest of the system more strictly. What does this effectively mean? chroot. Everybody knows chroot; it's this Unix thing that has always existed, and it's at the core of the idea of containers. But for classic system services it's not particularly used. There are some services that do it; I think Postfix does, Avahi does, and a couple of other system services do. But generally, chrooting is not the default for classic system services. With portable services, the goal is to make this happen.
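For a single service, the mechanism already exists today. A minimal sketch, where foo-daemon and its tree under /srv/foo are made up and RootDirectory= is the real setting:

    [Service]
    # chroot-style: the service runs out of its own directory tree,
    # which carries its own libraries and resources
    RootDirectory=/srv/foo
    # the binary path is resolved inside that new root
    ExecStart=/usr/bin/foo-daemon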
So the emphasis really is: everything in one place. But again, you don't necessarily have to do it this way. We just want to make it easy, move these concepts that have been around for a long time more into the focus of attention, and suggest that maybe that's how you should do your stuff.

Another thing that is a real goal of the portable services concept: I think it's important that they feel like native services. Right now, systemd supports two kinds of services, System V services and native systemd services, and you control them the same way regardless of their type, with systemctl. You can start them, you can stop them, you can affect their lifecycle, you can do resource management for them, you see the logs, always the same way. The idea for the portable services stuff is that if you now use the stronger sandboxing, the bundling and so on, it should still feel like a native service, so that you still have the same controls for resource management, logging, and so on that you had for classic system services, because ultimately they are classic system services.

Okay, so much about what it is. Let's take a step back at this point and ask: why even bother? I mean, Kubernetes already exists. Well, I'm of course a service management guy, because I'm one of the systemd people. For me, this is the next step for service management, because service management is always going to be necessary. There are always these low-level system components: you need to run your container manager, your Kubernetes instance, and these are system services that generally need other stuff to be up first. So I think service management is at the core of OS design and it's not going to go away. But we also don't live in a vacuum, and you need to see what other people do. So for me, it's the next step for service management: taking the ideas that are good about containers and adapting them to classic service management, because they're useful.

Also, as I already mentioned, this takes advantage of the fact that pretty much everything we have on Linux right now already has a systemd service file, and if it doesn't, it has a System V init script, which is kind of the same thing. Sure, a ton of stuff is nowadays packaged as a Docker container or an OCI container or something like that, but ultimately probably even more stuff already has a systemd service file. So if we build on that, and just allow people to take system services that already have a service file and turn them into something new that adds the isolation and the bundling, then I think it's a nice upgrade to what we already have rather than a complete revolution. Admins are used to services already; let's just make them more powerful. It's a question of the learning curve: the learning curve for adopting systemd was, I guess, steep originally for many people, but now we're there, so let's open this up and just add a little sprinkle of awesomeness on top to make it more useful for the normal admin.
And something else that I have in mind: a couple of DevConfs ago there was always talk about super privileged containers. Super privileged containers were about shipping highly privileged system components as containers. They wanted to use Docker for this back then, where they basically turned off all the security stuff so it would get full access to the host system. So you take this new tool and then you remove everything that you can remove and try to use it that way, because the only thing they were actually interested in was the bundling feature that containers provided. To me, this is the perfect use case for portable services, because it allows you to bundle stuff up, but, because it is a regular system service, you still get the full integration with the host system, exactly up to the point where you want it. You can isolate as much as you want or integrate as much as you want; you pick the sweet spot that is exactly right for your service. So if you, for example, hack on storage stuff, which is one of the candidates for super privileged containers, you can decide: okay, I want to ship my own libc and whatever else, but I also want access to udev or whatever of the host. With portable services this is kind of natural, because you just write it down in the service file and that's how it works.

And another reason why I think portable services matter: as mentioned, containers are to some level about isolation, isolation from the host, and that is good in many cases. But, as in the super privileged container use case, if you actually have highly privileged stuff, then the integration might actually be a good thing and not a bad thing, and you shouldn't be fighting a technology that is all about isolation just to remove the isolation again. So, really depending on your use case, the integration is often good and not bad, and hence we should emphasize it and make it easy, so that people who want to build their stuff this way, and many people do, can.

So much about the use case. I hope that made sense, and again, if anybody has any questions, totally ask. It can't be that you guys don't have any questions. Okay, let's get a little bit more technical. I kind of indicated this already: the goal really is that sooner or later, well, not sooner or later, we actually already do, we support three service formats: classic System V services, native services, meaning classic systemd services, and these new portable services. What's also important to note is that the building blocks that the System V services, the native services, and the portable services are built from are actually completely generic. While these three are the ones we support natively in systemd, you can use the same building blocks to build completely different concepts. For example, you could probably write a generator that runs snaps or whatever else as system services, if you liked to do that. So the emphasis is: we want to support these three out of the box, but also be so modular and so generic that you can build any kind of container or service management thing out of it that you like.

Let's now talk about bundling, specifically disk images. For containers there's now this OCI stuff; these are new specs that people wrote, they use all fancy JSON stuff and things like that, and it's all great.
With portable services I want to provide bundling, but I also do not want to write a new specification. So the emphasis of the portable services stuff is on not doing that. Portable services do not introduce any new format; let's avoid defining something new. Instead, we just use simple directories, or GPT disk images containing a squashfs, for example. The emphasis being: these are formats where nobody defined anything new; these are the formats the Linux kernel natively understands. If I unpack a tar, it's just a tree in the file system, and at that point it's good enough for portable services. It already is one; it doesn't need any additional metadata. Similarly, if I have a disk image with GPT on it and some file system the Linux kernel supports, such as squashfs or ext4 or whatever you like: good enough for portable services. That's all it needs. So the emphasis really is that there's no new concept, no complexity in there anyway. It just uses formats that have been established for 20 years; nothing of this is in any way new, it just uses concepts that have already been here.

The idea is then to run system services directly from them. In systemd there is RootImage= and RootDirectory=. RootDirectory= is just how we expose classic chroot. RootImage= is like it, but instead of specifying a directory the service runs from, you specify a disk image. The disk image is loopback-mounted, the file system in it is detected, and then it eventually does something like a chroot. Actually it uses file system namespaces, but that's an implementation detail. So the disk images are not anything new; you can create them with all the tools you already know, with fdisk, with the ext4 formatting tools, whatever you like. If it's something the Linux kernel understands, it's fine. There is no additional metadata.

So it's ultimately chroot, but usable. Some people, as mentioned, did use chroot before containers were there, but it was a hugely manual process and had a lot of shortcomings; for example, the password databases and things like that were out of sync, so it was always quite problematic. With portable services, we tried to fix many of these aspects that made it so problematic to use.

What's also interesting, by the way: because we use disk images, the stuff the Linux kernel already supports natively, we can also take advantage of all the fancy stuff the kernel supports there. For example, a portable service in systemd can be a LUKS-encrypted disk image, in which case you suddenly have a service that only becomes accessible when it's activated, and all the data it stores internally is not leaked to anyone. Or, even better, and this is already supported in systemd, you can have a verity-enabled disk image. Verity, for those who don't know, is this scheme Google came up with for their Chromebooks. It basically allows you to say that every access to some file system is verified on access, and if it doesn't match a top-level hash that is ultimately preconfigured, the access is denied.
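To make the plain, non-fancy case concrete, a minimal sketch; the image name, paths, and foo-daemon are invented, while mksquashfs and RootImage= are the real tool and option:

    # pack a prepared root tree (the daemon binary, its libc, everything
    # it needs) into a squashfs image the kernel can mount directly
    mksquashfs rootfs/ /var/lib/portables/foo.raw -noappend

The service then just points at the image; systemd loopback-mounts it and runs the service from inside:

    [Service]
    RootImage=/var/lib/portables/foo.raw
    ExecStart=/usr/bin/foo-daemon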
The use case Google had for verity, of course, was Chromebooks: a completely secure system where the whole OS is validated by the hardware, and the hardware will only boot Google's versions of the OS. We can use it here, because it's already in the Linux kernel, and apply it to portable services, because then we can basically say that this system, for example, shall only be able to run services that are signed, and validated on access, by Red Hat or someone else. But this is just a side note; the fact that by sticking to Linux kernel concepts we get these abilities is kind of nice, but it's not core to the idea of portable services.

Now, since I keep emphasizing the fact that there's no new metadata attached, you wonder: but it needs some metadata; I'd like to know what is actually supposed to be started from it. The logic here is: we already have lots of metadata in the file system anyway, because that's what a Linux operating system is, so let's just use it. Specifically, OS images already carry systemd unit files, so let's just use them. And Linux file systems, regardless of whether you created them for RHEL or for Debian or for Fedora, it doesn't really matter, also have this /usr/lib/os-release file that describes the installed distribution, or even the image, in further detail. So, taking advantage of the fact that a lot of metadata is already implicitly embedded in all the distributions released in the last five years, we don't need to define any new metadata. So now these two things come together: no new file system format, no OCI layering, tarball or AUFS unpacking, whatever; and no new metadata on top of it, because we already have all that in os-release and the unit files. Put this together and you suddenly have something that is like containers, but not containers, and a lot more low-level and a lot more fine-grained. Okay, any questions so far about that? Can't be that nobody has questions.

Otherwise, next detail: sandboxing. This was on the initial slides; I listed sandboxing as one of the three key goals of this. What does it effectively mean? Over the last couple of years, we added lots of sandboxing functionality for system services. A couple of them are listed here; I'm not sure I want to go into too much detail. These settings you can already turn on for your regular system services, and you have been able to do that for the last five years or so. Some are newer, some are older, but most of them have existed for a couple of years already. For example, PrivateDevices=: it's a boolean, and if you turn it on for a service, it gets a private version of /dev; if you turn it off, it gets the host version of /dev. PrivateNetwork=: also a boolean. If you turn it on, you get your own private networking environment where you only have loopback; if you turn it off, you get the host networking. And there are a couple of others like this. One particularly interesting one is DynamicUser=, but I think I have another slide about that. There are a couple more of these; I'm not sure I want to go into all the details of what they all mean, so read the man pages if you're interested. And a couple more we want to add as well, like ProtectKernelLogs=. Actually, the ProtectClock= thing has been added now, so I should probably remove that from the slides, or move it up on the slides; and there are others that protect certain other facets.
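Spelled out in a unit file, this family of settings looks like this. A sketch: foo-daemon is made up, the option names are the real systemd ones:

    [Service]
    ExecStart=/usr/bin/foo-daemon
    # private /dev with /dev/null, /dev/zero, /dev/random and friends,
    # but no real hardware
    PrivateDevices=yes
    # private network namespace containing only a loopback device
    PrivateNetwork=yes
    # private /tmp and /var/tmp, removed together with the service
    PrivateTmp=yes
    # mount the file system tree read-only for this service
    ProtectSystem=strict
    # the service and its children can never acquire new privileges
    NoNewPrivileges=yes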
These options, by the way, are things that you, if you're a packager for Fedora or RHEL, should make use of already. Because regardless of whether you buy into the idea of portable services or anything else, if you write a service that isn't using these, you're writing a service file that is needlessly unsafe. Most services, for example, don't need the privilege to change the system clock; there's basically only the NTP server, and maybe a couple of others, that want to change the system clock. So if you run your Apache or nginx or whatever with the privilege to change the system clock, you're doing something wrong.

So portable services don't really add anything new to the table regarding sandboxing. The sandboxing has already been there; portable services are just about taking what's already there and putting it together in a new and nicer-to-use way. Which is, by the way, the general model of this. What I talked about earlier regarding bundling, nothing of that is new either. The RootDirectory= stuff, the RootImage= stuff I was talking about, these two settings that let you run a program from a chroot-like environment: they have been around in systemd since its beginning, basically. And as mentioned, chroot is a system call that has been around since the beginnings of Linux, from 1994 or something like that, and on Unix even longer.

Then, other stuff that is really interesting: per-service firewalling, which we recently added, and the accounting. Firewalling and accounting are on one hand resource management and on the other hand access control, so I put them up here as well. In systemd we recently got this, and we'll probably have more of it very soon: you can basically say, this service shall be able to reach that IP range, but nothing else.

One important thing to underline: these options have all been available for a long time, as mentioned, but they are not enabled by default. They are not enabled by default for services for historical reasons: because System V init didn't have them, and because the initial systemd versions didn't have them either, we can't default to them, since if we did, all the service files written in the last five years would suddenly break. So for regular services they are, so far, opt-in. Which is unfortunate. We try to do something about this: you might have seen the systemd-analyze security tool we added, where you point it at a unit file and it analyzes it and suggests a couple more settings you could turn on. It's a lovely tool; you should totally all use it. But with portable services, being a slightly new concept, we have the chance to turn things around from the beginning: for portable services, sandboxing is opt-out, not opt-in. As much as possible of this is turned on by default, and if you want a specific piece of integration, say you actually need full access to the system clock, you have to go in and turn the ProtectClock= thing off, roughly as in the sketch below.
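For instance, with an invented unit name but the real command:

    # score a unit's exposure; prints a weighted checklist of these settings
    systemd-analyze security foo.service

And the per-service firewalling, plus the clock opt-out, as plain unit settings; a sketch of the shape this takes:

    [Service]
    # default-deny IP traffic for this one service only
    IPAddressDeny=any
    # except the one range (and loopback) it is supposed to reach
    IPAddressAllow=192.168.1.0/24 localhost
    # the opt-out case from above: this service really may touch the clock
    ProtectClock=no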
Okay, so much about sandboxing. Any questions about this? Nobody has questions. Okay. With this whole approach there are a couple of hard problems. The first one is users; I already mentioned this. How much time do I actually still have? So, we had this goal of leaving no artifacts, and one of the artifacts I mentioned was system users. Just putting a stronger emphasis on RootDirectory= and RootImage= is not going to magically solve this hard problem of system users. So in a systemd release about a year old or so, we introduced a new concept called dynamic users. Dynamic users are basically system users that are allocated implicitly the moment a service starts up; then they are available and the service code runs as that user, and the moment the service shuts down, the user is removed again.

If you listened closely to what I said earlier, this sounds problematic at first, because file ownership is sticky on Unix: if you create a file somewhere as one of these dynamic users, we basically have a problem, because the ownership will be sticky, and when the service shuts down, the file will still be owned by that user. Our way out: we simply prohibit that. If you turn on DynamicUser= (it's a boolean for regular system services; again, this has existed for a while, and you don't have to use it with the portable services concept, you can use it in a classic context, no problem at all), what happens is that not only do you get a dynamic user ID assigned for as long as you run, but you also automatically lose write access to the entire file system, with the exception of /tmp and /var/tmp, where you get your own version whose life cycle is bound to your own lifetime too. This is where we bind the life cycles. Plus, there are a couple of directories that you can explicitly list in the service file with the StateDirectory= setting. These are basically directories in /var/lib whose ownership is changed to whatever the service runs as the moment the service starts. And if, from a previous run, these directories have a different owner than the dynamic user ID we are just about to assign, they get recursively chown()ed. It's not pretty that we recursively chown(), and I would prefer if we didn't have to do that and the kernel gave us better options than this, but it's actually not that bad, because recursively chown()ing a huge directory tree is still surprisingly fast.

So the solution to the problem of sticky files is: let's figure out very precisely where the service shall be allowed to write, prohibit writing everywhere else, and make sure that these few areas where it is allowed to write are either deleted when the service goes down, or isolated from the rest of the system so that the rest of the system never has access to them, which takes care of the weirdness of reusing UIDs, with the ownership changed at the next startup if it doesn't match the service's user. I hope this made some sense.
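In unit terms, the whole mechanism is just these two settings; the service name is, again, made up:

    [Service]
    # a UID is allocated when the service starts and released when it stops;
    # write access to the file system goes away, except for the spots below
    DynamicUser=yes
    # /var/lib/foo, chown()ed to whatever UID the service got this time
    StateDirectory=foo
    ExecStart=/usr/bin/foo-daemon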
There must be people who have questions about this, no? Okay, so the question was: if I have a service like this, which runs as a dynamic user, and you want to do something like escape from the sandbox to execute operations on the system, how can you do that? The answer is: it really depends. Ultimately this user is like a normal user. If you have a system user in the old solution or a dynamic user in the new solution, the difference isn't that big. You can still use sudo if you like, but you also cannot; it's up to you. For systemd's own APIs, because you explicitly asked about starting another service and things like that, there's PolicyKit. PolicyKit is actually not very good with dynamic users right now, and we still need to figure that out in detail, but the essence really is that what applies to static users also applies to dynamic users. It's also important to note that dynamic users can have any name you like, so you can actually reference dynamic users in policy files if you feel like it. But admittedly it's not all pretty, because right now the policies for PolicyKit, for example, are installed at the system level. So if you want a dynamic user to be able to start a service or something like that, you actually have to drop a policy file into the system, and while this doesn't directly conflict with the goal of not leaving artifacts, it still gives me a bad feeling: ideally, the policy saying that you're allowed to do this should be in the image of the portable service and not on the system. But I think it's still usable, just not as pretty as I would like it to be. And yeah, we certainly could do better. I don't have much time, but if nobody has any questions, then we'll just continue with the slides, I guess. I hope this was an answer to the question.

Okay, another thing is the user database mismatch; I already mentioned that briefly before. If you use classic chroots, then /etc/passwd on the host and in the chroot are definitely going to differ, because user IDs are assigned dynamically, and if you install five packages in one order in the chroot and twenty packages in another order on the host, then the UID numbering will not match. This is a big problem, because it basically means that if you otherwise integrate with the host system, suddenly something that is called nginx, the nginx user in the container, matches the MySQL user on the host or something like that, and then they suddenly get access to stuff they shouldn't have access to, can kill processes, and whatever else.

The solution we came up with for systemd is something called PrivateUsers=. It's also a boolean. What it does is use user namespacing, for those who know the concept, and it does something very, very simple: it maps the root user to the root user, it maps the user the service runs as to the user the service runs as, and everything else is mapped to the nobody user. The nobody user is a special user that has been around on Unix since ages and has the UID 65534 (it's -2 in 16 bits), and the nobody user is basically the catch-all user that everything else is mapped to. So the code that runs inside the service sees that all the objects that have ownership (files, IPC objects, these kinds of things, processes) are owned by one of these three UIDs: its own user, root, or nobody.
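As a unit setting, this is again just a boolean. A sketch, with the same made-up daemon:

    [Service]
    DynamicUser=yes
    # user namespace: root maps to root, the service's own user to itself,
    # and every other UID shows up as nobody (65534)
    PrivateUsers=yes
    ExecStart=/usr/bin/foo-daemon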
Now, the nice thing is that two of these three are pretty much the only UIDs where everybody can agree on what they mean. If you have a Debian machine, an Ubuntu machine, a Fedora machine, a Solaris machine (well, okay, Solaris maybe not so much), at least all the Linux machines agree that root is 0 and 0 is root, and they agree that there is a nobody user and it has 65534. I'm lying a little bit there, actually, because on Fedora, for historical reasons, that nobody user is actually called nfsnobody, and I really wish it weren't, but that's how it is. What, it has been changed? Okay, okay, then ignore what I said; everything's good now. So our hack around the mismatch of the user databases is making the user database irrelevant by removing pretty much all the entries. I mean, the entries will still be there, but you will not see any object owned by anything but these three UIDs that you can synchronize on.

There are a couple of other issues. D-Bus, for example, doesn't really like it if users are registered temporarily and then go away; it's something we have to deal with, but it's not as bad as it sounds. It's kind of like the PolicyKit thing: in PolicyKit, if you want to reference a user, you have to write it down in the policy, and D-Bus also has policy like that. We're working on getting this fixed, but it's nasty. Any other questions at this point? In theory we now have fifteen, well, probably ten, maybe seven minutes for questions. It's completely okay, by the way, if I don't finish all the slides; these slides are just to fill time. I hope you already got the gist of the idea of what we're doing here, but if really nobody has any questions, I'll just continue. You have a question?

Yes, that's in Fedora. What's the most recent Fedora? Okay, that's where it's supported. But this is relatively recent stuff, and in the second-to-last version of systemd upstream we still changed a couple of things; in one year's time this should all hit your distribution in one way or another. So the question was which systemd version this is supported in. We added it as a preview in one version, which was 237 or so, but in the preview version the main binary that you talk to, this portablectl tool, was installed under /usr/lib, out of view of $PATH; the idea was that as long as it's not officially supported, it isn't accessible. In the next version, 238 I think, we moved it to /usr/bin, making it an official API and fully supporting it. Actually, nothing really changed in between; we just weren't sure yet whether it was stable, and it ultimately turned out to be stable. By the way, yeah, I presume so; yeah, it should be.
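For reference, the round trip with that tool looks roughly like this; image and unit name are invented, the commands are the real ones:

    # attach: the unit files the image carries become visible to the host systemd
    portablectl attach /var/lib/portables/foo.raw

    # from here on it is controlled like any native service
    systemctl start foo.service
    journalctl -u foo.service

    # detach again; combined with DynamicUser= etc., nothing stays behind
    portablectl detach foo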
I mean, it's a relatively minor component. One of the key things we wanted to get across is that nothing of this is really new; this is a giant copy-fest, if you will, of ideas other people came up with and technology that already pre-existed in systemd, just pulled together in one new tool, this portablectl, to make it easy. The disk format is nothing new we came up with; the service file format is nothing new we came up with, it's the old one as before. Nothing of this is new. It's really just about giving a nice, integrated command line to lots of stuff that was separate before, and moving classic system service management to the next level in this regard.

Okay, then there are more questions. So the question is: would you still install portable services via RPM? My answer is: I don't care. This is outside of the scope. I had this other slide earlier: in scope is, at most, simple delivery, and it's only simple delivery I care about. How the images get onto the system is completely up to you. You can package them as RPMs if you feel like it; you can, for example, package inside an RPM a subdirectory in /opt or wherever you want, where you get the full directory tree. It's completely up to you; I don't really care about this. What I care about ultimately is that it is available in one of the formats the Linux kernel supports natively, meaning as a directory or as a loopback file that we can just mount. And the loopback mounting is completely automatic; you don't even have to know that there is loopback mounting or anything involved. People can put tarballs up on the internet, people can put compressed squashfs images on the internet, it doesn't matter; they can download them with wget, they can deploy them with whatever they want. We don't care: as soon as it's there, we will run it as a portable service. The same way as for classic services, we don't really care whether you package them up as a tarball or as an RPM or as a deb or whatever else; systemd is not interested in that question.

So the question is: what's the benefit compared to Podman and the like? I mean, my slides mentioned Docker-style a couple of times, and that's kind of what Podman is. They're containers, and the first part of the talk was exactly about trying to draw the line between classic container management and what this is. Podman is in many ways just a more modern version of Docker, if you will; the concepts are the same, you get the same isolation and these kinds of things, and ultimately the deployment through Kubernetes and whatnot. This stuff here tries to focus on a lower level. If you actually want to put together a web service of some form, you probably (I mean, you can do it with portable services, and sure, I invite you to) don't want to do it this way; you just want to use the container stuff. This stuff is for everybody else who needs more integration with the operating system. For example, super privileged containers: you do your storage stuff and you want lots of access to the operating system, but also isolation up to some point. Or if you do embedded stuff, for example; there are lots of embedded people doing this. So it really depends on your use case. This is supposed to be a completely generic tool, and we invite everybody to use it for
everything they want to use it for. But, you know, Kubernetes and these things are huge infrastructure; it's already there, so use that if that's your use case. This tries to cover slightly different use cases, even though it could probably also cover many of the classic Podman use cases, just many of them, but I don't know, that's not the competition I want to be in. I'm looking for something else, because, you know, I've been listening to what's going on in IT, and I know that a lot of people, particularly embedded people, end up running Docker on the embedded device, on the ARM device, because they want to have some kind of image format. And then you just ask: why do you do this? They don't even have the web or anything on there. So this is supposed to cover that. But I just see that my time is up. Oh shit, yeah, I was supposed to... so, to make Ben happy.