Hi, I'm Lennart Poettering, I work on systemd. I'm going to talk about portable services, which are ready to use now. Portable services are a new concept in systemd, and I'd like to introduce you to it. I did this talk before, by the way. You might have seen it if you attended DevConf, or at All Systems Go. There's not going to be much new in this if you saw it there, so it would probably be cool if there are still people outside. Are there? Yeah, there are still people outside. If you have seen it, you won't miss anything. But apparently nobody here has seen it yet. If you have any questions, interrupt me right away. I much prefer my talks to be conversations instead of just a Q&A at the end. So do not hesitate, interrupt me. I love that. Yeah, let's jump right in. Portable services, what are portable services? You can see them two ways. One way to see it is that they are system services with some container features, but you can also see it the other way around: they're kind of like containers, but with some system service features. What does that mean? First of all, we have to understand what containers are. I don't really know what containers are. Different people have different ideas about what containers precisely consist of. The definition I tend to agree with is that they combine three concepts. One is resource bundles: that's what you do with containers, you pack everything up in one bundle, the binaries, the shared libraries and all the dependencies. Then there's isolation, meaning they generally run in some form of sandbox. It might be a stronger one, it might be a weaker one, but there's at least some form of isolation generally. And delivery: you deliver them onto the server you want to deploy them on, and then you can run them there. These three concepts are what I, at least, find most interesting about containers.
I'm pretty sure that other people who care about containers would probably list more than that. Portable services try to take some of these features and add them to classic service management. Specifically, portable services are about making resource bundles available for regular system services. They are about integration: if you look at the previous slide, I had isolation there; I put integration here, because that's what system services really are in comparison to containers, generally much more tightly integrated into the host system. And sandboxing. I do not put delivery on this slide because I don't really care about delivery. So the difference here really is: I don't care about delivery, and instead of isolation, I put integration and sandboxing here. Sandboxing for me is slightly different from isolation, because the way I see it, isolation is really about creating a new world that is separate from the host you live in, while sandboxing, the way I see it, means you're still living in the same world, but you can't do everything you want to do. Another key aspect of portable services is that they're supposed to be highly modular, which is in opposition to classic container management, where you generally have to buy into the whole idea. Some people don't do that, and ended up, for example, introducing the concept of super privileged containers, which are basically containers where you threw one half of the concept away by turning off all the isolation features. Portable services are not supposed to be like that. Portable services are supposed to be modular: you pick exactly what you want. Do you want the resource bundling? If you want sandboxing, you pick exactly how much isolation or integration you want. It's really supposed to be modular and very, very fine-grained, and it doesn't require you to buy into the whole idea, only into the parts you want.
Another way to look at it: consider a range from integrated to isolated. If I were a good graphics artist, I would have drawn a graph here, but I'm lazy, very lazy, so I just put a couple of words here with an arrow in the middle. Think about a range from integrated to isolated. On one side you have the classic system services, System V services or systemd services. They tend to be very well integrated into the host: they live in the same world, they see the same network interfaces and the same file systems, they can create file systems, mount them and do things like that. They see the same users, they see everything, because they are part of the host and they have full integration. On the other extreme, there are VMs on KVM. They tend to live in their entirely own world; they could be living on a different host, after all, and the way they communicate with the rest of the stuff that runs on the local system is actually across the network. So you get maximum isolation. Docker-style microservices are probably somewhere in the middle; it's not entirely clear whether they're more like system services or more like VMs, so I put them in the middle of my little range. Full-OS containers like LXC generally try to be something very similar to KVM: they expose a system that runs an init system inside, something you can SSH into, which is probably not something you would do with Docker containers. And now the concept that I'm going to introduce is portable system services, which I put close to classic system services, but somewhere on the range that goes towards Docker. So that's just to position this on a one-dimensional axis of integration versus isolation. Now, think about what's actually shared and not shared with these forms of virtualization. Classic system services obviously have shared networking; they don't configure their own.
If I install nginx as a System V service or as a systemd service on my system, I don't configure networking explicitly for it. It just uses the host's networking. On the other extreme, of course, it's completely separate: a VM has to run its own network management solution inside for things to work. If you go along the axis, Docker-style microservices generally do not configure their own networking, but they also don't fully share it with the host, so you have something relatively in between; it really depends on how you configure things. LXC is also a little bit vague here. Then think about file systems. Generally, Docker-style microservices do not share the file system: they have a different root, so they are relatively separate in this regard. Classic system services, of course, are fully integrated; they see the same files and directories as everybody else. VMs live in a completely different world: they even have their own block devices, mount those block devices and see something the host cannot even see. The same goes for PID namespaces. PID 1 in a classic system service is the same as on the host, because the service doesn't live in a new PID namespace; the PID 55 that a classic system service sees is actually the same PID 55 that's running on the host. In Docker-style microservices and everything to the right of them, that's not the case: they generally live in their own little world with their own PID namespaces, and PID 1 inside of them, or PID 55 or 77, is going to be something very different from what the host sees. And in VMs, of course, PIDs are separate too. So you see, for PID namespaces the boundary is somewhere there. The init system, it's kind of similar. Where is the cut there?
Classic system services, of course, share the init system with the host; I mean, that's what started them. Docker-style microservices are kind of weird in this regard, because they generally don't have an init system: there's no init system visible from inside the containers, but they also don't have their own. If you do LXC, then yes, you generally have an init system inside the container. People can disagree with me on this, you can configure things differently, but I'm trying to position this for the general case, how people tend to actually use these things. Device access: with classic system services, you generally have raw device access. You can access the block devices, or the sound card, or whatever physical devices your computer has, directly, because you're living on the system. This is very different in VMs, of course; VMs are generally completely isolated. Yes, I know you can do pass-through and things like that, but that's magic manual work to make it happen; it's not how things work out of the box. Logging, on the other hand, tends to be much more integrated: even Docker-style microservices tend to provide the logging, and it's not done by the payload itself. So the point of this slide is that even though the axis suggests a linear axis of integration, it's actually more multi-dimensional: depending on what you look at, the cut between integration and isolation is more to the left or more to the right. I hope this makes sense so far. Okay. Then, portable services. One of the goals we had for portable services is: leave no artifacts. In containers that's kind of a given, but for system services it's not. What do I mean by that? On a Linux system, you install nginx or MySQL or whatever package you like on your server.
Your package manager, down in the depths of RPMs, installs them, and then you remove them again. This leaves artifacts around, major artifacts. For example, Unix users: you generally cannot sensibly delete system users, or any kind of users actually, because if the user created some directory or some file and you remove the user, those files will still be owned by the user ID of that user. File ownership is sticky, and if the user that the file ownership refers to doesn't exist anymore, then you have a problem, and if the UID gets recycled later on, you have a security issue. This has been a problem with Unix since forever, and most distributions generally never delete users. So that basically means: you install nginx once, you remove it again, and that UID is used up for good. With portable services, our attempt was to fix that problem and provide a way that user IDs can be used but are inherently transient. We'll talk later in more detail about what this specifically means. But it's not just about system users. There should be no artifacts left around at all: if you ran a service and you remove it again, temporary files, for example, should be gone as well. So it's about binding life cycles: when a system service starts up, it can allocate a couple of resources; when it shuts down, they are released, and unlike classic system services, we don't leave stuff around. Another goal of portable services is to have everything in one place. This is the bundling thing. It's not a new concept, of course, because chroots have existed in Unix since forever. In fact, you can summarize the whole of portable services as: making chroots useful. But yeah, it's one of the key goals. You know, I'm a service guy, of course, because I've written most of systemd.
So for me, doing chroots is awesome, but I want to make them available through the service manager, just like a regular system service, in a very powerful way, and that's what portable services are. Another goal is that I want this new concept, which adds this couple of container features to service management, to act a lot like a native service. Because I want it to actually be a native service. By native I mean a regular systemd service, one that has a .service file and so on. So the ultimate goal is that if portable services are used, you end up having an init system that supports three formats for services: the native ones, classic systemd service files; the old System V init scripts; and these new portable services. If you're using systemd, you've already noticed that for System V init scripts and regular systemd services the behavior is the same: you do systemctl start and stop on them, you can assign resources and see their logs; the distinction is effectively removed. The idea with the portable service concept is to do the same here too. Okay, so much about the goals and the positioning of this new concept of portable services. Let's talk about the why, because that's a question we always have to ask. I already mentioned that I'm a service management guy, but we don't live in a vacuum; things happen around us. Containers happened, of course. They have a lot of mind share, and they are, in a way, a form of service management, but one very much removed from the core system. There are definitely a couple of good ideas in there that I think make a ton of sense to take and apply to system service management as well.
It's not about coming up with anything new; it's about looking at what's good out there and figuring out whether it's something we want for regular service management as well. And I think the bundling and the sandboxing are, so let's apply them there. Also, what's really interesting to notice: at this point in time, pretty much all packaged services that are relevant tend to have native systemd service files. So most of the stuff that already exists out there has these service files, and that's actually kind of cool, because it allows us, if we add a little bit on top, to do something that goes in the direction of containers without defining any new kind of metadata, because we already have the service files, and pretty much everything has them at this point. Another thought: containers are a separate world where you use different tools. Admins are generally used to system services already, so maybe we can just make those more powerful in some regards, because some of the features you want from containers you can just make available for regular system services as well. One primary use case for portable services, and just to make this clear, this is not an attempt to reinvent containers or anything like that. I explicitly want to position it as something more low-level than that, for use cases where containers might not be the most appropriate way to do things. If you want to use containers for something and containers are the right choice for what you do, continue doing that; this is supposed to be a little more low-level. So one primary use case is what people have dubbed super privileged containers so far. For example, storage people like to do this: they want to ship a large, complex stack in one image onto your server machine.
That's why they want to use containers, but on the other hand, they need really strong integration into the whole system, because they need to do device management, like block device management. They need to figure out what's being plugged in, what iSCSI does, and whatever else. So they are in this weird position: they would like to ship a large, complex software stack with its own dependencies, because it's all far from trivial, onto existing machines, but they also want full integration into the host, because they do device management, and for device management you really, really need that. So inside of Red Hat there were these people working on super privileged containers, and when I saw that, I said: this is horrible. They ended up using Docker initially, and then they turned off all the sandboxing features and created all these bridges so they could escape from inside the Docker container to the host, so that they could do their manipulations on the devices and everything like that. Portable services are supposed to cover that use case properly: you can have your bundling and all these kinds of things, and you can pick exactly how much you want to see of the system, or how much you want to be isolated from it, without it being completely and terribly ugly. Also, one of the key ideas here is that integration is often a good thing, not a bad thing. Containers and all these things are very strongly about isolation; I think in many use cases you actually want the integration. Not in all, but in many. It really depends on your use case. Integration, and the ability to introspect the rest of the system, particularly for tracing tools, debugging functionality, metrics and these kinds of things: it's a really, really good thing if you do not have to first play games with the sandboxing to escape it. Okay, I kind of mentioned this already.
One of the goals is that System V services, native services and portable services are supposed to sit next to each other, equally well supported, with the same interfaces, the same behavior, the same resource management, same everything. The building blocks this is all built from are actually pretty modular. The portable services code base is relatively separate: when we added it to systemd, the portablectl command that makes all of this available didn't actually require any changes in the systemd core itself. It just added a new utility that lets you interface with the existing sandboxing options and bundling options in a nicer, more integrated way. Because it is implemented that way, it would even allow you to come up with your completely own service delivery framework, use all these basic concepts, and build something completely different. For example, people have been working on making OCI images work by translating them dynamically into native systemd services, and things like that. Any questions so far, by the way? Nobody has interrupted me about any of this stuff. Okay, let's talk a little bit about disk images. When we think resource bundling, we have to think about disk images in some form. Docker, as you know, uses tarballs and then weird layers and AUFS and these kinds of things. One of the goals of portable services was: no new metadata. I didn't want to sit down and come up with a new OCI spec; I have no interest whatsoever in that. I didn't want to define a new image format or anything; I have no interest whatsoever in that either, and it's a massive, political job. So with systemd portable services, the key really is: we use the metadata we already have.
Specifically for disk images, this means: we don't care how the bundled images come onto the system; as long as, at the moment you actually want to start them, they are accessible as a file system the Linux kernel can access in some way, we're happy. What does that mean? You can ship things as tarballs if you like, unpack them, and then use systemd portable services on that, but you don't have to. You can also use a disk image, like a block device image that you can mount. The systemd portable services concept supports both equally. The only thing we require from people is that they provide images either on the block layer or as an unpacked file system tree, and in both cases it needs to be something the Linux kernel natively supports. Yes, that's kind of the point I was making: I don't really care what you use, and we explicitly support both block-device-level stuff and unpacked tarball kind of stuff, partition tables and all. Oh, I'm supposed to repeat the question, right? The question was whether whatever is mounted as a disk image has to be a proper block device, or if it can be just a regular file. The idea is that it's just a regular file. It can be a block device, but normally it's a regular file, and systemd will internally do the loopback mounting and stuff like that, so you will never actually see that there are loopback devices and block devices involved. Generally, either you take a tarball, uncompress it, and we operate on the file system level, or you provide us with, say, a SquashFS file, and systemd will set it up as a loopback device and mount it, and from that point on it's kind of the same thing. The next question was whether you can point it at a folder, just say this folder is the file system to use. Yes, that's the tarball option I mentioned.
The fact that a tarball was used to get it there, I don't give a damn about; as long as it's there in a form the Linux kernel natively supports, which could be a directory, or could be something I can mount, I'm happy. The next question was whether I know of anybody using OSTree with this. I've heard of people who were interested, but I didn't follow up in detail. Of course you can totally use OSTree, because, as mentioned, if it's a directory, it's good enough for us, and OSTree stuff, after you check it out, is just a directory, so all is good. The next question, I guess I can summarize it as: the relationship to OSTree, Snap and Flatpak. The key really is that this stuff is system-level stuff. Flatpak is not system-level stuff; Snap is, though, and in its design this is actually very close to what Snap does. But I don't really care about the disk images, as mentioned; you could even use a Snap disk image if you want. My focus really is, after the image is there, how to make the stuff inside it available as a regular system service, and that's what those projects generally don't care about or want to provide. So Flatpak: different story, desktop stuff. This stuff requires privileges. People have asked about making it available unprivileged, but that's difficult, because at least if you use disk images, it's all about mounting and loopback devices, and none of that is available unprivileged. I hope that answers the question a little. So the key here really is: let's not invent something new. I don't want to be in the business of defining image file formats. Let's just take simple directory trees, or better, a btrfs subvolume, or maybe a GPT disk image containing a SquashFS; but actually we don't really care whether it's SquashFS, and we don't care whether it's GPT either. I just think that's a nice way to do it.
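To make that flow concrete, here is a usage sketch of the portablectl commands involved. The image path and the service name are hypothetical, chosen only for illustration; the commands need root privileges and a systemd with portable services support:

```shell
# Attach a portable image: systemd inspects the image, finds the unit
# files inside it, and makes them visible in the host's unit search path.
portablectl attach /var/lib/portables/myapp_1.raw

# From here on the service behaves like any other systemd service:
systemctl enable --now myapp.service
journalctl -u myapp.service

# Detach again when done; the units disappear from the host.
systemctl disable --now myapp.service
portablectl detach /var/lib/portables/myapp_1.raw
```

The same attach command also accepts a plain directory tree instead of a raw image, matching the "tarball option" discussed above.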
The services run directly from these images. There have been the RootImage= and RootDirectory= service settings in systemd for ages; we just make use of them here to say: run this service from this image. The moment the service starts, the image is mounted, if it's not mounted yet, or bind-mounted; the moment the service shuts down, the image is not used anymore. In a way, this is about fixing chroot. chroot has been around for ages, and some people have deployed things like this since the 90s, but chroot has a couple of serious problems. For example, one of the bigger ones, which we'll talk about later, is /etc/passwd, because you kind of have to synchronize the /etc/passwd on the host with the one in the chroot. We'll talk about what we're doing in this area. What's interesting, by the way: RootImage= is where you specify a SquashFS image, or really any kind of file system image the kernel can mount, and RootDirectory= is where you specify a directory. The RootImage= thing is actually kind of nice, because when we mount the stuff, we can take advantage of all the fancy storage stuff the Linux kernel has. Two things are particularly interesting here, I think. One: you can LUKS-encrypt an image, for example, and then just have systemd start the service from it, so you can have a model of basically encrypted services. And dm-verity, which I find even more interesting. dm-verity, for those who don't know, is a kernel concept whereby every access to the disk, or in this case to the image, is verified cryptographically, as the access happens, against some predefined hash value that can be signed.
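As a sketch, a unit file using these settings might look like the following. The service name, image path and binary are made up for illustration; RootImage= and RootDirectory= themselves are the real systemd settings mentioned above:

```ini
# /etc/systemd/system/myapp.service (hypothetical)
[Unit]
Description=Example service running from a bundled image

[Service]
# systemd loop-mounts the image when the service starts and
# releases it again when the service stops.
RootImage=/var/lib/images/myapp.raw
# Alternatively, run from an unpacked directory tree instead:
# RootDirectory=/var/lib/machines/myapp
ExecStart=/usr/bin/myapp
```

Only one of RootImage= or RootDirectory= would be used in practice; the commented-out line shows the directory-tree variant of the same idea.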
So with that, you can actually make trusted services, in a way, where you basically say: I have this computer here, and it will only run software that is signed by me. Then you deploy an image on it, and systemd will start it, but only if the top-level hash of that image matches, via some signature checking, so that only my stuff runs. I don't want to talk too much about that, because it's probably a talk of its own; I just wanted to mention that simply because we rely on whatever the Linux kernel provides us with, we can make these things happen, and we did hook these things all up. The question was whether this means we can use another distribution to run services on some host system, and yes, that's what it means. The idea is that, much like with containers, the distribution used inside of the image doesn't have to match what's on the host, and things should still work just fine, because the Linux kernel people might not be perfect at maintaining compatibility with everything, but they're pretty good. Okay. The only things an image needs to have to qualify as a portable service are that it carries systemd unit files and that it carries the file /usr/lib/os-release; that's it. It has to carry systemd unit files because we need to know what to actually start in it. As mentioned, it's kind of cool that all software that currently exists generally has these already. And the os-release file is something that has existed for five years already; we came up with it originally in the systemd context, but it has actually been adopted universally, even beyond that; even the systemd haters now ship this file.
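For illustration, here is a sketch of what an os-release file inside such an image might contain. The distribution values are arbitrary examples; the PORTABLE_PREFIXES field is, to my understanding, the extension field the portable service logic reads to learn which unit-name prefixes the image provides, but treat the exact value here as an assumption:

```ini
# /usr/lib/os-release inside a portable image (example values)
ID=debian
VERSION_ID=12
PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
# Extension field consumed by the portable service logic,
# declaring which unit name prefixes this image ships:
PORTABLE_PREFIXES=myapp
```

This is exactly the "just add a field" extensibility described next: no new file format, only an extra key in a file every distribution already ships.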
It's a very simple file that is supposed to describe the distribution you use, but it's extensible, so our idea was: we use it as the metadata, and if you want to declare what the image you have there is about, you just add a field there. So again, the key here is: no new metadata, no new formats. We use Linux file systems, we use systemd unit files, we use the os-release file; none of that is new. All of this has existed for at least five years, and in many cases for 10 or 20 years. Any questions about this so far? So the question was, if you don't want to use portablectl to start the portable service, whether you have to declare a host systemd service file to load the portable service, which then in turn carries the systemd service file. That was a fairly complicated question, and I would like to delay it to the end, if we have enough time; I'll introduce you a little bit to the command line later, which I think will answer your question. Somebody else had a question? Sorry, I didn't catch that one. Okay, then let's talk about that a little later. One quick slide. There's lots of stuff on it, but I don't want to go into too much detail. The point I want to make here is that systemd has had, for a long time, all these sandboxing options for system services. This is independent, like everything else I've said, from the actual portable service concept; these options have existed, some longer, some shorter, and they are generally ways to lock down your system services. Just as an example: PrivateDevices= is a boolean you can set on a system service; if you turn it on, it basically means the service gets its own instance of /dev that doesn't contain any real devices, only /dev/urandom, /dev/null, /dev/zero and these kinds of pseudo-devices that are Unix API or Linux API and don't actually map to real physical devices you could touch.
And there are a couple of other options like this; they're generally designed to be super easy, booleans or something very close to a boolean, for locking down your services. In the portable services concept, we just make use of the fact that this already exists. There's one major difference, though. You can turn these on for your classic system services already, but so far they're generally opt-in. That's something we would like to turn around, of course, but at this point we can't really, for compatibility reasons: system services have been around forever, they inherit everything from System V, and in the beginning system services did not have sandboxing; if we turned it on by default now, we would break everything. So for the classic stuff it's opt-in. For portable services we have the luxury, because it's a newly introduced concept, that it's the reverse: it's opt-out. By default you get a policy, and you can opt out of everything, and if you do, you get full integration into the whole system and can do whatever you want. These are a couple of the options we already have. I did talks in the past covering all of them in detail, because they actually fill a talk of their own, and there are going to be more. What's also really interesting to know about nowadays is the per-service firewalling, where inside the unit file you can do access control on the IP level, and things like that; there's IP accounting too, which is just awesome, and it fits into this whole sandboxing concept. I already mentioned that sandboxing is opt-out for portable services rather than opt-in. How much time do I have? Twenty minutes, okay. Let's talk a little bit about the hard problems. I kind of mentioned this already.
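A sketch of what this opt-in lockdown looks like in a classic unit file. The option names are real systemd settings from this family; the service itself and the chosen values are hypothetical:

```ini
# Drop-in sketch for a classic (opt-in) service lockdown
[Service]
# Private /dev with only pseudo-devices like /dev/null, /dev/zero:
PrivateDevices=yes
# Mount most of the file system read-only for this service:
ProtectSystem=strict
# Hide /home from the service entirely:
ProtectHome=yes
# Prevent privilege escalation via setuid binaries etc.:
NoNewPrivileges=yes
# Per-service IP firewalling and accounting mentioned above:
IPAccounting=yes
IPAddressDeny=any
IPAddressAllow=localhost
```

Each line is an independent knob, which is the fine-grained modularity the talk keeps coming back to: you enable exactly the sandboxing you want, nothing more.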
If you do classic chroots on Unix, you have the problem that the chroot environment generally doesn't see the user database of the host, and the chroot environment might have a different idea of what UID 500 means, or UID 1000. So most of the how-tos you find on the internet that tell you how to manually set up a chroot say: copy over /etc/passwd. Because portable services are ultimately just a way to make chroots useful, we tried to figure out what we can do in this area. For this, we added a concept called dynamic users. Dynamic users are particularly useful in the context of portable services, but you can already use them on your system independently. They're one building block, and the building block can be used in any context you like; it just happens to be one of the building blocks portable services are built on. What are dynamic users? Dynamic users are a concept where you can basically say, for a service, that when the service starts, a system user is registered, and when the service shuts down, it's released again. I already mentioned the problem with file ownership stickiness: when the service goes down and the user ID is released, what happens to the files the service created while it was running? Our solution is a couple of things. First of all, when the service writes something to /tmp or /var/tmp, it won't actually touch the real ones. Instead it gets its own fake little /tmp and fake little /var/tmp, which are backed by the real ones, but whatever the service writes in there is automatically removed when the service shuts down. This is what I call life-cycle binding: the life cycle of the temporary files is bound to the life cycle of the service, so when the service goes down, the temporary files go with it.
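A minimal sketch of a unit using this building block. DynamicUser= and PrivateTmp= are the real setting names; the service name and binary are hypothetical:

```ini
# /etc/systemd/system/transient-demo.service (hypothetical)
[Service]
# Allocate a throwaway system user at start, release it at stop.
# To my knowledge this also implies a private /tmp and /var/tmp,
# whose contents vanish with the service (life-cycle binding).
DynamicUser=yes
PrivateTmp=yes
ExecStart=/usr/bin/demo-daemon
```

With this, stopping the service leaves neither a user database entry nor stray temporary files behind, which is the "leave no artifacts" goal from earlier.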
So that's one facet of it, but it's not particularly useful yet. The other thing is: to deal with the sticky file ownership problem, our solution is to simply disallow the service to write anywhere. It's a nice way to avoid the problem of sticky files, by prohibiting files altogether. This of course limits the usefulness, because you then have a service that can write stuff to /tmp and /var/tmp but can't do anything else. That's good enough for some use cases, but most use cases want to be able to actually write stuff to disk — whatever they generate.

Our way out of this — and this is, again, also useful independently of dynamic users and independently of portable services — is a concept in systemd where you can specify a state directory inside of the service file. If you do that, it basically means that a directory in /var/lib gets chowned — the ownership gets changed — to the service's system user the instant the service is started. So the idea here is that systemd manages the ownership of specific directories for you, and the service then gets write access to them. This is a little bit ugly, right?
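A sketch of what that looks like in practice (again with a made-up service name):

```ini
[Service]
# Hypothetical binary:
ExecStart=/usr/bin/mydaemon
DynamicUser=yes
# systemd creates /var/lib/mydaemon and chowns it to whatever
# dynamic user the service was allocated this time around:
StateDirectory=mydaemon
```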
Because it basically means: you start up your service for the first time, systemd creates a new directory in /var/lib for that specific service, changes the ownership of that directory to the system user it allocated for you, then you run, you write some stuff to it, you shut down again. Now the user gets released; the data shall stick around, and it does. Then you start again — but now you might have gotten a different user ID, so what systemd has to do is recursively chown everything. That sounds horrible, but it's actually not as bad as it sounds, because Linux is very much optimized for that: even if you have a directory tree that's a couple of gigabytes, at least on my machines it never took more than a couple of seconds to chown recursively through it, and if it's smaller than a couple of gigabytes it's probably not even noticeable. Also, systemd tries really, really hard to assign the same service the same dynamic UID, by hashing it out of the name of the service and things like that. However, given that the UID space is a little bit too small, collisions will happen, and in that case it has to chown.

[Audience: so in the meantime, while the service is stopped — do you change the ownership to root, or...?]

That's a very good question. The problem is, of course: the service starts up, it has its own directory, then the service goes down, and now these files are still there, owned by this user ID that now ceases to exist. This is exactly the problem we always wanted to avoid. So what do we do?
We take a lesson out of how containers are managed. Containers are generally stored somewhere in /var/lib/containers or /var/lib/docker or something like that, and because they have the very same problem — they also have a concept of users that only exist while the container is running — the way they avoid it is that they have a top-level directory where all the containers are stored, and this top-level directory is not readable by anybody but root, basically. So they avoid the problem by adding a barrier in the middle, so that it doesn't matter if the files stored below are owned by a user ID that might otherwise be recycled — they simply cut it off in the middle and say: this entire subtree is not available to you. That's exactly what we do here.

Now, this is actually harder than it sounds, because we want to make /var/lib/foo available for a service foo. So what do we do? Do we change the ownership and mode of /var/lib itself to 700? We can't really do that, right? So this is actually tricky. In the background, what actually happens is: there's a directory /var/lib/private, and that one actually is mode 700 — I hope you can still follow this, without slides it sounds really nasty to follow. So that one is mode 700, and then a symlink is automatically created from /var/lib/foo into /var/lib/private/foo, to make it inaccessible from the outside. And from the inside — because the inside shall have access to this directory even though it's unprivileged — we hide /var/lib/private through bind mounts and instead mount the subdirectory at the top. I hope you kind of followed what I was babbling here. It took us a while to figure out that this is actually workable and nice. I'd really like to get rid of much of this code, and maybe one day we can, if we get shiftfs in the kernel — a file system layer that can actually shift user IDs. So far we can't. I tried to come up with anything better and talked to a lot of people; we couldn't.
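The layout described above can be sketched with plain shell, reproduced under a scratch directory. The service name "foo" is illustrative; on a real system this lives under the real /var/lib, the barrier is owned by root, and the bind-mount step (which needs privileges and is omitted here) additionally hides "private" from inside the service:

```shell
# Scratch area standing in for the root file system.
root=$(mktemp -d)

# The barrier directory: on the real system only root may traverse it.
mkdir -p "$root/var/lib/private"
chmod 700 "$root/var/lib/private"

# The actual state directory lives below the barrier (the chown to the
# dynamic user is omitted here, since it needs root).
mkdir "$root/var/lib/private/foo"

# A symlink makes it show up at the expected location, /var/lib/foo.
ln -s private/foo "$root/var/lib/foo"

ls -ld "$root/var/lib/foo" "$root/var/lib/private"
```

Because the 700 barrier sits in the middle, an unprivileged user on the host can see the symlink but cannot reach the files below it, no matter which recycled UID owns them.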
The general runtime behavior of this, even though we do the recursive chowning, is kind of nice. It's not a costly operation, because — since we never actually rewrite the files, we just change the inode ownership — it's surprisingly fast on the file systems these days.

[Audience: what if you need two services to have access to the same directory?]

That's a very good question. The thing is: if they have two different dynamic users attached, that's not going to work — Unix doesn't allow that. It would work if we had shiftfs, which we don't; I hope we can eventually make that happen. As long as that's not available, what you can do is this: when systemd creates these dynamic users, it honors the username you specify inside of the unit file. So if you have two unit files, both of them have DynamicUser= turned on, and both of them specify the same username, systemd will actually create one and the same user for them — and if you do that, you can share the directory. But it's really on you; systemd won't help you with this right now, because I still hope that shiftfs is the real thing eventually. I was actually at one of the container conferences before, where one of the guys talked about this and said it's going to happen — we'll see when. But yes, you can do it; you just have to be careful to use the same username for the services that want to share access.

[Audience question about using groups instead.] Sorry — the question was whether you couldn't do something with groups instead. The general problem with that is that it's then up to the programs to make it work, because they can, for example, create files that are not group-writable, and things like that.
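A sketch of the shared-username trick, with two otherwise unrelated, hypothetical services (names and binaries made up):

```ini
# alpha.service:
[Service]
ExecStart=/usr/bin/alpha
DynamicUser=yes
# Same username in both units, so systemd allocates one and the
# same dynamic user for both, and they can share the directory:
User=sharedpool
StateDirectory=sharedpool

# beta.service:
[Service]
ExecStart=/usr/bin/beta
DynamicUser=yes
User=sharedpool
StateDirectory=sharedpool
```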
We did think about this — using ACLs in particular to make something like that happen — but the problem I always saw with those solutions is that they tend to end up being something applications need to explicitly support, because many applications manage their access control manually. With the solution we went for, it's transparent to the applications: they don't know there's anything special going on. Well, if they look from the outside they will see the weird symlink, but from the inside there's no difference from a regular directory.

[Audience question.] So the question was how the file system looks from inside the service's namespace. Basically: you provide an image — for example a squashfs image, or a directory — and you have a unit file inside of it, and then you tell portablectl — that's the command I'm going to show you later — to attach this thing to the host. That basically means: copy the unit file out of the image, put it on the host, and update it slightly so that there's a RootDirectory= or RootImage= setting pointing back at the image file. If you then start the stuff, what it sees from the inside is exactly what is inside of that image — except for the places where you punch holes into it. And the holes that you punch are generally things like StateDirectory=, which I already mentioned: if you specify StateDirectory= in the unit file, it basically means the directory you picked there in /var/lib is shared between the host and the service, and if DynamicUser= is turned on — which is an optional feature — it does the magic UID stuff. And there are a couple of other settings like this: besides StateDirectory= there is CacheDirectory=, which does the same in /var/cache; there's ConfigurationDirectory=, which, you might guess, does the same thing in /etc; there's RuntimeDirectory=, which does the same thing in — you might guess — /run; and there's LogsDirectory= too, where you get the same thing in
/var/log. The model really is about pushing people to be more declarative in their services, by denoting exactly which directories are actually relevant to the service — and then this doubles as a way to punch holes into the sandbox safely, because we do the dynamic-user re-chowning where necessary.

[Audience: can you use arbitrary bind mounts from the host?]

The question is whether we support arbitrary bind mounts from the host. We do — but if you do that, then of course you can't use the dynamic user stuff so easily, because we would have to do the chown magic, and for arbitrary paths we can't insert the /var/lib/private thing in between, so everything explodes. But by all means, go and do it: for example, the storage people would run their stuff as root anyway, not as a dynamic user, and for them — yes, use as many explicit bind mounts as you want and make available whatever you want to have available.

[Audience question.] The question was whether distributions could ship tiny portable images and use stuff from the host. Yes, they can. I'm not going to prohibit that; this is a completely generic tool, and you can use it — or misuse it — any way you like. Would I do that? I'm not sure. Part of the idea of portable services, of course, is that — much like with containers — you distance yourself from the ABI of the host, to make the stuff more portable.

Sorry — ten minutes? Okay, a couple more slides. It's completely fine that I didn't cover everything, because I got so many good questions. And another one here: is it possible to start multiple instances of a portable service, like with normal system services?
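Collected in one place, a sketch of these hole-punching directives (the service name and paths are illustrative; the comments note where each directory is shared from on the host):

```ini
[Service]
# Illustrative image location; the unit's root becomes the image:
RootImage=/var/lib/portables/foobar.raw
DynamicUser=yes
# Holes punched into the image's file system view, with the
# dynamic-user chown magic applied where needed:
StateDirectory=foobar
CacheDirectory=foobar
LogsDirectory=foobar
ConfigurationDirectory=foobar
RuntimeDirectory=foobar
# Arbitrary bind mount from the host: allowed, but no chown magic:
BindPaths=/srv/data
```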
So the question was whether it's possible to start multiple instances of a portable service, like it is possible with regular services. The answer is a clear and resounding yes, because these things are native services: if you put a template unit inside a portable service, then you can instantiate it as many times as you want, just like a template unit installed on the host.

[Audience question.] The question was whether, if you do the multiple-instance stuff, the instances can share the same user ID. Yes, absolutely — and they don't even have to be multiple instances of the same service: as mentioned earlier, if you have two otherwise unrelated services that are not instances of the same one, you can do the same thing. What's key is that you turn on DynamicUser= in both cases and set User= to the same string.

[Audience question.] To repeat the question: can you have different user IDs for every instance? Yes, you can do that too, which is actually really, really awesome for the dynamic user stuff, because you can now trivially implement a service that is socket activated, where for each incoming connection a new instance is created, and each instance gets its own dynamic user that lives as long as the connection exists and then goes away again. To me, this concept of making user IDs something cheap — something you can allocate and return, so they don't become this extremely expensive thing that sticks around forever, where your package can only allocate one or two or maybe three but never a hundred — is like breathing new life into the Unix concept of user IDs. Suddenly you can use them for much more than you traditionally could, and that is so awesome, because user IDs are the core security feature
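The per-connection pattern just described can be sketched with a socket unit and a service template (all names hypothetical):

```ini
# echo.socket: spawn one service instance per incoming connection.
[Socket]
ListenStream=7777
Accept=yes

# echo@.service: the template each connection instantiates. Every
# instance gets its own transient UID that lives exactly as long
# as the connection does.
[Service]
ExecStart=/usr/bin/echo-handler
StandardInput=socket
DynamicUser=yes
```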
of Unix, after all. All the other stuff we have these days — SELinux and AppArmor and whatever else — came later and is only supported in specific setups, but user IDs as a security concept have existed always and are built into every piece of our software. So adding the dynamic system user concept turns something really established into a much more powerful one.

Okay, I'm going to make this my last slide, just to give you a bit of a feeling for how this works. I don't have a demo, because demos tend to go wrong, but I want to at least show you how the concept generally works. The command you use for interfacing with portable services is called portablectl. You invoke portablectl with the verb attach — on foobar.raw, for example, which in this case is a disk image; it could for example be a squashfs image. When you invoke this, the portable service image is attached to the host. What does that mean? portablectl will do a little bit of verification of the image, and then it will copy a couple of unit files out of it. Which unit files will it copy? The ones that start with the same name as the image file itself: if the image file is called foobar.raw, the unit files it copies out are foobar-whatever-you-like as well as foobar.whatever-you-like. The unit files that are copied out don't have to be service units, by the way: they can be socket units, they can be target units, they can be whatever else you like — not all of the unit types we have in systemd, but most of them. What this basically means is that you can package a couple of related units into one image; you just have to follow a little naming regime, where you always call them some prefix, dash, some suffix, with the prefix always the same.
You can use socket activation, timer activation, all of this at the same time, and just by doing portablectl attach, all of them become available on the system. From that point on they are regular services, so you can do systemctl start, systemctl stop, systemctl set-property, systemctl kill — whatever you can do with systemctl, you can do with them, because after they're copied out like that, they are regular system services. There's nothing distinguishing them anymore, except for the fact that they originally got copied out of some image file.

And of course this also works across reboots: portablectl attach, if you call it like this, persists across reboots, because the files are copied out into /etc. There's actually portablectl attach --runtime as well, and as you might guess, it copies them into /run, so the attachment — the fact that the unit files exist on the host — goes away when you reboot. There's obviously also the detach verb that undoes all of this: it just removes the files that were copied out, and at that point the services are not available on the host anymore. Key, again, is: leave no traces. The idea really is that, besides logs — because we should never delete logs — nothing remains in the system after you do the detach.

[Audience question.] The question is whether these can be template unit files. Yes, they can: the files that are copied out just have to have foobar as a prefix, followed by either a dot or a dash, but what comes after that doesn't really matter — it can be a template, it can even be an instance. And by the way, you can enable them at the same time as attaching, but you don't have to: you can attach them and then enable them, or not — it's completely up to you.

[Audience question.] Sorry — oh, you mean drop-ins? Okay, the question was regarding the drop-in files we support for unit files, so that you can extend
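An illustrative session of the workflow just described — the image name and units are hypothetical, the verbs are the real portablectl and systemctl ones:

```console
# Attach the image: copies the foobar-* / foobar.* units out of it
$ portablectl attach foobar.raw

# From here on they are ordinary units
$ systemctl start foobar.service
$ systemctl status foobar.service
$ systemctl stop foobar.service

# Attachment that disappears on reboot (units copied to /run)
$ portablectl attach --runtime foobar.raw

# Undo everything; nothing but logs remains
$ portablectl detach foobar.raw
```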
them on the host. Yes: because after they're copied out they are regular unit files, you can also extend the contained unit files on the host by dropping stuff into /etc and /run, because they are native unit files and at that point not distinct anymore. You can do everything you normally can; you can even do systemctl edit if you like, and systemctl will create the drop-in file for you. If you do detach, however, these drop-ins would not be removed. We could probably add that, but it probably needs an extra switch, because admins might be pissed if we removed configuration changes they meant to keep. That was my last question, so thank you very much, everybody. If you have further questions, I'm going to be outside.
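For completeness, a sketch of such a host-side drop-in (the path, service name, and the tweak itself are illustrative):

```ini
# /etc/systemd/system/foobar.service.d/override.conf
# A host-side drop-in extending a unit that was copied out of a
# portable image; it survives portablectl detach unless removed
# by hand.
[Service]
Environment=LOG_LEVEL=debug
```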