Good afternoon again. Let me welcome our next speaker. He's not very well known in the open source community, he's not contributing to many projects and — no, I'm just kidding. So let's welcome Lennart Poettering and his portable services. Enjoy the talk. — Okay. Yeah, today I'm going to talk about portable services, which is something we have been working on in the systemd project for a while. It's a step-by-step process, and the portable services concept is somewhere we want to get to — it's not where we are yet, but a good chunk of the stuff that brings us there is already implemented. So what do I mean when I talk about portable services? The very short summary would be that it's system services — system services being the thing that everybody of you probably knows if you've ever touched a Linux system, like a System V init script or a native systemd service or something like that — plus some container features. What "some container features" actually means in detail we'll come to later. But systemd, of course, doesn't live in a vacuum. There are many other things happening in the open source ecosystem, and the big one right now, of course, is containers — looking at the schedule of this conference, you can definitely see that. And when I, as a guy who works on a service manager, look at containers, I think a good chunk of what they're doing is actually pretty interesting and would make a lot of sense for service management, too. That's what portable services is. You could also say it the other way around: it's a little bit like container management, but with a number of systemd service features — I don't know.
Right, so it's something that — if you draw the scale — sits somewhere in between what a container is understood to be and what a system service is understood to be. A little more detail: what actually are containers? I think very few people have a really clear idea of what containers are; everybody thinks it's something different. What I think it is: resource bundling, isolation and delivery. I'm pretty sure other people will disagree with that in some details and think it's more or less than this, but I think these are the three key parts of it. Resource bundling means you have one directory tree where everything's included — your libc and whatever else. Isolation means you live in your own little world: you have PID namespaces, mount namespaces and all these kinds of things, so that you are basically isolated from the host system and live in a completely different world. And delivery means that somehow these resource bundles get onto your system, get deployed, and so on. If you compare these three things with what portable services are supposed to be: it's also resource bundles, but in contrast to isolation it's integration, and in addition to that it's sandboxing. Now you might ask what the difference between isolation and sandboxing is, and that would be a very good question. In my opinion, isolation is really about creating a new world, separate from the host world, where you live; while with sandboxing you continue to live in the same world, but you have less access to what the system provides than you would have if you actually ran on the system. So that's my definition.
I'm pretty sure other people would use different words for this kind of distinction, but anyway. So again: containers are resource bundles, isolation, delivery; for me, portable services are resource bundles, integration and sandboxing. There's no delivery among these, because for me personally that's not an interesting problem that I want to solve — I don't think it could ever fit that well into a service manager. How this stuff gets onto the machine is something other people have to solve. What's really key about the portable services approach is that everything is modular, meaning all the bits and pieces that make up this entire concept of portable services are separately useful, and you can opt in or out of every single one of them individually. That's how this stuff is built, right? It's not this weird thing where you get a shitload of stuff and then through some hacks you can remove some of it — it's really about being inherently modular, so that you have a lot of options. I already mentioned this range of integration: when I think about portable services, I think about a general range where you have integration on one side and isolation on the other. I mean, if I were a better graphics artist I would probably have drawn that as a proper scale, but I'm not, so you'll have to make do with this little chain of arrows here. If you think about the scale and you put classic system services on one end, you have full integration: they run with full privileges, they get full access to the whole system — that's what I call full integration. On the other end you have virtual machines, KVM-like stuff, where you run your stuff on the same system, but in most ways it feels like it could be on a completely different system. You are isolated as much as possible.
There's no access to the process tree, no access to the file systems, and basically the only way you communicate with the world outside is through networking and nothing else. Then somewhere in the middle you have something like Docker-style microservices. No one really knows exactly what that is, because some people run an init system inside Docker and other people don't; some things are shared and some things are not. For me it's a little unclear what it's supposed to be, but it provides quite a bit of isolation on the one hand, because you live in a PID namespace, a mount namespace and all these things — and on the other hand it also provides a lot of integration: for example, on the host you can type ps and you see the processes of the container. On the scale I would also put something like full OS containers, à la LXC or systemd-nspawn, where you basically run something that is a real system that manages its own networking and everything. That feels a lot like a VM, but it's actually implemented with container technology, and hence it's a little more integrated than a VM, though not as properly isolated as one. And then the important part here, of course, is the portable system services, which I place on this scale — ranging from classic system services all the way to KVM — somewhere near the left end, but not at the very left end. So you get some of the features that containers currently provide, but not all of them. So much about how I would position them, so that you get a conceptual feeling for what I'm actually referring to when I talk about this.
Now let's be a bit more specific about what I actually mean by integration and isolation. Let's consider what's actually shared and not shared between the host and this program you're running — in a container, in a VM, or in a portable system service. You could probably turn this into a nice table, by the way, where on one axis you show all the things that are either shared or not, and on the other axis you show VMs, LXC, nspawn and the other virtualization solutions. Take networking, for example: classic system services share the entire network with the host — they traditionally live in the same network namespace, and if you type ifconfig on the host, or ip, or whatever you want to run, that's the same thing the service sees. With VMs, on the other hand, it's totally not shared: you live in your own world, you have your own IP address and everything. If you think about the file system: classic system services share it, VMs really don't, and Docker kind of doesn't. PID namespaces: Docker generally has them, classic system services do not. The init system is usually shared — the most common way people use Docker is of course not to run an init system inside it, so Docker kind of is the init system, or the host side is. Device access: with a classic system service you of course have full access to all physical devices; VMs do not have that, except if you jump through some hoops and virtualize devices and pass them through, but it's not natural; and Docker kind of tries to optionally give access — I personally think that's really ugly, but I'm sure people do it. Logging tends to be pretty nicely integrated with containers, and totally not with VMs. Anyway, I don't really want to go into too much detail and actually walk you through all the possible
implementations with their exact feature sets — that's really out of scope. I just want to position the portable system services here. For portable system services, networking is shared; the file system is mostly isolated, but can also be shared in parts; PID namespaces are not used, so the PID namespace is shared; the init system is shared; device access is shared; and logging is shared. So on these points you get full integration. By the way, I know I talk pretty fast — if I talk too fast, just shout and I'll try to slow down, though I can't promise I'll keep that up for long. And if you have any questions, I very much prefer that you interrupt me right away rather than saving them for the end. So if anyone has a question, totally interrupt me. Anyway — so much about the positioning, what portable services are supposed to be. Now let's talk about the specific goals I want to implement with this. Again, I'm a service management guy — I'm one of the guys behind systemd, and systemd is at its core a service manager. The way services are currently deployed — unless you use containers — is that services leave a ton of artifacts all over the system. If you install an RPM for httpd, you get directories created everywhere, you get a system user created, you get all kinds of stuff hooked into the system in various places. One of the goals of portable system services is that that's a thing of the past, right?
So there are no artifacts left in the operating system when you install a portable service and then remove it again. This basically means the life cycles of things are bound together: for example, the temporary files you create in /tmp, or the system users you have — their life cycle is bound to the life cycle of the portable service. As long as the portable service is running, these things are allocated; as soon as it is stopped, they cease to exist and don't pollute the system anymore. To a certain level, anyway — logs, for example, you probably don't want to kill when you uninstall a portable service, but pretty much everything else you do. Another goal, which is obviously shared with containers, is everything in one place. It's kind of the other side of the leave-no-artifacts thing: things shouldn't have dependencies, things should be self-contained, meaning that the directory tree where you ship your stuff has everything in it
— your /etc, configuration files, everything. Another important goal I really care about is that portable services should just be a special case of normal service management, meaning portable services should feel like native services. It shouldn't make a difference whether you install httpd as a traditional system service — with full privileges, in the same namespace and everything — or as one of these new, modern portable services, where it's exposed exactly the same way, except the sandboxing is nicer and you ship it a little differently. So yeah, specifically this means systemctl should work the same way for native services as for these portable services. Why all this? That's always a question worth asking. For me, it's the next step for service management. Looking around at how people actually use containers these days, I see that a good chunk of container usage is, in my opinion, probably cases where they shouldn't actually be using containers at all. For example, Red Hat talks a lot about highly privileged containers. I personally think that's a weird concept: if you use Docker, it's about isolation; if you then turn off the isolation, then maybe you shouldn't be using Docker. I think it would probably make a lot more sense to come at this from the service manager level and just add a couple of things to make it useful for that use case. For me, that's the next step of service management — and service management being at the core of what systemd does, it appears natural to me to make it better. Something else I think is pretty interesting is the metadata. It's really difficult — like, if you were to start a new container manager today, a lot of stuff is already prepared for Docker, so how would you ever compete with that? Now, the interesting thing is that everything already has a systemd service file.
So to me it's really interesting to just say: given that systemd has kind of reached the goal that most of the commercial distributions have adopted it, there are systemd unit files for pretty much everything, and I think at this point in time administrators and users have actually understood how to write and read them. So one of the goals I had with this was: let's not introduce any new kind of metadata; let's just extend a little bit the metadata we already have, which is systemd service files. It's the same thing here: admins are already used to services, and it always seemed strange to me that they should have two levels of management for everything — the system-level stuff managed with systemctl and these tools, the container stuff with something completely different. So I think it should be a goal to abstract that away. That doesn't mean I want to take over the container world — I really don't — but it does mean that things like, I don't know, resource management should actually work the same for system services as for these containerish things. And I already mentioned the super-privileged containers thing. I think portable services could perfectly well provide the concept of the super-privileged container — again, this Red Hat thing where you ship something as a bundle, and it then runs with really high privileges, seeing a lot of functionality from the host system that's normally not available to a container. One more thing here: integration is good, not necessarily bad. I mean, there are cases where it's awesome to have isolation for everything, but there are also cases where integration is a good thing. For example —
most of the actual applications deployed in the world usually don't consist of a single service; they're a couple of services that work together, and if you want to run a couple of services together, there needs to be some kind of interfacing between them. If you isolate everything to the last bit, then this interfacing usually boils down to networking. Networking is great, but of course it creates its own set of problems — you need to actually manage it: you have the low-level network management of the host, and then you put another layer on top so that the containers can talk to each other. So I think, in order to keep things simple, in many cases it would be highly beneficial if you could get the integration you want without having to fiddle with all these things. One example: you have one portable service that runs Apache and another one that runs MySQL, and it would sometimes be nice if it were natural for them to connect to each other via a Unix socket in /var/run/mysql or wherever, right? So yeah — again, this is my reasoning, why I started the portable services stuff. It doesn't mean that everything people do in containers is like this, absolutely not. There are a lot of cases where you really should be using something like Docker. But I also think there's a lot of stuff where people are trying to turn Docker into something it probably shouldn't be, and where the portable services stuff would be a good alternative — and at least one that I think is a little bit nicer. So let's move towards how portable services actually look in real life. One thing I really care about is that systemd, in the longer run, knows three service formats natively, and the distinctions between the three are hidden away as much as possible.
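To make the Apache/MySQL example above concrete, here is a hedged sketch of how one sandboxed service could be given selective access to another's Unix socket using systemd's bind-mount settings. The service name, paths and socket directory are made up for illustration — they are not from the talk:

```ini
# Hypothetical sketch: a sandboxed Apache-style service that is mostly
# cut off from the host file system, but can still reach a MySQL
# socket directory shared from the host.
[Unit]
Description=Web frontend (illustrative portable-style service)

[Service]
ExecStart=/usr/sbin/httpd -DFOREGROUND
ProtectSystem=strict            # host file system becomes read-only
PrivateTmp=yes                  # private /tmp and /var/tmp
BindReadOnlyPaths=/etc/ssl      # selected host paths, read-only
BindPaths=/run/mysqld           # the MySQL socket directory, read-write
```

The point is the opt-in nature: instead of wiring the two services together over a managed network, a single `BindPaths=` line punches exactly one hole into an otherwise tight sandbox.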
First of all, there are System V init scripts, which you might know from RHEL 6 and other distributions or Unix systems. Then there are native systemd service files, for native services shipped in RPMs. And now there are these new ones, the portable ones. My goal really is that whatever it is, it shows up the same way: you type systemctl and you see them all; systemctl enable enables, systemctl disable disables — it should all work the same way, be exposed with the same semantics, have the same dependencies and the same resource management; systemctl set-property should work for them, and so on. It shouldn't matter where a service actually comes from, or in which of these three formats it comes. Now, while these are the three formats that systemd will care about natively, the goal with all the portable services stuff is actually to be open, so that the individual building blocks can be used to build something completely different. To give you an idea: you can actually build a generator. For those who don't know, generators are a systemd concept that basically allows you to translate foreign configuration into native systemd units at boot. This is, for example, how systemd implements compatibility with /etc/fstab — /etc/fstab being the classic Unix way you configure your file systems. To integrate that into the systemd dependency tree — so that block devices show up, file systems are checked and mounted and so on — we have this generator called systemd-fstab-generator, and it turns this information into native unit files. We have quite a few generators by default; System V service compatibility, for example, is implemented the same way — a generator looks through the System V init directory, finds the scripts and generates native systemd services. The portable stuff is going to be connected the same way, so that a portable service is something you just drop somewhere, and
the generator turns it into native units, so that you cannot actually distinguish it from a native thing — because it is a native thing. And the idea is that these sandboxing building blocks that systemd now provides, together with these generators, permit you to hook any kind of system into this scheme that you like. For example — and some people are actually working on this — you could write a generator that turns OCI stuff into native systemd unit files. So it doesn't really matter what you actually want to run, as long as it's something you can turn nicely into a systemd unit file and manage with systemd and the rest of the tools in every possible way. So, portable services on the technical level — on the implementation level — are for me basically services that ship all their stuff in one directory, or in one disk image, or whatever you want to call it. What's important to me is that I don't really want to define any new formats here, because I don't think I should be in that business. I defined my own format five years ago with the systemd unit file; that's what I want to build on. So for the portable services approach, what really matters is: I don't define new on-disk formats, I don't define a new OCI standard — what's it called — I just say portable services can come in whatever form you want, as long as the Linux kernel supports it. Specifically, that means either a simple directory tree — put it somewhere and systemd can run it as a service — or a complete disk image, for example a GPT disk image that contains a squashfs or something like that, or an ext4 without a GPT or something. What really matters is that it needs to be accessible with native Linux functionality, directly.
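A hedged sketch of what hooking up such a directory tree or raw disk image could look like in a unit file — the paths and service name here are made up for illustration:

```ini
# Run a service from its own root — either an unpacked directory
# tree or a raw disk image, using systemd's native settings.
[Service]
# Option 1: use a directory tree as the service's root directory.
RootDirectory=/var/lib/portables/foobar

# Option 2 (instead): use a raw image; systemd sets up the loop
# device, scans the partition table and mounts it automatically.
# RootImage=/var/lib/portables/foobar.raw

# The binary path is resolved inside that root, not on the host.
ExecStart=/usr/bin/foobard
```

Either way, the unit stays an ordinary systemd service — only its root file system is swapped out.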
That's what I care about. The way this is implemented in systemd is that in unit files you have a switch — actually, the RootDirectory= switch has existed for a long time, but it's a lot more useful these days — which basically allows you to say: run this thing, but use this directory as the root directory. And then there's RootImage=, which works the same way, except that you specify a raw image file instead. What actually happens in the background is kind of nice: systemd will automatically set up a loop device, scan the partition table of the image, mount it nicely and run the stuff from it. It's actually really cool what systemd can do there natively these days — because it has dm-verity support and all these kinds of things, it can automatically do integrity checks while the service is running, which, by the way, is something nobody else who does container stuff right now can do. By the way, does anyone have any questions? I'm talking a lot here — I can't quite believe nobody has any questions about any of this. Nobody dares? Anyway, let's continue. So, what are the disadvantages of this approach? I mean, on Unix we have had chroot for a long time, right? When I started with Linux, I ran qmail in a chroot, and that was, I don't know, 1994 or something. Containers are not that new — they use a concept that was already established back then, in a nicer way, but it's kind of the same thing. The problems that chroots traditionally have are still there today, though, and with the RootDirectory= and RootImage= settings in systemd we try to fix those; what that specifically means will come on a later slide. And I already mentioned that the root-image thing we now have in systemd for services can do crypto and verity and all these kinds of things. — That's a very good question. Okay, I should repeat the question: the question was whether these portable services are actually portable to
other operating systems, and the idea of course is that they are — but then again, I'm not the guy who will give promises, because ultimately, much like with containers, if you actually give that guarantee, you're basically saying the APIs provided will be the same on every single operating system. For containers, the primary API you're talking about there is the kernel API, and the kernel people have been relatively good at providing stable APIs — but in real life they actually didn't always. So I don't consider that part of my business. Yes, they should be portable, in theory, as much as containers are, but I will not give guarantees. I know the Docker people kind of want to give that guarantee; I think they shouldn't, because it's outside of what they can control. All I want to guarantee is that the compatibility surface is as minimal as possible, meaning that when you run something as a portable service, you see as little of the host system as possible by default. You get your own directory in /run, your own directories in /var/lib and /var/log, and a couple of other things, but nothing else — and then you opt in to additional stuff, if for example you actually want access to your MySQL socket. The idea is really to minimize everything and then hope for the best, given that the kernel people have a relatively okay reputation for compatibility. — The question was whether this minimal level is just the kernel, or the kernel plus some other APIs. Well, systemd doesn't really provide APIs there for the application. There are very few APIs, like sd_notify, if you know what that is — sd_notify is a concept for how services can notify systemd. But that's an API that has been stable for as long as it has existed, which is like five years. So I think — I mean, the very minimal API there is just sending out a message.
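For context, the sd_notify mechanism mentioned here is small enough to show in a unit fragment. This is a hedged sketch — the service name is made up — of how a service opts into it: `Type=notify` tells systemd to wait until the service sends `READY=1` as a datagram over the socket named in `$NOTIFY_SOCKET`, which is what `sd_notify(3)` does under the hood:

```ini
# Hypothetical service using the minimal sd_notify readiness protocol.
[Service]
Type=notify
ExecStart=/usr/bin/mydaemon
NotifyAccess=main   # only the main process may send notifications
```

The "API" really is just that one message; no library linkage is strictly required, since any code that can write a datagram to a Unix socket can speak it.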
That's stability I'm happy to guarantee. But other than that, we don't really provide much API — there's no REST interface or anything like that from the host to the client. So I think we should be pretty good there: it's really only the kernel, plus a very minimal bit on top. No other questions at this moment? Okay, so let's talk a little about sandboxing for a moment. Something that containers provide, again, is all this isolation; if we don't fully isolate things, then we should at least be able to provide sandboxing — meaning, as I mentioned earlier, you still live in the same environment, but you have less access to things. In the past systemd releases, and in the one about to be released, we have added quite a few different sandboxing settings to systemd unit files. These settings are actually completely independent of portable services; the only reason I put them on the slide here is that they become extremely useful as one part of the portable services concept. Now, I don't really want to go into too much detail about what they each do — I invite every one of you to look them up in the man pages — because even if you don't live in a portable services world, they're all options
you probably should be turning on if you maintain a service on Fedora, because they allow you to drastically reduce the attack surface of services. For example, there's PrivateDevices=, which gives you a private /dev for your service — one that contains only pseudo-devices like /dev/null and these kinds of things, but no actual physical devices like /dev/sda. There's PrivateNetwork=, which basically means you live in your own little network namespace: there's only a loopback device and no connectivity to the outside. There's DynamicUser=, which is a relatively new thing I want to talk about in more detail later, if I have the time — let's get to that later. There's RemoveIPC=, which basically binds the existence of IPC objects on the system to the lifetime of the service; that specifically means POSIX and System V IPC — semaphores, message queues and these kinds of things. There's PrivateTmp=, which gives you a private /tmp. There's PrivateUsers=, which basically allows you to disconnect the user database of the container — of the sandbox environment — from the host; we'll talk about that in more detail later, too. There's ProtectSystem=, which basically allows you to make sure that whatever runs in your service does not get write access to anything on the host except what you want it to; ProtectHome= is the same kind of thing for /home. SystemCallFilter= is a very powerful way to take arbitrary system calls away from a service, and it's actually pretty useful these days: originally it was pretty boring, because it was really just a list of system calls, but nowadays it has these groupings and things like that. Then there's RestrictAddressFamilies=.
It's actually really awesome: it restricts access to specific socket address families, so you can basically say this service shall get access to the Internet, but not to Bluetooth or netlink or whatever else. There are a couple of other ones. RuntimeDirectory= is kind of nice because it basically says your service gets its own runtime directory, which is lifecycle-bound to the service itself. There's RestrictRealtime=, which basically says your service can't get real-time scheduling — a pretty important thing, because real-time capabilities are granted by default on Linux to privileged code, and if you have them you can make the machine hang. ProtectKernelModules= basically takes away the right to discover and load kernel modules. And there are a couple of other ones, with more to come — like ones that disconnect a service from the kernel log, so that it can't see dmesg; a protect-clock one, so that you can disconnect it from being able to change the clock; and a couple of others — protect tracing, protect mounts. I think most of this is pretty self-explanatory. The StateDirectory=, CacheDirectory= and LogsDirectory= settings are particularly relevant to us as we talk about dynamic users, and we'll get to that briefly later. Something else we have been working on lately is per-service firewalling and accounting — I mean, we have all this resource management and sandboxing for everything else, but so far not for the network. Daniel Mack, an ex-Red Hat guy, has been working on making it possible to do per-service firewalls, so that you can lock down specific services on the network side too — so you can say Apache gets access to the network, but, I don't know, some local daemon does not — and accounting at the same time, because to be able to put limits on things you actually need to know what
the thing is actually using. What's really important, though, is that for portable services, all the sandboxing stuff shown here — plus a couple of other settings — will be opt-out, not opt-in. It's something we probably should have done for all native systemd services from day one, had we had all these features from the beginning, because security is always better if you default to having things as tight as possible and then permit people to loosen the controls, rather than the other way around. After all, security is not an enabler, it's a disabler; if somebody puts together a service, they must be genuinely interested in security to enable all the security features. So for portable services, since they're new, we can turn them on by default and then ask people to turn them off. Any questions at this point? — Very good question. The question was what happens if we come up with more sandboxing features later on, and that's a very good question. I have some ideas about what we can do there. The general problem is basically that if we have the tightest possible sandbox for portable services from day one, and then later come up with other ways to sandbox things and enable those by default too, then services might break, because they actually needed that functionality. You always have this problem — what do you do there? One of the ideas is that you basically write into the unit file "I want this level of protection by default" or something, but we still have to figure that out. It's all solvable — nothing is pretty about solving it, but it has to be done. So yeah, any other questions at this point? So, let's come to the actual hard problems in all of this, where things become nasty. One of them is dynamic users, right?
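The sandboxing settings listed over the last few minutes can be combined freely in a single unit. The following is a hedged sketch — the service name is made up, and which options make sense depends entirely on the service — of what turning most of them on could look like:

```ini
# Illustrative hardened unit combining the sandboxing options
# discussed in the talk; adjust per service.
[Service]
ExecStart=/usr/bin/mydaemon

# File system: read-only host, private /tmp and /var/tmp, no /home
ProtectSystem=strict
ProtectHome=yes
PrivateTmp=yes

# Devices: only pseudo-devices like /dev/null, no /dev/sda
PrivateDevices=yes

# Kernel: no module loading, no real-time scheduling
ProtectKernelModules=yes
RestrictRealtime=yes

# System calls: one of the grouping-based whitelists
SystemCallFilter=@system-service

# Sockets: Unix and IP only -- no Bluetooth, netlink, etc.
RestrictAddressFamilies=AF_UNIX AF_INET AF_INET6

# Per-service firewall: drop everything except localhost
IPAddressDeny=any
IPAddressAllow=localhost
```

For portable services the idea is that roughly this level of lockdown is the default, and individual lines are relaxed as needed, rather than added one by one.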
If I have a system service like Apache, and I get it as a portable service so that I can drop it into my system, then traditionally httpd runs as a user httpd, or www-data, or something like that, depending on the distro. Of course, that's hard to do if the service is just dropped in and shall leave no traces on the system when you remove it again. That is because traditionally on Unix, users have to be stored in /etc/passwd, and hence, especially with RPM: if you install httpd on a system, then you get a user created in /etc/passwd, and it's never removed again. Even if you remove the RPM, the user doesn't go away. The reason for that is that on Unix, ownership of files is sticky: if something created a file somewhere at one time, that file stays owned by it, regardless of whether the user goes away later. And then, if the user ID is recycled, something would get access to that file that probably should not have it. So if we want to do unprivileged portable services, we have this problem: what do we do about the users? The concept that we came up with in systemd is dynamic users. It's a simple boolean that you can set. If you set it, then a user will be created that lives exactly as long as the service is running; it goes away the instant the service dies. Now, the stickiness problem, how do we solve that? What's already implemented is the following. First of all, as soon as you turn on dynamic users, your service automatically enters a sandbox where it will still see the rest of the system but cannot write to any of the directories. This is how dynamic users work outside of the immediate scope of portable services, by the way: you can use dynamic users for any kind of service, actually. So by simply turning on dynamic users, the service ceases to get write access to anything, anywhere. That's a very efficient way to avoid the sticky file problem, right?
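The boolean just described is literally a one-line addition to a unit; a minimal sketch (the service name and binary are hypothetical):

```ini
[Service]
ExecStart=/usr/bin/foobar-daemon
# Allocate a transient UID/GID when the service starts and release
# it when the service stops; also puts the service into a sandbox
# with a read-only view of most of the file system
DynamicUser=yes
```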
Because if you cannot create a file, you cannot leave behind a file owned by a UID that goes away later. Then there are two ways out of it that are already implemented. One of them is that the service will get its own private /tmp and /var/tmp subdirectory, so that's a place where it can create files, and the trick is simply that the directory gets bound to the lifecycle of the service. So yes, your service can create files there, but you also know that the moment the instance of the service goes away, those files go away. Something similar is done with the runtime directory, meaning in /run you get your own private directory where you can put your sockets and whatever else you want to put there during runtime, but that directory is likewise lifecycle-bound to the service, for as long as it's running. Now, that solves a big deal of the problem, but it still leaves the issue that you can't leave persistent files around. Or if you do, and you can configure it that way, then these files will be owned by a dynamic user that existed once upon a time but might not exist later. The approach that I came up with there, which is actually not yet implemented but which I think I have figured out, is going to involve doing a little bit of bind mounts and having a top-level directory where we can put the dynamically-user-owned stuff, so that other things on the host cannot access it, except for the containers themselves.
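In unit-file terms, the lifecycle-bound directories described above look like this; the names are hypothetical, and StateDirectory= is the systemd directive that eventually covered the persistent-data case still being worked out at the time of the talk:

```ini
[Service]
ExecStart=/usr/bin/foobar-daemon
DynamicUser=yes
# Private /tmp and /var/tmp, removed together with the service
PrivateTmp=yes
# /run/foobar, created at start and removed at stop
RuntimeDirectory=foobar
# /var/lib/foobar for persistent state, kept across restarts and
# chowned to whatever dynamic UID the service currently has
StateDirectory=foobar
```

The still-unimplemented bind-mount scheme for dynamically-owned persistent state mentioned just above is what this StateDirectory= behavior with DynamicUser= later turned into.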
I don't really want to go into much detail about this, because it's a very low-level topic, but if anyone has a question about that, you can talk to me later. So yeah, those are the user problems, and I think we solved them, except for the persistent files one, which I'm still working on; but conceptually they're solved. The other problem is, of course, user database mismatch. If you allow these portable services to run on the host, they live in the same PID namespace, and you see everybody else's processes, and traditionally, since time began with chroot, that means that if the user database inside of the chrooted environment and the one on the host differ, you have a problem. Because if httpd inside of such a chroot environment thinks that httpd has user ID 50, and the host thinks it has user ID 61, that's already a problem. But it gets much worse the other way around, if you actually have the same user ID being used by something else. So the approach that I came up with to deal with that problem was to introduce an option that I already had on my slides earlier, the PrivateUsers= boolean. The PrivateUsers= boolean basically detaches the user database of a service from the user database of the host. Specifically, it will set up a user namespace. I'm not sure if all of you know what a user namespace is; a user namespace basically sets up an entirely new user ID environment. Now, the user namespace that this sets up is special in the regard that it will actually map everything to the user "nobody", except for the root user and the user the service is running as. That basically means that only two or three user IDs actually need to match. One is the root user.
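In unit-file terms, the option just described is again a single boolean; a minimal hypothetical fragment:

```ini
[Service]
ExecStart=/usr/bin/foobar-daemon
# Run in a private user namespace: only root, nobody and the
# service's own user are mapped; every other host user appears
# as nobody from inside
PrivateUsers=yes
```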
Thankfully, all Linux distributions agree what the user ID of that user is: it's zero; it's more or less required by the kernel. The other user ID that most distributions can agree on, in one way or another, is the nobody user, because that's 65534. So everybody agrees on that, though some distributions give it a different name; on Fedora and Red Hat we are pretty bad at that, we call it nfsnobody, for stupid reasons. And the other user ID that actually needs to match is the user ID the service itself is running as, and that's what this PrivateUsers= setting actually does: it sets things up so that that one matches. The effective difference is then that if you look at the system from inside the container, you basically see all processes, everything that's running, all IPC objects, either owned by root, or owned by nobody, or owned by yourself, but by no other users. So if I'm logged in on my machine and run Firefox as user lennart, a service where that's turned on will see lennart's stuff being owned by nobody. And that's kind of the point of it. I hope you could follow the concept around this. Now, dynamic users and the user database mismatch are problems that I think are mostly solved in this regard, but they actually spill into a lot of other things. One major problem is D-Bus, for example: D-Bus does policy checks on user IDs, and if your user IDs aren't sticky, you have a problem. And specifically, the policy around D-Bus is also not stored with the service; it's stored in the system, in one special configuration file, or one set of them in a directory. Hence, doing portable services that provide a D-Bus service, which I think should absolutely be the norm, is highly problematic, because right now it means modifying the whole system, and that again is directly contradicting my goal of not leaving any artifacts around on the system.
So I'm working on that; that will probably take more time to actually get fixed. The approach that I'm trying to follow there is essentially reworking the D-Bus side into something where the policy is encoded at the same place where the service definition is. So much for a bit of the details; I'm talking mostly about concepts here, and half of them are implemented, half of them aren't. Let's talk a little bit about what's in general in scope for the project. It's definitely simple delivery, meaning not really full deployment, but pretty much to the level where you do an HTTP download; it's verification, simple building, versioning, socket activation. What's out of scope, though, is load distribution, migration, fleet and cluster deployment stuff. We don't claim to define a universal API, like the question that was asked earlier, and user session functionality and desktop stuff, that's kind of out of scope for it, too. Now, to give you a little bit of a feel for where this all should be going, so that you actually get a feeling for what I'm doing there, I put up a couple of command lines. I mean, they are not real, you cannot actually run them, but the goal is that eventually you can. Do I actually have a laser thingy in here? Whoa, what did I do there? Yeah. So the idea is really that you type systemctl start, and instead of specifying just a service name, you specify a URL, which points to an image; I gave it the .psi suffix here, but that doesn't have to stay that way, I don't know what we'll do there, in this case.
This could be, for example, a raw disk image that contains a squashfs or something like that. And what this does, really, is it will download that thing and run it, right there, as a portable service. That's kind of the goal that I want to go for. And then, after you did that, you can issue systemctl status foobar, because it's a portable service that's directly integrated into systemd like any other service, and it will look exactly like it otherwise would; you can stop it afterwards, after you installed it like that. And, this is where it gets more interesting, there would be a new command, systemctl purge, so that you can drop the portable service again, which of course does not apply to native systemd services, because those are removed with RPM. And then another command line, to make it more interesting: you do the same thing but actually run it on a different host. This kind of already exists with -H, but it gets a lot more interesting if we actually have the other parts, so that you can have this kind of simple thing here. And yeah, the other thing would of course be that you work on something, package it up as a .psi, and then run it like this on another host. That's all the slides that I have. I know this one is a little bit more about plans; usually I prefer doing talks about stuff I actually did, less about stuff that I plan to do. This one is half-half.
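Spelled out, the hypothetical command lines from the slides look something like the following; as he says, none of these invocations can actually be run in this form, and the URL and the purge verb are illustrative only:

```shell
# download an image and run it as a portable system service
systemctl start https://example.com/foobar.psi

# then treat it like any other native service
systemctl status foobar
systemctl stop foobar

# remove the service together with everything it left on the system
systemctl purge foobar

# the same, but against a different host
systemctl -H otherhost start https://example.com/foobar.psi
```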
I think, well, let's say 70/30. But I hope you got an idea of what I have in mind there. The question, of course, is how all of this relates to, I don't know, more traditional container management. I think for many use cases this might be an interesting option to use instead. Given that the building blocks, all these sandboxing options, the root-directory and root-image options and these kinds of things, are all individually useful without actually buying into the portable services stuff, I think it's extremely useful, for example, to write alternative generators that generate native unit files that run something else, like, for example, OCI images and these kinds of things. The key here really is to have something that people can extend and build on, so that service management, and the sandboxing functionality systemd provides, ultimately become reusable in whatever way you like. So that, yeah, we put the tools in the hands of people to build what they like, and we just provide a couple of basic implementations that make things work nicely for people, to give a demo; but what people actually build from it is up to them. Anyway, I think my time has, like, five minutes or something to go. Does anybody have any questions about my talk? So, I'm supposed to repeat the question. The question is how that actually works, I guess, with the privileges that you pick for a .psi, whether we support unprivileged as well as privileged things. This stuff is mostly focused on running things as system services, right? It's not so much end-user stuff, it's not desktop stuff.
It's system-level stuff, and the idea is that you basically run everything as tightly controlled as possible. But I don't think I should be in the business of prohibiting people from doing things. So the idea is that people start with the tightest sandbox possible, but it's completely up to them what they write into their unit files and how many holes they punch. And if they want to punch holes into everything and just disable every kind of sandboxing, they are welcome to do so. I'm not the policing guy who will forbid them to do that. So yes, while we do default to as much sandbox as possible, I absolutely intend for people to pick whatever they like, and that's kind of the reason why I think this is actually appropriate for the super-privileged container stuff: because people can pick exactly what they want, and I want them to, and I don't have any philosophical backing to say, no, don't do that. Because, I say, yeah, a portable service is ultimately the same thing as a native service; it's just one that comes with defaults that I think are nicer. I hope that answers the question. There's a question: dynamic users, do you mean dynamic users or dynamic files? Yeah, so the question, which I'm supposed to repeat, was how much I actually reuse from the host system. I don't reuse anything from the system, actually. The idea is that if you buy into the full portable services thing, then you use a root image or root directory, and then you ship all this stuff the way you want, and there's nothing we reuse from the system. But you have the option to whitelist stuff from the system that gets made available to your container. That basically means, yeah, that some directories in /var/lib or wherever, that are available on the host, are also mounted into the container. But ultimately there is no reuse; there's no use of overlayfs.
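The "ship everything, selectively whitelist" model just described can be expressed with directives like these; RootImage= and BindPaths= are real systemd options, while the image and directory paths are hypothetical:

```ini
[Service]
# the service brings its own complete file system tree
RootImage=/var/lib/services/foobar.raw
ExecStart=/usr/bin/foobar-daemon
# whitelist one host directory, visible at the same path inside
BindPaths=/var/lib/shared
```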
You live in your own world, with no use of anything like that, and you get a couple of things whitelisted that come from the world outside of you, but they are also made available at the same place on the outside, if you follow what I mean. So no overlayfs, no kind of trying to reuse anything; this is about system-level stuff. So I don't intend this to be the world where you have runtimes and things like that. This is really a world where you ship everything, and that's the price you pay, basically, okay? So, the question was regarding dependencies, because RPM manages dependencies, and the question is what happens to those if you use this. Of course, the idea with containers, the original one at least, was that there would be no dependencies, right? You merge all your packages into one big blob, and then there are no further dependencies. Of course, in real life it's never going to be entirely that way, because things are interdependent. Beyond that, systemd has a dependency engine, right? In systemd, if you have one service and another service, you can order them after each other, you can make them require each other, these kinds of things. So again, the goal here really is not to introduce any new metadata, but to simply build on the metadata that we already have in the systemd unit files, because they can actually express all that pretty nicely already. So that would be my answer. But again, a package manager and a systemd service manager are kind of different things, because these are runtime dependencies, and the dependencies that come from the package manager are more like installation dependencies, right? And for the installation dependencies, the answer I would give you is: there are none, and that's the idea of containers, that you ship all this stuff along with your image. I hope that's an answer. How much time do we have? I don't have any time.
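As a footnote to the dependency answer above: the existing unit-file metadata he refers to is the standard ordering and requirement directives, for example (both unit names hypothetical):

```ini
# foobar.service -- hypothetical portable service
[Unit]
Description=Foobar daemon
# pull in its (also portable) database service and start after it
Wants=foobar-db.service
After=foobar-db.service

[Service]
ExecStart=/usr/bin/foobar-daemon
```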
Thank you very much. If you have any further questions, meet me outside.