Okay, hi, I'm Lennart Poettering, and I'm going to do a talk that touches on a lot of topics that the talk immediately preceding mine already covered, but I'm going to do my talk from a systemd upstream perspective, of course. My talk is called "Portable Services Are Ready to Use". I have given a talk about portable services before, at other conferences, like DevConf.cz in the Czech Republic, but at that point I mostly talked about the concepts without there actually being any real code around. Now I'm talking from the perspective where everything I promised back then is actually implemented and available in the most recent systemd release. So what am I actually going to talk about? Let's answer that question: what are portable services? One possible answer is that they are system services, the classic thing that systemd manages, with some container features applied. You can also see it the other way around: they are containers, with some features that classic system services have. Containers in this regard, well, nobody knows what containers really are, but usually people settle on three concepts: resource bundling, isolation, and delivery. For portable services I'm not interested in all of them. I don't care about delivery, but I do care about resource bundling, and I do care about sandboxing, which is similar to isolation but a different word, and I pick a different word there for a reason: sandboxing, at least in my vocabulary, is a little bit weaker than isolation, because it's not about creating separate worlds, it's just about making sure that while you live in the same world, you cannot do everything that you traditionally could do from a system service.
Portable services are supposed to be very, very modular, modular in the sense that you can pick precisely which of these concepts you want: you can use portable services without sandboxing, or you can use just the sandboxing concept of portable services. So it's not the buy-in that you have to make with classic containers, because with classic containers you kind of have to opt in to everything, and then there are ways to opt out of some bits, but it's not really healthy if you do. Anyway, if you consider a range from integration to isolation (ideally I would have drawn this properly with a graphics program, but I'm very, very lazy, so I just did it with LaTeX here), then on the left hand side you have the classic system service, very integrated into the operating system: it runs with full privileges and sees the entire rest of the operating system. On the right hand side, with the greatest isolation, you have VMs, like KVM, which have the largest level of isolation from the host: you basically never talk to the host directly, only through networking, so you pretend you're a completely different system. Classic containers, Docker-style microservices, are somewhere in the middle: they are more integrated than VMs, because they tend to use the networking stack of the host, but they're also more isolated than system services, because they traditionally cannot see the process list of the host, they cannot get access to the entire file system of the host, and so on.
So, if you have this coordinate system with classic system services on the left, full VM virtualization on the right, and Docker-style microservices somewhere in the middle, then I would place portable services somewhere between the isolation position of classic system services and Docker-style microservices. You get a lot of integration into the host system, more than you would get if you ran Docker, but you also get more separation from the rest of the system than you would get if you ran a classic system service. Specifically, what do I mean by integration and isolation? I mean the different things that can be integrated with or isolated from the rest of the system. It could be networking, the obvious candidate. It could be the file system: basically everything that's not a classic system service tends to have an isolated file system, they have their own file system tree in general. PID namespaces: whether you see the process tree of the host or live in your own one. Whether it has access to the init system, has its own init system, or shares the init system with the host. It's about device access: in a VM you usually do not have access to the physical devices of the host at all. In containers it's somewhat messy, some people do that, while a system service generally has full access to the local hardware if it wants to, because it sees the same /dev and gets all the device management. And logging is also a concern. So depending on what you focus on, integration might be good or might be bad: sometimes you want networking isolated, sometimes you want networking integrated. Anyway, I'm trying to get the message across that portable services are neither containers nor classic system services; they sit somewhere in the middle, with a relatively large level of integration with the rest of the system.
Portable services have a couple of goals, implementation-wise. When you do service management, like the systemd developers have been doing for a while, you realize that on classic operating systems, like RHEL and all the other Linuxes, when you install a system service, like Apache, or nginx, or MySQL, whatever, and then remove it again, you leave a lot of artifacts behind. For example, system users: on Unix there's traditionally no concept of safely removing users. This basically means that if you install an RPM or a .deb that requires a system user, the user will be created at installation time and never removed, regardless of whether you remove the RPM or the .deb. The reason for that is sticky file ownership: if there is one file that was owned by the MySQL user, and you remove the MySQL user, the file will continue to be owned by its UID, and if you assign the same UID to a different user later on, that user will suddenly get access to files it probably shouldn't have access to. Because of that, all the distributions generally do not remove users, and that's what I call an artifact. You drop a service in, you take it out again, and the system has been modified; it's not back in the state it started from. There are a couple of other places where this matters: there can be files in /run and all over the place, because classic system services have access to basically everything, and there is no way to be sure that when an RPM is removed, it really doesn't leave files around. So with the portable services concept, I tried to focus on classic service management, but fix that facet: I wanted a scheme where you can operate something very similar to system services, but with a guarantee that no artifacts remain in the system when you remove the portable service again.
This effectively means binding life cycles together. For example, think about /tmp: many, many daemons tend to write files into /tmp. These files are generally owned by the user ID of the service; if the service goes away, they remain there, and then somebody else might get access to them. By binding life cycles, I mean that if we now declare that the service, when it is started, gets its own private little subdirectory of /tmp, and when the service goes down, we remove it, then we have bound the life cycle of these temporary files to the service itself. We know that when the service is gone, the temporary files are gone too, and the temporary files will exist at most until the point where the service goes down. This kind of life cycle binding we can actually do for a lot of other stuff as well. For example, something we'll talk about later in this talk: with the approach I've followed here, we suddenly get a concept of ephemeral user IDs. We break up the classic static user ID concept, which has the problem I explained earlier, and instead users can appear ephemerally and later go away in a safe way, leaving no artifacts. Another goal is everything in one place. This is a classic problem I personally see with RPM- and Debian-based distributions: the general concept is to distribute the files that are in a package over the entire file system. Stuff ends up in /usr, stuff ends up in /var, stuff ends up in /etc. There is some tracking in these package managers to figure out what belongs to what, but it's very much incomplete, because it generally only lists the stuff that the package manager actually knew about, the stuff the person who put the package together thought of, not the stuff that is actually written out at runtime.
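The /tmp life-cycle binding described above maps to the `PrivateTmp=` setting in a systemd unit file. Here is a minimal sketch (the unit name, description, and binary path are made up for illustration; `PrivateTmp=` itself is a real systemd option):

```ini
# /etc/systemd/system/example-daemon.service (hypothetical unit)
[Unit]
Description=Example daemon with a private, life-cycle-bound /tmp

[Service]
ExecStart=/usr/bin/example-daemon
# The service sees its own empty /tmp and /var/tmp; the backing
# directories are created when the service starts and removed when
# it stops, so no temporary files outlive the service.
PrivateTmp=yes
```

With this set, whatever the daemon writes to /tmp disappears together with the service, which is exactly the life-cycle binding just discussed.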
What containers generally do better in this everything-in-one-place area is that they actually put everything in one place, so they know that if they remove that place, all the data is gone too. By the way, if you have any questions, totally go ahead and interrupt me right away. That's much better than just doing a question-and-answer session at the end. So, any questions at this point? Nope. Okay, another goal I wanted to implement with portable services: I wanted something that feels very close to a native service. I wanted it so close that it actually is one. This is different from containers: if you do container management, microservice management, you usually live in completely different worlds. You have the service management on the host and you have the container managers, and they have different tools to interact with them, different semantics, different runtime cycles, different ways to configure resources, different ways to do basically everything. I figured that for many use cases it's more interesting to just treat the container stuff, or stuff that's closer to classic container management, more like a regular system service, and have the same APIs to manage it all. So one of my goals was that the user interaction, the user concept, for these portable services should be exactly like for native services, meaning systemctl should work for them like for any other native service. These were the goals. Now, the question one always has to ask: what are the use cases, why do we even do this? For me it's the next step for service management. That's what systemd is, it's a service manager, and we don't live in a vacuum, so we look at what other people do, and container management is the hot thing, or at least was the hot thing a year ago. So there's stuff we can learn from it.
I think there are very good reasons why people use container management, and I think many of these reasons are also relevant for service management itself, i.e. the classic way people deployed stuff on Unix. What's also interesting to note is that everything we ship on the various distributions that have adopted systemd these days, which is effectively every single one of them, already has a systemd service file. This is really interesting: it basically means that if we add these container features to classic service management, then we get the bundling and the better isolation relatively for free, without introducing any new metadata, and without requiring people to come up with completely new ways and concepts to manage all this stuff. Also, admins are used to service management already. Most people know what a system service is and how to interact with it and explore it. So it's nice to just take this little step to make some of the container stuff available in a way they already know. One main use case for portable services is something often called super privileged containers. At Red Hat, for example, we have lots of storage people, and they want to be able to ship, as a container, stuff that interacts very closely with the hardware of the local system, because it does storage. So they want the bundling, but they also want a huge amount of integration into the host system: they want to interact very closely with the kernel, with the device management, and all that kind of stuff. They hence came up with this concept of super privileged containers, which is basically Docker but with all the security turned off. To me that sounds like taking a tool and turning it into something that it really isn't.
I think with portable services this use case is dealt with much better, because you ultimately do get a regular system service, just a system service that you can ship in a nicer form than an RPM, because you get the bundling. In many cases, integrating the stuff you want to run into the host is a good thing, not necessarily a bad thing. That's not true for all cases, but frequently you do want access to the host system's features, the information the host system has, and the other stuff that runs on the host; you don't always want the isolation. So I hope this gives a bit of an idea of what I have in mind with this and why I think it makes sense as a generic tool. It doesn't mean that this is supposed to replace containers or anything; I have no interest in that. I just think there is reason to add something between classic service management and container management, and for many use cases it's probably the better option than either of those two. The ultimate effect of this is that, from the systemd perspective, you could say that previously systemd supported two service formats, the native systemd one and the classic System V one, and now it supports three: System V, the native one, and these new portable services. Actually, many of the concepts that portable services are implemented with are so generic that you could even support other formats in a similar way, without completely reinventing the wheel and without even patching around in systemd, because what's really key about the portable services work I've done here is that it doesn't add anything new to the systemd core.
It's just a set of generators, ultimately, and a little extra daemon to make things easy, that takes all the stuff already implemented in systemd anyway and packages it up in a nice, fancy way, so it's a little bit nicer to use. This is actually key: the portable services stuff is not a completely new addition. It's not that PID 1 has an understanding of portable services; it doesn't. It just means that all the stuff that's already implemented is opened up in a new way to make it a little more user-friendly. So what are portable services in practice? By the way, any further questions at this point? No questions yet. Okay. What portable services ultimately are, are disk images. All the container managers do that; for example, Docker tends to have these series of tarballs that make up a disk image. With portable services I had this one goal: I didn't want to introduce any new metadata. So for portable services, a disk image is whatever you want a disk image to be. Specifically, I avoid defining something new; instead, a portable service can be some plain directory tree, it can also be a btrfs subvolume if you're a Facebook guy, but it can also be a GPT partition image or something. It can be anything, basically, that the Linux kernel can read directly: anything that you can mount, or anything that is already mounted and that you can then access through the file system layer.
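To make the "plain directory tree" case concrete, here is a minimal sketch of laying one out by hand. All the names here (`minimal-app`, the unit and binary names, the os-release contents) are invented for illustration; the layout follows what the talk describes for portable service images: at least one unit file under `usr/lib/systemd/system/` and an `os-release` file.

```shell
#!/bin/sh
# Sketch: lay out a minimal portable service as a plain directory tree.
set -eu

root=minimal-app            # the directory tree that acts as the "image"

mkdir -p "$root/usr/lib/systemd/system" "$root/usr/bin" "$root/etc"

# An os-release file, so tools can verify this tree was intentionally
# put together as a portable service (an /etc symlink is customary).
cat > "$root/usr/lib/os-release" <<'EOF'
ID=minimal
PRETTY_NAME="Minimal Portable Service"
EOF
ln -sf ../usr/lib/os-release "$root/etc/os-release"

# At least one unit file inside the tree.
cat > "$root/usr/lib/systemd/system/minimal-app.service" <<'EOF'
[Unit]
Description=Minimal portable service payload

[Service]
ExecStart=/usr/bin/minimal-app
EOF

# The payload binary the unit refers to.
printf '#!/bin/sh\necho hello from the portable service\n' \
    > "$root/usr/bin/minimal-app"
chmod +x "$root/usr/bin/minimal-app"

echo "portable service tree ready: $root"
```

Such a tree could then be packed into a raw disk image, or used directly, since a directory is itself a valid image format here.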
If you have these images, what portable services are is basically that you take the services that are on them and integrate them into the host system through RootImage= and RootDirectory=. For those who don't have experience with these two options: they have existed in systemd for a longer time. With RootImage= you can specify a raw disk image in a service file, and if you do, everything that the service file defines is actually relative to the top-level directory in that image. Similarly, RootDirectory= is exactly what chroot is: you specify a directory there, and then everything you specify in the service file is relative to that. What portable services now are is basically just making use of this in a friendly way: a disk image that contains services can be considered a portable service, and the services contained in it are pulled out and run directly from that image through RootImage= and RootDirectory=. So ultimately, you could say that portable services don't bring anything new to the table, neither on the systemd side, because all the concepts were there, nor even on the philosophical side, because ultimately portable services are just a fancier chroot. What's nice to mention, by the way, is that the RootImage= option, which has existed for a while, is kind of nice because it also does cryptography, it does verity. Verity, for those who don't know, is cryptographic authentication of images, so that you can ship an image to a system, and while the system runs that image, all accesses are cryptographically verified, to guarantee that the image is still in the version that the vendor originally shipped. It's an amazing concept, actually, because it allows you to make guarantees that the image you deploy is really the image that is being run and that it cannot be modified in the middle. This is something that Docker and all these systems cannot deliver, and it opens the door for cryptographically secure data centers. But I'm not going to talk too much about that. So, as mentioned, a disk image that can be considered a portable service is in no way anything new; it can be just a regular directory or a raw disk image. What matters, though, is that there are systemd unit files in there, units in /usr/lib/systemd/system or so; you have to have at least one unit file. What also matters is that it carries a /usr/lib/os-release file. For those who don't know, all your distributions ship that by default; this os-release file just says, on Debian, that it's a Debian, on Fedora, that it's a Fedora. Of these requirements, only the unit file one really matters; the os-release one we only make so that there's a little bit of extra verification that whoever put the portable service together knew what they were doing, by creating that file. So that's the bundling part: I hope you got the idea that pretty much any operating system image you have is already a portable service. The other thing is sandboxing. I mentioned already that I'm not using the term isolation, I'm using the term sandboxing. The sandboxing functionality in portable services is nothing new either; it is functionality that has been in systemd for a longer time. All of these are sandboxing options that you can already use with classic system services. I don't really want to go into the details of what all of them do, except maybe one: PrivateTmp=, for example, is something that gives you a private /tmp, so that you get the life cycle binding, and you can be sure that all these attacks through improper /tmp temporary file management just go away. Anyway, these are all sandboxing options that have existed in systemd for a
while. What portable services do is make them easier to use, as a one-stop solution, basically. And we are working on more; there are going to be even more sandboxing options. There's also per-service firewalling, which is also interesting, because it is a sandboxing concept in a way, too. Now, as mentioned, all of these sandboxing options have existed for a while already, so there's nothing new about them. What is new about the portable services stuff in this regard, though, is that if you use portable services, all these sandboxing options are turned on by default. Ideally we would do that for regular system services too, but we can't, for compatibility reasons, because System V services, all the heritage that we have, and the initial versions of systemd don't do this kind of sandboxing. If you were, for example, to start taking away write access from all your system services by default, things would break everywhere, because services are not ready for that. That is the reason why in systemd, unfortunately, for regular system services all the sandboxing is opt-in, not opt-out. For portable services we turn that around: it's opt-out, not opt-in. For classic containers, of course, it's also opt-out, not opt-in, the way it should be. Now, let's say you have an image file, it contains a service, you decide this is a portable service now, and you want to run it on a specific host. Then you have a couple of problems. One of them, and this one can be quite hard, is users; dynamic users are actually the solution to this problem. The problem is basically what I mentioned earlier: if you install an RPM on a system, for Apache or something, it creates a static user for you, httpd or something. RPMs do that. If you now want to centralize everything in one image, then you don't want it to work that way; you want this user to be created, but the moment you drop that image again, the moment you stop the services from that image, the user should go away. Our solution to that problem is dynamic users. I personally think dynamic users are a big step ahead for the entire Unix concept, because the user ID concept in Unix is the quintessential security mechanism we've ever had. People have added capabilities and SELinux and all these kinds of things, they have added namespaces, read-only mounts and all that stuff, but at the very core, the one security concept that all Unix has always had was the user ID. There are operating systems which recognize that, for example Android: on all your phones, every app individually gets one UID assigned, because they realized that if they want to isolate these user apps from each other, the best way to do it is by just using that quintessential security concept that Unix has always had. On classic Unix distributions, because these users are so static, there's only very limited use of this. For example, if you run Apache, it will only register one user ID and will run everything under it, even though it would be much better if, say, every vhost had its own user ID, so that if somebody exploits one website, they don't get access to the other websites as well. So on classic Unix, user IDs are expensive: they are static, they stick around, so you cannot just allocate one and use it; you have to think many times before you do this, because you know they're going to stick around forever on the system. And there aren't that many available anyway, because you usually only have about 1,000 of them on current Linux, since system users have to have a low UID. So our solution to this problem is dynamic users. Dynamic users try to break this up; it's an attempt to make user ID allocation cheap and ephemeral, so that you can basically say a user is allocated the moment a service is started and deallocated the moment the service goes down. I mentioned this
earlier that there's a sticky file problem with that: if the service created a file while it was running and then terminated, the file would stick around, and some time later the UID would be reused for a different dynamic user, which could then get access to stuff it shouldn't get access to. The solution is this: if you turn on dynamic users for a specific service, then you also automatically get a sandbox that takes away write access to anywhere in the file system, except for a very few closely life-cycle-managed places. So dynamic users, the way systemd implements them, are not just dynamic users; they always imply also getting a complete sandbox, so that the sticky file problem goes away, by simply not allowing you to write files anywhere, and only giving you write access to /tmp, where we can bind the life cycle, to the runtime directory, where we can make sure it's properly isolated, and to a couple of other places. This was a hard problem to solve, and I personally think the dynamic user stuff is useful for portable services, because it allows us to put together portable services that use system users, and we know that when the portable service goes away, the user also goes away. But dynamic users are also generally useful for all other cases. Ignore portable services: dynamic users are awesome on their own, because, for example, with systemd-run you can create a transient service, one that only exists as long as you want, dynamically, on the command line, and for that you can allocate a dynamic user, so that the command is run under a user ID that goes away when the command finishes. So, dynamic users: a big step ahead, I think, and in particular a way to breathe new life into this concept of Unix user IDs, which in my opinion has languished for a long time on classic Unix systems. Another problem, very closely related to this: if you work with chroot systems, traditionally you have this problem of the user database mismatch. What do I mean by that? Take /etc/passwd: if you ship it in a chroot, there's a very good chance that it's going to be different from the /etc/passwd that is actually on the host. This basically means that if you do ps in that chroot environment, or in any kind of environment that shares the process tree with the host, you will see that the user IDs might not be resolved to the right names, and vice versa. It's a problem for any chroot environment. If we want to do portable services, if we want this ability to bundle up a couple of things in a directory tree and then run the services from that tree on the host, then we have to deal with that problem: what do you do with the mismatched user database? The solution we came up with is called PrivateUsers=. If you turn that on for a service, and it's done automatically if you use the portable services stuff, then, using the user namespacing concept of the Linux kernel, from the view of the portable service all user IDs that are valid on the host are mapped to the nobody user, except for the root user, which is mapped to the root user, and the dynamic user that was allocated for the service itself. The effect of this is that from the perspective of the portable service there will only be three users visible in the entire system: there will only be processes visible that are owned either by root, by nobody, or by the service itself, and everything else just disappears through the mapping of the user namespacing. This is actually really interesting. User namespaces are a kernel feature that I have certain problems with, I think they're extremely over-designed, but this is a really interesting use case for them, where this mapping that kernel user namespacing provides allows us to reduce the
differences in the /etc/passwd files, by simply ignoring everything in them: we know there are never going to be these other users, because we moved them entirely out of view. I hope this was in any way something you could follow; user namespaces are a topic of their own, and I don't want to go into too much detail about how messy this all is. Anyway, it just takes advantage of the fact that on Unix, across all distributions, everybody agrees on the definition of at least two users: everybody agrees that the root user is called root and has user ID zero, and everybody agrees that there is a nobody user, which has user ID 65534, i.e. minus 2, even though people can't necessarily agree on the right name for it, because Fedora for some reason calls it nfsnobody instead of nobody. But they do at least agree that this user exists, even if they don't agree on its name. So that is the solution to this. There are a couple of other hard problems. The D-Bus one is not solved yet: if you are a system service and you want to talk to the rest of the system, the most popular IPC system on Linux is D-Bus, and D-Bus expects static policy, written in XML, installed on the system, for clients to be able to do something. This is an unsolved problem; we have talked to various people involved in D-Bus about what we can do there, but it's still a bit unsolved. A couple of things (how much time do I have?), a couple of things that are in scope for the entire concept of portable services: simple delivery, meaning I want to make it easy to use the built-in systemd tools to download something over HTTP, but that's about the level at which delivery is in scope for portable services. Verification: I mentioned this already with the verity stuff; I want strong cryptographic verification, to a level that none of the classic container management solutions provide. Then simple building and versioning, socket activation. People ask me what portable services are about, and the message I want to get across is really that portable services are supposed to be a basic building block, another basic building block that your operating system offers you. But it is a basic building block: it's not a solution that you can fully deploy on its own, because it's not comprehensive. It does not help you with load distribution, migration, the orchestration stuff; it really just does the low-level bits, but in a nice concept. I've talked a lot about what portable services are, and I do hope some of you got a basic idea of what I want to do with this: I want to take system services, add this bundling concept to them through chroots, through these RootDirectory= and RootImage= settings in systemd, and I want to put a strong emphasis on sandboxing. Now, how does it actually look when you work with it, what's the mode of operation? If you run the newest systemd version, you'll find a new tool installed; it's called portablectl. It follows the same scheme as everything in systemd that you interface with: it's suffixed with ctl. So, just as an example, let's say you have a portable service image called foobar.raw. It has the suffix .raw because it's a raw disk image, though it doesn't really matter what suffix you give it. Let's say it has an ext4 file system in there, there's an operating system, and it has a couple of services. If you issue portablectl attach foobar.raw, what happens is that this tool will look into the image, look for the service files supplied in that image, pick some of them (and we'll come to which ones it picks), copy them into /etc/systemd/system, the place where you put your own unit files, and extend the unit files with a RootDirectory=
or RootImage= setting — RootImage= in this case, because we are working with a raw disk image here — and that's already it. So what does it do? It copies out the unit files and makes sure that when they are actually executed, they point back to the disk image we're working with. That's it. If you run this, then suddenly the services from that image file are available on the host like any other system service: you can start them, stop them, query their status, resource-manage them, enable them at boot — you can do whatever you like with them.

There's another command which does the opposite: portablectl detach foobar.raw. If you invoke it on the same image, it does the reverse: it looks for the unit files that got copied out and removes them again, and there you go, everything's gone — because all the data was centralized in that image, and because in the background this uses all the fancy stuff I just talked about, the dynamic users and the sandboxing things, we know that after the detach there's nothing remaining in the system from that image. With one exception, by the way, which is logging: any logging that these services did while they ran between the attach and the detach. I mean, just running one command after the other will not run any services, because attach only makes them available in the system and detach makes them unavailable again — in between you have to actually run systemctl start or something to actually start something; I hope you get the idea. But anyway, everything that might have been logged will remain on the system: logging is something we consider an artifact that should remain and should not be removed.

There is a question — should we do this with the microphone, or should I just repeat it over here? The question is: is there a non-interactive operation mode where you can pre-deploy an image and just have it run on first boot?
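The attach/detach cycle just described can be sketched as a shell session. This is a sketch only — it assumes a host running a recent systemd with the portable services support installed, and foobar.raw is the hypothetical image name from the talk:

```shell
# Make the units shipped in the image available on the host
portablectl attach foobar.raw

# From here on they behave like native services
systemctl start foobar.service
systemctl status foobar.service
systemctl stop foobar.service

# Undo the attachment; journal entries from the service remain
portablectl detach foobar.raw
```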
This is non-interactive, right — I mean, you can do that from any script. But if you mean a directory where you just drop the stuff in and it gets picked up: no, we don't have that, but it would make a lot of sense to add. What it internally does — I mean, portablectl is a dumb tool that just talks to a little mini-daemon over D-Bus, so if you want to attach images, something can just go through D-Bus.

What is important, though: you can do portablectl attach foobar.raw, which makes the services available to the host, and then you can do systemctl enable on them, which does exactly what it would do for a normal service: hook them into the boot process, so that at the next boot they will automatically be started. The idea really is that after the attaching, it really is part of the system, so you don't need any particular special preparation to start it on the next boot — you just use the regular tools, just systemctl enable. That's kind of the key of the idea: it's not different from the normal stuff.

What happens if the name of a service inside the image conflicts with one already on the system, or with another version of the same image? Okay, that's a very good question — this is the stuff I omitted earlier when I said that some of the unit files included in the image are copied out. To go into the detail: by default it derives the names of the unit files that it copies out from the name of the image itself. So in this case, what it actually does is copy out anything that is either called foobar.service, or called foobar-anything.service, or either of those but with .target, .timer, .socket or .path at the end. The idea is that this makes it possible to ship multiple services plus timers plus paths plus targets — whatever you like — in a single image, as long as you give them all the same prefix, which happens to be the same name as the image itself. That's what gets copied out. That said, that's only the default.
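As a rough illustration of that default naming rule, here is a tiny POSIX-shell matcher. This is not code from systemd — the real selection logic lives inside the systemd daemon — it just restates the rule: for an image called foobar.raw, units named foobar.* or foobar-*.* with the listed unit types are the ones copied out:

```shell
#!/bin/sh
# Given the image prefix ("foobar" for foobar.raw) and a unit file name,
# succeed if the default rule would copy that unit out of the image.
matches_image() {
    prefix=$1 unit=$2
    case $unit in
        "$prefix".service | "$prefix"-*.service) return 0 ;;
        "$prefix".target  | "$prefix"-*.target)  return 0 ;;
        "$prefix".timer   | "$prefix"-*.timer)   return 0 ;;
        "$prefix".socket  | "$prefix"-*.socket)  return 0 ;;
        "$prefix".path    | "$prefix"-*.path)    return 0 ;;
        *) return 1 ;;
    esac
}

matches_image foobar foobar.service       && echo "foobar.service: copied out"
matches_image foobar foobar-worker.timer  && echo "foobar-worker.timer: copied out"
matches_image foobar nginx.service        || echo "nginx.service: left alone"
```

So shipping, say, foobar.service, foobar-gc.timer and foobar.socket in one foobar.raw image gets all three copied out together.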
On the command line you can actually specify that you want to copy out anything you like. By default, before it copies a unit out, the tool will validate it for you and tell you: you already have that installed on the host, are you sure you want to do this? Actually, currently I think it doesn't even let you override that — but it will notice when there are conflicts and not allow them. The idea basically is that if you have your app and you want to ship it like this, you just adopt the scheme where you give all the unit files of your app the same prefix. My suggestion would even be to use reverse domain notation: give the image file itself the same reverse domain name, and then you can just attach and detach and be reasonably sure there are not going to be naming conflicts. And the unit files can run as normal system services or as portable services; it doesn't really matter.

Can I extend them with drop-ins from the main system? Yes, because they are installed into the main system, they are really the real thing: you can do systemctl cat and see them, you can do systemctl edit, you can do systemctl set-property — they are native services at that point. They just happen to already have one extension, which is the RootDirectory=/RootImage= thing. Plus, the way it actually works is that we have these profiles — this is about the sandboxing thing. When you do the attach, it will actually do more than just add the RootImage= or RootDirectory=: it also adds some symlinks into the unit's .d drop-in directory for extending the unit files, and one of them links in what we call a profile, which is just the set of sandboxing options to turn on. By default it uses a profile called "default", which locks services down very much, so that they run with very few privileges — pretty much all the sandboxing options I had on that other slide turned on.
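Because the copied-out units are native at that point, the usual systemctl machinery applies to them unchanged. A small sketch, with foobar.service standing in for a hypothetical attached unit:

```shell
# Inspect the unit together with the drop-ins the attach operation added
systemctl cat foobar.service

# Add your own drop-in, exactly as for any other service
systemctl edit foobar.service

# Resource-manage it like any native unit
systemctl set-property foobar.service MemoryMax=512M
```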
We do have a couple of other profiles, though. If you pick strict, that is even stricter; then there's trusted, which is the opposite — with trusted there are basically no sandboxing options turned on at all; and nonetwork is actually the same as default, except without networking. You can define your own profiles; these are just the default ones we ship, and as mentioned, the one called "default" is what you get if you don't specify anything — if you use the commands I suggested here, that's what you get. So the idea is: sandboxed by default, security by default; but if you don't want that, you specify the trusted thing and there you go, it's gone.

Is it planned to integrate this into the package managers, or are these supposed to be a thing of their own? They're a thing of their own — they're supposed to be bundles that you build however you like. I mean, you can use rpm and dpkg to build your images, but that's up to you, I don't care; this sits on top of that.

Let's do one last thing. What's really key here is that I don't define any new metadata with this. There is no new disk format, because the disk format is basically a directory tree or a raw disk image, which all the tools can generate anyway, all the tools can read anyway, and which the Linux kernel can read. Similarly, there is no new format for defining what to actually run in it, because it's just plain unit files — just the stuff we have anyway. And there is no new metadata about the image itself, because it just uses /usr/lib/os-release, the stuff that already exists. So this is a key point to take away: it's not a new format, it just uses stuff we already have and makes it a little bit nicer to use, so that you can merge these images with the host system in a safe and somewhat nice way. And I think I already mentioned this, so I'm basically done.
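Returning to the profiles for a moment: the selection happens at attach time. A sketch of how that looks on the command line — foobar.raw is hypothetical, and the profile names are the ones from the talk:

```shell
# No option given: the "default" profile, i.e. sandboxed
portablectl attach foobar.raw

# Even stricter sandboxing
portablectl attach --profile=strict foobar.raw

# Same as default, but without network access
portablectl attach --profile=nonetwork foobar.raw

# Opt out of sandboxing entirely
portablectl attach --profile=trusted foobar.raw
```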
One more thing: because this requires no new metadata, there is actually no real need to use any specific tool to build these images. You can build them with any tool you like — it can be, for example, debootstrap, it can be yum --installroot or whatever it's called. I wrote one tool, mkosi, which is a little bit nicer to use than the other ones, but it's a useless script in many ways, because it's just a wrapper around debootstrap and the others. So use it if you like, but you totally don't have to. You can even build images that are compatible with portable services with, I don't know, the tools you typically use for building VM images — because if you have built a raw VM image, it can also double as a portable service image. There's nothing special about this stuff: if it's something Linux can access, it's a portable service image, as long as it carries at least a unit file and the os-release file. So this is really the key: I'm not going to give you many build tools — this one, yes — but you can use anything, Vagrant or whatever you have, to build these images. It's completely up to you, because I don't introduce anything new: there is no JSON stuff or XML stuff or whatever else that you now have to write, because your image is already compatible.

That's actually an interesting property, because you can have one image that can act as a VM image — you boot it up, there's a system inside, it boots up the service and everything's good. You can build that image so that it can also be run in systemd-nspawn, which is basically for free — it's an even simpler approach, because you don't need a boot loader, but you can still take advantage of the systemd in there. And you can also attach it as a portable service, and in that case integrate it with the rest of the system.

So that's all I have. Thank you very much for your interest. If you have any further questions, meet me in the hallway track — in particular, I would like to talk to those
people who gave the talk right before mine, because it was super close to what I was talking about. I hope that was interesting to you — I need to vacate the stage now. Thank you very much.