and Jason, also from Project Atomic. If you look at the schedule, right after this in the same room there's a Kube and Origin discussion session, also presented by both of us, so the two are very closely tied together. For the first half of this session I'll go through some of the concepts behind system containers, we'll take a short break, and then for the second half Jason will take the lead and we'll talk about Kube and Origin a little more.

First, I actually set up a quick GitHub repo for this demonstration. In there you'll find links to a lot of our projects and a bunch of the images I'll be showing off today. I plan to do this as half talking and half demonstrating what you can do with system containers, and if you wish to follow along, that would be great. Unfortunately, due to some internet constraints, pulling images right here might be really, really slow. So if you want to do some setup now: if you have a handy virtual machine running Fedora, Fedora Atomic Host, or even CentOS, all you need is two packages, docker and atomic. Any virtual machine works, or even your main host if you want; there should not be any destructive demonstration. Feel free to get that set up while I talk through the first half about the concepts behind this.

Quickly, the topics I'll be going over today. I'll go over some background on the Atomic Host, and how some of the problems we ran into with it created the concept of system containers. I'll then demonstrate how to use a system container: how to run it, how to do updates and rollbacks, and other features similar to what other container runtimes have. I'll show you some of the use cases and a lot of the existing containers we have, including the ones Dan was talking about: etcd, Flannel, Docker, and CRI-O as system containers. And then I'll quickly go on to creation: how a container is laid out, how you would build these containers with the tooling, and basically what you would need to get your own service running as a system container.

All right, on to the concepts, with a little background first. How many of you here have used the Atomic Host for anything? Okay. So if you go to our website, you'll see that we describe the Atomic Host as a lightweight, immutable platform for running containerized applications. I think there are two key words in that statement: immutable and containerized. A lot of the plan behind the Atomic Host is to make it an immutable platform to run things like Kubernetes and OpenShift on. It's designed to be, for example, a node for your Kubernetes and OpenShift clusters, and it's optimized for them. Another key concept behind the Atomic Host is that we think of it as an aggregated software unit. When you update an Atomic Host, you do not manage it as packages; you do not do individual package updates and rollbacks. Instead, we release an aggregated unit that has been tested. For the Fedora Atomic Host, for example, a release happens every two weeks, and in that release you get all the package and kernel updates you need. And for you, running the update is just one command.
You run atomic host upgrade, it upgrades the whole tested unit to the new version, you reboot the system, and boom, you're in the new unit. You don't have to do individual package management, and that's very good for a containerized application host.

With that came a couple of problems that we try to solve with system containers. For example, what if you wanted to decouple the host from some of the host services? As Dan mentioned earlier, Docker now releases once every month, and there are so many breaking changes in Docker. What if you don't want to, say, update to the newest host immediately? What if you want to keep running the old daemon so your old setup keeps working correctly? What if you wanted to add a new service to the Atomic Host? The Atomic Host does not, by itself, come with DNF or Yum. Well, you can do something we call package layering, but the general idea is that it works best as this aggregated unit, so you don't have to manage individual packages. But what if you wanted to add something to that base host? And what if you wanted a smaller base? Now you might be thinking: why would I need a smaller base if I'm going to run containerized applications anyway? I'll have to pull in those images, and they'll all be on my host in the same storage. Well, like I said before, the idea for the Atomic Host is that we do these upgrades and rollbacks together, and during an upgrade you need to reboot into the new deployment so we can support this multiple-root capability. So with a smaller base you get less downtime and faster upgrades, and it's just generally better.

So, system containers are just systemd services run as runc containers. Now, like I mentioned earlier, they don't have to be runc; really, they're just systemd services that you can run as a containerized service on your host. You use the atomic command line to manage system containers, and they use OSTree for storage. A little background if you're not too familiar with the Atomic Host: OSTree is what we use as the storage system for the Atomic Host. You can think of it as a sort of git for your filesystem. It's really just a very, very overgrown hard-linking system that lets you store your filesystem in a tree format. System containers take advantage of that and also use OSTree for storage, which means, for example, that if you want to pull in a new container and you want to use the same image as the host, you can set them up to use the same repo. That way it takes a lot less storage, because system containers can make use of those same layered images. They use Skopeo to do the image pull, and they use systemd for lifecycle management.

Another point is that the containers are read-only. The idea is that since these work as OSTree images, you can use the same container image for multiple containers. For example, if you have a big cluster of, say, 100 machines, and you have an NFS server hosting the root filesystems, you can set it up so that all 100 nodes that need to run, say, a Kubernetes service use the root filesystem from the NFS server, thereby saving you a lot of space. But the downside of that, obviously, is that the container image has to be read-only, which I wouldn't really call a downside.
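As a rough sketch of that host update flow (these are the stock Atomic Host commands; output elided):

```console
# Upgrade the whole host as one tested, aggregated unit, then
# reboot into the new deployment
$ sudo atomic host upgrade
$ sudo systemctl reboot

# If something breaks, one command returns the previous deployment
$ sudo atomic host rollback
$ sudo systemctl reboot
```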
It's actually a good thing, because it keeps your system immutable. You know what's happening with it, and you avoid a class of security problems because of that. And when I say host-specific, I mean that the container images are built for a specific container host. However, that does not mean you're necessarily confined to this. Like I said, one of the good things about system containers is that you can decouple the host and the container image, which means, for example, that on a Fedora 26 machine you can run an image that was based on Fedora 24. You can even run an image that was based on CentOS, for example, and it works perfectly fine as long as the compatibility is there.

So why use system containers? I highlighted some of the points already. You can run the services that need to come up before Docker or CRI-O, like etcd and Flannel, as if they were traditional, out-of-the-box system services. The idea is, like Dan said, that Docker needs etcd and Flannel to be running before the daemon starts if you wish to use them for networking, et cetera, and system containers can do that. systemd, as Dan said, knows much better than Docker would what the correct ordering is to start these services, and this can happen during boot time. System containers do not require a running container engine: if you run them as runc containers, they use the runc runtime directly, and there's no conflict with the Docker daemon or the CRI-O daemon at all. They can utilize the existing OSTree repo if you're on an Atomic Host or another OSTree system. You can easily switch between versions: we support updates and rollbacks, much like the Atomic Host itself does. And they provide the usual benefits of a bundled image for running your service: isolation, and a consistent experience if you are, for example, a system administrator.

So let's take a look at what's inside. It follows the OCI format, which means, for example, that if you have a Docker image, as long as you have the right setup you can run it as a system container. You put the services and commands in the container; I'll show you in a bit exactly what the layout looks like. It's stored into OSTree as branches for the different layers, so it has the same idea of layers that Docker has, to save you space: you can share the same base image, for example. When you do an install, that's the extra step compared to Docker: you install the container to the host. It creates a hard-link checkout from the OSTree repo, which means it doesn't take extra space, and the contents of the image are known.

Generally, for our system containers, and this is true of the ones that exist out there today, you need some combination of the following files. The config.json.template file is the template Open Container Initiative config file for runc; systemd uses it to have runc create a container based on that config. The manifest.json file is where you can set default values, environment variables, for example configuration variables for the different services; the user can obviously override these as they wish. These are built in as part of the image, and they're applied during install. Then there's the service.template file, which is the service unit file for systemd, and there's an optional tmpfiles.template.
That one is basically the config file for systemd-tmpfiles.

Some quick comparisons to Docker. System containers follow the Open Container Initiative format, have the same concept of layers, use runc as the container runtime, and do not conflict with Docker at all. For example, Docker has its own way of keeping track of which containers it has created, even though it also uses runc, and those are completely separate; I'll show you in a bit what I mean by that. They don't conflict with each other. Some key differences: system containers use systemd for lifecycle management, whereas Docker uses Docker. System containers generate files specific to the host, whereas Docker is a little more loosely structured, though you can do the same thing there. System containers predefine the mounts in a config file, while with Docker you pass them in at runtime, for example. And system containers do not require any sort of big, fat daemon, like I mentioned before.

Let's quickly go through usage. This is basically the expected lifecycle of a container. Much like with Docker, you pull an image to your local storage. You then install the image to the host; that's the extra step you would not take with Docker. You start the container by starting its systemd service, you can check its status, and you can stop the container when you're done with it. Now, one of the key differences is that these containers are not transient. They will stay on your host unless you uninstall them, which means you can update a container as you go: you can fetch a new image, create a new checkout, restart the service at any time. It acts like something that is part of the host.

Some of the other functionality that exists: you can run image info and container list, for example. It's basically integrated into the atomic command line, and it does a lot of the things you would expect Docker to do. There are many installation options: like I mentioned before, you can set up the root filesystem to be shared over NFS, et cetera, and you can set environment variables during install to override the default configs. You can update and roll back a container. That's one of the benefits of a system container: whenever you update to a new version, you can rest assured that all the old configuration and all the environment variables of the previous deployment are saved. So if anything goes wrong, you can always do a one-command rollback to the previous version, and it will still work as you would expect it to before. And recently we implemented atomic run, which executes a command in the container using runc.

So let me quickly go to the demo. I'll resize a little bit. If you have a system set up on your machine, all you need is docker and atomic. Unfortunately, because the internet is pretty slow here, pulling any of these images locally could take anywhere between five and ten minutes. But on this machine I've already pulled a bunch of the images I'll be showing you today; unfortunately the output wrapped a little bit. You'll see that these images are actually already in the Fedora registry.
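Strung together, that lifecycle looks roughly like this (a sketch; the image path is illustrative, and the etcd name just matches the upcoming demo):

```console
# Pull the image into OSTree storage, then install it to the host
$ atomic pull --storage ostree registry.fedoraproject.org/f26/etcd
$ atomic install --system registry.fedoraproject.org/f26/etcd

# From here on, lifecycle is plain systemd
$ systemctl start etcd
$ systemctl status etcd

# Fetch a newer image into a new checkout; roll back if needed
$ atomic containers update etcd
$ atomic containers rollback etcd

# Containers are not transient: they stay until uninstalled
$ atomic uninstall etcd
```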
So if you attended the talk by Adam Miller yesterday, he showed how to create a container and get it into the Fedora registry. Many of the system containers are already in there, such as etcd and flannel, and you can use them. This container-engine image is pretty much Docker as a system container. Actually, there's also a docker namespace within the Fedora registry, and container-engine is just an image built on top of that; I'll show you that in action in a moment. And if you go to the repo I have, you'll find the commands I'll be going through, so if you wish to follow along, feel free to do so.

The first thing I'm going to do is install etcd. As you can see, all I need to specify is that I want to install the container as a system container. Now, if the image was not local, it would have gone to a registry and done a pull based on which registry you gave it. But since that was already done, all it did was extract the container to the expected location, do a daemon-reload, create a bunch of files, and enable the unit. So let's start the service. You can see that the etcd service is, as a matter of fact, running on my system, as a containerized service. I can view it with atomic containers list; you'll see that it is, as a matter of fact, running with the OSTree backend using runc as the runtime.

Now, the atomic on my local machine is not updated enough to have the exec option, so what I can do instead is use runc exec directly. Because runc, when we created the container, is what actually created the etcd container, I can do something like this, and it creates the network config that flannel will need, for example. I can do runc list and show that runc, and therefore systemd, is as a matter of fact managing this.

I will also now do an install for flannel. Now, I actually attempted a pull earlier, so I'll show you what happens. I specify that I want to pull it directly into OSTree, and if I try to pull this image, you'll see that it returns immediately without pulling any layers: it did a remote inspect, noticed that there are, as a matter of fact, no new layers for this image, and since the image is already local it pretty much just returns success. So now I can start the flannel service, and you'll see that it is, as a matter of fact, running. Both containers are running as system containers.

So for example, if we want to know a little more about the images, listing the images shows us this, and we can look at the info for an image such as etcd. Let's see what this tells us. That's a lot of information. Basically it has the expected labels that would be in the Fedora registry, for example the architecture label, the maintainer label, name, release (release 6). And then it also shows what template variables exist within the container. If you remember, there were configuration files defined as template files. In those template files these variables are not set, and the manifest.json file I was telling you about before will first take its defaults and apply them to whatever variables do not have a value.
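For reference, the sequence I just ran looks roughly like this (reconstructed as a sketch; the image names, and the flannel network value seeded into etcd, are illustrative):

```console
$ atomic install --system registry.fedoraproject.org/f26/etcd
$ systemctl start etcd
$ atomic containers list

# No atomic exec on this machine, so talk to the container through
# runc directly: seed the network config that flannel reads from etcd
$ runc exec etcd etcdctl set /atomic.io/network/config \
    '{"Network": "10.40.0.0/16"}'
$ runc list

$ atomic install --system registry.fedoraproject.org/f26/flannel
$ systemctl start flannel
```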
And then it takes whatever the user defines and overrides those variables with whatever you wish to set them to. So that's just a quick and easy way to run system containers.

With that, I will now do a quick demo of Docker as a system container. If you remember correctly, we do as a matter of fact have this Fedora 25 container-engine image. This is one of the examples I mentioned: I can use the Fedora 25 image on a Fedora 26 system. Even though my system is 26, I can use the 25 image without a problem. This is the host and container decoupling. (Also, I did not get a chance to build the container-engine image for Fedora 26 yet, but that's not a problem.)

So with this, I'm going to give it a flag, and I'll show you later what happens without it. We have the ability to install files to the host from a system container. On a regular Fedora 25 system, those would be tracked with RPM; the Atomic Host does not have the RPM build tooling, so there you have to specify this flag. The idea is that when you install files to the host, it's much better to have them tracked by something like an RPM, so atomic will generate the RPM for you and use that for tracking. But if you specify --system-package=no, it will just do a direct copy; the files will still be tracked, but by atomic rather than RPM.

So I'm going to install this container. This will take slightly longer, because this is a slightly larger image. First I'm going to show you that I do not have Docker running locally. I do have the docker package installed locally, because the system container only provides the daemon and not the command-line client right now. I could uninstall docker and this would still run fine; I just wouldn't be able to show you anything with the Docker client other than the fact that, yes, the daemon is as a matter of fact running. So: systemctl start container-engine. You can see that the docker service itself is still not running, but if you look at the status of container-engine, it is as a matter of fact running the Docker daemon. And then we can do something like docker run hello-world. And remember, as I mentioned before, the atomic command line integrates all of this Docker and system container functionality; you can take a quick look and see that, since the Docker daemon is now running through container-engine, the Docker images that exist on the host are also listed. Yeah, so that's Docker running as a system container. It pretty much works out of the box, and you can switch the version of the daemon as you wish based on the version of the container.

I'm also going to show you the CRI-O image we have recently worked on as a system container. Unfortunately, it's not in the Fedora registry yet, so I've actually built it into my own local repo and pulled it into OSTree. Instead of installing this as yet another system container, because I don't want Docker and CRI-O to run simultaneously, what I'm actually going to do is take the container I installed before, container-engine, and rebase that container onto the CRI-O image. Normally, atomic containers update would just take the container you wish to update, apply any new environment variables you give it and any image updates you have on your local machine, and create a new checkout, which I'll show you in a bit. But what I can do is give it a rebase option.
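Condensed, the container-engine part of the demo (a sketch; the Fedora 25 image path is approximate):

```console
# The host's own Docker daemon is not running
$ systemctl status docker

# Install the Fedora 25 container-engine image on this Fedora 26
# host; --system-package=no copies exported host files directly
# instead of wrapping them in a generated RPM
$ atomic install --system --system-package=no \
    registry.fedoraproject.org/f25/container-engine
$ systemctl start container-engine

# The containerized daemon answers the regular Docker client
$ docker run hello-world
$ atomic images list
```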
What this means is that I wish to rebase this container, container-engine, onto another image that exists on my system. I'll show you what happens. Now, this time I did not specify --system-package=no. Oh, but it carried over from before, so that's fine. So if we look at container-engine now, you'll see that it is, as a matter of fact, now starting the CRI-O daemon. There's no CRI-O on my host, and Docker is still not running. As a matter of fact, if I look at my images with atomic, you'll see that the Docker images are no longer in the list, because the daemon that container-engine was running is gone; now it's CRI-O.

Because, like I said, these generally only provide the daemon services, they do not come with a command-line interface. So I pulled some of the demo material from the upstream CRI-O project locally, so I can show you that CRI-O does, as a matter of fact, work. And in case you ever wondered, "I've heard of this CRI-O, how do I use it?", this is a quick demo. I can create a pod: it takes the sandbox config.json, which basically creates a sort of test pod sandbox, and I can look at the pod status once I've created it; you'll see that it is ready. I can do an image pull with CRI-O. I think I actually already pulled this earlier, so let's take a quick look: we can see the alpine image is, as a matter of fact, on my local system. I can create a container and check the container status; given the ID I just created, it shows the container as created. I can start the container, and I can stop and remove it.

And then, say I don't want to use CRI-O anymore. I can just do atomic containers rollback. If you remember, I updated the Docker system container to CRI-O; if I do this, it rolls the container back to the previous checkout, and if I show you real quickly, you'll see that it is once again running the Docker service, and CRI-O is gone from the system.

So those are some of the existing working containers. Actually, let me show you where the containers get checked out. They live in /var/lib/containers/atomic, whereas on Fedora, Docker containers are in /var/lib/docker/containers, I believe, so that's slightly confusing. These are the system containers I just installed. You'll see that for etcd and flannel there's only a .0 version, because I've never done any updates or rollbacks. For container-engine there are actually two, and we only keep track of two at a time, so we can roll back to the previous version. And the names without the .0 or .1 are just symlinks to the actual current checkout, so it's a little easier for the system to manage. If we look at what's inside one of these, you'll see that it has the root filesystem, laid out exactly how you would expect a root filesystem to look; it has the configuration files for runc and for systemd; and it has an info file that tells you some information about this running container, which is also used by runc. And yeah, so that's how it looks.

So let's quickly go to creation, how you would create a system container like this. For example, say you just wish to create a service for httpd. HTTPD. English is hard. Anyway.
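The rebase and rollback, roughly (a sketch; I believe the flag is --rebase, and <local-crio-image> stands in for the CRI-O image I built locally):

```console
# Re-point the existing container-engine checkout at the CRI-O image
$ atomic containers update --rebase=<local-crio-image> container-engine
$ systemctl status container-engine    # now running the CRI-O daemon

# Done with CRI-O: one command returns the previous checkout
$ atomic containers rollback container-engine
$ systemctl status container-engine    # the Docker daemon again
```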
So, the files I just showed you get checked out to /var/lib/containers/atomic and then whatever name you gave them. You can specify the name during install; that's, for example, how you can have etcd0, etcd1, etcd2: multiple etcd containers running on your host. I showed you what the bundle looks like: it has a root filesystem, and it has a bunch of JSON configurations and service files for systemd. I'll quickly touch on this in a moment, but two of the key pieces of functionality in system containers, I think, are the ability to install things to the host and the ability to use files from the host in the container, saving space both ways and making sure it works fine. I'll show you how those work in a moment, and then systemd tmpfiles.

For building, since it's the OCI format, you can use any build tool that works with that format. You can use Docker: create a Dockerfile and run docker build. You can use something one of my colleagues created called system-buildah, which is kind of similar to buildah but not really; it also actually calls the Docker daemon, I believe. And you can use buildah itself, which is what I would lean toward for system containers, but unfortunately the Fedora registry currently requires a Dockerfile to do the actual builds, and that's where a lot of the system containers live and will be living. So for now we will still be using Dockerfiles.

So I have a hello-world image here, actually, and we will go inside the image. Quickly, this is what the OCI config file looks like. Now, it seems kind of long and unnecessary, but most of it is really a sort of template file, and it's very easy to generate with, say, the system-buildah tooling. The links for all the tools and things I'm mentioning, by the way, are in my Flock repo: my name with a hyphen in the middle. You can look all the links up; I'll show you them after the demo is done. But basically, this is a really old version, 0.3.0, of the configuration file format. We're currently on 1.0.0, but backward compatibility exists, so this still works. You pass the arguments you want; these are some of the environment variables, PORT and RECEIVER, and I'll show you in a moment what these are. This readonly field has to be true, otherwise it will fail to install: like I mentioned before, because of how system containers use OSTree, this must be a read-only filesystem. And then there's a bunch of mounts that let the container use things from the host. You can give it capabilities. This is not the correct format anymore (you now have to specify more subdivisions), but generally the idea is that if you want the equivalent of a Docker super-privileged container, you just give it all the Linux capabilities, and it will be pretty much a super-privileged container. Some mounts, namespaces, et cetera. So that's just a standard configuration file.

And then we'll take a look at manifest.json. This is basically, like I mentioned before, where you define the default values for those variables. The hello-world test image basically opens a port of your choice, and when you ping it, it returns a hello-world message to you; I will show that in action in a moment. But basically, here you can set the default port and receiver, which can be overridden by the user if you wish.
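For this hello-world image, the manifest.json would look something along these lines (a sketch of the format; defaultValues is the key atomic reads, and the particular values here are assumptions matching the demo):

```json
{
    "version": "1.0",
    "defaultValues": {
        "PORT": "8081",
        "RECEIVER": "hello world"
    }
}
```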
So let's take a look at service.template. This is just a systemd service template file. It's very simple, because this hello-world image really doesn't need much, but you'll see that the ExecStart and ExecStop are actually generated by atomic during install. These generally end up being just calls to runc to start and stop the container. And lastly we have a tmpfiles.template, which I added just to show that you can use it.

In here we also have a Dockerfile. This is what we'll be using to build. Now, these labels and everything are really not the correct formatting; this one is really old. But basically the idea is: you have the FROM line (preferably, if you're building for Fedora, you would use a Fedora registry base image); you copy in whatever scripts you need; and, importantly, you copy all those files I showed you earlier into an /exports directory. The idea is that /exports ends up on the root filesystem of the installed container, and atomic understands that it should go in there looking for those templates. If it doesn't find them, it will generate some defaults for you, but those generally don't work very well. So this line is pretty much necessary for all system containers, as atomic needs to know where these files are.

And this last command is there because I also want to make something available on the host. Another piece of system container functionality, I think, is that you can install packages to the host, tracked by this containerized format. What I can do is install a file to the host by putting it under something called /exports/hostfs, and it will land at the corresponding path on the host during install. That's what I did before with --system-package=no: that flag means it will just copy the files. But I will install this image in a moment without using that flag, and you'll see what I meant when I said it will use RPM to track it.

Building this image is very simple: just docker build, and let's say I want to tag it with a flock name for some reason. Now, I'm probably not going to let this complete. Oh, sorry, here. This build should only take a very short time, because I already have the Fedora 26 base image from the Fedora registry on this machine. But if not, basically, this is very straightforward: you can use any of the existing tools, and all you need is pretty much any OCI-compatible image with those extra files I mentioned before, the configuration templates, the service template, et cetera. Actually, this might take a while, so we're not going to do this; I already have the image locally.

Another thing: say you want to do your testing locally. Like I said, we have both Docker and OSTree image storage being tracked by atomic. So what we can do is an atomic pull, specifying that we want to store the image into OSTree, and we give it the docker: prefix, which means I want to pull from local Docker storage. Say I want to pull the Fedora 26 image, or say this was the hello-world image I just built. All you do is this, and it will do the actual pull. Now, unfortunately, it doesn't actually give you a prompt, but rest assured, this is actually working. Or at least I hope so; we'll know in ten seconds.
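While that runs: condensed, the Dockerfile pattern I just walked through is roughly this (a sketch; the script names are illustrative, but the /exports and /exports/hostfs paths are the convention atomic expects):

```dockerfile
FROM registry.fedoraproject.org/fedora:26

# Whatever the service itself needs inside the container
COPY hello-world.sh /usr/bin/

# The templates atomic looks for at install time
COPY manifest.json config.json.template service.template \
     tmpfiles.template /exports/

# Anything under /exports/hostfs lands at the matching path on the
# host during install (RPM-tracked unless --system-package=no)
COPY hello-world-greet.sh /exports/hostfs/usr/local/bin/
```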
It's a little slow because of what actually happens: I believe it creates a tarball of the Docker image and then explodes that tarball into OSTree branches. And, as a matter of fact, it completed. If we look at the images now, you'll see that there is also a Fedora 26 image.

So I'm going to install that hello-world image we just pulled. This time I'm not going to specify any special flags. Actually, I am: I'm going to show you how to set an environment variable. Remember how I told you that by default it says hello world when we connect to its port? Instead, I will set the RECEIVER variable. And this time I did not specify --system-package=no, so what it did was create this sort of dummy RPM package. You can't quite see it, but this is the .rpm file. It's used to track that one file I did install to the host, the hello-world script, if you remember. So that's actually now on the host. Now, this is just a script file copied to the host; all it does is print out a bunch of stuff, so there's nothing much in it. But the actual container, which I'm going to start now: perfect, it is running the hello-world service. It's listening on the default port, which we never overrode, and if you connect to that port on localhost, it says "hi flock", like we expected.

So yeah, basically a system container image is very straightforward. They live upstream on GitHub in our Project Atomic organization: the repo is called atomic-system-containers, under projectatomic. You can check out the existing images there; they'll get you started on building your own system container image if you so wish. And by all means, please do create ones that you feel will be useful and put them up for review for the Fedora registry.

So, we already did this hello world. Before we go to questions and move on to the next topic, I'd like to quickly highlight what this is doing for Kube and Origin. Like I mentioned, we'll be transitioning to the actual Kube and Origin talk in a moment. On the Origin side, the idea of system containers is actually in openshift-ansible right now. You can use it during your install as an install option; you can set a bunch of flags to use system containers. These particular system containers currently don't exist in the Fedora registry, they only exist on the Docker Hub: Open vSwitch, node, master, etcd, and you can specify which registry to use. And the system-containerized Docker that I just showed you, which lives within the Fedora registry under the container-engine name, can also be used during the Ansible install. All you have to do is set openshift_docker_use_system_container=true, and it'll use that container-engine image I just showed you. So let me show you where it is: under openshift-ansible, in the inventory files. Actually, CRI-O integration was recently added as well, I believe. So all of this is currently doable with OpenShift.

As for Kubernetes, I was testing out a build a little earlier. We actually have a bunch of system containers that my colleague, Jason Brooks, has created for us. They also live within the Project Atomic repository. kubeadm works as a system container; it's a bundled install option. And these are just the other individual Kubernetes components that you can install as system containers.
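Back on the Origin side, in the Ansible inventory those options look roughly like this (hedged: openshift_docker_use_system_container is the variable named above; the per-component switches and the registry variable are my best recollection of openshift-ansible at the time):

```ini
[OSEv3:vars]
# Run the Docker daemon as the container-engine system container
openshift_docker_use_system_container=True

# Per-component system containers (pulled from the Docker Hub, not
# the Fedora registry yet), plus the registry override
openshift_use_etcd_system_container=True
openshift_use_node_system_container=True
openshift_use_master_system_container=True
openshift_use_openvswitch_system_container=True
system_images_registry=docker.io
```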
So actually, on this machine (I have various machines set up), I set up some of the Kubernetes system containers to run, with this host acting as both master and node. The containers run fine; there are some small issues here and there. So if you get a chance to test these out, I've linked some of the instructions and blog posts in the Flock repo as well. So please do visit us: under the projectatomic organization, atomic-system-containers is where the system containers live, and atomic is the command-line interface that manages them. I've also drafted guidelines, and they exist within the Fedora wiki. If you go to the container guidelines and scroll down, there's a section on system containers and how you would create one, right there. Under system containers, I show you how to use the labels, and there's a separate page in the wiki about what a system container is and the defaults you would expect one to have. So please do check it out if you're interested. And then there's buildah and system-buildah, which can be used for building. Giuseppe created system containers just last year, so this is a relatively new project; there's his first blog post about it, and some more blog posts that Jason has written up on how to use some of these system containers.

All right. And, as a wise man once taught me, instead of just asking for questions... well, that apparently did not work, because apparently I was supposed to get questions by now. So either I did such a great job that you all understood this perfectly, or I did such a terrible job that none of you ever want to see me again. So: questions, comments, concerns.

So the question was about the file we generated onto the host during install. I believe it should still be there. Right, it is the one generated during install, the RPM-tracked file; it's not very obvious that it gets installed to the host. Now, the systemd service file actually gets installed to the host too, and the systemd tmpfiles config is installed to the host as well; I don't remember what I set it to. But no, those are not tracked by that RPM. The only things the RPM tracks are the files the container explicitly generates for the host. Things like the systemd service file are part of how the container runs; those will always exist, and they will always be on the host, but they're tracked by atomic itself and not by the RPM. The RPM is only for files in the container that would not normally be part of a system container, for example, in this case, the script I installed onto the host. Feel free to ask me some more questions later if I did a poor job explaining that. Any other questions?

Right, that is correct. I forgot to mention this, but we do. If you actually look at the Dockerfile that is in here, we have atomic.type=system. What this label means is that it marks the container as only being able to run as a system container. What happens then is that during, for example, a pull, atomic does the inspect on the remote registry, and if it finds this label, it will default to storing in OSTree, and it will install as a system container by default as well. So remember how, when I was doing pulls earlier, I specified OSTree storage? You don't actually have to do that if the container has a label such as this.
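In the Dockerfile that's a single line, and with it the explicit flags become unnecessary (a minimal sketch):

```dockerfile
# Marks the image as a system container: atomic will then default to
# OSTree storage on pull and to a system-container install
LABEL atomic.type="system"
```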
And every container I've shown today, other than the etcd container, which doubles as a Docker container (like I mentioned before, these containers are generally multi-compatible), has this label.

So the question was (I should be repeating these): the system containers I've been installing are all systemd services; can I install other kinds of units? The core system container functionality actually does not require any of the files above. For example, you don't have to have the runc configuration, and you don't have to have the service file. Now, unfortunately, right now the missing ones will all be generated as sort of dummies. But if, for example, all you wanted to do was use the system container functionality in the atomic command line to install a package to the host, all you need is the exports/hostfs directory and whatever files you want to drop onto the host. Run the install, it will do the RPM mechanism I showed you earlier, and the files will drop onto the host. You don't need the systemd service; you don't need anything else. So technically this is compatible with a lot of things, but it's designed to run systemd services.

So the next question is: can we install a systemd timer as well as a systemd unit file? I don't believe there is; there is no functionality for that built into the atomic command line. I would have to look into it. I think it would theoretically be possible; you would probably have to tweak some of the configuration, maybe in the OCI config file or somewhere like that. But I'll take a look into it later; I think it's doable. In the meantime, if all you wanted was the timer file on the host, you can do the RPM method I showed you earlier, and if you do a daemon-reload, systemd will recognize that the file exists. So in that sense, yes; but as for integrated support, I don't think so.

I also got a question about whether these share the host image. So, no, none of these do. The images that you see these system containers running on are stored in OSTree, and they are individual images. Much like, for example, if you did a docker pull of fedora:26 and then ran docker run on that image with /bin/bash, it would use the image you pulled into Docker. In that same sense, these images are pulled from a registry, stored into OSTree, and when you run a container, it takes the image that was stored in OSTree and checks it out. So it has nothing to do with the host. Now, you could mount the whole host inside if you wished. And as for privileges, it's whatever you define for that container in the OCI configuration file. If you give it all the capabilities, it will run as a privileged container. But otherwise, if you just give it, for example, CAP_KILL or something, there are many things it will not be able to do.

Right, that is correct. Each individual container has its specific volume mounts predefined already. That command you were mentioning earlier, the docker run -v one that bind-mounted a bunch of things, was actually for the etcd container. And as I mentioned, the etcd image works both as a Docker container and as a system container. So the command you saw was what would have happened had you run it with Docker; that was the Docker invocation, not the system container one. Perhaps I should not have inspected that particular image, because it has a lot more things in it, since it works as both.
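So a files-only system container can be as small as this (a sketch under the exports/hostfs convention described above; the unit file names are hypothetical):

```dockerfile
FROM registry.fedoraproject.org/fedora:26

# No config.json.template, no service.template: atomic generates
# dummies for those. The only payload is dropped onto the host at
# the matching path, tracked by the generated RPM (or plain-copied
# with --system-package=no).
COPY my-job.service my-job.timer /exports/hostfs/etc/systemd/system/
```

After the install and a daemon-reload, systemd sees those units like any others, which is the workaround for the timer question above.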
So that was a little confusing. But if you look it up in the repo, that invocation was for running it as a Docker container, which also works. It binds the host's etcd directories; it does bind-mount some specific things, but it doesn't bind-mount the whole etcd folder from the host. And if you scroll down to the namespace part of the config, you can see which things are separated off. Yes and no; here and there, there are different namespaces. System containers are generally meant to be built for specific systems. And yeah, like Colin said, they're not necessarily guaranteed to be, for example, fully secure, because they do create systemd service files on the host itself. But for the service itself, there are ways to inspect what it will do pre-installation: if you don't start the service, you can mount the container and see what's inside, similar to Docker. These are all existing functions. But generally these are a lot more integrated into the host, because they're designed that way.

So the question was: is the only security boundary basically the bind mounts we control in this configuration file? Yes and no. The capabilities still apply; it will be stopped from doing things, if the config is defined as such, that it should otherwise not be able to do had you installed it to the host as a normal service. So really, what this is is taking a service that you would have installed to the host and installing it as a containerized service, with the necessary bind mounts and so on. But it shouldn't be able to do any more than what it would have been able to do had it been installed directly on your host.

As for how these templates get written: for a lot of the older ones, they were mostly handwritten, but there's the tool I mentioned, system-buildah, which makes generating the default templates a lot easier. It's currently in its author's repo; I've tried it a little myself, and it mostly works, minus the fact that you have to do a lot of things manually anyway. You can generate the files with a set of defaults that are shared across a lot of the system containers. And which format does this one use? The OCI format, yeah.

So let's take a two-minute break, and then we'll come back to this in the Kube and Origin continuation. Thank you all for listening.