Hello, as it's noon right now: good afternoon. This is a talk from a Red Hatter, Giuseppe Scrivano, about atomic system containers. You may have heard something about them in Dan Walsh's talk; this will be a much deeper dive into the topic. So enjoy the talk, and welcome Giuseppe.

So hi everyone. I've been working on Project Atomic for almost three years, and today I'm going to talk about system containers. You're probably already familiar with Atomic Host, but I'll give a quick introduction anyway, as some of the concepts will be useful for understanding system containers better.

I copied this sentence directly from the projectatomic.io website, and in it there are two words that I think are very important: immutable and container. Atomic Host is based on the concept of an immutable operating system: whenever you deploy it, you install the entire image, you don't install single packages, and the operating system is immutable, that is, read only. So all the system services run inside this read-only image.
On top of this, applications run inside containers. This is what we have now, and this is where system containers will help. System containers are quite generic and can be used on different systems: you can use them on CentOS, you can use them on Fedora. They bring the usual benefits of running an application in containers, like bundling everything the program needs to run, and isolation from the rest of the operating system. On Atomic Host they have some additional advantages, because we become able to move services from the immutable image up into the layer where the containers live.

This is an advantage because we get a smaller image, and a smaller image increases flexibility: you can assemble your operating system as you prefer, starting from a smaller base and adding the pieces you want on top. Also, whenever you want to install a new version of Atomic Host, what it does is first check out the entire new version alongside the current deployment you are running, and once everything is ready, you restart the machine. So if we have fewer components in the base image, upgrades will also require less downtime.

To get their job done, system containers use a set of already existing technologies. They use runc to run the container itself; as Dan was saying, runc is the same technology used by Docker now, it's the core part of Docker for running its containers. And we don't require another daemon or service for managing these containers; we just use systemd.
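Since the container ends up wrapped in a regular unit file, day-to-day management can look like any other service. A sketch, where the `etcd` unit name is illustrative:

```console
# after installing a system container named "etcd" (illustrative):
$ systemctl start etcd     # start the container via its generated unit
$ systemctl status etcd    # inspect it like any other service
$ systemctl enable etcd    # start it automatically at boot
```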
So you will manage them as a regular service, as you already do today. For the storage of the images we use OSTree; OSTree is the same technology used on Atomic Host for storing the operating system itself. And finally Skopeo, for copying the images from a registry, be it a Docker registry or any other registry.

For the format and the distribution of the images we use the same concept I was describing before: we try to use as much as possible of what is already available. So system containers are built and distributed as OCI images, and you can use the same tools you have today, like docker build. The metadata for running these containers is distributed as part of the image itself; we don't require a separate channel for distributing it, it is included inside the image. In other words, converting an existing Docker or OCI container into a system container is just a matter of copying some extra files into the image.

And, very important, images are read only. Keeping your image read only, with only the storage writable, is a good pattern to follow for any kind of container, because then it is clear what is the image and what has been modified. In a system container this is enforced: it is not possible to have a writable image.

I was saying that we need some files added to the image for running a system container, and in this list we see which files are required. The first one is the most important, as it describes how the container will run: this is the OCI container specification for running the container with runc. With the second one, if you're not happy with the default systemd configuration file that is generated, you can provide your own.
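As a sketch, the extra files typically live under `/exports` in the image; the layout below is an assumption based on the upstream examples, not something shown on the slide:

```
etcd-container/
├── Dockerfile
└── exports/
    ├── config.json.template   # OCI runtime spec template used by runc
    ├── service.template       # optional custom systemd unit
    ├── manifest.json          # default values for template variables
    └── tmpfiles.template      # optional systemd-tmpfiles entries
```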
You can provide your own. Then we have a manifest.json for setting various things for the image related to the system container, and finally, if your container requires some temporary files or directories to be present on the host before it runs, you can use systemd tmpfiles.

You have probably noticed that these configuration files end in .template. The reason is that they don't define a static configuration, they define a template, so you can use one image to install different containers with different configurations. Your image provides a template, and the values are assigned at installation time, when you create the container from the image; the template files are used to generate the final configuration.

But let's get more into the details. This is a simple Dockerfile for an etcd container, and it's very simple: on top of Fedora we install a couple of packages, we add some files that are required to run it, and finally we define the entrypoint for running the container. If you have this, it's just a matter of adding the few files I was describing before to the image, and this makes it possible to run the image as a system container. Another benefit is that the same image can be used in both ways: you can still use it as a Docker container, because these files don't change in any way how it used to work, and it can also be used as a system container.

This is what our configuration file for running the runc container looks like.
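The slide isn't reproduced in the transcript, but a minimal Dockerfile in the spirit described might look like this (the package names and the `exports/` path are assumptions):

```dockerfile
FROM fedora

# install the service we want to ship
RUN dnf install -y etcd && dnf clean all

# extra metadata that turns this into a system container;
# ignored entirely when the image runs under plain Docker
COPY exports /exports

ENTRYPOINT ["/usr/bin/etcd"]
```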
I won't get into the details, because there is a lot more to say and this is just a portion of the file, but what I wanted to show is how the template mechanism works. You can see at the bottom of the file that we define some environment variables for the container, and we don't assign them a value; we say to get the value at installation time. Some of them are given a value by the atomic tool itself when you start, for example the name of the container; some others are expected from the user. These variables are specific to the etcd container.

As I was saying, if you're not happy with the default systemd configuration that is generated, you can provide your own. The default one looks very similar to this, and here you can see that we use the template mechanism again, in the ExecStart and ExecStop for the service: that variable in ExecStart will get its value from atomic, and it will be the command line that runs runc. So what we do in this case is specify the working directory, which is the destination on disk where we have the checkout of the container, and from there we launch the runc process.

The manifest.json is not really used for various settings at the moment; it's just for one thing, setting the default values for the variables. If your image supports different variables that can be configured, you probably don't expect the user to specify a value for all of them, so you can give default values here in this file. If the user doesn't override them, you see here the name will be an empty string, and the same for the rest of the file.

Also, this is how runc expects the container to be laid out on the file system in order to be executed; this is what's known as an OCI runtime bundle. It expects the image under rootfs, and here you can see all the files required to run
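A default service template in that style might look roughly like this (a sketch, with the variable names assumed from how they are described in the talk):

```ini
[Unit]
Description=$NAME

[Service]
# $EXEC_START / $EXEC_STOP are filled in by atomic at install time
# and expand to the runc command lines
ExecStart=$EXEC_START
ExecStop=$EXEC_STOP
WorkingDirectory=$DESTDIR

[Install]
WantedBy=multi-user.target
```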
the image, and the config.json file. The config.json file is generated at installation time from the template we have seen before; this is the final file that is generated, and it is the one used by runc.

For the storage, as I said, we use OSTree. If you're not familiar with OSTree, just imagine it like git for binary files. There are two very important things to know about it for system containers. The first is that OSTree manages deduplication of files: you can add a file several times to the OSTree storage, but it will be stored only once. The second is that a checkout from the OSTree storage to the file system, whether you check out a container or, on Atomic Host, the operating system, is done using hard links, so it won't take additional space. You can check out the same image multiple times, but it will just require the extra space for the inodes; the file data is stored only once. If you are more curious about OSTree, there is a workshop later that you can attend.

We also take advantage of the Docker image model, where a Dockerfile is converted into a series of layers, each layer being a step in the Dockerfile. We exploit this model when we pull images from a registry, as we are able to reuse layers that are shared by different images, in the same way Docker does now: if you pull two images that have layers in common, the common layers are fetched only once. We do this by mapping each layer we pull to an OSTree branch, and we take advantage of the OSTree storage because even if different layers contain the same file, it is stored only once.

Through Skopeo, an image can be pulled from different sources: from a registry, which is the most common case, or, especially if you are developing it, directly from the local Docker engine, or simply from a tarball.
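The hard-link behaviour is easy to demonstrate with plain shell. This small sketch mimics what an OSTree checkout does, using `ln` and `stat` directly rather than OSTree itself; the paths are made up:

```shell
#!/bin/sh
set -e
dir=$(mktemp -d) && cd "$dir"

# one copy of the file data in the "repository"
mkdir repo && printf 'layer data' > repo/blob

# two "checkouts" that share the data via hard links
mkdir checkout1 checkout2
ln repo/blob checkout1/file
ln repo/blob checkout2/file

# all three names point at the same inode: link count is 3
stat -c %h repo/blob
```

Editing a checked-out file in place would of course affect every checkout that shares the inode, which is one more reason system containers keep the image strictly read only.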
This is the command that you need to run to install a system container. Here I describe the steps, but this is the concept I was introducing before, the one we copied from Atomic Host. As you can see, we create the checkout in a directory with .0 at the end, and then, once we have generated all the configuration files and checked out all the files, we create a symlink to this directory.

This is very helpful for another feature: atomic updates. We support updating a system container in a way similar to what Atomic Host does. Using this symlink trick, we create another checkout with the newer version of the container and generate all its configuration files. So at the same time you have the old version, which is still running, and the checkout for the new version with its configuration files already created. At this point you can restart into the new version: you stop the service, we change the symlink to point to the new deployment, and we restart it. If something goes wrong, you can roll back to the previous version; unless the newer version destroyed the storage or something like that, it should be possible to go back.

As for future plans, I would like to see better integration with systemd, for example using dynamic users.
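The deployment-symlink trick can be sketched in a few lines of shell; the directory names are illustrative, and the real checkouts live under the atomic storage, not in a temp dir:

```shell
#!/bin/sh
set -e
dir=$(mktemp -d) && cd "$dir"

# current deployment: etcd.0, with "etcd" pointing at it
mkdir etcd.0 && echo 'v1' > etcd.0/version
ln -s etcd.0 etcd

# stage the update on the side while v1 keeps running
mkdir etcd.1 && echo 'v2' > etcd.1/version

# the switch: stop the service, repoint the symlink, start it again
ln -sfn etcd.1 etcd
cat etcd/version

# rollback is the same move in reverse: ln -sfn etcd.0 etcd
```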
This is not supported yet: we can't define a system container that uses users that are not already defined on the host. Also, perhaps using networkd for setting up a private network for the container. Another thing we are working on right now is better integration with the operating system. We have noticed that many containers require copying files onto the host, and this model of system containers can be used as a more generic way to distribute files that need to be installed on the system. But we don't really want to copy arbitrary files everywhere and pollute the system, so we are looking at a way to generate an RPM out of these files before they are installed, so that it will be possible to track the ownership of the copied files: even if a file comes from a container, you can still use rpm to query who the owner of that file is.

Well, I have a quick demo of what I was talking about. Here I'm using a local registry on my machine, because I don't want to go out on the internet to pull the image. As we can see here, each layer of the image is pulled into the OSTree storage. We can see that, under ociimage, each of the layers we pulled is stored in OSTree like a branch, and finally we also store the metadata of the image.
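The pull in the demo can be reproduced with commands along these lines (a sketch: the registry address and image name are made up, and the exact `atomic` and `ostree` invocations may differ between versions):

```console
$ atomic pull --storage ostree docker://localhost:5000/etcd
$ ostree --repo=/ostree/repo refs   # one ociimage/... branch per layer,
                                    # plus one for the image metadata
```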
It's the second line here: it describes the image, while each of the other ociimage branches describes a layer. Once the image is pulled, we can install it with the command I was showing before. So here we start the system container, and we can see we have a symlink pointing to the deployment with all the files. At this point it's just a matter of starting the systemd service, and we can see that runc is the process that is running. We can check that it works like this: runc exec is similar to docker exec, it runs a command inside the etcd container, and here we can see that it is running.

Now I'm running a command on my computer to generate a new version of the image, so that we can simulate an update. At this point we can try the update of the container: it's enough to run atomic update with the name of the container. Here we generated a new checkout; you can see that now we have two checkouts, .0 is the old version and .1 is the new one. After that, we stopped the service and restarted into the new version.

Okay, so just one minute left; I'll take any questions. Well, first I want to show the rollback: with a rollback we can get back to the previous version, in case we are not happy with the update. Okay, that's all for the tool. Any questions?

So the question was: is there a way to search which containers are system containers? We are working on that; we are adding a new label, so that it will be possible to get this information even before you pull the container, you can query it directly on the registry.

The next question was: Docker is using runc as well, so are these containers seen by Docker too? No, the answer is no, because Docker
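The check in the demo is along these lines (a sketch; the container name and the command are illustrative):

```console
$ runc list                            # the installed system container shows up here
$ runc exec etcd etcdctl member list   # like docker exec: run a command inside it
```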
keeps track of the containers created by itself, so it knows which ones are its own; a plain runc container is not a Docker container, so Docker doesn't see it.

The question is: is runc required for system containers? Yes and no. You can still use this mechanism to deploy other kinds of images, for example if you want to run chroots: you can use the same mechanism and just provide your own systemd configuration file, in which you specify how the application will be run.

Other questions? Seems there are none... They are not prepared for that yet, to my knowledge, but I think it will be possible to use the same format; there is some work on moving existing super-privileged containers to system containers.

Last question. Sorry, which one? Can you repeat? No, but we are working on the possibility of copying arbitrary files onto the file system, so that will be possible if an image requires it, to have different services associated with the same container. Okay.