Thank you, Brian. And thank you very much, everybody, for waking up so early and coming to this session. I know it's difficult; I had some problems waking up this morning myself, because, you know, it's FOSDEM, so on Saturday night you generally do interesting stuff with friends, and then you remember that Brian called you to say "hey, you need to give a talk tomorrow morning", or that you proposed your name. So sorry if it's not exactly the announced topic. By the way, we can also talk about the original topic, because I have prepared for that, and it's a topic which is also interesting: the relevance of Linux distributions in the era of Docker containers. Hopefully I will cover it a bit, but if you have questions or concerns on those topics, raise your hands and we will do our best to answer those points.

So let's start. My name is Bruno Cornec. I work for a hardware manufacturer; I won't give the name here, because they didn't want to support my travel this time, so I will use my association name to present the topic. I've been doing Linux and related stuff for the last 25 years, and I'm part of different upstream and downstream projects. For this talk I'm particularly concerned because I'm a Mageia packager, and I had some development to do to be able to package more easily in the container era.

OK, so let's start with a few reminders, so that everybody is on the same page. Containers, compared to hypervisors or bare-metal environments, are really very close to the bare-metal infrastructure. The only thing which is different from bare metal is that you have an engine which manages the notion of containers, and which is just a thin layer that you put on top of the application: an environment suitable for isolating the execution of your application, if you want. You have namespaces and cgroups, which are set up by default in your operating system anyway, and in a container context it's the container engine which creates those execution environments for the application before launching it. So there is no specific overhead, because the kernel does the same job whether you use it on bare metal or in a container environment, at the same cost. The fact that you have an isolated environment for launching applications is what is of interest to us in this context.

I will talk about Docker, because I started using Docker, and adding Docker to the Mageia distribution, three or four years ago, but it applies to other types of container engines, of course. Some things are specific to Docker; some other things are really generic to container environments.

The idea is really to pack everything together, to give you a new way of delivering applications to your users. Forty years ago, you were using tar, with a script in the tar file, and you were delivering your application to your customers or your users like that. After the Linux revolution, you had a cleverer way of doing it, called packages. And we are in the distributions devroom, so people building distributions are really keen on making packages, on creating a suitable environment for delivering applications, taking care not only of the application itself but also of the right place where it should be delivered on the system. There is a standard, the FHS (Filesystem Hierarchy Standard), which gives you all the directories in which you need to install your software: the logs go in /var/log, the application goes under /usr, you have /etc for the configuration files, and so on.
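To make the "thin layer" point above concrete, here is a minimal sketch, not from the talk, of the kernel primitives a container engine drives; unshare ships with util-linux, and the flags below are just one common combination.

    # Run a shell in fresh PID and mount namespaces; this is, very roughly,
    # what a container engine sets up before exec'ing your application.
    sudo unshare --pid --fork --mount-proc /bin/sh
    # Inside the new namespace, ps only sees processes started in it:
    ps aux

The engine adds cgroup limits, network namespaces and an image format on top, but the isolation primitive is the one the kernel already provides to every process.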
This is standardized, and it is taken care of by the distribution's package management system, which also handles the dependencies of the software: the build dependencies and the installation dependencies. That is a big advantage, and one of the reasons why using a distribution is still relevant compared to the Docker environment.

So Docker came and said: OK, there is a new way of packaging applications. You can have this "app in a box" that we had on the previous slide, and everything inside that box, inside that context, can be shipped easily to another platform and run easily on another platform. That's really the approach they had: bundle everything. The approach is also to say: if you have one process you want to run, you create one container. So if you have an application which is made up of seven different daemons running and working together, then you create seven different container images, and seven different containers to host them.

It works with layers, and I will detail that in the next slide. You have the notion of the image and the notion of the container, which is an instantiation of the image: the container is read-write, you can work in it, whereas the image itself is read-only.

You have additional features. If you want to share your images, if you want to distribute your images, inside your environment or outside, you have the notion of the registry. That's what the Docker people run with the Docker Hub: when you do a docker search for an image, it interacts with the registry, tries to find an image which has the name you gave, and gives you a list of dozens of different images corresponding to what you are looking for. You can do exactly the same internally, and you can have private registries as well, to allow the sharing of images. It's like repositories in a distribution: you manage packages inside the repository, and you share the packages through repositories.

They created the notion of the Dockerfile; this one is specific to Docker. It is the recipe to create the image: a set of instructions that helps you build an image on a regular basis. It's like the Makefile for building an application; it's the recipe to build the Docker image.

Inside the container, everything is volatile. When the container dies, you lose everything, more or less (it's stored on your disk in a layer, but you lose it). So if you want permanent information, you store it in volumes, which can be mounted from the host inside the container, or you can use network-attached volumes if you need that.

Inside the container, you are completely isolated from the world, so you need to specify which ports you want to make available to the outside. If you run a daemon inside an image, you want to expose the port of that network service to the outside, so that it can communicate with the outside world.
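A minimal sketch tying those notions together (recipe, volume, exposed port); the image name, paths and service here are placeholders of mine, not examples from the talk:

    # Dockerfile: the recipe to build the image
    FROM debian:stable-slim
    RUN apt-get update && apt-get install -y --no-install-recommends nginx
    EXPOSE 80
    CMD ["nginx", "-g", "daemon off;"]

    # Build the image, then instantiate a container from it, publishing
    # the service port and mounting a host directory as a persistent volume:
    docker build -t myweb .
    docker run -d -p 8080:80 -v /srv/www:/var/www/html myweb

Everything written elsewhere in the container's filesystem goes away with the container; only /srv/www on the host survives.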
The goal, if you remember the Java promise, was "write once, run everywhere". It's pretty much the same with a Docker container image: create once, run everywhere, on a given OS. You cannot really mix and match between different OSes.

Something I have not written on the slide: there is a standard describing images, the OCI, and it gives you the possibility to use the same image content with different implementations of container engines. So you can take a Docker image and run it with rkt or CRI-O.

When you want to orchestrate things, you go a bit higher in the stack, and you have the notion of composition. You can create YAML files to give your engine the information on how to launch the containers, how to instantiate the containers from the images: which volumes you want to attach, which ports you want to expose, which environment variables you have, which networks you have. All that stuff that you can pass on the command line to the docker command, you can also store in a YAML file and give that YAML file to someone else, and the docker-compose layer will create all the containers from that description, based on the various images you have. Higher up, there are Swarm and Kubernetes that you can use. Everything uses a REST API, even the command-line interface tools: they always talk to the Docker daemon on your system through the REST API. It's developed in Go, the composition part is developed in Python, and it's licensed under the Apache License 2.0.

As I said on the previous slide, there is a layered approach; this is how it works. Everybody uses the same kernel as your host. There is no difference; there is no kernel inside the container image. If you go inside a container image and use the uname command to look at what is there, it's the kernel running below on your whole system, not a kernel coming with the image. A kernel in the image would not mean anything, because a container is just an isolation of processes. You have the same kernel launching different stacks, different applications; it could be a process that you run directly on your system, or a process that you run in a container on your system, it's the same.

Then, on top of the kernel, once you have cgroups and namespaces available as features of your kernel to enable those types of working environments, you have the notion of the image. The image is the read-only part of the solution, and you can create as many layers as you need to reach a point where you are happy with your image. You can have something as simple as just a BusyBox binary; that could be your image. You invoke docker, you create an instance based on that image, and you are in an environment where you just have the BusyBox binary, with the hundred-something commands that BusyBox provides, in a very small-footprint environment. That's one way to deal with it. You can use a very small distribution like Alpine (which is developed by someone who works at Docker), a very small Linux distribution providing the strict minimum of environment you need to have something which looks like a Linux environment, because BusyBox really is very small. Or you can put in a full, normal distribution, or rather the minimal set of that distribution. For a Mageia distribution, that would be 200-something packages; the same for Fedora, maybe a bit less for Debian.
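The same run-time information as the docker run flags above, stored in a shareable YAML file; a minimal sketch reusing the hypothetical myweb image:

    # docker-compose.yml
    version: "2"
    services:
      web:
        image: myweb
        ports:
          - "8080:80"
        volumes:
          - /srv/www:/var/www/html
        environment:
          - LANG=C

    # launched with: docker-compose up -d

Hand this file to someone else and they get the same containers, instantiated the same way, without retyping any command-line options.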
OK, so you have the possibility to create a really small layer which is the base of your distribution, and on top of it you can install packages using the normal mechanisms of that distribution. If you're on Debian, you do apt-get install; if you're on Fedora, you do dnf install; if you're on Mageia, you do urpmi; it's the same approach. You just use the native tools that you have on your distribution to build the context you want to have. You can add scripts, you can add binaries, you can do pip install, you can do npm install, you can do whatever you want inside, and it's isolated. And once you are happy with the result of the build of the image and the running environment, you can instantiate a version of it, the container, in which you have the right to write and modify things, and that's where your application will be running. Any questions at this point? Is it obvious for everybody? OK, good.

So why would you want to use distribution packages, and distributions and the packages inside them, with containers and VMs? That is probably what the original speaker was intending to cover. First, why do you want to run containers when you already have a distribution? Well, what you want is to not pollute your distribution with a ton of stuff that you want to test in a dedicated environment, especially when you do things like JavaScript and Node.js, where very few distributions have packaged the whole stack you would need to develop. That stack does not exist in RPM or deb format, because it's really a moving target. You would need thousands of dependencies: when you do an npm install of something, you get thousands of different modules installed through the network. That's a huge amount of work for a distribution, and most distributions just package Node.js itself, so that you can do npm install; the rest is not packaged, because it's moving too fast and it's not possible to keep up with it. That's not the case for some other languages: you have a lot of Perl modules, Python modules, and Java modules as well, which are available in package format, so you can benefit from the work of the distribution people and get a coherent set of packages. But for fast-moving targets like Node.js, it's really something you want to isolate from your native distribution, because you don't want to install on a distribution something which is not packaged.

Why? Because manual installation, compared to package installation, is a direct way to create problems in your environment. You may have stuff in the standard place and, at the same time, the same stuff in a non-standard place under /usr/local. For example, if you just do configure, make, make install for a GNU software, it will by default end up in /usr/local; then you have binaries in /usr/local/bin and binaries in /usr/bin, you don't know which version is which, and sometimes you don't point to the right configuration file because you have multiple instances. So if you want a serious execution environment, and a serious build environment as well, you need discipline.
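Coming back to the image build described at the start of this section, a sketch of the "small base layer plus native package tools" pattern; the package names are placeholders, and each RUN line adds one layer:

    # On a Mageia base, use urpmi exactly as on a real system:
    FROM mageia:6
    RUN urpmi --auto gcc make python3-pip
    # On Debian it would be: RUN apt-get update && apt-get install -y gcc make
    # On Fedora:            RUN dnf install -y gcc make
    # Plus anything unpackaged, kept isolated from the host:
    RUN pip3 install some-fast-moving-module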
You need to be very clear about what you do, and clearly identify where you need a non-packaged environment, such as Node.js, as opposed to a packaged environment.

The advantage that containers bring, like VMs, is that you are not polluting your running environment. You create an isolated place where you can do everything you want; in the end it stays in that environment, and it's something you can send to someone else for testing or whatever. So it's something you should be able to rebuild easily. Doing it with VMs, you would have to automate the creation of the VM, the operating system deployment in it, the installation of your application in it, and so on. It's easier with containers than with VMs, but it's the same approach: you want to be able to easily scratch and redo your execution environment if you have problems. It's easier to do that with a Dockerfile, and to rebuild your Docker images on a regular basis so that they are really up to date.

Containers also bring something which is useful, which I am using, and which is in fact the goal of this talk: you can have, on a single Linux distribution, tons of other distributions available for your tests. So you can automate testing the portability of your application in different running environments, in different distributions. You can package your software for different distributions, so that it's natively installable by the people using packages from those distributions. It's a really easy way to distribute for distributions other than the one you have.

I'm running Mageia 6, which is the latest stable version; it's more than one and a half years old now, and we will release Mageia 7 in a couple of months. But I don't want to run a non-stable distribution: I use my laptop to work, and working with a development distribution is prone to breaking my ability to work, quite often. So I prefer to isolate the tests I do on the development distribution in a specific environment such as a container. That's the point, compared to using the development environment natively and having the compiler broken for a couple of weeks because there is stuff that needs to be put in place when we move from one version to another, etc.

Another advantage of containers with regard to VMs is that it's very easy to share your home directory with the container. When you launch your instance of a container from the image, you can say: attach my home directory and put it at the home directory of the container environment, so that I'm at home and I can use all the files I need to work in my environment. That means, for example, that if you are on an RPM-based distribution, you can share all the configuration files that are needed to build packages correctly: your rpm macros, your rpmrc. You also keep access to your SSH keys; my SSH key is known to the build system of Mageia, so that I can push packages and ask the build system to recreate the packages I've tested locally. You can do the same with other distributions as well; what I do is really generic, it's not linked to Mageia itself.
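A minimal sketch of that bind mount; the user name and image tag are placeholders:

    # My home directory, including ~/.rpmmacros and ~/.ssh, appears
    # at the corresponding place inside the container:
    docker run -it --rm -v "$HOME:/home/bruno" mageia:6 /bin/bash

With a VM you would need a network filesystem or a synchronization step to get the same effect; here it is a single option.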
The only place where you really need a VM, compared to a container, to do that isolation, is if you need a different kernel between what you're running and what you want to test. That's the only place where it's important. For people like me who are doing packaging, most of the time I don't care: I can use the native kernel of my distribution to do the packaging. I'm packaging the software I'm upstream for, for 120 different distribution targets, without any problem due to the difference between the kernel on the host and the kernel inside the packaging environment. So that's really feasible.

So how do you deal with that concretely? You have the Docker registry, or you have your own registry, or you have your own local images; you have a set of images that you can put on your environment. You create those images using a Dockerfile (I will show you the content of the Dockerfile just after), and every time you need to test something in an environment different from the one you're running, you instantiate a container for the distribution which is your target. Inside the container you build packages, which you can then send to your distribution repository, using Subversion or Git sources, depending on the distribution and on the package management system of your distribution.

OK, so maybe I should turn around and show you the real file instead of the one on the slides, which people can have a look at after the presentation. I should be able to show you something else here, which is the real one.

OK. So I have a way to capture some parameters as input, which is not really important, and I have a configuration file that I can use to pass some variables, with default values available in my environment: the version of the Mageia distribution I target, temporary directories, a mirror that I can use to download the dependencies, working directories, and the architecture on which I'm working, because I can also build for different types of architectures. There is a very convenient project in QEMU which allows you, for example, to run non-x86 binaries on an x86 machine, as if you were in a virtualization environment, except that you are not virtualized: you're not in a VM, you're virtualizing the instruction set. So I started with a Raspberry Pi to make some tests with another architecture, to see if it was working, and that seems to be very interesting.

So I get my information, my UID and GID, because I want to map those inside the container, and then I generate a Dockerfile. I start FROM what I call my "Mageia official" repository, which is in fact local to my system; those are my local root images for the distribution (I can show you, if we have time, how they are built). The first thing I do when I build the image is update the distribution inside it: I say I want the latest version of every package I need. Then... OK, that part is commented out. Then you install all the dependencies that you need in your environment: I add the repositories, and then I install all the dependency packages that have been updated since the last time. And I install in that environment, because I'm building packages, the set of packages I need to build packages. So there is the bm command, which drives the build through rpmbuild. Mageia uses Subversion for spec files, configuration files and things like that, and we have the mgarepo command, which handles the interaction with the official Mageia repository and launches builds on the Mageia build system. And some other useful tools, like colordiff and sudo, because I want to have them available in my build environment.
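A condensed sketch of what the generated Dockerfile can look like up to this point; the local image name is a placeholder and the urpmi options are illustrative, not copied from the actual script:

    # Base: my locally built root image for the target Mageia release
    FROM mgaofficial:6
    # Bring every package up to date, then add the packaging toolchain:
    # bm builds through rpmbuild, mgarepo talks to the Mageia build system
    RUN urpmi --auto --auto-update
    RUN urpmi --auto bm mgarepo colordiff sudo

Rebuilding this image regularly keeps the layer in sync with the distribution, so the build environment does not drift.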
So, when you're building packages for a distribution: never build as root. If you take one thing away from this talk, it's never build as root, as standard practice. When you're building a package, you don't know what you are launching: you are packaging a set of software which comes from upstream, and those guys make mistakes too; they remove files, etc. If you don't set up the right environment variable, you will remove files in a place where you don't expect to remove them. So never run as root; run as a single user. That's why there is some magic here to create the user in the container image, associating the right UID and GID (this line is not useful anymore), and giving that user the right to sudo in the container, without any password, to be able to launch some commands as root when you need them, but not when you don't need them; you do that on purpose. All the builds are done as a single user, but sometimes, if you want for example to install the built packages in your environment, then you will need root access to be able to write into the package database. Then I create the home directory of that user, I set the work directory to the place where I have my Mageia environment, I run the container as that user, not as root, and I launch a bash command. The rest is just a small part to detect whether there is already a container, and if I use the force flag, I can remove the previous image to rebuild an image. And then I just run.

So this is the line which creates the instantiation from the image. Here we build the image with that recipe; once the image is built, you instantiate an environment. You say: I want to remove that environment at the end of the run. I want to be able to SSH correctly from my Docker container environment, so I need to do some things with the socket: set up the SSH socket inside the environment at the same place where it is outside, so that I can communicate using my SSH agent, which is already running on my system. I want to mount my home directory onto the home directory inside the container. And I use the image which is tagged with the name of what we created in the recipe.
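Continuing the sketch with the non-root part of the recipe and the instantiation line; the user name, UID and tags are values filled in for illustration:

    # Never build as root: create a user matching the host UID,
    # with passwordless sudo for the few steps that really need root
    RUN useradd -m -u 1000 bruno && \
        echo 'bruno ALL=(ALL) NOPASSWD: ALL' >> /etc/sudoers
    USER bruno
    WORKDIR /home/bruno/mageia
    CMD ["/bin/bash"]

    # Instantiation: throw-away container, SSH agent socket mapped to the
    # same path as on the host, home directory mounted inside:
    docker run -it --rm \
        -v "$SSH_AUTH_SOCK:$SSH_AUTH_SOCK" -e SSH_AUTH_SOCK \
        -v "$HOME:/home/bruno" \
        mgabuild:6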
So, how does it work? Here I'm running Mageia 6, and I can of course create a Mageia 6 environment as well; that's the default. So... I didn't relaunch anything this morning, so let me just restart the Docker daemon, because I may have changed some stuff since yesterday. Let's try again. I have a certain number of images; that's why it takes a bit of time. There are quite a few images, and you see there are a lot of different distributions that I use to get different environments, to be able to build different software correctly. Is it better now? No, still "no such host", so I may have lost my network here... yeah, I have no LAN. Can I use the legacy interface here? The system one should be better. Oh yeah, right. And the difference... this should be... I'm not pointing to the right image; I missed the architecture here. OK, so where am I? I am in a container which has been instantiated from the image which is here; you see the prompt changed, of course, from that perspective.

It's still a Mageia 6 environment, but this one has 232 packages, whereas my native distribution has 3000 packages. So I'm in a completely different environment: a fresh Mageia 6 environment which has the minimum set of packages you need in order to run the command which can install additional packages. That's what you want to have: a bare-minimum distribution on which you are able to use apt-get if it's a Debian distribution, urpmi if it's Mageia, dnf if it's Fedora. That's all you need to be able to do, that plus a correct network configuration, so that you can reach your repositories and download content from them. So that's where we are here.

One thing which is also not right is that I'm root in that environment. I should not be root; I should be a single user. So this is the image, the official image; this is the one I use as a base environment. It is not the image I use to build my packages.

I can do the same easily with another version. If I use the Cauldron version here, I am now in a different environment, the Mageia 7 version, which has a different set of packages: only 219. Nice job by the people working on that, because they reduced the minimal distribution set from 232 to 219 packages, so we have fewer packages when we want to create a small distribution with the Mageia 7 release. And if you look, all the packages installed here are mga7, whereas of course all the packages installed on my native system are mga6. So I have a working Mageia environment here which is completely different: I'm pointing at the development distribution, I have all the dependencies of the development distribution, and I can really do what I want in that environment, easily.

Let me just try to fix something, because there is something wrong here. You should never make changes live... well, that's not quite it; in fact, that's not very correct. Let me check, because yesterday I was building some stuff. So I have, in my own environment, normally, this one for example... so if I go here, into this one... yeah, this one, which has the architecture in its name; that was the missing part, my script had not been updated. So now I have an image which is based on the previous image. This is still the Cauldron version, version 7, but this time I have a few more packages, because in my recipe (if we look at the Dockerfile that we are using, the one in the presentation, which is the same), in addition to the standard distribution set of 219 packages, I asked to add a couple of additional commands to be able to work. So, for example, I should have the bm and mgarepo commands.

So let's go back here. Here, first, I am a single user; I'm not root anymore. I changed the environment in which I want to run, and I have access to the bm command and to the mgarepo command, which were not there before. And I am placed in my directory, where I have all the packages I follow for Mageia and can rebuild. So let's take, for example, something related to Docker, like docker-compose. I will remove everything which is not relevant, all the intermediate build stuff here; I just keep the sources and the spec file, which are the strict minimum I need to build packages.
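For readers following along, the environment part of the demo boils down to roughly this; the package counts are the ones observed in the talk, and the image names are placeholders:

    # Inside the fresh container: a minimal package set
    rpm -qa | wc -l       # ~232 on the Mageia 6 base, ~219 on Cauldron
    # On the host, a full workstation install:
    rpm -qa | wc -l       # ~3000 packages
    # Targeting the development release is just another image:
    docker run -it --rm mgaofficial:cauldron /bin/bash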
For those not really familiar with this: the spec file is, again, a recipe, which gives the RPM system instructions on how to build a package for the distribution I'm running. It gives you some dependencies at build time that you need to satisfy to be able to build, and as docker-compose is a Python script, it needs a certain number of Python modules to build correctly. It also indicates installation dependencies: if you install that package on the distribution, you will need to satisfy those dependencies around the Python modules needed. And then you have the recipe itself, to build the software in your environment. For the Mageia distribution, it's as simple as running bm.

And of course it does not work, not because it's a demo, but on purpose: I miss all the dependencies. I showed you that there are build dependencies here, and I don't have them. If I do rpm -qa | grep python, I have a certain number of Python packages, typically what is needed to build Python packages, and the Python 3 and 2 interpreters themselves, but very few other packages; setuptools is about the only one I have. I don't have, for example... I think it needs the Python docker package, websocket-client, etc. All those packages are not available yet.

So I can say to my system: OK, I need to be root, because I want to install additional packages, and I want to install the packages which are mentioned in the spec file. It says: OK, I will default to using the BuildRequires, so here are the build requires that you need. You need a Python docker package; which one do you want? Say OK, let's take the first one. And you have recommended or optional packages, so I say: OK, I don't want to pollute my system too much, so do not install the recommended packages, just the ones I really need to build. So that's the list of packages needed as dependencies in my environment. I say: install that stuff, and hopefully, if there is a bit of network, it should be able to download them. Which does not seem to be the case...

OK, so maybe the mirror is having an issue, because I do have the network here. Let me check... yeah, I cannot reach the mirror myself in the web browser either, so I need another mirror. Let me try... So let's say this mirror is broken and does not want to deliver stuff to me. That's not a big issue; well, it's a problem, but not a big issue. I will go into my configuration file and change the mirror. OK, let's do it the other way around: I will change the reference to the mirror here, inside the configuration file, to something which is better, a kernel.org mirror, which should be working fine. Oh, there is a typo there. OK, this time it should be a bit better.

So let's try again. OK: when you deal with a mirror which is up to date and available, you can download the dependencies to build your software, and it installs them for you. So now you can build your package, and this time, as all the dependency requirements are satisfied, you can build the package, and you have in your environment again all the directories that have been created. For example, you have the new package, which is available here, which has just been built in my environment. And it is clean, because it has been built using the Mageia Cauldron tools and Cauldron dependencies, creating an mga7 version. So everything is completely clean from a build-environment point of view.
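For those who have never seen one, a heavily abridged sketch of what such a spec file contains; the version and dependency names are placeholders, not Mageia's real docker-compose packaging:

    Name:           docker-compose
    Version:        1.0
    Release:        1
    Summary:        Multi-container orchestration for Docker
    License:        ASL 2.0
    Source0:        %{name}-%{version}.tar.gz
    BuildRequires:  python3-setuptools
    Requires:       python3-docker python3-websocket-client

    %prep
    %setup -q
    %build
    %py3_build
    %install
    %py3_install
    %files
    %{_bindir}/docker-compose

The BuildRequires lines are what the dependency-installation step above resolved before bm could succeed.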
So everything is completely Safe from a build environment and now I can just try to install it and Again, it's looking at dependencies at install time and it's okay for Installing that package you will need those packages as the dependencies So just say yes, it will download at some additional packages And now you have the package which is here and you can start testing it in your environment because it's working So you have a strict minimum environment to be able to make tests of one package that you have built here Which is exactly what you want to do and I'm not polluting as the rest of my system It's completely isolated and I can't do that as many time as I want with different distributions available any question Yes Yes Yeah So generally what happens by distro vendors is they have a build system and On the build system you have machine with all the targets that you need to support what you want to support So here I'm testing on my local system I will check that everything is working when I'm done. I can use the major repo command to push My content to the build system pushing my content is just pushing the subversion Set of files that are under control So in my case here, so here is On the build system the subversion Tree that I mirrored I mirrored locally You can have a look at the different stuff that have been that have been done From the system so you see what happened to the life of the package During its development you see when you have modified the compose file when you have a build a massive build For example for my js7 which happened which change automatically a certain number of stuff Okay, and when you are happy with what you have so in your environment what is important or The sources directory and the spec directory so the spec directory contains the spec file That is mandatory to rebuild and the sources directory contains the sources of survive version I have had during time of that component and a shawan file Checkings as for the checksum of the of the source file So those are stuff that are in the subversion repository on the repo at in the major build environment and when I launch build for me The package it will go to the build system extract from Subversions the right files do the BM command like me on all of the target systems that you need to to support So it will be for x86. I eat 586 Which is a 32-bit version arm 7 hl because we are not like the end because I see you have a deep end issue So we are not as deep end Maintaining as many as many architectures that you are maintaining of course and we have less packages as well as the end Here only 30,000 when they been as 50,000 something like that So that's that's the way it's done You you have so your your target system on on the build infrastructure that are used to build the final system So your your building stuff you are testing it of course You may have a software which is working nice on x86 and not working on arm and you will not detect it through this process You will detect it when your contributors say hey It's broken on my on my version and you you have a back zilla You give the architecture on which it's not working and people will make tests on that version if that not done that before That's a way. 
That's the way it's done. [Audience question.] No, it's a dedicated build system, which is, I think, just using chroot, because... I mean, you don't change a build system easily; that's one of the problems. So yeah, that's the way it's done. Any other questions? OK, so if there is no other question, I'll leave you a bit of time to change rooms and get to another fantastic presentation. Thank you very much.