Thank you, Brian. And thank you very much, everybody, for waking up so early and coming to this session. I know it's difficult — I had some trouble waking up myself this morning because, you know, it's FOSDEM, so on Saturday night you generally do interesting stuff with friends, and then you remember that Brian called to say, hey, you need to give a talk tomorrow morning — or rather, that I proposed my name. So sorry if it's not exactly the announced topic. By the way, we can still talk about the original topic, because I have prepared for that and it's also an interesting one: the relevance of Linux distributions in the era of Docker containers. Hopefully I will cover it a bit, but if you have questions or concerns on those topics, raise your hands and we will do our best to answer them.

So let's start. My name is Bruno Cornec. I work for a hardware manufacturer — I won't give the name here because they didn't support my travel this time, so I'm presenting under my association's name instead. I've been doing Linux and related stuff for the last 25 years, and I'm part of different upstream and downstream projects. For this talk, what matters is that I'm a Mageia packager, and I had some development to do to be able to package more easily in the container era.

OK, so let's start with a few reminders so that everybody is on the same page. Compared to hypervisors or bare-metal environments, containers are really, really close to bare metal. The only thing different from a bare-metal infrastructure is that you have an engine managing the containers, which is just a thin layer that you put on top of the application.
You set up an environment suitable for isolating the execution of your application. Namespaces and cgroups are set up by default in your operating system, and in a container context it's the container engine which creates those execution environments before launching the application. So there is no specific overhead: the kernel does the same job whether you use it on bare metal or in a container, at the same cost. For launching applications, the fact that you get an isolated environment is what is of interest to us in this context.

I will talk about Docker because I started using Docker — and adding Docker to the Mageia distribution — three or four years ago, but it applies to other kinds of container engines as well. Some things are specific to Docker; others are really generic to container environments.

The idea is to pack everything together, giving you a new way of delivering applications to your users. Forty years ago you were using tar, with a script inside the tar file, and you delivered your application to your customers or users like that. After the Linux revolution you had a more clever way of doing it, called packages. We are in the distributions devroom, and people building distributions are really keen on making packages: creating a suitable environment for delivering applications, taking care not only of the application itself but also of the right place where it should land on the system. There is a standard, the Linux Standard Base, which gives you all the directories in which you need to install your software: logs go in /var/log, the application goes under /usr, configuration files go in /etc, and so on. So this is standardized.
This is taken care of by the distribution's package management system, which also handles dependencies — both the build dependencies and the installation dependencies of the software. That is a big advantage, and one of the reasons why using a distribution is still relevant compared to the raw Docker approach.

So Docker came and said: OK, there is a new way of packaging applications. You can take this "app in a box" from the previous slide, and everything inside that box can be shipped easily to another platform and run easily there. That's really their approach: bundle everything. And the approach says: if you have one process you want to run, you create one container. So if you have an application comprised of seven different daemons working together, you create seven different container images and seven different containers to host them.

It works with layers — I will detail that in the next slide. You have the notion of an image and the notion of a container, which is an instantiation of the image: the container is read-write, you can work in it, whereas the image itself is read-only. You have additional features: if you want to share or distribute your images, inside your environment or outside, you have the notion of a registry.
That's what the people from Docker run on Docker Hub: when you do a docker search for an image, it interacts with a registry, tries to find an image with the name you gave, and returns a list of dozens of different images corresponding to what you are looking for. You can do exactly the same internally: you can have private registries as well, to allow the sharing of images. It's like repositories in a distribution, where you manage packages inside the repository and share them through it.

They created the notion of the Dockerfile — this one is specific to Docker — which is the recipe to create the image. It holds the instructions that let you build the image on a regular basis; it's like the Makefile for building an application: a recipe to build the Docker image.

Inside the container, everything is volatile. When the container dies, you lose everything — well, sort of: it's stored on your disk in a layer, but effectively it's gone. So if you want permanent information, you store it in volumes, which can be mounted from the host into the container, or you can use network-attached volumes if you need that.

Inside the container you are completely isolated from the world, so you need to specify which ports you want to make available outside. If you run a daemon inside a container, you expose the port of that network service so it can communicate with the outside world.

The goal — if you remember, Java promised "write once, run anywhere" — is pretty much the same for a Docker container image.
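A Dockerfile recipe of the kind just described might look like this — a minimal sketch, where the base image, file names and port are illustrative and not taken from the talk:

```dockerfile
# Base layer: a small distribution image
FROM debian:stable-slim

# Install the one daemon this container will run, then clean the apt cache
RUN apt-get update && \
    apt-get install -y --no-install-recommends python3 && \
    rm -rf /var/lib/apt/lists/*

# Ship the application into the image
COPY app.py /usr/local/bin/app.py

# Network port made visible to the outside world
EXPOSE 8080

# Persistent state kept outside the volatile container layers
VOLUME /data

# The single process this container exists to run
CMD ["python3", "/usr/local/bin/app.py"]
```

Each instruction adds a layer, which is what makes rebuilding on a regular basis cheap: unchanged layers are reused from cache.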
It's "create once, run everywhere" — on a given OS; you cannot really mix and match between different OSes. One thing I have not written on the slide: there is a standard describing images, OCI, which gives you the possibility to use the same image content with different implementations of container engines — so you can take a Docker image and run it with rkt or CRI-O, for example.

When you want to orchestrate things, you go a bit higher in the stack, to the notion of composition. You can create YAML files giving your engine the information on how to launch the containers, how to instantiate them from the images: which volumes to attach, which ports to expose, which environment variables and networks you have. All the stuff you could pass on the command line to the docker command, you can also store in a YAML file and hand to someone else, and the Docker Compose layer will create all the containers from that description, based on the base images you have. At higher layers still, there is Swarm or Kubernetes.

Everything uses a REST API, even the command-line tool: it always talks to the Docker daemon on your system through that REST API. Docker is developed in Go, Compose is developed in Python, and it's licensed under the Apache 2.0 license.

As I said, there is a layered approach, and this is how it works. Everybody uses the same kernel as the host — there is no difference, there is no kernel inside the container image. If you go inside a container and use the uname command to look at the kernel, it's the kernel running below on your host system, not a kernel coming with the image. A kernel in the image would not mean anything, because a container is just an isolation of processes.
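The composition idea mentioned above — moving docker command-line options into a YAML file you can hand to someone else — can be sketched like this; the service name, image tag and paths are hypothetical:

```yaml
# Sketch of a docker-compose.yml carrying the same options you would
# otherwise pass on the docker command line.
version: "2"
services:
  web:
    image: myapp:latest        # hypothetical image name
    ports:
      - "8080:8080"            # port exposed to the outside world
    volumes:
      - ./data:/data           # host directory mounted into the container
    environment:
      - APP_ENV=production     # environment variable passed in
    networks:
      - backend
networks:
  backend: {}
```

Running `docker-compose up` against such a file instantiates every described container, which is the "give that YAML file to someone else" workflow from the talk.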
So you have the same kernel launching different stacks, different applications; it could be a process you run directly on your system or a process you run in a container — it's the same thing.

Then, once cgroups and namespaces are available as features of your kernel to enable this kind of working environment, you have, on top, the notion of the image. The image is the read-only part of the solution, and you can create as many layers as you need until you are happy with your image.

It can be something as simple as just a BusyBox binary — that could be your image. You execute docker, create an instance based on that image, and you are in an environment where you just have the BusyBox binary and the hundred-something commands BusyBox provides, in a very small-footprint environment. That's one way to do it. You can use the very small distribution the Docker people favor, Alpine, which provides the strict minimum you need to have something that looks like a Linux environment — because BusyBox really is very bare. Or you can put in a normal, full distribution — a minimal set of it. For Mageia that would be 200-something packages; for Fedora about the same; for Debian maybe a bit less. So you have the possibility to create a really small layer which is the base of your distribution, and on top of it you can install packages using your normal mechanism.
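As a sketch of that layering, a Dockerfile can grow a minimal distribution base using the distribution's own package tool — Mageia is shown here, and the package list is just an example; the same pattern works with apt-get or dnf:

```dockerfile
# Start from the ~200-package minimal Mageia base image
FROM mageia:6

# urpmi is Mageia's native installer; --auto answers prompts
# non-interactively, which is what you want in an image build.
# On Debian the line would use apt-get, on Fedora dnf.
RUN urpmi --auto colordiff sudo
```

Each `RUN` line becomes one more read-only layer on top of the base, exactly as described above.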
So if you're on Debian, you do apt-get install; if you're on Fedora, you do dnf install; if you're on Mageia, you do urpmi — it's the same approach. You just use the native tools of your distribution to build the context you want. You can add scripts, you can add binaries, you can do pip install, you can do npm install, you can do whatever you want inside, and it's isolated. Once you are happy with the result of the image build and the running environment, you instantiate one version of it — the container — in which you have the right to write and modify things, and that's where your application will be running. Any questions at this point? Is it obvious for everybody? OK, good.

So why would you want to use distributions, and packages inside a distribution, together with containers and VMs? That is probably what the original speaker intended to cover. Why run containers when you already have a distribution? What you want is to avoid polluting your distribution with a ton of stuff you only want to test, in a dedicated environment — especially when you do things like JavaScript and Node.js, where very few distributions have packaged the whole stack you would need to develop. That stack does not exist in RPM or deb format because it's a fast-moving target: when you npm install something, you get thousands of different modules installed over the network. That's a huge amount of work for a distribution, so most distributions just package Node.js itself; you can npm install, and the rest is not packaged because it moves too fast — it's not possible to keep up. That's not the case for some other languages: you have a lot of Perl modules, a lot of Python
modules, a lot of Java modules as well, which are available in package format, so you can benefit from the work of the distribution people and get a clean set of packages. But a fast-moving target like Node.js is really something you want to isolate from your native distribution, because you don't want to install unpackaged software onto a distribution. Why? Because manual installation, compared to package installation, is a direct way to create problems in your environment: you may have files in the standard place and, at the same time, the same files in a non-standard place, under /usr/local for example. If you just do configure, make, make install for a GNU package, it lands by default in /usr/local, and then you have binaries in /usr/local/bin and binaries in /usr/bin, you don't know which version is which, and sometimes you don't point at the right configuration file because you have multiple instances. So really, if you want a serious execution environment, and also a serious build environment,
you need to be very clear about what you do, and identify clearly where you need a non-packaged environment, such as Node.js, as opposed to a packaged one.

The advantage containers bring, like VMs, is that you are not polluting your running environment. You create an isolated place where you can do everything you want; in the end it all stays in that environment, and it's something you can send to someone else for testing or whatever. It should also be something you can rebuild easily. With VMs you would have to automate the creation of the VM, the operating system deployment in it, the installation of your application in it, and so on. It's easier with containers than with VMs, but it's the same approach: you want to be able to easily scratch and redo your execution environment if you have problems, and a Dockerfile makes it easy to do that and to rebuild your images on a regular basis so they are really up to date.

Containers also bring something useful, which I'm using and which is the point of this talk: on a single Linux distribution you can have tons of other distributions available for your tests. So you can automate checking the portability of your application across different running environments, different distributions, and you can package your software for different distributions so that it's natively installable by people using those distributions' packages. It's a really easy way to target distributions other than the one you run. I'm running Mageia 6, the latest stable version, which is more than a year and a half old now; we will issue Mageia 7 in a couple of months. But I don't want to run a non-stable distribution.
I use my laptop for work, and working on a development distribution tends to break my ability to work, quite often. So I prefer to isolate my tests of the development distribution in a dedicated environment such as a container. That's the goal — as opposed to running the development environment natively and finding the compiler broken for a couple of weeks because of some transition under way between one version and the next.

Another advantage of containers over VMs is that it's very easy to share your home directory with the container. When you launch an instance of a container from the image, you can say: take my home directory and put it at the same place inside the container, so that I have my home and all the files I need for my work. That means, for example, that if you are on an RPM-based distribution you can share all the configuration files needed to build packages correctly — your RPM macros, your rpmrc. You can also keep your SSH keys at hand: my SSH key is registered with the Mageia build system, so I can push the packages I've tested locally and ask the build system to rebuild them. You can do the same with other distributions; what I do is really generic, it's not tied to Mageia itself.

The only place where you really need a VM rather than a container for this kind of isolation is when you need a different kernel between what you're running and what you want to test. For people like me, doing packaging most of the time, I don't care: I can use my distribution's native kernel for packaging. I package the software I'm upstream for across 120 different distributions without any problem due to the difference between the kernel on the host and the kernel inside the packaging environment. So that's really feasible.
So how do you deal with that concretely? You have the Docker registry, or your own registry, or your own local images — a set of images you can put on your environment. You create those images using a Dockerfile; I will show you its content in a moment. Every time you need to test something in an environment different from the one you're running, you instantiate a container for your target distribution, and inside that container you build packages, which you can then send to your distribution's repository — over Subversion or Git, depending on the distribution and its package management system.

OK, so maybe I should turn around and show you the real file instead of the one on the slides, which people can look at after the presentation. So: I have a way to capture some parameters as input — not really important — and a configuration file I can use to pass variables, with defaults available in my environment: the version of the Mageia distribution I target, temporary directories, a mirror to download the dependencies from, working directories, and the architecture I'm working on — because I can also build for different architectures. There is a very convenient project in QEMU which allows you, for example, to run non-x86 binaries on an x86 machine, as if you were in a virtualization environment, except that you are not in a VM: you are emulating the instruction set. I've started using a Raspberry Pi to run some tests with another architecture, to see whether it works; that seems very interesting.

So I gather my information — my UID and GID, because I want to map those inside the container — and then I generate a Dockerfile here.
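The QEMU user-mode emulation mentioned above can be wired up roughly like this — an illustrative session, not runnable as-is; the multiarch images are one common way to register the binfmt handlers, not necessarily the speaker's exact setup:

```shell
# Register qemu-user-static binfmt_misc handlers on the host,
# so the kernel transparently runs foreign-architecture binaries.
$ docker run --rm --privileged multiarch/qemu-user-static --reset -p yes

# From then on, containers built for another architecture run on
# this x86 host; uname -m inside reports the emulated architecture.
$ docker run --rm -it arm64v8/debian:stable uname -m
```

There is no VM involved: only the instruction set is emulated, and the host kernel is still the one doing the work.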
I start from what I call a Mageia official repository, which is in fact local to my system; those are my local root images for the distribution — I can show you how they are built if we have time. The first thing I do when building the image is update the distribution inside it: I want the latest version of every package I need. Then — OK, that part is commented out — you install all the dependencies you need in your environment. So I add the repositories, install every dependency package updated since last time, and, because I'm building packages, I install the set of packages needed for that: the bm command, which drives the build through rpmbuild; Mageia uses Subversion for spec files and such, so there is the mgarepo command, which handles the interaction with the official Mageia repository and launches builds on the Mageia build system; and some other useful tools like colordiff and sudo, because I want those in my build environment.

When you build packages for a distribution, never build as root. If you take one thing away from this talk, it's that: never build as root. When you're building a package, you don't fully know what you are launching — you are packaging software coming from upstream, and its scripts may remove files, etc. If the right environment variables are not set, you may remove files in a place where you don't expect to. So never run as root; run as a normal user. That's why there is some magic here to create a user in the container image, with the right UID and GID.
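The non-root build user setup described above can be sketched as a Dockerfile fragment. The UID/GID here are placeholders — the speaker's script injects the host user's actual values when generating the Dockerfile — and the Mageia package names are quoted from memory:

```dockerfile
# Cauldron is Mageia's development branch
FROM mageia:cauldron

# Packaging toolchain: bm (build through rpmbuild), mgarepo
# (interaction with the Mageia SVN/build system), plus conveniences
RUN urpmi --auto bm mgarepo colordiff sudo

# Create the build user with the host's UID/GID (1000 is a placeholder)
# and grant passwordless sudo for the few steps that genuinely need root,
# such as installing built packages into the RPM database.
RUN groupadd -g 1000 builder && \
    useradd -m -u 1000 -g 1000 builder && \
    echo 'builder ALL=(ALL) NOPASSWD: ALL' >> /etc/sudoers

# Everything from here on runs as the unprivileged build user
USER builder
WORKDIR /home/builder
CMD ["/bin/bash"]
```

Matching the UID/GID to the host user is what makes the mounted home directory writable from inside the container without any ownership juggling.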
This line is not useful anymore. I also give that user the right to use sudo in the container, without any password, so they can run commands as root when — and only when — needed, on purpose. All the builds are done as a normal user; but sometimes, for example when you want to install the built packages in your environment, you need root access to write to the package database. Then I create that user's home directory, set the working directory to the place where I have my Mageia environment, run the container as that user — not as root — and launch a bash command. The rest is just a small part detecting whether there is already a container; with a force flag I can remove the previous image and rebuild it; and then I just run it.

So this is the line creating the instantiation from the image. Here we build the image with that recipe; once the image is built, you instantiate an environment. I say: remove that environment at the end of the run; I want to be able to SSH correctly from inside my container, so I do some plumbing with the SSH agent socket — binding it inside the container at the same path it has outside, so I can communicate using the SSH agent already running on my system; I mount my home directory onto the home directory inside the container; and I use the image tagged with the name we gave it in the recipe. So, how does it work?
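The docker run line just described can be sketched like this — the image tag is hypothetical, and the command is assembled into a variable so the pieces can be commented one by one:

```shell
#!/bin/sh
# Sketch of the "docker run" invocation used to enter a build container.
IMAGE="mageia-build:cauldron"   # hypothetical local image tag

RUN_CMD="docker run --rm -it \
  -u $(id -u):$(id -g) \
  -v $HOME:$HOME \
  -v ${SSH_AUTH_SOCK:-/tmp/ssh-agent.sock}:/ssh-agent \
  -e SSH_AUTH_SOCK=/ssh-agent \
  -w $HOME \
  $IMAGE /bin/bash"
# --rm             : remove the container at the end of the run
# -u UID:GID       : run as the build user, never as root
# -v $HOME:$HOME   : mount the home directory at the same place inside
# SSH_AUTH_SOCK    : expose the host's SSH agent socket to the container
# -w $HOME         : start in the packaging working directory

echo "$RUN_CMD"
```

The same mount trick is what makes the RPM macros, rpmrc and SSH keys from the host transparently available inside the build container.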
So here I'm running on Mageia 6, and I can create, of course, a Mageia 6 environment as well. By default — well, I didn't relaunch anything this morning, so let me just restart the Docker daemon, because I may have changed some things since yesterday. Try again. I have a certain number of images — that's why it takes a bit of time — and you see there are a lot of different distributions I use, to have different environments in which to build different software correctly.

Is it better now? No, it's still "no such host", so I may have lost my network. Where is my mouse? Yeah, I have no WLAN — can I use the FOSDEM legacy network here? Yeah, right. The difference is that I'm not pointing to the right image — I missed the architecture here.

OK, so here: where am I? I am in a container which has been instantiated from this image. You see the prompt changed, of course. From that perspective it's still a Mageia 6 environment, but this one has 232 packages, whereas my native distribution has 3000. So I'm in a completely different environment: a fresh Mageia 6 with just the minimum set of packages you need to run the command which can install additional packages. That's what you want: a bare-minimum distribution on which you can run apt-get if it's Debian, urpmi if it's Mageia, dnf if it's Fedora. That, plus a correct network configuration so you can reach the repositories and download content from them, is all you need. So that's where we are.

One more thing that is not right: I'm root in that environment. I should not be root, I should be a normal user. Let me check. Yeah, right.
So this is the official image — the one I use as a base environment; it is not the image I use to build my packages. I can do the same easily with another version: if I use the Cauldron version, I am now in a different environment, a Mageia 7 one, with a different set of packages — only 219. Nice job by the people working on that, because they reduced the minimum distribution set from 232 to 219 packages, so we'll have fewer packages when creating a small distribution with the Mageia 7 release. And if you look, all the packages installed here are mga7, whereas all the packages on my native system are of course mga6. So I have a working Mageia environment which is completely different: I'm pointing at the development distribution, I have all its dependencies, and I can really do what I want in that environment easily.

Let me just fix something, because there is something wrong here — you should never make changes live during a demo; well, that's not entirely true, in fact. Let me check, because yesterday I was building some things. So in my own environment I normally have this one, for example — if I go here, into this one — yeah, which has the architecture in its name; that was the missing part, my script had not been updated for it. So now I have an image based on the previous image: it's still a Cauldron version, version 7, but this time I have a few more packages, because of my recipe.
So if you if we look at the docker file That we have for example Where is Presentation So if we look at the docker file that we are using here in the presentation, which is which is the same in addition to the standard distribution which has 219 packages I Asked to add a couple of additional commands to be able to work So for example, I should have the BM and the mga repo command go back here So here here first I am a single user. I'm not root anymore I change the environment in which I want to run and I have access to the BM command I have access to the mga repo command, which were not there before So and I am placed in my in my directory where I have all The packages I am following for magia that I can rebuild So let's take for example Something ready to docker has a docker compose. I will do a remove of Everything which is not relevant So all the intermediate Build stuff here. I just keep the sources and the spec file which are the strict minimum I need to build packages so For those not really Familiar with that maybe so the spec file is again a receipt which gives to the rpm system instructions on the How to build a package for the distribution I'm running so it gives you some Dependencies at build time that you need to satisfy to be able to build and as docker compose is a python script It needs a certain number of python modules to be able to build correctly and then it will also Indicate some Installation dependencies so if you install that package on the distribution you will need to satisfy those dependencies Around python modules needed and then you have the receipt to be able to to build the software in your environment so For the magia distribution is as simple as doing BM and of course it does not work not because it's a demo Because it's it's on purpose. 
I removed all the dependencies on purpose, to show you that there are build dependencies declared here which I don't have. If I do rpm -qa | grep python, I have a certain number of Python packages — typically what's needed to build Python packages, for both the Python 3 and 2 stacks — but very few others: setuptools is about the only one. I don't have, for example, the python-docker package, the websocket-client package, etc. All those packages are not installed yet.

So I tell my system: OK, I need to be root, because I want to install additional packages — specifically the ones mentioned in the spec file. It says: OK, I'll default to using the BuildRequires, all the build requirements you need. It asks: you need a python-docker package, which one do you want? OK, let's take the first one. Then you have recommended or optional packages, and I say: I don't want to pollute my system too much, so do not install the recommended packages, just the ones I really need to build. So that's the list of packages needed as dependencies in my environment; I say install them, and hopefully, if there is a bit of network, I should be able to download them — which does not seem to be the case.

OK, so maybe the mirror is having an issue, because I do have network here. Let me check distrib-coffee... Yeah, I cannot reach the mirror from the web browser either, so I need another mirror. Let's say this mirror is broken, it does not want to deliver anything to me. That's annoying, but not a big issue: I'll go into my configuration file and change the mirror reference to something better — the kernel.org mirror, which should be working fine.
OK, this time it should be a bit better, so let's try again. So: when you deal with a mirror which is up to date and available, you can download the dependencies to build your software, and it installs them for you. Now you can build your package, and this time, as all the dependency requirements are satisfied, the build runs, and you get back in your environment all the directories that were created. For example, the new package is available here, freshly built in my environment — which is clean, because it was built using the Mageia Cauldron tools and Mageia Cauldron dependencies, producing an mga7 version. So everything is completely safe from a build-environment point of view.

And now I can just try to install it. Again, it looks at dependencies — at install time now — and says: OK, to install that package you need these packages as dependencies. Just say yes, it downloads the additional packages, and now the package is installed here and you can start testing it in this environment, because it works. So you have a strict-minimum environment in which to test the one package you just built — which is exactly what you want — and I'm not polluting the rest of my system: it's completely isolated, and I can do that as many times as I want, with all the different distributions available. Any questions? Yes?

[Question from the audience, inaudible]

So, generally, what happens with distro vendors is that they have a build system, and on the build system you have machines for all the targets you need — or want — to support. Here I'm testing on my local system.
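The build loop demonstrated above, condensed as a session sketch — the tools are Mageia-specific, the flags are quoted from memory and may differ slightly, and the package name is just the example from the demo:

```shell
# First attempt fails on purpose: BuildRequires are not installed yet
$ bm docker-compose.spec

# Install the build dependencies declared in the spec file
# (root needed to write to the RPM database, hence sudo)
$ sudo urpmi --buildrequires docker-compose.spec

# Now the build succeeds; resulting RPMs land under RPMS/
$ bm docker-compose.spec

# Install the freshly built package to test it in this isolated environment
$ sudo urpmi RPMS/noarch/docker-compose-*.noarch.rpm
```

Because the container is disposable, a failed or messy attempt costs nothing: scratch it and instantiate a fresh one from the image.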
When I'm done, I check that everything is working, and I can use the mgarepo command to push my content to the build system. Pushing my content is just pushing the set of files under Subversion control. In my case, here is the Subversion tree of the build system, which I mirror locally. You can look at the different things that happened in the life of the package during its development: you see when the compose file was modified, when there was a mass rebuild — for mga7, for example — which automatically changed a certain number of things.

When you are happy with what you have: what matters in your environment are the sources directory and the spec directory. The spec directory contains the spec file, which is mandatory to rebuild; the sources directory contains the sources of the various versions of the component I've had over time, plus a sha1 list file with the checksums of the source files. Those are the things kept in the Subversion repository of the Mageia build environment. When I launch a build, the build system extracts the right files from Subversion and runs the same bm command I do, on all the target systems we need to support: x86_64; i586, which is the 32-bit version; armv7hl... We are not like Debian — I can see you have a Debian t-shirt — we don't maintain as many architectures as you do, and we have fewer packages as well, to be clear: only 30,000, where Debian has 50,000, something like that.
So that's the way it's done: you have your target systems on the build infrastructure, which are used to build the final packages. You build your stuff, you test it. Of course, you may have software which works fine on x86 and doesn't on ARM, and you won't detect that through this process; you'll detect it when a user says "hey, it's broken on my machine" — you have a Bugzilla, they report the architecture on which it fails, and people test on that architecture if it hadn't been done before. That's the way it works.

[Question from the audience, inaudible]

No, it's a dedicated system, which is, I think, using chroots — you don't change a build system easily, that's one of the problems. So yeah, that's the way it's done. Any other questions in the time we have left? OK, if there are no other questions, I'll leave you a bit of time to change rooms and catch another fantastic presentation, hopefully. Thank you very much.