Hello everyone, my name is David Midrere. I'm going to talk to you today about how we build development environments for our developers at Emesa. By night I'm a contributor to the LXD project, and by day I work at Emesa. Emesa runs an auction platform in the Netherlands called VakantieVeilingen, and more recently we have been expanding to Belgium and France.

Why did we choose LXD? First of all, we run everything on bare metal, so we used to need a hypervisor in order to have more than one operating system on a server. We were already running LXD as our main hypervisor for our production environments. I also want to ask all of you: who is using LXD at the moment? Who is using LXD in their production environment? Okay, that's a few, alright. So we have been doing this for quite some time, at least two years, and we are quite happy with it, so we wanted to extend it to the rest of our environments. Most of the services that support our main application were already running on LXD and were being built with Puppet, which means we had an easy way to reproduce them and to build up the application stack that we use. So for us it was a natural choice to go with LXD.

We are currently in the phase of migrating some of the bare-metal applications into containers, and at the same time we want to revamp our whole development, acceptance and production environment. We had some issues with our development environment: we had a single server used by 30 developers, which, yeah, made me cry sometimes. We didn't have much flexibility if we wanted to install software or upgrade something. If a developer came to us and said "I would like to test this new module" or "we want to upgrade this version because it's going to be more performant", that was not going to happen as easily as you'd want, because we cannot break the current development environment and we don't really want to go through that trouble. Having a shared environment also means that if we want to run an extra service for a developer, we need to map that service to the developer: if they want a memcache, for example, we would have to say "okay, this is your port" and then keep track that this particular memcache belongs to this developer, which brings a bit of trouble for ourselves as well. One other funny thing is that we once had to do a demo which started really nicely, and then we just filled up all the memory and all the resources that were allocated, so we couldn't finish the demo. So yeah, not very fun stuff.
On our testing environments we had fixed resources for our testers: we had built up a couple of virtual machines and some bare-metal servers for them to test on, and they would just redeploy the application onto those. We had to be a bit careful with these environments, meaning we would not touch them too much, and if one of them was down it was a bit of a problem, even though it was a testing environment. We had no proper way of cleaning up the environment between deploys or reinstalling the packages. Our build system was working fine, but testers actually had to use a scheduling system to reserve resources, because they were limited: some testers wanted to use a server that was more powerful for some things, and sometimes they would fight with each other over it and not know what to do. So, enough talking about problems; now I'm going to go a bit more into how we solved these issues.

How did we start? We started by working on our Puppet recipes, and we started by building a container which contains the application itself and all of the supporting services for our application. We have our websites, and a website requires a memcache; it requires all of those elements to be working together. So we did the heavy lifting of getting all of those elements running inside a single container. That's one of the good things about Linux containers: they are system containers, so you are running a full OS inside, with a really tiny filesystem and really tiny overhead compared to a virtual machine, for example. That was really good for us, because we were already using these things in production, so we basically just had to adapt what we had in production to run everything inside a single container.

At the same time we also built our base image. The base image already had our software installed, so it already had the necessary software it needed to run, and every time we started that image we would just run our puppet apply, which applies all the configuration that we require around it, and we would have a working application in about two minutes or so: just start up the container, run all the configuration, and it's ready to be used.

This was also the phase where we started the automation, so we built up the container automatically with a bash script. What the bash script does is pretty simple: it runs a container from the image that we have previously built, puts our puppet repository inside of that container, applies all the settings that we need to apply, and cleans up all the unnecessary files, like log files, key files and history files that don't really belong inside a container. After that it adds the container's IP address to DNS, so that we can just type the container name and connect to it. At this phase we already had a way to boot up a container, connect to it and run the application, and we were also talking to developers, getting them to try it out and figure out what kind of things were wrong with the container.

But we ran into another problem, and that problem is the fact that LXD, as it is developed today, only works locally. That means that if you have a bunch of LXD servers you can connect to each of them remotely, but they don't have a way to talk to each other: one server doesn't know that another one is running these containers, they are basically not aware of each other. So we decided to build our own orchestration tool.
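To make the build flow above a bit more concrete, here is a minimal sketch of those same steps done through pylxd instead of a bash script. The base image alias, the puppet repository URL, the network interface name and the DNS helper are illustrative assumptions, not our actual setup.

    # Minimal sketch of the container build/provisioning flow described above,
    # using pylxd. Image alias, repo URL and DNS helper are hypothetical.
    from pylxd import Client

    client = Client()  # talks to the local LXD daemon

    def add_dns_record(name, ip):
        # Placeholder: the real script adds an A record to an internal DNS zone.
        print("DNS: {} -> {}".format(name, ip))

    def build_dev_container(name):
        # Launch a container from the pre-baked base image (software pre-installed).
        container = client.containers.create(
            {"name": name, "source": {"type": "image", "alias": "emesa-base"}},
            wait=True)
        container.start(wait=True)

        # Put the puppet repository inside the container and apply the configuration.
        # (How the repo actually gets in is an assumption; a git clone stands in here.)
        container.execute(["git", "clone", "https://git.example/puppet.git", "/opt/puppet"])
        container.execute(["puppet", "apply", "/opt/puppet/manifests/site.pp"])

        # Clean up files that don't belong in the container: logs, keys, history.
        container.execute(["sh", "-c", "rm -rf /var/log/*.log /root/.bash_history"])

        # Register the container's IPv4 address in DNS so it is reachable by name.
        addresses = container.state().network["eth0"]["addresses"]
        ip = next(a["address"] for a in addresses if a["family"] == "inet")
        add_dns_record(name, ip)
        return container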
It's a very basic orchestration tool, meant to let us treat the LXD servers as a cluster. We run our own orchestration script based on pylxd, and we call it the Emesa Container Orchestrator, or ECO. What it basically does is: you put all the LXD servers that you have in its configuration file, and it puts all those values in a dictionary where the key is the name of the server and the value is the connection object that you can use to talk to that LXD. The way we use it right now is just through the CLI: you say "launch container", and it will figure out which of the LXD servers has the least amount of containers running and launch the container there. It will retrieve any information related to the container, like the address, the name, any storage attached to it, all those kinds of things. You can also easily add custom facts to the container, which means that when you deploy an application you can record who deployed that application, or in this case that container, when the container was launched, and things like which branch was deployed, the master branch or whatever branch, plus any other information you want to add.

At this point we already had a way to launch everything, build the container and run the application inside a single container, but we needed a way for our developers to launch it themselves, and we didn't want to give them root access or any other sort of access to our infrastructure. So we decided to use Jenkins for that. Jenkins basically works as a front-end for our shell scripts: when you click on the job to launch a container, it already pre-fills a lot of data, like your developer username, the public key that you want to use and the hostname prefix. You can give your container a name, and then you can connect to it via DNS, because we add a few records, something like yourcontainer.development.emesa.whatever. It also allows you to select a specific branch of your application that you want to pull, so in case a developer wants to test their branch, or a tester needs to test a developer's branch, that branch is automatically checked out inside the container and the code is pulled.

To give an overview of how the whole thing works: the developer first goes to Jenkins and launches the container. Jenkins calls the bash script, which is kind of a wrapper script, and that wrapper script calls ECO. ECO figures out where it can launch the container, which of the servers has the fewest containers, and launches it there, saying "I want the container on this LXD server". The next step is that, because LXD is still kind of local, we connect to that LXD instance and run our provisioning scripts inside the container, so, as I've shown before, the puppet apply that does all those changes. After that is done, hopefully successfully, it adds all of our records into our DNS.
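Here is a minimal sketch of the scheduling idea behind ECO as just described: one pylxd client per LXD server kept in a dictionary, the server with the fewest containers picked at launch time, and the custom facts stored on the container itself as user.* config keys. Host names, certificate paths and the image alias are assumptions made for illustration.

    # Sketch of ECO's core behaviour, assuming pylxd; names are illustrative only.
    from pylxd import Client

    # Server name -> API endpoint, as it would come from ECO's configuration file.
    LXD_SERVERS = {
        "lxd01": "https://lxd01:8443",
        "lxd02": "https://lxd02:8443",
        "lxd03": "https://lxd03:8443",
    }

    def connect_all(cert=("client.crt", "client.key")):
        # Dictionary of server name -> pylxd connection object.
        return {name: Client(endpoint=url, cert=cert, verify=False)
                for name, url in LXD_SERVERS.items()}

    def least_loaded(clients):
        # Pick the LXD server currently running the fewest containers.
        return min(clients, key=lambda name: len(clients[name].containers.all()))

    def launch(clients, name, branch="master", created_by="unknown"):
        host = least_loaded(clients)
        container = clients[host].containers.create(
            {"name": name, "source": {"type": "image", "alias": "emesa-base"}},
            wait=True)
        container.start(wait=True)
        # "Custom facts" can live in the container's user.* config keys.
        container.config.update({
            "user.type": "development",
            "user.created_by": created_by,
            "user.branch": branch,
        })
        container.save(wait=True)
        return host, container

With only a handful of servers, counting containers on every launch is cheap, and the wrapper script only needs the returned host name to know where to run the provisioning step afterwards.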
Was it worth it? Yes, definitely. We fixed all the issues that we had with our development environment. Everyone can now have root, and they can play around with whatever software they want; hopefully they will not try Spectre or Meltdown. We now have a better way to isolate and constrain resources if needed, so if a container starts to misbehave, if they start to do some crazy things with their application, we can easily throttle the network usage or the CPU usage, things that we could not do before with this environment. We now have a more dynamic environment for testing as well: testers don't need to schedule the resources on the servers anymore, they can just launch containers and delete containers as they want. The development containers are directly accessible on the network, which means the container boots up, it gets an IP address, they can connect to it, and whatever services they run inside it can be reached directly from their laptop.

One very good use case that we are already using right now is running the unit tests. Every time one of the development branches is merged into our master branch, we launch a container and run all of our unit tests inside of that container. That container is great for this, because it has everything the application needs, so we can run all the unit tests inside that single container, and it's all automated: the container is launched, the unit tests run, the container is deleted, done.

Now I want to show a bit of a demo. I'm going to demo the ECO part, with three LXD servers; they are virtual machines running on my laptop, and I'll show it on the screen. There are basically three virtual machines, LXD01, LXD02 and LXD03, and these are just regular Ubuntu 16.04 machines running LXD. On the cluster there are three containers running: C1, C3 and C2. Now I'm going to create a container. What the script is doing right now is querying all three servers; it figures out that LXD03 is the one that has the least containers running, or it just picked a random one, because I guess if each server has one container it will just be a random one. It pulls in the latest image, which might take a bit, but it won't take much time. So the container has now started, and it is on LXD03, so I can go to LXD03 and we see that the new container is running there; like the others, it has Ubuntu 16.04 running on it. I can also show information about it: this just returns the LXD host where the container is running, which we need in order to connect to that LXD and then to the container. These are the custom facts that we keep on the containers: the container type, which is usually development, testing or production; created by, of course, whoever created the container; the creation date; hostname, IP and branch. Currently everything is bogus, because I am not running this through Jenkins and I am not running the Emesa provisioning script, so I am showing just placeholder data, because there is nothing really there. And that's it. Any questions?

No, we are not using that. So, the question is: do we need live migration for our developers? No, because currently there is no need for that: if a host goes down, we can just recreate the container and it will have everything it needs in order to run the application.
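As an aside on the unit-test use case mentioned a moment ago, a minimal sketch of that launch-test-delete cycle could look like the following. The image alias and the test command are assumptions, and in reality the cycle is driven by the Jenkins jobs rather than called directly like this.

    # Sketch of the throwaway unit-test container: launch, run the tests, delete.
    # "client" is a pylxd Client for the chosen LXD server; run-tests.sh is hypothetical.
    from pylxd import Client

    def run_unit_tests(client, branch, image_alias="emesa-base"):
        container = client.containers.create(
            {"name": "unittests-" + branch.replace("/", "-"),
             "source": {"type": "image", "alias": image_alias}},
            wait=True)
        container.start(wait=True)
        try:
            # Check out the branch that was just merged and run the test suite.
            container.execute(["git", "-C", "/opt/app", "checkout", branch])
            result = container.execute(["/opt/app/run-tests.sh"])
            return result.exit_code == 0  # recent pylxd returns exit_code/stdout/stderr
        finally:
            # The container only exists for this test run, so always clean it up.
            container.stop(wait=True)
            container.delete(wait=True)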
Can you repeat the question? So, the question is whether we considered using other technologies instead of LXD. No, because this is what we were already running, so it works for us; we are not considering anything else at the moment. Any other questions?

Are you using Jenkins to delete the container? So, the question is whether we are using Jenkins to delete the container. Yes, we have a job to delete the container as well, and that's a very good question. We have a job, or rather a website, that shows the containers currently running inside our infrastructure, so the container name and the host name are shown there, and people can just click the delete button and the container will be deleted. And this is, excuse me, sorry, our own project: it basically just calls a shell script with parameters, and the shell script takes care of deleting the container, deleting the DNS names and all that stuff. Thank you.
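To close the loop on that delete job, a minimal sketch of what the cleanup could look like with pylxd is shown below. It assumes the dictionary of per-server clients from the earlier ECO sketch and a hypothetical DNS helper; the real implementation is a shell script called by Jenkins.

    # Sketch of container cleanup: find the container on whichever host runs it,
    # remove it, and drop its DNS record. Helper names are hypothetical.
    from pylxd.exceptions import NotFound

    def remove_dns_record(name):
        # Placeholder: the real script removes the record from the internal DNS zone.
        print("DNS record for {} removed".format(name))

    def delete_container(clients, name):
        # "clients" is the server-name -> pylxd Client dictionary from the ECO sketch.
        for host, client in clients.items():
            try:
                container = client.containers.get(name)
            except NotFound:
                continue
            if container.status == "Running":
                container.stop(wait=True)
            container.delete(wait=True)
            remove_dns_record(name)
            return host
        return None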