Okay, good day, people. I would like to talk about Docker, the use of Docker for Perl people. Some people know me as nxadm on IRC or on Twitter or whatever. I'm also the devroom manager. We had kind of a schedule problem, so this talk is last minute; that's why I'm here. A little bit about me: I work at the center for information security at the University of Leuven, and we do stuff related to security, to identity, to authentication and authorization, and things like that.

So I would like to start with a kind of controversial question: what kind of problem does Docker solve for Perl? And a typical reaction would be: what problem? Do we have a problem? And it's true that in Perl we have had CPAN for ages, you know, since before some people here were born. We have a very nice testing culture. We have a very nice community. We are great at writing tools, so it's very easy to work inside your Perl environment, because it's very nice.

But if we rephrase the question a little differently, we may get another answer. If we ask "how do we deploy our programs now, in 2017?", we get a lot of different answers; there's more than one way to do it. Some people run cpan or cpanm on the remote machine. I hope no one here works like that, but everywhere I've worked I've seen people doing that on production machines. Some people use rakudobrew or perlbrew to use their own Perl and not the system Perl. Some people use Carton to pin their dependencies. Some people use local::lib. FatPacker is very nice if your application is pure Perl, so you can make a single file and you don't have the dependency problem. There's minicpan and DarkPANs. Some people just create an archive and put it on the server. More sysadmin-type people will create an OS-dependent package like a Debian package or an RPM. The same kind of people will probably use a configuration management tool like Puppet, like Salt, like Rex, like Sparrowdo.
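To make the dependency-pinning option concrete: with Carton you describe your dependencies in a cpanfile and lock them with carton install. A minimal sketch, where the module names and version bounds are only illustrative:

```perl
# cpanfile: declarative dependency list consumed by Carton (or cpanm --installdeps .)
requires 'Mojolicious', '>= 7.0';      # hypothetical runtime dependency
requires 'DBI',         '>= 1.630';

on 'test' => sub {
    requires 'Test::More';             # test-time only dependency
};
```

Running carton install resolves these and writes a cpanfile.snapshot that you commit alongside the code, so every machine installs exactly the same versions with carton install --deployment.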
And if we are honest with ourselves, we must acknowledge that it can be a little fragile at times. When you're in full control of your environment, these are very nice tools, but in a bigger environment with separation of responsibility, you have people like the Linux admins being responsible for the OS and its updates, and application people being responsible for the application. Maybe the admins will do an update, your application is not well tested, and it will break. If you are working in the cloud, where you need a really fast switch of environment, it can take a long time, so it's not always the best solution.

So let's look over the fence into other communities that work around this problem; we see that it's not that easy there either. Take Java, for example. Java is great in this regard, because in Java you put everything in a jar, you put the jar on the machine, you feed it to the JVM, and it runs. It will load, I don't know, 50,000 classes, without exaggeration, but it will run; it's great. But even then you will meet classpath hell: when you have the same library in two versions on different paths, your application will run, but it will explode along the way. Or take Go, also a very nice language: you do a static compilation of your program, all the dependencies are added to your binary, you take the small file, you put it on the server, you run it. It's fantastic, it's fast, there's no VM. But even then, if you have a security problem in one of your libraries, you need to track down all those small binaries everywhere, and that's not easy if you don't have the infrastructure for it, because programs tend to outlive the programmer. I've heard from colleagues at a place where I worked like ten years ago that they're still using a proof of concept I wrote. The program starts with THIS IS A PROOF OF CONCEPT in capitals, and they still use it; I don't think they even have the source. So that's a little
difficult.

So if we look at great examples from other communities like Java and Go, we realize that deploying is only half of the question. The real question, in my eyes, is: how do we integrate with an ecosystem that is no longer language-centric? What do I mean by that? I mean that the future is API-centric. You don't care that much about the language; you care about the API, you care about integrating stuff together. Even more, I could say the present already is API-centric. If you're working within a DevOps team, you have a lot of people from different backgrounds: operations people, sysadmins, with their own tooling, and developers that maybe have their own tooling. Working with people from different backgrounds means different languages and different frameworks, so you're already mixing stuff. If you work with the cloud, it's very important to be able to switch from one cloud provider to another, to bring your instances up quickly, and to have the best tool for the job. And this is a good thing, because it's very possible nowadays that the best tool for the job is not written in Perl. It could be written in Java, in Go, in Ruby; it doesn't matter, because you still get the best tool for the job and you can integrate everything together.

Well, back to Docker. A typical question is: is it here to stay, or is it hype? Because if you've been around for some years, you know things come, things go, and they come back slightly different, so that's a very good question. And the same people, the ones who would say "we've had CPAN for like 20 years", they would say: yeah, but we already have VMs, what's new? Well, the idea behind a VM, a virtual machine, is to fully emulate a discrete environment, to have a full operating system. That also means you need to fully administer an operating system. You used to have one big physical machine, and now you have one physical machine with ten
VMs: you need a lot more work to keep them up to date, to keep them secure, to create users and so on. And more importantly, a VM and a container are not at odds; they can work together. It's a very valid scenario to have a VM and to run containers on it, maybe because you've standardized around a VM and can deploy and provision them very quickly, or maybe because of security you don't want your containers to share the same kernel. There are a lot of valid reasons to do that.

Well, after this introduction I would like to answer the question: what is Docker? Because I'm talking about Docker, Docker, Docker, but I haven't explained it yet. If I'm forced to summarize it in one word, I said it already: it's a container. And the same people that said "yeah, CPAN", "yeah, VMs", they would say: containers, we've been doing that forever. Me myself, I've been doing that since 2005 on Solaris, with Solaris Zones. I've probably migrated hundreds of physical machines to Solaris Zones. It was fun: you could copy your container through SSH to another machine; it was a lot of fun. People working on other UNIX systems were probably doing it ten years before that. But it's not the same thing. What's different, again, is the API: Docker gives you an API to integrate it with other stuff.

So if we redefine what a container is nowadays: of course, it's an application that is self-contained; that's kind of the definition of a container. But the most important part is that it's portable. You work at your workstation on the same container, on the same binaries, as on the production server, the same thing that the customer has. It's portable: you move stuff around, and you move the same thing; you don't need to recreate everything and then test for the differences. Containers, because of this, have a really huge impact on how we develop, how we distribute, and how we run software. As a developer, it's priceless to be able to develop on the same environment as the production machine, because it's
always the battle between sysadmins and developers: yeah, it runs on my laptop; well, it doesn't run on the production server, which is so slow, whatever. You will develop differently, because you can have the full stack, all the different services, on your laptop. You can distribute it within your company, test, quality, production: you just move the same thing to a new environment. You can push to a client exactly the same thing you have on your laptop. And it's also a very standard way to run software. You don't care if they use SUSE or Fedora or Debian or Ubuntu; you just don't care. Maybe they run it on their big iron on premises, maybe they use a cheap cloud provider; you just don't care. It's a standard way to do it.

So here is a visualization of how a container looks. It took me a while to get this, because it's kind of confusing what a container really is. The most important part is the image. An image can be compared to an ISO, a DVD, a live Linux distribution: you put all your libraries and your binaries in it, and when you bring it up, you always get a fresh environment. Every change you make will be lost when you restart your container, so it's kind of a read-only environment: you can change it on the fly, but when you restart it you lose those changes and you start again from a fresh environment. Then you need some runtime information, the things you need to have a useful container: maybe it needs some network addresses, some ports, maybe some access to a filesystem, mount points, environment variables, whatever. And most important for your application is the persistent data. That's something you don't put in the image, because the image you can just put on the internet, you don't care; but your configuration, your secrets, your business data, that lives outside of the container, and the container has access to it. Those are the three big parts. And then, if we look at this again, we realize that
we still need tools to manage the runtime info, the configuration and the image creation, and those are probably tools we already talked about; in this case even cpan on the server is okay, because you're working locally. A Dockerfile is just a series of commands that you run on top of a basic image of an operating system: you start with a very small Debian or something, and then you say ADD this, RUN that, and that's it. You only do it once, and the result gets stored in a kind of binary format. It's very easy to keep things simple; you don't need to complicate stuff at that level. Everything that is containerized is easy to understand, and because of that it's also easy to implement.

Here's an example. We used to have a very big Radiator Perl application, a RADIUS server project, where we only used Puppet, and the Puppet code was very complicated, very big. We had a lot of tests, because Puppet had to manage users, packages, services, the order they run in, and at the end it configured my application. Now, with a container, I just don't do that anymore, because all of that is in an image that's frozen. The only thing I have to care about is my application. Puppet now just takes files, puts them in a directory, takes a template, injects some secrets, and that's it. My code is very easy to read, very easy to understand, because I don't have to look at the full picture; I only have to look at my application.

So, what does Docker bring to the table? A kind of summary of what I said. It's efficient, because it's only one process running directly on the kernel; the kernel, which Marion talked about, uses cgroups to give you some basic isolation, but it's the same kernel, it's not about emulation. The way of working is also very efficient, because you are working on the real thing, so you save a lot of time; you don't go back and forth with the sysadmins talking about what's different. It's portable.
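To make the "ADD this, RUN that" idea concrete, here is a minimal sketch of a Dockerfile for a Perl application; the base image choice, file names and cpanfile are hypothetical:

```dockerfile
# Start from a small official base image.
FROM debian:stretch-slim

# Install Perl and cpanminus once; this layer is cached and frozen in the image.
RUN apt-get update && apt-get install -y --no-install-recommends \
        perl cpanminus build-essential \
    && rm -rf /var/lib/apt/lists/*

# Copy the application and install its CPAN dependencies from a cpanfile.
WORKDIR /app
COPY cpanfile .
RUN cpanm --installdeps --notest .
COPY . .

# One process per container: run the app in the foreground.
CMD ["perl", "app.pl"]
```

Building with docker build -t myapp . stores the result as read-only layers. The runtime information (ports, mounts, environment) and the persistent data are supplied at run time, for example docker run -p 8080:8080 -v /srv/data:/data myapp, which keeps business data outside the image.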
Like I said, you can distribute your images. And it's also embeddable; I cannot read my own slide, so I will say embeddable. That means that you, as a Perl guy or girl, can create a base Perl image: a very up-to-date Perl, a very secure Perl. You can provide a base set of images, of modules, excuse me, that you vetted, whose versions you tested, and then someone in your company, or someone from the internet, can take your image and just care about their application. They just add a layer on top of your image and create a new image for their application. You're responsible for the Perl part, and they are only responsible for the application. So it's very easy to have a secure baseline that you can update, and they don't need all the knowledge for that.

I only have five minutes left, so I will warn you: I don't want to just sell you stuff, I don't want to be only positive. Do you know the first rule about Docker? I know someone here does: you won't shut up about Docker. It's okay to give a presentation, but don't do it at the dinner table, because you get very annoying. I've been there, so don't do that.

More seriously: when you use Docker, you need to test, test, test. It's not as straightforward as it looks. Yes, things are easier, things are simpler, but there are a lot of corner cases, and you need to ask yourself some very good questions, things you always needed to do, but now you're forced to do them. You need to ask yourself: is my application horizontally scalable? If the answer is no, you need to rearchitect your application, or just don't bother with Docker, because Docker uses the concept of cattle: Docker doesn't care about your service, doesn't care about your container; if you have a resource problem, you just spin up some new ones. So if your application is bound by CPU or memory, maybe it's not a good solution. The same goes for application performance in Docker: as I already said, it's very efficient, but there are some trade-offs on the level of networking and on the
level of disk I/O. Because your application is horizontally scalable, it's not that important, but you still need to test; you need to make sure you make the right choices, because on networking there are implications for security and for flexibility. There are choices you need to make, and make sure you actually make those choices and don't just use the default of your distribution, because that's just a very generic setup.

This is the most important thing I will say today: Docker is not a security solution. For most people that work with Docker, it gives a very dangerous false sense of security, because you think: it's containerized, I'm safe. You're not. You need to follow best practices, you need to follow common sense, you need to test, you need to keep your application updated. Of course it's an extra layer of isolation, and that's a good thing, but it's not enough. So if you get into Docker: most books, most talks don't go into that, but you need to look into it. I don't have the time to go into detail, but with a bare minimum of effort you can get a very secure application; you just need to be proactive about it.

There's also the matter of people and politics. This is not a technical issue, but most companies and institutions are kind of divided between operations and developers, and if you start with Docker, you get a lot of people that will have something to say about your image; you have a lot of chefs in the kitchen. You need to be ready for that too: you need good collaboration with other teams, you need to be able to acknowledge input and talk about it. And this also creates an opportunity. I already talked about the base Perl image: it's a very good opportunity for a Perl person to create a standard, to be the one helping people that are not knowledgeable about Perl, someone that can create a baseline, someone that can make sure security is followed, and so on. I have some slides left, but I'm just going to leave it like
that. So, maybe, if there are some questions? I don't know. Yeah, go ahead. Sorry? Yes, certainly I will. The idea behind it is to have the real thing on your laptop, so you are working on the real thing that will run in production. I couldn't develop otherwise, because otherwise you will always have trouble with the discussion of "it works on my laptop, it doesn't work in production". One minute? So, can I compare Docker to other container runtimes? I would say Docker is easy, because there's a lot of integration already, so it does make things easy, but there are other good alternatives as well. I think I'm going to wrap it up like that. Thank you very much.
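As a footnote to the base Perl image idea from the talk: layering an application on top of a shared, vetted base image can be sketched like this. The image name mycompany/perl-base and the file names are hypothetical.

```dockerfile
# Application image built on top of a company-maintained Perl base image.
# "mycompany/perl-base" is assumed to already contain an up-to-date Perl
# plus a tested, security-vetted set of common CPAN modules.
FROM mycompany/perl-base:1.0

# The application team only manages this layer: their own code and extras.
WORKDIR /app
COPY . .

CMD ["perl", "service.pl"]
```

When the base image is rebuilt with a security fix, application teams only rebuild on top of the new tag; they never have to touch the Perl installation itself, which is exactly the separation of responsibility described in the talk.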