My name's Sven. I got lucky: I work for Docker Inc, where I'm currently a support engineer, but I also help maintain the documentation, Boot2Docker, all those little piddling things that hopefully somebody reads. You can find me easily because my name is still unique; the first developer to name their child Sven will receive a nasty email from me. Today I'm going to talk about kick-starting new developers using Docker. Before I got the job at Docker, what I would spend some time doing is racing around the Internet, looking at projects I wanted to play with, and making Dockerfiles for them so that I didn't have to think about the setup the next time I came back to that project. So I'm going to show a few examples of that kind. Hopefully they'll give you an idea of how you can use Docker in your free software projects, or even in your workplace, because people who haven't had time to play with Docker often look at it and go, "So what do I do?" I'm not sure that I've pitched this right for people who have never played with it, but I'm trying. So the question is: who here has not used Docker at all? Right, can we swap? Because you could probably pitch this talk to more people than I can. Okay, so the examples I'm going to go through: I'll start with Apache, because for my slim-containers talk I was originally going to compile Apache statically and show how to make that slim, but Apache is awful; that's the third time I've tried to contribute, and they scare me. Then a Perl CPAN module, because I wanted to patch one small thing and it was hard to test. Then I'll pretend to do some Fig, and as my final demo I'll show you a development environment that just works. Okay, you can all read micro-writing.
Okay, so if you go to the Apache httpd website and try to find out how to build it, they give you what I think is a 10-page document that tells you a whole stack of things, and scattered amongst it are all the little bits and pieces you need to know to compile it. At the top of the page they also have a short list of steps for those of us who can't be bothered reading. You look at them and go: that's lovely, I could do that. If you follow them, it won't work. Firstly, I can't see any mention of actually needing a compiler, or which one. Make? No, we haven't got that installed. And there's a series of other libraries that you also need. They may have listed them somewhere, but you just go: in this day and age, do I really need to read your 10-page document just to see if I can compile it? Of course, my answer is no. Then let's presume you've followed those steps successfully, which of course I did. And then some cruel person, maybe your boss, says: okay, so you've built your thing; I'd like you to test it on CentOS, Debian, Ubuntu, openSUSE, Arch, and the list keeps going. And you go: okay, it's 2015, I should use virtual machines for that. So you make yourself your thousands of virtual machines, and by that stage you're bored. This is the first thing I started playing with in Docker: the Hub gives you images of all of those operating systems' filesystems. You don't need to boot them; you don't need to do anything. All you really need to do is take this software, which I've built into /usr/local/apache2, bind mount that into your container, and run it. So you get something like this, a really simple command. What I'm doing here is running a container using the Debian image.
I've given it -it, which gives us an interactive terminal, and --rm, which deletes the container after it exits, because otherwise you eventually run out of disk space. For me, "docker run -it --rm" is the default thing I type, because I often use very ephemeral containers. Then with -v I'm bind mounting my build directory directly into this container, which means that if everything is there and everything runs, I'll get a bash prompt and I'll be able to run apachectl start. From memory, this one worked, which surprised me, because I expected there would be some libraries missing: this is a base Debian image, it's 85 MB, it has almost nothing in it. It doesn't have an editor, I think, that sort of stuff. So you go: I didn't expect that to work. What I was expecting was that I'd be missing some libraries and would have to figure out what they were and so on, but I got lucky. Then I do the same thing with CentOS. And personally, I find that an awful lot easier to use than Vagrant. Now, this doesn't mean you've matched up the kernels, so you're not really running CentOS or Debian or whatever; but for a userspace application it's pretty darn close, because most of the time what you're missing is userspace libraries, that sort of thing. If you're playing with a kernel, the kernel guys seem to be slightly scared of the idea of loading a module from a container into your non-contained kernel. I can't quite work out why, but then I'm not a kernel developer; I don't even pretend to be, although it might be a fascinating life. So I went through those manual steps and I got bored very quickly. Instead, we could add a Dockerfile to the repository that would do the whole thing. In fact, I've structured this Dockerfile so that you wouldn't even need to put it in the repository or clone the repository at all.
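The commands from the slide aren't reproduced in this transcript, but from the description they would look roughly like this (the mount path and image tags are my assumptions):

```shell
# Throwaway Debian container with the locally built Apache bind-mounted in;
# -it gives an interactive terminal, --rm deletes the container on exit.
docker run -it --rm \
    -v /usr/local/apache2:/usr/local/apache2 \
    debian bash

# Inside the container, try starting it:
#   /usr/local/apache2/bin/apachectl start

# Repeat the same test against a CentOS userspace:
docker run -it --rm \
    -v /usr/local/apache2:/usr/local/apache2 \
    centos bash
```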
You can just hand somebody this Dockerfile and it will go off and get all the bits. I'm using the official Hub GCC image, which has GCC, make, and all that jazz installed. It's 1.2 GB: great for development, but not great for your bandwidth, especially if you download it millions of times like I do. Then it installs the prerequisites, which were vaguely mentioned in the document I referenced before, but which I really found out about because things didn't work. And unfortunately, I don't think I can show you the rest. Oh, maybe I can, let's find out. All right, what's the easiest way to do this? Isn't that lovely? F11. Jeez, it's not very big, is it? Maybe I've shown you all of it. Ah, okay, I did some more afterwards. So effectively, let's look at this one: that is the build, and the last thing there is telling the image what to run by default. We can override that and so on. But really, that's it. I think that's more useful than the 10-page document, because it'll work every time, and if it doesn't, then hopefully it's because I've got a bug in my repository. Okay, so the fun thing about that is you now have a build with all the prerequisites you need on Debian; you can do the same for CentOS and whatever. And you can run it: you can see the run command there setting it up on port 8080 on your local machine. In this case I've decided to start it manually, because I was probably debugging something. Does anybody have any questions at this point? [Audience asks about running docker without sudo.] Yes, and that's because I've added my user to the docker group, because I hate typing. I still don't touch-type properly, so shorter is good. [Audience asks about the security implications.] Okay, imagine this: anybody who can contact the Docker daemon, in whatever way they can, will be able to spawn privileged processes on your computer. Do you want that to go away? Or would you like root to be the only one who can do root things? You scare me. [Audience asks whether a non-root user could safely be allowed to run containers.]
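The Dockerfile on the slide isn't in the transcript, but based on the description it would look something like this sketch; the exact prerequisite packages, httpd version, and download URL are my assumptions, so check Apache's install docs for the real list:

```dockerfile
# Official Hub GCC image: Debian plus gcc, make, and friends (~1.2 GB)
FROM gcc

# Prerequisites the build document only vaguely mentions
RUN apt-get update && apt-get install -y \
    libapr1-dev libaprutil1-dev libpcre3-dev

# Fetch and unpack the source, so no git clone is needed beforehand
RUN wget http://archive.apache.org/dist/httpd/httpd-2.4.12.tar.gz \
    && tar xzf httpd-2.4.12.tar.gz

WORKDIR /httpd-2.4.12
RUN ./configure --prefix=/usr/local/apache2 && make && make install

EXPOSE 80
# Default command; can be overridden at docker run time
CMD ["/usr/local/apache2/bin/httpd", "-DFOREGROUND"]
```

Built and run with something like `docker build -t apache-dev .` and `docker run -p 8080:80 apache-dev`.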
No, I don't know, because it's actually kind of a bad idea. It's possible that somebody could write the code, but I don't know enough about how you would make sure we restrict all of the things you could possibly do. Possibly with, not 1.5, but 1.6, I'm assuming user namespacing will actually get into Docker. At that point it might be possible for a non-root user to have enough privilege to start a Docker container as themselves. But you can still bind mount volumes, and you could probably still link to a volume in another container that gives you more privilege. It just strikes me as really messy and really scary. But yeah, with sufficient code, I'm sure somebody could make something that looks like it works, and hopefully it won't kill us. [Audience: You were building Apache beforehand, and now this Dockerfile builds Apache inside the container. Is there an advantage to one or the other, or is one way generally recommended?] Okay, well, for me there are a number of advantages to using the Dockerfile. Firstly, I don't have to read the 10-page document and hope I get all the bits. Secondly, I hate having to install stuff on my local box; if it's in a container, once the container is removed, you don't have it installed any more. To me, those are the minimum advantages. You also get the fact that this container is unlikely to break out without some extra effort. So I don't want to go back to the old way, especially when I do a Java thing, because I don't want Java on my box. Other than that, because a running container is essentially just a thin wrapper around processes running natively, there aren't very many downsides, especially at a development stage like this. Oh, there's more? Now I'm going to have to rush. [Audience: You just mentioned Java. Do you have any advice on keeping the size of a Java container below a couple of GB? Last time I built one, I ended up with a 4.9 GB container.]
How do you make a Java result, a WAR file or whatever, that isn't ginormous? Okay, but if your development or build image is four-something GB, when is that actually a problem? It's not a problem while it's on your local machine, is it? Exactly. So don't move it around. The slim-containers talk basically covers this. What you do is have your build container generate a tar file, or whatever, of your result: your WAR file. Then the next step in your build is to take that out of the build container and make a new lightweight container that has only what you need to run the sucker. And that's it. Yeah, I know, I've talked to people, and you go: that's scary. But if your WAR file itself is two GB, then Docker's not going to help you make it smaller; what you need to figure out is how to transport it around. Your build artifact is probably the cheapest thing to move around, so if your Java runtime image is the base image for what you ship, and the only layer on top of that is your build artifact, then it's as small as you can make it. After that, it's a Java problem, and I would love to know the answer, but I know so little modern Java that I feel underqualified. [Audience: At the top of this Dockerfile you're already running apt-get, and I'm a bit of a Docker noob. How does it get Debian? Is there something missing from this file?] The gcc image is a Debian with GCC, make, and a few other things installed, then compressed into one layer. So it's masses of cheating, because I can't... all right, I'll move on, because I am going to run out of time. Okay. So we can run this sucker, it's great. And then we make the slim container by extracting just /usr/local/apache2 and building a new container that contains nothing but, in this case, CentOS as the base and the tar file result as the second layer. So instead of a 1.4 GB image, we get a 280 MB one.
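The exact commands weren't shown, but the build-then-extract flow described here might look roughly like this (image and file names are illustrative assumptions):

```shell
# 1. Build the fat image that compiles Apache (the gcc-based Dockerfile)
docker build -t apache-build .

# 2. Extract just the installed tree as a tarball (the build artifact)
docker run --rm apache-build \
    tar -C / -cf - usr/local/apache2 > apache2.tar

# 3. Build a slim runtime image whose only extra layer is that tarball.
#    Dockerfile.slim would be just:
#      FROM centos
#      ADD apache2.tar /
#      CMD ["/usr/local/apache2/bin/httpd", "-DFOREGROUND"]
docker build -t apache-slim -f Dockerfile.slim .
```

The slim image's size is then the base image plus the artifact, and nothing else.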
If I had chosen Debian instead, it would be 120 MB, because the Debian base image is 85, that sort of thing. And in the slim-containers talk, nginx on my cut-down Debian was, I think, 60 MB. So that's another reason to maybe learn how to use nginx. Okay, the second example is one where I wanted to test something. There's an awful lot less mucking around getting prerequisites here, because there's that `cpanm --installdeps .`: just like Ruby, Perl, Go, you name it, it grabs the dependencies out of the project itself. And in this case I'm not doing a wget, I'm actually copying: the `COPY .` copies the current directory and its subdirectories into my container image. What do I want to talk about here? The problem with the way I've done this is that, because I put the `COPY .` at the beginning, if I change any file in my source it rebuilds from scratch. If I were a sensible person, I'd copy just the file my dependency list comes from, then run the installdeps, and only later bring in the rest of the source code. That way my dependencies would be cached, and if I then changed some source files, rebuilding this image would pretty much just go cache, cache, cache, cache. Okay, now I've got to kick off a build and keep going. I care about this because I live in Australia: I have sod-all bandwidth, and what I do have I pay for through the nose. I hear in New Zealand it's worse, whereas my American friends look at me and go, why do you care? The Internet is fast. Again, we throw this in the repository and we end up with something where somebody new to this module can come along and go: I don't know what I need or how to do anything, but I can just git clone and then docker build it, tag it or not.
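A sketch of the caching-friendly ordering described above, assuming a cpanfile-based Perl module (paths and base image are illustrative):

```dockerfile
FROM perl

WORKDIR /module

# Copy only the dependency list first, so this layer (and the slow
# dependency download below) stays cached across source edits
COPY cpanfile /module/cpanfile
RUN cpanm --installdeps .

# Now bring in the rest of the source; editing source files only
# invalidates the layers from here down
COPY . /module

# Run the tests as part of the build, so a successful build
# means a working module
RUN perl Makefile.PL && make && make test
```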
And in this case, I've made it run the tests during the docker build, so that if the build succeeds, I have a module that is considered working. Whereas, I can't remember with the other one... yeah, I'm not running the tests there. Terrible, man. Okay, I'm going to skip this one. This is a Fig thing, and I don't use Fig enough. Docker is great for starting one thing, and you can link multiple containers from the command line, but this example is basically just a Fig file that tells it to use the WordPress image and the MySQL image and to link the two together, so your WordPress container gets started knowing how to talk to the MySQL database container. That makes the orchestration, or composition, part about as simple as using Dockerfiles. And we bought these guys, and they're cool. Okay, now for my little micro demo, because I've used up more time than I wanted. He says, crossing fingers and hoping it works first time. Oh look, what I have here is a containerized X Windows. Whoa, that's not wanted. Let's make this look familiar to everybody, because everybody loves Windows 95. Okay, all I need to do is... and here we go: I'm starting a second container, NetBeans, containerized. This is about as close to my machine as I let Java get, because it's in a container in a Docker image, and if I delete the Docker image, my machine no longer has Java on it. Thank goodness. But the thing I wanted to show about this is that it means that in your company or your group, you can have a set of Dockerfiles that you give to everyone, or you can just get them to pull the right image. That way, instead of setting up a new computer and getting all this stuff working, you can just go docker run your dev environment, or fig up your dev environment, and there you have your tools, your prerequisites, everything you need to develop, test, debug, you name it. Okay, well, any questions? I have time for one. Two.
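The Fig file he skipped over would look something like this minimal sketch in Fig's YAML format (ports and the password are placeholder assumptions):

```yaml
wordpress:
  image: wordpress
  links:
    - db:mysql
  ports:
    - "8080:80"
db:
  image: mysql
  environment:
    MYSQL_ROOT_PASSWORD: example
```

A single `fig up` then starts both containers with the link wired up.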
[Audience: This is probably a silly question, but with the NetBeans example, would it have access to your other local filesystems? Say you've got your various development repositories in another filesystem: can you bring that up and still have access to the workspace? Eclipse, for instance, keeps its workspace in a separate place, that kind of thing.] Yes. You'd use your Fig file, more than anything else, to set up the volumes to talk to either your hard drive or a network mount or whatever, and then orchestrate that so the containers talk. But that's something you'd do once for everybody in the team. So yes. [Audience: A question about the philosophy of Docker. In a production environment, how do you fit configuration management into it, or do you prefer the philosophy of just nuking containers every time you want to make a change? And is CoreOS pretty much the only choice for a base OS?] No, CoreOS is definitely not the only one. An awful lot of people I know use Ubuntu; I use Boot2Docker because it's a 24 MB Linux distro. As far as ops and deploying real things, I'm not the right person to ask; I'm at the developer end, so I do prefer to blow everything away and start again. Yeah, I'll punt on the ops bit. As far as the OS goes, CoreOS makes a set of decisions; if those decisions suit you, there's obviously no reason not to use it. Personally, I like Boot2Docker because it's just Linux and Docker, and everything else is in a container. I think Rocket is in part attempting to make that way of working even better, because what they want to be able to do is run their services in containers before the container system is ready. So, yeah, there's no one answer, sorry. [Audience: My first use for Docker would probably be to replace Vagrant as my one-command dev environment setup for someone, with a shared directory on both.]
How do I do that in 15 seconds? What OS is your poor developer using? Okay, if your developer is on Linux, then obviously you apt-get install Docker. Your 15 seconds is almost up. You'd want to install Fig, then git clone the repository that contains your Docker bits, and then you should be able to go fig up, and it will start all the containers with all the bits linked together, and you're done. [Audience: So what's the missing ingredient?] Fig. Yep, go to fig.sh. It's effectively, well, it is going to be called Docker Compose in a few months' time, when they rename it, because it's the composition element that's coming out. If you're not on Linux but running OS X or Windows, then you'd add Boot2Docker right now, and eventually Docker Machine, which will start up the virtual machine, or talk to a cloud service provider and create a virtual machine that runs Linux with Docker, and then you can do everything that way. But, yep, there are at least five steps, unfortunately. Yeah, it does, yeah. Well, thank you very much. Thank you very much.
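Spelled out, the Vagrant-replacement recipe for a Linux developer is roughly this (the repository URL is a placeholder, and package names vary by distro):

```shell
# One-time setup on a Debian/Ubuntu dev box
sudo apt-get install docker.io
sudo pip install fig          # later renamed Docker Compose

# Per-project: clone the repo that carries the Docker bits...
git clone https://example.com/your/project.git
cd project

# ...and bring up the whole linked dev environment in one command
fig up
```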