How's everybody doing? Are we awake after lunch? Are you sure you're awake after lunch? Maybe. Are you sure? Yes. Only a couple yeses. Let's get everybody to stand up for a second. I'm just kidding. Just sit there. We're done. Now we're awake.

So I'm Brian, also cpuguy83 on Twitter and on FreeNode, and you can read my blog at container42.com. And I'm here to talk about awesome things you can do with Docker. But before I can really talk about that, you should know what I think is awesome. So, in no particular order: Doctor Who is freaking amazing. My favorite show on TV. If you don't know what Doctor Who is, you should go look it up. I actually started watching it because I was watching The Big Bang Theory and Sheldon mentioned it. I figured, oh, Sheldon mentioned it, let me go check it out. So I did. It's a show about a madman in a blue box who goes around and saves the universe with a screwdriver. A sonic screwdriver. That should be enough for you to go check it out. Sonic screwdrivers.

Star Wars. I love Star Wars. Even episodes one through three. They're terrible movies, but they're still Star Wars, so they're awesome. And if anybody at Disney is watching, I expect the next ones will be much better.

And my three beautiful little girls. The baby there will be one tomorrow, so we'll have a nice little birthday party for her; I actually have to skip out right after the conference so I can fly home. And then my two oldest there: the one in the center just started kindergarten, and the one on the right-hand side, Amelia, she is two. And she is very two. If anybody's had a two-year-old, you'll know that if a two-year-old says something and you do not acknowledge them, they will repeat that same thing over and over and over again until you say, oh, really? And they'll be like, yeah. Only my two-year-old, this one in particular, does not do that. She will wait until you actually repeat back what she said to you. And that is a chore, because she is almost impossible to understand. When she wants a kind of cereal, she says, "I want cacacacas," and I've learned that means Cap'n Crunch. So there's a language barrier to get through just to get her to stop repeating herself.

And of course, the whole reason I'm here is to talk about Docker. You saw those other amazing things, and that's just to show you how amazing I think Docker is.

So in case you didn't notice, that's me. Everybody else puts their picture on their own slide, so I figured I would do it too. I don't know why, because you can see me. I have been using Docker since 2013, so I guess it's been about a year now. Basically, I was using it at my old company, where we were having all kinds of problems; in particular, we had very limited resources to deploy services on, and many services to deploy. In addition, I was the only person in the company doing any sort of tech work. I was doing sysadmin work, I was doing development, everything. So if something happened to me, there would be nobody able to reproduce that IT infrastructure. One of the things I set out to do was to make a reproducible environment. The first thing I came to was Chef. I got deep into the Chef community and made some pull requests to Chef itself, but eventually decided it wasn't really solving my problem.
Yeah, Chef could make reproducible environments, but we also had other issues, in particular density, and making it easy for a non-developer, someone who does not have an engineering degree, to actually work with those Chef cookbooks and have any idea what they mean, because they're kind of ridiculous. No offense to any Chef people here, but if you've seen a Chef cookbook, they are crazy.

I became a Docker employee in June of 2014, pretty much because when I discovered Docker, I had initially dismissed all the hype around it. Docker came out last year; it's been about a year and a half now. There was a lot of hype, and there still is a lot of hype, around Docker. Things like that I tend to ignore, because there's always a lot of hype around all kinds of products and they never deliver on their promise. But what I found when I did try Docker was that it did exactly what I wanted it to do. And it did it so well that I decided I needed to go join the team making this awesome product. I am a maintainer of just the most minuscule, tiny little piece of the Docker code base. So it's not that amazing, but I at least get to put it on my slide deck. And I have used Docker in production environments, so my experiences with Docker are not just theoretical, unlike a lot of people who are just trying Docker out or saying, oh, we're going to put it in our test environments and what have you. I've actually used it in production.

I'm also a crazy, insane introvert. So if you want to talk to me, you're going to have to come and talk to me; I'm probably not going to come talk to you, no offense to anybody here. I do enjoy talking to people, but I'm not particularly outgoing. So if you do want to talk, feel free to come over. I may be brief, I may be terse, because that's just who I am, but I really am interested in what you have to say and any questions you have. Especially if you want to talk about Doctor Who, Star Wars, or Docker.

So probably a lot of people here — I know even some of the people at the speaker dinner last night — were wondering: what is Docker? How many people have heard about Docker? Everybody. How many people have used Docker? Maybe a third. Maybe less than a third. How many people have been onto Docker Hub? About half of that third.

So Docker is essentially an answer to this. This is a crappy way of life for everybody in our industry, be it developers or sysadmins. Every software deployment ends this way. Or not ends, because you've got to fix it, but every single software deployment goes through this. Every single iteration, every version, this happens. And it's crappy. Depending on where you work, maybe the sysadmins feel like the developers are their overlords who just hand them off crap, and they feel like they're being oppressed. Or maybe it's the other way around, where the sysadmins are battle-hardened and have decided they're going to dictate to the developers how to develop software to make deployments easier. And to be honest, every time I see this, I picture that girl in the picture going, ha, ha, ha, like she did it on purpose. And sometimes that can be true: after many deployments, there's animosity between developers and operations. And unfortunately, at my last company, InView, I was both. So I wrote some software.
And I handed it off to the sysadmin — me — to deploy. And every single time, it blew up in my face. Even though I knew how I developed it and knew exactly how to deploy it, still somehow something went wrong: I was missing some dependency on my server, what have you.

So Docker is for deploying basically anything. It doesn't matter if it's a SQL server or nginx or Apache or just a set of static files. It can be a build system. It really doesn't matter: it is literally for deploying anything, in almost any location. With the caveat that it's got to be Linux. Docker is currently Linux only, although there is some work being done on getting it running on Solaris Zones and BSD Jails; but that's a whole other topic. You can run Docker on VMs or on bare metal. You can run it on any IaaS, such as EC2 or GCE or IBM or insert-whatever, or any VPS, like DigitalOcean or Linode. And it can run on virtually any distro — virtually, because there are actually some bugs in the Linux kernel that we found that weren't resolved until about 3.8. Although Red Hat has patched their 2.6.32 kernel, so you can run it on Red Hat Enterprise Linux or CentOS or Oracle Linux with that kernel version. And some people have gotten it running on ARM, in particular the Raspberry Pi, but that's completely unsupported; it's up to you to try it out. It's really cool to be able to run Docker on a Raspberry Pi, though.

Basically, if it works in one place, it's going to work everywhere else: in production, on a contributor's laptop, scaled across an entire cluster in an auto-scaling group, wherever. It will work on two developers' laptops, and if somebody discovers a bug, those two developers should be able to reproduce that same bug every time without issue. But how does this happen? We've heard this mantra before; I'm sure everybody has.

So here's the problem. This is why everything blows up every time we do a software deployment: we have all these services that need to be deployed across every single one of these mediums. You've got a public cloud, a production cluster, a contributor's laptop, some disaster recovery site, everywhere. We call this the matrix from hell, because it is impossible to make everything work everywhere all the time.

As it turns out, there is another industry, the shipping industry, that had the exact same problem, where maybe they need to ship a sack of beans alongside a barrel of chemicals, and it's got to go across the ocean, so they've got to put it on a boat. The dock workers have to know exactly what's going on inside each of those: what's in the sacks, what's in the barrels, what's in everything. And they have to make the determination of what can sit next to what, so you don't have contamination. In addition to that, it's also their job to make sure they can pack as much as they can onto those boats in order to save on shipping costs. This is also a matrix from hell: they have to ship each one of these items across all of these things, because it's not even just a boat; it's going to be trucks, it's going to be trains, it's going to be all the other tiny little graphics on the slide, some of which don't really make sense. And the solution to that is a really simple one: a box. A big metal box. I don't know if any of you remember minivans before there were two sliding doors? Back in the day, there was one sliding door.
Then all of a sudden, maybe 10 or 15 years after the minivan was invented, somebody thought: oh, a second door. The shipping container is the same kind of thing: let's just put everything in a box. What this box does is provide a common interface. The shipper doesn't have to know anything about what's on the inside. The person shipping the goods can put whatever they want in there and know it's not gonna have any interference from the outside. And likewise, the shippers can stack these things up next to each other like Lego blocks and not have to worry about contamination. Or density, because these things stack right up, just like Lego blocks. I swear that's my only slide animation.

So, containers. It turns out you can do the same thing with software, and that's what Docker is: a container system for software. You can put your nginx inside a container and ship that container across all these different platforms, and it will work the exact same way every time. Because you're not just shipping code, you're shipping an environment. And so we're able to package all these things up into containers and distribute them across each of the platforms.

Some people may be thinking that's kind of like a VM, right? There are things like Packer where you can take a VM, pack it up into an image, and ship it around. So here we have the comparison of a VM to a Linux container, which is what Docker uses. In a VM, we have the server hardware; we have a host OS such as Ubuntu; then you have a hypervisor such as KVM or ESXi; then you have your VMs, which all have their own guest operating systems, their own libraries, and then the actual application. That is a lot of stuff just to run an application. And generally we probably want each of these VMs to be exactly the same. It doesn't always work out that way, just because of life, and install scripts that don't pin the right versions, and what have you. But this is all duplication. It's massive duplication: a VM image is tens or even hundreds of gigabytes in size. Whereas with containers, we strip away the hypervisor and we strip away the guest operating system, and we just share those resources from the host among all the containers. So we have all these nice isolated environments, all with a common interface through Docker. I know some of this sounds like marketing speak, because it's kind of a 101, and it kind of is marketing speak, but it's also kind of not, because this is exactly how it works.

Each container gets its own process space and its own network interface. You can run things as root. You can limit resources just like you would in a VM. The difference is you share a kernel with the host. There is no emulation, so paravirtualization or hardware virtualization is gone. In addition to that, you can even limit the capabilities of the root user inside the container, so root isn't quite as dangerous as a normal root user would be: things like adding device nodes or manipulating network interfaces across other containers, what have you. These are all very dangerous things to allow code to do inside a container, and the idea is we want to be able to run untrusted code inside a container, so we limit root's capabilities.

So we're gonna do a demo. Let me clear this out and put this over here. We are gonna tell Docker to run this image called hello-world. And I already had it pre-downloaded, so let me remove the image.
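For anyone following along at home, that demo boils down to roughly these two commands (hello-world is a real image on Docker Hub; output trimmed):

    # remove the locally cached copy so you can watch the pull happen
    docker rmi hello-world

    # the daemon pulls the image from Docker Hub, creates a container
    # from it, runs its process, and streams stdout back to the client
    docker run hello-world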
Yes, I can, just a second. So I'm gonna remove that image so you can actually see what's happening. Now I've removed this hello-world image from my system, and we are going to tell Docker to run it again. What's happening here is it's saying: oh, I don't have this hello-world image, so I'm gonna go out to the Docker registry and download it. And hopefully, if the internet works — there we go.

So basically, here's what happened. I am running a Docker client on my Mac. It is talking to a Docker daemon inside a virtual machine, because Docker is Linux only; or at least the daemon part is Linux only. The client told the daemon to go and download the hello-world image from Docker Hub. Then the daemon created a new container from that image and executed the process that gives this output. And Docker streamed that output, that standard output, back to my terminal through the client. Which is pretty freaking amazing, when you figure this could very well be an instance on DigitalOcean or whatever, and I'm still running the Docker client on my Mac.

And actually, I forgot I was gonna do one more thing; it says to try this next thing. So there we go: I just booted Ubuntu in an instant. I can apt-get update, if I type it right. Yes, still can't type it right. There we go. You can see I've got apt; I've got access to the entire Ubuntu package repository. This is an Ubuntu image that I just booted up in an instant. And I can exit out and do it again, just in case you missed it: docker run ubuntu, and I just booted Ubuntu. That is a different container when I run it again. So we just created two completely different isolated environments. And I can apt-get install vim-tiny, and there we go, I have vim installed. So you can see I have a full Ubuntu environment right there.

I mentioned that we downloaded the hello-world image from the Docker index, which is this. It's known as Docker Hub, and it has all kinds of different applications on it, including images for things that we maintain ourselves, and even things that are maintained by open source contributors; the CentOS image, for instance, is actually maintained by the CentOS folks. So basically I can come up here, go to my terminal, and say docker run redis, and I have an instant Redis instance; same thing with MySQL or nginx or whatever. Or, if you're feeling particularly masochistic, you can do MongoDB or WordPress. There are over 30,000 Dockerized applications on Docker Hub. Now, a Dockerized application is pretty much the same thing as a normal application; it's just been configured to run within a container. I say that, and I will show you a Dockerized application in a minute, but just know that there is no modification you have to make to the application to make that happen. Docker Hub has private repos and automated builds and all these things. It's pretty much like GitHub, but for Docker. It's not a competitor to GitHub; it's entirely for managing and sharing Docker images and what have you.

So, more on what Docker is. Docker is an engine, which is the daemon that I talked about earlier. The repo has over 15,000 stars on GitHub and over 600 contributors. It is written in Go, and pull requests are welcome. So let's talk about building an image. This right here is what we call a Dockerfile. A Dockerfile pretty much automates building an image.
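Typed out, that whole Ubuntu sequence looks something like this — a sketch, assuming the stock ubuntu and redis images from Docker Hub:

    # an interactive shell in a brand-new Ubuntu container
    docker run -it ubuntu bash

    # inside the container: the full Ubuntu package repos are available
    apt-get update
    apt-get install -y vim-tiny
    exit

    # run it again: this is a second, completely separate container
    docker run -it ubuntu bash

    # any image on the Hub works the same way
    docker run redis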
So the weird thing about images is that images are created from containers, but containers are created from images. Basically everything starts out from what we call a scratch image, which is just an empty file system. Then we can load a file system onto it using anything; we could use debootstrap to bootstrap an Ubuntu file system and pop it right into the scratch container, but that's getting a bit advanced for this talk. Here we're saying we want to use the Ubuntu 14.04 image, then we're going to run all these commands, and we're saying: when I start a container, I want it to run nginx in the foreground, and we're going to advertise that we're listening on port 80. This is really hard to do with two screens, and I lost my mouse. So we'll get out of there.

So let's go ahead and build that Dockerfile. Yeah, it's smaller, sorry. So we're going to — hopefully that's all typed right — tell it to build an nginx image. Now, one thing about Docker is we pretty much expect all services to run in the foreground. The basic idea is you run one service in a container. You can run multiple if you like — you can start up your own init system, like runit, or even fire up systemd within a container — but what we generally practice is running a single process inside a container; it's a lot easier to monitor that way. So I'm gonna tell it to build this image, and there we go, I just built an image. And I'm gonna tell it to start up a container and expose a port, and hopefully — yes.

What the build system does is read the Dockerfile and check a set of cache layers, which I can show you in just a second. Basically, each of these entries is doing a commit, similar to a git commit. So this is saying it's running the instruction in a container, and then it's going to commit an image; that's the image ID it created; and then it removed that intermediate container. Where's the cache? There it is. Here it's saying "Using cache": it already has an image layer that is Ubuntu 14.04 plus that particular instruction just below it. So if you ran this again, it would use the same cache layer every time until you change something, either about Ubuntu or about that line. If you manipulate something in that line, it'll rerun from there. Or you can tell it to build with no cache. But ideally, if you're in production, you want to pin an nginx version so you don't screw yourself over at some point.

So as you can see, I am running that here now. I exposed port 8080 and am forwarding that to port 80 inside the container. And here is my Docker VM instance on port 8080, and we see "hi." And I messed up one of my commands — there we go.

What's happening here is that every one of these entries is creating a new commit to an image, and we use copy-on-write layers. So for instance, say you have 1,000 containers all running on top of the Ubuntu image, which is like 250 megabytes. I should say that again: the Ubuntu image is 250 megabytes. If you go and download the ISO off of Ubuntu's website, it's like 700 megabytes just for the ISO, which is compressed and what have you. This is 250 megabytes for the Ubuntu image, because we don't need all the extra stuff provided by a traditional Ubuntu installation, like network services and what have you; that is all shared from the host. So each one of these entries is creating a commit layer, which is a write layer on top of the image.
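The Dockerfile on screen is along these lines — a minimal sketch assuming Ubuntu's nginx package and its default document root, not the exact file from the demo:

    FROM ubuntu:14.04
    RUN apt-get update && apt-get install -y nginx
    RUN echo 'hi' > /usr/share/nginx/html/index.html
    EXPOSE 80
    # run nginx in the foreground: the container's main process
    # should stay attached rather than daemonize
    CMD ["nginx", "-g", "daemon off;"]

And the build-and-run part of the demo is roughly:

    # each instruction commits a cached layer
    docker build -t mynginx .

    # map port 8080 on the host to port 80 in the container
    docker run -d -p 8080:80 mynginx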
And doing that, we're able to share the same Ubuntu image across multiple containers. So you can use that same 250 megabyte image 1,000 times, and we're not using 250 megabytes 1,000 times; we're using 250 megabytes once. Then, when we create a container, we put a write layer on top of that image. So when you write to a container, it's not manipulating the image, unless you go and tell it to commit back to that image. I don't know if that makes sense, but it probably will if I do a demo.

So I could really screw up the Ubuntu base image that I have stored on my laptop here by saying docker commit — I'm basically telling it to grab that last container ID for me automatically — and committing that to ubuntu:14.04. So now we've really just messed up that ubuntu:14.04 tag. And if I run it again, it will be running that screwed-up version even though I just told it ubuntu. So I kinda screwed up my Ubuntu image. But that's okay, because I can go and just re-pull my Ubuntu image. That might take a few minutes.

This is all based on what we call the official images on Docker Hub, where Docker maintains the Ubuntu image. So if you trust Docker, you should be able to trust the Ubuntu image. And these get built nightly, so they're always up to date. For instance, if there is yet another issue with OpenSSL, we will automatically rebuild the image and make sure it's got the right libraries in there.

So there are really two ways to build an image. You can run a container, run some commands inside the container, and then commit that container back to an image. Or you can use a Dockerfile, which is the preferred way, to do that automatically. You can share a Dockerfile with anybody and they should get the same exact results no matter where they are. If it worked for me, it will work for you.

Boot2docker. Boot2docker is a very minimal Linux distribution, only 27 megabytes. It boots very quickly, and it has an installer for dev environments. For instance, I'm running boot2docker here, where it comes with a special command line so I can SSH into it, and it sets everything up for me automatically. And it has a Dockerfile. Normally, if you wanted to go and hack a distribution to make your own, that would be a real pain in the butt, but for boot2docker you can just go and modify its Dockerfile and do that yourself. Which I did. So for instance, here is the Dockerfile for building boot2docker. This is loading up Debian and all these build dependencies and what have you, and eventually what we're doing is creating this very tiny ISO, built off of Tiny Core Linux. The ISO gets outputted to standard out when you run it, and you save that ISO for yourself. What this is really doing is using Docker as a build system. We're not running anything here; we're just using this Dockerfile to build this ISO, so you can grab that ISO and run it outside of Docker. And for instance, I have a need to access the files on my Mac through boot2docker, so I've modified boot2docker to use the VirtualBox Guest Additions so I can do folder sharing. And that was super easy to add; that right there is the only code I changed. In addition, you can also do some crazy things: I also did the same thing, but through a container.
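That commit demo, roughly sketched — `docker ps -lq` is just a convenient way to grab the ID of the most recently created container; the change itself is a stand-in:

    # make some change in a container (here: an arbitrary file)
    docker run ubuntu:14.04 touch /oops

    # commit the latest container's write layer over the ubuntu:14.04 tag,
    # "screwing up" the local copy of the base image
    docker commit $(docker ps -lq) ubuntu:14.04

    # recover by re-pulling the official image from Docker Hub
    docker pull ubuntu:14.04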
So you have a normal installation of boot2docker, and you run this image that I have, with these parameters, and it will actually go and build those kernel modules and inject them back into the boot2docker instance, which is kind of like dynamically including modules. I couldn't do that straight on boot2docker, because there's no tooling: boot2docker is essentially just the Linux kernel and Docker and some very minor supporting services.

So we recently added container restart policies. One thing a lot of people do is use something like Foreman or Monit, which are both terrible, or more recently Upstart or systemd, to monitor their processes and make sure they stay running; if a process crashes, systemd or Upstart or whatever will restart it. You can actually hook those things up to Docker if you want and have those tools manage your Docker containers for you, but with restart policies you don't need to do that, and it's actually a bit more efficient. When a container crashes and systemd is going to restart it, what basically happens is Docker has to tear down that whole container environment, all the network interfaces and everything, and then when systemd says to restart it, Docker has to recreate all those things. But with Docker's built-in restart policies, Docker knows the process crashed, so it just re-initializes that process for you instead of tearing things down and rebuilding them. The modes supported are always, on-failure, and never. So basically, if something exits with exit code zero and you have on-failure, it will not restart; if somebody kills it and you have always, it will always restart; or you can have it never restart.

Automated builds are really, really awesome, and it turns out you can do a lot of really cool things with them. I've got very many GitHub repos where I have linked a Docker Hub image, so that any time I commit to my GitHub repo, it sends a hook to Docker Hub; Docker Hub pulls my repo and builds the image for me, and then I can go and pull it. That's really great when you have a team of people: you don't have to build the image multiple times, you just have to download it. And we also have webhooks. Basically, this enables you to automatically deploy things, for instance on git push. If I push to my GitHub repo, GitHub notifies Docker Hub that I have made an update; Docker Hub goes and pulls those changes, builds the image, and then notifies my server, saying: hey, there's a new image, what do you want to do with it? For instance, my blog, container42.com: I've got it set up — I'm using Jekyll — so I can write a new post, push it up to GitHub, it goes to Docker Hub, Docker Hub comes back, and boom, I have an updated site. All I had to do was push a new post to GitHub. You can use automated builds for tests, too: you could technically run your test suite in a build if you wanted to, and if the build fails, you get notified that it failed, so you would know your tests failed. Which is really cool. Or any number of other things; once you start using it, you'll each find your own things to do with it.

This is a really cool project that we just saw a demo of at the office.
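On the CLI, the three policies look something like this (`myimage` is a stand-in for whatever you're running; the policy the talk calls "never" is spelled no on the command line):

    # never restart (the default)
    docker run --restart=no myimage

    # restart only if the process exits non-zero;
    # an optional cap, e.g. on-failure:5, limits the retries
    docker run --restart=on-failure myimage

    # restart no matter how it exited, even if it was killed
    docker run --restart=always myimage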
Nucleotid.es: basically, they catalog genome assemblers for scientists so they can rate them. There is a really difficult problem in their industry: there are all these different genome assemblers that are super difficult to build, because these are scientists cataloging genomes, not computer scientists, so they don't know how to build their tools. What these guys do is take all these genome assemblers and make Docker images out of them, so someone can just docker pull the image, run the assembler, and get their results without having to handle the build themselves. A really cool demo.

So I had a Rails dev demo — unfortunately, pretty much all my demos broke on me at the last minute — but basically it's a Rails project I created to demo what you can do with Rails as you move from a development environment to production. It has certain optimizations in the Dockerfile to take advantage of the build caching that's available. You use it to do your development, you use it to run tests, and you use the same exact image for production. Let me see if I can pull that up. Are there any Rails developers here? A lot. Okay. So here we go: we have our Dockerfile here at the base of the repo, with all the instructions: install these dependencies — libssl, sqlite, what have you — and add our directories. Instead of adding everything all at once, these are all build-time optimizations. And then we do things like this: when I want to start my background worker, I run this image, only I say to use Sidekiq; or if I want to start my app server, I tell it to start Unicorn. But I'm always using the same exact image; I'm not building separate images to do these different things, because it's all the same code base. Here I'm telling it to start bash, and so I have bash; or I can say apt-get update, and it's gonna run apt-get update and then destroy the container. It's pointless, but the idea is you run these commands with whatever you want it to start up, so you have the same exact image everywhere. It doesn't matter if you're running tests, it doesn't matter if you're deploying the app server or nginx or whatever; it's all the same exact image, so you don't have discrepancies between environments. So, what is Docker?
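The pattern here is just overriding the command at run time — a sketch, with `myapp` and the specific commands as stand-ins for the demo project's:

    # background worker
    docker run -d myapp bundle exec sidekiq

    # app server
    docker run -d -p 8080:8080 myapp bundle exec unicorn -c config/unicorn.rb

    # an interactive shell, or a pointless one-off command
    docker run -it myapp bash
    docker run myapp apt-get update

    # the test suite, from the very same image you ship
    docker run myapp bundle exec rake test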
It's primarily a tool. It's for developers. One of the best use cases I have as a Ruby developer: I don't have to use RVM or rbenv or chruby, all these various tools invented for managing Ruby environments. All I have to do is say docker run ruby and I have a Ruby environment. I can specify docker run ruby 2.1.2, or 1.9, and I have a fresh Ruby environment. Likewise for Python; I'm sure Python has the same version-manager issue. So you don't muddy up your development laptop with all this version-manager crap, or even have to install all these C extensions and what have you on your development laptop just to make stuff work.

It is for sysadmins, so that you can take that box — like the shipping container — and plug it into your infrastructure. You have a set of Lego bricks that you can put together. The developer writes their application, they put their stuff in a box, they hand you the box, and you just plug it in. You have a MySQL database, you have a Redis database, all set up; you plug it in. As a sysadmin, the developer doesn't have to know about your infrastructure at all.

So it is for building systems, and it's for change. When I discovered Docker, it started to change the way I felt I was doing development. Things like being able to do docker run ruby are really powerful: as a developer, I get to skip all the cruft that people have to deal with in managing dependencies on my laptop versus in production or on CI or what have you. And for sysadmins it's the same thing: you're no longer worrying about configuring applications or configuring services when you're pushing to prod; you're just worried about deploying a container. You say, I have this container, I have it in staging, I want to pick it up and move it over here. These two things, development and system administration, are intertwined: they're two things, but at the same time they are the same thing. You cannot write a piece of software without also having to actually deploy that software. And so it's about working together, so that this becomes this. Any questions?
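For instance — a sketch, assuming the official ruby and python images on Docker Hub with these version tags:

    # a throwaway Ruby 2.1.2 environment, no RVM/rbenv/chruby needed
    docker run -it ruby:2.1.2 irb

    # a different Ruby, side by side, with zero conflicts
    docker run -it ruby:1.9.3 irb

    # the same trick for Python
    docker run -it python:2.7 python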
You mentioned that it's recommended that a single application be deployed within a single container. How do you put together a more complicated environment, say multiple web servers serving multiple instances of your application?

So that's a really hot topic. The word "orchestration" gets thrown around a lot in the industry, and there are a lot of people trying to solve that problem. Docker does it to an extent, although it's a little limited at the moment. We have this feature called links, where you tell your back-end application that you need a MySQL database: you fire up a MySQL database container, and you use the link feature to say, hey, I've got this MySQL container over here. It will propagate some environment variables and host-file information about that container into your application's container. So the developer can just say, I have my database set up to connect to the db hostname, and Docker will handle that through aliases. I would say --link mysql:db, and it sets up a link from the MySQL container to an alias called db inside the application container.

The question was: do links work across multiple hosts? They only work across one host. However, there are patterns, such as the ambassador pattern, where you set up an ambassador on one end that knows about the service on the other. A load balancer would be a good example: the load balancer needs to know about some service over on this other host, so you would set up an ambassador locally that knows about the information on that remote host, and you would do the linking between that ambassador and the load balancer. Does that answer your question?

Yeah. You mentioned the promise of Docker is that if it works on your laptop, it'll work everywhere. Obviously that's an oversimplification: things in the container, in the virtual image, need to communicate with other stuff. For example, if my laptop is on the VPN and it can talk to our production servers, and then I deploy it to a new AWS instance, it's not going to work. Since the idea of an image being self-contained is so part and parcel to Docker, how do you communicate to people: hey, this is isolated, but, asterisk, here are the ways it needs to communicate with the outside world?

So generally what we follow is Heroku's 12-factor methodology — I don't know if anybody's seen that; 12factor.net, I think, is the website — where basically you do configuration through environment variables. Through the docker run command you can inject environment variables — it'll be -e and the name of the environment variable — and likewise you can put environment variables into the Dockerfile and override those at the command line, what have you. So, definitely 12-factor.

I was wondering if you could talk a little bit about the current state of security with Docker, and whether there are any significant concerns or things that need to be addressed before we implement it in production.

Yes. So security is actually really good, and at the same time, security is also a huge hole; you can always go deeper and deeper with security. Generally, Docker is quite secure. Processes are isolated;
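Sketched out — assuming the official mysql image and a hypothetical myapp image — the link and the 12-factor-style configuration look like:

    # start the database, then link it into the app under the alias "db"
    docker run -d --name mysql -e MYSQL_ROOT_PASSWORD=secret mysql
    docker run -d --link mysql:db myapp

    # inside myapp, Docker injects variables like DB_PORT_3306_TCP_ADDR
    # and adds a "db" entry to the container's hosts file

    # 12-factor-style configuration via environment variables
    docker run -d -e DATABASE_URL=mysql2://db/myapp_production myapp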
they can't break out of the container unless there is a bug in the kernel. Prior to 1.0 we had a few oversights in the capabilities that we allow a root user to have inside the container, so they were able to break out and do things, but those have all been taken care of at this point.

Then there are also things like SELinux, with its whole labeling approach to security. How many people know about SELinux? How many people have used SELinux? Yeah, nobody — or everybody turns it off, usually. You make Dan Walsh cry when you turn it off. Basically, SELinux is about applying labels to things like file system objects. So maybe nginx can't access something like /etc/passwd on your host: even though nginx is running as root, it should not have access to /etc/passwd, because it could wreak havoc on your system. With SELinux, they apply labels to make sure that even though it's root, it can't access that file. So there's that layer: there are integrations with SELinux and AppArmor to mitigate those kinds of things.

In addition to that, probably the next big security boon for Docker will be user namespaces, which essentially means mapping external user IDs into the container, so root inside the container is not actually root outside the container. Right now, root inside the container is root outside, and even though we limit the capabilities, it can still be dangerous. But generally, even if we had user namespaces and everything: just don't run as root if you don't have to. Just don't run as root. It doesn't matter how secure it is; don't run as root.

All right, well, that's my talk. You can reach me online at cpuguy83, or at container42.com. Or if you have any questions, like I said, I'm an introvert, so just come and ask me. It may seem like I'm not interested, but I am.
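As a concrete illustration of shedding privileges at run time — a minimal sketch, with `myimage` as a stand-in:

    # drop a capability root doesn't need, e.g. raw network control
    docker run --cap-drop=NET_ADMIN myimage

    # better yet: don't run the process as root at all
    docker run -u nobody myimage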