Hi, everybody. My name is Ben Golub. I'm the CEO of Docker. Docker, if you're not familiar, is an open source project which launched two weeks before the last OpenStack conference. We just passed our 150,000th download and our 200th developer. So we're not quite growing at OpenStack speed, but we're trying to. Before I get started, I just want to do a little bit of a familiarity check. If you are familiar with Docker and containers, can you raise your right hand? And if you're a newbie, raise your left. All right. And now a dev ops versus dev split. If you're an ops type, raise your hand really high. Keep it still, reflecting your passionate belief in uptime and stability. And if you're a developer, just raise your hand anywhere you want and knock stuff over while you're doing it. And finally, if you would prefer this to be mainly focused on demo, please lean forward and act as if you're using a CLI. And if you'd prefer this to be mainly a PowerPoint presentation, just do your best imitation of somebody who's severely jet lagged at a conference. Great. OK, so we will make this largely an interactive session. I'm just going to go over really quickly Docker, where it comes from, the need for containers, and Docker in OpenStack. I'm then going to switch it over to our friend Jason from Rackspace, who's going to talk a little bit about how Rackspace has been using Docker. And then I'm going to turn it over to Nick, who is going to do a Docker 101 session and then a live demo. In this demo, we will be building a container from source and running it through tests. We will then deploy it on the laptop, then on a public cloud, then on another public cloud, and then on an OpenStack cluster. And we're going to do this all live. So when we get to that portion of the demo, if I can ask a favor: we are sharing the same network as the rest of you. So when we get to that portion, if you can try not to use the Wi-Fi for that portion, that would be great. All right, so quick introduction. 
Why does Docker exist? Why do we care about Docker? Why do we care about containers? And the reason is something I like to call the matrix from hell. So you all may remember back in the good old days or the bad old days, depending on how you think about it, when developers had one stack to develop on, they developed applications on sort of a six month or yearly cycle, and the applications were deployed on a single monolithic server. And of course, everything in that statement has now changed. Deployment and application development is constant and iterative. We have a huge set of loosely coupled languages and frameworks and components to choose from. And of course, whatever is developed somehow needs to work on a laptop, on a VM, on a physical target, on a cluster, an OpenStack cluster, public cloud, the customer's environment, you name it. And that basically results in what we at Docker like to call the matrix from hell. There's a sort of ever increasing number of applications and frameworks and languages and versions of applications that somehow need to be made to work across the column space, lots of different targets. So how can we actually make this work? Well, there are lots of solutions out there that try and automate part of this problem or make the problem less painful. We'd like to get rid of this matrix. And in trying to think about how to do that, we actually took some inspiration from the physical world, the world of transport. So 40 years ago, if you were going to ship goods around the world, everything was shipped in its own specialized container. You had coffee beans in bags. You had chemicals in drums. You had parts packed into crates. And somehow, whenever you went from a truck to a train to a crane to a rail, things got packed, unpacked. They interacted terribly. If you shipped your coffee beans, you had to worry that they'd be next to spices 
and would somehow get corrupted. Or if you were shipping bananas, you had to worry if somebody else was shipping anvils next to you and they'd get smashed. This is also a matrix from hell. And then somebody came up with this bright idea of a shipping container. Now, steel crates have been around for a long time, but the great idea behind the shipping container was that they're the same size. They have holes and hooks in all the same places. They have labels in the same places. And so basically, you have a standard container that anything can be packed into and that will then work anywhere. And as you all know, once you are a manufacturer, you put stuff in your container, you seal it, you're done, and that same container goes from ship to train to truck to rail without having to be opened or modified. And it's revolutionized world commerce and a bunch of other great things. Look at the containers as you leave the hall today and you go around Hong Kong. So what Docker is, is it's trying to be a shipping container system for code. A way for developers to take any application and its dependencies and package it into a standard lightweight container, which can then run virtually anywhere. And as far as any deployment target is concerned, it looks the same. And as far as developers are concerned, any place it's going to be deployed to looks the same as well. Docker eliminates the matrix from hell rather than trying to automate it away. Because in essence, if you are somebody who is building rows, you worry about getting things into a lightweight container. And if you're somebody who's running columns, you worry about running containers and you don't worry about what the applications or the languages are inside. For developers, that means they can build once and finally run anywhere. I know we've been promised this for 20 years. But at this point, a Docker container can encapsulate anything and will run on any x86 server running a modern Linux kernel. 
So we don't care whether it's Ubuntu or Red Hat. We don't care whether it's physical or virtual. We don't care whether it's in the cloud or not. In fact, as you'll see in the demo today, without modification or any noticeable downtime, we can move back and forth between different environments. And for DevOps, it's sort of the converse. You can configure once and then run anything rather than worrying about your carefully configured system breaking when developers get a new crazy idea. You can configure once to run containers. And since all containers respond to the same commands, they can be very easily automated. And the reason that we think this works and why we're excited is that you separate the concerns. So rather than forcing developers to think like ops folks or vice versa, developers, in essence, worry about getting things into containers and the ops guys worry about the outside of the container: logging, remote access, monitoring, network config, et cetera. If you want a more technical explanation, we can talk to you afterwards. But I think the easiest way to think about this sort of at a high level is it's a lightweight VM. And if you're not familiar with the difference between a container and a VM, in the VM world, of course, you start with an application and you take the application, its binaries, its libraries, and a guest OS, and you put that into a heavyweight form of isolation, run it on top of a hypervisor, which runs on top of a host OS. And that is fantastic for many, many use cases. Containers take a different approach. You take the application and run it in an isolated way as a process on the host OS. So there's no guest OS. You can share binaries and libraries. And as a result, it's much smaller. It's much lighter weight. There's no overhead. And of course, if you're starting and stopping an application, you're not having to start and stop an operating system. 
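To make the "container is just a process" point concrete, here is a minimal sketch you could try on any Linux host with a Docker daemon installed; the image and command are illustrative, not from the talk.

```shell
# Start a container in the background; no guest OS boots, so this
# returns almost immediately.
docker run -d busybox sleep 300

# The container's command appears directly in the host's process table:
# it is just an isolated process, not a virtual machine.
ps aux | grep "sleep 300"
```

This is the essence of the VM-versus-container comparison above: there is nothing to boot and nothing to shut down besides the process itself.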
And if you're trying to sort of iteratively create and modify applications and the runtime environment, you don't have to create a new VM every time you do that. In essence, you can sort of think of this as taking the way that you think about applications on your Android phone and making it available for server software. Docker takes that great idea and takes it a little bit further. If you're familiar with Git or copy-on-write, in essence, the original application is smaller than a traditional VM. And the copy you make of that is even smaller. And then as you modify things, for the most part, you can deal with just the diffs. Basics of the Docker system: Docker runs on a host OS. Say you're a developer, you can iteratively create your containerized application and push that to an image registry. We have both a public and a private version of that. Docker containers can also be created automatically from source using something called a Dockerfile, and we'll demo that. But essentially, once you have created a container and pushed it to the registry, any Docker host can pull that image and run it anywhere. And that becomes very powerful for cross-cloud deployment. And not only can any Docker host pull an image as long as it has access to the registry, but updating running containers is also very easy because you're just pushing the differences. So as you'll see when Nick does his demo, things can go very, very fast when you're in a Docker environment. So again, to repeat, once you have a Docker registry, essentially any Docker image hosted on any Docker registry can run on any Docker host, in milliseconds, actually. All right, so I'm going to just talk a little bit about the Docker community. Obviously, containers are not a new idea. We're building on top of some great work that has been done over the past 10 years with folks in the kernel community building LXC and chroot and cgroups. 
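The build/push/pull workflow just described can be sketched roughly like this; the registry address and image name are placeholders, not details from the talk, and a running Docker daemon is assumed.

```shell
# Build an image from a Dockerfile in the current directory.
docker build -t myapp .

# Tag it with the registry's address and push; only layers the registry
# doesn't already have get uploaded.
docker tag myapp registry.example.com/myapp
docker push registry.example.com/myapp

# On any other Docker host that can reach the registry:
docker pull registry.example.com/myapp
docker run -d registry.example.com/myapp
```

The push and pull steps are fast precisely because of the layered, diff-based image model the talk describes.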
But what we've tried to bring to the world is the ability for containers to be standardized and to have a community around that. And we're very excited about the Docker community. As I mentioned, we now have over 200 contributors working on Docker the project. Only 14 of them work for our company. So you can do the math. And they're actually responsible for half of all commits. We, at this point, have over 20,000 trained developers in Docker that we know of. They have, in turn, containerized thousands of applications, which are all available at the Docker registry. There's a lot of great support and training resources available on our site and elsewhere. If you're interested in a meetup, there are now meetups somewhere in the world every day. I think the latest one we heard about is in Nairobi, if you happen to be there, but if not, we've got London, Paris, Hamburg, Lisbon, and, of course, Hong Kong covered. Also very excited to have a lot of integrations with Docker. All the common tools, Chef, Puppet, Salt, Ansible, all done, by the way, by the community, not by us. Of course, we're here to talk about OpenStack. We've got a bunch of great integrations to talk to you about there. Also a huge set of third-party tools built on top of Docker. There are, I think, at last count, 15 companies that have started to build businesses on top of Docker, which we're really excited about. So standalone PaaSes, you name it, and also a bunch of great use cases. So even though we say Docker is not ready for production purposes, we sort of hear every day from people like eBay and Rackspace's Mailgun and Yandex and Baidu and others who are saying they're cheerfully ignoring our warning. And their use cases are available on the website. So quick introduction to Docker and OpenStack. Docker has been accepted into Havana. We have a driver for Nova. So you can essentially treat Docker as another form of a VM. We have a lot of other interesting use cases. 
If you come to the Dell booth, you can see Docker integrated with Crowbar for deploying OpenStack. Cross-cloud application deployment you'll see here, and a design session on Friday around integration with Heat. So with that, I'd like to turn things over to Rackspace. Rackspace, of course, is important to us, not only in terms of how they use Docker, but, of course, as a major OpenStack contributor and a big contributor to this demo as well. So this is Jason Smith. I got one, I think. You're live? Cool. Cool, thanks, Ben. OK, so at Rackspace, like you said, we are actually using Docker in production, even though we've been warned not to. One of the main ways that we use it is actually with Mailgun, right? So Mailgun is a programmatic service that allows access to email through APIs. And because we're using that, we've got a lot of different applications and we run into a few different problems. So one of the main problems is there's complex environments, right? You've got your development, your staging, your production environments. And you need to make sure that each of these has the same application and the same dependencies on them. So what we wanted to do is get the same image on each of these environments. And luckily, we're able to do so with Docker. One of the other problems that we ran into with virtual machines was that we weren't able to take advantage of all of the processing power available, right? And so through Docker, we're able to use the bare metal and use the actual processing power on those servers. We get the exact same image on the three different environments. There's absolutely no VM overhead. And we're able to manage those fairly easily. One of the other problems we ran into, though, was that as you're using Docker, if you only have a few, right, creating containers, restarting them, shutting down containers and then starting them back up really isn't a big deal. 
But when you're running thousands of different containers, that becomes a problem. And so at first, we started out by using bash scripts and other things like that, which were dirty hacks. We actually decided to go out and take some inspiration from Fabric. And we went out and wrote Shipper, which is actually an open source project. You guys can go check it out. It's actually on GitHub. And you can please go ahead and contribute to that, because I'm sure we have not thought of all the amazing things that you can add to that project. All right, so Project Solum, right? Everybody's heard about this. It seems to be widely talked about at the summit. So you can see that with Solum, Docker is actually the container for those compute VMs. If you are interested in Solum, there are plenty of sessions available. And then also there's a lot of talk around unconferences and other things going on currently. All right, so with that, I'll get out of the way and you can see the awesome demo that Nick has prepared. I'm really happy to be here. Let me just switch over really quick. So as Ben talked about, what I'm gonna be demoing today is multi-cloud deployment using Rackspace Cloud Engine. So first what I'm gonna do is build a blog application on my laptop, and then I'm going to push it to the Rackspace Cloud without any changes, run it there, and then push it into a Glance integration and provision a container via Horizon. Before I get into that though, I'm gonna just give a quick introduction to how Docker works. So imagine you go to the Rackspace website or to Linode or to DigitalOcean and choose the option to provision a Docker host. Really what that does is set up an instance for you which has a static binary of Docker installed. So here you are at the command line and you type in docker. That's the first introduction that you can have. This gives you a list of all of the commands and how to use them. So let's run a very simple command. 
We want to echo hello world inside of the busybox container, sorry, image. Not so interesting, right? But how about instead I create a container where I'm interactive, allocate a pseudo-TTY in busybox and, excuse me, execute a shell session. So now I'm inside of a container. I cannot access anything on my host machine. I'm inside of a busybox image. I can do radical things like remove /etc/passwd and there's no problem. And as you can see, if I exit out of there, I still have /etc/passwd on my host. So all of that was completely isolated. Question. So I just, inside of the busybox image, deleted or removed /etc/passwd. If I cat /etc/passwd right now, any thoughts on what's going to happen? It shows up. And so why is that? So Docker always starts from a base image, busybox is the base image, and images are immutable. So every single time you run one, there's a read-write layer in AUFS, if you're familiar with that, that's mounted on top of the base. And so all of the changes in this session are persisted in that read-write layer, not in the base. So I'm gonna open just another terminal here that's on the left for you guys. And what I'm gonna do is introduce a command. So this shows the last container, if you pass in -l. Actually, let me show you really quick. If you pass in --help after a command, it will show you all of the options that you can pass in. So -l shows the last container that was run. If I docker diff this container, then you can see that as part of the read-write layer that exists now, we created /dev/kmsg and the TTY that we asked to be allocated. So what happens if I rm /etc/passwd again? Then you'll see that if I run the diff again, /etc/passwd was deleted. So how would you persist a change like this if you actually wanted to create a broken busybox image that didn't have /etc/passwd for whatever reason? So there's two ways. Again, you can see the last container that we ran with docker ps -l. 
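A condensed sketch of the busybox walkthrough above, assuming a working Docker install of that era (circa 0.6; flags may differ in later versions):

```shell
# Run a one-off command inside the busybox image.
docker run busybox echo "hello world"

# Interactive shell with a pseudo-TTY; changes made here live in the
# container's read-write layer, never in the immutable base image.
docker run -i -t busybox /bin/sh
# (inside the container) rm /etc/passwd ; exit

# -l selects the last container that ran; diff lists what its
# read-write layer changed relative to the base image.
docker ps -l
docker diff $(docker ps -l -q)
```

Running `cat /etc/passwd` on the host afterward still works, which is the isolation point the speaker is making.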
So what I can do is take this container and name it and commit that into yet another immutable image. And so what that allows you to do, and what that means, is that containers are instances of images with the read-write layer, and any container can instantly become an image if you commit it. Okay, so the other way to do that would be to create a Dockerfile. So if you have a Dockerfile, you would basically say FROM busybox, RUN rm /etc/passwd. You would save this, you would docker build broken-busybox, oops, in this directory. And what that will do is create an image with all of those steps without having to do it inside of the shell session. And if I docker images and grep busybox, then you'll see that my image actually is right here. Okay, oops, let's get into an actual demo of deploying the blog to the cloud. So I have my git checkout of the latest master of my blog. It has a Dockerfile, it's very simple. There's an image that I'm inheriting from called keyblog. I'm adding the contents of the current directory to /website. This application is a Flask application, so by default it listens on port 5000. And this is the default command I want to run when I'm launching this image. So if I docker build and pass in a tag for this, so I want to call this blog-openstack, in this directory. So it's built and I can docker run -d, which sends it to the background. Here's my ID, I can docker ps -l. And you can see that here is the command that was run. Here, the command that I showed, it's been up for two seconds. And port 5000 is actually mapped to the host on 49156. So I can do something like curl http://localhost:49156. And you can see that there's my application up and running. So now what if I want to make a change? So let me just make a quick change to the footer and say hello to everyone, so hello, OpenStack. All right, let me build that again. Let me run it again. And let's see which port was allocated to it. So 49160, rerun our curl command. 
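A hypothetical reconstruction of the demo's Dockerfile and build steps: the `keyblog` base image name is as heard in the talk, and the `CMD` entry point is an assumption, since the actual command isn't shown on screen in the transcript.

```shell
cat > Dockerfile <<'EOF'
# Inherit from the speaker's base image (name as heard in the talk).
FROM keyblog
# Copy the current directory's contents into /website.
ADD . /website
# The Flask app listens on port 5000 by default.
EXPOSE 5000
# Default command when the image is launched (assumed entry point).
CMD ["python", "/website/blog.py"]
EOF

docker build -t blog-openstack .
docker run -d blog-openstack
docker ps -l                   # note which host port maps to container port 5000
curl http://localhost:49156/   # substitute the port docker ps reported
```

Rebuilding after an edit repeats the same two commands; Docker reuses the unchanged layers, which is why the iteration loop in the demo is so fast.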
And you can see the same, the update is now there. If for whatever reason we wanted to roll back, well, the previous container is still running. So I actually want the last three. Oops. And you can see that the previous one got untagged but it's still there. And still running on the previous port. So I can still curl that. And you can see that the OpenStack thing is not there. Another question for you. How long do you think it would take for an application to spin up maybe 20 times on the same host without modification? So since it's listening on port 5000, normally I'd have to go into my application and find some way to change the port dynamically. But in this way, I can actually run n number of instances of my blog, and you can see that all of these are running. I could pick a random port and curl it and that would be the update. So let's assume that I actually wanted to make this persistent change and push this out to my production machine. So let's tag that, my new OpenStack image, and provide a location for a registry that is accessible by a machine on the internet. So I will tag it as, I will prefix it with the URL of where this public registry is available and then give it the same name. And then I can docker push it. What this does, since Docker images are built on layers, is it walks the tree from the latest layer all the way back to the base, communicates with the registry and says, do you have this ID? And if it does, then it doesn't push anything. So you can see that actually my update was just one meg, which is the size of the directory that I'm currently in. And now I have my application completely ready to run. What I can do is SSH to my Rackspace cloud. I can then docker pull it. It pulls really fast because there's not much that's changed. I can then run the same thing. I can do the same docker ps -l. I can see that it's running here on 49741 and I can curl localhost:49741. 
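The tag/push/pull round trip to the production host might look like this sketch; the registry URL is a placeholder for the public registry the speaker refers to.

```shell
# Prefix the tag with the registry's address so docker knows where to push.
docker tag blog-openstack registry.example.com/blog-openstack
docker push registry.example.com/blog-openstack   # only changed layers go up

# On the Rackspace cloud host:
docker pull registry.example.com/blog-openstack   # fast: most layers cached
docker run -d registry.example.com/blog-openstack
docker ps -l                   # find the mapped port, e.g. 49741
curl http://localhost:49741/
```

Because both hosts already share the base layers, only the one-meg diff the speaker mentions actually crosses the network.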
And you can see that hello, OpenStack is there. I can show you in the browser as well, I guess, 49741. So this is the problem with the network. Sometimes it takes a bit of time and goes in and out. Well, I'll go back to that in a moment. Okay, so now let's actually push this to Glance and provision it there. So I'm going to now SSH to my OpenStack cluster, assuming and hoping this works. I will then do the same thing. And so that's gonna do a pull. And then I'm going to actually get the address of the registry that is provisioned to use Glance as a backend store. So I'm going to docker tag. Sorry, I know that went off the screen a bit. Okay, and then I'm just going to do the same docker push. And what that's gonna do is, we have a registry that's sitting there. It's configured to stream directly to Glance, and Glance has been provisioned to allow Docker images to be stored there. And so this is going to do the same thing. It's checking to see which layers it already has, not sending those and only sending the diffs. So back to our browser. Let's go to our OpenStack cluster. Let's go over here to projects and instances. We will launch an instance. We will call this blog. We will boot from an image. And you can see that my blog-openstack is available here. And I can launch it. Oh, great, there's an error. Okay, normally that works, sorry. What that would then allow you to do is go in and switch to the DevStack user. This cluster was provisioned using DevStack. Go to this DevStack directory. Source the openrc and run nova list. It should show up here. But there was an error for some reason. And if I look at docker ps -l, then you should see that's the, oops. Okay, so there's something that really went wrong. I can definitely show this working shortly after this. So that's it for the demo. Are there any questions? Go ahead. Yes, absolutely. So the question was, can you set CPU and memory limits for a container? 
Yes, if you do a docker run --help, what you can see is that you can pass in a -m to set a memory limit in bytes. And you can pass in a flag, I forget the exact one, to set the CPU affinity. Go ahead. A ulimit in a container? So you can absolutely pass that in. So the question was, can you pass in environment variables into a container and then set things like ulimit? Yes, absolutely. So you can set the ulimit of a PID from outside, on the host. And passing an environment variable into a process is done with -e. So you docker run -e and your key-value mapping to the image. And the default environment variables will be inherited from the host, right? I'm sorry, what was the question? What about the default environment? Is it going to be inherited from the host? Yes, it can be if you would like, but let's just take a look. So these are the default environment variables that are passed in during container creation. Thank you. One of the questions I have is regarding, if you have Docker, and an application from within Docker does a syscall, can it make my bare metal unusable by other Docker containers? If it's making a lot of syscalls to the system resources and then other Docker processes cannot access them, something like that? Or is there any security layer there which will avoid that? So you can set limits on processes just like you can on any Linux process. And you can also set, as we talked about, CPU and memory constraints. So I have a feeling that there are certain pathological cases that you can create that would allow you to affect other containers, absolutely. But we should talk about that more because I wanna hear sort of an exact use case. So right now the back-end container provider, so the question is, can you use OpenVZ or any other container providers with Docker? Today, no, LXC is tightly coupled with Docker. In the future, yes, absolutely. 
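A sketch of the flags discussed in this answer: -m (memory limit in bytes) and -e (environment variables) are as described, while -c for CPU shares is my assumption for the flag the speaker couldn't recall; image and variable names are illustrative.

```shell
# Cap memory at 256 MB, weight CPU shares, and inject an environment
# variable into the containerized process.
docker run -d -m 268435456 -c 512 -e DB_HOST=10.0.0.5 myimage

# Inspect the default environment variables set at container creation.
docker run busybox env
```

Things like ulimits, by contrast, are applied to the container's PID from the host side, as the answer notes.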
The future for Docker is to make Docker container-agnostic. So it will also be able to run, as an example, on top of BSD Jails or Solaris Zones. That's definitely a goal of ours. So as a consumer of the Nova API, what's the advantage of using the Docker plugin versus another container plugin like LXC, LXC through libvirt or something like that? So a big problem that we try to solve above LXC is the shipping of images around. So LXC is fine to use if you have just a single static image without having to worry about how to create and manage multiple. And so if you already have, as an example, a bunch of images in Glance and you just wanna provision them in LXC, I guess there's not much. But if you don't, and you want to develop a workflow that is constantly pushing to Glance and therefore able to be booted through Nova, Docker provides a huge edge and convenience. In production, there were a whole bunch of downloads that we saw. So did it download, did it push those many instances on the production server, or what was the download that we saw? The downloads that you saw were the Docker daemon checking to see if it had all of the layers that comprise my blog image and only downloading the delta, which was a one-meg diff. But spinning up multiple instances on production is exactly the same and takes exactly the same amount of time, there's no difference. Yeah, sorry, here. We're using Docker containers in our OpenStack environment and we found some issues related to security groups, actually. Okay. We weren't able to actually apply rules to Docker containers. Are you working on that, or is it supposed to be working in Havana? As far as I know, it should be working in Havana. Let's talk afterwards. Yeah, we're using Neutron, that's the particular bit of the stack. We're using security groups on Neutron, so. Okay, okay, interesting. Are there any other questions? Does Docker control the guest's access to kernel memory? I'm sorry, does Docker what? Control the container's access to kernel memory. 
So the memory usage in the kernel pages? Yes, yes, and so today all that's handled by AUFS. Unfortunately, that ability goes away temporarily in 0.7, which will be released in a couple of weeks as we switch to device mapper. We get the benefit of being portable across many more Linux environments, but the downside is that it's block level, so you don't have access necessarily to the kernel pages and shared memory. Yeah, I mean the kernel memory, so can I exhaust kernel memory from within a container? I'm sorry, what? Can I exhaust kernel memory from within a container? I don't know, sorry, I don't have the answer to that question. I'm sorry, I need to wrap up. So the key takeaways are that the container revolution is coming, our ecosystem matters, cross-cloud deployment is here, and OpenStack and open source technologies are leading the way. If you wanna learn more, please go to docker.io. Our GitHub is dotcloud/docker. Our IRC channel is incredibly active and amazing. There's a Google group, and please follow us on Twitter. If you'd like a Docker shirt, I've actually given all of mine out today. Each of us will have about 10 or 15 tomorrow, so please come by and say hi and we'll be happy to give you one. We do have stickers though, so thank you very much.