Hello, everyone. Thank you for coming. We're going to be talking about practical Docker for OpenStack, and I just want to say thank you to everyone who's here at the last hour of the last day of the conference. We have a packed room, so thank you. First of all, I'm Eric Windisch. I'm an engineer at Docker, and I work on the Nova and Heat drivers for OpenStack. Now, I don't want to bore you too much with what containers are; I don't think this room would be nearly as full as it is if I had to explain what containers are. But containers are lightweight and they're fast. They run on the kernel, not on a hypervisor, so they can be essentially as fast as running native on a system. There was actually a benchmark once that claimed they were faster than native, and I'm not really sure how that happened, so we'll just ignore that anomaly. What Docker provides on top of container technology is a solution to the problem of shipping code, deploying code, and running your code in all of these different places, all of these different clouds: from your laptop to your testing environment to your cloud or clouds, including your own local clouds or your OpenStack clouds, or perhaps a public cloud you're deploying your code on. So Docker gives you a sea of sameness, a mass production of containers and services that are homogeneous, that you can deploy anywhere, and they all look the same. And this isn't just the containerization that Linux namespaces provide. It's above and beyond that, in the sense of the images, the portability of those images from one machine to another, and the runtime that wraps around the namespaces and provides a consistent environment for them. Of course, for physical goods, we had the physical shipping container.
And for software on Linux systems, well, we have Docker for that, or at least we'd like it to be that, right? So Docker gives you a way of running those containers in a consistent environment. And above and beyond that, the fact that you can run them anywhere lets you share them. There's a global namespace of images and a global index of those images, so you can download images and run them on your machine, and you can share them with other users. As for Docker itself: first of all, Docker is an open-source project with over 400 or 500 contributors. I'm not sure exactly how many of those are active, I'm not sure what the statistic is, but we do have those numbers, and we have over 10,000 stars on GitHub, et cetera. There's also the company, Docker Inc, for whom I work. We have about 35 to 40 employees, we've raised a Series B, et cetera. So, use cases for Docker, and even for containers in some respects. First of all, loosely coupled isolation. The fact that they're not VMs and they're on the same kernel means that you can have microservices that share things between each other, such as network namespaces, IPC namespaces, file systems, et cetera. So what you can do, potentially, is say that I'm going to run two containers, each running a microservice. The first one, and I'll actually show this as an example in Heat later, is an FTP service. The second one is an Apache instance. Now, I'm not saying you should run FTP. But in this example, the FTP container only runs the FTP service. It has its own root file system, its own network interfaces, et cetera. But you can upload files into it and have them immediately be available within the Apache container. By doing this, you have a smaller attack vector; you've constrained the attack surface for each of these containers because they're only running one service each.
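As a rough sketch of that pairing at the Docker command line (the image names example/ftpd and example/httpd are made up for illustration, and the flags assume a 2014-era Docker CLI):

```shell
# Run the FTP microservice in its own container, owning a data volume at /ftp
docker run -d --name ftp -v /ftp example/ftpd

# Run Apache in a second container, attaching the first container's volumes,
# so files uploaded over FTP appear immediately to the web container
docker run -d --name web -p 80:80 --volumes-from ftp example/httpd
```

Each container still has its own root filesystem, network interfaces, and process tree; only the /ftp volume is shared between them.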
But then they can share things between each other by virtue of being on the same host. Next, we have imaging, iteration, and integration. This is really the test, staging, production cycle, but I like to use words that start with I, because it's a cycle. We can do all of these things with Docker. You can build your code on your laptop, test it there, deploy it to your CI systems, and then run it in production, all with the same image, and it's all the same environment. For instance, I can do local development. I have a project, actually Paul Czarkowski started it and I stole it from him, he's in the audience here, and it's called Dockenstack. With that, I can run DevStack locally, run it really quickly, make the changes I want to in OpenStack, test them, and submit them, and that code can potentially be exercised by the same code that runs in CI. Now, in this case we're never going to run that in production, because we're not going to run DevStack in production. But the fact that I can run the full CI testing suite on my laptop, as opposed to some customized environment built specifically for local development and testing, is really powerful. To continue on, we have the multi-cloud use case. We can take that image and deploy it not only on my laptop and in my CI system, but on my cloud, and on your cloud. And it should be portable: that image will be portable and runnable in each of these environments. Excuse me, I forgot what the other one was. Oh, right, the alternative form of virtualization. So there's a model where people want to use Docker or containers to provide an alternative, lightweight form of virtualization. Most users of this model seem to be, as far as I can tell, big data users, people with very tight performance constraints, or people who simply don't care about some of the security aspects of running in containers. And it's a viable model.
And there are a lot of people who seem to be interested in doing this, and it seems to be something that will be used and supported in the Docker ecosystem to some extent. And then you have scale and big data, which kind of overlaps with that. Actually, I should probably clarify what I mean by using Docker on bare metal, for instance, as an alternative to Ironic. Not to say that one necessarily has to do that, but you could use the Nova Docker driver as an alternative to Ironic if you want to protect certain things and prevent your users from doing certain things on the host. Now, it may not be complete isolation, but it allows you to say that these processes cannot access these devices. There's a barrier there that makes it much harder to break out and do things such as accessing GPUs or accessing firmware on the system, either the BIOS or the firmware on your Ethernet card. There are all kinds of attack vectors where, once you give somebody access to the hardware, you can no longer verify or attest that the hardware is safe for use by other workloads, especially in a multi-tenant environment. This is one of the great challenges in projects like Ironic: they say, great, I just gave this user this hardware, and now we don't know that we can safely put another tenant on it. So putting containers there provides a shim where we get that performance, but we don't necessarily give users access to break our hardware. So we're putting Docker in, well, maybe not all the things, but quite a few things. At Docker Inc we've contributed work toward OpenStack Compute, Nova, and the Heat project. But there's also been work in Solum; we have Adrian in the audience, I see him, there he is. We have Tempest in a teacup, which is part of RefStack and the DefCore effort.
It's an image that you can run to verify that a cloud works with the OpenStack ecosystem, right, that it can use the OpenStack trademarks, and there's an image that wraps all of that in Docker. Crowbar has been using Docker as well; it's not an OpenStack project, but it seems to be tightly enough coupled with the community that it's worth mentioning. And that was the Solum data plane, if you're wondering how that slide got in there; Docker is used by Solum. So I want to go over some of the Heat integration. With Heat, you can use the Heat resource for Docker to communicate with the Docker API. What this does is expose a nearly raw version of the Docker API to the user, where you can do just about anything you can do with the Docker API, and orchestrate it via Heat. What you'll see in this picture is that Nova is not used or connected here. That's not to say we're not going to use Nova, but the resource doesn't talk to Nova: Heat uses the resource to talk to Docker. And I'll show in a moment an example of a Heat template where what we'll do is actually launch VMs in Nova, bring up Docker inside those VMs, and use Heat to orchestrate those Docker containers. One of the great things about this model is that if you're using Ironic or Nova bare metal, or even a hybrid cloud situation, where perhaps you're using Rackspace, and there's actually a resource for Rackspace, you could launch VMs in these other environments and use Heat to orchestrate them. Now, when I say Rackspace: you could imagine orchestrating AWS, or GCE, or something else as well, but those don't currently have Heat resources. If somebody wants to contribute something like that, that would be fine, and the Docker resource would be able to deploy on those. So this is the Heat workflow, right?
We use the Nova resource to deploy the VM, and the Docker resource to communicate with Docker, and then Docker provisions containers. Let's see if this works now. I think it's dead; I shouldn't have used the light. So what we do is install the plugin. This code installs the plugin, and once you've done that, you restart Heat and you'll be able to use it. It's not installed and enabled by default, unfortunately. Perhaps we can convince the Heat community to do that, but until then, you install it and then you can use it. So this is an example of a Heat template. Heat templates can be written two ways: in JSON or in YAML. For readability, I've chosen YAML. At the top here, we're defining a resource called MyInstance, which is a Nova server. So we're going to create a Nova instance, and we're going to give it an SSH key so we can get into it. It's going to have an image, which I'm just calling Ubuntu Precise. You could use any OpenStack image, and in practice this might even be a UUID from Glance; I just wanted to give it a name you can read. You specify the flavor, and the user data, which is information fed into cloud-init, says that this machine should install and run Docker. You could use images with Docker already baked in if you wish; this is obviously an example. Then we have another resource, which we're going to call MyDockerContainer. The resource names are arbitrary, but its type is DockerInc::Docker::Container, the Docker resource. And we're going to say: this resource is going to talk to the Docker API on this IP address, the IP address being that of MyInstance, and we're going to launch a container there from the cirros image. So this is pretty much the most basic version of a Heat stack using the Docker resource.
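A sketch of what that template might look like in YAML (property names are approximate recollections of the heat-docker plugin's interface, and the key name, image name, flavor, and API port are placeholders):

```yaml
heat_template_version: 2013-05-23

resources:
  MyInstance:
    type: OS::Nova::Server
    properties:
      key_name: my-ssh-key
      image: ubuntu-precise        # any Glance image; a UUID also works
      flavor: m1.small
      user_data: |
        #!/bin/bash
        # cloud-init script that installs and starts Docker goes here

  MyDockerContainer:
    type: DockerInc::Docker::Container
    properties:
      # point the resource at the Docker API running on the Nova instance
      docker_endpoint:
        str_replace:
          template: http://host:4243
          params:
            host: { get_attr: [MyInstance, first_address] }
      image: cirros
```

The get_attr on first_address is what wires the two resources together: Heat waits for the Nova server, learns its address, and only then drives the Docker API on it.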
Now we're going to bump it up a little bit, and we're going to do Dockenstack. We're going to do Tempest: we're actually going to test OpenStack, in Docker, on Nova, using Heat. Doing that isn't much different, right? We just say the image is going to be the Dockenstack image, and we say this is privileged, which really means it's uncontained. So you can actually run uncontained images with Docker; you can say we're not going to apply system capability restrictions and so forth. We're still going to use namespaces, but we're going to open up the capability set. The reason you'd do that is for things like nested virtualization, which we're going to do in Dockenstack, because we're going to run Docker containers with Nova inside of Docker, inside of Nova, inside of Heat, if anybody could follow that. So here's a slightly more practical example. This is one of the use cases, and why at Docker Inc we've been telling people about the Heat plugin and why it's powerful. You can launch these microservices, such as an FTP container and an Apache container, and they are individually contained, but they share a resource: the /ftp directory. You can upload files to the FTP server, which is in its own separate container, and those files are then present in the Apache container. So you upload files and suddenly they're on the web, but they're served by Apache in a different container, constrained with its own root user and its own process. Another interesting thing about this model, and this particular example, is that the FTP container is running Ubuntu and the Apache container is running Fedora, on the same host. Right, so we also have Compute integration.
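The FTP-plus-Apache pairing might look roughly like this as Heat resources (the image names are hypothetical, and I'm assuming the resource's volume properties mirror the Docker CLI's -v and --volumes-from flags, which may not match the plugin exactly):

```yaml
  # Two container resources on the same Docker host, sharing /ftp
  FTPContainer:
    type: DockerInc::Docker::Container
    properties:
      image: my-ubuntu-ftpd        # hypothetical Ubuntu-based FTP image
      volumes: ['/ftp']            # this container owns the shared directory

  ApacheContainer:
    type: DockerInc::Docker::Container
    properties:
      image: my-fedora-httpd       # hypothetical Fedora-based Apache image
      volumes_from: FTPContainer   # serve the uploaded files out of /ftp
```

The point of the sketch is the shape: two independently contained services, different userland distributions, one shared directory.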
We have the Nova driver, and I understand that many people are already running Nova and want this Nova integration, despite some of the limitations and the lack of container extensions that currently exist, which I'll go over in a minute. But before I get too deep into it, I really want to thank a bunch of awesome people who have been helping make this thing better. Each of these people has contributed in some significant way to the Docker Nova driver, and I thank you very much. So what is the driver and what does it do today? It allows you to control Docker via the Nova API, and through that, also via the Horizon UI. We can launch containers, terminate them, reboot them, and get serial consoles. We can also get logs, though that's not up here. You can do snapshots; we have some integration with Glance, but it's not perfect. And recently, maybe three or four weeks ago, we got Neutron support, which is really amazing. A little more on the networking: we have Nova Network, Nicira, OpenContrail, and Open vSwitch integration. And I just want to add, I actually had lunch with James earlier today, and I was telling him that all of this code is actually very generic for Linux containers. So we're probably going to try to see how much we can share with the other container technologies and maybe accelerate some of the feature growth for those drivers. Things that aren't supported in the Docker driver: Cinder volumes, suspend and resume, pause and unpause, and live migration. Some of these are kind of hard. Actually, for pause and unpause there's no reason we can't do it; the reason we don't do it is simply that we haven't. Adding it is just a matter of SIGSTOP and SIGCONT, and we could probably add that pretty quickly if we really wanted it. Cinder volumes are kind of hard, because what we do is operate on file systems, not block devices.
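The SIGSTOP/SIGCONT idea can be sketched from the host today, outside of Nova (the container name is illustrative, and a real implementation would more likely use the cgroup freezer so that child processes are frozen atomically too):

```shell
# Find the container's init process as seen from the host...
PID=$(docker inspect --format '{{ .State.Pid }}' my-container)

# ...then "pause" and "unpause" it with plain signals
sudo kill -STOP "$PID"
sudo kill -CONT "$PID"
```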
And exposing those block devices into the container wouldn't even necessarily be difficult so much as it would be useless, because there's not much you can do with block devices once you get them into the container: we can't give you the capabilities to mount them inside the container without breaking a lot of the restrictions we're using to prevent you from escalating to root. It may be possible to provide solutions for that down the road, and not even necessarily that far down the road, but we're most likely punting on that for the Juno release, and we'll see what we get in the next release. Suspend and resume is really hard. Think about taking any arbitrary Linux process, dumping its memory to disk, shutting down your machine, and then starting up just that process. It's not suspend and resume of your host; it's suspend and resume of a single process on your host. People have been working on making this possible for the last decade or so, and generally you can do it really well for a process that you build around that use case: you say, I want to build an application that I can dump to disk and resume on reboot, and you can actually do that. But doing it for arbitrary processes in arbitrary containers is something we really can't do easily today. If somebody wants to make that better, yes, please. And I should note that this feeds back into the live migration story: for the same reason that we can't dump the process to disk, we can't dump it across the network to another host. So, what has changed in the architecture, just for reference, is that we add the Docker daemon. Instead of using libvirt, when you use the Docker driver you're talking to the Docker daemon on your host, and the Docker daemon talks to the Docker registry, which proxies to Glance.
So images are stored in Glance, and the Docker daemon downloads them via the registry. There has been some talk of, well, let me skip to the next slide, which just says what I said, right? So this is controversial; this is probably my most controversial slide. I've been having some thoughts about how to better integrate with Glance, and one of the thoughts I've had is that Glance isn't really strictly necessary in our model, and it's not necessarily desirable. As it is today, Glance is core, and thus part of DefCore, and we must keep Glance in the system in order for an OpenStack cloud with our driver to be able to use the OpenStack trademark. So this is something we're going to have to figure out with the Foundation: whether it makes sense to continue using Glance, whether we can stop using it, and what the ramifications of that would be. Then there are the things that Nova doesn't do. We're saying that Heat is a better model for using Docker because we don't have container extensions yet in Nova; Nova doesn't yet do the things that make containers really awesome. Linking container networks is sort of present through security groups, but we can't pass environment variables to the containers. We can't say we're going to launch a process in a container and give it an environment, right? When you launch processes, you want to pass environment variables, especially with Docker containers, where images are often defined generically: we make a generic Apache image and pass different configuration through environment variables, or give it specific arguments. We say this Apache process is going to configure a vhost using this domain name, and we do that not by embedding it in the image, but by making a generic image and passing the information at runtime, right?
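At the Docker CLI level, the runtime parameterization I'm describing looks like this (the image name and variable name are illustrative); it's exactly this knob that the Nova API has no way to express today:

```shell
# One generic Apache image, configured per-deployment at runtime
docker run -d -e VHOST=www.example.com  my-generic-apache
docker run -d -e VHOST=blog.example.com my-generic-apache
```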
These are things you can do with containers that you don't normally do with VMs, and that we want to be able to do in Nova, or in container extensions, or in a container service down the road. Then we have Docker volumes. We showed the volumes example earlier with FTP, right? How powerful is it that you can launch these containers and share that information between them? And we just can't do that in Nova today. And this is a side note: once you get those extensions, you're going to want affinity. You're going to say these containers do these things together. They're microservices, but they're related; they're in different security contexts, but they need to be together. This container only makes sense in the context of being deployed alongside that other container. You can do this in Nova today by passing the hint same_host, which tells the same-host filter in the filter scheduler to deploy on the same host as another instance, where the instance is identified by a big long UUID. That's really great, except it's not a very user-friendly interface. So I would really like to encourage the community to think about how we can make the UX for this better. That brings us back to the question: should I be using Heat instead of Nova? And the answer really is that it's not one or the other; rather, it's a matter of what your use case is. Do you need to use Nova? Many of these extensions may in fact land in Nova eventually, and there is an effort by the community, especially amongst those with a vested interest in containers, to push to get these things in as extensions down the road. So on the one hand, these things will probably come. On the other hand, you may say: I don't want or need these things.
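Concretely, the same-host trick looks something like this with the nova client (image names are placeholders, and the UUID stands in for the instance you want to colocate with):

```shell
# Launch the first container
nova boot --image my-ftpd-image --flavor m1.small ftp-container

# Launch the second on the same host via the same_host scheduler hint
nova boot --image my-httpd-image --flavor m1.small \
    --hint same_host=<uuid-of-ftp-container> web-container
```

Having to look up and paste an instance UUID is exactly the UX problem described above.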
I don't care about doing this as microservices, or these things don't need to run together. I'm going to use Docker for running my CI, and my CI is going to be like Dockenstack: it's going to bring up this giant system and test it, and that's perfect for my use case, that's all I want and all I need, and absolutely, Nova is great for that. But there are cases where you're going to want this forward-looking microservices model, with linked services, leveraging what you get out of containers, and Nova, at least for now, can't do that. So I'm going to talk a little bit about Dockenstack. Paul gets a slide here; actually, he gets quite a few slides. We built Dockenstack for testing, because at the time we were being told we had to do third-party CI, and I did not want to build a giant system that looked like OpenStack Infra. Not to say that I couldn't, but it seemed that there were a whole lot of pieces in motion doing things that I could just dump into a container and run, and test locally on my laptop. I can say: I want to run DevStack on my laptop and make changes. I mean, how many times in the past, working on other features in OpenStack, and I've been working in OpenStack since almost the beginning, would I take DevStack, or even before that nova.sh, bring up a container or a VM, run DevStack, and then immediately snapshot that VM? Because I'm going to make a bunch of changes, and then I have to revert and go back to where it was. That was a process I had to do manually, and I had to wait over an hour to build that image, or to get to the point where I could even take the snapshot.
And Dockenstack is this thing that we can run every day centrally, and you can go online, download it, and run it locally on your machine, and it takes five minutes to run from start to finish: five minutes to get a running OpenStack installation in DevStack that you can use for testing. That's really powerful, and the great thing is that, at least for us, it's the same thing we're using for CI. So I'm going to test my code with Dockenstack, and then I'm also going to use that for my CI. Instead of uploading your code so Gerrit can grab it and run it in OpenStack Infra, and then finding out that something was different between your environment and the CI environment, with Dockenstack that's not the case: it's exactly the same thing from your laptop to the infrastructure. Now, unfortunately, OpenStack Infra is not using this yet, but I would like them to. One of the other things is that we're not doing nested virtualization; we're doing Docker in Docker. Or, I should clarify, it's not necessarily Docker in Docker: it can be VMs in Docker. You can actually do QEMU or LXC or OpenVZ testing inside of Dockenstack, which is really neat. Now, you can make all the arguments you want about whether or not you should do nested virtualization, but this is definitely lighter weight: you don't have dedicated memory resources, there's no memory ballooning, there are none of those nested-virtualization memory artifacts. I've kind of covered all this already, about how OpenStack Infra boxes everything up, like a bento box. Okay. So I want to get a little bit into using Docker with the Compute plugin. This is the more practical aspect: how do you install the plugin? You check it out and you do a pip install. And, at minimum, you need to set the compute driver.
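The minimal configuration is one line in nova.conf on each compute host; the exact driver path depends on which era of the driver you've installed, so treat this as a sketch and check the tree you checked out:

```ini
[DEFAULT]
# In-tree driver:
compute_driver = docker.DockerDriver
# Or, for the out-of-tree novadocker repository:
# compute_driver = novadocker.virt.docker.DockerDriver
```

After changing this, restart nova-compute so the driver is picked up.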
There are a couple of other configuration options. There's the Docker registry: you can define where that is. By default, the driver looks on each compute host for the registry, so you can deploy a registry on each host and have it act as a proxy, but you can also centralize that proxy and specify its host in a config variable whose name I forget, but you can look at the help output. You would run the registry with something like this. Unfortunately, it's a lot of lines, but it's basically saying that the registry has to talk to Glance, and in order to talk to Glance it needs to know where Glance is, how to log into it, et cetera. The registry also supports different backends, so you have to specifically tell it that this is an OpenStack backend, that we're going to be proxying to OpenStack. Then you put images into your Docker registry. In this case, what we're going to do is pull the cirros image from the global index; the cirros image is actually in what we call Stackbrew. The Docker ecosystem has a set of official images that have been curated by the community, and cirros has become part of that curated set of standard library images. So you pull the cirros image, and you tag it to say that this image lives at 10.0.0.1/cirros, the address of our registry. That means that when we do a docker push, it's going to push and store that image on that registry, and the registry is going to push the image into Glance; and when we do a docker pull, it's going to talk to the registry, and the registry is going to grab that data out of Glance. So again, this integration with Glance is imperfect. Currently, you can't actually push images to Glance directly and then have Docker understand what to do when you do a docker pull.
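The tag-and-push workflow is roughly this (10.0.0.1:5000 stands in for wherever you've run the Glance-backed registry; 5000 is the registry's usual default port):

```shell
# Pull the curated image from the public index
docker pull cirros

# Retag it against the private registry, then push;
# the registry proxies the image data into Glance
docker tag cirros 10.0.0.1:5000/cirros
docker push 10.0.0.1:5000/cirros
```

After the push, the image should appear in glance image-list, and a later docker pull of 10.0.0.1:5000/cirros, or a nova boot against that image, pulls it back out of Glance through the registry.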
Again, we're looking at ways to solve that, and it's going to be a matter of discussing it with the right people in Glance, and maybe even the Foundation, and so forth. We're actually having a session tomorrow about the Docker driver and some of the features that are missing, and it's going to include, to some degree, this Glance integration question. So after you do all of that, you can do a nova boot. When you do this, it works the same way as booting any other Nova instance: you pass your flavor and you pass your image ID, which you get out of glance image-list or nova image-list, just like any other Nova instance. So I'm going to take questions, but I just want to add that I did a vBrownBag session earlier today with an eleven-minute demo of actually running Dockenstack; it's basically just a matter of either building or downloading the Dockenstack image and running it. So, I will take questions. [Audience] I've heard CRIU is being used for doing live migration and snapshots. Is that working, or a work in progress? [Eric] I'm not sure about the progress on that exactly. The thing with CRIU is that, my understanding is, certain things we're doing in Docker don't map onto or don't work with CRIU, and CRIU doesn't work with all processes. So it is somewhat limited: it may work for many processes, but it won't work for all, and it's still uncertain, at least to me, whether it's going to work with containers, because some of the kernel features that containers require don't necessarily work with CRIU. You do? Okay. Working on it, okay. So James was just mentioning that Parallels has been testing it and has some progress, maybe. And I know there have been people at Docker looking at it as well; I'm just not sure what the status is. Okay, thanks. Just one more question.
[Audience] With Dockenstack, will it boot Docker containers and VMs that can talk to each other? [Eric] Yes, yes. Dockenstack can run Docker containers and VMs. I've tested it with LXC, but we've had people report that they've run KVM inside and it works; I just haven't personally tested it. One of the issues we have is that if you're using KVM currently, it wants to install libguestfs, and libguestfs does technically work inside of Docker, but the distribution packages require FUSE, which doesn't work inside of Docker. So when you install the packages for libguestfs, you install packages that don't work inside of Docker, which is slightly problematic. [Audience] Right, but no, I didn't mean KVM inside of Docker. I meant a Docker container running a guest, talking to something managed by libvirt outside of Docker. [Eric] You're saying automatically having Nova launch KVM guests that run Docker inside? [Audience] No, no. Either nova boot could run something as a Docker container, or run a Glance image as a KVM thing, like having two different kinds... [Eric] So you're talking about a multi-hypervisor Nova. There has been work on multi-hypervisor Nova installs, and I believe this works already today; if not, then it exists as a blueprint. Yes? [Audience] What does a flavor translate to for a container? [Eric] So containers actually do support CPU weights, or CPU shares, and they support memory limits. Using cgroups, you can say that this process, or these processes, cannot exceed this memory limit. So the flavor restrictions actually do apply to containers equally. However, they're not like, for instance, Xen, at least traditional Xen, with which I'm fairly familiar, where I would never use balloon drivers: I would say this VM is going to have this much memory, and it's not going to be able to shrink or grow. These processes in Docker are not actually consuming host memory up front; it's not fully allocated memory.
It's just the memory that they're actually using that is consumed. [Audience] Are containers integrated with Ceilometer? [Eric] You know, I've had conversations with Nick Barcet and some of the Ceilometer folks, and to be quite honest, I'm not sure. I have heard that it's been looked at, but it's not really been on the radar yet, and it may be something that comes up in the discussions at the design summit tomorrow. But considering that there are big things missing, like features and testing for the third-party CI for re-integration into Nova, I personally have higher priorities. If people are going to test it and submit patches, you're welcome to do so, because that would be great. More questions? If not, we can go home early, because it's the last day, and I'm sure you would all enjoy that. Thank you.