Hello, everyone. My name is Adrian Otto. I'm a principal architect at Rackspace, and I work on OpenStack. I'm the chair of the OpenStack containers team, and I'm also the PTL for project Solum. I had a laugh today because this conference is the first time I'd ever seen a line for the men's restroom, a line out the door, in fact, which reminded me of a joke that I learned when I was a child. It goes like this. Just a show of hands: how many of you are Americans? So maybe a quarter of the room. So if you're an American when you go into the bathroom, and you're an American when you come out of the bathroom, what are you when you're in the bathroom? European. All right, let's go. About seven, I guess, maybe eight. Yeah. I'm also a father of four, so I'm really into the eight-year-old humor.

So we're here today to talk about containers and how to use them in a multi-cloud environment. I've sat through a lot of Docker-related and container-related sessions this week. Most of those are trying to teach you what containers are and how to use them, and showing you demos. Very few of those talks were about actual production use of containers and Docker, with suggestions for how to actually do it. So I'm going to give you a little bit of advice in my presentation, because I'm actually using Docker in production every day.

An application that wasn't written for the cloud is not going to be multi-cloud in the sense that it's running in two clouds at once. It will only run in one cloud at a time. So what we're really talking about today is cloud portability, which means that your application can be in one cloud or another cloud, and you can use a container as an instrument to move your application from one cloud to another, not straddling two clouds. Now, you can do that if you want to; there are tools for doing it. If you're writing a new application, you can write to an abstraction API that lets you create cloud resources in different clouds, and you can use them concurrently. But to do things that way, you need to be writing a new app. If you have an existing application, you want some way to still use it in multiple cloud environments.

So here's the methodology. First, you containerize the application: you use a Dockerfile to create a container image for that application. Once you have the container image, you store it in a repository, and then you set up Docker to run on your cloud servers. You back up your data, put it into the target cloud, and you load your application from the container image. And once you've done that, you can load your data into the application. That is the recipe for running pre-baked applications in the cloud.
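To make that recipe concrete, here is a rough command-level sketch; the image name, registry, ports, and data handling are placeholders I'm assuming for illustration, not anything from the slides:

    # on the build machine
    docker build -t myorg/myapp .     # Dockerfile in the current directory produces the image
    docker push myorg/myapp           # store the image in a repository

    # on a cloud server in the target cloud, with Docker already installed
    docker pull myorg/myapp           # fetch the image from the repository
    docker run -d --name myapp -p 80:80 myorg/myapp
    # then restore your backed-up data into the running application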
I think that concludes my presentation. We can all leave. Anyone who's tried to do this before knows that there are plenty of gotchas in trying to run a cloud application this way, and when I talk about running cloud applications in containers, I get a lot of questions. First of all, what is Docker? I get that question a lot. Probably most of you are up to speed on what it is, so I'll kind of glaze over that pretty quickly. Second, people don't know what a Dockerfile really is, or what its inputs and outputs really are, so I'll explain it in enough depth. An image repository is something that you all probably understand but don't know that you already understand, so I'm going to draw the parallels to make that very clear. People say, yeah, Adrian, that's nice if my application only ran in one container, but I have a big gnarly application. What do I do? I'm going to talk about that. If my application needs a separate service as a prerequisite, how do I get the prerequisite deployed? And what if I just have a ton of data? A container is not a silver bullet for solving all of these problems, but there are best practices for dealing with these issues.

So first, answering the question: what is Docker? Some people say it's a way of bundling, distributing, and deploying applications. I just say it's three things melted into one: the kernel feature called cgroups, the kernel feature called namespaces, and a container image. Those three things combined are a Docker container. The cgroup and namespace functionality has been in the Linux kernel for six years or more. It's stable; it's been used by LXC for many years. The concept of a container image is relatively new. Docker has been around less than two years, and it's only been real for maybe a year, so people are still trying to get their minds around the concept of a container image and its layered construction.

So let's get into a Dockerfile. My eight-year-old son asks where babies come from. I wish he would ask me where containers come from, so I could tell him that containers come from Dockerfiles. What a Dockerfile does is say: you're going to extend from a base image, you're going to do some stuff, and when you're done, the output is going to be a container image. All of you who have compiled software before have used a Makefile. A Dockerfile is just like a Makefile, except the thing you get out at the end, instead of being a binary executable, is a container image. So this Dockerfile says it's extending from CentOS, meaning the commands that are run are going to add to a base CentOS operating system environment. Here I use a RUN command, which means that as I'm building this container in my build environment, I'm going to run this command, yum install, and it's going to put Apache in that environment. I'm going to EXPOSE port 80, meaning when I create the network namespaces around this container when I start it, port 80 is going to be available so that it can be accessed. ADD is a way to add files from the current build directory into the container image at the location you specify; this one takes the start.sh script from the directory I'm running Docker in and puts it into my container at the location /start.sh. And then CMD just means this is the default command that's going to run when I start this container. That can actually be overridden: if you're using ENTRYPOINT, you can't override it; if you're using CMD, you can.
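For reference, the Dockerfile I just walked through looks roughly like this. It's a reconstruction from my description rather than the exact slide, and the contents of start.sh are whatever your application needs:

    FROM centos
    # install Apache into the base CentOS environment at build time
    RUN yum -y install httpd
    # make port 80 reachable when the container's network namespace is created
    EXPOSE 80
    # copy the startup script from the build directory into the image
    ADD start.sh /start.sh
    # default command when the container starts (can be overridden at run time)
    CMD ["/start.sh"]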
Let's talk about the image repository for a minute. Show of hands again: how many of you use git? How many of you use git every day in your job? Okay, at least 75 percent. For those of you who use git, think of a Docker repository as a git repository. That's your mental model; all the same semantics work, except the things in there, instead of being text source files, are binary layers. Think of each layer in a container image like a block snapshot: if I had a block device, took a copy-on-write snapshot of it, and started modifying it, the one I'm modifying is actually a copy-on-write of the base. The same concept applies to images.

When you want to start working with an image, you use pull to pull it down. A lot of people say, to run Docker you install Docker, you run docker pull whatever, and then you use docker run and specify what you pulled. Well, Docker will pull automatically; you don't actually have to say docker pull. But if you want to pull something and not run it, or you want to pull a whole bunch of different things, that's how you do it: docker pull. There's also a commit. Just like in git, if you've modified your source code, you make a commit, and then if you want to save that back to a remote repository, you use push. The same thing applies to Docker images.

Now, if you need to deploy a ton of containers, you're going to need more than Docker, at least for now. Docker lets you start a container on a single host; it doesn't yet give you the ability to create a cluster of containers across a number of different machines. We'll talk about that later. If you're running OpenStack, you can use Nova as a way to schedule resources: you can replace the virt driver, swapping KVM for nova-docker, in which case it's going to produce containers instead of producing VMs. You could use Magnum, which doesn't actually work yet, so you can't use it yet, but at some point it will be true that you can use Magnum to create containers on OpenStack. You can also use community tools such as Kubernetes and Mesos to create containers. People refer to them as container orchestration systems or container management systems; I think those designations are a bit of a reach. They don't do all of that, but they do some of it. If you are running a ton of containers, your life is going to be much easier if you treat them like cattle instead of treating them like pets, which I'll get to again.

What if my application needs a separate database server? How do I handle that? A question I get all the time. An application that has a database is not one application; it's two. You can containerize both, the MySQL server in this case and your application, so you treat the data persistence piece as a separate application and then orchestrate the two together. You can do that using Heat, and this is how. This is a HOT file that says: create me a Nova server, and my Nova server depends on a Trove instance. This works today. The same works with containers: you could say create a Nova server, put a container on the Nova server, and have that depend on a Trove instance. We have tools in OpenStack today to do the orchestration.
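A minimal sketch of what such a HOT template can look like, with placeholder names, flavors, and images rather than the ones from the slide:

    heat_template_version: 2013-05-23

    resources:
      app_db:
        # the database is its own application, provided by Trove
        type: OS::Trove::Instance
        properties:
          name: app-db
          flavor: 1GB Instance
          size: 5

      app_server:
        # the application server is created only after the database exists
        type: OS::Nova::Server
        depends_on: app_db
        properties:
          image: my-docker-host-image
          flavor: m1.small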
This is my favorite one: yeah, Adrian, but I have a lot of data. I can't fit all my data into a repository that's holding binary layers; that wouldn't make any sense. This is an oversight when people say, yeah, you should be able to run all of OpenStack in a containerized control plane with everything in a Docker container. But things like Glance and Cinder don't actually work inside containers, because you can't fit your data into a container; it has a finite maximum size. By default, Docker limits the amount you can put into a single container to 10 gigabytes, and the amount you can put on a single host to 100 gigabytes. Now, if you're following my guidance, those limits are a total non-issue. But if you're trying to put your precious data into a container, you're going to run into all kinds of issues. So my advice: never put data into containers, or at least never put data that you would get fired over into containers. I'll explain why.

What you do instead is use a technique called a bind mount. A bind mount is kind of like an alias to a storage location that's on the host. Use the same techniques you use for managing data on hosts today: in OpenStack we use Cinder volumes, in the Rackspace Cloud we use CBS, in AWS we use EBS. That's where you want to put your data, and you want to mount it on the host. Once you've got your file system mounted on the host, you bind mount it through to the container, so that inside the container you're using the storage directly on the host instead of trying to save it into the container image.
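In Docker terms, that bind mount is the -v host-path:container-path form of docker run; the mount point and image name here are placeholders I'm assuming for illustration:

    # the Cinder/CBS/EBS volume is already mounted on the host at /mnt/dbdata
    docker run -d --name db \
      -v /mnt/dbdata:/var/lib/mysql \
      my-mysql-image
    # the database now writes to the host-mounted volume, not into the container image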
So if you want to use Docker with more than one cloud, one way to do it is to manually install Docker on every host you want to run it on, and then get shells on those machines and run Docker commands on all of them. Another way is to use one of the ecosystem projects announced this June at DockerCon in San Francisco: libswarm is the project name, swarmd is the tool. What it is, is a way to put a pluggable API on a host of your choice, where the thing behind the plugin interface is a distributed system. Let me explain the front end first. The front end of swarmd is a Docker API, so when you interact with it, it looks and feels like the Docker REST API. On the back end of swarmd is a pluggable interface that can connect to a distributed system. On the back end you could have AWS, you could have DigitalOcean, you could have the Rackspace Cloud, you could have OpenStack, and so on and so forth. So you run swarmd in order to get that API, and you extend libswarm if you want to add new back ends. Now, for each back end, as you create additional containers, you're going to end up filling up the cloud servers servicing that back end, and when they get full, swarmd will automatically add additional capacity for you. So you'll get more and more cloud servers as you add more and more containers, and as you stop those containers and cause those hosts to become vacant, it will kill off the cloud servers for you. To some extent, that is a simple orchestration.

There is something brand new. As of a week ago Monday, this was announced at the global Docker hack day: docker hosts create, which I'll show you today. How it works is that you can automatically set up new hosts that have the Docker API on them by running a command in Docker itself. So it's like having the libswarm capability built into Docker, without needing to add something extra in order to consume it. Today it has support for Azure, DigitalOcean, and Rackspace.

So with swarmd, you have the ability to talk to an API endpoint that shows you a collection of servers, and you can have multiple API endpoints. You could have a bunch of cloud servers running on Rackspace running a whole bunch of containers, another bunch of cloud servers running on Google Cloud, and you could see a singular view, say a docker ps command, that aggregates all of those containers into a single output. Now, this only makes sense if you're treating your servers as cattle, not pets. If your servers have names, if you care whether they die, then you have pets. And you can't have any love for pets if you're using Docker images, for the reasons I've mentioned relating to where your data gets stored and how you treat the image in a Docker repository. So if you want to do this cattle scenario, I suggest that you have a Dockerfile for every application, and Docker images for every application or application component layer. You deploy all your apps into these containers. You use a scripted deployment system, whether that's Heat or something else like Ansible or Chef or Puppet, and so on; any of those will be fine. And you need some form of centralized logging, because you don't want to persist logs in your containers; you want your logs in a remote location.

I talk a lot about immutable infrastructure. In order to have a scenario where you build one time and you test or run many times, you need that artifact: after you've built your application, all of its dependencies are contained with it. The reason we do this is that everybody who doesn't have a good way to bring their dependencies along with their application ends up having multiple environments: an environment for dev, an environment for staging, an environment for production. Usually production is the way things looked a couple of weeks ago, staging is probably a little different from that, and dev is definitely much more current than that. Every time you move your application from one environment to the next, there's an opportunity for drift, and the application doesn't behave the same way in a different environment. That wastes a ton of time and causes a bunch of aggravation. By building your application into a container image, all of its dependencies go with it, so it will behave the same way in production and staging and test, assuming you're injecting your configuration using environment variable injection.

Okay, so before I show you a demo: the new feature I'm talking about is called Docker host management. It's in Docker issue 8681, so you're welcome to check that out. There is some proof-of-concept code that's already out there and already working. There's also a new thing called Docker clustering. The concept of a grouping of Docker hosts that work together as a single unit is now being implemented in Docker itself. There's a spec for that in pull request 8859, and Docker Inc. has already built an implementation against it. The implementation is not yet open source because it doesn't meet their requirements for release: it doesn't have any unit tests, and it's written as kind of disposable, but the concept does actually work.

So let's go to the demo. What I've got on this machine are two scripts: this one is for creating a host on DigitalOcean, and this one is for creating a host on Rackspace. We can go ahead and run those, but before I do that, actually, let's look at Docker hosts.
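What follows on screen is roughly this sequence. The docker hosts subcommand comes from the proof-of-concept build, so treat the exact syntax, and the script names, as assumptions rather than a released interface:

    docker hosts                               # list known hosts; only the local Unix socket so far
    ./create-digitalocean-host.sh &            # proof-of-concept "docker hosts create" against DigitalOcean
    ./create-rackspace-host.sh &               # the same thing against the Rackspace cloud
    docker hosts                               # now shows the two new remote hosts, one marked active
    docker ps                                  # containers on the active (remote) host: none yet
    docker run -d centos:centos6 sleep 6000    # start a container on the active remote host
    docker ps                                  # the sleep container shows up on that remote host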
What this is showing me is that, to start with, I'm just running Docker on a single machine, and it's talking to the local Unix socket. I can create more hosts. I can decide which one of them is active, meaning which one my commands will be relayed to if I'm using the Docker CLI on this host. You can kill them, you can remove them, you can SSH to them, which is really handy, so you don't have to have an external system to keep track of all the encryption keys in order to shell into these remote systems, and a bunch of other things. So let's go ahead and do the DigitalOcean create first; I'm going to run this one in the background. And we'll do the same thing for Rackspace. Now, on DigitalOcean and Rackspace the storage for these systems is all SSD-based, so they load pretty fast; it only takes a couple of minutes. So I'm going to wait for this to go, and when it's done, I should be able to show you running Docker commands on the remote systems.

This originally started as libswarm code. Rackspace and Google and a few others, including Docker, worked on the libswarm project together and built a lot of the logic and code that got put into this command. About three weeks ago, Docker reached out to us at Rackspace and said, will you work with us to build a back end for the docker hosts command that works on the Rackspace cloud, so that when we demonstrate this at our global hack day, it will work? So we took a lot of the source code that we used in libswarm, ported it in, and made this build of Docker that I'm using now.

So I showed you the two scripts. The first one I ran was to create a host on DigitalOcean, which has completed and is now called deepblue. And the other created a host on the Rackspace cloud, which I named adrian. So now if I run docker hosts again, you'll see I've got one here and one here. Right now the active one is the Rackspace host. If I run docker ps, there's nothing running there. I can say docker run, and the first time your container runs, it downloads the base image, which is what it's doing right now. And that container is now running. So if I run docker ps again, you can see this is running on the Rackspace cloud even though the Docker CLI is running here locally.

Okay, so the command I ran says docker run, meaning create the namespaces and run a process in that cgroup. -d means run it in the background. It's based on the image centos with the tag centos6. It's just like git in the way that you have a branch name and you can have tags, so you can think of this as a branch and a tag: I'm getting the CentOS 6 version of CentOS, and I'm running the command sleep 6000. So it shows that I'm running sleep 6000, and I started it a second ago. You see that little star next to the active host? The active host is now the other one. So if I run docker ps here, you're going to see there's nothing running, right? So I could run that same command there, except I'll make it a slightly different command so you can tell where it's running. And if we switch the active host back to adrian again, you get the idea.

So I intentionally made this presentation relatively short because I wanted a chance to have a real Q&A session. Can we bring up the lights? Because I can barely see you. Okay, so you had a question here in the second row. But I was talking about storage. The device mapper configuration is what's limiting you, yes. That's right.
Yeah, if you're using the device mapper back end, what it does is create two files when you initialize Docker for the first time: a metadata file where the container information is stored, and a sparse file where the actual data is stored. The sparse file by default is a 100-gigabyte sparse file into which it expects to fit 10-gigabyte containers. So that is a limitation of that storage driver. But the nice thing about Docker is that it has a pluggable storage engine, so if you wanted to put a superior storage system under it that would manage this in a smarter way, that's certainly possible. There are currently, to my knowledge, two implementations of back-end storage for Docker: one based on AUFS and one based on device mapper. Btrfs? Sorry, and Btrfs.

Question. Yeah, don't use containers as a security instrument for multi-tenancy on the same host unless you know exactly what you're doing and you're an expert in host security. The Linux syscall interface is hundreds of calls; it's relatively difficult to use that as an attack-surface defense mechanism. The only thing you can really do is wrap a mandatory access control around it, like an SELinux policy. The trouble with that is that once you've made a policy restrictive enough to limit people to running just a small set of instructions, the range of applications you can run on your cluster or on your host gets reduced. And a policy is on a host-by-host basis, not a container-by-container basis, so all apps that run on that host need to conform to the same policy, which is kind of a problem. So I would say containers are a great way to make your applications more portable. In the multi-cloud use case it makes a whole lot of sense: it's a way to bundle everything up, it helps you with immutable infrastructure, and it helps you stack more running applications on the same hardware. But if what you really want is a multi-tenancy scenario, you either need to be an expert in host security and be doing very smart things, or you need to be using virtualization in combination with containers. Did I answer your question?

Well, where the minions run is what's important in Kubernetes. Where the control plane runs is kind of a non-issue, but the minions are the location where you're going to start containers. So in the Kubernetes use case, you would make sure that your minions are deployed onto virtual machines instead of onto a bare-metal host that's going to have multiple tenants running potentially hostile workloads side by side. Now, I'm not saying it's a bad idea to do that if you know what you're doing, but you need a lot of security prowess, and kernel prowess, in order to succeed at making that a reasonably secure environment. There are a lot of potential pitfalls you can get trapped in. I'll get to you in just a second.

Right. Config files should be injected using environment variables to the extent that's convenient. You can also use something like etcd or ZooKeeper to hold your configuration data, pull it down, and generate the config file right before you start the application. That's fine; you can do that too. You can bind mount configurations, but then the configurations have to actually be on every host you plan on running the containers on, so you've still got to solve how to distribute them. If you're using Docker as a way to simplify your configuration management, then with that approach you haven't really simplified it as much as you could. Whereas if you put the configuration the container needs into a store like etcd or ZooKeeper or some other service registry, and you pull it down and generate the config file on demand inside the container right before you start the application, you're in much better shape. That's if you can't inject it using an environment variable. I didn't show how the environment variable injection works, but you can use -e and put a key-value pair there. So you can say -e adrian=tall, and then you'd have both the key and the value available in the shell environment where you're going to start applications, and in the docker run command itself. That's right: injecting configuration at runtime is the best practice for immutable infrastructure.
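As a rough sketch of those two injection styles, with hypothetical variable names, etcd keys, and image names of my own invention:

    # option 1: pass configuration as environment variables at run time
    docker run -d -e DB_HOST=db.example.com -e DB_PASSWORD=changeme myorg/myapp

    # option 2: inside start.sh, pull configuration from etcd and render the
    # config file just before launching the app (keys and paths are placeholders)
    DB_HOST=$(etcdctl get /myapp/db_host)
    sed "s/@DB_HOST@/${DB_HOST}/" /etc/myapp.conf.template > /etc/myapp.conf
    exec /usr/sbin/myapp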
Right here. Well, it's going to be higher performance than running them in a virtualized environment, for one thing. How you get containers to talk to other containers is the kind of unanswered question. There is a project called Weave that attempts to address that. It has some potential architectural constraints, and it's still early days; I'm sure those can be solved to make it more scalable, but it's one way you can make containers speak to other containers using an overlay tunnel network. If you have an application that does a ton of I/O, in my experience, like a ton of network I/O, I try to get as many things out of the way as possible, and using a container doesn't put anything in the way of the actual data stream. Whereas with virtualization, that I/O actually gets virtualized before it gets sent down to the CPU and then out the network interface. So in the container use case, you're not going through any virtualization layer; you just have a namespace around the process, so your view of which network interfaces you have access to is limited. I'll give you an example of this. We have a product at Rackspace called Cloud Databases, where we offer a hosted database service. We run that service in containers so that we can get high I/O both to the distributed block store where we store the actual data and over the network, for that very reason. With some exceptions: when you're running inside a container with Docker, there is an additional bridge, and you're typically mapping your network connectivity through a TCP port. If what you do instead is create a container that doesn't have a network namespace at all, and it's possible to do that, it just uses the host networking straight up. Then you wouldn't have any additional bridges; you'd have raw performance exactly the same as if the container weren't even there.

Question back here? It's not as active as it was. It's unclear which is going to win out. When libswarm was initially announced, we thought that having it highly modular, in a separate repo, would be more attractive. But what we found is that that actually limits adoption, because it's harder for people to acquire the tools and assemble them in the way they need in order to experience those features. So by building it into Docker directly, those features become more accessible.
So that's really the intent behind it. Whether that functionality actually comes from libswarm in the future may turn out to be true; we may end up using it as a library rather than having everything built directly into Docker. That's a conversation that's ongoing.

Question? Yep. Are these open source? Yep. You can. If you wanted to use the code that exists now, you would have to put your source files into the Docker source tree and build your own version of Docker to produce your own Docker binary, and you could absolutely do that. One of the things currently under discussion is a dynamic pluggable interface for the drivers, so that you could load your own without recompiling Docker. How exactly to accomplish that is still an open discussion.

Any more questions? I'm sorry, I don't understand the question. It's hard to tell. In my ideal future, public clouds and private clouds look the same, and you have the freedom to run the same thing on each. Docker, I think, gets us maybe one step closer to that ideal, where your interface to running the application is this container image bundle. Will you be running Kubernetes in public cloud? Well, Google announced earlier this week, I think yesterday maybe, that they are trying that out and seeing how it works, so that's a possibility. The trouble with Kubernetes is that it's not designed for multi-tenancy, so unless it's considerably reworked, I don't know that it's an ideal fit for a public cloud scenario; but in a private cloud it's perfectly fine. You can still run your instances, your minions, on public cloud and run your control system in a private scenario, so you may still be able to do that.

But in terms of the future, my view is that containers are useful for a particular set of problems, and people are currently solving those problems using virtual machines. How do I get a clean environment in order to build my application for a particular Linux distribution? One thing I didn't show you today is that Docker can run two containers side by side that are running completely different Linux distributions. I can have one running Ubuntu next to another running CentOS, next to another running Debian, all sharing the same kernel but with different user-space libraries. So I could run a command and say, bundle my application for all three of those operating systems. And oh, I want to run older versions; I need CentOS 4. Well, I can just spin up a container for CentOS 4 and bundle it for that. So where I used to have an entire array of virtual machines lying around for the purpose of building software and bundling it, I can now create that stuff on demand and destroy it as soon as I'm done, all on a very limited amount of hardware. It's an entire problem set that goes away. You asked me a question about the future; maybe I sidestepped it a little. Sorry.

More questions? Right. So just repeating for those that may not have heard: the question is about security, and what's the best practice for using hypervisors versus containers versus bare metal. Don't consider containers a newer, faster virtualization. Anybody who's trying to sell you containers as a newer, faster virtualization is probably stretching the truth a bit.
A container is only as secure as a hypervisor, in a multi-tenancy environment on the same host, if you're using very advanced kernel features to put a lot of restrictions on how that application can interact with the kernel. And in my experience, once you've added all of the appropriate restrictions around an application, you end up with an environment where it's difficult to run a wide range of applications, and the performance of those applications is degraded, in some cases more than if you had just virtualized them to begin with. So if your goal is multi-tenancy with hostile workloads neighboring each other, virtualization is still the right tool to use for that today. If your goal is tight packing of applications that are not hostile to each other, maybe they all belong to the same enterprise or the same department, then containers are an appropriate technology for that. And if your goal is to make a bundled environment that makes it very easy to distribute your application to different places that all have a Linux kernel in common, then it's the right tool for that. So just don't try to use containers for a job they're not ideally suited for.

Yeah. I currently don't have a preference; they're both trying to solve the same general problem. CoreOS is kind of elegant in the respect that you're always running current code. CoreOS has this concept of automatically updating itself, kind of like the Chrome browser; there's really only one version of the Chrome browser, because every time you start it up it automatically updates itself whether you want it to or not. Well, CoreOS is like that: it automatically upgrades itself. The drawback is that if you were running workloads on there that you didn't want interrupted, whoops, they're interrupted. So if you're using those hosts as cattle and you're using a distributed application design, that's perfectly appropriate. But if you happen to be running pets, then that behavior might be rather objectionable to you, in which case you might want something similar but without that characteristic; Project Atomic is probably a better fit in that case. You know, CoreOS is a startup based in San Francisco. The founder is Alex Polvi. I respect him tremendously, I work with him, he's a former Racker, so we've got that connection. But at the same time, we run an awful lot of Red Hat stuff at Rackspace. So it just depends on your needs. I'll take you first, and you in a second.

It does, it has an A partition and a B partition where the operating system is loaded. When it boots, initially you're on A, and when it gets updated, it updates B and then boots you into B. But that boot is actually a reboot, so everything that was running is gone and has to be restarted on B, so you do have an interruption in service.

Absolutely. You can nest them. Absolutely. Well, OpenVZ, you might make the argument, is more secure than a Docker container for various reasons. But the truth boils down to this: even in the OpenVZ case, and we use OpenVZ, right, so we're fans, there is still a shared resource, and that's the kernel. The kernel does have bugs, and they can be exploited. So to the extent that you have a buggy kernel, it is possible, even in the OpenVZ case, for somebody to escape one container and enter another. For that leakage reason, it's probably better not to do that until you understand exactly how to prevent that risk.
But yes, I didn't mention anything about the nesting of containers. You can create a container with a container inside of it, and you can create a container inside of that. If you did that with virtualization, things would get really crazy, so we don't like to think about it. In the container case, it's really not that bad, because we're not virtualizing the workload; the workload is actually running on the host, it just has a namespace around it. When you execute things on the CPU, there are no additional layers. The fact that I'm in additional namespaces really only matters for the PID namespace, where tracking the creation of a new process ID has to happen in each of the layers. Creating a process gets recorded across multiple PID tables and multiple namespaces, so it's slightly slower to start a process in a nested PID namespace. But we're talking fractions of a millisecond, really nanosecond-scale overhead, before you would notice it. So strictly speaking, there is an overhead to running nested containers; practically speaking, there's not. And there are good reasons why you might want to run nested containers. For those of you who are Kubernetes fans, there's the concept of a pod. A pod is a way to group containers onto a single host, and one easy way to do that is to create a single network namespace and then create containers in that namespace that do not have network namespaces of their own. They all share a common network, and then you can do things like layer-two communication between them without any mapping of ports. Does that make sense? Yeah? OK.

Question here? Yeah. Yep. I just want to knock out a few more. Yep. So there are lots of ways to do that. To my knowledge, Mesosphere, or Apache Mesos, has that capability. To some extent, Kubernetes allows you to signal a scale event: you can change the declaration that determines how many of something is running. So if you had an external controller that knew when it was time to scale, you could send the signals that way. If you were using Heat, you could create an autoscale group and use it to scale up and down. Heat also has a Docker provider plug-in, I think it's called, where you can interact with a Docker resource, so you could autoscale Docker resources in an autoscale group. That's another option. The trouble with autoscaling: I like to think of autoscaling as an intractable problem in the general case. All of the general-case implementations of autoscaling that I'm aware of are based on a CPU utilization trigger or some other naive trigger, and most applications that I work with do not actually scale that way; the scaling constraint is not CPU utilization, it's something else. So having a custom controller that's watching that something else, whether it's queue length or network saturation or memory bus activity or some other KPI that indicates when you should be scaling up and down, is actually more important. So in my view it's not as important that systems have autoscaling as that they have a way for you to declare that the scaling ratio should be changed, and the ability to plug in your own custom controller that determines the scaling behavior for that application. And if you want just basic scale-in and scale-out, that's what autoscale groups in Heat are for.
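As a rough illustration of that custom-controller idea, here's a sketch of a loop that watches a queue-depth KPI and nudges a Heat stack's size parameter; the stack name, template, parameter name, thresholds, and the command that reports queue depth are all hypothetical, and you'd replace them with whatever KPI actually constrains your application:

    while true; do
      depth=$(check_queue_depth)    # hypothetical: report the KPI you actually care about
      if [ "$depth" -gt 1000 ]; then
        # declare a larger cluster size; Heat reconciles the stack to match
        heat stack-update -f app-stack.yaml -P cluster_size=8 my-app-stack
      elif [ "$depth" -lt 100 ]; then
        heat stack-update -f app-stack.yaml -P cluster_size=4 my-app-stack
      fi
      sleep 60
    done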
Yes. Yeah, I think, you know, I can't speak for all of the community, but from my view the Docker community is extremely ambitious and they're willing to try a lot of stuff, so I wouldn't be surprised if you see that very soon. The Docker cluster work that I saw a week ago is pretty darn interesting. It does have the ability to do affinity and anti-affinity, much like we have in the affinity scheduler today for OpenStack. So having that capability built in is the first step towards doing it multi-zone, multi-geography, that sort of thing.

It depends. Active-active as an HA strategy is fine as long as your utilization is always less than n minus one. So if you're... talking about the cattle versus pets thing, right? Yep. That's right, by definition they're pets. So if you're going to do cattle and you're going to do active-active, you need to make sure your pool is large enough that as you lose capacity in any failure, your workload isn't larger than your available capacity after the failure. The general advice there is you do n plus two based on your expected capacity constraint, so that you can have two components failed at any given time and the system is still operating. I don't do Docker cattle for data services; I would rely on a data service, in my case the cloud database service, and use something like Galera replication as the solution for that. If you understand a persistence application very well, you can absolutely containerize it. But capacity management within containers is basically the same as capacity management within any other infrastructure; the discipline is exactly the same.

All right, thank you all. I'll be up here if you have additional questions. Thank you.