So, I'm James Bottomley. I'm CTO of server virtualization at Parallels, and I'm here today to talk to you about containers, both in Linux and in OpenStack: where they currently are, what they can currently do, and where they're actually going. With that, let me tell you a little more about me, just to explain why I'm the one doing this. I'm actually a kernel developer. I maintain a couple of subsystems within the kernel, SCSI and PA-RISC, which is a different architecture. I'm also chair of the Linux Foundation Technical Advisory Board, so although I'm a real developer, I'm also a slightly political animal as well. I also have a request from my marketing department: apparently everybody around here is using Twitter, so I do have a Twitter handle, and apparently you're supposed to use #OpenStack. I have to confess that I'm not a great user of social networks. I do actually have some tweets on my Twitter account, but I basically use it as write-only, so the best way to communicate with me is via email. One of the good things about being a kernel developer on the internet is that the internet definitely knows my email address, so you should just be able to find it; if not, you can ask me.

Today it's actually my pleasure, rather than talking about a kernel developer subject or even an OpenStack project, to be talking about something I do for my day job, which is Parallels. Parallels is a company that's been doing containers for a long, long time; the container history at Parallels goes back about 15 years. In 1999, SWsoft first produced Virtuozzo containers for Linux. In 2004 it changed its name from SWsoft to Parallels, having taken over a little virtualization company producing a hypervisor that was then called Parallels. And in 2005 it produced an open source version of Virtuozzo containers called OpenVZ, which is fully open source and available today. The reason most of you in the room probably haven't heard of Parallels, other than Parallels Desktop for Mac, which I gather the guys at the desk got a lot of questions about, is because we concentrate on service providers; we're not an enterprise company. Containers are a technology that is very prevalent in the service provider business. Anybody you buy a web server or a virtual private server from, or who does web hosting for you in a colo, is a service provider. Containers are also very prevalent right at the top end of web hosting, for companies who want scale and elasticity like Google, Facebook, and Twitter; they're all containerized too. But right in the middle sits the enterprise, and the enterprise didn't really know what containers were until about a year ago, when it started getting interested. The other thing I should say about Parallels is that not only do we do containers for Linux, we also do containers for Windows. So although Virtuozzo is primarily a Linux product, most of the features available for Linux are also available in the Windows container product.

Okay, that's enough about product. Let's discuss containers. One of the fundamental differences between hypervisors and containers is that hypervisors are based on emulating virtual hardware. The hypervisor itself emulates hardware, you boot up a kernel on top of that hardware, and you run stuff on top of that kernel. It's effectively virtualizing your machine at the hardware level.
Containers are not based on this paradigm. Containers are based on the paradigm of sharing the operating system. Every time you boot up a container on Linux, you share the same kernel, and how much of the rest of the operating system you share depends on how you've actually configured the containers. This is very different from a hypervisor. In a hypervisor, every instance of the virtual machine is totally separate from every other instance. In a container, every instance can in theory share everything with the host, or almost nothing with the host. The one baseline it can't go beyond is that they have to share the kernel; there's no choice about that. That means you can't bring up, say, a Windows container and a Linux container on the same box, because it's impossible for Windows and Linux to share the same kernel.

Okay, so this is what I've just told you, projected up here. This is what a hypervisor looks like: you get the hardware, you get the hypervisor kernel, the hypervisor kernel emulates virtual hardware for each of those machines, each machine runs a separate kernel, and each runs a full operating system all the way up to the top. That is how you do hypervisor-based virtualization. With containers it looks almost the same. These are full operating system containers, the light blue bits, but they share the operating system kernel at the bottom. To boot up a Linux operating system in a container, you basically start at init, because the kernel is already there. What that means, just from this diagram alone, is that containers are a lot faster to start than hypervisors, because they don't have to worry about initializing virtual hardware and booting up a kernel; they can begin straight at init. If I'm really, really clever (as I told you, you don't have to share everything, but you can share other things with the host), I could set up a container that looks like this. Here, everything in black belongs to the host operating system: it has the operating system, it has the libraries, and all I do is start my application itself inside the container. Starting an application like that is very, very fast; it's basically just how long it takes your process to get going. This means the start times for containers, versus the start times of hypervisors, can be of the order of milliseconds as opposed to seconds. They can be thousands of times faster just to start and stop, which is where they get some of their elasticity properties from as well.

If you compare them side by side, you can see that the stack for containers is also a lot thinner than the stack for hypervisors. There's a lot less junk in this container stack, especially if I do the application-only version, than there is in the hypervisor stack. All of this means that containers are actually much smaller than hypervisors. In fact, a full hypervisor stack is a full operating system stack; it's of the order of gigabytes. There are some companies out there, companies like Cloudius doing OSv, who specialize in trying to strip as much of the rubbish out of that hypervisor guest as they can, and they can get it down to hundreds of megabytes. But that doesn't compare with an application container, where I can get down to megabytes, and that's it. So that's a saving of a factor of 100 or 1,000 in terms of actual size. And for this reason, containers are also incredibly fast to do anything with.
Because they're so tiny, and there's so little junk in them, I can actually move them about and scale them with a lot more speed than I can a hypervisor. The lightness of the container is what makes them far more dense and far more elastic. But there's more, of course. Containers, because they use facilities that live in the Linux kernel, can scale both up and down much faster than a hypervisor can. Scaling a container is effectively just altering a resource limit. If I want to add more memory to a container, I just tune its memory limit up or down; there's no mucking about with balloons or anything else. Their size also makes them easily transferable to other machines, so this gives us instant horizontal scaling: because they're so tiny, I can just project them out over more and more machines, and because I can adjust the limits immediately up and down, I can scale them vertically too. This makes containers factors of three to hundreds of times more elastic than hypervisors, and this is some of the promise the enterprise gets out of container technology. And there's none of this messy hypervisor junk, because in order to do the same thing a container does, a hypervisor has to speak at the hardware level. Adding memory to a hypervisor guest is easy enough: you just do memory hotplug. Taking memory away from it is pretty hard: you have to cooperate with the guest operating system to inflate a balloon and then take the memory away from inside the balloon. It's doable, but it's not instantaneous like it is with a container. With a container, I just tune one limit and it's done. That's it. With a hypervisor, you have to go through the hardware interface and get a cooperative driver to do whatever you want to do to actually do the scaling. Realistically, all of this makes containers far, far more elastic than hypervisors. They're far more elastic and far denser, and this makes them a much better fit for the cloud, where everything is homogeneous, because there's just far less junk to move around. If you think about container disadvantages, the main one is that they have to share the same kernel. That means on one physical system I cannot boot up both Windows and Linux. But in the cloud, pretty much everything is homogeneous. Most clouds are homogeneous environments, and most people don't really care that you're running only a single operating system, or a single family of operating systems. So for the cloud, containers are actually an ideal fit. The one disadvantage that makes them somewhat unsuitable for the desktop, or perhaps even for enterprise virtualization, the fact that they can't run completely different operating systems (they can run different flavors of the same operating system), just doesn't exist there.
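Going back to the point that scaling a container is just altering a resource limit: here is a minimal sketch of what "tune one limit" amounts to in practice, a small C program that rewrites the memory limit of an existing cgroup. It assumes the cgroup v1 memory controller is mounted at /sys/fs/cgroup/memory and that a group named "mycontainer" already exists; both the path and the group name are placeholders for illustration, not anything a particular product ships.

```c
/* Minimal sketch: resize a container's memory allowance by rewriting
 * one cgroup file.  Assumes the v1 memory controller is mounted at
 * /sys/fs/cgroup/memory and that the group "mycontainer" exists;
 * both are placeholders, adjust them for your own system. */
#include <stdio.h>

static int set_mem_limit(const char *group, long long bytes)
{
    char path[256];
    FILE *f;

    snprintf(path, sizeof(path),
             "/sys/fs/cgroup/memory/%s/memory.limit_in_bytes", group);
    f = fopen(path, "w");
    if (!f) {
        perror(path);
        return -1;
    }
    fprintf(f, "%lld\n", bytes);   /* the kernel applies the new limit at once */
    fclose(f);
    return 0;
}

int main(void)
{
    /* Grow the container to 2 GiB: no ballooning, no guest cooperation. */
    return set_mem_limit("mycontainer", 2LL * 1024 * 1024 * 1024);
}
```

Shrinking is the same write with a smaller number; the kernel then tries to reclaim the group's pages immediately, which is exactly the instantaneous behaviour being contrasted with ballooning above.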
However, if you listen to hypervisor people, they often argue that because the application is the same, anything you can do with a hypervisor you can also do with a container. This is the argument that container virtualization technology is really exactly the same as hypervisor virtualization technology, just slightly faster and slightly denser. The same arguments were made about paravirtualization. I don't know if any of you remember what paravirtualization was, but it was based on the theory that the hardware couldn't run fully virtualized kernels anywhere near as fast as native, and therefore you had to physically alter the kernel that was running on the hypervisor to make it run nearly as fast as bare metal. This was the original Xen paradigm, and the contention in those days was that paravirtualization was required because there was no way the hardware could run fast enough. Well, fast forward to today. Who runs paravirtual kernels today? Anybody in the room? One person, well done. We have one person still running paravirt kernels. But the reality is that Intel and AMD added all these extra hardware features that were purely for hypervisors, and they basically closed the gap between full virtualization and paravirtualization until it almost didn't exist. Hypervisor people use the same argument to say that, thanks to hardware tricks, hypervisors will eventually catch up with containers because they'll just run fast enough. I don't believe this is true, because it's now the same operating system running in the same box: if you make a hypervisor ten times faster, I'm afraid the same container goes ten times faster too. But it is the same argument.

Let's put it into context. A long time ago, when the world was still young and computers were things like this, massive valve-based systems in huge machine rooms, this, depending on how you read history, was the world's first computer. This is EDSAC. EDSAC was built by Cambridge University after the success of the British War Office computer. The guy on the left in the white coat is Professor Maurice Wilkes, who actually built the world's first computer. To them, this thing looked like a marvel of technology, and it took a team of people to run it. To us today, EDSAC has less computing power and less memory than the mobile phone I currently have in my pocket. I think it's in this pocket. I'm a bit of a technology Luddite, so this is an old Google Nexus One, vintage about five years ago, but it still works for me. This phone has more computing power and more memory capacity than that computer ever did. But they're both Turing machines. In theory, I can make the argument that whatever EDSAC could do, my phone can do, and whatever my phone can do, EDSAC could do. This is a Turing argument: they're both Turing machines, so any program I can run on my phone I could, in theory, run on EDSAC. Under that argument, EDSAC must be as useful as my phone. The slight problem is that there was no way I was ever going to be able to put EDSAC in my pocket. Nobody ever thought of making a phone call on EDSAC, just because it wasn't something they did; they had these tiny little desktop things called telephones for doing that. The point of all this is that things may be Turing equivalent, and hypervisors and containers are Turing equivalent because they're both virtualization mechanisms, but the paradigm you use them in is not the same thing. The reason nobody ever thought to make a phone call on EDSAC is that when you looked at it, it wasn't the sort of thing you thought would ever be capable of that, even though its computing parts would actually have been capable of it. Nobody thought about it because they didn't think in a paradigm that said a phone is a computer, because in those days phones weren't computers. The fact that computers could be phones never occurred to them.
The point about this is that if you think in the wrong paradigm, some problems that appear simple to some beings, say beings with legs, look insurmountable to beings with wheels. This is a very famous cartoon, I think it's a New Yorker cartoon, about Doctor Who and the Daleks: it's why the Daleks could never conquer the universe, because they could never climb stairs. Because they thought in the wrong paradigm, they could never work out how to do it. In fact, I think it was 30 years later in Doctor Who that the Daleks finally learned to climb stairs, but that's still 30 years. The point is that containers represent a different paradigm, a different way of looking at things, and we'll come back to that later in this talk.

The first thing I actually want to tell you is how containers in Linux are doing. 2005 was when we actually open sourced Virtuozzo containers for Linux. That was the first open source container technology, but it was out of tree. 2006 was when what was then called process containers, and is now called cgroups, came along. 2007 was when Google used cgroups to containerize search; this is the year the Googleplex went pretty much fully containerized. 2008 is when LXC version 0.1.0 was released, so that's when people running ordinary Linux could actually do things with containers. The problem in 2008 was that there wasn't much in the way of technology in the Linux kernel that made containers very functional. So there were some users, and we had a lot more users on OpenVZ, but by and large the enterprise regarded containers as a curiosity. 2011 is when we vowed to change all of that. In 2011, all of the container people got together on the fringes of the Kernel Summit and decided that we would do something about containers. Two of us, Parallels and Google, actually had out-of-tree container technology. We sat down with the people who did have in-tree container technology and we came to an agreement about how we would move all of this upstream. We had also seen the disaster that resulted from KVM and Xen. KVM and Xen both exist as Linux hypervisor technology, but inside the kernel they're fully separate subsystems; they share almost no code. And the disaster with KVM and Xen, for you in the enterprise, is that it forces you to choose. How many people have had to do virtualization with Linux and had to choose between KVM and Xen? Is it something you do fairly commonly? That's about 10% of the room. Choice for the enterprise can sometimes be a bad thing. So we all came together and we agreed that there would only be one container technology in Linux, only one underlying technology for containers. It involved a lot of very painful choices for all of us, because all of the out-of-tree container technologies had their own stuff that replicated what was being done in tree. But we all agreed that we would work together to strengthen the in-tree technology and make it the de facto way that containers work in Linux. And we also agreed that it would be the best from all of us: if there was some out-of-tree technology that worked better than the in-tree technology, we'd take it into the tree and kick the old in-tree technology out. The net result is that we began container unification at the kernel API level in 2011. What's now in the kernel is cgroups and namespaces, which is the technology that everybody uses to orchestrate containers. That's everybody: that's us at Parallels, that's LXC, Docker, ZeroVM. Anybody who has a container technology uses this.
And in 2013, the first Linux kernel supporting OpenVZ with no kernel patches at all was actually released. So you can now, on a community distribution like Fedora, openSUSE, or even Ubuntu, bring up OpenVZ with no kernel patches at all. This is a demonstration of the technology unification behind containers: LXC will work the same on that kernel, OpenVZ will work on that kernel, Docker will work on that kernel, ZeroVM will work on that kernel. Everything is now unified at the kernel API level. The problem is that 3.12 is way beyond where all the enterprise kernels currently sit. So if you're from the enterprise and you're running an enterprise kernel like RHEL or SLES, you're stuck on something like 2.6.32; you're about three years behind the kernel that is actually required to run unified container technologies.

So let me tell you about some of these new, or not so new now, actually fairly old, container technologies. Cgroups are something that allows you to control the resources allocated to groups of processes. That's all they do. They're basically a bean counter for the kernel, and they bean count in groups of processes. The buckets are, well, there are lots of buckets; the important ones are CPU, memory, I/O bandwidth, network bandwidth, and so on. That basically means you can use cgroups to take a group of processes and restrict the amount of your machine's resources they consume. This is how we impose resource limits on all containers, using the cgroup mechanism. If I have a 10-CPU machine and I only want the container to use one CPU, I fire up the CPU cgroup and I make it run with only one CPU's worth. Namespaces are the way we isolate things inside containers. They separate resources by making them visible only to the processes within those namespaces. The namespace you're probably most familiar with is the network namespace: once a network device belongs to a network namespace, it's only visible to processes that are also sitting inside that network namespace. There are in fact six namespaces now within the kernel. Network you probably know about. UTS is how we virtualize the hostname: the machine name, its domain name, its NIS domain, and all of these other fun parameters that are stored in the kernel are virtualized by the UTS namespace. The mount namespace allows us to have a different mount tree per container if we so choose. The IPC namespace is used for inter-process communication, making the IPC semaphores unique per container. The process ID namespace allows each container to have its own process tree that starts at PID 1, which is required if you're running an init, because init gets very annoyed unless it runs as PID 1. And the last one is the user namespace, which is what we actually use to do security separation within containers. The point, though, is that containers can use all of these in combination, or any of them, or indeed almost none of them. However heavy or light you want to make a container is a choice you make. And if you look at the number of things you can choose from, I believe there are 12 cgroup controllers and six namespaces, so it's a pretty big chocolate box for you to choose from.
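To illustrate the chocolate box, here is a minimal, hedged sketch of picking namespaces à la carte with nothing but raw clone() flags: the child below gets its own PID, UTS, and mount namespaces, so it sees itself as PID 1 and can change its hostname without touching the host. It needs root, and a real container would of course add cgroups, a private root filesystem, and the remaining namespaces on top of this.

```c
/* Sketch: choose namespaces at clone() time.  The child gets new PID,
 * UTS and mount namespaces, sees itself as PID 1, and can set a
 * hostname that the host never sees.  Needs root (CAP_SYS_ADMIN). */
#define _GNU_SOURCE
#include <sched.h>
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

#define STACK_SIZE (1024 * 1024)
static char child_stack[STACK_SIZE];

static int child(void *arg)
{
    sethostname("container", strlen("container"));  /* private to this UTS ns */
    printf("inside: pid=%d\n", getpid());            /* prints 1              */
    execlp("/bin/sh", "sh", (char *)NULL);           /* the "init" of this box */
    perror("execlp");
    return 1;
}

int main(void)
{
    pid_t pid = clone(child, child_stack + STACK_SIZE,
                      CLONE_NEWPID | CLONE_NEWUTS | CLONE_NEWNS | SIGCHLD,
                      NULL);
    if (pid < 0) {
        perror("clone");
        exit(1);
    }
    printf("outside: container init is pid %d\n", pid);
    waitpid(pid, NULL, 0);
    return 0;
}
```

Dropping or adding flags in that one clone() call is exactly the "heavy or light" choice described above.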
There's also been a lot written about container security. There are contentions all over the place that containers are not actually as secure as hypervisors. This is not really true: at Parallels, with Virtuozzo, we've been running secure containers for at least 10 years. I'd like to say 15 years, but I have to confess that in 1999 the Virtuozzo containers weren't that secure. Hostile root is actually a requirement for hosting providers. When you buy a virtual private server from a hosting provider, they give you root. They don't know that you're some wonderful person who comes to the OpenStack Summit and wouldn't dream of hacking the system; you could equally well be Mr. Black Hat, who has slapped down his 10 euros and is going to hack into the system and try to compromise the whole thing. So hostile root running in a container was a requirement before hosting providers would use containers at all, and it's something we have been doing in containers for 10 years. The allegation that you cannot give out root in a container is simply untrue, but you do have to be careful to set up the correct security environment before you do it. We achieved this at Parallels with something called capabilities. Capabilities are a way of adding or restricting the things a user can do. Effectively, we gave the nobody user the root capabilities within the container, which means that if that user escapes from the container, that user is just nobody in the host. Root in the container is not root in the host. This is how we maintain container security. The enterprise has had a nice little sideline in security contexts like AppArmor and SELinux, doing security labeling to try to preserve security in containers. But this is not sufficient for hostile root, and the reason is that root in the container is still root in the host: if anything happens and root breaks out of the container, it is automatically the superuser on the host and it can destroy or damage the entire physical system. As part of the agreement in Prague in 2011, user namespaces became the way we do this security inside containers. The slight problem is that, well, okay, there were a few distributions out there that did, but a lot of the major distributions did not begin enabling user namespaces until 2014. This means that if you bring up LXC containers or OpenVZ containers on any distribution from before about 2014, you do it without this security: you do it without the user namespace, just because it's not enabled. Fortunately, in the next generation even of enterprise operating systems, user namespaces will be enabled, so this security will be yours by default. But the point I'm making is that a lot of the complaints about security in containers come from people comparing containers without the security mechanisms actually turned on. It's sort of like complaining that your car doesn't have a body because you bought it in parts and you forgot to buy the body shell.
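Since the user namespace is the piece that makes "root in the container is not root in the host" true, here is a hedged sketch of the mapping itself: it maps uid 0 inside a new user namespace onto uid 65534 ("nobody") outside, so the child believes it is root while the host sees an unprivileged process. The 65534 value is just a placeholder (a real setup would use a dedicated subordinate uid range), and gid_map handling is omitted for brevity.

```c
/* Sketch: root inside the container, nobody outside, via a user
 * namespace.  Run as root.  uid 65534 stands in for the host's
 * "nobody" user and is only an illustrative choice. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <sched.h>
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

static char stack[1024 * 1024];
static int pipefd[2];                 /* parent signals "uid_map is written" */

static int child(void *arg)
{
    char c;
    close(pipefd[1]);
    read(pipefd[0], &c, 1);           /* block until the mapping exists      */

    setuid(0);                        /* become uid 0 *in this namespace*    */
    printf("inside : uid=%d\n", getuid());   /* prints 0                     */
    /* ...but the host's process table shows this process as uid 65534.      */
    return 0;
}

int main(void)
{
    char path[64];
    int fd;

    pipe(pipefd);
    pid_t pid = clone(child, stack + sizeof(stack),
                      CLONE_NEWUSER | SIGCHLD, NULL);
    if (pid < 0) { perror("clone"); exit(1); }

    /* uid_map format is "inside outside length": container 0 == host 65534. */
    snprintf(path, sizeof(path), "/proc/%d/uid_map", pid);
    fd = open(path, O_WRONLY);
    if (fd < 0 || write(fd, "0 65534 1", 9) < 0)
        perror("uid_map");
    if (fd >= 0)
        close(fd);

    close(pipefd[0]);
    close(pipefd[1]);                 /* release the waiting child           */
    waitpid(pid, NULL, 0);
    return 0;
}
```

If the child ever escaped from its namespace, any credential it carried would be the unprivileged host uid, which is the whole point of the mapping.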
But what I've been exhorting you to think of is containers as the new paradigm. What you can do to take true advantage of the stuff containers have to offer is design systems with containers specifically in mind. The first of these systems to hit the market is something called Docker, and Docker has been generating a lot of excitement as well. Docker uses containers to create lightweight packages for applications, which gives them instant portability. This is why the enterprise is paying attention to it. But the point I want to make to you today is that Docker is not Linux containers. Docker is an application which uses containers. Docker is a consumer of container technology, but it's one consumer that has gained an awful lot of attention. So what if we could actually encourage more applications to be containerized and consume the full possibilities of containers?

What could we do if we expanded our horizons and, instead of thinking of Docker as Linux containers, thought beyond Docker: what else could we containerize to take advantage of this? Before Docker, everybody was happy with Puppet and Chef exporting stuff into hypervisors; when Docker came along, it suddenly looked a lot faster and a lot neater to do it the Docker way. What other tasks do you do in the enterprise that adapting to containers would make easier? And how would you do it, of course? One of the things that's most obvious in the cloud is tenancy. Tenancy is a big problem for most cloud applications. In fact, most of the large cloud applications are specially written to be multi-tenant, and lots of teams of developers, hundreds strong, spend ages working on huge cloud applications making them multi-tenant. But if you take a single non-tenant application, you can easily make it multi-tenant just by containerizing it. The way I'd do it, if you asked me to: I'd take the application and give it a mount namespace, so it now has a private data store, because that allows me to project data privately into the container. I'd give it a network namespace, so it has its own IP address that I can project a new address onto. Then I'd fork it N times, with a new set of namespaces for each fork. Each of those N forks, running in separate containers on that hardware, is now a multi-tenant version of that application. Simple. Done in, well, I told you five minutes; it would probably take me a few hours, but that's me doing it on my own instead of a team of a hundred engineers. This is another advantage containers could give you if they were used correctly in the enterprise today. And of course this fork is also something that scales across nodes, because I can now pick up the namespaces with the forks in them and move them to multiple different physical machines as well. Just by containerizing this application to make it multi-tenant, I've also made it instantly scalable, both vertically into the resources of the single node and horizontally across multiple nodes. So this has basically made my entire application cloud ready. I was going to give you a brief aside into how container migration is done, but because the last session overran and we had trouble with the video, I think I'll skip it. We do have another open source project that actually allows you to do container migration, though. But the most important thing we really think we need to do today is to enable novel uses of containers. We want more applications like Docker, both in the enterprise and outside it. I want them because I'm selfish: we're a container company, and the more applications that require containers to run, the more of our container technology I sell. But for you, it should also be a way of gaining novel function in whatever applications you're writing, function that allows you to appeal to your customers. And the key is providing containers in an easy-to-use form. What we've actually done at Parallels is that one of our engineers has created a library, and an open source project around it, for doing exactly this. That's the GitHub repository for the library up on the slide: github.com/xemul/libct.
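To give a flavour of what that multi-tenancy recipe looks like at the raw system-call level, and of the boilerplate a library like libct is meant to hide, here is a hedged sketch: it launches N copies of a single-tenant binary, each in its own mount and network namespace, so each copy can be handed a private data directory and, once a network device is wired in, its own IP address. The binary path, data directories, and tenant count are all made-up placeholders, and the network plumbing is left out.

```c
/* Sketch of the "fork it N times" multi-tenancy trick: each tenant runs
 * the same single-tenant binary, but in a private mount and network
 * namespace.  Paths and counts are hypothetical; run as root. */
#define _GNU_SOURCE
#include <sched.h>
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/mount.h>
#include <sys/wait.h>
#include <unistd.h>

#define TENANTS 4

static int tenant(void *arg)
{
    long id = (long)arg;
    char datadir[64];

    snprintf(datadir, sizeof(datadir), "/srv/tenant%ld", id);

    /* Private view of the filesystem: stop mount events propagating back,
     * then bind this tenant's data directory over the path the app expects. */
    if (mount("none", "/", NULL, MS_REC | MS_PRIVATE, NULL))
        perror("make mounts private");
    if (mount(datadir, "/var/lib/myapp", NULL, MS_BIND, NULL))
        perror("bind data dir");

    execl("/opt/myapp/run", "myapp", (char *)NULL);  /* hypothetical binary */
    perror("execl");
    return 1;
}

int main(void)
{
    static char stacks[TENANTS][256 * 1024];
    long i;

    for (i = 0; i < TENANTS; i++) {
        /* Fresh mount + network namespace per tenant. */
        pid_t pid = clone(tenant, stacks[i] + sizeof(stacks[i]),
                          CLONE_NEWNS | CLONE_NEWNET | SIGCHLD, (void *)i);
        if (pid < 0)
            perror("clone");
        else
            printf("tenant %ld running as pid %d\n", i, pid);
    }
    while (wait(NULL) > 0)
        ;
    return 0;
}
```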
libct is a fairly raw, not yet fully fledged container library that allows you to consume container properties on any kernel we have today: an OpenVZ kernel, an upstream Linux kernel (it has to be a fairly modern one), and so on. Docker had to write this layer almost from scratch when they started, so we thought that giving everybody a leg up would be a way of getting containers more easily consumed by the enterprise. It exposes container features directly to applications, and it can also be used to bridge to older container technology. Remember that the APIs are different between something like an old OpenVZ kernel and an upstream kernel, because we merged the APIs upstream; if you use something like libct, we can bridge the divide between the different APIs, and your containerized application will run just as well on an upstream kernel as it would on a much older OpenVZ kernel with a different API. One of the things we can use this to do is deploy Docker onto OpenVZ kernels earlier than 3.8, which is usually the Docker barrier because of the container technology it needs. We have also discussed doing backends for Solaris Zones and for Windows containers. If we do this, it gives us the ability to use Docker not only to deploy for Linux, but also to deploy for Solaris and Windows almost immediately. It's quite an interesting possibility.

One of the other problems that people discuss a lot is containers in the enterprise. The enterprise actually has significant investments in hypervisor-oriented hardware: things like network function virtualization (NFV) and SR-IOV are hardware designed for hypervisors. The theory is that they just won't work for containers, because they work by projecting the hardware interface. Remember, hypervisors virtualize at the hardware level, while containers virtualize the kernel; this hardware projects its interface directly from the hypervisor into the virtual hardware that the guest kernel runs on, and obviously we can't do that with containers if we all share the same kernel. So let's look at it. This is how NFV, and indeed SR-IOV, work: you have a physical interface which projects virtual functions, you use hypervisor pass-through to present those virtual functions straight up into the virtual hardware, and then the kernel of the virtual environment takes advantage of them. Very simple. With a container, we think we can do it in almost exactly the same way, except that instead of projecting the virtual function into separate virtual hardware, which we don't have, we attach a driver for the virtual function in the single shared operating system and then use a namespace to project whatever device appears there, be it a network device or anything else, straight up into the container. So in theory we can take advantage of both SR-IOV and network function virtualization using containers. Nobody's actually produced a proof of concept for this so far, but this is the way we think we're going to do it; this is the futures part of the talk that I promised you. And in theory, if you look at those two diagrams, the path between the kernel and the network is shorter in the container case than the path between the hypervisor and the network, so doing it this way should actually give you more performance, which is useful, and it means that the investments you've made in hypervisor-oriented hardware will not be wasted. Like I said, we can do this identically with network function virtualization too.
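Here is a hedged sketch of the namespace half of that idea: a "container" is started in its own network namespace, and the host then pushes the virtual function's network device into it. The VF name enp3s0f0v0 is a placeholder, and the device move is delegated to the stock iproute2 ip tool rather than raw netlink to keep the sketch short; a real proof of concept would do the netlink calls itself and would also bring the interface up and address it inside the container.

```c
/* Sketch of pushing an SR-IOV virtual function into a container: the
 * VF's driver lives in the one shared kernel, so all we do is move its
 * network device into the container's network namespace.
 * "enp3s0f0v0" is a placeholder VF name; requires root and iproute2. */
#define _GNU_SOURCE
#include <sched.h>
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

static char stack[1024 * 1024];

static int container(void *arg)
{
    sleep(2);                                   /* crude: wait for the VF   */
    execlp("ip", "ip", "link", "show", (char *)NULL);  /* now lists the VF  */
    perror("execlp");
    return 1;
}

int main(void)
{
    char cmd[128];
    pid_t pid = clone(container, stack + sizeof(stack),
                      CLONE_NEWNET | SIGCHLD, NULL);
    if (pid < 0) { perror("clone"); exit(1); }

    /* Move the VF's netdev into the container's network namespace.  From
     * here on, traffic through it runs at host-kernel speed inside the
     * container, with no hypervisor in the path. */
    snprintf(cmd, sizeof(cmd), "ip link set enp3s0f0v0 netns %d", pid);
    if (system(cmd) != 0)
        fprintf(stderr, "failed to move the VF (does it exist?)\n");

    waitpid(pid, NULL, 0);
    return 0;
}
```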
But the point here is that by thinking differently about the problem, you can even reuse things that you would assume just wouldn't apply to containers because they're hardware-based.

So, containers and OpenStack. We do have a few in-tree container drivers in Nova, but they are not very functional. Docker actually has a Heat driver as well, and that one is pretty functional. There are lots of out-of-tree drivers: one for Parallels containers, the LXC driver, and the OpenVZ one. LXC also has an in-tree driver that goes via libvirt, but if you use Canonical's distribution, I believe they're currently shipping the out-of-tree direct LXC driver. Longer term, we think we might be able to use libct, the container library we'd like to offer you for containerizing applications, as a single control plane into Nova for containers, in the same way that libvirt is for hypervisors. This would allow us to present the full power of container technology through Nova directly to consumers in OpenStack. Obviously this is an API challenge, because right at the moment the upper APIs presented by Nova are hypervisor APIs, not container APIs. But if we did this and we do the backend drivers, we might also get Solaris and Windows containers orchestrated in Nova for free, so it's worth considering. I was hoping we might have had the meeting where we agreed on this before I had to tell you, but I can say that the containers people are coming to agreement that it's something we should be doing. We have yet to convince the Nova people of that, so that's our homework for this design summit.

So, sorry, I'm overrunning badly now: conclusions. Containers are here to stay. They have security and isolation features to match hypervisors. They can also be used in a much more granular fashion, which makes them much more interesting. They can make use of hardware you've already purchased, like NFV and SR-IOV. And we haven't even scratched the surface of the possible applications we can build with containers; hopefully that's going to be your job, and we are definitely intent on enabling you to try. With that, I'd like to say that this presentation was done in impress.js, if you liked it. I actually hacked it up myself, so it's on my git.kernel.org site; rather than a kernel developer, I've become a web developer. And with that, I'd like to say thank you, and I presume we don't have time for questions. Oh, we do, okay. If there are any questions, we'll entertain a few.

Hi, we know that signal processing in a hypervisor is problematic and slows down quite a bit. Could you explain how it works in a container: is it slowed down, or is it at bare metal speed?

So you're talking about signal processing where you're actually using an ASIC to process the signal? Yeah. So remember that the container runs in the same kernel as the host; it shares the same kernel. So the driver you use for that ASIC is already present in the kernel you're running on, and if you project that device using a namespace into the container, it just runs at the bare metal speed of the host kernel. So yeah, it just works. It should just work: anything that just works in the host kernel can be made to just work in the container as well. Okay, thank you.

So, great presentation, and a couple of questions. What kind of workloads do you see going into containers, and what kind of workloads do you see not going into containers?
That's number one. And number two, where do you see the config management space in that whole ecosystem, Chef, Puppet, CFEngine, evolving as more and more of these container workloads go into production?

So the first question was: which workloads do I see as being for, and not for, containers? Right at the moment, what we're really seeing is that workloads that need scale and density are the first ones adopting containers. The real primary user has been database as a service; that's the Trove project that's currently under discussion. I believe they're actually having their discussion now, which is why they're not here. But longer term, it looks like the enterprise is moving into a mindset where it has to do business from its own enterprise data center out to end customers, and that makes scale and density a requirement for almost every application that's put onto the web. So I see database as a service as the leading-edge application, but on the trailing edge I could see almost everything being containerized, just to take advantage of the scaling properties containers give it. And sorry, could you repeat the second question? The config management space. Okay, so in the config management space, Docker is actually showing us how we go forward. They are producing what's effectively a template diff in order to manage different configurations for a container, and therefore give you an environment you can transport seamlessly between a laptop running one distribution, a Red Hat laptop, a SUSE laptop, or even the cloud. So that's one mechanism for managing containers. Obviously things like Chef, Puppet, and CFEngine are also deployment systems. They too could take advantage of containers; they could take advantage of the binary diff features that Docker uses; and they could be used to manage containerized applications. The problem with containers, which I've slightly sidestepped, is that, like I said, there are 12 cgroup controllers and six namespaces to cope with, and just trying to work out which ones you should choose is a hard job. It's not like hypervisors, where there's one choice and you get a virtual environment. Choosing the right environment can be the difference between making your application run beautifully and making it run horribly. Containerizing at the right level is a much harder thing to do, just because you have much more choice about where it's done. And I dissed Xen versus KVM for giving the enterprise more choice than it can cope with, and here am I giving you even more choice. But in the end, we'll work out how to do it efficiently. And I think, if you've tried it, containerizing an application with Docker is very easy; they've got that use case down pat, and they do it with a much more reduced subset of namespaces than a full operating system container uses. Great. Thank you.

Hi. How effective are containers at isolating virtual memory?

So the question was: how effective are containers at isolating virtual memory? I believe that was a section of the slides I skipped over. The one thing we don't have upstream in the kernel yet is the kernel memory accounting piece of the memory cgroup. There's a guy at Parallels called Vladimir Davydov who is still pushing the patches, and we're still arguing over them. So right at the moment, in the currently released kernel, we can exactly control user memory.
We can account for it and we can attach it exactly to different processes. The thing we currently cannot control via cgroups in the kernel is kernel memory. So a container today can actually mount a denial of service attack where it just creates inodes and dentries and more dentries until it runs the underlying kernel out of memory. When we finally get the kernel memory pieces of the memory cgroup upstream, you will not be able to do that: we'll be able to confine the kernel allocations made on behalf of a container to that container as well, and at that point we will have full memory isolation for containers. So the answer is that we don't have it today. My open source roadmap for the Linux kernel, which I keep quietly at Parallels, says we will get the memory cgroup kernel memory accounting in around the 3.16 kernel, so I'm hoping that will be the kernel where containers finally, finally work just like they work in OpenVZ.

Hi, great presentation, thank you. Docker seems to be pretty popular, obviously; it gets lots of press, and it seems to be a pretty easy way to get started experimenting with containers. But it also by default puts a couple of restrictions on what you can do, and doesn't give you access to all the same things that the kernel has. Is there another approach besides Docker that you can recommend for somebody who wants to experiment with containers?

So, are you asking me whether you can do it, or are you asking me whether there's anything that makes it easy? No, I'm asking whether, for people who want to experiment with containers, there is an alternative besides Docker, maybe one which exposes a bit more of the knobs and wheels. There's LXC, there's OpenVZ. I'm not sure about ZeroVM, it's based on a different use case, but the LXC tools will allow you to be much more granular about what the container does. Docker, for the most part, orchestrates its containers using LXC, so if you look at the LXC tools, they'll show you what they can do. When you become a dab hand at containers, and I've been doing it for a long time, you don't even bother with the tools: there are commands in Linux, like an IP namespace command you can use for creating network namespaces and a mount namespace command, so you can physically construct the container on your own, but it's much harder. So there are lots of tools that exist to let you do it. The basic problem we have is that there are too many tools; it's too easy to get confused by all of them and not be able to work out what's been done. So which set of tools would you recommend? If you want to run a container as a full-fledged virtual operating system, OpenVZ is the tool you should be using. If you want to experiment with the granular capabilities of containers and see what types of applications you can put together, LXC and libct, which we just presented as an open source project, are probably the things you should be looking at. Great, thank you very much.

Maybe the last one. Oh, no, this is going to be the last question; they're cutting me off now. So, one of the things you skipped over was container migration, which for large cloud operators is potentially quite important. Could you circle back and not skip over it? They'll kill me if I do that. But basically, there have been many attempts to get container migration into Linux, and they all failed because they were done inside the kernel.
When you try to add migration inside the kernel, there's so much information you have to haul out that the patches just aren't viable; they end up being a sizeable fraction of the size of the kernel itself. So the approach we took for Checkpoint/Restore In Userspace, CRIU, which is a project sponsored by Parallels, is that instead of using in-kernel hooks, we use debug information exported by the kernel to reconstruct the in-kernel state. A lot of the kernel patches for CRIU have been adding what are effectively debug interfaces to do that. We've now reached the stage where we can certainly migrate an OpenVZ container directly from one node to another using CRIU, and we can even migrate its network sockets: we can cap off a TCP socket on one node, transfer it over, and just reconnect it on the other. CRIU is not quite at the point where it's fully fledged enough to migrate everything; there's still trouble migrating Docker containers and LXC containers, but that's being actively worked on. And with that, I think I should say thank you; I've trespassed on your time enough. Thank you very much.