Okay, hello everybody, thanks for coming. This is just about the last session on the last day of at least the conference part of the OpenStack Summit. For those of you who are staying for the design summit, you still have another day to go. So I'm here to talk about building the next generation of containerized applications. My name's James Bottomley, and I was formerly CTO of virtualization at Parallels. For those of you who don't know, and our marketing department will be very sad if you don't know, Parallels itself was recently rebranded as Odin. They did this for various esoteric marketing and business reasons, but the main benefit for me is that I no longer have to come to conferences like this and answer questions about Parallels Desktop for Mac, because now I work for Odin.

So I know that in the past four or five days you have heard a lot about containers. Here is why you should probably listen to me: Parallels, or actually now Odin, is pretty much the oldest container company in the world. Depending on how you count, it's certainly the oldest company actually doing Linux containers. The company began life as SWsoft in 1999, dedicated to producing container technology for Linux. The first release was in 2001, at almost the same time that Solaris released Solaris Zones. The open source release that we did, called OpenVZ, which you may or may not have heard of, was in 2005. Based on this, Linux process containers finally started to make their way into the kernel in 2006.

Then in 2007, work on Linux containers in the upstream kernel came to something of a halt, largely because Google bought the entire cgroups team lock, stock, and barrel and shifted them off to Mountain View to go and work on containerizing the Googleplex instead, which is why containers in Linux have had such a bumpy ride and such a bad perception in terms of security and isolation. However, Serge Hallyn and his crew worked diligently on making what we then called process containers, now called cgroups, actually usable by other people with the LXC, Linux Containers, project. The first version of that, 0.1.0, was released in 2008.

Now, things in the Linux community still sort of bumped along. LXC wasn't quite good enough to do a lot of the jobs people wanted to do with containers, giving containers a bad rap. We had our own version, open source but still out of tree in the Linux kernel, and underneath the covers in the Googleplex, Google had been heavily modifying cgroups and not pushing those modifications back upstream. So in 2011, having seen the disaster that resulted from Xen and KVM being two completely separate technologies, two completely separate things in the Linux kernel, we got all of the container powers together, plus a few of the distribution people, and we tried to agree on a single kernel-level API for containers. What came out of that, effectively at the kernel summit in 2011, was an agreement that we would all start pushing our technologies upstream, and we also agreed on which pieces of the technology we would push upstream and which everybody would have to abandon. It was actually a painful process for all of us, because we all lost something that we held dear, but eventually the API that made its way into the kernel is the container API for all of us.
So the next release of the Odin product, Virtuozzo 7, will actually be based on the upstream containers API rather than the previous OpenVZ API. The same is true for Google; they're busy pulling this all back through the Googleplex. And of course one of the big beneficiaries was LXC, because as part of this process we pushed all of the container hardening we'd done in OpenVZ up into the kernel. We'd done a lot of that hardening because the primary business goal for Odin is to enable service providers to run what are called virtual private servers, effectively virtual machines, IaaS servers in the cloud, on containers. To do that you actually have to give out root on your container, and that means you have to cope with any hostile person who has $10 a month to plunk down for a VPS. We've been doing that for 10 years. So when it comes to the question of container security, we on the Odin side at least believe containers have been fully secure for the last 10 years, because a significant proportion of the entire hosting business in the world relies on this for its day-to-day business operations.

Anyway, about me, because of course since I'm talking here it's all about me. I'm also Parallels', or Odin's, container evangelist. I'm an open source advocate; I've done a lot of work with the Linux Foundation and a smaller amount of work with OpenStack. I also work with companies converting them to open source, so the reason I went to Parallels is that their challenge to me was: you've given lots of talks about converting companies to open source, now come and prove you can do it. And I'm also a kernel developer, which makes me a slightly rarer breed around here. I have a long history of developing and contributing to the Linux kernel, and today I'm a maintainer of the Linux SCSI subsystem and the PA-RISC architecture. Not that that's much relevant to OpenStack, of course.

So if we look at container basics, most of you know this. Hypervisors are based on emulating the actual physical hardware. Containers are about virtualizing the operating system itself. The main difference is that with a hypervisor you require two kernels, one in the host and one in the guest, to run on the virtual hardware. With containers you share the same kernel, so it's all one kernel throughout: the containers use the same kernel as the host.

And we can actually look at that. This is what a hypervisor guest would look like: it's got its own kernel, its own operating system, libraries, and applications. A container, if you looked at it, would actually look like this. We don't need our own kernel; we share that with the underlying system, and then for an operating system container you can put everything else inside the container and run it. This means that you can use a container based on a Linux kernel to bring up, securely, say Ubuntu and RHEL and Fedora and everything else. We can do this because the Linux kernel is actually highly robust in its ABI layer; you don't need a specific kernel for any of these distributions. One of the great advantages of containerization, as I think you've probably been told, is that because we all share the same kernel, the kernel, which is an engine designed to do multitasking and resource sharing, actually uses all of those resource-sharing pieces to make sure that containers themselves share properly.
So, for instance, hypervisors like Xen have to pull a huge number of tricks just to share memory between guests; they have this whole KSM subsystem to try to do it. With us it's simple: a file opened in two containers goes through the same kernel, so the page cache is shared between multiple guests. With the container model we actually get far more sharing of resources naturally than you do with the hypervisor model. And this naturally makes containers far denser than hypervisors, just because of all the sharing. They're also dense because there's a lot less cruft in the image itself. And they're also dense because you can sometimes share from the outside.

Sharing from the outside is actually the application container case. This is the case most people talk about Docker doing. Docker itself is sort of an application container, a process container, because it does share stuff from the outside; it takes things from the host operating system and uses them inside, things like DNS and so on. But in many ways Docker is actually a full operating system container, because inside the Docker image all of this stuff is present. The Docker system image is actually a full system image; it goes all the way from the base of the operating system on up. But most of these things can actually be shared from outside the container, so in many ways it does look like an application container.

The real purpose of this talk, though, is to examine the question of whether application containers really represent the last, best thing we can do with container technology. The reason for asking is that if you look at what we did before, we were putting operating systems inside containers. Fast forward to the new paradigm and we're just putting applications inside containers instead. So a container is a box; it's very easy to understand. But here's the thing: what we put into the box isn't actually aware of what the box is. It behaves as if it just sees a normal operating system. So the question I'd like to ask and answer in this talk is: if we went a step further and made applications more aware of what was going on with containers, could we develop a next generation of applications with far more capabilities than the current one? In other words, is everything we want to do with containers just putting things in them?

To answer that, you can think about reimagining the way containers work. Instead of your application sitting inside a container, what if your application actually talked to the container technology in the kernel? Effectively you'd have an application that was itself becoming container aware. And you don't have to go too far to imagine what this type of application would look like, because they exist today. Docker is in fact one of these applications. A lot of the confusion today is over Docker and containers: many people believe that Docker is containers, which is completely untrue. Docker is a system for making use of containers. It's an application that's aware of the Linux container subsystem and uses it to perform specific tasks. It's actually an application packaging and transport system that uses container technology. The beauty of Docker, and one of the reasons it's tied directly to containers, is that it just won't work without them.
The only way Docker functions is by using some of the tricks that the Linux container subsystem does. You wouldn't get the magic of Docker without containers. This is why Docker today only really exists on Linux. In theory it could exist on Solaris, but it cannot run on Windows, because Windows natively has no container system and Docker does not bring containers along with it. So this is why Docker is currently a Linux phenomenon.

And I think a lot of you in the room, when I put it in those terms, would believe that Docker was the world's first containerized application. But this is actually completely untrue. About five years before Docker appeared, there was this thing called systemd. If you have been watching Linux at all, you probably know what systemd is, and many people have probably been cursing it. I don't really want to get into the controversy over systemd. It's the latest, greatest replacement for the init system according to its proponents, and the latest, greatest attempt to take over the world and break everything according to its detractors. But the point about systemd is that it grew up trying to use parts of the container subsystem to perform init tasks. That makes systemd actually the world's very first container-aware application. It was doing this as early as 2008, 2009, probably 2010.

But one of the questions you could ask yourself, if you're suddenly inspired by this and want to try using containers on your own, is: how easy is it actually to use the Linux container subsystem? This is what it looks like. Inside the Linux kernel we have a set of cgroups; there are actually 12 of them, I think, at current count in the 4.0 kernel. And we have a set of namespaces; there are six of these in the 4.0 kernel. Each cgroup and namespace has a slightly separate control plane and a slightly separate way of working. The cgroup control planes mostly sit inside sysfs; namespaces are mostly controlled by flags added to the clone system call. If I've lost you at this point, don't worry: this is the big problem with using containers on behalf of applications. This stuff is really, really hard and really, really difficult to use. The native container interfaces in Linux are pretty much toxic. Nobody who is not really a container and kernel expert would be able to use them. In fact, you can look and say that systemd is the only container-aware application that really managed to use them. Docker, when it was first brought up, did not manage to use them natively; what it did was pull a trick where it used LXC to orchestrate containers in Linux. However, Docker has been trying for a long time to get itself onto the native container interfaces in Linux, and one of the things we can talk about is the way they did this, which actually renders the container interface more usable for everybody else.

So the usual way we hide toxicity in a kernel interface is to invent an API and a library for it that makes it much more palatable to the end user. If you program in C you've heard of the open call, but that open call is quite dissimilar from the open system call the library actually makes for you. The library takes a lot of the awfulness that goes on in the system calls and just hides it from you. So we can use this same mechanism to hide all the complexity of containers from you, we hope.
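To give you a concrete feel for the level this all sits at, here is a minimal sketch in C of driving the raw interface directly: a child in fresh UTS, PID, and mount namespaces via clone. It's illustrative only, assumes you're root on a reasonably new kernel, and real code would need far more care:

    /* A minimal sketch of the raw namespace interface: clone(2) with
     * namespace flag bits. Illustrative only; assumes root on a reasonably
     * new kernel, and skips the error handling real code would need. */
    #define _GNU_SOURCE
    #include <sched.h>
    #include <signal.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    static char child_stack[1024 * 1024];  /* clone makes us supply a stack */

    static int child(void *arg)
    {
        (void)arg;
        /* Private to the new UTS namespace; the host never sees this name. */
        sethostname("inside-the-box", strlen("inside-the-box"));
        /* In the new PID namespace this process is init, so this prints 1. */
        printf("child pid: %d\n", (int)getpid());
        return 0;
    }

    int main(void)
    {
        /* Namespaces are flag bits OR'd into clone(2). Note that nothing
         * here tells you which of them this kernel supports or enables. */
        pid_t pid = clone(child, child_stack + sizeof(child_stack),
                          CLONE_NEWUTS | CLONE_NEWPID | CLONE_NEWNS | SIGCHLD,
                          NULL);
        if (pid < 0) {
            perror("clone");
            return 1;
        }
        waitpid(pid, NULL, 0);
        return 0;
    }

And that's the easy half. The cgroup side means creating directories and echoing magic numbers into a cascade of control files, which is exactly the sort of thing a library ought to be hiding.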
And therefore something was born that was called libcontainer. It turned out that there were two separate companies working on this. We were working on it because we have partners in the platform-as-a-service space who wanted to use our Virtuozzo containers to do something like this. They were called Jelastic, and they wanted exposure of the application-aware container facilities. And Docker was doing this because apparently they were desperate to get LXC out of the picture, so they could claim they were no longer based on LXC. In either case, you need a library for manipulating containers that's palatable to application writers.

So here's what Docker did. Instead of having to go from Docker through LXC into the kernel, they go from Docker through libcontainer into the upstream kernel. In theory, that library would be directly consumable as a container orchestration library. The slight problem with it is that it's written in the Go language. I don't want to get into all of the controversy about why everything should be written in Go; all I'll point out is that there is currently no way of binding from Go to any other language. That means that if you wanted to use this library from C, you cannot, without writing a whole load of interesting wrappers. We looked at writing the wrappers, and they are very interesting indeed. So this library needs to have many more bindings added to it.

Now fortunately, we at Odin were working on this thing called libct. libct does exactly the same as libcontainer does, except it was written in C, so it had bindings for C++, and we did bindings for Python as well. The added advantage of libct over libcontainer is that we didn't just think about orchestrating the upstream kernel; we thought about how we would have to orchestrate the previous kernels we'd done, which had very different APIs from the upstream Linux kernel. So this was also imagined by us as a shim layer for bridging the gap between our kernel as it currently is in Virtuozzo 6 and the new upstream kernel that would be in Virtuozzo 7. The advantage of doing this is that the shim layer abstracts the container properties in a way that's not dependent on Linux cgroups and namespaces, because we actually didn't have cgroups and namespaces in the Virtuozzo 6 kernel. That means it becomes possible to plug other kernels, like Solaris, or even Windows if they get their container act together, in under this as well. So this has the potential to be a container abstraction library for all operating systems. It also means that people who are interested in developing natively container-aware applications can develop and prototype them on Linux, and if Windows came along you could eventually use this on Windows, or if Solaris came along you could eventually use it on Solaris.

So what's the current status of the project? Well, there are two GitHub repositories you can go and look at: libct is ours, libcontainer is Docker's, okay? We announced a collaboration between Docker and Parallels in June of 2014; I believe the Docker guys went on stage and announced it at DockerCon. Since that day we have managed to do a complete API convergence, so the API of libct and libcontainer is now identical, modulo all of the various calling problems with Go. The next step is actually replacing the direct calls from Go into the kernel system calls and putting libct, the C library, underneath.
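To give you a flavor of the level a library like this aims at, here is a sketch of the shape of the thing in C. I should stress that the names and signatures here are made up for illustration; this is not the real libct API. The stub bodies just fork and exec so the sketch actually runs, where the real library would be setting up namespaces and cgroups underneath:

    /* A hypothetical sketch of the *shape* of a C-level container library
     * like libct. These names and signatures are invented for illustration
     * and are NOT the real libct API. The stub bodies just fork/exec so the
     * sketch runs; a real library would drive namespaces and cgroups. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    typedef struct {
        pid_t pid;
        unsigned long memlimit;
    } ct_handle;

    static ct_handle *ct_container_create(const char *name)
    {
        (void)name;              /* real library: allocate kernel state */
        return calloc(1, sizeof(ct_handle));
    }

    static void ct_container_set_memlimit(ct_handle *c, unsigned long bytes)
    {
        c->memlimit = bytes;     /* real library: write cgroup files */
    }

    static int ct_container_spawn(ct_handle *c, char *const argv[])
    {
        /* Real library: clone(2) with a namespace mask, not fork(2). */
        c->pid = fork();
        if (c->pid == 0) {
            execvp(argv[0], argv);
            _exit(127);
        }
        return c->pid < 0 ? -1 : 0;
    }

    static int ct_container_wait(ct_handle *c)
    {
        int status = 0;
        waitpid(c->pid, &status, 0);
        return status;
    }

    int main(void)
    {
        ct_handle *c = ct_container_create("demo");
        ct_container_set_memlimit(c, 1UL << 30);   /* 1 GiB */

        char *const argv[] = { "echo", "hello from the box", NULL };
        if (ct_container_spawn(c, argv) < 0) {
            perror("spawn");
            return 1;
        }
        return ct_container_wait(c);
    }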
So, effectively, Docker's libcontainer becomes Go bindings for the C library. There's already an outstanding pull request for doing this; we're basically just getting it through some final testing before it's done. So all of this is nearly complete. The question you have to ask, though, is: is this API usable by people like you, who might actually want to develop natively containerized applications? And the answer is, unfortunately, not really, as we found when we tried it out. The API is still a very low-level API, aimed at people who understand far more about what's going on inside containers than people who just want to write container-aware applications ever will. And so this led us to think about how containers are used today, and therefore what much higher-level features we might expose on top of libcontainer for the people who actually want to use this.

Obviously the main use case that everybody's aware of today is Docker; I don't really need to explain that one. The other big use case for containers is tenancy: you take an application, you put it inside a container, you replicate the container, and that application becomes n separate copies of itself, each with a separate IP address and a separate filesystem store. That gives you a natively multi-tenant application. This is one of the big uses people put our technology to in the cloud, or in the service provider space, today. The other use that I won't let you forget is systemd; that's all about resource control within Linux itself. And one of the other interesting use cases coming along is networking: effectively, network function virtualization done inside containers. So one of the things we can consider is how we would break this up and see what each one of these things does.

If we start with the Docker use case: Docker's big asset, the thing that everybody really loves about it, is the fact that it has this wonderful cascading image format. You can take a Docker image file, and it's tiny. You can download all of the pieces, because they're identified by SHA-256 sums. You can modify an image, and that gives you a new SHA-256. You can save the binary diffs into your image manifest and upload the whole thing to the Docker Hub. That's what actually gives Docker the process agility the enterprise is really seeking. And what we decided when we looked at Docker is that, using container technology, the actual Docker imaging system is separable from Docker itself.

And so one of the projects that we have in... okay, before my engineering manager shoots me, what I should say is that everything I'm talking to you about today, unlike most of the other talks at OpenStack, is not about current product. It's about future advanced development we're doing inside Odin. I am not promising that any of this, other than by being released as open source, will see the light of day as a product. So I'm looking far into the future. What I'm showing you now, you can try from our Git repository; you can download it and see how well it works for you, but it's really advanced stuff, not production ready today. What we're trying to do is separate this image format out into a linkable library that could be added to almost any application. The project we have to do this today is called Mosaic. It's being developed by a guy in Moscow called Pavel Emelyanov.
And it abstracts the cascading namespaces that Docker uses to build up its image cascade. One of the disadvantages of Docker is the SHA-256 sum. It seems so very easy and so very simple, and it looked brilliant. The problem with the SHA-256 sum is that one of the things people who use this want to manage is not only the application life cycle but also the operating system life cycle. What happens if you build a Docker image on top of Ubuntu, and that Ubuntu operating system had Heartbleed in it, and you want to patch that rather than rebuild the entire image? Rebuilding the entire image is easy; that's the way people have done it. But what you actually want to do is go through your Docker image repository, identify all of the images that have Heartbleed, and just quietly roll a patched version of Ubuntu out underneath them. This means you don't really want to use the SHA-256; you want an upgradable, read-only bottom image for all of this. And so the Mosaic project was set up to correct some of these defects in Docker image handling.

What it does is hide all the mechanics. This can only really be done with container technology, but from the consumer's point of view it hides everything we're doing. You don't need to know anything about mount namespaces to use it. You just set up your cascade, you're given this filesystem of your own, and you go away and play with it. And this filesystem behaves in a manner identical to the way a Docker image behaves. You can up-rev it, down-rev it; you can do version management and fallback. So it gives any application instant versioning and rollback, which is useful for a lot of really big applications. And if you think about packaging systems, this has been their holy grail as well; one of the things that Project Atomic finally achieved was this version management and fallback. So doing this with just a linkable library in an application is no mean feat.

And if you think about what you could do within OpenStack itself, you could actually make Glance, the OpenStack image repository, instantly version aware just by using this. For those of you who've been in the design summit, you might know that there's been some controversy over what we should do with Glance next, because Glance thinks completely in terms of hypervisor images, and a hypervisor image is a fully formed, fully fixed image. You can get that from the Docker cascading images just by collapsing all the cascades so they form a single image, but that loses all of the advantage the Docker manifest gave you with the cascades: the versioning and everything else. So the other thing we've been discussing is whether we could actually make Glance version aware. And apparently, for the hypervisor people, this is slightly worse than a root canal. So one of the things we could do is just provide the functionality to Glance via this Mosaic project: you get instant versioning if you choose to take advantage of it, or you just turn Mosaic off and use bog-standard hypervisor images as before. The net advantage here is that this wouldn't just bring versioned images for containers and Docker; this would bring versioned images for everything, including the dinosaur in the room, the hypervisor. So in many ways it looks like an interesting project. If you want to go and check it out, remember this is very new code, but it's on GitHub at the moment and you can go and look at it.
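Mosaic's own code is there on GitHub for you to read; but to make the idea of an upgradable, read-only bottom layer concrete, here is a small C sketch of the sort of kernel facility a cascade like this can build on: an overlay mount. It assumes a 3.18 or later kernel with overlayfs and some pre-created demo directories, and it illustrates the general idea rather than Mosaic's actual implementation:

    /* The kernel facility behind a cascading image: an overlay mount with a
     * read-only lower layer and a writable upper layer. Swapping lowerdir
     * for a patched tree upgrades the base without touching the deltas.
     * Assumes overlayfs (kernel 3.18+), root, and pre-created demo paths;
     * an illustration of the idea, not Mosaic's actual code. */
    #include <stdio.h>
    #include <sys/mount.h>

    int main(void)
    {
        /* /img/base  - read-only bottom image (say, your Ubuntu layer)
         * /img/upper - this instance's private writable delta
         * /img/work  - scratch directory overlayfs requires
         * /img/root  - the merged view the application actually sees  */
        const char *opts =
            "lowerdir=/img/base,upperdir=/img/upper,workdir=/img/work";

        if (mount("overlay", "/img/root", "overlay", 0, opts) < 0) {
            perror("mount overlay");
            return 1;
        }
        printf("merged image mounted at /img/root\n");

        /* To roll the base forward (for example, a Heartbleed-patched
         * Ubuntu), unmount, point lowerdir at the new read-only tree,
         * and remount: the writable delta above it is untouched. */
        return 0;
    }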
Remember, as I said, this is an advanced research open source project: not part of any current product, nor of any envisaged future product. But if you're intrepid and want to play with it, it's there for you to do so.

So the other thing you can think about is tenancy. Tenancy is of interest for every cloud platform; everybody talks about tenancy for large cloud applications. It's also a base use case for containers. One of the ways people use containers today is to replicate them to make applications multi-tenant. Think about a standard hosting provider serving WordPress: if they do it in containers, usually each instance of WordPress runs in a separate container, so it's separately controllable. Very easy. This is basically "applications placed inside the box of containers equals multi-tenancy". It's a concept that's very understandable. The problem comes when you want to administer this thing, because the application inside the container requires an orchestration system for its administration. So if you take this application, you want multi-tenancy, and you replicate it many times in the cloud, the administration plane belongs to whoever is orchestrating your systems within the cloud. This leads to a lot of conversations about how cloud systems should be exposing orchestration. What you really want is an administration panel for the multi-tenant application itself. If instead you linked the application to the thing which gives you tenancy, so that the tenants could be replicated underneath you while you still have the application on top, you could now create that admin panel for yourself without bothering the orchestration system about how you did it. In many ways this gives true multi-tenancy to an application, so it's better than placing the application inside a container. By linking the application with container technologies, you can now design not only the tenant user experience but also how the admin panel should look.

Obviously there's also a systemd use case. systemd was all about resource management, and a lot of what applications do is continually spin stuff up and spin it down; a lot of them want to do resource management. I've got a Linux desktop, and if I leave Firefox open for too long it will eat the entirety of my memory and the rest of my applications will no longer function. Placing Firefox inside a resource-managed container is a very good way of solving that problem, and there are many, many instances in industry and the enterprise where we do this. The problem with using containers to do this today is that the knobs for cgroups, which are the resource managers in Linux, are excruciatingly complex. It's almost impossible for anybody to figure out exactly what they're supposed to be doing. If all you want to do is bound the memory in your container with a three gigabyte soft limit and a four gigabyte hard limit, there's no single file in sysfs that will do that for you; you have to set up a whole cascading set of parameters before you get there, as the sketch below shows. So re-exposing this as a simple, understandable quantity is something that would be of great benefit to many, many people, including, actually, Docker. If you look at the way you write your Dockerfiles today, there's no way of expressing how much memory, how much disk space, and how much of everything else a container should be using.
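Here, as a sketch, is what that three gigabyte soft, four gigabyte hard example looks like done by hand today against the cgroup-v1 memory controller. It assumes the controller is mounted at /sys/fs/cgroup/memory, which is distribution dependent, and notice that even this simple version takes a directory plus three separate magic files:

    /* The "3 GiB soft / 4 GiB hard" memory bound, done by hand against the
     * cgroup-v1 memory controller. Paths assume it is mounted at
     * /sys/fs/cgroup/memory, which is distribution dependent. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/stat.h>
    #include <sys/types.h>
    #include <unistd.h>

    static int write_knob(const char *path, const char *val)
    {
        int fd = open(path, O_WRONLY);
        if (fd < 0) {
            perror(path);
            return -1;
        }
        ssize_t n = write(fd, val, strlen(val));
        close(fd);
        return n < 0 ? -1 : 0;
    }

    int main(void)
    {
        /* The group itself is a directory you create by hand. */
        mkdir("/sys/fs/cgroup/memory/myapp", 0755);

        /* Soft limit: a reclaim target under memory pressure. */
        write_knob("/sys/fs/cgroup/memory/myapp/memory.soft_limit_in_bytes",
                   "3221225472");                        /* 3 GiB */

        /* Hard limit: the wall at which the OOM killer moves in. */
        write_knob("/sys/fs/cgroup/memory/myapp/memory.limit_in_bytes",
                   "4294967296");                        /* 4 GiB */

        /* And the process still has to be moved into the group by hand. */
        char pid[32];
        snprintf(pid, sizeof(pid), "%d", (int)getpid());
        return write_knob("/sys/fs/cgroup/memory/myapp/tasks", pid) ? 1 : 0;
    }

A friendlier library would boil all of that down to a single call along the lines of set_mem_limits(group, soft, hard); that name is hypothetical, but it's the level of simplicity being argued for here.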
And that's going to be essential if the vision of Docker is dev-test within the enterprise, followed by transporting the result up to the cloud to be scaled, because you've got to give the cloud provider an idea of how many resources you'll use, and a promise that, if you go over that, the provider can scale you back. These are all things people are going to have to think about in the future, when the dev-test cycle becomes dev-test-deploy and you actually interact with customers over the deployment. And obviously there are many, many limits you could apply for this. So this is also an ongoing research project within Odin. This one is so new that it was only presented internally a couple of weeks ago; there's not even a git repository for it yet, but very shortly there will be. If you want to think about it in simple terms, it's basically a much more fine-grained, more enforceable version of rlimits.

And the network use case, which is also very interesting, is basically an easy way to bring up and experiment with network function virtualization, or as this would actually be, network function containerization. You can construct network services based on the entire full power of the operating system and run only the actual network interfaces inside a network namespace. This means you don't have to bother with all of these weird hypervisor images that are currently used to do NFV. Instead you can do network function containerization, where you still have the full power of the operating system available to you, and you can share configuration files and everything else across the multiple network functions you're bringing up. It provides you with something that's much more scalable and much easier to use, and obviously, if you couple it to the config files and to Docker and other things, much easier to package and transport.

So, I promised them I'd leave five minutes for questions. That gives me four minutes to wrap up and conclude, and then I'll ask you what you think of all of this. To sum up: thanks to our upstream work, containers in mainstream Linux are now viable. Anybody who complains about the security problems or the isolation problems of containers has either got an agenda, because they work for a hypervisor company, or has unfortunately managed to use a version of Linux that was too old to have all of these things in it. Or, and there's one other caveat here: even though most of the security features were available by about the 3.14 kernel, there are lots of distributions based on the 3.14 kernel that didn't actually enable them in the kernel. So it's perfectly possible to think you have a modern kernel and try to bring up a container with all its security features, and it will appear to confirm that you've got them, because in the Linux kernel we just ignore flags we don't understand; if a feature isn't turned on, its flag is just another one we don't understand and silently ignore, and in fact you don't have the feature at all. So it's perfectly possible to have a bad experience purely because of the config options of your kernel. But the thing to bear in mind is that all the mechanisms for bringing good security and good isolation to Linux containers, both operating system containers and the application containers Docker uses, are all upstream in the 4.0 kernel today. And I couldn't say it for 3.14, but I can say it for most 4.0 distributions: we finally got agreement with all the distributions that they will actually turn these damn features on.
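If you'd rather check your own kernel than take it on faith, here is a rough C probe for namespace support. It leans on the fact that unshare(2), unlike clone(2) historically, rejects a namespace flag with EINVAL when the feature is compiled out, and fails with EPERM when the feature exists but you lack privilege. Treat it as a sketch under those assumptions, not a hardened feature test:

    /* Probe which namespace types this kernel supports and has enabled.
     * unshare(2) fails with EINVAL if the namespace is compiled out, and
     * with EPERM if it exists but we lack the privilege to use it (which
     * still proves the kernel knows about it). A sketch, not a hardened
     * feature test. */
    #define _GNU_SOURCE
    #include <errno.h>
    #include <sched.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    static void probe(const char *name, int flag)
    {
        pid_t pid = fork();        /* probe in a child, since unshare  */
        if (pid == 0) {            /* permanently alters the process   */
            if (unshare(flag) == 0)
                printf("%-6s supported and enabled\n", name);
            else if (errno == EINVAL)
                printf("%-6s NOT compiled into this kernel\n", name);
            else
                printf("%-6s present (%s)\n", name, strerror(errno));
            _exit(0);
        }
        waitpid(pid, NULL, 0);
    }

    int main(void)
    {
        probe("uts",   CLONE_NEWUTS);
        probe("ipc",   CLONE_NEWIPC);
        probe("mount", CLONE_NEWNS);
        probe("pid",   CLONE_NEWPID);
        probe("net",   CLONE_NEWNET);
        probe("user",  CLONE_NEWUSER);
        return 0;
    }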
So now, in almost any distribution based on a really modern kernel, not only can you guarantee that the feature was present in the kernel, it's actually turned on for you to use. And our mission now as Odin is to try to drive the container use cases. Since we're a container company, we see our business model going forward being tied to innovation in the container space that puts clear blue water between containers and hypervisors. Any use case that containers can handle and hypervisors can't is a use case I'm really interested in, because it's the kind that drives containers forward and embeds them in the enterprise in such a way that they can't be taken out again. So our job is encouraging people to consume containers; since we sell the base layer for containers, this is basically our business interest. And obviously we plan to leverage the considerable expertise we have to do this. We still have a team of people back in Moscow who go back to the early days of SWsoft, who are very used to using container technology and seeing what it can do; some of them go back all of 15 years. I've got to admit I'm a comparative newcomer to the container space. I only started to get interested in it in 2007, so I don't go all the way back; I'm missing eight years of that history. But what I hope I've shown you today is that we're already building interesting projects in open source to demonstrate the use of containers, which you can watch and even think about using today. And what I hope to promise you is that there is a lot more of this to come.

And so with that I'll just wrap up. I'll give a shout out: if you like this presentation, it was all done on Linux, fully open source, using JavaScript and CSS3. If you're actual Linux evangelists you'll love it, because it can't run on Internet Explorer; the CSS3 and HTML5 implementations there aren't up to it. You have to run it in Firefox on Windows. Of course, putting this presentation together actually makes me a web developer rather than a kernel developer. I can say that to you because you're OpenStack people and you don't mind; if I said it at any of the kernel summits I'd probably get lynched. And with that I'll say thank you, and I believe we have exactly five minutes left for questions if anybody has them.

So, questions? Yep. I'll just repeat it, I won't make you run up to the microphone. So the question was: do I see a time when libcontainer will be so easy to use that everybody will use it? And the answer to that is both yes and no. Yes, because we're actually evolving the API; but no, because some of the fundamental complexity of how containers work can't be abstracted away. Complexity comes from choice, and the container layer in Linux has a huge amount of choice. So the way I see it evolving is that we will have this easy-to-use libcontainer interface for everybody, and we are committed to evolving that, but on top of it we'll abstract out specific use cases: things like Mosaic for the cascading images, or the tenancy library if you want to do multi-tenancy, libraries that enable specific use cases much more simply than having to wrap your head around what you're supposed to be doing with the user namespaces. Does that answer your question? Okay, yeah, next question. So the question is: is there work ongoing in the Linux kernel to enhance cgroups and namespaces?
And the answer to that is absolutely yes. There are proposals on the mailing list for, I think, two new ones: the audit cgroup, sorry, the audit namespace, and the device cgroup; I'm not sure what's going to happen to them. And we're right in the middle of rewriting a lot of these cgroups in Linux to be hierarchical. Hierarchical cgroups are actually required to run containers inside containers. In theory, at Parallels, or rather Odin, we thought this was never a use case we'd care about. But of course systemd is a containerized application: any time we bring up systemd in a container, we automatically have nested containers. And because systemd is now the init system of every distribution we'd actually be putting into an operating system container, hierarchical containers suddenly became very important to us. Obviously, running Docker inside an operating system container, to get all the container benefits without having to burden yourself with a hypervisor, is another use case for these nested, hierarchical cgroups. So yes, absolutely, there's a lot of work ongoing in the Linux kernel today on containers, with namespaces and cgroups. There are two lists for this. One is cgroups at vger.kernel.org; it's a reasonably high-traffic list, I'd say about 15 emails per day. And then there's containers at lists.linuxfoundation.org, which deals more with namespaces and container applications; a slightly lower-traffic list, probably about five emails a day. If you really want to see the guts of what's going on in container development in Linux, those are the places to look.

Okay, I'd better give that side of the room a chance this time; someone over there had their hand up. Okay, so the question is: what's my opinion of the CoreOS initiative for a common container standard? If you listened to the entire talk, I was telling you that the advantage of containers is the granularity of the containerization, and applying the pieces in a granular fashion allows us to build interesting use cases for the end user. The Mosaic use case is basically built on the mount namespace; the NFV use case on the net namespace. That means that in this world of container-aware applications, there is no such thing as a single description of how a container should be set up. Now, CoreOS is actually in a very different space from the one I'm thinking about. They're trying to compete head-to-head with Docker on the way you package and distribute applications, and that involves this cascading full system image. For them, for those specific use cases, I think a common description of containers makes a lot of sense, so trying to unify it works for them; it just doesn't apply to the rest of the world, who might want to use container technology for things that are way, way outside that space. Does that answer your question? Okay, we'll go back to this side again then.

So the comment was that we are, as I said, in the middle of re-architecting all the cgroups to be hierarchical; what are we doing about this? We are helping, as a company, because we have a lot of kernel expertise. But there are lots of tricks that have already been applied to most of the cgroups, and even the namespaces, to make that hierarchy actually work today inside the kernel; not for all of them, but for most of them, and that's all of the ones we actually need to get most of the nested use cases up and running. So, by and large, it just works.
So if you're a user of this feature, you can think of it like this: we're doing a lot of work under the covers to get rid of all these horrible hacks and actually make it pristine, neat, and beautiful, because that's the sort of janitorial work we do all the time in the Linux kernel. But the net impact on you, the user, won't actually be hugely significant. We are participating, yes, because sorting out the mess will help us in the long run; the more cruft we get rid of, the faster it'll be. I think the guys are opening the doors and about to bring up the music, so I'll just say thank you very much; I enjoyed talking to you.