Okay, welcome. So today, first of all, I'm Eric Windisch. I'm with Docker. I work as a security engineer, and I am the maintainer, or I guess the mini-PTL, of the Nova-Docker driver. It actually sits underneath the Compute program, so I don't technically qualify as a PTL for this, but I'm the main driver of it. We have some other engineers, whom I'll get to later. I'd like a little show of hands of how many people may have seen the previous version of this slide deck at the last summit. Okay, so a small number, maybe a third or less. Most of you, I'm sure, have probably heard of Docker; if you haven't, it's for building, shipping, and running your applications. It is a containerization solution that doesn't just containerize your processes but also provides image management, utilities, and an ecosystem around that. There's an API which you can access over a Unix socket or over TCP with TLS authentication, and that allows you to manage and control these containers in a way that things like LXC and OpenVZ in the past hadn't done. It bundles image management and so forth with provisioning tools that make containers more like VMs from a management perspective, right? It solves that major management issue. And it allows this build-ship-run paradigm to exist. And what we want to do, or what many people in OpenStack want to do, is to bridge the gap from their OpenStack installations to Docker so that they can use both. There's an ecosystem of products around Docker in OpenStack. These are the primary projects which actually orchestrate Docker in OpenStack. Now, there are other projects which use Docker in various ways, such as Kolla, to bring up your OpenStack deployment.
Different distributions involve or include Docker in some way, such as Red Hat RDO, Mirantis's OpenStack Fuel, Ubuntu, and Packstack, which is a set of Puppet modules or manifests that can bring up Nova-Docker, which I'll explain a little later. There's also Tempest and TCUP, which is part of the DefCore initiative; they have a suite of tools you can use to verify your OpenStack deployment against what is considered core, and that can run inside of a Docker container. So the four up here sit on Nova Compute. We have Kubernetes, Heat, Solum, OpenShift, Mesos, Cloud Foundry, and Magnum. And these are all tools that are able to orchestrate and use containers. Well, Magnum is still in very early stages, but the other ones, in theory, you could use today. The idea is that we can bring up an environment with OpenStack where these tools can then communicate with Docker and orchestrate it. So going back a little more to how Nova-Docker itself works. So here is, sorry, this is a hypervisor environment, right? This is us running Docker in a VM under a hypervisor. And we can demonstrate, right, this is now running Docker with Nova-Docker; Nova is running these containers. But of course, you can run Docker over here and containers under there as well. And this is one of the powerful things about Nova, right, is that you can do both. You can have VMs and you can have Docker in the same environment. You may want the bare-metal performance of Docker and its provisioning aspects without the complexity of running Ironic. You may want the additional security constraints you get by disallowing your users or your applications from overwriting firmware on your machines, without having to deploy dynamic or static roots of trust with a TPM. And you can also work with Ironic if you wish to. And again, the slides will be online.
And this is just kind of the vision: yes, you can run Docker in Docker. And once you do that, all of these things will be able to talk to that. So, to get this: it is currently in Stackforge. It was in the Havana release inside of Nova. We had taken it out; it was actually taken out by force, because testing wasn't ready. There were various political reasons why it occurred. But it happened; it's been taken out. And actually I'm really happy about that. It might sound like appeasement to say that I'm happy about it, but I'm really happy, because we were able to actually get code merged, which it turns out is really close to impossible inside of Nova. And we were able to make the sweeping changes and improvements in the driver that we wanted to make by having it out of tree. So you download the driver and you set it as your compute driver. And now, this is the big thing that changed in Juno: you can put images directly into Glance. And it looks fairly similar to how you might otherwise upload images to Glance. In this case, we do have to export the data from Docker; we do that with `docker save`. And then we pipe that to `glance image-create`. With `glance image-create`, we can import that and give it a name, say `cirros`, in Glance. You'll also note the container format is `docker`. That lets Glance and Nova know that it's going to be a Docker image; it only works with the Docker driver. And finally, you'll be able to use `nova boot`. It's actually become amazingly simple compared to how it was two releases ago. So beyond that, we do have networking support. It's fairly comprehensive at this point. We have Nova Network, of course, Nicira, OpenContrail, Open vSwitch. Most of the popular drivers do work. If there's a driver that you're looking for that doesn't work, and it seems popular enough that people might want to make it work, just file bugs and push it through.
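Putting the steps above together, the workflow looks roughly like this. This is a sketch, not a verbatim demo from the talk: the image name is a placeholder, the exact `glance` flags may differ by release, and the `compute_driver` value in the comment is an assumption about how the out-of-tree driver is configured.

```shell
# On the compute host, point Nova at the out-of-tree driver (nova.conf):
#   [DEFAULT]
#   compute_driver = novadocker.virt.docker.DockerDriver

# Export a local Docker image and pipe it into Glance.
# container-format=docker is what tells Glance and Nova this is a Docker image.
docker pull cirros
docker save cirros | glance image-create \
    --name cirros \
    --container-format docker \
    --disk-format raw \
    --is-public true

# Then boot it like any other Nova instance.
nova boot --image cirros --flavor m1.tiny my-first-container
```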
And maybe we can see if we can get it implemented. So I do want to have Ian Main come up. Ian is our first plus-two contributor. He works at Red Hat Canada. And I'll let you take it from here. The mic, there we go. Yeah, so I am Ian Main. We started working on Nova-Docker maybe about eight months ago with Eric and some of the other folks he mentioned earlier. And my goal really was to get the continuous integration testing going again, because we had problems with that; it was pulled out for that reason. So yes, we want to get as many tests passing as possible. The first step, of course, was to look at all the bugs that were just generally in the Nova-Docker driver. And from our efforts, we now have just over 1,700 Tempest tests passing. But we do still have many things turned off, like volume support, resizing, suspending, rescue, and migrations. A lot of these things don't really seem to apply to containers, except volumes, of course. So after dealing with all the bugs and other issues, we started looking at adding support for things that were missing in Docker itself that containers generally support. The first one that seemed pretty clear we could support was pause and unpause. So we basically drove this feature through Nova-Docker, into Docker, submitted pull requests upstream, and it was accepted into the Docker project. It was well received; it was a good feature. And it had nice side effects, like being able to use pause to safely commit containers without having to worry about filesystem corruption. The next thing we saw as a big area that was lacking was volume support, Cinder volumes. So for this one, I've now written two different pull requests, not received quite as well as pause and unpause. The big issue was that it was the first API that actually modifies running containers, and Docker was a little bit worried about that change, how it would work, and how it would affect the user experience.
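The pause-to-commit side effect Ian mentions can be sketched with plain Docker CLI commands. The container and image names here are placeholders; the point is that a paused container's processes are frozen, so the commit captures a consistent filesystem state.

```shell
# Freeze the container's processes (via the cgroup freezer),
# so nothing writes to the filesystem mid-commit.
docker pause my-container

# Commit the now-quiesced filesystem state as a new image.
docker commit my-container myrepo/my-container:snapshot

# Resume the container where it left off.
docker unpause my-container
```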
So we keep working on that one. I have another pull request in now, and I've been assured by core Docker folks that it will go in. It's just that they're trying to get the experience right and make sure the infrastructure is in place for it all to work properly. So we have Cinder volumes functioning now, actually, locally with all my patches in place; we can make use of them. Then we started looking at what the use cases of volumes actually are, because as Eric was saying earlier, mounting them normally in an unprivileged container is not very easy. So, direct access: at first, we were like, well, at least we support direct access to block devices. But that's not really a very common use case; it's not something that many people want to do. So we're looking now at the security issues, and figuring out that FUSE already supports user-space filesystem mounting, and through support of user namespaces, we can make this work in Docker with an unprivileged container. So I think that's probably the direction we'll end up going in. It definitely highlights again the differences between a VM and a container. With the security implications, you know, you're passing in a filesystem that's completely supplied by the user. I don't know of any exploits, but it's possible that an exploit could be crafted. So with this, then, it passes a lot more tests in Docker, which is great, and a lot more Tempest tests in Nova-Docker once we can enable volume support. One of the things that still stuck out when I did that was boot from volume. So I actually did a proof of concept of that. It's not great code; I need to re-look at how that will work. But I got that working as well. So I think that's it. Thank you. So, just to add, in terms of the testing: the testing that we have today is gating not only the Nova-Docker code, but we have a silent gate for Nova.
So every time a change is made to Nova, we actually get a report of whether or not that is a breaking change for us. And we're going to be applying shortly to have that activated so that Nova developers will know as well. And the boot-from-volume work Ian mentioned is interesting because we can do that today with unprivileged containers. It just introduces some complexity for us, in terms of answering new questions such as: do we now have to support rescue, or snapshots of volumes? That's a whole lot of code that we would have to support and maintain, and we want to make sure that people actually want that code before we start working on it. So with Kilo, we're talking about Cinder support, as you've just heard, and security groups. Security groups is a big one that's been lacking. I believe that code has now been merged; it may need some improvements, but it certainly exists now, and I imagine it should work. We have the docker-py change which Dims supplied. This removed somewhere between a quarter and a third of our code base, replacing a bunch of custom client code with a library, right? That's a great win, because everybody benefits from better libraries. And privileged containers, right? So there are people that want to run uncontained containers with Nova-Docker. It turns out this is a viable use case that people want. They don't care about the security implications; maybe they want fast-track support for Cinder, whatever the case is. We're looking to supply that in some way, and we're figuring out what the interface for that would be, because obviously we don't want to turn that on for everybody in all cases. That would probably be something admin-level, configurable at the setup of Nova, or even only executable by an admin user in specially crafted environments. And we want more plus-two contributors, right? So Ian is our first plus-two contributor besides myself.
And technically, Russell Bryant. So, you know, Russell actually created the repo and he kind of got grandfathered into having a plus two, but he's never actually exercised it, so it doesn't really count. And finally, this is our last point: use the code, right? It's ready for people to start using. We have production users of this code. People have been coming to me and telling me they are using this in production. It's great, right? Some of them have been kind of burned by some of the changes we made, because we've been making changes very quickly. But we have production users, and production users are using Neutron, believe it or not. Which is almost more surprising. But use the code, submit bugs, fix bugs. We'd really appreciate it, because the more people we have doing that, the more plus-two contributors we can get, and the bigger we can build this community. So, I'd like to take questions, and Ian will be available as well, for anybody who has questions about Cinder support or more about our testing. Right. So, the question was why we prefer to use the Docker registry instead of Glance. The Docker registry, first of all, is more native to Docker. We have image signing now, which we may be able to work into our Docker registry integration eventually. But it's certainly an issue currently with Glance, because we have to pull those images down, and we have to make sure they're signed and that there's a trust anchor. With Glance, we also have issues with image layers: Glance is not aware of those layers. With the Docker registry, we can pull down deltas, and we can't do that with Glance. With Glance, we're uploading a whole image, and we have to download that whole image. The whole process of using Glance instead of the Docker registry, since we made that change, has significantly impacted the performance of first-run containers. Right.
So, if you are running a container from an image for the very first time, it's now much slower than it used to be. But we have much better OpenStack integration by doing so. So, the question is whether Neutron support is available in addition to Docker networking. What we actually do is specify `--net=none` for the Docker containers. We do not have Docker set up network namespaces; we set up our own namespaces in Nova and join the Docker container into the namespaces that Nova creates. Right. So, the question is why OpenStack is doing a new project, Magnum, instead of using Kubernetes. First of all, I will note that Nova-Docker is agnostic to this, right? The idea is that it will work with any of these solutions. Now, Adrian could probably better handle the question about Magnum, but I was there when we were designing this. We had three days, and we discussed for almost one full day: should we use Kubernetes? In fact, we were going to use Kubernetes; the question was how do we use it? And by the end of that day, we decided this was not going to work. And the discussion moved forward: okay, well, then what do we do now? And an ideal architecture was drawn up. Part of that, too, is that there were things we heard voiced that the OpenStack community wanted: multi-tenant support, which Kubernetes wasn't offering, and support for LXC and OpenVZ, whereas Kubernetes is a Docker solution. And that's great, right? I don't mind. But the community did express interest in LXC support. In fact, there was a design session yesterday where maybe a third of the room indicated a strong preference for having LXC. More questions? Sure. The question is about host aggregates. So actually, in my presentation from the Juno Design Summit, I showed an example of using the scheduler for such a thing.
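The `--net=none` technique described above can be sketched outside of Nova with plain shell. This is an illustration, not the driver's actual code: the interface name and address are placeholders, and the real driver does the equivalent through Nova's VIF-plugging machinery. The mechanics are the same, though: start the container with Docker networking disabled, then move a Nova-managed interface into the container's network namespace.

```shell
# Start a container with Docker networking disabled.
CID=$(docker run -d --net=none cirros sleep 3600)

# Find the container's init PID and expose its netns to iproute2.
PID=$(docker inspect --format '{{.State.Pid}}' "$CID")
mkdir -p /var/run/netns
ln -sf "/proc/$PID/ns/net" "/var/run/netns/$CID"

# Move one end of a host-side veth pair into the container's namespace
# and configure it there (names and addresses are placeholders).
ip link set veth-guest netns "$CID"
ip netns exec "$CID" ip addr add 10.0.0.5/24 dev veth-guest
ip netns exec "$CID" ip link set veth-guest up
```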
So, if you have a multi-hypervisor environment, right, the scheduler can help you schedule to the right hosts. We also have examples of how you can schedule containers to be co-located on the same host. So you can schedule your container to be co-located with another container of the same tenant. So you can do, you know, a multi-tenant Nova with single-tenant container placement, right, via the scheduler. The scheduler is actually very powerful. You might have to write your own plugins, or you might just have to craft commands to Nova and give it scheduler hints. The question is, do we do anything for linking Docker containers? And the answer is no, right? Nova-Docker does not do that. That's why I say the sweet spot for us is being able to run Docker in Docker, because once you do that, then you can link those containers. So you can run a container that runs three other containers inside; it becomes a container pod, and containers in that pod can be linked. Anybody else? Okay. Yes, it works with Neutron. Yes. I was hoping we could have a little live demo of that, but there was a presentation done, I believe, yesterday by Eric Lopez at VMware, previously Nicira, and he did a demo of this. So when that presentation goes online, you'll be able to see a demo of Nova-Docker working with Neutron. Sorry. So the question is whether the only way to make Docker containers talk to each other is via a Docker-in-Docker scenario. Without Docker-in-Docker, containers can talk to each other in the same way that Nova instances can talk to each other, at a VM-level view. But to get the Docker links, right, the links feature of Docker, you're talking about running container pods, and running Docker in Docker. That's an interesting one. So the question is: this code is out of tree, so you cannot use the OpenStack trademark; are there problems with that? I don't see that as being a problem.
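The scheduler-hint approach mentioned above can be sketched like this. This assumes the stock Nova `SameHostFilter` is enabled in the scheduler's filter list; the image, flavor, and UUID are placeholders.

```shell
# Boot the first container normally.
nova boot --image cirros --flavor m1.tiny web-1

# Boot a second container, hinting the scheduler to place it on the
# same host as the first (requires SameHostFilter to be enabled).
nova boot --image cirros --flavor m1.tiny \
    --hint same_host=<uuid-of-web-1> web-2
```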
I believe it's okay for us to say this is a driver for OpenStack Nova, or that this is a driver for Nova, right? Because it is a driver. This is not part of OpenStack. We don't say this is a driver that is part of OpenStack; this is a driver that's available to work with OpenStack. And that's, I think, okay. We're pretty close to being able to get back into Nova. The big hang-up before was the testing, and that's been solved. Where we have a hang-up now is really that we're actually afraid of merging the code in and not being able to change it again. Because the code review backlog for Nova is so deep that if we get accepted into Nova, this code will be frozen and we will never change it again. So we're looking for more drastic change within the community, to help us be able to get code reviewed and merged. Because if we're not comfortable doing that, then we're not comfortable reintegrating. So the question is, does Magnum in the long term replace the Nova-Docker driver? And the answer would be no. This is a way of using Nova to bring up an environment that Magnum can orchestrate and run. From a Magnum perspective, this would be what TripleO is for Nova. TripleO can bring up a Nova environment; Nova can bring up Docker; Magnum can orchestrate Docker. Something of that nature, if you're talking about a wider ecosystem. All right, is that it? I think we're out of time. Alright, thank you everyone. Thanks guys.