There's the signal. Good afternoon, everyone. My name is Adrian Otto. For those of you who saw the morning keynotes, did you like what you saw? You all like Jackson? I think he's pretty cool. So I'm the PTL for the Magnum project here in OpenStack, and I'm a distinguished architect at Rackspace. You're all here to hear about Magnum. I've given this talk a few times before at prior summits and mid-cycles, and previously the Magnum vision has been containers as a first-class resource in OpenStack. But that's not our vision anymore, because we've already achieved it. So now what Magnum is all about is combining the best of infrastructure software with the best of container software.

And I want everyone to recognize that container software does not solve all problems. Most of the problems that you have when you try to run applications on containers are still infrastructure problems. How do I connect my networks? What do I do with my storage? Where does my addressing come from? How are these things related? How do I orchestrate these? How do I scale them? Container software helps me at the app layer. It helps me with my packaging. It helps me with my distribution. But it doesn't solve everything related to infrastructure. And so Magnum is trying to vertically integrate solutions that do solve an entire range of problems.

This is actually my favorite slide. And the reason this is my favorite slide is because every single time I give this talk, I have to change these numbers. And they all go up, except for the release date. So we have about, let's say, 2,200 patch sets across roughly 1,200 commits. And this is important because it shows that, first of all, we're reviewing code, and we're making that code better before it gets upstream. But we're not making tons of revisions to this code. We're doing a few revisions to make it better, and then that code is getting in. So there's some velocity here. 210,000 lines of code, which is pretty impressive. We started in November of 2013, and we had an initial release in, this should say, 2014, sorry for the bug. This slide obviously didn't go through Gerrit review. But what I'm most proud of is that this was a collaboration between 101 engineers who come from 28 different affiliations. And I think this is a testament to the excitement that we all feel about where this new technology might take us.

So here's an idea of who the top contributors to the project are today: IBM, NEC, Huawei, Red Hat, Intel, HP, Cisco, Rackspace, and what falls off this list is Yahoo. Again, there are, how many, 28? They don't all fit on the slide, but this gives you an idea of who's committed to this. And if you look at the distribution of contributions here, this is not all coming from one place. So if one of the sponsors of Magnum decides that they're no longer interested for some reason, as a cloud operator you can still have confidence that this project is going to continue.

Now, Magnum is not the first project to bring containers capability to OpenStack. We've had containers capability for many releases. The libvirt LXC driver for Nova has been in there for several releases. The nova-docker driver has been in and out of tree, currently out of tree, and there's a Heat resource for Docker that allows you to spin up Nova instances that have the Docker daemon on them. But there were a couple of key things missing here. There wasn't a real multi-tenancy story, and there wasn't a real orchestration story.
The scheduling of resources across a grouping of hosts, in order to have more than just a few containers, was a real problem that needed a better solution. So we thought, well, aren't containers just like virtual machines, only smaller and faster? Please kick someone in the nuts if they say that. Because they're not. They're different. The set of things that containers and Nova instances have in common is very limited. Basically you get create and delete, and not much more. Nova instances have other operations: you can resize them, you can restart them, you can attach volumes, you can attach networks. But these are not the things that containers do. Containers are about things related to processes that run on hosts: killing them, starting them, setting their environment variables, binding file system volumes, attaching terminals to them, and running processes within them. It's a completely different set. So you wouldn't try to fit all of that into Nova, because it would be like a round peg in a square hole, and it just would not work. So we came to this point of clarity where we decided that this needed to be a new project, and it needed to have its own API.

Now, the last time I gave this talk was in Vancouver, and during the Q&A there was a really good question that prompted a statement from me: cloud providers assume a risk when selecting a single container technology today. If you choose a single container technology to base your solution on, maybe it will remain in style and maybe it won't. And if it won't, depending on how much energy you put into integrating it, you may be kicking yourself later. So I propose that you use something modular that gives you a choice of what container software you run, and maybe run many of those simultaneously, side by side.

So here's a review from this morning. We have OpenStack, which is compute, networking, and storage. Magnum is an additional service that allows you to create a new entity, a new cloud resource, called a Bay. A Bay is where your container orchestration engine runs. I'll call it a COE, which is a new term that the Magnum team just made up because we needed to be able to talk about these things in a generic sense. And you currently have a choice: you can run Docker Swarm, you can run Kubernetes, and now, as of the Liberty release, you can run Apache Mesos. Hongbin, are you here? Hongbin, stand please. So thank you, Hongbin, for this new feature. I'm thrilled to have this. We now have the ability to run an Apache Mesos Bay that runs the Marathon framework, which has a REST API, so you can actually create containers against it. So thank you, Hongbin, for your contribution.

Now, why have these different bays? Why not just offer one, and have some generic OpenStack method for creating a container? The reason why is because we believe that you should have the native tool experience. You should be able to use the Docker CLI, like Jackson did this morning. You should be able to use the kubectl command against your cluster. And you should be able to enjoy the new features that surface in these various COEs as they're released, and not have to wait for the Magnum team to build leaky abstractions on top of all that stuff in order to surface them to you. Instead, you rely on Magnum to create the bay and scale that infrastructure for you from a capacity perspective. And then you interact: you create containers, manage containers, and stop containers, all using your native tools if you want.
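From the command line, that workflow looks roughly like this. This is a sketch based on the developer quickstart of roughly this era; the flag names, and the image, keypair, and address values, are illustrative and may differ by release:

```bash
# Define a bay model (the template for bays), choosing the COE
magnum baymodel-create --name k8sbaymodel \
  --image-id fedora-21-atomic \
  --keypair-id mykey \
  --external-network-id public \
  --flavor-id m1.small \
  --coe kubernetes          # or: swarm, mesos

# Create a bay from that model; Magnum and Heat stand up the Nova instances
magnum bay-create --name k8sbay --baymodel k8sbaymodel --node-count 2

# From there you talk to the bay with the native tools, for example
kubectl -s http://<k8s-api-address>:8080 get nodes
# or, for a Swarm bay
docker -H tcp://<swarm-api-address>:2376 ps
```

The point is that Magnum's API stops at the bay: the containers themselves are created and managed through whichever COE the bay runs.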
Now, Magnum also provides a feature to create a container, which I demonstrated in Vancouver, and it allows you to create a pod in Kubernetes as well. So it has this capability, but you also have the option to use the native experience. Depending on what bay you choose, you're going to get a different native API experience. Now, OpenStack is a relatively unopinionated entity. It does not dictate that you have to use a particular kind of storage, or a particular kind of network. And when it comes to containers, it's also unopinionated: it gives you the option to choose. When it comes to what compute you would like to run your containers on, there might be a good reason why you want to run your containers on virtual machines. You might really enjoy the additional features that your virtualization platform offers you, or you may really want the security isolation that virtual machines give you. But you might also have a reason to run on bare metal. So there is limited support today for bare metal in Magnum, so that you can run your containers on an Ironic instance. So I'll leave you with this: native APIs aren't just a good idea, they're essential.

Now, Magnum has these resources, right? It has container resources, it has bay resources, and it has nodes. Nodes are essentially Nova instances, and a bay is simply a grouping of Nova instances. So all of the bays in Magnum today have at least these three abstractions. Now, Hongbin, for creating a container on Mesos, do we have create container yet? No? OK. So at least with Docker Swarm and Kubernetes you have this; with Mesos, maybe soon. It does have a REST API, so this is not a hard thing to do. With the Kubernetes bay, we actually have more abstractions, and the reason we have more for Kubernetes is because this is the one we did first, before we had really explored having multiple bay types. So here we have containers, pods, services, bays, and nodes. Everyone knows what a container is. A pod is a grouping of containers that run together on the same host. A service is a way of connecting a network port to a container. A bay is a grouping of Nova instances, and nodes are related one-to-one to Nova instances.

So why do Magnum? What is so different about this approach than what came before it? Why aren't we just slapping Kubernetes on top of a Heat template and rocking it? The reason why is because that doesn't give you a strong multi-tenancy answer for the control and data plane. OpenStack already has multi-tenancy capability in just about every project. It's in Nova, it's in Keystone, it's in all of the different projects that we have, and Magnum is no exception. Because the bay is a grouping of Nova instances, the bay is also the security barrier between one tenant and another. So when you have a grouping of VMs that all belong to the same tenant, you have confidence that different tenants can't interact with each other's containers in this way. Also, if you look at the APIs for container creation in the container technology that existed as of 2014, all of them were synchronous, meaning that when you request a container, the call blocks for the duration of that operation until it completes. And that makes sense if your operations are always sub-second, right? But it doesn't make sense if you're going to be doing things like creating virtual machines on the back side of it. You don't want your API to block for long periods while that's occurring.
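To make those Kubernetes-bay abstractions concrete, here is a minimal sketch. The pod manifest is ordinary Kubernetes; the magnum pod-create form is the Magnum-side convenience mentioned above, and its exact flags are an assumption that may differ by release:

```bash
# A minimal pod manifest (nginx is just an example image)
cat > pod.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  containers:
  - name: web
    image: nginx
    ports:
    - containerPort: 80
EOF

# Through Magnum's own API (flags are illustrative)...
magnum pod-create --manifest ./pod.yaml --bay k8sbay

# ...or natively, straight against the bay's Kubernetes API
kubectl create -f pod.yaml
```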
So instead of blocking, Magnum returns an asynchronous response, and you can poll the API for the status. This is a more scalable approach. OpenStack Heat provides the orchestration capability for bringing up the bay. So when we bring up a bay, say for Kubernetes, there's something like 30 different software configuration steps that need to happen. It might even be more now that we have TLS support, which I'll talk about in a minute. We didn't want to reinvent orchestration, right? We want app orchestration to be handled by COEs, and we want infrastructure orchestration to be handled by OpenStack. So that's what we did. And then from an identity perspective, Magnum uses the same credentials for creating bays and bay models that you would use to create any resource in OpenStack. The same credentials you would use to create a server with Nova, or a volume with Cinder, are the same ones you're going to use to create bays.

So let's talk about the stuff that we just added. Now, there were hundreds of bugs closed and dozens of blueprints completed in the last six months by the 101 contributors to the project, but these are my four favorites. These are the ones that every single week in the IRC meeting we were hammering on, because we think these were the ones that added the most value to the project. First, the Mesos bay type that we talked about; Hongbin led that effort. This gives you the ability to run Apache Mesos within a bay with the Marathon framework pre-configured, which gives you a REST API to create containers. And Mesos is very compelling if you want container orchestration that is job-oriented or task-oriented; there is no parallel to its capability with respect to those workload types.

The second cool feature was what we call the TLS feature, which really is about securing the communication between clients and the bay, securing communication between the bay's controller and its actual COE, and then securing communication between the components of the COE. So there are a lot of moving parts there, all of which are potentially deployed on public networks, and they need to be secured in a way that's appropriate for something on the wide-open internet. The way it was at the time I showed you this in the Kilo release, anyone could come along and just start containers on your machines, which would be a bad thing, of course. But now we automatically generate TLS certificates, sign them, distribute them to the right place, and automatically wire them up, so that all you need to do is download the cert that you need in order to access the service and use it. Because the native tools that I'm talking about, the Docker CLI and kubectl, don't support OpenStack Keystone integration. They could, but they don't. What they all do have today is TLS capability. So they use a TLS certificate as an identity to authorize the client: if you have the right certificate, you can communicate with the API server. And Magnum handles the generation and management of those for you, so you don't need to be a cryptography expert to figure out how to configure your bays.

The next cool feature is the external load balancer support, and this is something IBM worked on. Ton, are you in the room? Will you stand? I'm going to come back to the TLS one in just a minute. So, Ton, thank you for your efforts on this. This took a while; we thought it was going to be easy.
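To make the TLS point concrete before getting into the load balancer details: once the bay's certificates are in hand, the native clients just use their standard TLS options. A minimal sketch, assuming the CA certificate and the signed client certificate and key have already been downloaded from Magnum for the bay; the file names and addresses here are placeholders:

```bash
# Against a TLS-enabled Kubernetes bay
kubectl --server=https://<k8s-api-address>:6443 \
        --certificate-authority=ca.crt \
        --client-certificate=client.crt \
        --client-key=client.key \
        get pods

# Against a TLS-enabled Swarm bay
docker --tlsverify \
       --tlscacert=ca.crt --tlscert=client.crt --tlskey=client.key \
       -H tcp://<swarm-api-address>:2376 ps
```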
On that load balancer work: it turned out that there was a lot of work we needed to do in Kubernetes and various things upstream in order to get this done. So I appreciate your efforts, thank you. This was a hard one. How this works is, when you create a Kubernetes bay, you can activate this feature, and it will add and remove nodes from a Neutron LBaaS instance in accordance with changes in your Kubernetes pods. So as your containers are growing and shrinking in your Kubernetes pods, your load balancer is tracking all of that. And this is really compelling, because the Neutron API can control not only the software load balancers that you would normally find in the container world, but it's integrated with a lot of hardware load balancers as well. So that stuff can actually work with a dynamic container system, which otherwise is a very difficult integration to do. You think it's easy until you try to do it, and then you find it's actually very difficult. So this is a pretty cool feature.

And then the final one was multi-master for Kubernetes. The idea here is that you don't want a single point of failure within your Kubernetes cluster, because if you lose that node, state about that cluster is lost. With multi-master, you essentially have two of these, so you don't lose any state when you have a single node failure. And you can just put an attribute on your bay that indicates how many masters there should be, by default one. You can change that to two, and now you've got an HA configuration, or you can change it to a bigger number. So it's basically just toggling a switch in order to get the additional masters. Now, if you were to try to do that on your own, again, that is not as easy as it sounds. Magnum is making that easy for you.

So I intentionally kept this short, because I know everybody has a lot of questions. A lot of questions about how we're going to integrate with networking, and what we're going to do with storage. A lot of these questions are actually still open, and we're here this week as a development team to figure a lot of this out. So I'm going to open for questions in a minute, but before I do that, for all of the Magnum developers in the room, especially the ones who worked on TLS, will you please stand? All of you. Hongbin, I see you. Ton, I see you. Okay, you're all a little shy, that's fine. So thanks to everyone who put this together. We have a microphone at the back; if you would like to direct your questions there, we can get them onto the stream. I'll start telling jokes if you don't ask questions.

Just a quick question. To the microphone, please. Is there a doc sprint in the roadmap?

There is not yet, but we would welcome that. Right now the only documentation we have is developer documentation, so we'd love to build that out more.

Nice presentation, Adrian, but I noticed on the contributors list, well, I noticed that Google had signed up as a sponsor earlier, and I noticed their name is not on the list. Do you know if they're actually going to contribute more code?

Yeah, great question. So Google is an OpenStack sponsor, and at the OpenStack Silicon Valley event, which was maybe a month or two ago, maybe September, Craig McLuckie made a public statement about OpenStack being the right answer for running container technology in private clouds. That's their point of view. And so I think the idea here is that they would participate in the OpenStack community.
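Going back to the load balancer and multi-master features for a moment, here is a rough sketch of what they look like from the client side. The --master-count flag and the way the load balancer is enabled are described as of roughly this timeframe and should be treated as assumptions; the Kubernetes Service manifest with type LoadBalancer is the standard Kubernetes side of it:

```bash
# Ask Magnum for more than one Kubernetes master when creating the bay
magnum bay-create --name ha-k8sbay --baymodel k8sbaymodel \
  --master-count 2 --node-count 3

# With the external load balancer feature enabled on the bay, a Kubernetes
# service declared with type LoadBalancer gets backed by Neutron LBaaS
cat > service.yaml <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: LoadBalancer
  selector:
    app: web
  ports:
  - port: 80
EOF
kubectl create -f service.yaml
```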
As for Google, I think they've actually got their hands full with the Kubernetes community right now, which is probably why they're not participating to the extent they anticipated. But they're welcome to begin participating, and I do have occasional interaction with them, but I'd like to see more participation. That would be great.

So I have kind of a crazy question. Okay. You know there's TripleO, right? So when do we get to see something like container-driven inception, things like with Kolla perhaps? I don't know. Where do you see this going?

Okay, so Kolla is a project that allows you to provision an OpenStack control plane that is containerized using Docker containers. Steve Dake, is he here? He's the PTL for that project. He was actually instrumental in getting Magnum off the ground. Will that supplant TripleO? I don't know. It might. There's also another project called OSAD, OpenStack Ansible Deployments, which is another way of configuring an OpenStack control plane. But from a Magnum perspective, it's pretty much beyond my reach, so I'm not sure.

Hey, it sounds like Kuryr is coming, but what's the story for networking right now between existing VMs and containers?

Networking between existing, okay. So let me just characterize Kuryr really quickly. Kuryr is a remote network plugin for Docker libnetwork. So you can use it for libnetwork to communicate with Neutron. When you create a network in Docker through the libnetwork facility, you end up with a Neutron network; that's the concept. And the idea there is that if you do that, then you can have Neutron networks with both virtual machines and containers participating together. Now, that also requires some additional features on the Neutron driver for what we call sub-interfaces: you need to be able to have multiple sub-ports on a single VM. So Russell Bryant from Red Hat is working on a piece of software called OVN, which you may have heard about earlier in the summit, and the idea there is adding a sub-port capability to Open vSwitch. So if you're using that driver, then you can actually have containers and virtual machines on the same network together, rather than just the bay nodes being part of the Neutron network with an overlay network for the containers on top of that. Now, why do we care? We care because we don't love having a network encapsulated within another network, right? You don't want an encapsulated SDN network with another encapsulated network on top of it, because you'd have a performance drawback with that kind of configuration. That's why we care about Kuryr. The Kuryr team has been working with the Magnum team extensively over the last couple of months to get all of our requirements aligned. It was a little bit of a bumpy start, but I think we got that figured out, and we're expecting great things from that project.

Yes, can I go to the microphone? So I guess it's a pretty basic question. I'm wondering, an end user who owns a project in OpenStack can have n number of bays, right? Yes. So I'm trying to understand, as an end user, why should I want to have n number of bays and different orchestration mechanisms, Kubernetes, Mesos?

Great question. Okay, so why? If you are the type of user who does not care about customizing anything that happens in the deployment process of your containers, then you would probably really like a declarative interface for defining your container deployment.
In a declarative interface, this would be a YAML file that just describes how things should be set up. With a declarative system, you describe the output, and you do not define the actual process that the system goes through in order to produce that output. As far as you're concerned, it's magic. And Kubernetes is one of these systems, right? You put the YAML file in, and magically your pod comes out the back. Now, that makes sense if you don't want to customize the process. But if you do want to customize the process, then you might really like something where you define the process, where the system is stupid and the instructions have all of the information in them. That's what we would call an imperative system. In an imperative system, you supply the exact process to follow when you create the resource, and the system just follows it. So depending on what you care about and depending on your application, you might want to have one application running in a bay that has a Docker API and another one running in a bay that has a Kubernetes API. Also, you might actually see a shift: maybe one of these three prevailing choices becomes totally unfavorable in the future, and you've got applications running on it. It might be nice if you had a migration story where you could run them concurrently and slowly shift from one to another. And Magnum provides a mechanism for that.

To the microphone, could you go? Yeah, I wanted to ask specifically about the LBaaS integration. If I understand correctly, with this integration, if I create my service in Kubernetes and I specify the service type LoadBalancer, it will automatically add the ports of this service to LBaaS, right?

If you've configured the bay to have the external LB feature turned on, then yes. Okay, thank you.

So I want to ask about the direction and future of Magnum. I heard about some movement around standardization of containers, like OCI or the CNCF. Is there any relationship between that kind of standardization movement and the ongoing development of the Magnum project?

Yes. So the OCI is the Open Container Initiative, which is a standardization of the container format, primarily the image format and related things, and the initial format is actually coming from the Docker specification. Are OpenStack or the Magnum contributors participating in that effort? I do know that there are Magnum contributors who are on the OCI, but we're not officially participating as an entity. We're keeping an eye on it. I think we're mainly concerned with the integration of the COE and the infrastructure, and a little bit less concerned about the format itself. But the bays are designed in a way that you could have alternate formats. So if Docker and OCI end up divergent, which I doubt, but if they do, then you can have different bays to support each. That's our point of view. Thank you.

I have a question on Docker not planning to support stateful containers. Is there a plan for Magnum to support Cinder directly connecting to containers?

Yeah, you should come to the fishbowl session for Magnum storage tomorrow and ask that exact same question. We have a lot of storage capability in OpenStack already, and almost none of it is wired up to Magnum yet.
There is a Cinder volume that gets created on each of the bay nodes, basically for local storage of what the node needs in order to run the bay, but it isn't exposed in a way where you can easily tie a volume to a container as you run it. So to make that more compelling, what I'd really like to see is the ability to create a Cinder volume, attach it to the Nova instance in the bay, automatically put a file system on it, and let you connect to that volume when you run your container with the -v argument in Docker. That's the first, most basic kind of storage integration we could do. Beyond that, we could potentially use Manila to get shared file system capability, where you could give hints with -e as you create the container that you would like to bind-mount something, but that it should be an existing volume, and that it could be shared across multiple bay nodes, regardless of whether they're on the same host or not. That would be pretty cool for things like image repositories for CMS systems, which aren't written to very frequently, but where every node wants a common view of the same file system. We think it would work really well for that use case.

The question was, can you have a mixed-mode environment where multiple bays give you different services, like one with Docker and one with Kubernetes? No, you choose that per bay. Well, you can run them concurrently, right? You can have a Docker bay and a Kubernetes bay simultaneously running on the same cloud within the same tenant account, but the only mixing we really do is with the flavor. So if the node is a master node, it can be configured with a different size than the rest of the nodes in the bay, but we don't try to mix COEs within a single bay. But think of a bay as a virtual resource, right? There's no cost to having a bay besides the process of running the orchestration system, so creating more bays doesn't consume a whole lot of resources. If you really want a mixed configuration, I would recommend just creating two bays and having those two bays interact.

So one of the big advantages of Apache Mesos is that it can also run other workloads, like Spark or Hadoop or something like that. Do you expect people to do that? And have you also thought about supporting Kubernetes on top of Mesos? Because that's something that we see more and more as well.

I'd be open to a bay type with Kubernetes on top of Mesos. The one we have today is a Marathon one; it wouldn't fit there nicely, but it would fit in a new bay type. I'd like to see that. In terms of big data workload use cases, I'd like to see more of those. I think most of what people have been using Magnum for today is kind of web-centric applications, but there's no reason why it couldn't be used for data-centric applications as well, especially once we figure out what storage capabilities to put in. If we figure out some of these Cinder volume integrations and make those really tight, running something like Spark or Hadoop on top of a Mesos framework would make a whole lot of sense.

You had a question? Kubernetes is already fairly opinionated about its networking model, right? And it explicitly departs from the Docker networking model. You guys are going to get stuck in the middle of that. Any thoughts on how you're going to address it? Yeah, so each bay type has, well, is Daneyon in here?
Okay, so Daneyon Hansen from Cisco has designed a container networking model for Magnum, which is an extensive specification that addresses exactly this area of concern. The idea is that we want to use the native networking capability that's in Docker, and we want to use the native networking capability that's in Kubernetes. But of course, because they're incompatible, we can't simply produce a single opinionated view of how networking should work on all container systems. So the short answer is there is a way that you can indicate how those networking systems should be configured, using labels when you create the bay. Those labels are what pass through the configuration information so that you can actually set up the network. And there's going to be a default network for each of the bay types that we support. So the default will probably be Kuryr through to Neutron for the Docker one, and we currently have a Flannel one for Kubernetes, which will likely remain the default for some time until we sort out how to integrate that with Neutron as well. Okay, all right, great. If you don't want to ask your questions now, I will stick around for a little longer and you can come and ask me up here. Thank you everyone for attending. Thank you.