All right, are we ready to start? All right, let's start. So my name is Adrian Otto. I'm here with Steve Dake. Hi, folks. I'm from Cisco Systems. And I'm from Rackspace.

This is the room safe from the hotel room that I'm staying in here in Vancouver. Last night I had this laptop, with my slides on it for this presentation, and I put it in here for safekeeping. When I came back, about one in the morning, to check a little email, the safe would not open. It just says "IR" on the front. I don't know why it says IR, but it does not want to say "OPEN," which is what I want to see on the front. IR as in irreplaceable, I guess.

So I call down to the front desk, and I'm like, I'm really sorry, I know it's one in the morning, but I really need to get into that safe, because there are hundreds of people who are going to want to hear my talk at 9 o'clock in the morning, and I can't do it without the slides. So they sent up some guys with a safe-cracking device. They hooked it up, and they couldn't make it budge. IR just would not budge. So they're like, we're really sorry, we're going to call somebody else and see if they can get in. They brought in somebody else; he couldn't do it either. They said, well, our engineering staff comes in at 7:30 in the morning, we'll have them come up. You just have a good night's sleep, Adrian, and we'll see you at 7:30. Meanwhile, Adrian calls me at 3 o'clock in the morning and says, can you make backup slides, just in case? And I did.

They showed up at 7:30, just as they promised. Marriott, these guys, their customer service is excellent. They tried again. Two more guys, two different guys, tried to get into the safe. They're like, yeah, the controller board that controls the lock will not work. So they took my safe down to the workroom, took out the grinders, and started grinding on the safe to get it open so that I could be here. And I have proof: there are metal shavings in my magnetic power port, so I cannot actually plug in my laptop. If we run out of battery, we'll just have to switch and do an extemporaneous demo on somebody else's computer. But here we go.

So Magnum is about providing a container service on OpenStack. It's not about inventing a new kind of container. It's about making the prevailing container technology just work well with OpenStack clouds. That's its purpose and vision. There are a whole lot of developers working on this right now, and it has seen more interest and velocity than any other open source project I have ever observed. It's obvious you want containers in OpenStack.

There is a diverse set of contributors. In fact, in our new governance process in OpenStack, we have a tag for projects for team diversity, and not all projects can have this tag. What it means, essentially, is that if any one of the sponsoring entities were to vanish, you could have confidence that development of the project would continue. We have this tag, and we're very proud of that.

But we're not the first to provide a containers solution for OpenStack. There's been container support in there for a while. Through libvirt, Nova can create LXC containers, and it has been able to do that for a long, long time. We have nova-docker. And since Icehouse, we've had a Heat resource for Docker.
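For reference, the libvirt LXC path mentioned above comes down to one Nova setting. A minimal sketch, assuming a stock nova.conf; the `virt_type` option is the real libvirt driver setting, but the file path and service name here vary by distro:

```shell
# Minimal sketch: switch the Nova libvirt driver from KVM to LXC.
# (virt_type is the real option; paths/service names are illustrative.)
cat >> /etc/nova/nova.conf <<'EOF'
[libvirt]
virt_type = lxc
EOF

# Restart the compute service so the new driver takes effect
# (the unit name differs between distros).
systemctl restart openstack-nova-compute
```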
But the thing these don't do is provide a scheduling function, an orchestration function, the ability to control what actually happens in the processes that run within the container. Those things are beyond the scope of what the Nova API was designed for. Heat takes it one step further, but not quite far enough, because there's no concept of a cluster of hosts running containers. Magnum fills this gap.

So there is some overlap between what Nova instances need and what containers need, but that overlap is really narrow. Things like create and delete obviously overlap. If all you need to do is start and stop your container and run a pre-baked process inside it, then something like libvirt LXC is perfectly appropriate, as long as the containers on the same host are not hostile workloads, meaning workloads belonging to different tenants. Containers, because they have a different life cycle and a different API than Nova instances, need a dedicated service with an API intended for the exclusive use of containers. That's what Magnum is. It's a combination of OpenStack and Kubernetes and Flannel and Docker Swarm, and all of those things combined give us this integrated solution called Magnum. If you saw the keynote on Tuesday, I showed you a Heat stack with everything that actually happens in the orchestration process of creating a bay. There are something like 29 or 30 different software configuration events that occur in setting up a Kubernetes cluster, and those are all steps you don't need to do if you're using Magnum.

So these are the different API resources that we express in the Magnum API. We have a concept of a node. A node is one member of a bay, and a node is always a Nova instance. Today, those Nova instances are virtual machines. Over time, we'll support any kind of Nova instance: an Ironic instance, so you can run your containers on bare metal; an alternate hypervisor, if you're not using KVM; and finally, a container virt driver for Nova. That sounds kind of like inception, but you might want to run your bay in a container so that you get a more densely packed arrangement of bays. In that case you would have containers within containers. That doesn't make any sense to people thinking about virtualization, because nested virtualization produces really bad performance outcomes. But nested containers don't; they have no performance penalty for running in a nested arrangement. I'm using the word "no" figuratively, because there is literally a tiny bit of overhead, but it is nothing like what you would experience with nested virtualization.

Then we also have the resources that map to what's in the container orchestration engines. Pod, service, and container are all abstractions that Kubernetes models, and we have direct mappings to those.

We also have two different kinds of bays. As I mentioned on Tuesday, the bay is the place where the container orchestration system goes. So you have a Magnum API; as a user, you create a bay; and as soon as you have a bay, you can start putting things in it. The things you put in there, if you're using the Kubernetes bay type, are pods. The things you put in there, if you're using the Docker Swarm bay type, are containers. So if you're using Docker Swarm as your container orchestration engine, you have bays, nodes, and containers, and that's it.
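To make the Swarm case concrete, here's a hedged sketch of that bay/node/container workflow; the flag names follow the Kilo-era python-magnumclient, and the image, keypair, network, and flavor names are placeholders, not anything from the demo:

```shell
# Hedged sketch of the Swarm bay type workflow; flags may differ by release.
magnum baymodel-create --name swarmbaymodel \
  --image-id fedora-21-atomic \
  --keypair-id testkey \
  --external-network-id public \
  --flavor-id m1.small \
  --coe swarm

magnum bay-create --name swarmbay --baymodel swarmbaymodel --node-count 2

# With the Swarm bay type you work with containers directly, not pods.
magnum container-create --name test-container --image cirros --bay swarmbay
magnum container-start test-container
```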
If you're using Kubernetes, you have bays, nodes, pods, containers, replication controllers, and services. A replication controller is the thing that determines how many replicas of something are running, in a pod or across pods. A service is the way you get TCP connections from clients to your running containers.

Now, Magnum is different from other container software that has preceded it. The number one way it is different is that it's multi-tenant, from the bottom to the top. If you create something in Magnum, only you are going to be able to see it; your neighbors are not. That's different than if you decide to run Kubernetes on your own. If you stand up your own Kubernetes, you create something in it, and somebody else uses your Kubernetes cluster, guess what? They see everything you made, which doesn't work in public cloud use cases. So we solve this.

Second, when you combine the creation of containers with the creation of virtual machines, you're not dealing with sub-second returns anymore. You're dealing with things that take seconds or minutes to complete, and you don't want an API client blocking for several minutes while you create a VM. Instead, we do this in an asynchronous fashion: we have asynchronous API semantics, so when you ask for a bay, you get a 201 Created back instead of a very long delay.

And then we're integrated with the OpenStack services. Everything OpenStack does well, we just want to leverage; we don't want to reinvent the wheel anywhere. For identity, we're using Keystone. For orchestration, we're using Heat. For image storage, we're using Glance. For networking, we're using Neutron. We're just leveraging everything that's already there.

Now Steve is going to come over and we're going to show you a demo of how it works, and after that, you'll get an opportunity to ask questions about what you see.

Hi, folks. Let me maximize this so everybody can see it. Too big? Yes, too big. See, on my computer, that's not too big. OK, let's just stretch it. That's good. OK, there we go.

So the first thing I'm going to do is source my Keystone credentials. Right now I'm running in a containerized deployment of OpenStack called Kolla. We added Magnum to Kolla towards the end of our development cycle. The only reason I do this is because DevStack changes, and I didn't want to check out the stable branches of DevStack and have something potentially break; I wanted to work with something that was stable. What's cool about this is we're using RDO, which is very stable, and it's based upon the stable branches of pretty much all of Kilo.

So let me show you my openrc. It's a straight-up openrc, just like you would use in DevStack. I source my credentials and then run a couple of things, like a neutron net-list, so we can see that our system is working. This system is running Kolla for OpenStack. And we're running Magnum as a Kolla container, right? Yeah, that's right.

So I'll show you. I wrote a couple of demos, one for Heat and one for Magnum. I'm going to go into the Magnum directory. What we use in Magnum are micro OSes. This is something Adrian didn't talk about. We use Fedora Atomic or CoreOS. We could use other OSes, like RHEL or Ubuntu, so there are some options there. But right now, I'm going to demo Fedora 21.
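The openrc being sourced here is the standard sort of thing. A minimal sketch with placeholder values, not the presenter's actual credentials or endpoint:

```shell
# Minimal openrc sketch; all values are placeholders.
cat > openrc <<'EOF'
export OS_USERNAME=admin
export OS_PASSWORD=secret
export OS_TENANT_NAME=admin
export OS_AUTH_URL=http://192.168.1.100:5000/v2.0
EOF
source openrc

# Sanity checks that the cloud is responding:
neutron net-list
nova list
```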
I'll show you our start script; it's pretty straightforward. Up here, I know this is a lot to take in, but we just download the Glance image. Then we get the Neutron network ID. Then the script deletes the old image and installs the new one, so it's pretty straightforward. And then we do a glance image-update. The reason we do this is to register the distro with Glance; that's how we determine which distro we're launching on, because we need to make decisions based on CoreOS versus Atomic. So that's this part right here, the glance image-update. And then if you can press enter, Adrian. Thank you.

So the first thing we do is create a BayModel. The reason I'm not typing this in is because I'd probably fat-finger something and make an error. The important parts here are the networks. The fixed network is a private network that Kubernetes runs on, isolated from the rest of the system; that's for security reasons. And then we have to give it an external network ID, so the Kubernetes cluster can communicate with the outside world. Those are the most important things. We give it some flavor information, and we specify a container orchestration engine of Kubernetes. And then the last thing we do is a magnum bay-create. We specify the name we want, and we give it a BayModel name; this is the BayModel up here, on this line, which we named "test." We're going to create the bay based upon the BayModel. A BayModel is like a flavor in Nova, a similar idea. And then we give it a node count.

So that's the script, pretty straightforward. Just so I don't fat-finger it, I'm not going to type it all in; I'll just run it. First let me show you there are no bays running. The reason I have to do this as sudo is because my Python client is installed with pip -e in my home directory and I'm on somebody else's login. You don't have to run it with sudo; it's just my environment here. So you can see the bay-list is empty. I'm going to go ahead and run the start script. You can see it's printing a little bit of information down low: there's the Glance image, pretty straightforward. We have it on the hard disk, and now it's loading into Glance.

So when you started that, you created the BayModel. The BayModel is essentially a template: when you create a bay, it inherits all the attributes that are in the BayModel, so you don't have to specify them all as a train of very, very long arguments to the bay-create command. So bay-create typically has two or three arguments rather than the 15 you might need if you were creating a bay from scratch. And the bay-create is the thing the tenant actually does. The BayModels may be put in there in advance by the service provider, the cloud operator, or you can have your users create their own. But the idea is that the thing the user creates is the bay, based on the BayModel.

OK, I've started it up again. The script can just be repeated, and I'm actually logged in as myself now, so I can access my own Magnum client that's installed in my home directory. So again, we download the Glance image, very straightforward. This is the image that's going to run on the hosts inside the bay, the nodes, as we call them. Now we've created a bay.
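Putting those pieces together, the start script being described is roughly the following. A hedged reconstruction: the image name, keypair, networks, and flavor are assumptions, and the exact flags follow the Kilo-era glance, neutron, and magnum clients. The `os_distro` property is the bit that registers the distro with Glance, as described above:

```shell
# Register the micro OS image with Glance and tag it with its distro,
# so Magnum knows whether it's launching Atomic or CoreOS.
glance image-create --name fedora-21-atomic \
  --disk-format qcow2 --container-format bare \
  --file Fedora-21-Atomic.qcow2
glance image-update fedora-21-atomic --property os_distro=fedora-atomic

# Look up the external network ID to hand to the BayModel (name assumed).
NETWORK_ID=$(neutron net-list | awk '/ public / {print $2}')

# The BayModel: like a Nova flavor for bays. Fixed network is the private
# network Kubernetes runs on; the external network reaches the outside world.
magnum baymodel-create --name testbaymodel \
  --image-id fedora-21-atomic \
  --keypair-id testkey \
  --fixed-network 10.0.3.0/24 \
  --external-network-id "$NETWORK_ID" \
  --flavor-id m1.small \
  --coe kubernetes

# The bay inherits everything else from the BayModel, so this stays short.
magnum bay-create --name testbay --baymodel testbaymodel --node-count 2
magnum bay-list
```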
Now, one of the cool things Adrian talked about is that we use Heat. So we can just look at the heat stack-list, and we see the test bay there. That's the one we created; it just picks up the name. And we can do something like a heat resource-list. Resource, yeah, there we go. I type fast with lots of mistakes; that's my motto. What this shows is all the different events happening inside of Heat. Adrian talked about the software configuration events; there's a whole bunch of stuff that happens here, and I'm not going to go into a lot of detail about it. What we can see, though, is that it's going to launch some Nova instances, and some are actually already running. So it's still creating. My machine is a Xeon and I've got an SSD, so it's pretty fast. We're creating one master node and two minion nodes, and we're also deleting my old, stale Heat stack. The reason this isn't happening instantaneously is because we're using VMs as the bay nodes. If we were using containers as the bay nodes, or just adopting bare metal machines as bay nodes, bay creation could return almost instantaneously.

One of the interesting things is we have really good integration with Neutron. So I can do something like a neutron port-list, and we see these are all the ports assigned to the bay. Pretty straightforward; I'm not going to go into a lot of detail about what the different IPs mean. But let me show a nova list. Nova list, if I can spell it right. Yay, success. OK, we see the minions are active. Actually, I created this demo with just one minion. And there's actually a bug in Nova, which is very unfortunate. For example, you see on this last line, you don't see the IP addresses. This is a bug in both Juno and Kilo of Nova; it's not a Kolla problem. In the networks column, yeah. Maybe a client bug, actually? No, it's the instance info cache in Nova; I've debugged it quite a bit. Something's definitely broken there. But what we can see here are the minion's IPs. The floating IP is the last one; that's the one you log in to. It hasn't finished setting up yet.

What does the bay status show us? Yeah, I'll show it; I'm going to do a bay-list. The problem with OpenStack is you have to type so much. OK, great: complete. Now I'll try a bay-show. That gives us our node address.

What we can show next is Redis. I've got a Redis script, which is very straightforward. What the Redis script does is create a pod. Adrian talked about the pod operations. So we do a magnum pod-create, and we give it a manifest. We saw that the bay is created completely. I'm not quite sure why the SSH key isn't registered; it doesn't really matter. Then we do a service-create. Then we do an rc-create. An RC is a replication controller, which creates replicas of pods across the system and keeps them running. So if you want to pull the trigger on that, Adrian, and run it.

Now, we communicate with Magnum to do all this work. We had a session Monday, which I think was very interesting, where we talked about having native tools communicate with Magnum. I think we're going to tackle that work pretty early in the cycle. But we can see, for example, a magnum pod-list, and this puts out a whole bunch of information. Now, I'm not actually going to demo Redis itself, because I haven't used Redis in a lot of detail.
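The Redis script being run here is essentially the standard Kubernetes Redis example driven through Magnum. A hedged sketch, assuming the usual example manifests are present locally; the file names follow the Kubernetes Redis example of that era:

```shell
# Create the Redis master pod from a Kubernetes manifest, via Magnum.
magnum pod-create --manifest ./redis-master.yaml --bay testbay

# Expose it with a service, and keep replicas running with an RC.
magnum service-create --manifest ./redis-sentinel-service.yaml --bay testbay
magnum rc-create --manifest ./redis-sentinel-controller.yaml --bay testbay

# The pods are visible both through Magnum and through Kubernetes itself.
magnum pod-list
```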
This shows that the pods work, Kubernetes is launching them, and Kubernetes is set up in the environment. So you want to show a bay update? Yeah, why don't you go ahead and show that.

The way bay update works is we can update the node count, so we can scale up. The scale-up lets us add minions to the system while the system is running. Somebody in the community has submitted an auto-scaling patch, so we can scale up and down. It's going to take a little while to sort out, but scale-up definitely works really nicely. See, Adrian's fat-fingering. I did? No, you got it. There we go. OK, I incremented the node count from one to two. So if we do our bay-list again, you see the status of the bay go to update in progress. And this is mirroring what's happening in Heat; this is not actually a state inside of Magnum, it's a state inside of Heat, because this is actually happening in a Heat orchestration. You can show that with just a heat stack-list, or a nova list. If you want, you can show the new VM being created. You see this one right here? It's in the build state. This minion will come up, register with our overlay network, Flannel, and register with the minion software inside of Kubernetes.

The cool thing is, we've talked a lot about Kubernetes today, but we also support Swarm; we just don't have enough time to demo Swarm. We've talked about CoreOS; we don't have enough time to demo that either. We had to pick what to demo. I know why you couldn't log in. Why is that? Because you created it under Adrian's account, and it uses his key. And now you're in your own account, so you don't have the key. Cool, let me switch over then. I did want to show you something, actually. Thank you for thinking through that.

So I think it's a good time to take input from our audience. Yeah, that's good. What would you guys like to know about?

When you create a bay, you get an attribute called node address, and that's the address of the Kube master. So you can actually use kubectl: point the Kubernetes master environment variable at that address, and use kubectl directly with it. Right now, you would just make a direct connection to the IP, and it's unprotected, but we have an open blueprint for using a TLS key pair to do that authorization. This is why we recommend you don't deploy Magnum yet: there's no TLS endpoint security. We need to tackle that. But if it's a private cloud scenario and you're using it on a network that isn't public, this may be fine.

I wanted to show something real quick. I am logged into the minion now, thanks to Adrian debugging my key problem; I was logged in with a different person's credentials. So we can run kubectl inside the minion; I can do a kubectl get. Yeah, so your choices are: from the client that you used to create the bay, you can use kubectl from there; or, what he's showing you now, you can be on the Kube master and run kubectl from there, which connects to a local address, right? We did a bay update, and now we have two minions listed in Kubernetes in the ready state. So that actually works; bay update works. And I can do something like a get pods.
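The scale-up and the direct kubectl access just demonstrated look roughly like this. A minimal sketch, assuming Kilo-era names: the `bay-update` syntax is as documented then, while the bay attribute name and the pre-1.0 `get minions` resource are assumptions based on that era:

```shell
# Scale the bay from one minion to two; status mirrors the Heat stack,
# moving through an update-in-progress state.
magnum bay-update testbay replace node_count=2
magnum bay-list

# Grab the Kube master address from the bay and talk to it directly.
# (Attribute name assumed; the talk calls it the "node address".)
MASTER=$(magnum bay-show testbay | awk '/ api_address / {print $4}')
kubectl -s "http://$MASTER:8080" get minions
kubectl -s "http://$MASTER:8080" get pods
```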
See, people want to use kubectl, not necessarily the Magnum client, but we need Magnum for the Heat integration and for other projects. So what this shows here is just Kubernetes downloading the Redis software, and we see that everything should be in the running state. That means it's running inside of Kubernetes.

You had a question? Yes. The problem with scaling down is, if you delete a node, what do you do with the information on the node? You may lose persistent data. So I'm not keen on the scale-down; I love the scale-up. I suggested to the person who authored the patch to just make it scale up. I think to scale down, we need support from Kubernetes. Also, if I could encourage people who have questions to queue up at the microphone, so the questions can be recorded. Yeah, definitely.

Is there a scaling policy feature coming? That was the question. Yeah, definitely. Kubernetes doesn't yet have a way to scale down. As soon as it does, we can use that capability.

Question? I have a question about the CLI. A lot of the OpenStack projects are moving to the OpenStack CLI, the common tool. How do you see that happening with Magnum? And if not, will Magnum support Keystone V3 APIs for authentication? I'll take that one. There is an open blueprint for integrating with OSC, and that work, I believe, is happening already. I think Ronald Radford was working on that, and there's another Stacker who volunteered to do that work. So that should be done very soon. It's a very trivial integration, because we already have python-magnumclient; putting it in OSC is less than a day's work.

OK. More questions? Yes. Would you mind queuing at the mic, just so folks can hear? Because it's recorded. Could you guys talk a little bit about the plans for network integration: what's going to happen with Flannel, what's on the roadmap for Docker, and some of the other issues and improvements we can expect in the networking layer with respect to containers?

Great question. So Docker 1.6 is integrating with libnetwork, which has its own implementation for connecting containers to other containers. And it's been suggested on the mailing list, even this morning I was reading a note there, that maybe we should just use that. The drawback of that approach is that it's not generic across any container format; it's specific to Docker. It's not that another container system couldn't leverage libnetwork, but none have yet. So we had two design sessions yesterday about this topic, and we got to a point of clarity that for now, using something like Flannel as an overlay is functional. It's just not as performant as if you had one layer of SDN in there, rather than an encapsulation layer plus an SDN on top of the physical network. So we've talked about ways to plug Neutron implementations into tools like Flannel and libnetwork, so that instead of having a VXLAN overlay on top of a Neutron network, you would just have a Neutron network underneath. That's our current thinking. That way we handle this in a generic way that's not specific to a particular container type, and it would be in the upstream projects where we would make those contributions that plug into OpenStack. So that's the current plan. We definitely want to get rid of the extra memory copies; those are expensive. And I think the way to do that is to integrate Flannel more tightly with Neutron.
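For context on the overlay being discussed, here is how a Flannel network is typically configured through etcd. This is generic Flannel usage of that era (the etcd v2 client), not a Magnum-specific API, and the subnet is a placeholder:

```shell
# Hedged illustration: publish Flannel's network config into etcd.
# Each bay node's flanneld then carves a per-host subnet out of this
# range and encapsulates container traffic (here with VXLAN) over the
# Neutron tenant network underneath.
etcdctl set /coreos.com/network/config \
  '{ "Network": "10.100.0.0/16", "Backend": { "Type": "vxlan" } }'
```

It is exactly this encapsulation-on-top-of-Neutron layering that the proposed Flannel/Neutron integration would collapse into a single SDN layer.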
I was curious if there are any thoughts on how persistent data volumes might be handled. Any integration with Cinder? Or is there some better way to handle persistent data volumes that I may want to attach and detach between containers?

I'll go ahead and take this one. We have Cinder support already, so we do have persistent volumes. Now, if a VM were to die for some reason, you would lose access to that information, because right now in Magnum, if you create a new node for a bay because one failed, there's no way to attach that Cinder volume back to the new VM. We talked a lot in the design session about how to handle that. Somebody mentioned the HA restarter from Heat, but that's been deprecated, and we're not going to use deprecated functionality. So I would say it's an unsolved problem in Magnum, but it's something we want to solve.

Now, we did talk a lot about persistent state. We talked in the design sessions about using Manila for sharing information between different containers; I'm not sure if we'll do that. We're definitely going to use Cinder as a hard dependency for our persistent storage. One problem we have with Cinder is that we can't have multiple nodes mount a volume, which is how it should behave, right? That's correct; that's why we're talking about Manila, for example. One issue with Kubernetes is they haven't quite finished the volume support for OpenStack, so there needs to be some work there. And I think that gets to the guts of your question of how we support that, and the answer is we really need to wait for Kubernetes upstream to do their job. They're a fantastic upstream; they're just really overloaded. We are well coordinated with that project, there's continuing communication, and we're confident it's going to come out well.

Yesterday in the talk on OVN, the OVN developers mentioned that containers were designed in as a first-class citizen from the beginning. Does the OVN work impact any of the things going on in Magnum, or is it unrelated? It is one of the subjects of discussion from yesterday's design session. My understanding is that OVN is what allows us to have multiple mappings per guest, which is what would allow containers to work. We didn't get too deep into the implementation details, because we were trying to focus on what our guiding philosophy is for where these things belong; that's where we spent most of our time. But I believe our first implementations probably will leverage OVN, at least for the reference implementation, and then additional implementations as we get to more and more Neutron driver types. Yeah, I would add one thing we want to make sure we don't do, which is have Magnum pick something that only one type of networking infrastructure supports. Most people want either OVS or Linux Bridge, so we want both, and OVN doesn't offer that today. But I think it could. I talked with folks in the Neutron community, the core reviewers, last night, and they were pretty keen on the idea. So I think it will definitely happen.
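Going back to the Cinder point for a moment: the workflow that exists today, under the limitation described above, is a manual one. A minimal sketch where the volume name, size, and identifiers are placeholders:

```shell
# Create a persistent volume and attach it to a bay node by hand.
cinder create --display-name redis-data 10
nova volume-attach <bay-node-instance-id> <volume-id> /dev/vdb

# If that node dies, re-attaching the volume to a replacement node is
# exactly the manual step Magnum cannot yet automate.
```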
So, attending a lot of these container sessions over the past few days, it seems like OpenStack talks generically about containers, but my observation is that there's a closer affinity toward Kubernetes than Docker. Is that an accurate observation? At the moment, that happens to be the case. And is there a reason why that is? Things are moving rather quickly. If you had asked me that same question three months ago, it would have had a different answer, and three months from now, who knows? But we do know that CoreOS has expressed an interest in putting Rocket support into the Kubernetes bay type, so we expect that to come. You're going to see support for a Mesos bay type as well, if the interest we saw expressed yesterday in the design track ends up turning into code.

So I think the point here is that when you make a choice as a service provider about what your container strategy is going to be, there is risk in picking a winner right now, because things are moving so quickly. If you say, look, I understand OpenStack is going to be part of my equation, and I'm choosing Magnum, then for now you select whichever bay type you want today. If you decide later you want to start using different bay types, that's fine, and you'll be able to use them side by side. I showed you in the demo in the keynote on Tuesday that I had a Swarm bay sitting side by side with a Kubernetes bay. So you'll have an actual migration strategy for your users. Kubernetes happens to be the thing now, and it may end up prevailing. Who knows, things might change, and we want you to have a pluggability story so that you can put in whatever your users really want when the time comes. Thanks.

Any more questions from the audience? We're almost at time. Yeah, we are. Two minutes. Good timing. Thanks everyone for coming. Thank you.