Hi, I'm Adrian Otto. I'm a Distinguished Architect at Rackspace and the PTL for the Magnum project, and I'm here today with Lachlan. Hi Adrian, how are you doing? So, containers. Containers, yes. They're a very hot topic at the summit. We've been running containers on OpenStack in production for about three months now, and I spoke at the keynote yesterday about how we're using containers to solve developer and application deployment issues. But one of the things we're seeing a need for is packaging and distribution around container orchestration frameworks. When we entered the game it was still fairly raw, and some of our implementations now are still fairly raw, so integrating with different pieces of OpenStack to provide a more well-rounded solution is something we're very interested in. Great, so it sounds like you could be the next Magnum superuser. Sounds like it, sounds like it. Tell me a little bit about how Magnum could solve some integration issues. Well, as you well know, containers are about more than just packaging and distribution of apps. When you use containers you've actually got a whole spectrum of problems. Some of these problems are infrastructure problems and really should be solved by tools like OpenStack, and some of them are truly container-centric and should be solved by tools like Docker and Kubernetes and Mesos and other orchestration solutions. And so Magnum recognizes this and says: we're going to glue the best of infrastructure software together with the best of container software, and we're going to make this the most compelling integration of these two worlds. Fantastic. So a lot of people are raising these questions: are containers going to destroy OpenStack? Yes. Are they going to replace virtualization? All of these dramatic questions.
And the truth is, of course not, because there's 12 years of value in virtual machines and virtualization software platforms that isn't going away, and containers really are intended to augment that, and these things really need to work together. And so Magnum is really trying to make this easy with OpenStack, and we hope that there might be a fit for you. Yeah, absolutely. I think it'll be a great fit. But you touched on one thing that I think is really important. I'm getting a lot of questions about containers versus VMs, containers versus VMs. And what I'm telling people internally is: select the right tool for the job. Containers and, as you said, VMs are both still very compelling and important, and I see them working in unison with one another. So there are use cases, especially for us, for containers, but there is still a plethora of use cases for VMs. So I see them living in harmony. I think that's something that's on everybody's mind at the moment. So I have a question for you. Sure. Containers make a whole lot of sense for stateless components, but it's more tricky to deal with storing data and dealing with data-centric applications or data-centric components. Yes. How are you dealing with this? So what we did with containers and our container deployment was actually to engage with the development organization and say: can we solve your micro-service deployment problem? And micro-services are typically written in about a month. They're less than a thousand lines of code, and they're back-ending to traditional data stores. They're not packaged with data stores. So we thought this is a use case that containers would actually be complementary to. So we went down that path, and at the end of the month we had packaged up 30 different micro-service apps and found that the developer experience and the time to get these apps out was much quicker than with VMs.
But we still have a fairly big VM use case for persistent storage and for applications that don't lend themselves to being containerized: databases, those kinds of applications. Depending on what kind of data system they are. Yeah, absolutely. I mean, one of the use cases we had for containers was ZooKeeper, and Kafka. And these things scale, you know, they're completely distributed data storage systems, right? And we containerized them and rolled them out, and that gave us tremendous ability to scale up and scale down very quickly. Did you end up bind-mounting their data directories to volumes that were on the host? Yes. And did some clever namespacing. Good. Okay, that's what I'd do. So that's what I'm doing at the moment, but I see, not too far down the line, directly attaching storage to the container via the underlying host without having a shared mount point. And that's an area I think we're going to be working on. We have a design session later this week to look at exactly how we're going to integrate things like Cinder. Yes. Things like, you know, Manila, where we can have a shared file system that can be connected to multiple containers simultaneously even if they're on different hosts, those sorts of things. I think there's some really cool stuff we could do with storage that's right on the horizon. I think that's a great point, because when I look at OpenStack and what it provides, with the keynote yesterday we were able to pivot and use OpenStack as a platform to deliver containers, right? What I would love to see is integration with more of the other projects, because I already have solutions for block storage, I already have solutions for networking. To be able to tap in and leverage those APIs that we already know and love to deliver networking and persistent storage and object storage and all these different services inside containers, I think that would be a great win and make a really compelling use case for containers on OpenStack.
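The bind-mount pattern discussed above can be sketched with plain Docker. This is a minimal illustration, not the actual deployment from the interview: the image tag, host path, and ports are assumptions (the official `zookeeper` image keeps its data under `/data` by default).

```shell
# Sketch: run ZooKeeper with its data directory bind-mounted from the host,
# so the container stays disposable while the data survives restarts.
# Host path, image tag, and ports are illustrative assumptions.
docker run -d \
  --name zk1 \
  -v /srv/zookeeper/data:/data \
  -p 2181:2181 \
  zookeeper:3.4

# Replacing the container reuses the same host directory, so state persists:
# docker rm -f zk1
# docker run -d --name zk1 -v /srv/zookeeper/data:/data zookeeper:3.4
```

The trade-off mentioned in the conversation is that this ties the container to a specific host's filesystem, which is exactly what attaching Cinder block storage through the host would relax.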
Well, a lot of times people ask me, what's the difference between using Magnum and just using Heat to slap Kubernetes on top of OpenStack? And that's exactly the point, right? It's more than just setting up a container orchestration engine and plopping it on top of some compute. How do you take advantage of everything else you have around it? You've already pooled all your resources behind well-defined APIs. So one of my questions now is: I'm going out and mounting these data stores without using Cinder. I would love to use Cinder as my block storage endpoint to provision container block storage on OpenStack. I think that would be a great win. And as you mentioned, Manila and all these other services, object storage, this would be a great win for containers and help move them forward. So, anything you're curious about with respect to Magnum that I can answer? I think the Magnum piece for us is really getting those really nice integration points with the rest of the OpenStack projects. And there are some pieces of container orchestration right now that are lacking, especially in an enterprise setting: user management, role-based access control. And I'm interested in what's happening on that front with Keystone as it relates to the Magnum project. So we are intentionally not trying to duplicate any identity solution. Many platforms make the mistake of taking on identity as one of the things they need to be concerned about. We really wanted to not overlap with things that already existed in the OpenStack ecosystem, and to recycle the things that were already there. So from an identity perspective, we're going to use Keystone for that. We're already integrated with Keystone version 3. We're using trusts in order to make it possible for bays to scale even when they're unattended. There's no user there, but they can still interact with OpenStack using the trust. So it's a really important integration piece. You mentioned role-based access control.
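The Keystone v3 trust mechanism mentioned above can be exercised directly with the OpenStack client. A rough sketch, where the user names, project, and role are made-up placeholders rather than anything Magnum actually configures:

```shell
# Sketch: a trust lets a trustee (e.g. a service user) act with a subset
# of the trustor's roles, even when the trustor is not present -- which is
# what allows an unattended bay to call OpenStack APIs as it scales.
# "alice", "magnum-service", "demo", and "member" are illustrative names.
openstack trust create \
  --project demo \
  --role member \
  alice magnum-service

# The trustee can then obtain a token scoped to the trust and use it for
# API calls on the trustor's behalf (trust id comes from the output above):
# OS_TRUST_ID=<trust-id> openstack token issue
```

The design point is that the delegated credential carries only the roles the trustor chose to delegate, so an unattended bay never holds the user's full credentials.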
That's something where you need to use features from Keystone, but it needs to actually be implemented in the service itself. Now, we've focused most of our energy on making Magnum multi-tenant. So from that perspective, we've got it set. But we haven't really gone down the path of making a very granular RBAC implementation within Magnum. That would be something I'd love to get your input on so that we could spec it out. But from an identity perspective, there's another topic. One of the things Magnum tries to do is give you a choice of what your native container experience should be. And I demonstrated in my keynote this morning a native Docker experience. So in order to do that, we need some way for Docker clients to securely communicate with the Docker APIs. And they don't use Keystone; they use TLS certificates. So Magnum needed a way to generate these TLS certificates on behalf of these clients and make a way for us to easily get them downloaded. And Carina by Rackspace shows you an implementation that uses this process of downloading the credentials, just running a quick script, and all of a sudden you're set to go. So we have kind of built that cross-religion identity bridge by implementing this. It's hard. Most system administrators don't fully understand what it takes to generate certificates, do the signing properly, make sure that you're protecting the private key, and get all these bits working together, right? It's not just from the client to the API; it's also the different components within the service. And then within the container orchestration system, all the bits within that need to be securely using TLS communications as well. So there need to be credentials all over the place, and you have to make sure the distribution of those credentials is secure. How do you do that properly? That is a hard problem to solve if it's not something you're doing every single day.
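To make concrete why this is painful to do by hand, here is a minimal sketch of the kind of TLS bootstrap being described: a CA, a server certificate for the Docker API endpoint, and a client certificate the `docker` CLI presents. The subject names and file paths are illustrative assumptions, not Magnum's internals.

```shell
# 1. Create a certificate authority (the root of trust for the bay).
openssl genrsa -out ca-key.pem 4096
openssl req -new -x509 -days 365 -sha256 -key ca-key.pem \
  -subj "/CN=bay-ca" -out ca.pem

# 2. Server key and CSR for the Docker API host, signed by the CA.
openssl genrsa -out server-key.pem 4096
openssl req -new -key server-key.pem -subj "/CN=docker-host" -out server.csr
openssl x509 -req -days 365 -sha256 -in server.csr \
  -CA ca.pem -CAkey ca-key.pem -CAcreateserial -out server-cert.pem

# 3. Client key and CSR, signed by the same CA.
openssl genrsa -out key.pem 4096
openssl req -new -key key.pem -subj "/CN=client" -out client.csr
openssl x509 -req -days 365 -sha256 -in client.csr \
  -CA ca.pem -CAkey ca-key.pem -CAcreateserial -out cert.pem

# The docker CLI can then authenticate to a TLS-protected daemon:
# docker --tlsverify --tlscacert=ca.pem --tlscert=cert.pem --tlskey=key.pem \
#   -H tcp://docker-host:2376 ps
```

Every step here has a failure mode (weak keys, leaked `ca-key.pem`, wrong CN, expired serials), which is the work Magnum automates on the user's behalf.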
And most system administrators are not concerned with that today, so Magnum takes that on and makes it easier. So that's an area where we've done some work. But obviously, integration with enterprise identity systems through a Keystone plug-in might be something that would be really powerful for an RBAC solution. I'd love to get your input on that. Fantastic. So with Magnum, what's the buy-in from the other projects? Are you just overlaying it onto something like Neutron and using its primitives that are already available? Or is there some buy-in from another project that you need in order to get Magnum integrated cross-project? Great question. So Magnum wants to recycle what's there, right? So it's an overlay on what's already there. It sits on top of it. Okay, fantastic. So we make the bay, and when we create the bay, we use a Heat template. And in the Heat template, we're actually creating Neutron resources in order to connect the components of the bay together, all right? And the challenge is, once we've got all of the OpenStack parts properly networked, how do we get the Kubernetes-level things and the Docker-level things connected with it as well? Well, there's this new project that the Neutron development team has spun out called Kuryr, K-U-R-Y-R. You heard about this this morning as well. And that is a Docker remote plug-in for libnetwork that allows you to use the Docker network configuration and management API to produce, through Neutron, an actual OpenStack network. And you could actually have communication between containers and other OpenStack resources as a result of these things being on the same Neutron network together, which is extremely exciting. So that's something we've been working on from a requirements perspective with the Kuryr team. They've been working closely with the Magnum team, and we think this could be really exciting. It could be the next thing that we show at an upcoming summit in...
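From the user's side, a Neutron-backed Docker network through a remote libnetwork driver like Kuryr looks roughly like this. A sketch under assumptions: the driver name, subnet, and container image are illustrative, and the exact flags depend on the Kuryr release.

```shell
# Sketch: ask libnetwork to create a network via the Kuryr remote driver,
# which translates the call into Neutron network/subnet operations.
docker network create --driver kuryr --ipam-driver kuryr \
  --subnet 10.10.0.0/24 --gateway 10.10.0.1 \
  neutron-net

# A container attached to this network gets a Neutron port, so it shares
# an L2/L3 segment with VMs and other resources on the same Neutron network:
# docker run -it --net neutron-net busybox ip addr
```

The point being made in the conversation is that the operator keeps using the familiar Docker network API while Neutron does the actual plumbing underneath.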
Fantastic, I'd love to show it. ...in Austin, yeah. Fantastic. So what are you seeing as the uptake of Magnum? What types of people are taking up Magnum? If I'm looking at it, is it enterprises? Do you have a good grasp on who's using it? Well, I can tell by who shows up to the IRC meetings. Okay, fantastic. Well, that's a good acid test, isn't it? There's a pretty wide diversity there. We know, by the organizations that have committed code to Magnum, that there are about 25 or more different affiliations that have actually contributed code, and about 80 to 90 different engineers who have contributed from those affiliations. And if you look at who they are, right, they're retail organizations, they're end users, they're financial institutions, they're hosting providers, cloud providers. So there's a pretty wide range of both public and private use cases. And of course, Rackspace is in there as a public cloud user. So it's a pretty diverse group, and I haven't seen any strong tendency toward any one particular interest area. Okay. So what do you think the future is if you look a year down the road? It's kind of a tricky question, but application runtimes and kind of just VM-less compute, you know, for instance, Lambda. A year from now is not that long. No, I know. Something like an application runtime, so serverless, I would say, is pretty far down the road. Okay. Because that requires a developer mind shift. Yes. Right? I could provide a solution right away where you just buy cloud capacity by the function call. Yes. Right? I could do that, but it probably wouldn't get a lot of adoption until developers are used to running their stuff entirely in hosted environments. And the truth is, most software today does not run in hosted environments. It runs in co-location facilities or in private data centers. It doesn't run in the cloud yet. Yes.
So once we can get developers to the point where they're comfortable with just running things containerized, I think that's the first step there. And then the next step beyond that is what you're alluding to, which is: I don't even have containers. I just put my software in the cloud, it just runs, and I pay for exactly the number of function calls that I make. Yeah, exactly. Exactly. I completely agree. That's been our experience: it's a journey. So VMs and containers, and figuring out the use cases. But not only that, you have to refactor a lot of your tooling and understanding. So when you go VM-less, you have to retool everything to give the developers a level of transparency that they're comfortable with for running their apps in that environment. And it's different. It's different. Not only your tools. Yes. But what is your development environment? What's your development process? How do you debug things? Exactly. What's the operational impact when something's not working properly? How do you deal with it? These are all questions that we're just going to need to sort out. And we're not quite there. But I think there's a lot of stuff we're working on, both in the open source world and in the product world, so you'll start to have some options soon. We'll see. Yeah. Are you getting many people asking about different types of workloads? Because I see a lot of movement around integrating, let's say, big data workloads, like Hadoop workloads, inside the same scheduler as your container scheduler, for example in a Mesos solution. So are you seeing a lot of traction in that space? Because I think that's something that would be interesting to us: having large big-data infrastructure, and having a common scheduler that can utilize the same resources and pool them together. In all honesty, not yet. Okay. We had some interest at the last design summit. We have talked about it.
But we really don't know enough about the actual use cases yet. Use cases. And until we have some solid use cases, we're reluctant to get too energized about that. Yeah. Well, fantastic. This has been really fun, talking about containers so much. We're here at the OpenStack Summit in Tokyo. Thanks for tuning in. Great. Thanks, Adrian. It's been great chatting.