So, as we get towards the end of the day, we now have a panel on OpenStack and containers. As you travel around the summit this week, you'll see that there are a lot of sessions about different container technologies, because it's something people think about a lot. Over the last couple of summits we've featured container projects in OpenStack like Magnum, we've had Google come and demonstrate an application spread across Google Cloud and an OpenStack private cloud, we've talked about Kubernetes, we've talked about people running Mesos; we've covered this a lot. But one of the questions I still get all the time is: what are these projects inside of OpenStack, like Magnum, Kolla, and Kuryr, and how do they fit into the container landscape? So what I wanted to do today was focus in on that particular issue. If you simplify it down to its core, you can think of them as functioning at different layers. Kolla is really at the infrastructure layer, step one of Mark's deployment process yesterday: building a cloud. Magnum runs on top of that cloud to deliver container services, and Kuryr is the glue in the middle that ties them together. But I want to dive deeper on that concept now and bring out the PTLs of these projects, for Kolla, Magnum, and Kuryr. Come on out, guys.

So we have Mikhail, Adrian, and Antony, the PTLs for Kolla, Magnum, and Kuryr. To start out, Mikhail, maybe you can give us just a quick summary: what is the purpose of the Kolla project?

Okay. So Docker is a great way to deploy applications and microservices. And OpenStack services are in fact microservices, so it's only natural to deploy them with Docker, and that's Kolla.

Okay. So Kolla is focused on, you mentioned Docker, which is a container packaging format. Is that correct?

Yes.
And so the output of Kolla, what is the deliverable from the Kolla project?

So Kolla itself is about containers. We create reusable containers, but containers are pretty much useless unless you deploy them. So in the Liberty cycle we already included Ansible playbooks to deploy the containers, and now in Newton we also started the Kolla-Kubernetes project, to deploy OpenStack on Kubernetes with Kolla containers.

Right. And as you describe that, I think it's important to mention that in the container world, you have the container that packages up the functionality of the service, but you also need a model for operating it. So Kolla builds those packages, and then Kolla-Ansible, Kolla-Kubernetes, and other projects deliver them in that operational mode. Is that right?

Exactly.

Okay, cool. So Adrian, tell us about Magnum. We've shown off Magnum a couple of times, and I think that since you first demonstrated it a year and a half ago, it's changed a little bit in terms of scope and focus.

Yeah. So Magnum is about making a place for you to run your container workloads on top of OpenStack. Where Kolla is down at the control plane level, we're up at the data plane level. Just like Trove gives you a place to run your favorite database workload, Magnum allows you to run your container workload. And it supplies a set of drivers that let you choose what kind of container environment you want those to run in: we offer Kubernetes, we offer Docker Swarm, and we also have Apache Mesos. There's also a new driver in development for DC/OS as well.

Okay. And it's interesting that you talk about that shift in focus. I think that as you look at the broader landscape, that's something we've been talking about a lot this morning.
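The Kolla workflow described above, building reusable service images and then deploying them with the Ansible playbooks, can be sketched with the project's own CLI tools. This is a minimal illustration, not a command sequence from the panel; the base distro, service names, and inventory file are assumptions (the "multinode" inventory is a sample shipped with Kolla-Ansible):

```shell
# Build reusable OpenStack service images (e.g. nova, neutron).
# --base picks the base distro; --type picks source vs. binary packaging.
kolla-build --base centos --type source nova neutron

# Deploy the built containers with the Kolla-Ansible playbooks,
# pointing at an inventory that describes the target hosts.
kolla-ansible -i multinode bootstrap-servers
kolla-ansible -i multinode prechecks
kolla-ansible -i multinode deploy
```

The split between the two commands mirrors the split the panel describes: `kolla-build` produces the packages, and Kolla-Ansible (or Kolla-Kubernetes) handles the operational side.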
It seems like if you look at Google's container service, or Amazon's EC2 Container Service, or some of the other public cloud services like that, they've started to land on this model where you create a cluster on top of infrastructure services, one that's controlled and configurable, and then you deploy a container orchestration tool into it. And I noticed in the demo that Mark did yesterday, he did a deployment of Kubernetes, and that was using Magnum, right?

It was. So with Magnum, you get a choice of what kind of container environment you're going to run. If you go to a public cloud, you're going to get that cloud's flavor of container cluster. When you choose Magnum, you can have a variety, even in the same account: the same user can have two different kinds of clusters side by side, because there's a multi-tenancy implementation in Magnum that makes that safe.

Okay. So one of the questions that someone sent in was how you all work together. And I think this is a great place to bring you in, Antony, because that's the purpose of Kuryr, right, tying it all together?

Right. So we believe that a lot of the value in OpenStack lives in the production-ready services it has, like Neutron, Glance, Cinder, and so on. And our goal is to bring all those non-VM virtualization services to containers, so that people can use them with their container runtimes like Docker. That's where we started, bringing Neutron to Docker, and then also to the orchestration engines. And this is what ties us together: Kolla can deploy Magnum and Kuryr, and in the future, Magnum will be able to use Kuryr to bring Neutron networking to the containers running in the clusters it deploys.

Okay.
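The side-by-side clusters Adrian describes can be sketched with the OpenStack client's Magnum plugin. This is an illustrative sketch only; the template names, image, keypair, network, and flavor values are assumptions and would need to match what exists in a given cloud:

```shell
# Define one cluster template per container orchestration engine (COE).
openstack coe cluster template create k8s-template \
    --coe kubernetes --image fedora-atomic-latest \
    --keypair mykey --external-network public --flavor m1.small

openstack coe cluster template create swarm-template \
    --coe swarm --image fedora-atomic-latest \
    --keypair mykey --external-network public --flavor m1.small

# The same tenant can then run both kinds of clusters side by side.
openstack coe cluster create my-k8s \
    --cluster-template k8s-template --node-count 3
openstack coe cluster create my-swarm \
    --cluster-template swarm-template --node-count 3
```

The template/cluster split is the point: the operator curates templates per COE, and users instantiate whichever kind of cluster they need within their own tenancy.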
So one of the things that a lot of people wonder about with containers is how networking and storage work in that world. And what you're saying is that Kuryr takes those container network libraries and ties them into a Neutron network. So what do you get when you're able to attach a container onto a Neutron network?

So the good thing is, when you have an environment with existing applications, they're usually running on VMs. And when you have a new development team, they may want to use Kubernetes or Mesos or whatnot. In that case, if they want to access services provided by the applications you already have in your data center, served by VMs, the best way is for them to live on the same overlay, so that you don't need to worry about anything else. You have the same support, you already have the knowledge of how the network works, and the storage that Magnum provides access to, and network administration gets simplified. You can also leverage things like Firewall-as-a-Service or security groups, because you get them down to the container level; it's no longer just another overlay running on top.

Right. And when we talk about OpenStack being one platform for bare metal, virtual machines, and containers, it sounds like Kuryr is really a key part of helping all of that live together.

Yeah, right. It's a way to have workloads running on VMs, workloads running on containers in VMs, and workloads running in containers on bare metal, and to have all of that treated equally, under a single backplane, let's say.

That's an awesome concept. So what does the future of Kolla look like? Where do you want to see it going?

So we just introduced a new way of deploying Kolla, which is Kolla-Kubernetes. This is still in development; we aim to deliver version 1.0 in the Ocata cycle.
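What attaching containers to a Neutron overlay looks like in practice can be sketched with the Kuryr libnetwork driver, which registers with Docker as a network and IPAM driver. The subnet, gateway, and names below are illustrative assumptions, and the commands presume a host where the Kuryr driver is installed and configured against a Neutron endpoint:

```shell
# Create a Docker network backed by Neutron via the Kuryr driver;
# Kuryr creates a matching Neutron network and subnet behind the scenes.
docker network create --driver kuryr --ipam-driver kuryr \
    --subnet 10.10.0.0/24 --gateway 10.10.0.1 kuryr-net

# Containers started on this network get Neutron ports on the same
# overlay as the VMs, so security groups apply at the container level.
docker run --net kuryr-net -itd alpine sh
```

This is the "same overlay" point from the panel: the container's port is an ordinary Neutron port, reachable from VM workloads without an extra layer of translation.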
And after that, well, I would hope that other orchestration engines, Puppet, Chef, Mesos, whoever wants to deploy Kolla, would just consume our images and deploy the containers in whatever way they want.

Yeah, I mean, this is a really active area where we've definitely seen a lot of different people working. And so I think it's good to see the approach that Kolla enables, because some of this needs to start to rationalize, and as I mentioned earlier, we need to show people the best ways to do it so they can replicate the success. So the demo yesterday, that deployment, was that using Kolla?

Yes, that was Kolla-Ansible, which is already production ready and ready to use. So yeah, that was Kolla. We deployed it on Sunday in about an hour.

Cool, that's awesome. So Adrian, one of the other questions I had, and this maybe gets to your future plans for Magnum: what is the lifecycle strategy for upgrades and things like that for those orchestration engines once they've been deployed into a cloud environment?

So this is one of the things we're going to cover in a session later today and again tomorrow; we'll talk about it in a minute. But once you deploy your container workloads, you may want to upgrade the COE, and how to actually do that without being disruptive is something we're currently working on. Today, it's so easy to create a Magnum bay, or, we don't call them bays anymore, that's one of the changes in Mitaka, we call them clusters now. When you create a cluster, it just takes a few minutes, and you can create another one. It's easy to produce these things really quickly, and maybe you have a way to move your application over. But if you don't, and you want it upgraded in place, that's something we're really focusing on during this cycle.

Okay, so you mentioned that there's a design summit session.
Where can people find you this week? Where are you going to be talking, or where are these sessions if people want to dive deep? Because we can only scratch the surface here.

Yeah, so there are two fishbowl sessions today, one at 3:05 and one at 3:55. One is going to cover this upgrade issue, and the other is about the next generation of Magnum: how do we do multi-location, how do we do multi-region, how do we have different pools of different types of hardware that scale independently in those more advanced clusters? There's a specification for that already under review, and we want to get community input on it.

And Kuryr, what's going on with Kuryr this week?

So this week we will have some work sessions about developing support for Kubernetes, which is on the way. We demoed it in Austin, and it's now being redone in a rearchitected way because we split the repos. And tomorrow we're going to show how we bring Neutron networking to the containers that run in VMs; we'll have a demo about that in the session at 2:40.

Okay, so 2:40 tomorrow. And Kolla?

So today we're going to have a session about Kolla-Kubernetes, because OpenStack on Kubernetes seems to be a very interesting topic lately. Today at 2:15 we invite everyone to come look at how Kolla-Kubernetes works today and see a short demo. And of course all of the Kolla design sessions will start later today, after the presentations.

Excellent. All right, well, thank you guys for joining me and for helping us wrap our heads around the different approaches and integration points you're working on for containers in OpenStack.

Thank you for having us.