Hello, everyone. Sorry, one second. Hello, everyone. I'm Gal Sagie from the Huawei European Research Center. Presenting with me are Toni, who will introduce himself later, and Mohammad. We are going to present Project Kuryr.

Just a quick survey before we start: has anyone heard about Project Kuryr before? Okay, a relatively good number. I'm going to do a quick introduction for the new people here, and then Toni and Mohammad will introduce some of the exciting features we have for this release: Kubernetes integration, integration with Docker Swarm, and attaching to existing networks.

Before I delve into Kuryr, what led us to start this project? We noticed that there is a new kind of workload that users are deploying next to OpenStack. On one side we have OpenStack networking and VMs, and on the other side we see users starting to deploy container orchestration engines like Kubernetes, Mesos, and Docker Swarm. These engines have their own networking abstractions: libnetwork for Docker, CNI for Kubernetes. When you look at these networking abstractions, you see that they are still experimental and still evolving, and they are less mature than what we have in Neutron in terms of features. We know that it takes time to build a good networking abstraction that is generic and flexible enough. We also saw a lot of vendor-specific solutions targeting only these new environments, and again, they are less feature-rich than what we have in Neutron. It is also very hard to keep track of APIs and networking abstractions that are constantly changing.

Another problem that we see in this environment, and I'll touch on it in a few seconds, is what we call the double-overlay problem. A very common deployment, when you want to achieve tenant isolation for your containers, is to run them nested inside tenant VMs. As I will soon show, these environments tend to be very complex and carry a big penalty and overhead that are not needed.

With Project Kuryr we also want to enhance our container environments and container workloads with the policy-level constructs that exist in OpenStack today, like security, isolation, and advanced networking services. In general, what we saw is that it's pretty hard and complex to connect VMs, nested containers, and bare metal together on one networking infrastructure. Then we looked at these new abstractions, like libnetwork, compared them to Neutron and what we have right now in OpenStack, and they have similar concepts. Networking is evolving and things are changing, but at a high level, things are pretty similar to what we already have.

This slide shows the problem I talked about previously, the double-overlay problem. Usually, when we want tenant isolation, we deploy containers inside VMs; this is Project Magnum, for anyone who is familiar with it. In this environment we usually have one networking infrastructure connecting the VMs, usually a Neutron plug-in of some sort, and then a whole different networking solution inside the VM itself to connect the containers, like Flannel, for example, with Kubernetes. This deployment brings many complications. There is, of course, the performance and latency penalty of all these layers, but that can be avoided, as someone mentioned a few days ago, with Flannel host mode and so on.
This is not the main problem in my eyes. The main problem is that we now have two different solutions, two different networking infrastructures, just to connect two containers in our environment. And sure, ping works, right? But when you come to think about how we orchestrate this, how we deploy it, how we manage these environments, how we debug and monitor them, then anyone who is debugging a virtualized network today knows that it's hard enough with one solution, and now we have to correlate between two different solutions. Our take in Kuryr is that there are already Neutron plug-ins and Neutron solutions that address this problem, so in Kuryr we are trying to expose them to the user.

And this is essentially the one sentence that says it all about Kuryr. We saw all of these problems, and then we realized that we already have a networking abstraction that is relatively mature and relatively production-grade. There are CIs and gate tests for it, and there is a richness of solutions implementing it. So why not use this abstraction, Neutron, the OpenStack networking abstraction, for our container workloads just as well? Then we have one infrastructure that manages all of the networking in our environment.

Here is a quick overview of Kuryr. Kuryr is all about bridging and mapping between all of these new models in the containers world, whether individual container runtimes like Docker and rkt, or container orchestration engines like Docker Swarm, Mesos, and Kubernetes, and mapping them to the OpenStack networking abstraction, Neutron, and its advanced networking services. We are working on both sides: there are the container communities, but we are also working on the OpenStack side, on container-oriented projects like Magnum and Kolla, and of course pushing things into Neutron as well to make this connection better. By doing this, we are able to enhance our container environments with all the richness and flexibility that OpenStack provides.

Kuryr is fully open source. We have a weekly IRC meeting. We are a big-tent OpenStack project, so we have design summit sessions; you are all invited to join the session today, see our roadmap and our features, and talk with us about anything you want.

A few of the current features that are supported, which Toni and Mohammad will soon go over in more detail: integration with Docker networking and its pluggable IPAM, seamless integration with Docker Swarm, and the ability to attach container constructs like networks and subnets to Neutron resources that already exist. By making this connection to Neutron, we essentially provide all of Neutron's features to container workloads: security groups, port security, quality of service, quota management, all of the richness and flexibility of Neutron, including advanced networking services like LBaaS, Firewall-as-a-Service, and VPN-as-a-Service, for containers. And this is important because, as the two evolve together, Neutron and containers, we get all of this with zero effort: every new feature that is added to Neutron, tested, and deployed can now be used by our container workloads.

So this was a quick introduction to Kuryr and what we are trying to do. I'll hand over to Mohammad now to show you some of our existing features. Thank you, Gal.
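To make that last point concrete before moving on: because every container endpoint Kuryr creates is backed by an ordinary Neutron port, the usual Neutron workflows apply to it unchanged. Here is a rough sketch of locking a container port down with a security group; the group name and port are made up, and the flag spellings are from the Mitaka-era neutron CLI, so double-check them against your release.

```bash
# Create a security group that only allows inbound HTTP; names are illustrative.
neutron security-group-create web-sg
neutron security-group-rule-create --direction ingress --protocol tcp \
    --port-range-min 80 --port-range-max 80 web-sg

# Find the Neutron port Kuryr created for the container, then apply the group,
# exactly as you would for a VM port.
neutron port-list
neutron port-update <container-port-uuid> --security-group web-sg
```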
So let's have a closer look at Kuryr and what it is made of; these are the components of Kuryr. Like most, if not all, OpenStack projects, it has a configuration management module using oslo.config to configure various options. Kuryr uses two components of OpenStack right now, Neutron and Keystone: the connection to Neutron is through the Neutron client (hopefully the OpenStack client eventually), and authentication is done through Keystone. I will talk about the generic binding in a moment, but at the heart of Kuryr we have the modules you see in the boxes on the left side.

Currently, there are two container networking models. One is the Container Network Model, CNM, proposed and put forward by Docker, and we support that. The other is the Container Network Interface, CNI; that's an appc project, it's what Kubernetes uses, and it's something we are working to support and hopefully merge very soon in this cycle. Toni is going to talk about the integration with Kubernetes and CNI. I'm going to focus on what we have right now in the tree, which is essentially support for the Docker networking model.

A little over a year ago, Docker created libnetwork as the component that does the networking for its containers. As Gal mentioned, it turned out that its abstractions and API are very similar to what Neutron provides. At the last summit, and even in a talk earlier yesterday, we showed the mapping between the Container Network Model and Neutron. It is really straightforward: the Container Network Model has the notion of networks, endpoints instead of ports, IP address ranges (subnets, essentially), and things of that kind.

What we have done in Kuryr is provide a plug-in for Docker networking. One of the things added to Docker in various areas, including networking and storage, is a pluggable architecture, where you can plug in your own solution to do the networking or the storage for you. We take advantage of that plug-in architecture: we developed a network driver that satisfies all the requests coming from Docker. You ask Docker to create something for you, that request comes to Kuryr, and Kuryr satisfies it by utilizing, what else, Neutron. In addition to the network driver, an IP address management module was added to libnetwork in Docker, and Kuryr provides that support as well, again by utilizing the address management that Neutron provides. So everything is realized by using Neutron.

Let's get back to the Kuryr generic binding that I referred to a bit earlier. At the end of the day, when you want to connect your containers to a network, most of the time this is done by creating a pair of virtual interfaces. One end of this pair is in the container namespace, and the other one is connected to your hypervisor networking and plumbing, and there is a bit of work required to make the connection to your underlying network. If you are familiar with how things are done in OpenStack with Nova, that's the VIF plug and unplug operation that Nova does before Neutron takes over and processes the networking request. So we have a rather straightforward but powerful network binding that lets you do these plug and unplug operations for different technologies. Depending on what your Neutron deployment is using, you may need to use one of the binding methods that we have.
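To give a feel for what such a binding does, here is a rough sketch of the Open vSwitch case: create a veth pair, push one end into the container's network namespace, and plug the other end into the integration bridge tagged with the Neutron port ID so the OVS agent can wire it up. This is illustrative only; the interface names, namespace, UUID, and address are made up, and the real binding scripts ship with Kuryr.

```bash
# Illustrative sketch of an OVS-style binding; all values here are examples.
CONTAINER_NETNS=my-container-ns                           # container's netns
NEUTRON_PORT_ID=11111111-2222-3333-4444-555555555555      # port Kuryr created
CONTAINER_IP=10.10.0.5/24                                 # address Neutron assigned

# 1. Create a veth pair: one end for the container, one for the host.
ip link add c-veth0 type veth peer name h-veth0

# 2. Move the container-side end into the container's namespace and configure it.
ip link set c-veth0 netns "$CONTAINER_NETNS"
ip netns exec "$CONTAINER_NETNS" ip addr add "$CONTAINER_IP" dev c-veth0
ip netns exec "$CONTAINER_NETNS" ip link set c-veth0 up

# 3. Plug the host-side end into the OVS integration bridge, marked with the
#    Neutron port ID so the OVS agent recognizes the port and wires it up.
ovs-vsctl add-port br-int h-veth0 \
    -- set interface h-veth0 external-ids:iface-id="$NEUTRON_PORT_ID"
ip link set h-veth0 up
```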
Right now, we have support for the reference implementations in Neutron, that's OVS and Linux Bridge; we have support for the open-source OVN; and we have support for Dragonflow, MidoNet, and IOVisor. It is very straightforward: these are small scripts. If you have a use case for a different type of plug, we can easily work through it and add it, or at least help you add it to Kuryr. Both our Docker and Kubernetes solutions use the same binding methods.

So let's take a closer look at how we use this. At the end of the day, you want to take advantage of what Kuryr is providing, and all you need to do is use the native Docker API. Docker provides the networking commands that you can issue; all you need to do is specify the driver you want to use. As soon as you specify that the driver realizing this network request is Kuryr, everything is handled by Kuryr. In the same way, you specify Kuryr as the IPAM driver, and Kuryr will take care of managing addresses for you. As soon as you create a network with docker network create, which is the native Docker command, you have a network with a UUID that you can later use to start your containers on: when you say docker run and specify a particular network, that container is connected to that network. It's as simple as that.

To look behind the scenes: when you say docker network create, we end up creating a Neutron network for you. You are not aware of it and you don't need to know about it, but just to see how this is implemented, as you can see from the last example, there is a network created whose name is picked for you. Again, you don't need to worry about these details, but the network is created, a subnet is created in Neutron, the port is created, and so on. And we use the network tags that were added to Neutron in the later stages of the Mitaka cycle, where you can tag Neutron resources with extra information or metadata, to keep track of the association between a Docker network and a Neutron network. So looking at the Neutron network, you know which Docker network it corresponds to. (Maybe you can mention that you added backwards compatibility? It's coming up.)

Another feature we have is that you can use whatever already exists in Neutron. If you have an installation that is already using Neutron for connecting VMs together, you can use the same network for connecting your containers. That goes back to the theme that Neutron is becoming your networking for everything: containers, virtual machines, bare metal through Ironic; you have one seamless underlying networking infrastructure. So here we have an example where we create a Neutron network, and you can assume you had created it for other reasons. Then, when you want to create the Docker network, you just specify the name of the existing Neutron network by passing an option to the Docker command, or you can specify its UUID. As soon as you do that, the already-existing network will be used for the Docker network you are creating. Kuryr also adds a tag to the Neutron network resources to mark that Neutron network or subnet as something that already existed before Docker was even involved.
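As a concrete sketch of that flow, the two sides look roughly like the commands below. The network names are made up, and the neutron.net.name option spelling is the one Kuryr's libnetwork driver documented around this time, so treat it as an assumption and check it against your release.

```bash
# A Neutron network that already exists, say the one your VMs are on.
neutron net-create my-existing-net
neutron subnet-create my-existing-net 10.20.0.0/24 --name my-existing-subnet

# Create a Docker network on top of it by pointing Kuryr at the existing
# Neutron network by name (a UUID can be passed instead).
docker network create --driver kuryr --ipam-driver kuryr \
    --subnet 10.20.0.0/24 --gateway 10.20.0.1 \
    -o neutron.net.name=my-existing-net existing_net

# Containers attached to "existing_net" now get ports on my-existing-net.
docker run --net existing_net -itd alpine sh
```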
And that tag is used to make sure that when you remove your Docker network, the Neutron network doesn't get removed, because you are using it for other purposes. As I mentioned, we are using Neutron tags, which were added to Neutron in Mitaka. So if you are using Mitaka, this works just fine. If you are using Liberty, things will still work, but underneath, in the implementation, we won't be able to use tags, so we use the name of the Neutron network to store the Docker ID. That's a caveat to know: on Liberty or earlier, Kuryr will possibly change the name of the network you are using, so you have to rely on the UUID to be consistent across the board. And also, since we don't have a place to mark the network as a pre-existing resource, if there is nothing connected to the network and you delete the Docker network, the network in Neutron will be deleted. Considering those two caveats, this works with earlier releases as well.

So you have Docker on a single host and it works, and now you are providing multi-host networking to your system, so you need some way of orchestrating Docker across multiple hosts. Docker Swarm is a clustering method native to Docker that lets you take a bunch of nodes running Docker and distribute the work across them. Using Kuryr with Docker Swarm is practically seamless: there is nothing you need to do specifically to enable Kuryr. Because Docker Swarm, at the end of the day, passes a Docker command to a Docker host, things work without any special requirement for Kuryr. So you get the clustering from Docker Swarm practically for free. And with that, I'm going to pass the mic to Toni to talk about the integration with Kubernetes. Thank you.

Just to see how much detail I need to go into: how many of you have used, or have looked a bit into, Kubernetes? Wow, okay, that's more than I expected; I'm glad to see that. All right, so once we got Docker Swarm working, we looked at what other people would want, and as the audience in this room shows, a lot of people are setting their eyes on Kubernetes. What's important for us is that we do these integrations in a way that feels native in terms of usage, but that still respects the principles of OpenStack networking and of security in general. We don't want to add attack vectors with any integration, and as much as we can, we are going to avoid that. To be able to do that, and I will go into more detail about the structure in the next slide, we have a component that watches the Kubernetes API. It can run wherever, and it doesn't need to sit somewhere accessible by the tenant, because what it does is just translate the Kubernetes requests, the Kubernetes events, into Neutron resources, and that should be completely transparent to the user you are providing your Kubernetes deployment to. So, as Mohammad covered, there is CNI.
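Before going on with CNI, here is a very rough feel for what that watcher translation amounts to, written purely for illustration as CLI calls. The real watcher is a long-running Python service talking to the Kubernetes watch API and to Neutron directly, and the network name used below is made up.

```bash
# Purely illustrative: the kind of translation the watcher performs, expressed
# as CLI calls rather than the Python service it actually is.
kubectl get pods --all-namespaces --watch -o name | while read -r pod; do
    # For every pod event, make sure a Neutron port exists for it on the
    # network backing the pod's namespace ("k8s-default-net" is a made-up name).
    neutron port-create --name "${pod#pod/}" k8s-default-net
    # The resulting port ID and IP are then handed to the CNI driver on the
    # worker node, which binds the pod's interface to that port.
done
```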
There is more than one project thinking about integrating with CNI, and it is not clear yet whether our CNI driver for Kubernetes will be the same one we use for everything, or whether we will adapt it to each CNI usage. But the good thing about it is that, since it needs to run on each worker node, it shouldn't have access to Neutron. That's something we made sure of: it only needs as much access as nova-compute does to do the binding, and it uses the generic VIF binding that Mohammad presented.

Then there is a new part, which is policy, and it is still being defined by the networking special interest group in Kubernetes. We're following that; it will probably land in 1.3. What it's going to be is something a bit like security groups that lets you say: I want to enable this policy. It will change the default from allow everything, which is the current de facto situation, to deny everything for that service or that pod, and then you will be able to say: allow ingress connections from here, or from this other network. We are going to translate that into security groups, and they will apply to the ports we create for the Kubernetes workloads.

Since you're all familiar with it, I don't need to explain a lot about the Kubernetes part: there is the master node and there are the worker nodes, and I'm going to show that in the demo later. The only things we add are the CNI driver, which of course needs to be on each node, and the Neutron agents. The CNI driver is something you could deploy containerized together with the kubelet, so we could inherit from that and make a container that just has our binding code, the CNI driver, and the binding library that is common to all our work.

What you can also see here is that if you use different namespaces, they will get different networks. The allocation for these networks works like this: when Raven starts, it automatically sets up a network for the default namespace, and when you create namespaces, it goes to a configurable address pool in Neutron and takes predefined chunks of that subnet pool for the new namespaces you create. And as you can see here, as a difference from other implementations, like Flannel or others, you can have several different isolated networks running on the same worker node. So if you have a /16 as your default size, you're not going to get a /24 per machine; pods can be scheduled wherever the master decides. One could raise the point that it would be nice if the scheduler could get extra information about networking to make those kinds of decisions. We're not there yet, but it's something we will probably tackle once we have everything else figured out.

And again, this part doesn't change. The only thing, if you don't run OpenStack today, is that you have to have Neutron and Keystone; hopefully you can use Kolla for that, deploy them in containers, and have everything feel very native to the environment. So I talked a bit about how we map the pod: the pod, as I said, is mapped to a port, and then that gets a security group, right? Then, for the Kubernetes services that are backed by a few pods created by a replication controller, what we did is observe that this part here, the abstract model in Kubernetes, looks a lot like the pools in Neutron.
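Concretely, the translation for a service ends up being roughly the load-balancer calls below, shown here with the old LBaaS v1 CLI as an illustrative sketch. The names, subnet, and addresses are invented, and the watcher does this through the Neutron API rather than the CLI.

```bash
# Roughly what a Kubernetes service maps to in Neutron LBaaS (v1 CLI shown);
# all names, subnets, and addresses are examples.
SUBNET_ID=$(neutron subnet-show k8s-cluster-ip-subnet -f value -c id)

# A pool on the cluster-IP subnet, with the balancing policy of your choice.
neutron lb-pool-create --name frontend-svc --protocol TCP \
    --lb-method ROUND_ROBIN --subnet-id "$SUBNET_ID"

# The service's cluster IP becomes the VIP of that pool.
neutron lb-vip-create --name frontend-svc-vip --protocol TCP \
    --protocol-port 80 --subnet-id "$SUBNET_ID" \
    --address 10.254.0.10 frontend-svc

# Each backing pod, as the watcher detects it, becomes a pool member.
neutron lb-member-create --address 10.1.0.5 --protocol-port 8080 frontend-svc
```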
So what we did was just take the subnet that is defined in the Kubernetes deployment for cluster IPs and set that as the subnet you use for your pools. You get a VIP, and then for the backing: when a pod starts, it is detected by Raven, translated into a port creation, and added as a member of the pool. So you could potentially configure Raven to use whichever policy you think is adequate for your load-balancing needs; currently we use round robin as the default, but you can imagine using really whatever you want. And when a pod wants to talk to a service on another tier, it just uses the cluster IP, and that will just work. The only thing necessary is that the subnet for the pods, for each namespace, is connected by a router to the subnet used for the VIPs, and really it's as easy as it gets. I'm not going to show this in the demo, because it is just being baked into our prototype right now. (Toni, time check.) Okay, yeah, I tend to talk too much. All right.

So we're going to go into the demo very soon. This is the nested-container solution. As I said, you can have a VM started by Magnum, or you can start it yourself; the tenant doesn't need to own that. We can have the watcher there, and that machine can be the only one that can access the Neutron API and the Kubernetes API. These other machines don't need any kind of access to your control plane, so that's a nice feature.

About packaging: we are starting to have it so that each component we add to Kuryr, every new merge we do, generates a container image. Kolla integration for getting the other pieces, like Keystone and Neutron, is something we should work more on. If there is somebody from Ubuntu or Red Hat who would want to join us and work on the packaging, just let me know, because we have some prototype packages; we just need somebody to own them. And there are Heat templates for the Magnum integration; we're going to talk more about that later in the session we have with Fawad, so please join that session, where we'll go more into the nested case and what we use and what not.

Currently the focus, before the work sessions we have today, is the CNI and watcher part, because I have it as a prototype with my team at Midokura and we need to upstream it. You can see it, it's open and so on, but it's skipping the reviews a bit. Then nested containers; hopefully the Neutron parts will land very soon and we'll be able to deal with that. And then, depending on what we decide today and tomorrow in the work sessions: DNS integration, Mesos, and so on. It all depends on you joining us and providing a bit of help and direction on what you want us to deliver.

For storage, this is something that changed very recently: people want to do for storage the same thing we did for networking, and we're here to help. There are all these projects that can provide a lot of value to containers, and if you have questions about that, ask Gal in the Q&A, because I don't know anything about storage. So we really want you to join. These are some resources you can get; I'll go through them quickly before the demo because I'm running out of time. We're going to post the slides, and I'm going to tweet about that. There's IRC, where we're around the clock, so you will always find somebody. And here are some very interesting links.
You should really check those out, like how to get started with Swarm, or how to get up and running with OVN. And we're going to post one for the Kubernetes integration very soon.

So, live demo time. I don't know how many of you were in Tokyo, but sometimes the demo gods can play nasty tricks on you. For added value, I even changed laptops in the middle of the session, which is a bit dangerous, and I have to say that last night this was not working. So let's see, and let's not talk too much. Yeah, you always forget one thing to configure. All right, so let me see if I can move that over there. I'm not going to see anything, so let me mirror the displays. (Who is the scooter driver, Toni? That's my son. He looks exactly like me, same beard, same everything.)

So here I have the DevStack machine, and if Murphy is respecting me, yeah, there we are. Here we have the Kubernetes master. Can you guys see, or should I make the font bigger? Bigger, right? This is even better, right? In this demo setup we have a Kubernetes master, two Kubernetes slaves, and then a Swarm master and two Swarm slaves. What I want to show is that we're going to be able to ping from one to the other; we could also do that from VMs, so that should just work.

So let's run the create. We get the pods, and we see that they are being created. Okay, we have that running, that's good. Now in DevStack, what we can do is list the networks. You see here the raven default network, as I said. Oh wait, that's too small, right? Yeah. If you added more namespaces, and we had the code to translate that, because we still don't have it, you would see networks per namespace. And at the bottom you can see some kuryr networks; those are the ones you can create with Swarm.

So let's go to Swarm. Well, first let me bring this up here, because this is too big: 172.16. We have the ports ending in .34 and .35 for the two NGINX containers that we started. Now let's go into Swarm and create a network called texas on 10.35. This goes and creates it on one of the Docker engines, and it gets backed up in the cluster store. So when you do docker network ls, you see that this texas network is not namespaced by node; it's present for your whole Swarm deployment. Now we can just run a container on the texas network, Alpine, just a shell. It takes a while because my VMs are not that fast. All right, we see the IP that it got; it's one provided by our IPAM from 10.35.0.0/24.

And now it's time to ping between them. To be able to do that, we have to add a router between the two networks, obviously, because otherwise they would not be able to talk to each other. So: router list. We have the inter-router that I pre-created, because I always forget the commands. Let's list the networks; I think 10.35, so this is the subnet. Now we can just do the router addition. I always forget this: neutron help, grep router. I wish I had set up the auto-completion. Router interface add. And now what we want is this; the router can use the name, and the network. Sorry? Yep. Yeah, this part I remembered because I did it again this morning.

All right, so here we are in the Alpine machine running in Swarm. Let's see if the demo gods and muses are with me today. Which port did I say? What's in the port list? .35, for example. .35. Yay! So it worked. Cool.
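For reference, the demo boils down to roughly this sequence of commands. This is a cleaned-up sketch: the network and router names follow what was used on stage, but the subnets, addresses, and the router setup for the Kubernetes side are examples rather than the exact values shown.

```bash
# Swarm side: create a Kuryr-backed network and start a test container on it.
docker network create --driver kuryr --ipam-driver kuryr \
    --subnet 10.35.0.0/24 texas
docker run --net texas -it alpine sh      # gets an address from 10.35.0.0/24

# OpenStack side: attach the texas subnet to the pre-created router that
# already has an interface on the Kubernetes pods' network (172.16.x.x here).
neutron router-list
neutron subnet-list
neutron router-interface-add inter-router <texas-subnet-name-or-uuid>

# Back inside the Alpine container, ping one of the NGINX pods:
ping 172.16.0.35
```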
So if you have any questions, feel free to ask any of us, now or an hour later, whenever you want. Please go to the microphones and ask any questions you have. I guess there is just one microphone, on this side? No? Ah, yeah, that's true. All right.

What is the use case? You are running VMs on OpenStack and then you're running Docker on top of those VMs; was that the intent?

So that is one of the things you can do, and we're going to cover it at 11:50 with Fawad in another session. The idea is that if you want to provide multi-tenancy, Kubernetes doesn't have that right now. So Magnum, for example, or you manually, can create a Kubernetes deployment for your tenant, and you can have containers running there, and what Kuryr will allow you to do is...

So this whole theme about using Docker, containers, and OpenStack together is all about providing the multi-tenancy you would not be able to get without OpenStack?

That is one aspect, and it is very important for a lot of operators. The other aspect is that by doing that, you have all your workloads, VMs, bare metal with Ironic, and plain containers, on the same infrastructure. So if you already need OpenStack... you have everything under the same API.

Okay, so you just want a common API.

Yeah, you have a common API, you can use the same vendor for everything, you don't have any lock-in, and so on.

But if I have bare-metal Kubernetes running, maybe on CoreOS or something, does that integrate with any of this?

Sure, sure. This demo is exactly that: bare metal. I'm not using CoreOS because I still have to make a couple of containers for that, but that's what it's showing. You're going to have your CoreOS running containers, and those containers are going to get a Neutron port, and you're going to be able to...

But CoreOS is running on VMs in OpenStack?

No, not necessarily. You can use bare metal.

As long as you have your Neutron agent running there?

You've got to have the Neutron agent running on those, yeah, depending on the Neutron plug-in you're using. If it requires agents, you will have agents; if not, no.

And just one more question quickly: any plans to do... I guess it's only Neutron. If we are using Contrail or some other networking, would that not work?

Any vendor that integrates with Neutron will work as well. If OpenContrail has a plug-in for Neutron, and they do, right, then it will work seamlessly with Kuryr; they only need to integrate with it. In this cycle we've had Mohammad adding the Linux Bridge support, and we've had Fawad adding the PLUMgrid support. So we really hope that other vendors come; it's just about three lines you have to add to Kuryr to make your vendor supported in Kuryr and for all the containers. The extra work is just to bind the container to your network infrastructure. Essentially, any Neutron plug-in will work.

Okay, thank you. Any more questions? We have one last... Do you have a moment? Yeah, sure. I guess if they don't kick us out... Okay, I'll keep combining them until there is a line.

I just wanted to ask about the case where you were attaching to an existing Neutron network. Would that be... is that really for the Swarm case? So basically when you've got Docker instances on multiple VMs and you want them all to attach to the same Neutron network? Or can it also be the same Neutron network that you are using at the VM level?

Yeah. Yeah. And we demoed that at the last summit.
If you go back to the video, we have a VM and a container connected to the same network, talking to each other.

Okay. And another case: can you do that when the container is running on that VM?

Yeah, so you could have the container running on the VM talk to the VM. But that's when we add the nested support that we're going to talk about later in the other presentation. Just as a little preview: you're going to have a port for the container that will be a sub-port of the port that serves the VM, and they don't need to be on the same network; you can add a router between them, or they can be on the same network.

Okay, cool. And one other question, since there's no one else waiting: I'm wondering how this kind of approach is different from, say, nova-docker, where you add Docker as another alternative hypervisor within Nova, or something like that?

So you don't have to use nova-docker, in the sense that you don't have to use a VM; in terms of performance that has its pitfalls. If you are using nova-docker, you are kind of running containers as if they were VMs, while here you have native containers; you get them up and running in a second.

Well, it was more that nova-docker was just using the Nova API to provide the Docker lifecycle.

That is something I actually miss, because some people just want to use Nova as an API for that. But the difference is that what we're trying to give here is that people who just want to use the container-native APIs, like Docker, rkt, or whatever, will be able to get access to all the wealth of services that OpenStack gives you, whereas with nova-docker you had to use the Nova lifecycle.

So in a sense it achieves the same thing, in that it allows you to use all of the Neutron abstractions with the Docker ecosystem, but it's a question of who is ultimately driving the API and driving the lifecycle.

Yeah, exactly. It gives you more choice.

Thank you. You're welcome. So thank you all for joining.