Hello, OpenStackers. Thank you for coming. It's the last session of Wednesday and we have a full house, which is great. I'll try to make sure all of you find something useful in this session. My name is Fawad Khaliq, I'm an engineer at PLUMgrid, and today we're going to talk about Docker networking in OpenStack. By the way, this session is not going to be about what Docker or container technology is; we'll focus on its networking aspects, because at PLUMgrid a lot of people have come and asked us how to manage this explosion around Docker containers, and there is specifically a lot of confusion around the networking aspects.

With that, a brief introduction about myself. I've been a member of the OpenStack community for some time now. I'm a developer in the networking group, Neutron, and I'm also the maintainer of the PLUMgrid Neutron plugin, also known as networking-plumgrid, starting with Kilo. If any of you would like to get in touch with me, this is my contact information.

So we are all at the OpenStack Summit, we all have OpenStack, and we would all like Docker to be seen as part of OpenStack. I've built my agenda keeping that statement in mind. We'll talk about what Docker is and why it's being rapidly adopted. We'll talk about some of the use cases the industry is adopting around Docker containers. We'll talk about the state of Docker inside OpenStack: Heat, Magnum, Nova. We'll also talk about the state of Docker networking in general, outside OpenStack and inside OpenStack: what was there when Docker came out, what is there today, and what we can do inside Neutron in OpenStack. Then we'll go over how we can unify, or have a common, homogeneous networking for containers and VMs inside OpenStack, and we'll go over a demonstration of that part. In the end, we'll discuss some of the key takeaways.

So, Docker 101. Before we proceed, a quick show of hands: how many people are using Docker today? Wow, that's a lot of people, excellent. Just to give you some background here: containers, and specifically Linux containers, are not new; some companies have been using them for years. Now this open-source project called Docker has completely redefined the way we look at containers. Docker offers a common packaging format and a standard API to provision containers in a way that lets you run processes in isolation. As a result, it's no surprise it has seen huge adoption, just like OpenStack. If you look at these numbers, they're stunning: 100 million plus Docker Engine downloads, 45,000 plus applications on Docker Hub, hundreds of community members, excellent stats on GitHub. This shows how the industry has already embraced container technology, and specifically Docker.

So what's the reason behind it? They're lightweight: typical containers are very small, and they start extremely fast, in milliseconds, versus virtual machines, and that's why people are using them. You can spin up thousands of containers on a host, and they offer easier management of libraries and binaries. If we talk about the use cases of Docker containers, they're ubiquitous; you could swap them one-for-one with virtual machines. You had bare metal, you have virtual machines, and now you have containers. The use cases you see here did not show up out of the blue.
They've always been there. So think of containers the way you thought of VMs, and of bare metal at some point, from the use-case perspective. This also goes to the point that the issues we saw in networking for bare metal and networking for VMs are going to be the same for containers, and that's why we're here.

So let's talk about the love of Docker within OpenStack. Today Docker is part of OpenStack under some programs. I'll start with Nova, then Heat, the orchestration system, and then you have Magnum. By the way, I am by no means saying one is better than the others. Specifically for Magnum, there are a lot of design sessions going on; please go and contribute, there's a lot of good work happening there. In this session I'm not going to go into the details of Heat and Magnum; I'll focus on the Nova compute part, which is there today. In Nova compute we have a Docker driver which allows you to spin up Docker containers as Nova instances. Docker has some feature set and Nova has some APIs; the features Docker has are not completely exposed by the Nova APIs, and similarly, not all of the Nova APIs are supported by Docker, so you see an overlap between the two. The driver lives in StackForge these days, and there's a discussion going on to bring it back inside OpenStack; it may become part of Magnum, let's see what happens moving forward.

So this is where the fun part comes into the picture. We say Docker technology is awesome; we must also realize that networking for Docker should be awesome as well, right? When Docker came out, there were some networking options out there, and let's start with the most popular one. How many people here know what the docker0 bridge is? Nice. The docker0 bridge is a Linux bridge, which is like an L2 learning switch built into the kernel. Every container gets an IP address on it, and your containers can talk to each other, or your host can talk to a container, over this Linux bridge, with docker-proxy and some iptables magic to do the extra stuff like port publishing. You could also have container-to-container communication happen over Unix domain sockets, which is file-descriptor-based communication restricted to one host. Then you could also give full host network access to your container; that allows your host to talk to your container over loopback, and your container can listen on privileged ports on the host. What you're looking at here is the past. This is very primitive networking; if you took it to any network admin, they would not be comfortable with it.
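For reference, here is a minimal sketch of those early options on the Docker CLI; the image and container names are illustrative placeholders:

```bash
# Default docker0 bridge: the container gets an address on the bridge,
# and docker-proxy plus iptables handle any published ports.
docker run -d --name web -p 8080:80 nginx

# The bridge-assigned address, as recorded by the Docker daemon:
docker inspect --format '{{ .NetworkSettings.IPAddress }}' web

# Full host networking: the container shares the host's network stack
# and can bind ports, including privileged ones, directly on the host.
docker run -d --net=host nginx
```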
Things have changed and improved over the past few months, and what we have today is a new feature set, a new framework with new interfaces for Docker networking, and that is libnetwork. libnetwork provides a pluggable framework through which you can provision things, just the way we do with Neutron today. What you're looking at here is the container network model that libnetwork has introduced, and it has three main terminologies: a network sandbox, an endpoint, and the notion of a network. A network sandbox is like an isolated environment where the networking for a Docker container resides. An endpoint is like a network interface which allows you to communicate over a specific network. And a network is a group of endpoints, which allows all the endpoints inside that group to communicate with each other.

So libnetwork is a pluggable framework which allows you to have different networking implementations. It has a single standard API, agreed upon by all the developers, users, and stakeholders of Docker, and it also has a notion of drivers and extensions which give you the power to plug in one networking implementation versus another. In my opinion, things are definitely going in the right direction for Docker networking.

Let's talk a little bit about Neutron as well, and the way Neutron does things today. Neutron has an API and a plugin framework, and you can have a plugin in there; the call goes to the plugin, the plugin calls the backend, and you're able to provision on-demand networks from the Neutron APIs.

I'm pretty sure I've done a great job of confusing you with the new terminology coming from libnetwork. This is an OpenStack crowd, so I assume all of us understand what Neutron's networking terminology means, and I'll try to map what libnetwork is trying to do onto what Neutron does today. You have this notion of a network, which is an isolated entity, a group of endpoints; in Neutron today that maps to a network as well. From the current definition it can be a simple network, a shared network, or an external network, it doesn't matter. Then an endpoint maps to a port in Neutron. A port in Neutron is something you get when, say, you provision a VM: you request Neutron to give you a port, which has an IP address and a MAC address, and that's how a VM is able to communicate over the network. For the network sandbox there is no direct comparison today, and we'll see how the networking model evolves moving forward. The point I'm trying to make here is that networking has to be, and must be, unified for containers and VMs.
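To make that endpoint-to-port mapping concrete, here is roughly what requesting an "endpoint" from Neutron looks like on the Kilo-era CLI; the network name demo-net is an assumption, taken to already exist:

```bash
# Ask Neutron for a port on an existing network; the port comes back
# with a MAC address and a fixed IP, which is essentially what
# libnetwork would call an endpoint on that network.
neutron port-create demo-net

# The returned fields include mac_address and fixed_ips, which a VM
# (or a container) then uses to talk on that network.
```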
Let's see how that goes moving forward. And why does it have to be unified? Somebody came to me a few months ago and asked: can you do networking in OpenStack for VMs and containers at the same time? There are all sorts of use cases, so I picked one that is very simple, which uses Nova and the existing Neutron, all the components we have today that work. You have the Nova API, and the Nova API can talk to multiple compute nodes of different types: one with a compute type of Docker, the other with a compute type of libvirt. So one compute node is capable of provisioning containers, the other is capable of provisioning VMs, and then you have this common networking layer, which is Neutron, to provision networks. What you have here is that you provision networks from Neutron and you're able to connect containers to containers, containers to VMs, VMs to containers, and VMs to VMs, and vice versa, all the permutations and combinations you would want.

The use case I picked here: let's say I have some web app or application servers which I want to run in containers, because I want them to be elastic, scaling up and down, and then I have my database server, for which I don't trust the security model of containers today, and I want it protected inside a VM. That's one use case, and there are plenty of others we could talk about.

So how exactly does it work when, as I said, you have the Docker engine running as a compute node, you spawn containers, and the containers have to be connected to a network provisioned from Neutron? In this model, they are no different from the way we treat VMs right now: they have to have a tap interface which is plugged into a networking implementation. In this case, a call goes to the Nova API saying: provision a Nova instance of type Docker container. The Nova API calls the compute node, and the compute node requests the Neutron API: this is the network I've been given, can you provision a port for me? That port is provisioned, the call goes to the backend, in this case the PLUMgrid plugin; the plugin calls the backend, and once that is done, it goes to the Nova Docker driver, which has a VIF driver, which calls the backend again to plug that virtual interface. Then the container is launched, and at that point the container has connectivity on the network provisioned from Neutron. In this way, containers are treated no differently than VMs when we talk about networking, and this will be true for different types of containers, such as application containers. To the user there should be no difference; you should have a single common networking layer to achieve all of this.

So the question is: is this real? Does it work? It looks good as a block diagram, but... yes, I think yes. We'll go over a demonstration of exactly this use case, where we have two compute nodes. What I have here are two physical servers, running Kilo DevStack, by the way. Congratulations to all of you: Kilo is out, stable is out, and thanks to all the developers, contributors, and the people who were at the Paris Summit design sessions. What we're running here is a Kilo DevStack multi-node deployment with Neutron and the PLUMgrid Neutron plugin. One node is running a controller and a compute, the other node is running a compute; one node is running the Nova Docker driver and the other is running the Nova libvirt driver, and there are some PLUMgrid components running here. This stack is capable of giving you multi-hypervisor container and VM deployment with single networking across nodes, with all the network use cases and capabilities that Neutron offers. What I'll do in the demo is: I'll provision a network from Neutron, I'll launch two Nova instances, one is going to be a Docker container, the other is going to be a VM, and I'll connect both of them to the same Neutron network, on two different physical hosts, and then we'll check connectivity between them. There are also some advanced use cases where you provision a router, connect it to the network, check external connectivity, and then you have floating IPs and security policies, to see the behavior is there.
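For anyone wanting to reproduce a compute node like this, the stackforge/nova-docker project documents the setup. A minimal sketch, assuming a crudini-style edit of nova.conf; the restart mechanism depends on your deployment:

```bash
# Point the Docker compute node's Nova at the nova-docker driver;
# this class path is the one the nova-docker project documents.
sudo crudini --set /etc/nova/nova.conf DEFAULT compute_driver \
    novadocker.virt.docker.DockerDriver
# Then restart nova-compute (the service manager varies by deployment).
```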
So, if we look at how this maps from a physical to a virtual network point of view: we have two servers; server one is running the Docker compute and server two is running the libvirt compute, and they're connected to each other over a physical network. One is running a Docker container and the other is running a virtual machine; they're connected to the same virtual network and are able to communicate with each other seamlessly. They're also capable of communicating with the external world through a router and a NAT; that's the cloud out there.

With that, I'll jump over to the demonstration. What I have here is a Horizon OpenStack dashboard, and I have the PLUMgrid console. We'll be using these two dashboards over the course of the demo: Horizon will be used for provisioning, and the PLUMgrid console will only be used for looking at what the topology looks like. So, log in as admin.

Let's look at some of the system resources in this setup first. In the list of hypervisors we have two: a Docker type and QEMU in this case. I'm using host aggregates in this setup to make sure I can schedule my instances in the correct availability zone; I have a Docker zone and a libvirt zone, which use the relevant hosts or compute nodes. Then in the image list I have two images: one is a simple CirrOS in Docker form, and the other is a libvirt CirrOS, so each can be provisioned in a way that's compatible with its backend type. And in the PLUMgrid UI I have two compute nodes; in this deployment, one node is over here and the other one is there.

With that, let's go over to a project I've created, called the Docker networking demo project. In this project we'll actually provision the networks, containers, and VMs, and check connectivity. As a starter, let's look at the network topology we have; it's empty right now. I've just provisioned an external network, which is part of the deployment, and if you look at its visualization in PLUMgrid, we have a tenant, docker-networking-demo, which is empty; of course it will have the default security groups. Now let's go and provision a network and a subnet. Let's call this network Docker demo network, and with that we'll also create a subnet on top of it; let's call it Docker demo subnet, give it the CIDR 192.168.0.0/24, and make sure DHCP is enabled. At this point you have a network, a subnet, and DHCP, and this is what it looks like in the network topology. If you go from the OpenStack dashboard to the PLUMgrid dashboard, we see its corresponding part, a network and a subnet with DHCP there, just as provisioned in Neutron.
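The same setup on the command line would look roughly like this; the aggregate and host names are illustrative, the network names are the demo's, and the flags are Kilo-era client syntax, so treat this as a sketch:

```bash
# Host aggregates exposing one availability zone per hypervisor type:
nova aggregate-create docker-agg docker-zone
nova aggregate-add-host docker-agg docker-compute-host
nova aggregate-create libvirt-agg libvirt-zone
nova aggregate-add-host libvirt-agg libvirt-compute-host

# The tenant network and subnet from the demo; DHCP is on by default:
neutron net-create docker-demo-network
neutron subnet-create docker-demo-network 192.168.0.0/24 \
    --name docker-demo-subnet
```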
Now let's go over provisioning the Nova side of things. There are no instances running right now. I have two compute nodes; I'm calling one docker-compute and the other libvirt-compute. We should check whether I have any Docker containers running at this point: I don't, there are none in the list. We should also check the virsh list over there, to see if we have any VMs: none.

Now we provision a Docker container and attach it to the network. Over here I'm going to pick the availability zone, which is going to be the Docker zone, so that it gets provisioned on the correct compute host. For the image I'm going to pick the CirrOS image which is Docker-compatible, and then I have the default security group, and it's connected to the Docker demo network. The container should be up pretty quickly; it's active now with IP 192.168.0.2. If we look at the Docker compute node, in the list of containers we have a container which got created 14 seconds ago. Let's compare its UUID with the one in Nova: it should start with 3b2ee, and as you can see, we're looking at the correct container. Now let's check whether this container has the IP address that's supposed to be assigned from Neutron. Using that container ID, let's do an ifconfig: it has the IP address 192.168.0.2, the same one we see in the OpenStack dashboard.

Now let's go and provision a virtual machine using the other availability zone, which will use the libvirt hypervisor. Let's call it VM one, and in the images let's pick the libvirt CirrOS image. Again, from the networking standpoint, it's going to be the same network and default security group, the Docker demo network. As soon as it comes up, let's check whether our compute node running libvirt has this virtual machine: virsh list. We have a VM that came up; let's compare its UUID with the one in the OpenStack dashboard, which should start with 303a5, and that's correct, we're looking at the same VM. These are two different compute nodes.

Now let's see how that looks from the network standpoint. In the network topology diagram we have the network and two Nova instances. By the way, Nova does not distinguish between containers and VMs; however, we do see some distinction over here: we have a Docker container and a VM connected to the same network, the Docker demo network, and we'll see how the communication happens over this path. Now let's log into the virtual machine and look at how we can communicate from VM to Docker and Docker to VM, and also check whether our VM got an IP address. Let's go to instances and open the console for this guy. It has the IP address 192.168.0.3, the same one we see in Nova. Now let's ping from this VM on one compute node to the container on the other compute node: I see the ping is going through. And let's ping from the container to this VM, which is on the other compute node, on the same network: using the state-of-the-art ping utility, we see the packets are going through.

Okay, I'm going to stop here. I have some other things going on in the demo as well, but I'm going to skip to the end part. What's going to happen in the end is that we'll have a full topology with a router and NAT and floating IPs and security policies; a full networking diagram will be created. If any of you would like to see the full demonstration of this, please come to our booth and we'll be happy to show you.
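A rough CLI equivalent of those Horizon steps; the instance, image, and flavor names here are assumptions:

```bash
# Boot the container and the VM into their respective zones, both on
# the same Neutron network:
nova boot docker-instance-1 --flavor m1.tiny --image cirros-docker \
    --availability-zone docker-zone --nic net-id=$DEMO_NET_ID
nova boot vm-1 --flavor m1.tiny --image cirros-libvirt \
    --availability-zone libvirt-zone --nic net-id=$DEMO_NET_ID

# On the Docker compute node, verify the container and its
# Neutron-assigned address:
docker ps
docker exec <container-id> ifconfig eth0    # expect 192.168.0.2

# On the libvirt compute node, verify the VM:
virsh list

# From either guest, the cross-host ping:
ping 192.168.0.3
```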
Right now, I know everybody knows Neutron networking, and this is standard stuff, so not much extra detail there. Okay, so what you're looking at here is the network topology where you have the same network you created from Neutron. It has a Docker container attached to it and a VM attached to it, and they're able to communicate with each other over the same L2 broadcast domain. Then you have DHCP there, handing out the IP addresses of these guys via static leases. This network is connected to a router, whose output is connected to a NAT, and the NAT is there for floating IPs to communicate with the external world. This maps directly to the topology you see in Neutron, which should look something like this: your network connected to the router, and the router connected to your external network.
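In Kilo-era Neutron CLI terms, that remaining part of the topology amounts to roughly the following; the external network name is an assumption about the deployment:

```bash
# Router between the tenant network and the external world:
neutron router-create docker-demo-router
neutron router-gateway-set docker-demo-router external-net
neutron router-interface-add docker-demo-router docker-demo-subnet

# A floating IP, associated with one of the instance ports:
neutron floatingip-create external-net
neutron floatingip-associate <floatingip-id> <port-id>
```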
With that, I'd like to go back to my presentation, just to give a recap of what I did in the demo. I provisioned a network. I provisioned two Nova instances, a Docker container and a VM. I connected them both to the same Neutron network, and I was able to communicate between the two across two compute nodes. Then there was some additional functionality, where you saw the end topology with a router connected to an external network, the router connected to the private network, and communication happening in a way that gives you floating IPs, security policies, and so on.

So what's coming next? This is the important part. I'm pretty sure a lot of us here are wondering: there are these use cases, and there's Magnum. I just came out of a Magnum session an hour and a half ago, and we were discussing exactly how we can do networking for that. There are interesting use cases where you want to run a container inside a VM, which is a Nova VM, and you want to do networking for that as well. So what's coming next is that we'll be contributing to the Magnum networking design. Currently it's in the design phase, under discussion; we'll see how that goes, Adrian is doing a great job, and we'll be collaborating. In the end, we'll provide a common Neutron API which allows you to connect your containers and your virtual machines in a seamless way, so that from an operator or user point of view there is absolutely no difference.

So: we know Docker is awesome, and finally it has arrived inside OpenStack, and we know networking has to work for both. I just showed you a demo where networking for VMs and Docker containers works. Please come and visit PLUMgrid at booth S14 if you have any other questions. And I have the most important message of the session after that: networking is the most awesome thing in OpenStack, so please come and contribute to the great cause and be part of the networking umbrella. Thank you. Questions?

Question: So we saw libnetwork, we saw one network there. We know libvirt implies a hard cap on the number of VIFs that can be plugged from a VM into the switching on the compute host. What's the limit on the number of VIFs, if you know of one, for a Docker container?

Answer: So, I was not using libnetwork here; I was using Nova and Neutron, and the containers provisioned from Nova are provisioned in exactly the same way as we provision VMs today. So the number of tap devices you can create for containers is exactly the same as for VMs, because in the example I gave here they are handled the same way.

Question: Well, it's limited by quota, is that what you're saying? Quotas, or system limitations, or whatever the case; say libvirt caps you at ten, but if the quota is fifty, can I create fifty tap devices? [Yes.] Thank you.

Question: I have a question: how do you customize the Docker image? Do we have any...?

Answer: Yes. This setup was brought up using DevStack, and if you go to github.com/stackforge/nova-docker, they have instructions on how to modify the local.conf and the configuration files to bring up this stack. When you bring up the stack, you specify what kind of image you want to pull; it pulls it from the Docker registry and you're able to use it.

Question: Yeah, and for the Docker image, do you need to provide additional Glance properties?

Answer: From the image point of view it works exactly the same way as other images are handled, but under the hood it needs to be compatible with the kind of images Docker supports. You need an image which is actually a container image; if it's something else, it will error out. There are good error checks in the nova-docker driver: if you try to spin up a container from an incompatible image, it will give you an error saying it's not compatible.

Question: All right, thank you. For your demo you used the PLUMgrid Neutron driver. Does this work with other drivers?

Answer: Yes, that's correct. I'm aware of some of the implementations that are part of nova-docker today, and as long as they have a VIF driver to plug your VIF into whatever networking implementation is there, it should work. I have not tried it out, but theoretically, yes.

Question: [from the floor, about how the tap devices get created and bound]

Answer: Since you didn't use the mic, I'll repeat the question. The question is: you have the nova-docker driver here making a bind-VIF call to some backend, so how do we actually create tap devices in Nova today, in the libvirt Nova driver, and bind them to whatever backend is there? Let's take the example of OVS. With OVS, if you go to libvirt, you have a VIF driver; inside the VIF driver there's a call to create or bind the OVS VIF port, and there's an implementation in there which makes some system calls to actually bind it. So it works exactly the same way as for other implementations, and exactly the same way as it works for VMs in the libvirt driver; similarly in the nova-docker driver. And I'm no expert on LXD, so I can't comment on that; I have some other folks you can talk to.
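To make that binding answer concrete: a rough sketch of what an OVS VIF plug boils down to under the hood; the device name and environment variables here are illustrative, not the driver's actual code:

```bash
# Create the tap device, bring it up, and attach it to the OVS
# integration bridge, recording the Neutron port ID in external-ids
# so the L2 agent can wire up the port:
ip tuntap add tap3b2ee001 mode tap
ip link set tap3b2ee001 up
ovs-vsctl -- --may-exist add-port br-int tap3b2ee001 \
    -- set Interface tap3b2ee001 \
       external-ids:iface-id=$NEUTRON_PORT_ID \
       external-ids:iface-status=active \
       external-ids:attached-mac=$PORT_MAC
```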
Any other questions? Yes.

Question: [inaudible, about which Docker functionality is available through Nova]

Answer: Yes, that's correct, absolutely. Yes, why not? So, just as I mentioned: Nova has some APIs, and Docker has some APIs and some functionality, so there's an overlap; in some cases the functionality is there, in some cases it's not. Create, delete, add an image, all those things are there, but there is some functionality which is not exposed through the Nova APIs, or which Nova exposes and Docker does not. There's a list of these things, I think on the nova-docker or the Nova wiki page; you can go through it and see what maps one-to-one and what is not supported, where the checkmarks and crosses are. I think that will help.

Question: [about Magnum networking]

Answer: Magnum networking today is completely in the discussion phase; I just came out of the session, and that's something we're trying to solve. We look forward to having more feedback from other folks as well, and let's see where that goes. But architecturally it makes sense to have one common networking layer which does networking for everything, and that will be Neutron.

Question: Yeah, actually you might have just answered it, but you said you're not using libnetwork in this example. How will things change when that becomes released for Docker?

Answer: libnetwork is something the Docker team has introduced, and it's an improvement on whatever was there before; the existing implementation was very primitive, and they realized that and came up with this framework, which to me looks very much like Neutron, except outside OpenStack. One way it could go is that it becomes part of OpenStack, has a plugin, and somehow it works, or it gets introduced in some other shape and form. But it's completely outside OpenStack today, and there are a lot of other technologies outside OpenStack which are using containers today, so it might make sense over there. It's very early stages: it has one experimental project, called docker network, which allows you to create network topologies, and currently what you have there is a very basic L2 switch that you can create, that's all. Let's see how that goes.

Question: [about security concerns with Docker containers]

Answer: So the question was whether there is any security concern about Docker containers, or containers in particular. I can comment on the networking aspects of it, but for containers, everybody has a different opinion on how they want to use them; there are different ways people have worked around those concerns, and it really depends on the use case.

Question: [about specific container networking implementations outside OpenStack]

Answer: I didn't want to cover any specific implementation that is outside OpenStack; I really wanted to talk about what Neutron is today and what we can do today. I'm pretty sure this will be a completely different talk at the next summit, and we expect more developments in the Magnum community on how to handle this, because I see Magnum becoming the main thing for managing containers.

Good, I think. Yes. We're good. Thank you.