All right. Welcome. We have a few folks still coming in, but it's time to get started. Thank you. I've got some fans up here in the front, so come join them if you'd like. Today we hope to connect a few dots for you, things that you've already heard about this morning in the main keynote session, and hopefully give you some new information on connecting containers and VMs. Some of this was mentioned even in the last session in here, so maybe you're already aware of some of that work. So we're going to discuss that.

But first, let me give a quick introduction of myself and Mohamed. We both work for IBM, in different divisions. I'm in the IBM open cloud technologies area, and in my current role I'm a maintainer in the upstream Docker community; we have a handful of people at IBM involved in Docker. Prior to that I had ten-plus years involved in Linux and open source software, so open source is not new to me. I've been in that world, and enjoyed it, for a good long time, and I'm definitely enjoying my time working in the Docker community. Mohamed Banikazemi, who from now on I think we'll just call Mohamed B to keep it simple, is a research staff member at IBM Research. He is a contributor to both Neutron and LibNetwork, which we'll get into in a few minutes. He's an SDN and cloud expert; many in the Neutron community know Mohamed well, and he does research in those areas of cloud and networking.

So we're going to take you through a few things. For us it's the middle of the night on the East Coast, so hopefully we're going to stay awake, hopefully we'll be able to keep you awake for at least the next 30 or 40 minutes, and hopefully it'll be of interest to you as well. One of the first things we want to make clear is that we're not just connecting some things together because we can. A lot of times, as technologists, it's fun to hack around on the code and make things work.
But we actually believe there's a reason for this, and interestingly enough, although this picture is slightly different, you saw a picture very similar to it this morning: Jonathan Bryce showed that in this OpenStack ecosystem we have people running containers, people running VMs, people interested in bare metal, and various combinations of those. The reason we drew our picture this way is to show that today, in some cases, these network virtualization layers are obviously disparate; they're using different technologies. The question we want to answer is: is there a way to unify those and bring them together, so that these various compute technologies can work together in a combined network model? Hopefully we can answer that today, show a demonstration, and go from there. So I'm going to hand it over to Mohamed to talk through Neutron, which is his area of expertise, and look at the basic concepts there.

All right. So if you attended the session that Kyle Mestery and Mark McClain had this morning, you heard about Neutron finally becoming what it was supposed to be: the API server with the database backend. That's what we have been witnessing during the past couple of cycles. We are going toward a model where Neutron core becomes the API server, while the realization of that API, and also several services around it, are built in the ecosystem around Neutron, but not necessarily as part of core Neutron. Neutron has a very simple and straightforward API. More importantly, it has a pluggable architecture and allows different backend implementations. The abstractions it uses are close to the physical resources, so, for better or worse, they are something that people who are familiar with networking know well. And it has turned out that people are using the same kind of basic API in other areas of network virtualization as well. So what are the basic concepts in Neutron?
Networks are essentially isolated layer-2 broadcast domains; they can be private or shared. In addition to networks, subnets are defined as IP address blocks, and they are associated with networks; they can have other services associated with them, such as DNS, DHCP, and gateways. In addition to subnets and networks, ports are one of the core components of Neutron. They are essentially virtual switch ports that allow compute entities to get connected to networks, and they have MAC address and IP properties. And even though routers are not part of the official core API, they are so widely used in practice that you can call them one of the more basic concepts in Neutron. They provide connectivity between networks and to external networks, performing network address translation and providing capabilities such as floating IPs. So these are the basic concepts in Neutron, and now Phil will talk about LibNetwork.

All right, so we've looked at the concepts and API around Neutron. Who's heard of LibNetwork? Okay, roughly half of you have heard of LibNetwork. It's a somewhat recent project as far as when it was actually merged. Actually, the LibNetwork guys and I have become better friends, given that when they were merged it blocked a PR of mine from getting merged, because there was a clash, and so we had to spend a few months working all that out. So we've gotten to know each other fairly well. LibNetwork is basically the network model pulled out of the Docker engine into a separate module. That project began, I want to say, early summer or late spring, and was merged into Docker in the Docker 1.7 timeframe. One of the important things about making it separate from the engine is that it is now a pluggable framework that allows other implementations, and that's going to be key for what we're discussing.
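Before digging into LibNetwork's own model, Mohamed's Neutron concepts can be recapped as a toy sketch. This is just an illustration of the abstractions (networks, subnets, ports, routers), not Neutron's actual data model or API; all names here are made up for the example.

```python
# Toy sketch of the Neutron abstractions described above (illustrative
# names only; not Neutron's actual data model or API).
from dataclasses import dataclass, field
from ipaddress import IPv4Network, ip_address
from typing import List, Optional

@dataclass
class Network:
    """Isolated layer-2 broadcast domain; can be private or shared."""
    name: str
    shared: bool = False

@dataclass
class Subnet:
    """IP address block associated with a network; may also carry
    DNS, DHCP, and gateway settings."""
    network: Network
    cidr: IPv4Network
    gateway_ip: Optional[str] = None

@dataclass
class Port:
    """Virtual switch port connecting a compute entity to a network;
    has MAC and IP properties."""
    network: Network
    mac_address: str
    ip: str

@dataclass
class Router:
    """Connects networks to each other and to external networks
    (NAT, floating IPs)."""
    name: str
    interfaces: List[Port] = field(default_factory=list)

def port_on_subnet(port: Port, subnet: Subnet) -> bool:
    """A port 'belongs' to a subnet if it is on the same network and
    its address falls inside the subnet's CIDR block."""
    return port.network is subnet.network and ip_address(port.ip) in subnet.cidr
```

The point of the sketch is just the relationships: subnets and ports hang off networks, and routers hold ports on the networks they join.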
The LibNetwork implementation contains something called the container network model, and in a minute we'll look at its concepts and how they map to Neutron. As I said, it was merged in 1.7, but in that original implementation all it did was take the Docker engine's existing bridge networking and move it into LibNetwork. What you will hopefully hear about this week, if you follow the Docker community, is that Docker 1.9 will be released, which is currently in release-candidate form, and the full capabilities that were promised in the original announcement will be there: plugins, overlay networking, and so on. So again, we just looked at the basic concepts that Neutron has. LibNetwork has three key concepts, shown here: a network, which is basically a collection of endpoints that all communicate with one another; an endpoint, which connects a network to a sandbox; and a sandbox, which holds the configuration of an actual network stack. On Linux you can think of a sandbox as a network namespace, abstracted so that other operating systems can have their own sandbox implementation. With these concepts you then have an API, again using the same network, endpoint, and sandbox terms, that is very simple, mostly built around create, delete, join, and leave. And then, obviously, there is some housekeeping that has to be done now that you're allowing for pluggability, so there's a handshake interaction with anyone who wants to implement this API as a remote plugin. So today, and I guess I should say very soon, not exactly today unless you've downloaded the Docker 1.9 release candidate, you have a set of options when you use LibNetwork as far as the driver that will actually implement the address management and so on. You have the null driver, which is obviously no networking at all, and host networking, sharing your network stack with the host, which again is something that's been available already.
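The three CNM concepts and the create/delete/join/leave flow can be sketched in a few lines. This is an illustrative Python paraphrase only; LibNetwork itself is written in Go, and its actual types and methods differ.

```python
# Rough sketch of libnetwork's container network model (CNM): networks,
# endpoints, and sandboxes, plus the create/delete/join/leave flow.
# Illustrative Python only; libnetwork's real implementation is in Go.

class Sandbox:
    """A container's network-stack configuration; on Linux this maps to
    a network namespace."""
    def __init__(self, container_id):
        self.container_id = container_id
        self.endpoints = []

class Endpoint:
    """Connects a sandbox to exactly one network."""
    def __init__(self, name, network):
        self.name = name
        self.network = network
        self.sandbox = None

    def join(self, sandbox):
        """Attach this endpoint's interface to a container's sandbox."""
        self.sandbox = sandbox
        sandbox.endpoints.append(self)

    def leave(self):
        """Detach from the sandbox, if attached."""
        if self.sandbox is not None:
            self.sandbox.endpoints.remove(self)
            self.sandbox = None

class Network:
    """A collection of endpoints that can communicate with one another."""
    def __init__(self, name):
        self.name = name
        self.endpoints = {}

    def create_endpoint(self, name):
        ep = Endpoint(name, self)
        self.endpoints[name] = ep
        return ep

    def delete_endpoint(self, name):
        self.endpoints.pop(name).leave()
```

A container joining a network is then: create an endpoint on the network, and join it to the container's sandbox; leaving reverses the join.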
Then there's the bridge driver, which is the traditional Docker networking that's already been there, and then overlay, which is the new multi-host networking. You can take Docker's implementation, which uses specific technologies, but this is also where you can plug in a remote driver, which will then provide that capability. And although the remote driver is shown separately, it really is a network driver; it just uses the network plugin feature so you can implement it with your own custom backend. Here's what a basic remote driver looks like: there's a JSON transport over HTTP between Docker and your remote driver; you'll be called with requests, and you respond with a JSON payload. Obviously we're not going to dive real deep into what that looks like, but the documentation is already there on GitHub, and you can take a look at what it takes to implement a remote driver today in LibNetwork. Again, when Docker 1.9 is available, you'll find that there's a new sub-command within Docker that allows you to create networks, connect containers to a created network, and obviously disconnect, remove, inspect, and list the networks that are available; we'll actually demonstrate that in a few moments. Going a little deeper on the create command: this is where you can specify a driver, such as an external network plugin, that will provide the network capabilities when you create a container connected to that network. With that you can specify an IP address management driver, or take the default one that's provided for you, and also specify a subnet, again mapping to the same Neutron concept that Mohamed just mentioned. So at this point I will turn it back over to Mohamed: we've looked at Neutron, we've looked at LibNetwork, and now Mohamed will show us a way to combine these technologies. Thank you. So, now Docker provides a separate module for networking, namely LibNetwork, and that module is pluggable and extensible.
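The request/response shape of such a remote driver can be sketched as a tiny HTTP service. The endpoint paths (`/Plugin.Activate`, `/NetworkDriver.CreateNetwork`, and so on) follow the libnetwork remote-driver documentation, but everything else here is an illustrative skeleton with an in-memory dict for a backend, not a working driver.

```python
# Minimal sketch of a libnetwork remote driver: an HTTP service that
# answers the plugin handshake and a couple of NetworkDriver calls with
# JSON payloads. The "backend" is just an in-memory dict.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class RemoteDriverHandler(BaseHTTPRequestHandler):
    networks = {}  # NetworkID -> original create request

    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        request = json.loads(self.rfile.read(length) or b"{}")
        if self.path == "/Plugin.Activate":
            # Handshake: tell Docker which plugin APIs we implement.
            response = {"Implements": ["NetworkDriver"]}
        elif self.path == "/NetworkDriver.CreateNetwork":
            self.networks[request["NetworkID"]] = request
            response = {}
        elif self.path == "/NetworkDriver.DeleteNetwork":
            self.networks.pop(request["NetworkID"], None)
            response = {}
        else:
            response = {"Err": "unhandled endpoint: " + self.path}
        body = json.dumps(response).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep per-request logging quiet for the sketch
```

A real driver would implement the full set of calls (CreateEndpoint, Join, Leave, and so on) against an actual backend; the sketch only shows the call-and-JSON-response pattern.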
It's a good time to look at what we have for networking for our virtual machines and bare metal, and see if we can utilize it directly for interconnecting our containers as well. With that idea in mind we started looking at Neutron as that unifying networking layer, and it turns out that other people have been thinking along the same lines. We joined efforts with project Kuryr, which aims to provide exactly that Docker network plugin, providing networking functionality to Docker through Neutron. As part of that project we are essentially going to demonstrate how Neutron can be utilized for interconnecting containers and VMs, and it is in our plans to provide containerized images of the Neutron network plugins as well, something we are hopefully going to get into very soon. As you can see from the tweet that we have quoted here from Kyle, it is all coming together: Neutron becoming the API server, other open source projects such as OVN providing the network functionality, and Kuryr bridging the gap between all these technologies. Kuryr is an OpenStack project, so it is part of the ecosystem and uses what is available in OpenStack: Keystone for authentication, the Neutron client for accessing and utilizing Neutron, and other libraries and projects that are available, such as oslo.config. I wanted to emphasize that, and then get to how the mapping is done in project Kuryr, the Docker network plugin. It turns out that the mapping is fairly straightforward, as LibNetwork provides similar kinds of abstractions. Networks in LibNetwork are similar to Neutron networks; endpoints are similar to Neutron ports. Before the latest release candidate, 1.9 RC1, IPAM was not something that was included in LibNetwork, but in 1.9 LibNetwork will provide IP address management as well.
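The mapping Mohamed describes can be caricatured as a pair of translation functions from libnetwork requests to Neutron-style API payloads. The field names on the libnetwork side (`NetworkID`, `Interface`, `Address`, `MacAddress`) follow its remote-driver request format, and the Neutron payload shapes follow its REST API, but this is a sketch of the idea, not Kuryr's actual code.

```python
# Caricature of the Kuryr idea: translate libnetwork requests into
# Neutron API payloads. A sketch only, not Kuryr's implementation.

def network_request_to_neutron(create_request):
    """libnetwork CreateNetwork -> Neutron network-create payload."""
    return {"network": {
        "name": create_request["NetworkID"],
        "admin_state_up": True,
    }}

def endpoint_request_to_neutron(neutron_network_id, endpoint_request):
    """libnetwork CreateEndpoint -> Neutron port-create payload."""
    interface = endpoint_request.get("Interface", {})
    port = {"network_id": neutron_network_id, "admin_state_up": True}
    if interface.get("MacAddress"):
        port["mac_address"] = interface["MacAddress"]
    if interface.get("Address"):  # CIDR form, e.g. "10.10.0.4/24"
        port["fixed_ips"] = [{"ip_address": interface["Address"].split("/")[0]}]
    return {"port": port}
```

In other words: a libnetwork network becomes a Neutron network, and a libnetwork endpoint becomes a Neutron port on it, which is exactly the correspondence on the slide.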
Regardless of that, the notion of subnets is something we have been using for creating our ports on a specific network with certain IP addresses. The other important concept in LibNetwork is join and leave, a way of connecting a container to a network, which is pretty much similar to what happens either in Neutron itself for some of the services, or in Nova when a VM gets created: the port or virtual interface gets plugged into, or unplugged from, a network. And that, again similar to what we have today in Nova or Neutron, requires special code for the different types of networks, using OVS or Linux bridge or more specialized kinds of interfaces. So the mapping is rather simple. Just to show how Kuryr works, I'm going to try to demonstrate the small setup described here, where I have two nodes set up with DevStack. One is a full DevStack with all the OpenStack services; the other is essentially a compute node and also has an L2 agent. I am using the default Neutron, which uses the ML2 plugin with the OVS mechanism driver and VXLAN. On these two nodes I have Docker running with LibNetwork and Kuryr as well, and I'm going to try to start containers using the kuryr driver to connect them together. Before getting to the demo, I just want to mention that LibNetwork uses a key-value store for communicating some data among multiple hosts. That's what you see here as Consul; I believe you could use other key-value stores as well. So with that, let me try to get to the demo. I have two nodes. The one you see here in yellow is one of them, and this is the second node. I'm going to use a couple of simple commands. As I said, you can use the network command in Docker now, and what you get here is the list of networks that have been created. These are the networks that get created by default, and as you can see, each network has a driver associated with it: one for null, one for host, and one for bridge.
Alternatively, if I look at Neutron here, this is a default DevStack setup, and by default I get the two networks that DevStack sets up. That's what we have. So I'm going to go and create a network with Docker and specify the driver; if you don't specify the driver, the bridge driver will be used by default. And pick a name. So a new network got created. If we look at the networks now, you can see there is a network here, and as you can see, the driver is kuryr. Alternatively, again, if we look at the Neutron networks: neutron net-list. You see, Phil? I told you we should have used that recorded demo. So a network got created. The version of Docker being used here is the one just before Docker 1.9, so it doesn't have IPAM in it. By default we use a subnet pool that is created, and new networks get a subnet out of that subnet pool when the first endpoint gets created for that particular network. Now that we have a network, let me just go and check whether that network is showing up on the other node as well. Here is net1 on kuryr. So let's go and create a container. I think somewhere here I have a command. This is a standard docker run; as you can see, you can now specify the network you want your container to be connected to. So container one is already there. Let's try something else, something easier. Oh, if you guys could just look that way, I will. As you can see, during the past couple of weeks we have been going through a lot of churn, because LibNetwork was going through a lot of churn, so our configuration is not as clean as it should be. Let me see if I can get this thing working. My token expired. And even though... okay, this looks good. I think I can do the same here. This is our kuryr driver here. So, you think this will work? All right. So we created a network and a container. Let's do the same on the other node, using a different name. If you noticed, the other one was 10.10.0.2; this guy is 10.10.0.4.
If you have used Neutron before, you can guess what happened to the other addresses. And the ping works. Phil wanted to have a real application, and I told him that if it's a networking demo, we just do the ping. Ping works, we declare victory and say goodbye. But the ping works, so let's see if we can continue along this line. So what happened in Neutron? Now we have maybe some other stuff that we may not need, but let me see: neutron net-list | grep 10.10. This is the network that got created for our container; this is net1. I want to boot a VM using the same network. To do that, let me do this. Since I don't remember it, simple as it is... okay, let me try to remember. So we say nova boot, we specify the image, this is the image, we say the flavor, say one, we say net, we specify the network ID now. Oh, it is --nic. I see. Here. Yeah, this one? Yeah, I think so. You are right, thank you. And this is the network ID. Oh yes, the VM name. Exactly. So the VM is booting. This is where I would use Horizon to show you what is happening on the Horizon dashboard, but considering that I am running all of this out of a VM sitting in Yorktown, I am going to switch to a recorded demo, which is exactly the same thing, but we won't need to wait for Horizon to come up. So let's see. Do you guys see that? This is exactly the same thing. Actually, what you were seeing before was the recorded demo. No, just kidding. So here I am just booting a VM, but then I switch to Horizon and show you a couple of other things that we have. Here in the demo, I think I can view this thing in full screen. Do you guys see it properly? Okay, I think this is reasonable enough. So you can see the instances. We don't show containers; this is something that in Kuryr we are discussing, how to show containers, whether we need a new tab or where to show them. But this is the VM that just got created, and I go to the console... and I feel a lot more confident now that I am using the recorded demo. Logging in to the VM.
And I am going to do what we do in networking demos. This guy got 10.10.0.5 as its IP address, and I am going to ping one of the containers and see if it works. There we go. So another ping, now between the VM and the container. And just to show you how these things are getting pulled together, I am going to show another ping, this time from one of the containers to the VM. Let's see. So this is the address for the VM. If you have worked with Neutron before, when one-way ping is working and the other way is not, you immediately say: security groups. By default there is no security group associated with the networks we are using here for our containers. That can change; I will briefly discuss that under future work. But if that's the case, if I set the security group properly for my container, I should be able to ping. So I am going to go find the port that is connected to the VM to find its security group. I get the security group, and that's the default security group, which allows outbound traffic but no inbound traffic. I am going to try and apply it to the port that was used for the container. I get the port ID of the container we have running on the top part of the screen, still trying to ping. I am going to set the security group: you specify the security group and the port ID, and yet another ping. So this is essentially the simple demo that we have. So now you saw how... What's that? I think it's... No, not here? Yeah. So we showed Neutron could be used... we knew that Neutron is utilized for bare metal through Ironic and for VMs, but now also for containers. So what are the future directions, some of the gaps and mismatches? One of the main issues that we need to address in both communities is the fact that OpenStack is a multi-tenant project and Docker is not at the moment. That is something where we need to figure out how to provide these services for different tenants.
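Going back to the security-group step in the demo: the asymmetric ping follows from the default semantics, which a toy evaluator makes explicit. This is a deliberate simplification; real Neutron security-group rules also match remote IP prefixes, remote groups, ethertypes, and port ranges.

```python
# Toy model of Neutron security-group semantics: egress is allowed by
# a default rule, and ingress is dropped unless some rule matches.
# Real rules also match remote prefixes, remote groups, and port ranges.

def permits(rules, direction, protocol):
    """True if any rule matches; protocol None in a rule means 'any'."""
    return any(
        r["direction"] == direction and r["protocol"] in (None, protocol)
        for r in rules
    )

# A default-style group: one egress-anything rule, no ingress rules.
default_sg = [{"direction": "egress", "protocol": None}]
```

So with only the default rules, outbound ICMP leaves fine while inbound ICMP is dropped, which is exactly the one-way ping; adding a matching ingress rule (or, as in the demo, putting both ports in an appropriate group) makes the ping work both ways.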
And it looks like the foundations for supporting multi-tenancy may be coming in Docker. We need to figure out how security groups can be applied in the context of Docker networking. Whether port mapping is really important or not, now that every container gets its own IP, is another of the mismatches I have listed here. And there are a lot of things. The project got started just a few months ago; it has several contributors, whom you will hear from tomorrow as well. But we have a significant amount of work ahead of us, and we are hoping that Mitaka will be the cycle where we achieve a lot of these goals. Docker labels are there, and that's obviously the first thing that comes to mind for adding functionality that is not there yet. Integration with Docker Swarm is important: if you are providing multi-host networking, shouldn't you be using a multi-host orchestration system such as Swarm? Another item that is significant for us, and that we are going to address in the coming cycle, is integration with Magnum. We are engaged with the Magnum community and we have participation from them as well. The most important thing is one of the things we showed in the pictures: having containers in VMs, and having efficient networking for such containers. It looks like we need some enhancement in Neutron itself; VLAN-aware VMs turn out to be the solution. A lot of people are interested in the work. The work has been going on for some time, but I think now, with the interest from many, many communities, that work will speed up. And as I mentioned early on, we are going to have integration with Kolla and provide containerized solutions as well. With that, we are ready for questions. Yeah, sure. So basically, if I understand correctly, this driver cannot work in a situation where you have a container running inside a VM, right? Right. Yeah, it doesn't provide any networking in those situations.
And I think we are going to work on that; there are solutions that have already been planned out. That's one of the major things we are going to address. The question was how you deal with nested architectures, where you have containers running in VMs, and whether you could do something better than what is done now, like having overlays on top of overlays, or whether you could utilize Kuryr in that situation. And the answer was: not right now, but that's something that is in our plans. And we will have a session tomorrow; Toni and Gal, who started the Kuryr project, will be presenting Kuryr. What time is it? So everybody knows. Sometime tomorrow. It's tomorrow. Just tomorrow. Yeah. So that's essentially the plan: integration with Magnum. The networking in Magnum has evolved such that it's going to allow different networking backends, and we are going to take advantage of changes being made in Neutron to provide efficient connectivity between containers that are running in VMs. Right now that integration is not there. Stickers are really important. I don't know if you saw, on one of these slides there were some stickers off to the left side. But these stickers actually exist, and we have the sticker designer here with us today; he's famous for stickers. But seriously, Magnum, Keystone, if you want stickers, come up and get them afterwards. This is the place to be. Yeah. That uses Neutron itself, right? Correct. Right, none of that has happened yet. Yeah, we are just at the beginning of this journey. Yeah, we just started. Thank you very much. Yeah, thank you.