Hello, yeah, all right. I think we're live. All right, so my name is Phil Estes, this is Kyle Mestery, and we're going to be talking to you about networking between containers, VMs, and bare metal. Kyle's the networking guy, I'm the container guy, so together we make an awesome, awesome team, I think. And I just wanted to say that I'm really disappointed in Angel and Kershaw; they didn't get the memo on the shirt, so I don't know what's going on. Disappointing at best. Where is Angel? He's not even here. That's it, he does this. We'll get him next time.

All right, so first I'll give what will hopefully be a whirlwind introduction to containers. How many of you have played with containers in any form or fashion? All right, good, so this isn't brand new to you. But to connect things: Mark Charlottes spoke at Container World a few months ago, and I thought his framing was a great way to think about containers: containers are a lie we tell a process. If you've understood the model of containers on Linux, then unlike with VMs, you're simply starting a process, but you're giving it a containment model, which is a set of namespaces; they're listed out there to the right. The mount namespace; the network namespace, highlighted because we're going to focus on it in this talk; the user namespace; the PID namespace. These containment mechanisms give us the concept of a container that feels like its own system, but because of this it's extremely lightweight: it's just another Linux process on your system, with the same kind of startup time as starting a process.

Another point: containers have been around for a very long time, and the container technology in the Linux kernel has been there for a long time, but recent ecosystem players like Docker have added to this model the idea of a very standardized, simple packaging method: take the process I want to put in a container, together with all its dependencies. As we've seen, that's really what has had containers taking off in the last couple of years. Not to mention that it's a great fit with this cloud era: all the talk around microservices and CI/CD, and containers have been a great fit for those use cases.

So let's start with a bold statement: there's no such thing as Linux container networking. Because, again, we just talked about a container being a process on your system, and a process doesn't have anything special about it that relates to networking. If we've asked the kernel for a new network namespace, all that really means is that within that namespace we have our own list of interfaces and our own routing table. Which means it's really up to me to decide how I'm going to create those interfaces and how I'm going to set up the routing, and that's up to the implementer of the runtime. Docker has a way of doing that, LXD and LXC have a way of doing that, rkt has a way of doing that, and we're going to talk about that a little bit. Hopefully, if you're new to containers or the concept is new, that gives you an idea of what we're talking about. A Linux bridge is obviously the most default and standard way to do it; that's the original Docker networking style. But today we're going to talk about some more advanced approaches, and I think at this point Kyle can talk about SDN.
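Before we move on to SDN: the blank-slate model just described, where the kernel hands you an empty network namespace and everything else is up to the runtime, can be sketched with plain iproute2 commands. The names and addresses below are arbitrary examples, and the commands need root on a Linux host:

```shell
# A new network namespace starts with nothing but a loopback device:
ip netns add demo
ip netns exec demo ip link show          # only "lo", and it is down

# The runtime has to wire it up itself, for example with a veth pair:
ip link add veth-host type veth peer name veth-demo
ip link set veth-demo netns demo
ip addr add 172.31.0.1/24 dev veth-host
ip netns exec demo ip addr add 172.31.0.2/24 dev veth-demo
ip link set veth-host up
ip netns exec demo ip link set veth-demo up
ip netns exec demo ip link set lo up

ping -c 1 172.31.0.2                     # the host can now reach the namespace
ip netns del demo
```

Whether that veth then goes into a Linux bridge, an Open vSwitch bridge, or gets routed directly is exactly the choice each runtime's networking implementation makes differently.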
Yeah, I get to talk about SDN; how exciting is that, right? So, SDN, much like containers: how many people have heard of SDN? Has anyone used it? If you use a script to configure your network devices, maybe that's SDN too. So what exactly is it? This is a pretty good definition here: an umbrella term covering several kinds of networking technology. But fundamentally, what is software-defined networking really about, if you look at it from its core? It's about things like operational scale: being able to scale things to huge numbers, whether that's the hardware or the virtual components of your network. Agility and speed matter as well: one of the premises of SDN is being able not only to scale this stuff but to give you more agility and speed as far as your deployments go; it might tie into a CI/CD process, and all of these things help with that. And then the other thing is moving some of this complexity from hardware to software. Complexity is a funny term, especially when it comes to networking, but I think that's part of it as well.

So I thought it would be interesting to take a look at OpenStack Neutron and some of the abstractions it provides in its API. In a lot of ways Neutron is SDN too, especially the API it provides, because it allows you to do things that previously weren't easy with just physical hardware. You can create things like this: a network, composed of the virtual network itself and the subnet associated with it, and then it has virtual ports. Those are all things that are part of OpenStack Neutron in an OpenStack deployment. Then on top you have the virtual interfaces associated with the VMs, or potentially with containers, and those are associated with the virtual machine as well.

All of these abstractions together allow you to do some interesting things. By the way, Phil's a whiz with the slides; these are spectacular. He made my bad slides look this good, so thanks, Phil. So this is what it allows you to do: you can build complex topologies. Private networks; you can tie things together across them; you can put VMs in different places; you can route traffic, set up routers, and tie your networks together that way. The Neutron API also lets you do all this with different plugins on the back end, whether that's a core plugin or a driver under ML2, the Modular Layer 2 plugin. So there are all kinds of interesting things you can do, and that's fundamentally what SDN is about, I think.

I also thought we should talk specifically about some of the technology we're excited about at IBM, that we're involved in and really working on upstream. This is all about open source, and we're heavily engaged with the Open vSwitch community. How many people know what Open vSwitch is? Most people in the room; not as many as I expected, that's good. So Open vSwitch is an open source virtual switch; it runs on a host or a hypervisor. The project has been around since 2009, when it was first open sourced, so it's coming up on seven years old. Open vSwitch itself is architecturally composed of three things.
It has a kernel module, which is actually upstream in the Linux kernel at this point; a daemon that runs on the host, which programs flows into the kernel module; and a local OVSDB server on the host as well.

Another piece of technology associated with Open vSwitch, a newer virtual networking technology, is something called OVN, Open Virtual Network. OVN provides virtual networking on top of Open vSwitch instances. Essentially, it manages Open vSwitch daemons across a bunch of hosts. It adds a local controller agent, which runs on each hypervisor, programming the local Open vSwitch there. It also adds a centralized set of databases, the northbound and southbound databases, and that's where all of the state is stored. It does a lot of really interesting things with pushing logical flows around, back and forth, up and down the system. OVN was announced in January of 2015 by the Open vSwitch team; we've been working on it since then, it's coming along really nicely, and we're looking to do an initial release in the fall.

Let's go to the next slide: this is what the OVN architecture actually looks like, and it's pretty simple when you look at it. You basically have the databases at each level. In this case we're using OpenStack to drive this, but you can use other CMSes as well; you could even have something like Kubernetes drive the OVN control plane. The key points are those databases, plus the OVN controllers running locally. The northbound database, right at the top, holds the logical, desired state of the network: whatever the CMS, whatever OpenStack Neutron, is telling OVN it wants the logical state to be, that's stored in the northbound database. The southbound database contains the logical pipelines and flows. ovn-northd converts that logical state of the network into a bunch of logical flows that say things like: VM A can talk to VM B, and VM A can talk to VM C. Those logical flows get pushed down to the OVN controller, and the OVN controller creates more complex OpenFlow rules and pushes those down into Open vSwitch itself. It's not uncommon, for example, with all of the ACLs it's building, to see maybe hundreds of thousands of flows on each of these Open vSwitch daemons.

So how is this different from the built-in reference implementation in Neutron? That's a question we get asked a lot, because Neutron has an implementation that looks kind of like this. It replaces the OVN controller; the Neutron implementation has a Python agent there instead, and there is no northbound database; it just uses the Neutron database. One of the big things OVN provides right up front is that it removes all of the RPC traffic from the RabbitMQ bus in OpenStack, and anyone who's tried to scale OpenStack knows the RabbitMQ bus is one of the first scalability limits you hit. That's a huge advantage right there: all that traffic is moved off, it now uses OVSDB to push things around, and you can scale the databases separately. So that's a huge win. The other thing is that, for us at least, this has allowed us to scale much higher than what we were doing with the reference implementation, so we've been pretty happy with that.

Some things this currently supports: it does L3 routing, including distributed routing for IPv4 and soon IPv6; those patches are landing. You can do hardware and software gateways as well; that's for when you want to take these virtual networks and tie them back into physical networks, which is the main use case there. It also supports rolling upgrades, because the databases have a schema, which is actually pretty important if you want to roll this out continuously and keep updating it. So that's a pretty huge advantage.

All right, okay. So we spent a few minutes talking about containers and the fact that, in essence, we have a blank slate when it comes to networking for containers on Linux, and Kyle's brought in the networking side, the deep dive on OVS and OVN. So, trying to connect those two worlds: where is the ecosystem right now?
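To make that northbound/southbound split concrete before moving on: against a running OVN deployment, the flow Kyle describes, where the CMS writes desired state and ovn-northd compiles it into logical flows, looks roughly like this at the CLI. The switch and port names here are made up for illustration:

```shell
# Write desired logical state into the northbound DB:
ovn-nbctl ls-add sw0                                  # a logical switch
ovn-nbctl lsp-add sw0 sw0-port1                       # a logical port on it
ovn-nbctl lsp-set-addresses sw0-port1 "00:00:00:00:00:01 10.0.0.11"
ovn-nbctl show                                        # the logical topology

# ovn-northd compiles that into logical flows in the southbound DB:
ovn-sbctl lflow-list sw0

# On each hypervisor, ovn-controller turns those logical flows into
# OpenFlow rules in the local Open vSwitch integration bridge:
ovs-ofctl dump-flows br-int
```

Each layer can be inspected independently, which is also how you debug it: logical state in the northbound database, compiled logical flows in the southbound database, and the final OpenFlow rules on the integration bridge.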
Well, it's important to note that there's more than one model at the moment. If you've been following the community, you know that there's CNI, which came out of the CoreOS appc project, and the Container Network Model, CNM, which came via Docker's SocketPlane acquisition and which libnetwork implements. We're going to talk more about Project Kuryr, but we just wanted to make sure you know this is obviously a changing space; there's a lot of churn, so to speak, in an ecosystem that's growing this fast. As far as the ecosystem players go, this isn't an exhaustive list, but Docker has enabled this pluggability at more than just the networking layer: you can plug in storage drivers and networking, there's now an authorization plugin, and you can use external graph drivers for the actual storage of your images. Specific to networking, there are currently several players with plugins available today. You've probably heard of Project Calico, Weave, and Flannel; and OVN, as we've just been discussing, is also pluggable into Docker.

So, a very brief overview of the CNM style of container networking. At this point the concepts are fairly simple: there are networks, there are endpoints, and each container has a network sandbox, which of course on Linux is implemented as a network namespace. I can basically take a container and connect its sandbox, as an endpoint, into a network, so I can do things like backend/frontend network separation. Obviously we can do more interesting things, but that's really the level of the API as it is today, and it helps set up what we're going to talk about with Kuryr.

Yeah, definitely. Like Phil said, there are multiple networking models right now. The current Kuryr work has focused around libnetwork, so the CNM, the Container Network Model, but it's worth noting that the CNI spec is out there as well. I think I saw Mohammad out here somewhere; there he is, over there. He knows more about this than me, but there is work going on to support Kubernetes and CNI inside Kuryr as well.

Let's take a step back, though, and ask: what is Project Kuryr? It's worth talking about that. Project Kuryr is a new OpenStack project, created within the last year, that implements a plugin for Docker's libnetwork. The interesting thing is that it ties into the Neutron APIs, translating the Docker APIs over to Neutron. That's pretty loud; I hope the other guy is winning, anyway. What it does is allow you to utilize your existing Neutron infrastructure and use it with containers. An obvious initial use case: you have an OpenStack cloud with Neutron running and Neutron API access out there. Someone could get Kuryr going, utilizing Keystone with their credentials and that Neutron API access, and then spin up whatever they wanted with Docker, whether Docker containers or Docker Swarm, and tie it into their existing deployment. And lo and behold, their VMs and their containers can now talk. This works with a multitude of different Neutron plugins available today, so it's pretty interesting and pretty exciting stuff.

Okay, so let's talk about the Docker-to-Neutron mapping. Phil had the slide up that talked about the Docker CNM model, and we also talked about the Neutron abstractions, and the reason we did all that was so we could tie them together here and show you what maps to what. On the CNM side, on the left there, a network maps over to a Neutron network; an endpoint maps over to a Neutron port; the IPAM layer in libnetwork and the CNM maps over to a Neutron subnet; and the join and leave events are translated over to plug and unplug. The interesting thing there is that the plug and unplug require some actual code, but you can see that things map fairly nicely between the two, so it works pretty well.

We've already talked about this a bit, but as our little pirate friend there would say, there are some advantages to using Kuryr. The biggest one is that you can use your existing Neutron install. If you're someone who's already running OpenStack with Neutron, you can offer that up to container people, and they can utilize that networking layer and tie containers and VMs together. The other interesting thing is you can also tie in bare metal, with Ironic, because if you integrate Ironic with Neutron, now you've tied everything together, which is pretty slick.

So, if you want to get ready for the demo. We'd like to show you how some of this works, but just briefly before that: what does this mean for IBM? Kyle has already mentioned that IBM is involved in these projects, and some of the contributors are here in the room. We have our Bluemix public cloud, which hopefully many of you have heard of, and a container service built around Docker containers that's running on OpenStack today, with Neutron providing the network layer for our containers. We're now working on our next generation, and our work on Kuryr is beneficial to that, because we want this unified networking access across these substrates, and then we can exploit some of these improvements to OVS and OVN through it. So definitely, we see value in these projects, not just for the open source but also to enable our own platform. Definitely.

Yeah, so I think Kyle is going to show us some great stuff. We'll see what we can do; live demos, right? So we thought it would be interesting to actually show this, and obviously the easiest way for us to do that was a simple DevStack VM running here. I'm going to try to walk through the demo, and Phil's going to try to explain what's going on, and we'll show you how everything connects together, utilizing the Neutron and OpenStack CLIs as well as the Docker CLI.

So, the first thing we're going to do is create a network here, a Neutron network. Let's just call it kuryr-net, for lack of anything else. That wasn't very exciting, but let's go ahead and create a subnet too. Maybe it's worth mentioning what Kuryr in its current form can do: Kyle's creating a Neutron network directly, but if you use docker network create and specify the driver as Kuryr, it will do these steps for you. So you can either exploit an existing Neutron network or create a new one via Kuryr. Right, exactly, and that's what we'll show you. So right now we've created the second network here, this kuryr-net down here; I just gave it a random CIDR. Now, if we look at the Docker network commands, these are just the default networks that Docker creates; we don't have one with the Kuryr driver created yet.
So let's go ahead. The first thing we'll do, like Phil said, is create a Kuryr network, a network in libnetwork: docker network create. This will create a Docker network that maps to the existing Neutron network, and we'll show you how that works. Let's give it the IPAM driver as well; we're specifying that Kuryr will also do the IP address management. If anyone was in Tokyo, Mohammad and I gave a talk there, and I don't think the IPAM driver was complete at that point, but in the current version of Kuryr we can specify Kuryr to handle IPAM address assignment. And this is a test to see how well Kyle types. This option is where we're specifying that, instead of creating a new Neutron network, we're going to use the existing one that Kyle already created. Let's see if I have any typos... beautiful, look at that.

So, docker network ls: you can see it was created there. We can do a neutron net-list to show how it maps over; we already had that, actually, so that wasn't very exciting. Let's do a docker network inspect, and Phil can walk us through that on the screen. Sure. This is just asking libnetwork about a network name that we specified. We can see here that the driver is correct, we've used Kuryr, also supporting IPAM through Kuryr. We have no containers attached to it yet, but it is linked to the Neutron net with the same name.

Yeah, definitely. So now that we've shown you that, let's create a Docker network that will then go and create the equivalent Neutron network, so we can show you that direction too: docker network create... actually, you know what, I'm going to cheat and do this. Let's get rid of that option, so we're not specifying an existing network, and I think we'll want to change the IP range. Or it could be exciting if we didn't. Well, it would actually work, though, because these are private networks, so we could do that. Let's not confuse things; Shawn looks confused already. He's always confused. There we go. Look at that. All right, so a neutron net-list will show that Kuryr actually created it, with a less exciting name. The interesting thing is that, as you can see, it takes the first part of the Docker network's UUID and uses it to create the Neutron network name, to map it across that way.

All right, so we should do something more exciting and connect things. Yeah, okay, so let's do that while we're here. Let's boot something. In this window we'll boot a VM; let's go up here and grab this network that was auto-created. Okay, so we'll boot the VM over here and let it come up while that's going on. Then over here, let's do the same thing, except let's get a container up and running. Sure. The way that you specify a network is very simple: --net equals the name of one of these networks that already exists in libnetwork, and if that's a Kuryr network, it will use Kuryr's plugin to do the work of connecting that endpoint to the sandbox, like we had shown. The container came up before the VM, in case anyone was curious; maybe no one was. Okay, so you can see we got an IP address here, which is interesting.

So let's go over here now and take a look at the Neutron ports. That's a lot of ports; that's going to cause me some issues. I think this is the one we just looked at; I think that's the IP address, 10.10.0.4, yeah. So that's our container, that's there. So now... I could type, we'll see. We'll take a look at the Nova VM; the VM is running as well. Let's take a look at the console log and make sure the VM made it all the way up. It looks like it did, right? What was that IP address again over there? Dot three, I think it was, right? Yeah. So in theory now, from the container, we should be able to ping across there... and look at that, another ping-demo success! Thank you, thank you. We were going to do something much more exciting, but the challenges of nested virtualization and all that... that's just an excuse, actually. We really wanted to do something more, but pings are cool nonetheless. The other interesting thing you could do, in theory, is create multiple networks and hook up Neutron routers between them, and things like that.

So that's all definitely really good. I believe there's also, Mohammad, a Kuryr talk, or did that already happen? There's a talk on Kuryr, no, Thursday; Thursday at, I want to say, 1:50, but I might be wrong. There's an OVN talk as well, and an OVN BoF session tomorrow at two-something. So there's still a lot of content available on the technology we demoed today, if people are interested in more of a deep dive into some of this stuff.

So, oh, we have to go back to the... yeah, you've got to do the charts, we've got to go back. Yeah, oh, there it is, look at that, much better. Okay, that's the demo. Oh, there we go. All right, we're awesome, awesome, yes. That's our thing; if anyone wants to make this a meme on Twitter or help keep it going, you guys, go ahead. Good. Okay, I think this will get wearisome after another 30 seconds, but are there any questions? I could watch this all day, I don't know about the rest of you; it's just fascinating. Anyone else?
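While questions come in: for reference, the demo just shown boils down to a command sequence roughly like the following. Names and CIDRs are illustrative, and the exact Kuryr driver options (in particular the -o key naming the existing Neutron network) are an assumption from memory, so check the Kuryr documentation for your release:

```shell
# 1. Create a Neutron network and subnet directly:
neutron net-create kuryr-net
neutron subnet-create kuryr-net 10.10.0.0/24

# 2. Create a Docker network backed by that existing Neutron network,
#    with Kuryr as both the network driver and the IPAM driver
#    (the -o key here is an assumption, not verified syntax):
docker network create --driver kuryr --ipam-driver kuryr \
    -o neutron.net.name=kuryr-net kuryr-net
docker network inspect kuryr-net        # shows the kuryr driver and IPAM

# 3. Or let Kuryr create the Neutron network for you:
docker network create --driver kuryr --ipam-driver kuryr \
    --subnet 10.20.0.0/24 kuryr-net2
neutron net-list                        # a new net named after the Docker UUID

# 4. Boot a VM on the Neutron network and a container on the Docker one:
nova boot --image cirros --flavor m1.tiny --nic net-id=<kuryr-net-uuid> vm1
docker run -it --net=kuryr-net busybox sh

# 5. From inside the container, ping the VM's address (and vice versa):
ping -c 1 10.10.0.3
```

The point of the exercise is step 5: both workloads got ports on the same Neutron network, so the container and the VM reach each other through whatever backend Neutron is using, OVN in this case.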
Yeah, someone must have some questions. We've got a mic here and here... or challenge Kyle to more typing; he could show other things. Oh, we've got a question, look at that. Thank you. Oh, the microphone isn't on; is that volume? Help is on the way. Yep, here comes someone. We've got it. Hold on one sec, he's right there. No, not yet, let him go... oh, he's going back. There it is. This is going to be good when it finally... oh, I hear volume. Go ahead. Hello. There it is, yes.

Maybe it was evident to everybody else, but you showed two ways, and you were also kind of showing how the Docker interface and the OpenStack interface converge. My question is: what's the value of the Docker interface? Is that just purely illustrative? What you're essentially saying, I'm assuming, is that you can just use the Neutron interface to configure your Docker networking?

It's actually the reverse, right? You use the Docker tooling to drive the creation of the Neutron networks. In other words, Kuryr is a client of the Neutron API; think of it that way. Neutron isn't driving the creation of the Docker constructs. If you're creating Docker networks, you're doing it with the Docker CLI, and that creates the corresponding constructs on the Neutron side, because Neutron, with whatever backend implementation you have, in this case OVN, is what actually implements the virtual networking.

Well, then my question is: could you not do it the other way around? Could you not create the Docker networks using the Neutron interface?

Yeah, if I understand the question correctly: given the lower-level knowledge that Docker's container just has a network namespace, you could obviously do some insertion using external tooling. In fact, I don't want to get too far into the weeds, but the Open Container Initiative has a project called runc, which Docker now uses to start containers; it's like the execution engine at the operating system level. runc has hooks, and you can have a prestart hook; many people who use runc without Docker have scripts that do the networking component. So yes, at that point you could call out to Neutron. But if someone's comfortable with the Docker interface, then they need a libnetwork plugin, something that marries those two worlds, and Kuryr and libnetwork happen to be that. Is what we've shown here required? Absolutely not; you could write your own interface. But for the Docker world, this is the proper way to interface with what they've already put together in libnetwork.

So yeah, question over here. Yeah.

Current implementations of some IPAM controllers or drivers do a lot of damage to Neutron; they take a lot of functionality out of Neutron, which some network teams I work with don't like. How does Kuryr interface with IPAM? Is it working more like the native Neutron service, which is what our folks want? On the IP engineering side we want to know where our IPs are, what routes exist, et cetera.

Yeah, it's making use of the Neutron IPAM; that's what it uses. Like in the demo, it was using just the default Neutron IPAM implementation. In theory, and Mohammad, correct me if I'm wrong, if you are using a different pluggable IPAM layer, that should work as well, because it's just utilizing all the internal Neutron APIs for that. Okay. Yeah, thank you.
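A quick aside on the runc prestart hooks mentioned in that answer: per the OCI runtime spec, hooks are registered in the bundle's config.json, and runc feeds the container state, including its PID, to the hook on stdin. The hook path and script below are illustrative sketches, not part of any real deployment:

```shell
# In the bundle's config.json (OCI runtime spec), register a hook:
#   "hooks": {
#     "prestart": [ { "path": "/usr/local/bin/netns-attach" } ]
#   }
#
# The hook itself might look like this:
#!/bin/sh
set -e
PID=$(jq -r .pid)                 # container state JSON arrives on stdin
mkdir -p /var/run/netns
ln -sf /proc/$PID/ns/net /var/run/netns/$PID

# From here, any external tooling can plumb the namespace: attach a
# veth, or call out to Neutron to allocate a port and wire it in.
ip netns exec $PID ip link set lo up
```

Symlinking /proc/PID/ns/net under /var/run/netns is the usual trick that makes a runtime-created namespace visible to the ip netns tooling, so ordinary scripts can do the networking without the runtime's help.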
Yeah, go ahead.

For OVS and OVN: when the RPC calls get offloaded to their handlers, are those stored in the databases in a way comparable to how the event-driven state changes get pushed to RabbitMQ? Are they comparable, or is it a whole other set of data that it collects?

That's actually really interesting, and a good question. There are some people here at the break who can answer that in much more detail; Ryan's over here, so talk to Ryan afterwards, but I'll give you a quick summary and Ryan can give you the details. Essentially, right now it pushes the entire state down to all of the controllers, and there's a lot of work going on to optimize that so that each hypervisor only gets, for example, the logical flows that are relevant to it, things like that. Right now there's a lot of optimization work in flight; you should definitely talk to Ryan after this. Okay, thanks. Yeah.

Just to not let you guys off too easy: preferred container runtime?

I'll let Phil answer that. I mean, our container service is currently aligned with Docker; that's what we built it on. We're currently, I'd say, designing our next generation, and I don't think we're considering changing out our underlying runtime, but we're definitely looking at Kubernetes and Swarm and considering the proper orchestration layer to put on top of that. With runc, some of that becomes pluggable now, but at this point we continue with the Docker runtime. Excellent, okay.

Oh, one other question. Yeah, we have time. What is the relationship with Gluon?

Yeah, so there's no relation. If I understand Gluon correctly, and you'd have to talk to the Gluon folks, it's agnostic of the controller anyway.

Yes, one more. This was more of an integration of Docker with Neutron. How about Nova? For instance, I want my container to be available on a specific hypervisor; how could that be achieved?

So is that, do you mean, placement?

Container networking has a class of issues called noisy neighbors. I want a specific hypervisor to contain a certain set of containers and not others, because there might be other VMs that require more resources on the other hypervisor. I want to avoid those noisy-neighbor issues when it comes to container networking. How do you plan to achieve that?

Those are good questions. Number one, I would say this gets into a much broader, philosophical question that the entire OpenStack Foundation is looking at, and you've probably seen it as you've been here: where do containers fit into all of this? That discussion is ongoing. Number two, Magnum is currently the way to deploy bays, and they have bays that support, if I'm not mistaken, and correct me if I'm wrong, Mohammad, Kubernetes and Docker Swarm, so that capability would be handled there. But then the other option is that, in theory, you could just run the containers yourself, on either your Ironic bare-metal nodes or on VMs that you spin up yourself; in that scenario you're dealing with that problem yourself, with whatever orchestration engine you have on top.

Yeah, I mean, I think orchestration and the concept of labels, whether it's affinity with other containers or anti-affinity, is probably the layer at which you'd drive those placement decisions. I don't know if the solution is at this networking layer; it's probably more about how you place and orchestrate those containers. Thank you. You're welcome, definitely.

If there are no other questions, I think we're done. Wasn't Nate going to be here? Where's trusty Nate? He's giving a talk... there he is. So, Nate, do we get the raffle? We do the raffle. Sorry, IBMers, no iPad for you. Sure, yeah, let's do that; we'll draw. Thank you, everyone. Yeah, thank you. Everyone get your ticket in; Alex has the bucket if anyone needs to drop their slip in, here it goes.

There are also free drinks, if anyone likes beer and soda. Yeah, beer is good. Nate, where's the beer? Back in the corner is the beer. Anyone who wants to talk networking and Neutron: Ryan, Matt, you guys raise your hands. Actually, Martin's here, I saw Mickey over there, Mohammad; this is everybody on the working team. Dustin's over there. Henry owns the networking team, so maybe one of them could help, yeah. Go talk to those guys over there, they will answer. Matt, come here, over here, and bring Martin with you; these guys will answer your questions right here. In the blue shirt right there, yeah.

Okay, we're going to draw for the iPad here. Did you mix them up? I'll look over this way. Okay, you got it... oh, one more, there we go. Okay, mix them up, I'm not looking... one more, here it is, one more. Okay, ready? Okay, here we go, let's see... Shawn Roberts, from Cisco! Congratulations. Look at that, nice. There you go. Thank you. Yeah, there you go. Oh, yeah, definitely, you got the...