We're good to go. Excellent. Good morning, everybody. We're getting towards the end of a very long week. I don't know about you all, but I'm looking forward to sleeping all weekend. OK, informal survey before I get started. How many people here are intimately aware of OpenDaylight? OK, intimately aware of the OVS ML2 driver in Neutron? And how many people are very familiar with OpenFlow 1.0 and 1.3? OK, and how many people here read the talk description and said, this is a presentation for people who are not familiar with networking? OK, so just to level set: the idea here is to give an overview of how things are today with the OVS ML2 mechanism, an overview of how they're different when you use OpenDaylight (to be precise, the OVSDB network virtualization southbound of OpenDaylight), and why you would even want to use OpenDaylight.

Since many of you seem to be familiar with OpenDaylight already, I'm going to go pretty briefly here. But what is OpenDaylight? It's an SDN controller. In the context of OpenStack, it provides network virtualization: the ability to define a network topology for your virtual infrastructure which is independent of the physical fabric, the underlay network. It is also a platform for network engineering. One of the key user bases and developer constituencies of the OpenDaylight project is people doing advanced research on the network. It gives you access to some very detailed, low-level information about what's going on on the network, and it allows you to act on traffic that's crossing the network. So OpenDaylight.org: I invite you all, if you aren't familiar with the project, to go and read more there.

So that first one, what's an SDN controller? The idea is that you centrally define the way traffic flows across your network, and that policy, that central definition of the way traffic flows, is pushed to the edge and enforced at the edge by physical or virtual network devices: virtual switches, physical switches, physical routers. OpenDaylight's particularity is that it manages multiple southbound interfaces. It can talk to OpenFlow devices. It can talk to Open vSwitch via OpenFlow and OVSDB. It can handle NETCONF. And there are multiple mechanisms for integrating other network management protocols, device types, and vendor plugins, and even for handing off to other SDN controllers for part of your management. One of the projects in the Helium release, for example, is a plugin for OpenContrail. There is VTN, which hands off to alternative network management infrastructure. So it manages multiple pieces on the southbound, and it can handle multiple protocols, L2, L3, and so on.

So why would you use one? Well, a few sample applications. I don't want to go into much detail, but the one I've mentioned already, network virtualization, is the one that's probably of most interest to OpenStack users. It also gives you the ability to control how your network interfaces with the WAN, perhaps optimizing WAN traffic to ensure that a certain type of traffic gets a higher quality of service than, let's say, Skype or BitTorrent. Traffic engineering: identifying hotspots on your network and routing around them, and getting quality-of-service information in real time from your network. These kinds of applications are the things that are enabled by an SDN controller.
And also software-based network applications: the ability to do things like intrusion detection, intrusion prevention, DDoS protection, handling VPN as a service. All of these things are possible in an SDN controller because you've got direct access to your network traffic.

OpenDaylight has a lot of projects. This is the OpenDaylight Helium release. The Lithium release, which is in final preparations now and is due for release in about a month, has many more projects. Some of these projects have, in some sense, fallen by the wayside. But you can see there's a rough breakdown here: we have multiple northbound interfaces, multiple ways to interact with OpenDaylight; we have multiple southbound interfaces that can handle multiple device types on the south; and in the middle we have a controller which understands the state of the network and has multiple network applications that can act on it.

In the context of OVSDB managing Open vSwitch, the OVSDB project in OpenDaylight to be precise, these are the pieces that matter. We've got an OpenStack Neutron northbound interface; we deal with OpenDaylight via a northbound REST API. We have the ability to integrate AAA, to make sure that authentication and authorization are consistent across OpenDaylight and OpenStack. We have an OVSDB Neutron service, which replicates what Neutron can do: it defines the network piece that talks to the OVSDB plugin southbound and to the OpenFlow 1.3 plugin southbound. We'll get into all of the details of that a little bit later. The point here is that, in the context of OpenStack, there are about half a dozen projects that are of particular importance; the rest give you the ability, by working with OpenDaylight directly, to do more advanced things, but these are the projects that Neutron will deal with in the context of network virtualization alone.

Okay, so some of the core OpenDaylight use cases. OpenDaylight traditionally has had a big-tent approach, but the OpenDaylight board has recently said we want to focus on two specific areas, and one of them is OpenStack network virtualization, and there are multiple aspects to that, and the other one is service function chaining and the NFV use case, which you may have heard quite a bit about this week. NFV is a very hot topic; it's network function virtualization, the ability to migrate physical workloads to virtual machines in the context of telco applications. Packet core, IMS, voice, data, all of the things we're used to in telco services: they're currently physical machines and they're moving to VMs, and all of those things have very specific network constraints, so something like OpenDaylight is required to enforce those constraints and to give an application developer the ability to define how that application is going to live on the network. So these are the core use cases that OpenDaylight is focusing on.
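As a rough illustration of that northbound REST API, and assuming the Helium-era defaults (port 8080, the stock admin/admin credentials, and an ODL host called odl-control here purely as a placeholder), you can see the Neutron-facing state that OpenDaylight holds with something like:

    # List the networks OpenDaylight knows about through its Neutron northbound
    curl -u admin:admin http://odl-control:8080/controller/nb/v2/neutron/networks
    # The ports are what Neutron's ML2 mechanism driver creates and updates over this REST API
    curl -u admin:admin http://odl-control:8080/controller/nb/v2/neutron/ports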
Things like SFC, which is service function chaining, the ability to chain multiple virtual machines together as network services along the path. So, just to take one example: traffic comes into your network, you do some deep packet inspection on it, you detect that it's a video flow and you say, okay, let's go through the video bandwidth limiter because we don't want to use all of our bandwidth on video, and let's go through the parental control because we don't want kiddies watching porn, for example. That would be a service chain with three nodes in it, and then finally the traffic gets to the endpoint.

Okay, so OpenDaylight and OpenStack. To start with, I promised a brief overview of how OpenStack works with just the Open vSwitch ML2 plugin. I don't want to go into too much detail here, but essentially the ML2 mechanism replaced the Neutron plugin architecture which was there before. The Modular Layer 2 plugin defines essentially two things: the type manager, the tunnel type between hosts, which can be VLAN, VXLAN, or GRE; and the mechanism manager, and there are multiple mechanisms, OpenDaylight being one of them. This is a way for Neutron to delegate certain operations to third parties.

The way Neutron does it with the OVS plugin is that the Neutron server receives an API call and sends a message out to the queue. You've got one L2 agent, the Open vSwitch agent, on each compute node, which is managing the flow table on the Open vSwitch instance on that compute node. We have an L3 agent handling each router that's created (we'll get to what it does a little bit later), and we've got a DHCP agent handling each subnet that has DHCP enabled. Neutron passes on the request via the agents, gets back an okay-this-is-done, and records the change of state of the network in the Neutron database. There's a potential bottleneck here around the message queue, and a scaling issue with L2 agents coordinating across multiple hosts.

Anyway, this is how traffic flows with OpenStack. So there are two things: how do we define the flow tables across the hosts, and then how does the actual traffic flow once we've defined them. This is a pretty simple setup. We've got a compute node here on the left, which has two VMs on it, and we've got the Neutron control node here on the right, which has a DHCP agent and an L3 agent running, the purple boxes here. All of this is from Lars, Lars Kellogg-Stedman, a colleague of mine at Red Hat who has described everything here in very deep detail, and it's an awesome page. It's called "Networking in too much detail", if you look for it. It's really good; it was a huge education for me.

So if we look at the first level, what happens from the instance? Well, we've got eth0 on your Nova instance. It's connected to a tap device in the host namespace. That tap device is linked to a Linux bridge, and that's where security rules are applied: when you define the security rules associated with an instance, there is one of these bridges per instance living on a compute node, and that's where there's a set of iptables rules that say what's allowed for that instance. So we define a chain, or rather multiple chains get defined, and you see the same hex string, 7c7ae61e-0, and there are two chains defined, one for output and one for input, -o and -i, and what they contain is just normal iptables rules applied on that bridge.
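To make that plumbing concrete, here's a rough sketch of how you would poke at it on a compute node; the 7c7ae61e-0 prefix is just the example port ID from the slides, so substitute your own:

    # The per-instance Linux bridge (qbr...) holds the instance's tap device and one end of a veth pair
    brctl show | grep 7c7ae61e-0
    # The security group rules live in iptables chains named after the same port ID prefix
    iptables -S | grep 7c7ae61e-0      # look for the input (...-i...) and output (...-o...) chains
    # The other end of the veth pair (qvo...) is plugged into br-int and tagged with the local VLAN
    ovs-vsctl show | grep -A 3 qvo7c7ae61e-0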
And some would say that's a bug, that you really should be able to do that on the Open vSwitch bridge directly. I'm not gonna get into that debate today. Then we bridge from that Linux bridge down to an integration bridge, br-int, and that's essentially where traffic from instances is tagged with a VLAN, so the tenant that you're associated with, or the subnet that you're on; the network isolation, the isolation of traffic from that VM, is defined there. So just showing the vSwitch with ovs-vsctl show, you see the port, the qvo device, so you'll see the same hex string appearing throughout the chain, right, the 7c7ae61e-0. I think that's got some significance to Neutron, but I don't know what it is. If you ask questions afterwards, I may defer to some people in the audience who know this stuff a lot better than I do. And we tag, so there's a tag: 1, so you're tagging all of the traffic coming in here with VLAN ID 1.

Then you get handed off to a tunnel bridge, br-tun, and that's where we do the tunnelling between all of the hosts, so GRE, VXLAN, VLAN, whatever you define in your ML2 config as, not the mechanism, the other one, the tunnel type. And then it's sent to the physical NIC on the compute node and tunnelled to the peer, whichever peer you're peering with. Then it gets to the other end's tunnel bridge and it's converted back from GRE to VLAN, and we'll see how that happens, and traffic is sent with the appropriate VLAN tag back up to br-int.

So this is a flow table, and what you can see here in red is: if traffic arriving has a tunnel ID of 2 and its destination is a broadcast address, so if you're sending broadcast or multicast traffic from GRE tunnel 2, you're going to hit a mod_vlan_vid action, so we set the VLAN ID to 1 and output that to port 1. So obviously somebody knows what port 1 corresponds to and where this should be going. If you look further down, if it's tunnel ID 0x2 and it's going to a specific MAC address, then we modify the VLAN ID and send it back up to br-int. And if it's coming in from port 1 and the VLAN is 1, then we set the GRE tunnel ID to 2 and we send it out over the GRE port.

So traffic gets to br-tun on the Neutron host, we get that GRE-to-VLAN conversion on br-tun, it passes up to br-int on the Neutron host, and br-int bridges to the Neutron agents, right, the DHCP agent and the L3 agent. If we look at br-int, we see two tap devices here: tapf14c..., blah, blah, blah, which has VLAN tag 1, the same as on the other side, and tapc2d7..., also VLAN tag 1. So these are all associated with the same instance that we saw on the compute node.

Each network with DHCP has, well, actually each subnet with DHCP has its own network namespace, and each router has its own namespace, so if we do an ip netns list, we see a qdhcp namespace and we see a qrouter namespace, and that's how we can allow multiple DHCP domains with overlapping IP addresses to exist on the same OpenStack cluster. And if we dig into the namespaces, for example on this qdhcp one, we see f14c..., which corresponds to, remember, one of the ports that we saw on the bridge, and this is essentially just a dnsmasq process which is doing DHCP. If you do a ps -ef on the Neutron host, you'll see the dnsmasq processes; you can grep for this f14c59ad and you will see the process that corresponds to just that VLAN.
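As a rough sketch of what that looks like on the Neutron network node (the UUIDs and the f14c59ad prefix are just the example values from the slides):

    # One namespace per DHCP-enabled network, one per router
    ip netns list
    #   qdhcp-<network-uuid>
    #   qrouter-<router-uuid>
    # Inside the DHCP namespace there's just the interface we saw on br-int, served by dnsmasq
    ip netns exec qdhcp-<network-uuid> ip addr show
    ps -ef | grep dnsmasq | grep f14c59ad
    # The router namespace has its own interfaces (qr-..., qg-...) and iptables rules, including NAT
    ip netns exec qrouter-<router-uuid> iptables -t nat -S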
On the routing side, we get to the router, which is essentially a set of routing tables and iptables rules, and the traffic is sent on to br-ex, so this is where the NATting happens, on br-ex, so if you've got floating IPs, I think they're handled there. Correct me if I'm wrong. I'm right, good. So the routing tables for the router are defined with just normal iptables on the control host, in that namespace. We've got two connections: qg-, which is the gateway associated with the router, and qr-, which is the router interface back to br-int. If we look at ovs-vsctl again, we see interface tapc2d7..., which corresponds to the c2d7 of the router's qr- interface. So we've matched up the router with the port.

Okay, so that's how it works with ML2. I don't know if anybody here learned anything there; I hope so. How does it work if you swap that out and put OpenDaylight in its place? Well, first things first, how does Neutron talk to OpenDaylight? We've got a single endpoint, a common service northbound endpoint, which is configured as the API connection point when you set up the mechanism driver; we'll see how that works a little bit later. Neutron's ML2 driver talks to OpenDaylight through that REST API. That talks to the Neutron service in OpenDaylight, which says, okay, I need to handle this type of request, I'm talking to Open vSwitch, can anybody handle Open vSwitch stuff? Can anybody do Open vSwitch? So a request goes down to the OpenFlow plugin, which actually programs the flows, and we've got an OVSDB provider which is listening southbound for events from the Open vSwitches, via the ovsdb-server on the compute nodes. It simplifies things because you've got a single central control point, and you've got the ability to scale that out in all the ways that you can scale out OpenDaylight.

So how do you do it? First, install OpenStack. I would love to tell you something different, but if you've already got OpenStack and you've got a bunch of networks and subnets and network configuration in place, there is not currently an easy way to migrate from the OVS ML2 mechanism to OpenDaylight, so you need to sort of clean out your Neutron config and replay it once you've got OpenDaylight in place. Install your OpenDaylight; you can do it either on your Neutron control host, if you want to have them both on the same host, or on a separate host, it's just an API endpoint. You can load-balance that API endpoint if you're doing a cluster. You clean your OVSDB configuration on all of the hosts and you run an ovs-vsctl set-manager command on each of the compute hosts to say, okay, this Open vSwitch is being managed by this controller. And then on the OpenStack side, and we'll see the config, you set OpenDaylight as the ML2 mechanism, and then all of the Neutron commands are going to OpenDaylight, all of the Open vSwitches are already attached to your OpenDaylight instance, and everything should just work. I see Flavio nodding his head, so I don't think I've said anything too far wrong.

Okay, step one, Neutron config. Unfortunately there's no migration path, so you need to delete all your subnets, networks, routers, and ports, stop the Neutron service, and stop and disable all of the L2 agents on all of your compute nodes, but I come back to that later. So, installing OpenDaylight, there are a number of things that are required.
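Before getting into the OpenDaylight install itself, here's a rough sketch of that step-one cleanup; the exact client commands and service names depend on your release and distribution:

    # On the control node: empty out the Neutron configuration, then stop Neutron
    neutron port-list        # then: neutron port-delete <id> for each port
    neutron router-list      # then: neutron router-delete <id> (clear gateways and interfaces first)
    neutron subnet-list      # then: neutron subnet-delete <id>
    neutron net-list         # then: neutron net-delete <id>
    systemctl stop neutron-server
    # The per-compute-node part (stopping and disabling the OVS L2 agent) is covered
    # in the host-by-host steps below.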
Anybody who is intimately familiar with OpenDaylight will say, well, shouldn't this all be picked up by dependencies when I install just the odl-ovsdb-openstack feature? And yes, it should. With Helium it doesn't quite work that way, so you need to install a number of things. The base feature is basic services; NSF is kind of a catch-all basket of features which includes, among other things, some of the Neutron northbound stuff. In Helium, and this will change in Lithium, ODL uses AD-SAL, the API-driven service abstraction layer; it's moving to MD-SAL, the model-driven service abstraction layer, where essentially you define a YANG model for the service and that generates the north and southbound interfaces it needs. Then the OVSDB OpenStack feature and OVSDB northbound. Why do we need OVSDB northbound? Because, yes, the ovsdb-server is talking to the OVSDB plugin. OVSDB is kind of namespace-overloaded here, because OVSDB is the database that manages the Open vSwitch configuration on the compute nodes, it's also a plugin to OpenDaylight which manages the network, and it's the southbound interface from that network service to the OVSDB on the switch. So it's got at least three different meanings depending on where you are. And DLUX, this is just if you want to have the GUI. DLUX is the OpenDaylight user experience; it's the web-based UI for Helium and Lithium.

Okay, so after step two, if you go to, well, the slides will be online so you'll be able to get the URL, but essentially if you go to your OpenDaylight host on port 8181, you will see this. You've got OpenDaylight, and it doesn't have any devices that it's connected to. This is what it looks like when OpenDaylight is installed correctly.

Then for each host you stop and disable your Neutron Open vSwitch agent. You don't need it anymore; the L2 agent is going away and you're going to replace it with just the normal ovsdb-server. You stop your Open vSwitch service to clean out your local database; there are good instructions on this in the OpenDaylight wiki. You restart your Open vSwitch service, and you run ovs-vsctl set-manager and connect it to, well, this IP address is just whatever your OpenDaylight control host is. You may need setenforce 0, and that's because this port is not open in the default SELinux policy. It's better if you allow the port traffic, but consider this a bug that will be fixed.

So after that you've got an essentially empty Open vSwitch configuration, which shows that your config is being managed by a manager, it's connected, and there's the bridge br-int; we don't have any br-tun anymore. br-tun goes away because all of the tunnelling is done on the integration bridge. It's also being controlled by a controller on port 6633, which is also connected, and that's all you should see. If we look at the flows, and I've snipped here, this is basically an empty flow table. The OVSDB piece uses OpenFlow tables, so we've got table 0, and there are semantics associated with what each of these tables does. For now it's just table 0; the one thing you can see there is that the controller is connected on table 0, so there will be OpenFlow traffic across that connection, essentially. We'll see what each of these tables means later, but the idea is it's a pipeline where packets come in, they're handled by table 0, then they're sent on to table 10, table 20, table 30 and so on.
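Roughly, and going from memory of the Helium-era instructions (feature names and the exact cleanup steps may differ slightly in your release), the two halves of that look like this:

    # In the OpenDaylight Karaf console, pull in the features discussed above:
    feature:install odl-base-all odl-aaa-authn odl-restconf odl-nsf-all odl-adsal-northbound odl-mdsal-apidocs odl-ovsdb-openstack odl-ovsdb-northbound odl-dlux-core

    # On each compute / network host (6640 is the OVSDB manager port OpenDaylight listens on):
    systemctl stop neutron-openvswitch-agent ; systemctl disable neutron-openvswitch-agent
    systemctl stop openvswitch
    rm -f /etc/openvswitch/conf.db          # clear the local OVSDB state
    systemctl start openvswitch
    setenforce 0                            # only if SELinux blocks the manager connection
    ovs-vsctl set-manager tcp:<odl-host>:6640
    ovs-vsctl show                          # the Manager should report is_connected: true,
                                            # and ODL then creates br-int with a controller on 6633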
That's the way it works. So this is what you should be seeing if your Open vSwitches are correctly connected to your controller. Then you configure Neutron. There's an ini file where you set the mechanism driver to opendaylight and your tenant network type to VXLAN, and this is the API endpoint, so odl-control, or whatever your ODL control host is, port 8080, /controller/nb/v2/neutron. That's just the API endpoint that's exposed by OpenDaylight. And with that you're done; we can stop now.

So going back to the "networking in too much detail" picture, what does the same thing look like with OpenDaylight? Well, the only thing that's changed materially is that we don't have that br-tun anymore. There's another thing that may change: especially with Lithium, you may no longer have an L3 agent, and you may end up having your br-ex on each host and having the routing handled directly by OpenDaylight on br-int. We'll get to a little bit of that later, but that's not the case with Helium, which is what I used as my reference here. So it's essentially the same thing, right? So why would you use it? Well, let's see what the flows look like.

If we look at the bridge on, this would be a compute node, we see there's a VXLAN port that gets created, peering the compute node to the control node. If we have multiple compute nodes, it will peer to them too. So you'll see a VXLAN interface which bridges to each physical node in your OpenStack cluster, and we've got a couple of ports here; those are going to correspond to your instances. This is list interface, just to give some idea of the kinds of information that you have in Open vSwitch. So let's look at list interface: external_ids is an important one, because this is what allows you to associate a MAC address and an OpenFlow port with a specific UUID, which is the port UUID in Neutron. That's what allows OVSDB in OpenDaylight to make the connection between the virtual port in Neutron and the physical port on the vSwitch. And you can see the actual tap device there as well.

If we look at the flow tables, the first thing here is: if traffic is coming in on port 3 and it's got this MAC address, the one ending in 75:95 for want of a better name, then set the tunnel ID to 0x3ea, so it's setting the VXLAN ID. That's table 0; table 0 is where all of the tagging is done. Table 70, which is part of the L3 handling, says: if your destination is this IP address, then set the destination MAC address to the one ending 75:95. So it's doing essentially a reverse ARP lookup. And the last rule here, table 110, is the actual L2 forwarding: if your tunnel ID is 0x3ea and your destination MAC address is the one ending 75:95, then send it out to port 3, which we saw earlier. So that's how traffic is routed to and from the instance.

There are some rules here that I've just selected, because if you looked at the whole flow table it would be, oh yes, a 70-line flow table, and you can't read it because the text is too small. So I've extracted some of the key lines just to give you an idea of the kinds of things that happen in the flows. Here, if we're sending out broadcast traffic and it's on tunnel ID 0x39: if it's local, then we output it to port 2 and to port 1.
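Two quick sketches to make that concrete. First, the Neutron side: the ML2 settings just described end up in ml2_conf.ini roughly like this (the odl-control hostname and the credentials are placeholders, and the option names have shifted a little between releases):

    [ml2]
    tenant_network_types = vxlan
    mechanism_drivers = opendaylight

    [ml2_odl]
    url = http://odl-control:8080/controller/nb/v2/neutron
    username = admin
    password = admin

And second, the flows: reconstructed very loosely, with made-up MAC addresses, port numbers and intermediate goto_table targets, the three unicast rules just described would look something like this in ovs-ofctl -O OpenFlow13 dump-flows br-int output:

    # table 0: classify traffic arriving from the local VM port and stamp it with the tunnel ID
    table=0, in_port=3,dl_src=fa:16:3e:00:75:95 actions=set_field:0x3ea->tun_id,goto_table:20
    # table 70: for a known destination IP, rewrite the destination MAC (the "reverse ARP lookup")
    table=70, ip,nw_dst=10.0.0.5 actions=set_field:fa:16:3e:00:75:95->eth_dst,goto_table:80
    # table 110: L2 forwarding, deliver frames for that MAC on that tunnel to the local port
    table=110, tun_id=0x3ea,dl_dst=fa:16:3e:00:75:95 actions=output:3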
So for that broadcast case we're gonna send it to the instance that's attached and over the VXLAN tunnel, and if it's remote, then we just send it over the VXLAN tunnel. Is that right? And then finally, one of the things that's kind of cool, table 110 again: if the tunnel ID is 0x3a, so this covers all of the instances in that group of instances, that subnet, and the destination MAC address is one of the instances on the other compute node, then it'll be sent over the tunnel. So basically it says, okay, if you're going to MAC address whatever, then I'll send you to the right port, and each of these compute nodes is programmed with the correct, complete ARP table for your network.

One of the things that's kind of cool is distributed ARP. You don't get any ARP traffic on the backplane, because every compute node has a complete ARP table for your network, so whenever an ARP request comes in on any port, you can say, well, I know where that guy is. You just swap some fields around, and you can do all of this with OpenFlow tables: you say, okay, I'm gonna move field one to field two, move field three to field four, then I'm gonna set this value in field one and field three, and I'm gonna just send the packet back out on the same port it came in from. Which is kind of cool; it's one of the nice things. Okay.

So that's where we are with Helium. Coming in Lithium, and this is the OVSDB project in OpenDaylight, they've been hard at work migrating from AD-SAL to MD-SAL, I mentioned that. That's a big chunk of work. There are some historical reasons why we had both, but the general decision of the community is that everything should be MD-SAL from here on, so a lot of the AD-SAL legacy projects are migrating to MD-SAL. There's an aim for feature parity with Neutron, so all of the features that are currently available in Neutron should be available directly, provided by OpenDaylight, including Load Balancing as a Service. And there's native DVR: currently the east-west routing is working and the north-south is not finished, but the idea is that essentially every compute node will have the ability to route traffic directly, so you've got a native distributed virtual router using just OpenFlow on the compute nodes, controlled by the controller. And the Neutron northbound interface has been split out from the OpenDaylight controller. And I'm done.

So I hope that was educational. I have about seven minutes for questions. Do we wanna get people behind the mic, or will I just repeat the questions? Maybe it's easiest if I just repeat the questions. Okay, you and then you. Yep. So the question was: I know that it's currently not possible to migrate from OVS to ODL without dropping the entire config; are there plans to enable that, because it makes it hard to move to ODL if you have to drop everything? I am going to defer to Ed. Ed, are there any plans to do that? So I guess the idea would be to have ODL read your network config and reconfigure the vSwitch using ODL instead, and so allow people to keep their Neutron config and just reflect that Neutron config in an ODL config, to enable migration from OVS to ODL more easily. That should be a doable thing, he said. So there were two questions. One is, what about OVN? I don't know the answer to that, but Flavio does. And the second is, isn't having the entire ARP table on each compute node a scaling issue? And I don't know the answer to that either.
Okay, so Flavio says what OVN will do is roughly the same as what OpenDaylight does, but it's co-located with OVS, so it's moving things a little bit closer to the edge. But on OVN versus OpenDaylight, I'm just gonna suggest that you take this conversation offline; I'm feeling a bit weird repeating it. Yes. So, are security groups actually supported in ODL? Oh, oops, you had the mic, so I don't have to repeat the question. I don't know. Are security groups supported in ODL? Are there plans to do that? There are plans to do that, but it's not supported right now. And then for the north-south routing, right, like internal network to external network: is the NATting actually done using the OVS rules in br-ex, or, you know? So the question is, in north-south routing, oh, yeah, the mic. I don't know, I really, I'm sorry, it's newer development. I'm going to defer again to Flavio. Is the NATting done with the Open vSwitch tables on each... it's with installed OpenFlow rules on the compute nodes? Okay, thanks.

Yes, I'm gonna take this question before I take you, and then that's, I think, gonna be it. Okay, so the question is, feature parity with Neutron: which version of Neutron? I'm deferring to my colleague who's, okay, I'm sorry, I don't know the answer to that. So security groups are still supported on the instance, but what happens is, there's an issue with having iptables apply to an Open vSwitch port. I don't know if that's been fixed or if there are plans to enable that. So you still need the Linux bridge there, but that's created by Neutron or libvirt, I don't know, one of them. With OVS 2.4 and connection tracking, we might not need the Linux bridge anymore.

And then the last question, I'm sure. Yeah, on the command to set the controller as the manager, I think it was set-manager: that's a CLI command right now. Are there any plans to make it more dynamic, like putting it in a configuration file, and if so, do you have any thoughts on which configuration file, like the ML2 plugin ini? I'm really not the best person to talk to about that. The question, and it was one of the questions I had too, is: can I not just sort of register a new compute node somewhere and have it automatically connect to the controller? Maybe that's part of whatever system management layer you're using to manage your OpenStack install, and that would take care of it, I don't know. Okay, thank you all, and enjoy the rest of the conference.