All right, are we about ready to start? OK, welcome to the deep dive on the Modular Layer 2 OpenStack Neutron plug-in. I'm Bob Kukura from Red Hat. This is Kyle Mestery from Cisco. And we're going to co-present.

So why are you here? Why are we all here? That doesn't really look like Hong Kong's beachfront, but it'll do. Many of you may be here because you've heard that some of the plug-ins you might be using, Open vSwitch and Linux Bridge, are being deprecated, and you want to find out what they're being replaced with. That's a valid reason. You may also have heard that ML2 has some capabilities that are interesting; we'll learn about those. And you may not know anything about it, but just want to learn what it does. So we'll get on with that.

So what is Modular Layer 2? It's a new Neutron core plug-in, released in Havana. It's the default plug-in that runs if you run DevStack and don't choose a plug-in. It's modular: it has drivers, so the plug-in itself does very little. It's built up of drivers. There are drivers for different network types, things like VLANs, GRE tunnels, and so forth. And there are drivers for mechanisms. Mechanisms would be things like the Layer 2 agents, or potentially talking to an SDN controller. There are also drivers to talk to hardware, like top-of-rack switches, that might work in conjunction with other drivers. One other thing with the modularity of this plug-in is that Layer 3 routing is handled as a service plug-in, so that gives you the option of leaving it out or picking among different implementations.

So, we mentioned that those existing plug-ins are being deprecated. Their agents are not being deprecated; this is really just the plug-in side. The Open vSwitch, Linux Bridge, and Hyper-V agents are all supported by this plug-in. Those will continue to evolve, and ML2 will support new features in them. And as we mentioned, the existing monolithic plug-ins listed there are being deprecated. Hyper-V is sort of up to its maintainers to decide at what point they want to deprecate it, if they do. Kyle?

So I think the next section of the talk covers the motivations for why we decided to do the Modular Layer 2 plug-in. As this slide shows, before the Modular Layer 2 plug-in, if you were going to use OpenStack Neutron, you were limited to the Open vSwitch plug-in, or the Linux Bridge plug-in, or vendor X, or vendor Y. Neutron effectively supported a single plug-in at a time. So this was a big limitation, I think, that a lot of people saw, especially people that wanted to deploy and use multiple technologies, whether that was multiple different vendor technologies or even multiple hypervisors with different technologies, like Hyper-V and KVM or VMware. The Modular Layer 2 plug-in was designed to solve this problem by allowing multiple mechanism drivers to work at the same time, and by implementing the port binding API, which Bob is going to talk about later, so that when VMs or their VIFs are bound on hosts, they'll be bound to the right driver.

So this slide is speaking maybe to future plug-in writers, people who want to integrate. Before the Modular Layer 2 plug-in, you had your Neutron server with your different plug-ins. Now a new vendor, a new open source implementation, whatever, comes along and says, I want to write a Neutron plug-in. Hey, that's great. Welcome to Neutron, right? But wait a sec, I'm duplicating a lot of code. There's a lot of database code, segmentation code.
If you look at a lot of the plug-ins, a lot of that is duplicated. So now you might be bummed, right? What a pain, I'm duplicating all this work. ML2 was designed to take a lot of that duplicate code, a lot of that duplicate effort, and move it out into a common area, so that people can focus on enabling specific mechanisms for their Neutron implementations, whether that's, like I said, a vendor implementation or an open source implementation.

So these are the three main use cases, I think, that we see with ML2 at this point. First, replacing the monolithic plug-ins, as we've already talked about: eliminating a lot of the duplicate code reduces both development and maintenance effort. That's actually a really huge thing, because it also reduces duplicate test coverage; a lot of the duplicate code had duplicate tests, which increases unit test runtime and leads to other undesirable side effects. Second, ML2 enables new features, things like top-of-rack switch control. There's actually a really great feature that was implemented in ML2 called L2 population, which we'll talk about later, which effectively avoids tunnel flooding and does some interesting things there. And there's certainly more to come; the Icehouse design cycle will see a lot of interesting ML2 innovations as well. And third, like I indicated before, I think heterogeneous deployment is a really good use case and another reason why people should look at the ML2 plug-in for their Neutron deployments, whether that's different as-a-service appliances or hypervisors, or all kinds of new technology. Okay, Bob?

Thanks. So I'm going to talk for a couple of minutes about the Modular Layer 2 architecture. This is advertised as a deep dive, so we'll get into some detail. This is just a quote from the README: it's a framework, emphasizing again that the plug-in itself does very little. It's a framework for these drivers; that's where all the interesting bits are. And it supports a variety of technologies concurrently, supporting complex use cases as well as simple ones.

So how is ML2 similar to the existing plug-ins that it's deprecating? It's basically a superset of the functionality that each of these monolithic plug-ins provides. It's based on the NeutronDbPluginV2 base class. That class manages all the database state and models having to do with the Neutron resources: port, network, subnet. So it's pretty much the same code handling all that; very little is different there. It models the network in terms of provider attributes. As an admin, when you look at a network, you'll usually see, depending on the plug-in you're using, network type, segmentation ID, and physical network attributes that might be describing VLANs or GRE tunnels or whatever. ML2 uses that same approach. It supports the RPC interface that the existing Layer 2 agents use; there was a little work in Havana to align those so they could all talk to the same plug-in, but those were very minor changes done in a backward-compatible way. And it supports almost all the same extension APIs as those existing plug-ins.

So what's different? In order to achieve the goals that Kyle talked about, a number of innovations were introduced. One is the clean separation of the management of network types from the management of mechanisms for accessing those networks.
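To make that separation concrete, here's roughly what the driver configuration looks like in ml2_conf.ini. This is an illustrative sketch: the section and option names are the real ML2 ones, but the values are just an example, not the demo configuration shown later in this talk.

```ini
# Illustrative ml2_conf.ini fragment (option names are real; values are examples)
[ml2]
# Network-type management: which type drivers to load
type_drivers = local,flat,vlan,gre,vxlan
# The type that plain tenant networks get by default
tenant_network_types = vlan
# Access-mechanism management: which mechanism drivers to load, in order
mechanism_drivers = openvswitch,linuxbridge

[ml2_type_vlan]
# Per-physical-network VLAN pools available for tenant networks
network_vlan_ranges = physnet1:1000:2999
```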
Basically, each of those is a different kind of driver, and sets of those drivers are loaded at runtime and available for use. A new vendor plug-in or open source plug-in that needs new network types can add those, but oftentimes the same network type will be supported by a number of different mechanisms, in the same way that Linux Bridge and Open vSwitch and Hyper-V can all talk about VLANs. There's just one VLAN type driver, so the notion of a VLAN is independent of which mechanisms are being used to access it.

And those networks can not only be accessed by one mechanism or the other, as you would with the monolithic plug-ins; they can be accessed concurrently. So it is possible to build a Neutron deployment that has, say, Nova cells, some of which are using the KVM hypervisor and others the Hyper-V hypervisor, where the same VLANs, the same tenant networks, the same provider networks are accessed by different mechanisms at the same time. And from a networking perspective, those are all seamlessly integrated. Also, features such as what Kyle was talking about with L2 population, or controlling top-of-rack switches, things like that, can be packaged and configured as mechanism drivers. They get their hooks into the system that way, to know about the operations going on related to the Neutron core resources. The nice thing there is that those might be features you need, so you configure them in, or they might be experimental and you don't want them in your deployment; that's all controlled by which drivers you configure for the Neutron server.

Another major innovation here is the support for multi-segment networks. With the previous monolithic plug-ins, a network was either a VLAN or a GRE network or a local network. With ML2, it can model networks made up of different segments that have different details in different places, and connecting to the network can be achieved by connecting to any of those segments. It's still one L2 broadcast domain. We'll talk about that in a little more detail in a slide that's coming up.

Port binding is the aspect that was also mentioned: how, when Nova launches an instance, the VIFs, the virtual interfaces, get bound; the interaction with the plug-in in deciding which segment they connect to, what details are involved there, and what Nova's generic VIF driver does with that. We'll talk about that in more detail too. And then again, one more innovation here is that the L3 router is pulled out and treated as a service plug-in. No real change there, but more flexibility for deployments.

So this diagram shows a little bit of what the Neutron server looks like when running the ML2 plug-in. Here's the server code, the ML2 plug-in here, and the various extensions. These are all shared; this is all code that is very similar to other plug-ins. The structure of the ML2 plug-in itself is really made up of a type manager and a mechanism manager. In the Havana release, there's a set of type drivers; GRE, VLAN, and VXLAN are listed here, and there are a few more. The mechanism manager then manages a set of mechanism drivers, and there are a couple of different categories here. We see Open vSwitch, Linux Bridge, and Hyper-V; those are ones whose job is to communicate with the Layer 2 agents. And then there's Arista, Cisco Nexus, and Tail-f NCS (Network Control System).
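As an aside, all of these drivers are looked up by name through Python entry points when the server starts, so packaging a new one is mostly a matter of registering it. Here's a rough sketch of what that registration might look like for an out-of-tree driver; the entry-point namespace is the one ML2 uses, while the driver name, module, and class are hypothetical:

```ini
# Hypothetical setup.cfg fragment for a third-party mechanism driver.
# ML2's mechanism manager loads drivers from this entry-point namespace
# (type drivers are loaded from neutron.ml2.type_drivers the same way).
[entry_points]
neutron.ml2.mechanism_drivers =
    my_mech = my_project.ml2.mech_driver:MyMechanismDriver
```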
Those last three are drivers that can drive top-of-rack switches and configure the core of your network to trunk the VLANs where needed. And then there's the L2 population driver, which has been talked about. All these things also deal with the database and the RPC layer.

So this shows you visually what's going on with multi-segment networks. This whole big arrow is one network; it's got a network ID. Over here, it's VLAN 37 on physnet1. Over here, it's VLAN 413 on physnet2. And tying those together, maybe through different connecting switches or something like that, it's VXLAN with some tunnel ID there. VMs can connect to any segment that they have connectivity to. Right now, Neutron and the ML2 plug-in don't really do anything to manage the bridging between these segments; that's done administratively, and it might be something that's handled by higher-level tooling. But ML2 has the capability to represent networks made up this way and to bind virtual interfaces to any of those segments. There's a new extension called the multi-provider extension, which at least one other plug-in also implements, that lets you access these through the API. Ports are associated in the API with a network, not with a specific segment, and port binding covers how the appropriate segment gets chosen. Next slide, please.

So I said it's a deep dive; we'll show a little code here. This gives you a quick overview of the TypeDriver API. There are some housekeeping methods up top that we don't need to worry about. Basically, what these methods are doing is, in the case of, let's say, VLANs, managing the pools. So allocate_tenant_segment here is what gets called when a normal tenant network is created without any provider attributes. Inside this driver, there might be a pool of VLANs on different trunks that are available, and this is what takes care of, within a database transaction, picking one from the pool and marking it as being used. That will then be associated with this network and become one segment in that network. Other methods can be used to add segments to the network. When you're done with a network, eventually all the segments are released through release_segment so they can be returned to the pool. Then, when provider networks are created, validate_provider_segment is used to check that the right information has been provided, and reserve_provider_segment, as part of a database transaction, actually marks that segment as being in use. In the case of a VLAN tag that could also have come from the pool, you want to make sure it doesn't get allocated to a tenant network as well. Next slide, please.

Even more code. This is the MechanismDriver API. Again, there's some housekeeping, but you'll see here that for each of the core resources of Neutron, there's a set of methods: the create-network methods here, subnet methods, and port methods there. There are also a number of methods related to port binding that we'll get into in a little more detail. For each of the create, update, and delete actions on each of the resources, there's a pre-commit and a post-commit method. The pre-commit method happens as part of the database transaction; that's something you want to execute quickly. Then there's a post-commit equivalent that gets called with the same information; that's where talking to a top-of-rack switch or talking to a controller and so forth can be done, outside the transaction.
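Here's a minimal sketch of that pre-commit/post-commit pattern. The base class and method names are the real ML2 ones from Havana; the driver itself is a toy, and notify_backend is a stand-in for whatever slow, out-of-process call a real driver would make:

```python
# Toy mechanism driver illustrating the precommit/postcommit split.
from neutron.plugins.ml2 import driver_api as api


def notify_backend(event, network):
    """Stand-in for a slow external call (REST, NETCONF, and so on)."""
    print("would notify backend: %s %s" % (event, network['id']))


class SketchMechanismDriver(api.MechanismDriver):

    def initialize(self):
        # Called once when the mechanism manager loads the driver.
        pass

    def create_network_precommit(self, context):
        # Runs inside the database transaction: keep it quick, and
        # raise to abort the operation and roll everything back.
        if not context.current.get('name'):
            raise ValueError("network must have a name")  # placeholder check

    def create_network_postcommit(self, context):
        # Runs after the commit: the safe place for slow calls out to
        # a top-of-rack switch or an SDN controller.
        notify_backend('network.create', context.current)
```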
All these methods, you'll see, take a context argument and very little other detail. There are a number of different types of context; this example is the network context. Basically, what this is doing is providing, in the case of an update, access to the current (new) values of the attributes describing that network, and also the original ones, so on an update you can tell what's changed, plus the list of segments that make up that network. This approach was taken so that this stuff can evolve with less disruption to existing drivers: new information can be added, new methods can be added to the context, and the old ones can be kept working. There's been talk of moving from dictionaries to an object representation of these things, and we can still make the dictionary representation work in terms of the other one, and so forth. So the idea here is to make the API that the drivers implement relatively stable. Next slide, please.

So, port binding. This is kind of the interesting part here. This is really picking which segment is being used and which mechanism driver is being used. From an API perspective, if you look at a port with admin credentials, you'll see that there are binding attributes: binding:vif_type and binding:capabilities, which are things the plug-in sets, and binding:host_id, which Nova sets when it has determined which compute node the port is being plugged in on. So when that host ID gets set, or when something happens with an existing binding (there are other situations that can trigger this), the ML2 plug-in will walk through all the registered mechanism drivers. They're in order, so they can be prioritized. It calls bind_port on each of those; we actually saw bind_port on the previous slide, with the port context passed to the call. Those are each tried in order until one succeeds or all have failed.

When a driver's bind_port gets called, it can look at the information available from that context: the network segments, the host ID attribute, and there are also some hooks to get information about agents. Neutron has this agents_db facility, and there's basically a helper function that takes the host that's being plugged and finds out about the agents of the type you're looking for running on that node, and gets information about them. So the agent-based drivers can look at that. They need to make sure the network has a segment with a network type they can support, and they need to look at whether the agent on that node has a mapping for one of those segments. In the case of VLANs or flat networks, there's a physical network that's important there; that information is available through agents_db, so the driver can look at it and decide whether connectivity is available. If it can bind, it calls the context's set_binding method, right here, passing in which segment ID has been selected, the VIF type, which lets the generic VIF driver in Nova know how to plug it, and the capabilities, which control things like how security groups are implemented and could potentially be extended to handle other things as well. Once one driver succeeds in binding, we're done. If none of them succeed, the ML2 plug-in sets the VIF type to binding_failed. That could be useful information when trying to debug situations.
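Pulling those pieces together, here's a condensed sketch of what an agent-based driver's bind_port can look like. It loosely follows the pattern just described; the agent-type string, the vif_type value, and the connectivity check are simplified stand-ins, and the set_binding arguments follow the Havana-era API as described above (segment ID, VIF type, port-filtering capability):

```python
# Condensed, simplified sketch of an agent-based bind_port; not real driver code.
from neutron.plugins.ml2 import driver_api as api


class SketchAgentDriver(api.MechanismDriver):

    def initialize(self):
        pass

    def bind_port(self, context):
        # host_agents() consults agents_db for agents of the given type
        # on the host Nova recorded in binding:host_id.
        for agent in context.host_agents('Sketch L2 agent'):  # hypothetical type
            if not agent['alive']:
                continue
            mappings = agent['configurations'].get('bridge_mappings', {})
            for segment in context.network.network_segments:
                if self._can_reach(segment, mappings):
                    # Record the chosen segment, tell Nova's generic VIF
                    # driver how to plug it, and claim port filtering.
                    context.set_binding(segment[api.ID], 'ovs', True)
                    return
        # Fall through without set_binding: the next driver gets a try.

    def _can_reach(self, segment, mappings):
        # Simplified connectivity check: VLAN and flat segments need the
        # agent to have a mapping for the segment's physical network.
        if segment[api.NETWORK_TYPE] in ('flat', 'vlan'):
            return segment[api.PHYSICAL_NETWORK] in mappings
        return segment[api.NETWORK_TYPE] in ('gre', 'vxlan')
```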
If binding doesn't happen and you see that VIF type attribute winding up in that binding_failed state, the thing to really look at is: are the agents running where you expect them? Are they alive? That sort of thing. That's the last slide there, okay.

Thanks for the overview on the deep dive and the architecture there. So now I thought we could discuss exactly what's implemented in Havana with regard to ML2. Some of this has been covered a little bit in other slides, but these are the exact things that are done. As far as type drivers go, you can create networks with these types in ML2: local, flat, VLAN, GRE, or VXLAN network types. Not all of the mechanism drivers will support all of these; it's up to the mechanism drivers to decide which types they want to support. But again, as Bob indicated with the multi-segment API support, you could potentially, and bridge is a bad term, but you could bridge these networks together if you needed to, outside the scope of ML2.

Here's the list of the mechanism drivers that are supported as well. We talked through these a little bit before, but I think what you can see here is that there's a nice ecosystem of mechanism drivers already, and we're hoping that in the Icehouse cycle more vendors and even more open source projects decide to implement ML2 mechanism drivers. For example, we're looking at OpenDaylight as a mechanism driver, as an example of an open source project which could integrate with ML2 in the future.

So the next two slides are actually pretty interesting. I thought we'd highlight this L2 population mechanism driver, which is supported in ML2. This was done by some folks at Orange and eNovance, and it's a great example of the type of innovation that we hope ML2 will allow people to do. So what was done here: if you look at this, you have your standard five-host full mesh with tunnels here. It doesn't matter if it's GRE or VXLAN; let's assume this is VXLAN. You can see that the purple VMs are part of one tenant network and the orange VMs are part of another tenant network. And you can also see that not all of the hosts have VMs from both tenants; for example, host one does not have purple VMs, and so forth.

So here's what would happen before the L2 population driver, in this sort of tunnel configuration with the OVS plug-in, for example. If VM A on host one wanted to talk to VM G down there on host four, it's going to have to send an ARP request first. And effectively, that ARP request is going to be flooded out across the tunnel mesh. But some of those hosts don't have any VMs on that network, so there's really no reason for that to have happened. But you've done it anyway; you've flooded that ARP request everywhere.

So with ML2, again, we have the similar diagram here with the mesh. What was done was with the Linux Bridge agent, which now has support for VXLAN as well. So what I'm going to talk about here is specifically supported with Linux Bridge right now; the support for OVS is going to be done in Icehouse. So what happens now? Again, we have the same thing: VM A wants to talk to VM G in this environment. With the Linux Bridge agent and ML2, we instead do a proxy ARP response locally on the host, so we don't send the ARP request out at this point.
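To illustrate the idea with a toy model (this is not the actual driver or agent code): the plug-in already knows every port's MAC and IP address and which host's tunnel endpoint it sits behind, so it can pre-populate each host's tables instead of letting the hosts learn by flooding. All the addresses below are made up:

```python
# Toy model of the L2 population idea; not the real driver or agent code.

# MAC address -> remote tunnel endpoint (VTEP) IP for this network
forwarding_table = {
    "fa:16:3e:00:00:07": "192.0.2.14",  # e.g. VM G's MAC behind host four
}

# IP address -> MAC address for this network
arp_table = {
    "10.0.0.7": "fa:16:3e:00:00:07",
}


def answer_arp_locally(target_ip):
    """Proxy-ARP on the host: reply from the table, never flood."""
    return arp_table.get(target_ip)


def pick_tunnel(dest_mac):
    """Unicast goes straight to the right VTEP, no mesh broadcast."""
    return forwarding_table.get(dest_mac)
```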
And now, when they want to communicate, we're effectively sending the traffic directly rather than broadcasting ARP requests everywhere. The original blueprint for this was called... actually, I forget the exact name. But effectively, this is an optimization for tunnel scalability. Because really, if you look at this full mesh of tunnels, once you start talking about hundreds or thousands of hosts and you start sending broadcast traffic everywhere, you're going to run into scalability problems. So this was designed, again, by the Orange folks to specifically solve that particular problem. Okay, go ahead.

All right, I'm going to talk for a couple of minutes about some of the ideas that have been kicked around during the design summit part of the Icehouse Summit for future work on ML2. Deprecation is one item; you've already heard that the monolithic plug-ins are being deprecated. I think they're actually being removed from the code base either right at the end of Icehouse or right at the beginning of the J release; I'm not sure which. The ML2 plug-in supports all their functionality, and we want to keep it that way, so new features are being added in ML2 rather than to the monolithic plug-ins during the Icehouse phase.

One thing that has been discussed at the design summit, and that work is underway on, is a migration tool. People who have already deployed either of the monolithic plug-ins on Havana will be able to use this tool: for a very short time, you shut down your Neutron server, run the tool, which basically takes the existing Open vSwitch or Linux Bridge plug-in's database and moves that information into the tables needed by ML2, and then you bring the Neutron server back up with ML2 running, and it connects with the existing running agents. So it's a relatively manageable cutover. That tool is going to be worked on during the Icehouse cycle.

Another area: we've got people writing new mechanism drivers for Icehouse, and we also still have people writing new plug-ins for Icehouse, plus a lot of existing plug-ins. There's some discussion going on; there's a design session at, I think, 1:30 today that's going to discuss when it's appropriate to write a new monolithic plug-in versus supporting your technology through an ML2 mechanism driver. Some of the advantages of using a driver: there's less code, and maintenance is a key part of it. Another is that new features, new extensions, integrations of different agents, things like that, are always happening, and typically those have to be updated for every plug-in. Plug-ins can lag behind on that; it's work, and there's a lot more chance that the sets of features supported by different plug-ins will vary. The intent is to do that work as quickly as possible on the ML2 plug-in itself, and any technologies integrated as mechanism drivers get that advantage for free. Also, the support for heterogeneous deployments is an advantage. Oftentimes, if you've got some technology that you're using and you want to migrate to a new one, there might be a short time when they coexist, or you may have a complicated data center where you really need multiple technologies at the same time.
So basically, if you're thinking of writing a new plug-in, you should consider whether implementing an ML2 mechanism driver instead makes sense. Also, if you've got an existing plug-in, you can consider replacing it with a driver or adding a driver.

So, the L2 agents. This is basically a deployment, a bit of a heterogeneous situation, where different hosts have the different Layer 2 agents running on them and the ML2 plug-in talks to all of them. That's all possible right now. One of the things we're looking at is whether to try to develop a modular Layer 2 agent. The idea here is that the same agent could run on different nodes and have drivers supporting the different virtual switching technologies. This could also support special features: SR-IOV, things like that. PCI pass-through is under a lot of discussion in Nova; maybe those things would plug into these agents here and help eliminate the tendency to clone an agent and add a feature. Kyle?

Okay, so now we're going to go through and do a demo of ML2 in action. Really quickly, what this is going to show: multiple ML2 mechanism drivers working together, the Open vSwitch agent mechanism driver and the Cisco Nexus mechanism driver. We're going to boot VMs on multiple compute hosts running Fedora, and we're going to configure VLANs across the virtual and the physical infrastructure.

So this is an incredibly hard-to-read diagram of what the demo is going to be, but you can see that host one is effectively acting as a compute node and also a controller node running the OpenStack infrastructure. Host two is effectively just a compute node. Then we've got our Cisco Nexus switch down there, and we're going to configure things up. Next. There's a VM here on host one. Oh yeah, there we go. VM one is going to appear on host one over there, and effectively a VLAN is going to be added on the VIF for that VM, and also on the br-eth2 port, by the ML2 OVS mechanism driver. Then we're also going to add that VLAN to the trunk port on the Nexus switch down there. And then effectively the same thing is going to happen on host two: we'll bring up a VM over there, add the VLAN on the VIF there, add it on the Cisco Nexus switch as well, and then magically they'll be able to ping each other, I think, is the final thing. There we go, yes. We've completed the standard network test as well.

Okay, so let's get to the demo here. Give me one second. Actually, a show of hands: how many people have used either the Open vSwitch or Linux Bridge plug-ins? Anybody? Okay, how many people have used them in DevStack? Okay, what about in either a production cluster or a test cluster, or are trying to use them there? Okay, so that's pretty good, actually. Okay, just let me mirror this display. There it is. Oh, we're already set up for mirroring. Perfect, okay.

Okay, so here we go. This is your standard Horizon dashboard, but before we get to that, what I'm going to show you is, I think this should show up. That shows up pretty well.
Okay, so here's the interface to the Nexus switch, and what I wanted to show is that right now, we don't have any VLANs created down there other than VLAN 1. And if we look at the configuration for these ports right now, this one is set up as a trunk port but doesn't have any VLANs allowed, and the same thing for this one. Those are the ports connected to the two hosts.

So now we'll go ahead and log in right here. This is the standard Horizon interface. You can see we've got two hypervisors here. We'll go ahead and, in the demo project here, launch a bunch of instances. Let's launch five, just so we can get it to balance them across multiple hosts, because my two compute nodes are a little bit unbalanced. Oh, I forgot to select an image; there we go. Okay, so it's going to build all of these, and we'll go back over to the admin tab so we can have a look at where they land. And you can see, I think you can make out the host column, which is the second column from the left, which shows you which host they've landed on. Four of them ended up on the compute/controller node, because it has more RAM and more resources than the other node, but one of them ended up over there.

So while these are booting, let's flip back over to the Nexus switch. Actually, wait, before we do that, one thing I wanted to show you: if we look at these networks here and click on this network, you can see, as far as the provider attributes go, that's a little hard to read, but it actually says segmentation ID 240. Since we're using VLANs, Neutron picked VLAN 240 out of the pool we gave it for this particular network. And now, if we go over to the Nexus switch, we can actually see that VLAN 240 was created down here. It's a little bit hard to make out, I think, but we can also now see that we trunked this VLAN on this port when the VMs came up, and the same with the other one; they both had VLAN 240 added. If we created another tenant network, it would be assigned a different VLAN from the VLAN pool we gave it.

Now, if we go back over here, let's see. Actually, one more thing I wanted to show is this here. The control node is also running the network services for Neutron, which means it's running DHCP. So you can see right here that the DHCP server for this tenant network got assigned 10.0.0.2. Now we'll go over here and look at the console for this VM. We're just using standard CirrOS images for this, so we'll go ahead and log in. I think I typed the password wrong. There we go, okay. You can see this particular VM got assigned address 10.0.0.3. So we should be able to ping. And this VM, specifically, was on the second host, not the control node, so it's not running the Neutron network services, right? So we can, yay, we did a ping, right? But just to show you that it's really working, if we actually go into one of these interfaces here on the switch and shut down that interface, the ping stops, right? So now we can go ahead and un-shut that interface and get the ping traffic working again. So it was a real demo, right? There we go. I think that may be the best response ever for a ping demo in the history of networking. You guys rock.

So, okay. I think we have a few minutes here, and we'd be happy to take any questions that anyone might have. Are there any microphones available? Oh, perfect. Okay.
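For reference, the provider attributes Horizon was displaying there correspond to the network's admin-visible API attributes. Here's a rough sketch of the demo network as a dictionary; the UUID and name are made up, while the type, physical network, and VLAN ID match what the demo showed:

```python
# Sketch of the demo tenant network as an admin sees it through the API.
network = {
    "id": "c2d41e3a-0000-0000-0000-000000000000",  # made-up UUID
    "name": "private",                             # made-up name
    "provider:network_type": "vlan",
    "provider:physical_network": "physnet1",
    "provider:segmentation_id": 240,  # picked from the configured pool
}
```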
Can you show the configuration? How are the mechanism drivers specified, and the order in which they're specified? Oh, sure, we can take a look at the configuration. Just a second here. So, for the demo, we're using two DevStack VMs, effectively, right here, for simplicity. Let me make this a little bigger. Is that a little better? More? How's that? Okay, good.

So let's take a look at the Neutron ML2 configuration here. This is what the configuration file looks like. Let's go a little bit bigger even. Is that, that's pretty huge. Okay, so we're setting up, in the ML2 section, for example, VLANs. We're also loading all of the type drivers, because that's what DevStack does by default; in a real environment, you can load just the type drivers you want. And you can see we're loading the Open vSwitch, Linux Bridge, and Cisco Nexus mechanism drivers. So the other interesting parts of this configuration are... The VLAN ranges, that's the 240. Yeah, exactly, right there. You can see the network VLAN ranges; that's where we've assigned VLANs 240 to 249 for physnet1. The rest of this is pretty standard. We have some OVS configuration options in there because you could create a GRE network with this configuration as well; you're not limited to just VLANs with ML2. So, yes?

Kyle, great demo, by the way. Thanks. My question was, Bob, you were showing the port binding, where it tries to go through and figure out which mechanism driver to use. Is it possible to pass a hint? That's something for the future, I think; maybe a general quality-of-service kind of approach, or something like that, where you declare things that the mechanism drivers would be able to take into account in deciding whether they can provide the needed connectivity.

In the Grizzly release, the Cisco driver really didn't do much when you did GRE tunnels. Is there any more functionality implemented in it for GRE tunnels or VXLANs in Havana? So, I should preface that by saying that a similar architecture to this existed in the Cisco plug-in previously, and actually still does in Havana. We're currently deciding; it's likely that we'll move everything to ML2 would be my guess, but yeah, in the Icehouse cycle. Yes?

What role will ML2 play in the delivery of QoS and bandwidth reservation? Is it part of the roadmap, or is it relevant in the discussions? Oh yeah, definitely. For QoS, there was actually a design summit discussion around QoS extension APIs for Neutron. That might fall under a broader policy type of model in Neutron, perhaps, but certainly the reference implementation of any sort of QoS APIs would be implemented in ML2. Yeah, the idea is to implement those extensions initially in ML2 if possible, and package the mechanics as a mechanism driver, so it's something you can use if you want, and leave out if you don't need it or don't want to introduce the risk. Great. Yes?

Yeah, really happy to finally see this kind of improvement. It was painful when we used Neutron initially, but finally it seems like we've gotten to the point where all the necessary framework is there and we can really enjoy it. So thanks for that. Just a quick question: when you actually demonstrated this, did you use the OpenFlow protocol or not? For the Nexus switch?
No, no, no, it's using just NETCONF APIs, just like the existing Cisco plug-in. But it is possible to also utilize the OpenFlow protocol. I mean, certainly, whatever API the mechanism driver wants to use to talk to the switches, the controller, whatever, is up to the mechanism driver, yep. Yeah, yeah.

What's different in the ML2 database? Is it a completely new set of tables, or? Well, the core tables are still the same. I mentioned that it uses the same base class that implements the core attributes of the core resources. What ML2 adds is its own specific mapping of networks to segments, the details of each of those segments, and some information about the port binding that gets stored in the database as well. So the idea is that when you do that migration, the network UUIDs and details and all your subnets don't get touched. It's really just the information that used to be in an Open vSwitch or Linux Bridge specific table that gets recreated in the ML2 mechanism or type driver tables.

Can we share information across vendor plug-ins? What type of information? Like the VLAN configuration. Yeah, absolutely. That's what the abstraction of the type drivers and the mechanism drivers allows. The VLAN ID Neutron picked for this network, if, for example, we also had the Arista top-of-rack driver in there, would be shared for that same Neutron network, and it could be configured on the Arista switches as well. Can we do it dynamically? What I meant is, it's taking 240, right? On the trunk port here, do we need to configure everything, or can just 240 be configured on the Cisco port? So the Cisco driver dynamically configures that VLAN for you on the ports, and I believe the Arista driver does as well. Yeah. Thank you.

Yep, any more questions? Okay. Well, thanks for coming, everyone. We hope you learned some stuff. Thank you.