All right, is this working? Guess it's time to start. I'll start by introducing myself. I'm Bob Kukura. I work for the neural networks group at Cisco, where we focus mainly on open source. I've been working on Neutron since, I guess, the Folsom release cycle, and I served as a core for some of that time. I was also the originator of the Modular Layer 2 plug-in. And I'm here today to talk about ML2 port binding. It works.

So in case anybody's not familiar with Neutron at all, I'll quickly describe what a port is. A port is an access point to a virtual network. It's an abstraction that can be used by various kinds of things to connect to that network. Virtual machines, bare metal hosts, containers, appliances providing network services, or the Linux namespaces that provide network services: any of those might attach to ports. What is port binding? In ML2, it's the process by which the core plug-in decides how a port will be physically connected to the network that the port belongs to. The goal of this presentation is to give you an understanding of how that works and why it needs to be somewhat complex and can be a failure mode of Neutron. I'll also give a few tips on how to troubleshoot issues that you might have with port binding, and some thoughts on how this might evolve in the future.

So why do you care? You might be a user or an operator. If port binding doesn't work, your users are not happy. Can I see a quick show of hands of people that fall into that category? That's why they're here: they need port binding to work, and sometimes it doesn't. All right, well represented. You might also be interested in port binding because you're a developer. You might be working on ML2 itself, working on mechanism drivers or other drivers for ML2, or even on other services that integrate with Neutron. Can I see a show of hands of people in that category? Great. You might also just be curious. There must be some of those here. And maybe you don't care, but you're here anyway. I was going to say you can leave now, but I won't.

So, a quick overview of what ML2 is. I don't want to spend too much time on this, but it is a core plug-in for Neutron. It's modular, and it was introduced in Havana. It was originally there just to unify, replace, and eliminate duplicate code between the Open vSwitch and Linux bridge plug-ins that were previously there, and to support their existing L2 agents without requiring any changes to those. So it basically became the reference implementation of the Neutron server functionality and would work with those reference L2 agents. It's composed of drivers. Those drivers were originally mostly in tree. The reference ones are still in tree, but many vendors have out-of-tree drivers, and different open source projects and so forth that integrate with Neutron have out-of-tree drivers for ML2. These are loaded based on configuration information. One thing kind of unique about ML2 is that it supports multi-segment networks. Up through Mitaka, the assumption is that those are bridged at L2, so they're the same broadcast domain, basically. That could be changing in the Newton cycle where we're adding routed networks, but we'll have to look at how that impacts port binding. One big goal of ML2 was to support heterogeneous deployments; we'll talk a lot about that in some of the upcoming slides. So there are drivers for numerous open source and proprietary controllers, switches, fabrics, and all kinds of things.
Here's a quick overview of the ML2 architecture and the drivers. Looking at the right side, you'll see there's the Neutron REST API, RPC handlers, database, and the core plug-in, which in this case happens to be ML2. With ML2, pretty much all the actual functionality is implemented as drivers, and there are three types of drivers. Extension drivers are there to allow extension of Neutron core resources. That certainly can be abused, but it's also how things like quality of service and various other features within Neutron are implemented, so it's a good packaging device that lets you configure those things when you're going to use them and leave them out when you're not. We're not going to cover that too much. Type drivers are the ones that define the ways that virtual networks can be encapsulated within Neutron. Those are kind of independent from the ways in which you access them. So the type drivers define things like VLANs, or VXLAN managed by Neutron agents and so forth, or potentially switch fabrics and things like that. Type drivers maintain pools from which tenant networks are allocated. They validate state when you create provider networks, where you're basically trying to create a Neutron network that corresponds to something that might already exist in your data center. So they manage these pools for allocation of segmentation IDs. And mechanism drivers are what we're going to talk about mostly today. They're basically responsible for configuring the physical infrastructure, and they're responsible for attaching ports to that infrastructure in order to give each port access to the network that it needs.

This is just a quick overview of the sort of basic mechanism driver API. All the different core resources in Neutron have sets of operations for create, update, and delete, and each of those is structured as pre-commit and post-commit methods. I'm showing the ones for port here, since that's what we're most interested in. All of these take as an argument a context object, in this case a port context object. That port context has dictionaries for the current and original values of all the attributes of that object. That allows the driver to see the state of the object, or in the case of an update, the previous state, so it can see what's changed. And in the case of the port, it also has access to the network that the port belongs to.
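To make that basic driver API concrete, here's a minimal sketch of a mechanism driver's port operations. It's not a real driver, and the import path shown is the one used around the releases discussed here (it later moved into neutron-lib), but the shape of the calls is what the talk describes:

```python
# A minimal sketch of a mechanism driver's port operations, assuming the
# driver_api module location used around Liberty/Mitaka; it moved into
# neutron-lib in later releases.
from neutron.plugins.ml2 import driver_api as api


class SketchMechanismDriver(api.MechanismDriver):
    def initialize(self):
        # Called once after the driver is loaded, before any resources exist.
        pass

    def create_port_precommit(self, context):
        # Inside the DB transaction: validate or record state, but don't
        # talk to external systems here.
        port = context.current            # dict of the port's attributes
        network = context.network.current # the network the port belongs to

    def create_port_postcommit(self, context):
        # After the transaction commits: safe to call out to a controller
        # or switch from here.
        pass

    def update_port_precommit(self, context):
        # context.original holds the previous attribute values, so a driver
        # can see exactly what changed.
        if context.current['admin_state_up'] != context.original['admin_state_up']:
            pass

    def update_port_postcommit(self, context):
        pass

    def delete_port_precommit(self, context):
        pass

    def delete_port_postcommit(self, context):
        pass
```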
All right, so one of the big goals for ML2 was to support heterogeneity, and there are various places that heterogeneity can come from. One is the type of devices that are connecting to the network. We all started out with VMs, then bare metal servers came along, containers are certainly of interest now, and there are various appliances that might provide network services: load balancers, firewalls, physical routers, anything like that that's plugging in. These things may have different ways of connecting. Even for VMs, you can have a variety of different hypervisors, and L2 agents may or may not be required with those hypervisors. Open vSwitch and Linux bridge have their own L2 agents that run on Linux nodes. With Hyper-V, there's a similar L2 agent. Things like VMware have been integrated with ML2, maybe not with an L2 agent running on the compute nodes. So there are differences that need to be resolved here, and the mechanism drivers are able to handle this. There are also special capabilities that you might need, like SR-IOV. You might have a cloud where most of your compute nodes are normal, but some have a capability like SR-IOV. Those can be supported through mechanism drivers. You may have a mixture of network infrastructure. You might have different brands of switches. You also may have situations where you don't have uniform connectivity throughout your data center: certain clusters of compute nodes might be connected to certain physical networks and others not. All these things are what make port binding sort of a requirement. It's not just statically coded, we know everything, we're just connecting to the network. There are things that can be different in different parts of the network depending on who's connecting. So basically, when you're deploying Neutron with ML2, you can configure whatever combination of mechanism drivers you need in order to support your environment. It can be simple or it can be very complex. Port binding determines which mechanism drivers handle a port, and as part of that, the network infrastructure gets configured to provide the connectivity that you need.

And some of you are here because port binding can fail. Sometimes it's not possible, sometimes something goes wrong. One of the things here is that when you do have a heterogeneous environment, you need some way of making sure that it is possible to bind; you might ask for something that's just not possible. So I just wanted to mention that Nova availability zones and host aggregates and things like that can be used to target your VM at the environment you need. If you need SR-IOV, those are the mechanisms available for a client to specify where it wants its VM to run. I'll get into this a little later, but if you're asking for SR-IOV and the VM isn't running on a host that has that available, it's not going to work.

Anyway, here's sort of a black-box view of ML2's port binding. There are inputs and there are outputs. The inputs are basically the attributes of the port, and some of those are binding specific. We'll look at that in a bit, but there's an extension in Neutron that's been part of the core for quite a while that defines the binding host ID, profile, and VNIC type. Those are all inputs to port binding, and if any of those change on a port, it basically invalidates any existing binding, and binding needs to be redone for that port. There's also the network that the port belongs to, which is made up of one or more segments. Each of those segments is defined by network type, physical network, and segmentation ID fields; depending on the type, some of those may or may not apply. And there's also the actual topology of the data center and what's connected to what. The host that I'm trying to bind a port on, or the device I'm trying to bind it for, and what it's connected to: that kind of information can be an input to port binding. The outputs from port binding are, again, some port binding attributes, the VIF type and VIF details. Those are determined by the port binding, and in the case of virtual machines, Nova uses them to plug the port. We'll see how that works shortly. The binding can also be made up of levels. We'll get into hierarchical port binding in a bit, but those levels each record the driver that was responsible for that level and the segment that was bound at that level. So that's an output here, though it's not necessarily visible through the API right now. That's something we'll look at in the future. There's also, as a side effect of the port binding, configuration of the network infrastructure to give you the connectivity that you need.
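As a rough summary of that black-box view, here's what the inputs and outputs look like as data on the port. The attribute names come from the portbindings and provider extensions; the values are made-up examples:

```python
# Inputs to port binding: binding-specific attributes on the port
# (the values here are made up)...
binding_inputs = {
    'binding:host_id': 'compute-1',   # where Nova scheduled the instance
    'binding:vnic_type': 'normal',    # or 'direct' for SR-IOV, etc.
    'binding:profile': {},            # extra information passed in with the port
}

# ...plus the segments of the network the port belongs to.
network_segments = [
    {'network_type': 'vlan', 'physical_network': 'physnet1', 'segmentation_id': 101},
]

# Outputs: filled in by whichever mechanism driver binds the port.
# Nova uses these to decide how to plug the VIF.
binding_outputs = {
    'binding:vif_type': 'ovs',                     # or 'bridge', 'hw_veb', ...
    'binding:vif_details': {'port_filter': True},  # driver-specific details
}
# There's also a list of binding levels (driver plus bound segment per level),
# which isn't exposed through the API today.
```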
A quick overview of the history of port binding in Neutron. Back in, I guess it was Folsom or so, when Neutron, or at the time Quantum, was introduced, there was basically configuration in Quantum that determined which core plug-in you were using. On the compute nodes, you'd need to run an L2 agent that corresponded to that plug-in, and in Nova, on that compute node, you needed to configure the right VIF driver to go with it. If you got all that right, it worked fine. It was simple. Then, I guess this is even before ML2 came along, in Grizzly, a port bindings extension was added. The first of those binding attributes we saw appeared in this port bindings extension during Grizzly. Basically, instead of hard-coding information into Nova's config, this let the binding VIF type coming from Neutron tell Nova how the VIF should be plugged; Nova moved to a generic VIF driver that would pay attention to that and do what was needed, whether it was Open vSwitch, Linux bridge, or something else. ML2 came along in Havana, and it had a sort of basic port binding mechanism right from the start. The mechanism drivers that were configured would return a value for the binding VIF type that would then get sent over to Nova, and things would work. Different mechanism drivers could result in Nova plugging ports differently; we'll see how that works. In Icehouse, things evolved a little further. Support for SR-IOV was being added to Neutron and to Nova, and we needed to pass some additional information, so VIF details was added. That let the mechanism driver that binds the port in Neutron pass whatever's needed to Nova, which in the case of SR-IOV is used to configure the physical NIC hardware to attach to the right VLAN and things like that. In Juno, distributed ports were added for DVR. That affected port binding; we'll talk about that briefly in an upcoming slide. Kilo is where hierarchical port binding came along, and that basically allows scalability when using VLANs way beyond the 4K limit that you would have with VLANs on a single physical network. We'll see that in a little more detail.

The next slide gives an overview of how port binding fits into the overall interaction between Nova and Neutron. On the left, we see a controller node; on the right, a compute node. On the controller node, we see the set of Nova services, the Nova API and the various other services that make up Nova, and we see the Neutron server. On the right side is the compute node, where in this case we have nova-compute and some Neutron L2 agent. It doesn't really matter which one it is, but it's managing some sort of bridge or vSwitch or whatever. When the compute node comes up, the Neutron L2 agent comes up, and Neutron has an agents DB facility where agents of various sorts periodically do a report-state RPC. That basically indicates their health and can also pass additional information from the agent to the Neutron server. So in this case, that happens, and that agents DB information is stored in Neutron's database, so even if the server is replicated, all the agents are visible to all the Neutron servers and so forth. So along comes a client who's trying to boot a VM. That request basically comes into Nova's REST API.
And Nova, at that point, is going to either create a port for that VM, or maybe the port was pre-created in Neutron and passed in to Nova. At the point when Nova schedules that VM to run on a particular compute node, it's going to update the port so that we have all the information we need to do the binding: the binding host ID indicates which compute node the VM was scheduled on. So that update or create that sets all that information basically returns the results of port binding. Port binding occurred during that update or create, the results come back, and the VIF type and VIF details are shown there on that arrow. Then Nova goes ahead and tells its nova-compute service on the compute node to launch the VM. Nova compute plugs the VIF using the information that got passed along, the VIF type and VIF details and so forth. At some point, the Neutron agent discovers that the VIF has been plugged into the vSwitch, and it does an RPC back to the Neutron server to get the details that it needs to connect up that VIF. The results come back to the Neutron agent and it configures the vSwitch so that you have connectivity. So that's a quick overview of where this fits in.

We looked previously at the sort of basic API for mechanism drivers, where you saw create port pre-commit, create port post-commit, update port, and so forth. Port binding adds a little bit to that. At the bottom, you see there's a bind_port method on the mechanism drivers. And at the top, on that port context, there are some additional attributes, such as host, which shows you which host we're trying to bind on, and segments_to_bind, which is the set of network segments that are eligible to be bound at that point in port binding. We'll see how that's used in a bit. There's also a method called host_agents that provides the driver with easy access to the agents DB, to find out if agents of the type the driver's interested in are running on that host, and to get the information about them. And then there's set_binding. That's what's actually used by the mechanism driver, if it is able to bind, to specify that it is the one that bound, and to specify the segment that it bound to, the VIF type, and the VIF details.

Here's a quick little diagram showing how that works. It isn't animated or anything like that, but basically, that report-state we talked about keeps the agents DB information up to date. When Nova enables the port binding by setting the host ID on the port, ML2 iterates over the registered mechanism drivers. The mechanism driver corresponding to the agent will check that its agent is running on the host that we're trying to bind on, and it will look at the segments of the network and see that it has connectivity to one of those. The details are on the slide; you can look at that afterward. I don't want to take too much time on it.
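To make that flow concrete, here's a rough sketch of how an agent-based mechanism driver might implement bind_port, loosely modeled on the in-tree agent drivers. The agent type string and the configuration keys are illustrative; the PortContext methods (host_agents, segments_to_bind, set_binding) are the ones just described:

```python
# A sketch of agent-based binding, loosely modeled on the in-tree Open vSwitch
# style drivers; the agent type and configuration keys are illustrative.
from neutron.plugins.ml2 import driver_api as api

AGENT_TYPE = 'Open vSwitch agent'


class SketchAgentMechanismDriver(api.MechanismDriver):
    def initialize(self):
        pass

    def bind_port(self, context):
        # Look up agents of our type on the host we're trying to bind on.
        for agent in context.host_agents(AGENT_TYPE):
            if not agent['alive']:
                continue
            mappings = agent['configurations'].get('bridge_mappings', {})
            tunnel_types = agent['configurations'].get('tunnel_types', [])
            for segment in context.segments_to_bind:
                if self._can_bind(segment, mappings, tunnel_types):
                    context.set_binding(segment['id'],
                                        'ovs',                   # vif_type
                                        {'port_filter': True})   # vif_details
                    return
        # If we never call set_binding, ML2 just moves on to the next driver.

    def _can_bind(self, segment, bridge_mappings, tunnel_types):
        if segment['network_type'] == 'vlan':
            # For VLANs, the agent needs a bridge mapping for the segment's
            # physical network.
            return segment['physical_network'] in bridge_mappings
        # For tunnel types (vxlan, gre, ...) the agent has to support that type.
        return segment['network_type'] in tunnel_types
```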
So port binding has some considerations regarding concurrency and transactions. The Neutron API operations almost always involve some sort of database transaction: updates, creates, deletes, things like that. And mechanism drivers are generally trying to interface with physical hardware, talk to controllers, talk to switches, things like that. Those operations can take time; they can block the process from doing anything else. You don't want to do those inside the transaction. So for the basic API operations we saw, create port pre-commit and post-commit and those sorts of things, the pre-commits are called inside transactions and the post-commits are called after that transaction is successfully committed. In pre-commits you should generally not talk to any external systems; in post-commits, you're welcome to do that.

So that all applies to the CRUD operations. But when port binding occurs, there's also generally a need for the mechanism driver to communicate with something external, and we can't do that inside of a transaction. So basically what happens here is that one of the CRUD operations, a create or update, triggers binding. That could be an update that specifies the host ID, or a create with a host ID, or something that changes an input that affects the binding. When we saw the list of inputs to the black-box port binding, there were things like vNIC type and so forth; if any of those change, it'll trigger rebinding. So basically the mechanism driver sees the port update that's triggering that rebinding. It'll see the pre-commit for that as part of the transaction, and it'll see the post-commit after it, and in those, the new value for the binding VIF type is unbound. So that's the transition from whatever state it was in to the unbound state. If it's a create, it'll see those as create pre-commits and create post-commits. Then the port binding occurs outside of any transaction. The core plug-in will call bind_port on all the registered mechanism drivers until one calls set_binding to indicate that it was successful. It'll proceed until that happens, and it's done once that happens. If it doesn't happen, then we're in a situation where we're not able to bind, and the client will end up seeing, actually not an exception, but the port in a binding failed state. After making those calls and successfully completing the binding process, we need to commit the results. That's a separate transaction, and it's actually seen as a separate port update from the point of view of the drivers. So they'll see the pre-commit and post-commit calls, and in those, they will see that the binding VIF type is now indicating some particular VIF type other than unbound or failed, plus any other details that they care about. We'll see some situations where that's used.

All right, so since there's a gap between those two transactions, other things can happen concurrently. We may have multi-threaded servers or replicated servers, so it's possible that other updates come in and change binding inputs during that time. Or some other thread may also have gotten an update that triggered rebinding for that same port, succeeded before this one, and committed its results. So there's a loop there. There's validation that happens within that second transaction to see that all the inputs are still the same and that nobody else bound first. If that's all OK, then we go ahead and do the update to commit the results. If it's not OK, we may now have an existing binding that we can just use, so we don't have to do anything more, or we may try again in a loop. Just so you have an idea what's going on, particularly if you're looking at log messages and so forth, you may see those loops. There are limits on how many times those loops will iterate, so in some really bad failure mode you might see those limits being hit; all of that should be logged as errors if it ever occurs.
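From a driver's point of view, all of that shows up as ordinary port updates, so a driver that manages external gear typically watches for the binding commit in update_port_postcommit, where it's safe to talk to the outside world. Here's a sketch of that pattern, with a hypothetical switch call standing in for the external system:

```python
# Sketch of a driver reacting to a committed binding. Transaction rule of
# thumb: precommit methods only touch the Neutron DB; postcommit methods are
# where you talk to controllers or switches.
from neutron.plugins.ml2 import driver_api as api

UNBOUND = 'unbound'
FAILED = 'binding_failed'


class SketchTorSwitchDriver(api.MechanismDriver):
    def initialize(self):
        pass

    def update_port_precommit(self, context):
        # Still inside the DB transaction -- no external calls here.
        pass

    def update_port_postcommit(self, context):
        old_vif = context.original.get('binding:vif_type')
        new_vif = context.current.get('binding:vif_type')
        if new_vif in (UNBOUND, FAILED) or new_vif == old_vif:
            return
        # The binding was just committed: now it's safe to push configuration
        # to the hypothetical switch this driver manages.
        host = context.current['binding:host_id']
        self._enable_vlan_on_switch_port(host, context.bottom_bound_segment)

    def _enable_vlan_on_switch_port(self, host, segment):
        # Placeholder for the external call (REST, NETCONF, etc.).
        pass
```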
All right, so I'm just going to quickly walk through a couple of the different cases where heterogeneity impacts port binding. One case would be where you have multiple network segments. Up through Mitaka, like I said, these are assumed to be all bridged. During Newton, there's work to add routed networks, where these segments might actually be different L2 domains with routing happening between them. For now, this is all assuming that if you want to connect to the network, you're able to connect to any of these segments. That may still hold true after routed networks are implemented, but I'm not sure. So like we said, bind_port gets called on a particular mechanism driver, and that mechanism driver iterates over the segments. Let's say it's a mechanism driver for an agent: as soon as it finds a segment it has connectivity to, say the agents DB info for that agent shows a bridge mapping for that segment's physical network, it will bind to it. Some of the details were on that other slide that I kind of glossed over, but I'm just giving an overview here.

So if you have different types of L2 agents on different nodes, you might have Open vSwitch here and Linux bridge there, or Hyper-V, or something else, and you might have SR-IOV in places. We'll get to SR-IOV, but with different types of L2 agents, each has its own mechanism driver. Bind_port gets called on them in the order in which they're configured until one succeeds. So in a typical environment where you had Linux bridge on one node and Open vSwitch on another, either one should be able to bind depending on what agent's running on that node. That's pretty straightforward. With SR-IOV, what happens is the client indicates that it needs SR-IOV by setting a special value for the binding vNIC type; that's one of the inputs to the binding black box, and direct is one of the values that requires SR-IOV. So basically, the SR-IOV driver would be configured to run before other mechanism drivers. If it can bind, it will. If not, then maybe a normal L2 agent mechanism driver would bind. That lets you support situations where you have some SR-IOV capable compute nodes and others that aren't, on the same networks in the same data center.
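Here's a rough sketch of the vNIC type gating an SR-IOV style driver does; the supported values and the VIF type and details shown are illustrative, not taken from the real in-tree driver:

```python
# A sketch of vnic_type gating for an SR-IOV style driver; the supported
# types and the vif_type/vif_details values are illustrative.
from neutron.plugins.ml2 import driver_api as api

SUPPORTED_VNIC_TYPES = ('direct', 'macvtap')


class SketchSriovMechanismDriver(api.MechanismDriver):
    def initialize(self):
        pass

    def bind_port(self, context):
        if context.current.get('binding:vnic_type') not in SUPPORTED_VNIC_TYPES:
            # Not an SR-IOV request; decline and let a later driver (for
            # example a normal L2 agent driver) try to bind instead.
            return
        for segment in context.segments_to_bind:
            if segment['network_type'] == 'vlan':
                # The vif_details are what tell Nova how to configure the VF,
                # for example which VLAN to program on the NIC.
                context.set_binding(segment['id'], 'hw_veb',
                                    {'vlan': segment['segmentation_id']})
                return
```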
All right, so let's talk a little bit about when you have top-of-rack switches and fabrics and things like that, building on the scenarios I was talking about before. You could have a very simple top-of-rack switch, or a switch that all your compute nodes are connected to, with all VLANs trunked everywhere. There's basically a lot of data going places it doesn't need to go. So the first optimization of all that is to say we want the switch to only enable the VLANs that are needed on each port. That's done without really affecting port binding. We can have a separate mechanism driver that manages the switches of the type that it knows about; these switches are all assumed to be connected to the same VLAN trunks. A normal L2 agent mechanism driver would do the binding for that port to a VLAN, and then, like I said, after the binding is committed, there's a port update that the other mechanism drivers managing the top-of-rack switch will see. They'll see that that particular port has been bound and can look at whatever topology information they have to say that the host it's bound on is connected to a certain port on a certain top-of-rack switch, and then enable that VLAN on that switch. Many of the vendor mechanism drivers for ML2 do that sort of thing right now. There's some topology information needed there to know what's connected to what, but it's basically a very useful optimization.

There's still a limit, though: with VLANs, each physical network can only support 4K different tags. One way to get beyond that in these kinds of environments is to have multiple switches that each compute node is connected to, where each of those is a separate physical network with its own 4K space of VLAN tags. It's easy to set up the VLAN type driver to allocate tenant networks across multiple physical networks like that, but it's certainly hardware intensive. With fabrics, typically you might have top-of-rack switches that communicate with each other over VXLAN or something like that, managing the tunnel endpoints between the switches themselves. This is different than the VXLAN support that's in the Open vSwitch or Linux bridge L2 agents right now. But if the switches are implemented as a fabric like that, the default way you would map that to VLANs going to the hosts would still be as one global physical network with 4K possible VLAN tags.

So that's where hierarchical port binding comes in. Rather than have a global space of 4K VLAN tags, you can treat each switch, or even each switch port if the switch is capable of that, as a separate physical network connecting some set of compute nodes to the infrastructure. In that case, the top-of-rack switch's mechanism driver now has to participate in binding. To implement bind_port, it'll look at the static network segment, something representing the fabric that the network is connected to, so there might be some VXLAN ID in there or something like that. What it does then is either create or find an existing dynamic segment on the physical network that connects to the host we want to bind on, and allocate a VLAN tag on that, or use what's already there if a previous compute node is already bound on that same rack, connected to the same switch. Once that occurs, it makes a call, continue_binding (I'll show that on the next slide; I probably should have ordered these the other way around), where it actually specifies the new set of segments over which we could bind. Once that happens, any normal mechanism driver can bind to it. That makes these top-of-rack switch drivers, supporting various ways of managing dynamic VLANs connecting to some fabric, work with anything that can bind to a normal VLAN. So Open vSwitch, Linux bridge, or many other things are able to connect. This is one way of getting past that 4K limit: as long as you don't have 4K networks in use in the same rack or on the same switch, you can scale beyond it.

So for hierarchical port binding, we've basically extended the port context with visibility into the binding levels; you see binding_levels and original_binding_levels there. We still use set_binding from the mechanism driver that finishes the binding. But if a mechanism driver only partially binds, it calls continue_binding on the context, passing in the segment that it's binding to and then the set of segments for the next step of the binding. That's typically just one, but it could be more than one. There are also methods there, allocate_dynamic_segment and release_dynamic_segment, that can be used to create those segments, basically allocating them from pools. And the same bind_port applies there.
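Putting those pieces together, a hypothetical top-of-rack driver's partial binding might look roughly like this; the topology lookup and physical network names are made up, and the depth-first walk that follows the continue_binding call is described next:

```python
# A sketch of hierarchical port binding from a hypothetical top-of-rack
# driver. The topology lookup and physnet names are made up; the PortContext
# methods (allocate_dynamic_segment, continue_binding) are the real ones.
from neutron.plugins.ml2 import driver_api as api


class SketchHierarchicalTorDriver(api.MechanismDriver):
    def initialize(self):
        pass

    def bind_port(self, context):
        host = context.host
        physnet = self._physnet_for_host(host)   # e.g. one physnet per rack/switch
        if not physnet:
            return
        for segment in context.segments_to_bind:
            if segment['network_type'] != 'vxlan':
                continue
            # Find or allocate a dynamic VLAN segment on the physnet that
            # reaches this host, then program the fabric so the network shows
            # up as that VLAN on the relevant switch ports.
            dynamic = context.allocate_dynamic_segment(
                {'network_type': 'vlan', 'physical_network': physnet})
            self._map_fabric_to_vlan(segment, dynamic, host)
            # Partially bind: record the static segment at this level and hand
            # the dynamic VLAN segment to the remaining drivers to finish.
            context.continue_binding(segment['id'], [dynamic])
            return

    def _physnet_for_host(self, host):
        # Hypothetical topology lookup: which switch/physnet serves this host?
        return {'compute-1': 'rack1-physnet'}.get(host)

    def _map_fabric_to_vlan(self, static_segment, dynamic_segment, host):
        # Placeholder for programming the fabric/switch.
        pass
```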
So this is sort of a depth-first kind of thing. What happens is the top-of-rack mechanism driver will look at the host that it's trying to bind for. That maps to a physical network that's specific to that switch. It will find or allocate a dynamic segment for that network on that physical network, and set things up so that the virtual network in the fabric is exposed on that switch port as that segment. It'll then call continue_binding, and then ML2 will basically cycle through the drivers again, asking if anybody can bind with that information. So it's very flexible. There are also limits in there to prevent loops, where something might keep continuing binding on the same thing, that kind of thing. Again, anything like that should result in errors being logged, and in a properly configured system you really shouldn't run into that sort of thing.

DVR port binding: when distributed virtual routers were added to Neutron, these were implemented as Linux namespaces on each compute node. The interfaces where private networks are connected to the router end up needing to be bound on each compute node where VMs run that use that private network connected to the router. Rather than have a separate port on each node, which would kind of break the whole router model in Neutron, they added a capability to do distributed bindings. That happens independently for each host. There are some RPCs that the router uses to tell ML2 the various hosts that we need bindings for, and everything pretty much works the way it does for normal non-distributed ports. Ideally, it should be composable, in that if you have top-of-rack mechanism drivers, you should be able to use those with DVR and so forth. I don't know how much testing has gone on in the various combinations there, but in theory, combinations of that stuff should work.

All right, how am I doing on time here? Five more minutes. All right. A couple of quick tips on troubleshooting issues with port binding. All of this basically requires admin privileges. If you run neutron port-show, look at the binding VIF type attribute; that'll be visible if you're running as admin. If you see binding failed there, or unbound, you don't have a binding for that port. That might be a clue to why you don't have the connectivity you expected, that sort of thing. In the case of DVR ports, you'll see that as distributed, I think, indicating that it's a distributed port, and then it's really not all visible through the API. Once you've done that, you also want to look at the binding host ID value; that should match where the VM is running. You can use neutron net-show to get the segments that make up the network. Currently, only the static segments are shown, so if you are doing hierarchical port binding, there's really no easy way right now to see the dynamic segments that might be created for that network. We hope to fix that soon. So, assuming that you're running with an L2 agent on the host as the way that ports are plugged on that host, the most useful thing to do is run neutron agent-list.
When you run that, you want to look for the L2 agent of the type you expect and make sure that it is showing up as alive and so forth on the node where this port is being bound. You want to make sure that the host name of that agent and the port's binding host ID value are an exact match; if one's fully qualified and the other isn't, they won't match, and that kind of thing can happen in real life. Then you can use neutron agent-show to look at more details on the agent. Assuming you're using VLANs, you want to look at the bridge mappings that are part of the data the agent publishes to that agents DB, and make sure there's a mapping for the physical network of the segment that you're trying to connect to. If it's a tunnel type, you want to make sure that the tunnel types indicate that the L2 agent can support the network type of the segment you'd be binding to. And if you can't resolve it that way, then you're basically going through log files looking for errors. So it's definitely an area that can be improved over time.
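To recap those troubleshooting checks as a quick script, here's a sketch using python-neutronclient; it assumes an authenticated admin client, and the agent type string and port ID are just examples:

```python
# A recap of the troubleshooting checks as a script, sketched with
# python-neutronclient (v2.0 client). Assumes admin credentials; the agent
# type and port ID are hypothetical examples.
from neutronclient.v2_0 import client as neutron_client

# neutron = neutron_client.Client(username='admin', password='...',
#                                 tenant_name='admin',
#                                 auth_url='http://controller:5000/v2.0')


def check_binding(neutron, port_id, expected_agent_type='Open vSwitch agent'):
    port = neutron.show_port(port_id)['port']
    # 'binding_failed' or 'unbound' here means the port has no usable binding.
    print('binding:vif_type =', port['binding:vif_type'])
    host = port['binding:host_id']

    # Look for a live L2 agent of the expected type whose host exactly matches
    # the port's binding:host_id (watch out for FQDN mismatches).
    for agent in neutron.list_agents()['agents']:
        if (agent['agent_type'] == expected_agent_type
                and agent['host'] == host and agent['alive']):
            conf = agent['configurations']
            print('bridge_mappings =', conf.get('bridge_mappings'))
            print('tunnel_types    =', conf.get('tunnel_types'))
            break
    else:
        print('No live %s found on host %r' % (expected_agent_type, host))

    # The network's static segments; dynamic segments aren't visible here.
    net = neutron.show_network(port['network_id'])['network']
    print('segments:', net.get('segments') or {
        'network_type': net.get('provider:network_type'),
        'physical_network': net.get('provider:physical_network'),
        'segmentation_id': net.get('provider:segmentation_id')})
```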
So there have been a number of enhancements to port binding, and things that affect port binding, that have been discussed or are underway. One idea is to generalize the DVR distributed port bindings and use that for things like DHCP servers that run on each node; things like that could be useful also. There's some talk of using it for live migration, actually allowing the port to be bound on the original host where the VM is running and on the one it's migrating to at the same time, to make all that work a little more smoothly. When you have extensions like quality of service, we really need ways to ensure that the port binding you end up with is able to provide those semantics. Even with security groups as an extension, there are cases, like bare metal, where you don't have an L2 agent running to enforce security groups. You'd like to make sure that if your port has security groups associated with it in that case, it's not going to be able to bind. But if you have, let's say, a top-of-rack switch that's able to enforce the security groups as ACLs, then maybe you do want to bind, as long as the security group rules can be enforced by it. So that's an area where we certainly have work to do on ML2. The routed networks work is doing one thing that's really helpful for debugging issues with port binding: it'll make segments into first-class resources, so a lot of that will be easier to deal with. It'll also be possible to add and remove segments, I think, after networks are created, which you currently can't do. And it's also probably going to change port binding in some ways to support routed networks. There's work going on in versioning the binding VIF details, basically making sure that the version of Nova you're running and the version of Neutron you're running agree on what information is needed there, that kind of thing. Again, for debugging problems with port binding, making the results of the binding visible via APIs would be very helpful. One of the reasons binding might fail in certain cases is because it's just not possible: you're asking for SR-IOV, but your VM landed on a host that doesn't have it. It would be nice to have Nova scheduling better integrated with the network topology and with what's going to be possible to bind; right now you're depending on host aggregates and things like that to manage that.

And I mentioned that sometimes the mechanism drivers will need information about topology. There have been proposals to do a sort of generic topology service that would be useful to those mechanism drivers, as well as for plenty of other uses. So that's basically all I have here. I'd be happy to take any questions.

At the binding stage, when something goes wrong, the port ends up in a state with VIF type binding failed. Why does it get the binding failed state instead of returning an error on port creation? Because from an operator's point of view, it's very uncomfortable to deal with this. When a port cannot be bound, why doesn't it return an error on creation? Why does it set the state to binding failed?

I guess the same comment would apply to a port update that either requires or enables binding to occur: should it just fail? I think the original thinking there was that the inability to bind may be sort of a transient thing, and the port might still be created. But it does kind of make sense; that's maybe something we should consider.

Yes, I got it. But for the case where there is a misconfiguration on the L2 agent, basically a wrong config file, a wrong physical segment name, or something like that, the error can be detected at the creation stage. Why is it passed down through all the stages?

I think that's a very good point. I think binding failures being returned as an error, whether from the create or from an update, probably would be useful, because really, seeing the VIF type and that binding state is something you can only do as an admin, and this would be a way to allow non-admin clients to at least know that something's wrong. Right now, that basically gets detected when Nova doesn't see the port transition to the up state. So you do get some clue that something's wrong and can track it down to the binding. But I think that's definitely worth looking into. Thanks. OK, thank you. Are there questions? All right, contact info's there if you do have any. Oh, go ahead.

To add to that: in the manager, there's a call-on-drivers method that loops over the enabled mechanism drivers, and it continues on failure because you need to try all of them, right? But when a particular mechanism driver raises an exception, it just catches it and raises a generic ML2 mechanism driver error, so there's no actual way to get any information to the user about why the failure happened.

Yeah, the model here is that there can be more than one possible way to bind a port. So you need to try all the ways until you either succeed or run out of ways to try, and then declare the failure. If a mechanism driver looks and sees that its agent isn't running on that node, or that the agent running on that node doesn't have connectivity to the segments, it doesn't really mean that binding is failing; it just means that mechanism driver can't bind, but another one might. So it would be very valuable to capture more detail about which mechanism drivers failed for what reasons, and then when binding fails at the end, maybe concatenate that together in some way that's available through the API. Right now, that information would typically be available through logs. Anything else? All right, thank you.