Okay, we'll get started now. My name is Kevin Benton. This is Carl Baldwin. We're going to talk about neutron networks, segments, neutron physical networks, and then the routed networks work that Carl's been pushing, which will bring some changes to how traffic is actually carried by neutron underneath. So I'll go over the current network API semantics: what is a neutron network, what is a segment in neutron, what is a physnet, and how do these all mix together and translate into what actually gets put on the wire when instances send traffic to each other? And then Carl's going to cover the changes to support networks that span different broadcast domains but all look like they're behind one neutron network; these are referred to as routed networks. There's been a lot of discussion about this with large deployers over the last two cycles.

So inside neutron, we have these tenant-facing objects in the API. We have the port, the subnet, and the network, which is what I'm going to talk about right now. And then we also have routers, subnet pools, floating IPs, and security groups, but those aren't part of this talk. A port has all the information that's required for a single device to get connectivity to the network: the IP address that ends up getting assigned to it, the security groups it's a member of, the MAC address, that kind of information. Then there's the subnet, which in neutron just controls IP allocation for ports that are attached to a network. The subnet doesn't have anything to do with defining broadcast domains or anything like that; it's purely an IP allocation thing inside of neutron. So that has DHCP info, the IP addresses that you get, the gateway your instances will use, name servers, all that. Then there's the network, which in neutron is the container for subnets and ports, so it kind of brings them together. Any port that gets attached to a network will get an IP address from a subnet that's also on that same network. And the high-level guarantee that we offer is that a tenant can attach two ports to a network and those two ports should be able to communicate with each other somehow without additional configuration.

So in neutron we have different core plugins, and the way a network is actually implemented underneath, how that traffic is carried across the wire, is really an implementation detail of the core plugin. For this talk I'm going to be mainly focusing on the ML2 core plugin and how it ends up realizing this, how it ends up carrying network traffic, because another core plugin, like VMware NSX or something, could end up doing something completely different. So this is all specific to ML2. Right now, networks in ML2 are implemented as single layer-2 domains. So two ports that are attached to the same network will get IP addresses on the same subnet, and they'll try to communicate with each other by directly sending an ARP request, saying, hey, where's the MAC address of this thing, and then sending a layer-2 frame directly over to the other VM. And it's expected that ports will be able to communicate with each other using broadcast and multicast traffic, because ARP obviously is broadcast traffic. So if this is blocked, you have to do a lot of hacks to make it look to the VM like it has broadcast connectivity. So from a tenant's perspective, if you're just a neutron user, you're not an operator or anything, this is it.
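To make those tenant-facing objects concrete, here is a minimal sketch of that workflow using python-neutronclient. The endpoint, credentials, and addresses are placeholders, not anything from the talk; this is just the basic shape of the API calls.

```python
# Minimal sketch of the tenant-facing workflow: network -> subnet -> port.
# Credentials, auth_url and addresses are placeholders.
from neutronclient.v2_0 import client

neutron = client.Client(username='demo', password='secret',
                        tenant_name='demo',
                        auth_url='http://controller:5000/v2.0')

# The network is just the container; no VLAN/VXLAN details are visible here.
net = neutron.create_network({'network': {'name': 'my-net'}})['network']

# The subnet only controls IP allocation for ports on this network.
subnet = neutron.create_subnet({'subnet': {
    'network_id': net['id'],
    'ip_version': 4,
    'cidr': '192.0.2.0/24',
    'gateway_ip': '192.0.2.1',
    'dns_nameservers': ['203.0.113.53'],
}})['subnet']

# The port gets a MAC, an IP from the subnet, and the default security group.
port = neutron.create_port({'port': {'network_id': net['id']}})['port']
print(port['mac_address'], port['fixed_ips'])
```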
This is all you see for the network. You don't see all the nitty-gritty details of VLANs or VXLAN or GRE or anything like that. But for the operator/admin use case, there are a lot of different ways to configure how a neutron network ends up being carried across the wires in the data center, and that's where these physnets and segments come in.

So a segment contains the encapsulation details used to carry L2 traffic between the compute and network nodes. Each network in ML2 has at least one segment associated with it. A segment just has some basic details: the network type, which defines how the traffic is going to be encapsulated, so you could have VXLAN, VLAN, or there's also a special option called flat, which means there's no encapsulation at all. Then there's the segmentation ID, which is the detail for the encapsulation protocol; in the VLAN case, this is the VLAN tag that gets used to tag the traffic, and in the VXLAN case it's the VNI header. And then the physical network is an identifier that defines which interface the agents will ultimately end up using to send the traffic on, so it's supposed to correspond to a layer-2 network in the operator's network. Right now, multiple segments on the same network in ML2 are assumed to be bridged together. This is a pretty rare use case outside of hierarchical port binding, which I'll talk a little bit about. But there can be multiple segments in a neutron network, and we just assume they're connected together right now; there's nothing explicit about that, and that's one of the changes that Carl will be talking about for routed networks.

So we had segments, and segments refer to physical networks for the VLAN and flat use cases. A physical network is just an identifier that represents a real operator network that ethernet frames are going to be sent out on. The agents have mappings from physnets to a specific bridge they're going to send traffic on. So if they have a port that's connected to a network, and that network has a segment that says this belongs to physnet "external" or something, the agent will have a mapping that says, okay, anything that goes to "external", I know I have to send out this external bridge, or something like that. For overlay protocols like VXLAN and GRE, there's no physical network provided, because those just use the compute host's kernel networking to send the traffic; it's all encapsulated in IP, so there are no bridge mappings in that case. And each physnet is meant to be a completely separate layer-2 domain. So the way that works in neutron is that VLAN IDs can be duplicated between different physnets. You can have 4,000 VLANs all used up on one physnet and then have another physnet with the same 4,000 VLANs. They're meant to be different networks, corresponding to different interfaces on the compute nodes.

So this diagram just shows how everything fits together. We have the networks on the left-hand side; these are the user-facing constructs, and users don't see any of the segments or physnets or anything like that. Network A is a VXLAN-based network, so it has one segment associated with it of type VXLAN with a segmentation ID of 10,000, and that means the VNI will be 10,000 on all the VXLAN packets it's sending around. And since that's an overlay protocol, it has no corresponding physnet.
Networks B and C are VLAN type. They each have one segment, they each get their own VLAN ID, and they correspond to a single "internal" physnet. So when traffic is being sent by the agents, traffic for network B and network C will both be going out over the same physical interface when it actually goes into the data center; they'll just have different VLAN tags. Network D is an example of a network that has two segments associated with it: a VLAN one that corresponds to the internal physnet and a flat one that corresponds to this alt physnet. So you might have a use case where you have some SR-IOV-based nodes or something like that, and they don't want to use any VLAN tagging; they just want traffic sent untagged onto the network. In that case, those ports might be bound to segment five, which goes out via a separate physnet mapping on the compute nodes. And then the external network is on a completely different physnet. It is also VLAN type and it also uses VLAN 100. So even though the external network and network B have the same VLAN tag, they correspond to different physnets, so they shouldn't overlap, because they should go out different interfaces on the compute nodes.

The way this looks in the server-side configuration is that you have network_vlan_ranges for ML2, which defines a physnet and a set of VLANs that can be used for each physnet. In this case we have the internal physnet, and this says you can use VLANs from 10 to 4,000 when it's automatically allocating segments. For the external physical network, it can use VLANs from 100 to 110. For VXLAN, it's just a global pool of VNIs that it's allowed to use. And for the flat network, since there are no tags, this just defines, hey, there's this physnet, it's called alt physnet, and that allows it to be used when you're creating a network. On the agent side, the only thing the agent needs to know is how to map these physnets to some interface that's actually local on the compute node. So this is saying anything that's going on the internal physnet needs to be sent out on this bridge, br-tenant, and anything that's going to a network on the external physnet needs to be sent out this br-ex bridge. All the agent needs is this mapping to tell it which interface to send traffic out of when it's going to one of the corresponding physnets.

So segments are currently created as part of the network creation process, and an admin is allowed to specify the segment details. If you're an operator and you want to set up networks specifically to use certain VLANs, or to not use a VLAN, for example, you can pass these extra provider options. In this case, we're creating network B from the previous slide. So we say neutron net-create with provider network type VLAN, physical network internal, and segmentation ID 100, and that'll set it up as a VLAN network on the internal physnet with a VLAN tag of 100. If you're a regular tenant creating a network, you won't be allowed to do this. And if this isn't specified, there's another ML2 option called tenant_network_types that defines the default type ML2 tries to use. If you have that set to VLAN, for example, it will try to create a VLAN segment for every neutron network that's created, and it'll allocate from that pool of VLAN IDs that are set up for each physnet.
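As a hedged illustration of the admin-side creation just described, network B from the diagram, VLAN 100 on the internal physnet, here is roughly what that looks like with python-neutronclient. The credentials, endpoint, and names are placeholders; regular tenants cannot pass the provider attributes, and ML2 then falls back to tenant_network_types and the configured ranges.

```python
# Admin-only sketch: create a provider VLAN network on a specific physnet.
# Credentials and names are placeholders; corresponds to something like
# network_vlan_ranges = internal:10:4000,external:100:110 in the ML2 config.
from neutronclient.v2_0 import client

neutron = client.Client(username='admin', password='secret',
                        tenant_name='admin',
                        auth_url='http://controller:5000/v2.0')

network_b = neutron.create_network({'network': {
    'name': 'network-b',
    'provider:network_type': 'vlan',          # how traffic is encapsulated
    'provider:physical_network': 'internal',  # which physnet / bridge mapping
    'provider:segmentation_id': 100,          # the VLAN tag on the wire
}})['network']

# A regular tenant would omit the provider:* attributes and ML2 would
# allocate a segment automatically from the configured pools.
```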
So since a neutron network can have multiple segments, and they can even be the same type, how do we know which one is used for a given port? When Nova creates a port for a VM, or is using a port for a VM, after it has decided which host to place the VM on, it populates the binding:host_id field in the port object. That triggers a process inside ML2 called port binding. Each driver iterates through; they see the host that this port is being bound to, and each driver has its own knowledge about what agents it has running on that host and what the capabilities of that agent are, and can then determine whether it has the capability to bind the port to one of the segments. So, for example, the OVS driver will get a request and see that the port is being bound on host A, and then it'll go look at its agents and say, hey, I have an agent running on host A, this segment uses physnet X, and my agent has bridge mappings for physnet X, so I know it can reach that specific physnet; so it'll say, okay, I can bind the port, and that port becomes bound to that specific segment. Every usable port in ML2 has been bound by this process, and a port is bound to a specific segment inside of a neutron network. None of this is visible to the tenant, but as an admin you'll be able to see it by looking at neutron port-show; you'll see some provider details.

Then, in the case of hierarchical port binding, multiple ML2 drivers can become involved in getting a single port connectivity to a network. The use case for this is typically an operator who wants VLANs to be used for the communication between their compute nodes and their top-of-rack switches, because then there's no encapsulation overhead that has to be processed on the compute node. Once traffic gets up to the top-of-rack switch, it switches to VXLAN and is then carried over the network, so you get the performance benefits of no encapsulation on the compute node and you still get the scaling benefits of the number of VNIs available in VXLAN. I don't want to talk too much about this; if you're interested, there's a detailed talk later today by Bhakra called Understanding ML2 Port Binding, and that's a link to it there.

So then the question comes up: what if operators don't want to offer L2 connectivity? Hierarchical port binding is something that requires drivers for the top-of-rack switches in your data center, which might be managed by a different team that doesn't want to give up control of that. Even without hierarchical port binding, scaling L2 networks beyond 4K VLANs is not possible without switching to an overlay like VXLAN, and some operators don't want to have overlays running on their network. Additionally, some operators don't want tenants to use virtual networking at all; they want tenants to just pick from one network, or maybe two networks, like production and development, and just have these giant networks that tenants attach to. And if they do that with just an L2 network, they have these really huge broadcast domains that become really expensive and difficult to scale. So that brings in this topic of routed networks, which I'm going to let Carl talk about now.

All right, thanks, Kevin. So for me, this goes back about three years. When I first started at HP, we were standing up Neutron in HP's public cloud.
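As a brief aside on the ML2 port binding step described above, here is a simplified, purely illustrative sketch of the shape of an agent-based mechanism driver's bind_port. This is not the actual OVS driver code; the helper at the bottom is hypothetical, the 'ovs' vif_type is used as a literal, and the import path shown is the Mitaka-era one.

```python
# Illustrative only: the general shape of an agent-based ML2 bind_port.
from neutron.plugins.ml2 import driver_api as api


class SketchMechanismDriver(api.MechanismDriver):
    def initialize(self):
        pass

    def bind_port(self, context):
        host = context.host  # filled in by Nova as binding:host_id
        for segment in context.segments_to_bind:
            physnet = segment[api.PHYSICAL_NETWORK]
            if self._agent_on_host_can_reach(host, physnet):
                # Bind the port to this specific segment of the network.
                context.set_binding(segment[api.ID],
                                    'ovs',                   # vif_type
                                    {'port_filter': True})   # vif_details
                return

    def _agent_on_host_can_reach(self, host, physnet):
        # Hypothetical helper: would look up the agent reported on 'host'
        # and check whether its bridge_mappings include this physnet.
        return True
```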
And we built this large L2 network for our external network with floating IPs, and spanned this L2 network across multiple availability zones and probably over 1,000 compute hosts. And we were planning to have a lot of floating IPs, or at least be able to have a lot of floating IPs, on that network. And in looking at how that scaled, and the issues that we had in building out that L2 network, and where we were going to hit scaling issues with MAC tables and ARP tables in the routers and the switches, I started to think: what if we could route these? Instead of having a large L2, what if we could have a routing infrastructure and route it down to a very small L2 segment within the network? So that got me thinking. And over the last three years, I've thought about this in a lot of different ways and made a few proposals, and this is the one that stuck.

So a year ago in Vancouver, the large deployers team, which is an operators working group within the OpenStack community, got together. I think it was the first time, in Vancouver; somebody can correct me if I'm wrong. And a lot of them talked and said, you know, we've got this unique network architecture that we want Neutron to map to, and they described their routed architecture. And another operator would say, well, we have our unique infrastructure that we want Neutron to map to, and they would describe their routed infrastructure. And we finally figured out that they all kind of looked the same. So the large deployers team put together an RFE and filed it in Neutron, and that's when I picked up on it. I said, hey, that's what I've been thinking about for a couple of years already, so let's do it. That was some wind in my sails, and I've been trying to move it forward ever since.

The basic idea is, whether it's a rack or a cell or a couple of racks, you've got a router on top of multiple segments within your network. Here I've just shown a simple example with four racks, each with a top-of-rack router. Each one has its own L2 domain and its own subnets, and on top of that, there's routing. I didn't show anything complex like a spine-leaf or a Clos network; it's just that big router, whatever kind of routing you want, that moves the packets between the L2 segments at layer three.

So I'm just going to walk you through the changes. Kevin's given you a good background of how physnets and segments work; I'm going to walk you through the changes we're planning to make so these physnets and segments map to the routed topologies that some of us want. The first step was to take the segment, which already existed. ML2 has the concept of a segment, and a couple of other Neutron plugins have the concept of a segment; in fact, anything that implements the provider net extension or the multi-provider net extension in Neutron has this concept of a segment. So the first thing was to promote the segment from an implementation detail of the network to a first-class thing with a UUID and its own endpoint in the API, something you can go query. You can say, list me all the segments that belong to this network. And by Newton we'll be able to create and delete segments on a network. Right now you have to create all your segments at once at network create time.
But we're going to relax that a little bit and allow you to create and delete, so segments can come and go from a network over the lifecycle of the network. And we're exposing this as a service plugin. So now we have an endpoint, we have segments, they have UUIDs, and this is something we can work with.

The next problem you run into is that segments have always been envisioned as multiple segments within one single broadcast domain. So a network is still one L2 domain, but we have different segments, and it was just kind of assumed, and Kevin mentioned this, that they were all bridged together somehow, so that if you have two ports on the network they can get their L2 traffic between them, and that was a guarantee. Well, with routed networks that's no longer a guarantee; we're not going to guarantee L2 connectivity. But how do we tell the difference? How do we tell the difference between an existing network with multiple segments and L2 connectivity throughout, and a new routed network with multiple segments and no L2 connectivity between them, only L3 connectivity? The way we're going to do that is with the subnets that sit on a network. Subnets will still associate with a network, but we can optionally associate a subnet with a segment. If you think about it, the subnet is, as Kevin said, just a container of addresses; it's really just the L3 addressing that you're using on the network. If you take that subnet and associate it with one segment instead of just the network as a whole, that's how we tell the difference between L2 connectivity throughout and L2 connectivity that's confined to a segment. Because now we've associated the subnet with a segment, that subnet is only viable on that segment, so broadcasts within that subnet are going to be confined to that segment. So now subnets optionally link to segments.

And we're going to add one new API attribute, and this is actually the only new API that's tenant-facing. We'll add a new l2_adjacency attribute on the network that reflects this, so that end users can tell what to expect from a network, whether to expect L2 adjacency or not. That's just going to be a true/false flag on the network. We're not going to store it; we're just going to compute it on the fly. It'll be false if subnets are associated with segments, and true otherwise.

So now we've got subnets, and we've got multiple L2 domains, each with its own addressing. The next problem is, how do we tell which hosts are connected to which segment? Because now it matters: when we create a port on a network, we can't just give it any IP address from the network; we've got to know which segment that port's going to be used on. So the next model element we're adding to Neutron is this map from hosts to segments. With ML2 we kind of get this for free, because Kevin showed you the bridge mappings that you configure the agent with. Those bridge mappings map the agents to a physnet, and we can use them to derive a host-to-segment mapping. So with ML2 and agent-based operation, all we need is a piece of code to receive those bridge mappings, derive the host-to-segment mapping, and store that in a new table mapping hosts to segments. This mapping will be exposed through the API, somehow associated with segments; I haven't figured out exactly how that's going to work, but an admin will be able to see the hosts that are mapped to any given segment within a network. Now, what happens when you create a port on a network?
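Before getting to port creation, here is a purely illustrative sketch of how that host-to-segment mapping could be derived from the agents' reported bridge mappings. The function name and data shapes are hypothetical, not Neutron's actual code; the real logic lives in the segments service plugin.

```python
# Illustrative sketch: hosts whose agents have a bridge mapping for a physnet
# are considered connected to every segment on that physnet.
def derive_host_segment_mappings(agents, segments):
    """agents:   [{'host': ..., 'configurations': {'bridge_mappings': {physnet: bridge}}}]
    segments: [{'id': ..., 'physical_network': ...}]
    """
    segments_by_physnet = {}
    for segment in segments:
        segments_by_physnet.setdefault(segment['physical_network'], []).append(segment)

    mappings = []
    for agent in agents:
        bridge_mappings = agent['configurations'].get('bridge_mappings', {})
        for physnet in bridge_mappings:
            for segment in segments_by_physnet.get(physnet, []):
                mappings.append({'host': agent['host'],
                                 'segment_id': segment['id']})
    return mappings


# Tiny demo with made-up data:
agents = [{'host': 'compute-1',
           'configurations': {'bridge_mappings': {'rack1': 'br-rack1'}}}]
segments = [{'id': 'seg-a', 'physical_network': 'rack1'},
            {'id': 'seg-b', 'physical_network': 'rack2'}]
print(derive_host_segment_mappings(agents, segments))
# -> [{'host': 'compute-1', 'segment_id': 'seg-a'}]
```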
I already mentioned that when you create a port, you can't just give it any IP address. In fact, if you create a port without port binding on a multi-segment routed network, you can't give it an IP address at all. So we have the concept of deferred IP allocation. You can do a neutron port-create, give it the network ID, and you'll get a port back; you'll know the UUID of the port, but it won't have any port binding information, and it actually won't have any L3 information either, because at that point we don't know enough to give it an L3 address. This is something that tripped up Nova a little bit, but in the Newton release Nova will allow this deferred allocation, so that we can coordinate Nova scheduling and Nova port binding with the IP allocation. Then we can wait until later, until a port update that actually gives us some host binding information, to allocate the IP address.

So that gets into Nova scheduling a little bit. This is actually one of the trickier parts of this project, but it's turned out to be a really cool opportunity to work cross-project between Nova and Neutron. The problem here is that now you've got your IP addressing available differently throughout your network, and you might have one segment that's used up all its IP addresses while the other segments still have some availability. Well, when you do a nova boot, you don't want your instance to land on a segment that doesn't have any IP addresses available. So we want to make Nova scheduling IP-availability aware, I guess. And this slide doesn't do the complexity any justice, because there's a lot of complexity in here. But what we've decided to do, and we talked about this yesterday in the Nova/Neutron design session, is that the Nova conductor will have kind of a three-stage process: a pre-scheduling step, a scheduling step, and a post-scheduling step. The idea is to stick everything we can do for preparation in the pre-scheduling step, before we get into the scheduling step, because that's the part that does allocations and maybe commits some resources, and that's really the part that could be contentious and needs to be atomic, isolated, all that stuff. And then in the post-scheduling step, that's the part where, okay, now we've exited the critical, possibly contentious section; now we can go claim the resources and get things set up before we actually push this out to the compute host and boot the VM. So with deferred IP allocation, we defer that IP allocation to the post-scheduling step. Nova will actually do a port update in the post-scheduling step and look for an IP address to come back in the response. In most cases everything's happy, it gets an IP address, and it goes out to the compute host. But there is a possibility of a race where Neutron may say, yeah, sorry, I just gave out my last one, I'm out of IP addresses, and we get an exception and we just kick that back to the pre-scheduling step. It's a pretty tight loop, and it shouldn't happen often; it really will only happen when there's that last final race for the last IP address on a segment.

The other thing that will affect you when you go to routed networks is DHCP scheduling. If you're using the DHCP agent as it is today, you'll find that DHCP will be scheduled to some random segment, and your VMs may not be able to contact that DHCP server because, guess what, there's no L2 connectivity throughout the network.
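Going back to the deferred IP allocation flow just described, here is a hedged sketch of what it looks like from the API side using python-neutronclient. The network UUID and host name are placeholders, and binding:host_id is an admin-only attribute that Nova normally sets during scheduling.

```python
# Sketch of deferred IP allocation on a routed network. Placeholders throughout.
from neutronclient.v2_0 import client

neutron = client.Client(username='admin', password='secret',
                        tenant_name='admin',
                        auth_url='http://controller:5000/v2.0')

routed_net_id = 'ROUTED_NETWORK_UUID'  # placeholder

# No host binding yet, so no segment is known and no fixed IP is allocated.
port = neutron.create_port({'port': {'network_id': routed_net_id}})['port']
print(port['fixed_ips'])   # expected to be empty: allocation is deferred

# ... Nova schedules the instance and picks a compute host ...

try:
    port = neutron.update_port(
        port['id'], {'port': {'binding:host_id': 'compute-rack1-07'}})['port']
    print(port['fixed_ips'])  # now carries an address from that host's segment
except Exception:
    # If that segment just handed out its last address, Nova kicks the request
    # back to the pre-scheduling step and tries another host.
    raise
```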
So we do have work in progress to enhance the DHCP scheduler so that on a routed network, we make sure that all of the isolated segments are covered by a DHCP agent. So your network is no longer just scheduled to one, or maybe two, DHCP servers for redundancy; your network is scheduled for every segment within the network, and this means we need a DHCP agent available and connected to every single one of the segments within the network.

So as an admin or an operator, the way you'll roll out routed networks is: first you'll want to prepare your physical environment. We've gone with one physnet per segment, so, say you're going for one segment per rack, you create your VLANs within each rack, and, guess what, since the physnets are unique per segment, you can reuse VLAN numbering between your physnets, which is kind of cool. You also need to plan for a DHCP agent for each physnet, and then your choice of routing architecture is up to you. The next thing is to configure Neutron. Kevin showed you the bridge mappings in ML2. One of the things I'm really excited about is that we're not just doing this in ML2; we're actually doing this in OVN in parallel, and we intend to release this feature equally for both plugins. They'll each have their own way to provide the mappings from hosts to physnets or segments. Once you have those in place, there's the network creation, and Kevin also showed you this one, but you're going to want to create multiple segments under the network instead of just a single segment. Forgive me, I actually don't know if that's possible in the Neutron client; I never looked, I had a to-do to look, and I've always used the API. In the API, the analog to neutron net-create is a POST to the network endpoint with a JSON blob with all of the attributes, and one of those is a list of segments. So you have a JSON attribute, segments, that's a list, and each entry is the physnet, the segmentation ID and, what am I forgetting, the segmentation type. That tuple defines your segment. And by the time this is released in Neutron, we'll have the ability to also come and create and delete these after creating the network. Then there's creating your subnets, and this is almost the same as it is today, except that with each subnet create you want to include one more attribute, and that is the UUID of the segment you want that subnet to be on. So if you've got your four segments, like in my diagram at the beginning, you'll do four subnet creates, at least, and with each one you'll provide the segment ID of one of those four segments. And then it's pretty much ready: you have one single network; you're not presenting multiple networks to your end users to allow for partitioning the network, you're presenting one network. Like Kevin said, the production or the development network, or the red and the blue network, or however you want to distinguish those networks, which simplifies things for the end user.

Now, let me see how much time is left. Did you want any more time? Okay, cool, I'll go really quick through some stuff. This is kind of a stretch goal for Neutron: can we have floating IPs on these routed networks? Floating IPs, that is, IPs that aren't confined to a segment but can be used anywhere within the network. And the answer is yes, we can do this, hopefully in Newton.
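To make those rollout steps concrete, here is a rough sketch against the raw Neutron REST API with python-requests, since the talk notes that client support was uncertain at the time. The endpoint, token, physnet names, and VLAN IDs are placeholders; the segments list on network create and the segment_id attribute on subnet create are as described above for the routed networks work.

```python
# Sketch of creating one routed network with a VLAN segment per rack physnet,
# then one subnet per segment. Endpoint, token, and IDs are placeholders.
import requests

NEUTRON = 'http://controller:9696/v2.0'
HEADERS = {'X-Auth-Token': 'ADMIN_TOKEN', 'Content-Type': 'application/json'}

# VLAN IDs may repeat across physnets because each physnet is its own L2 domain.
net_body = {'network': {
    'name': 'routed-net',
    'shared': True,
    'segments': [
        {'provider:network_type': 'vlan',
         'provider:physical_network': 'rack1',
         'provider:segmentation_id': 100},
        {'provider:network_type': 'vlan',
         'provider:physical_network': 'rack2',
         'provider:segmentation_id': 100},
    ],
}}
net = requests.post(NEUTRON + '/networks', json=net_body,
                    headers=HEADERS).json()['network']

# Segments are now first-class objects with their own endpoint; list them to
# get their UUIDs, then confine one subnet to each segment.
segments = requests.get(NEUTRON + '/segments',
                        params={'network_id': net['id']},
                        headers=HEADERS).json()['segments']
for i, segment in enumerate(segments, start=1):
    subnet_body = {'subnet': {
        'network_id': net['id'],
        'ip_version': 4,
        'cidr': '10.%d.0.0/24' % i,
        'segment_id': segment['id'],   # confines this subnet to one segment
    }}
    requests.post(NEUTRON + '/subnets', json=subnet_body, headers=HEADERS)
```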
I'd be ecstatic if we could get this into Newton, and we're going to try our best. We're going to use BGP to do this. In Mitaka we already have a BGP speaker in Neutron, and that BGP project will be enhanced to be aware of segments, so that instead of just associating BGP with a network, we can associate BGP with a segment and peer BGP with each one of those routers that sits at the top of a segment. So when we have floating IP subnets, and those are allocated and bound to a port, we can actually use BGP to route them straight to the right segment and the right next hop for that floating IP. And we can actually do this with or without a Neutron router. Today, the floating IPs are always on one side of the Neutron router and the fixed IPs are on the other side; you pair the two and the Neutron router takes care of NAT, network address translation. We still have to take care of the address translation somehow, and there are a couple of ways we could do it; we haven't actually decided which way to go. One is address translation on the port. Another, which one of our operators actually does today, is to use allowed address pairs on the port and let that floating IP address go straight into the VM; the VM then understands that it has that floating IP address and can use it to communicate. And then another tricky part is DVR. We haven't completely worked that out, but routing into tenant networks with DVR will eventually work with the magic of BGP and all of that cool stuff. So that's all I have. I'd like to open it up for questions for the last five minutes or so.

Excellent presentation. It's probably an eye-opener after a long time that we have such a huge improvement in Neutron. I would like to ask a question. What is the impact of... that is, the ports for SFC, service function chaining. Did you look at that? What is it going to impact? I'm just speculating, I don't know, so I'm trying to understand that, if you have any thoughts on it.

I'll be honest, that's not something I've thought about much. Have you thought about that?

Are you talking about VLAN-aware VMs, to allow instances to connect to different networks using VLAN tags? Is that what you mean?

No. Right now I'm thinking of networking-sfc, the project which supports service function chaining. Since you are dealing with the port and you are doing a deferred binding, one thing is that the IPAM is external IPAM; I don't know what the impact of that is. The other thing is, if we do service function chaining, we need to chain the VMs. So certainly I see there is a great advantage to this: you put the network first, that is, you put the graph of the network first, and then put the VMs on top of it. That's good thinking. But how do we do it? What's the impact? I think we need a little more understanding.

Yeah, I think we need more details about how networking-sfc does that, and we'll need to sync up with them to see if it'll be compatible with this at all.

Yeah, I'll ask Cathy and Fourie to work with you. But excellent, okay. One of the best things I have seen in at least five summits, I would say.

Oh, thank you. Thank you. And go ahead.

Hi, thank you for the presentation. So what you're presenting is taking one layer-2 network, slicing it into pieces, and assigning each another ID so the host can attach to it.
It kind of reminds me of QinQ provider bridging, but you didn't mention it. So how does your solution differ from QinQ? Thank you.

So with QinQ, you still have a VLAN that you're extending across the whole data center, right? In this case, the L2 boundary ends at the top of rack; it's all routing from the top of rack onwards. So your L2 domain is limited to this particular rack, you have a router at the top of the rack, and then it's fully routed after that. So it's different from QinQ in that you're not extending L2 across the whole data center.

Yeah, or within some small domain, whatever the operator wants to set up. But yeah, it's L2 within a very small domain and then L3 beyond that.

Yeah, thanks. Very interesting work as well; I echo that earlier comment. Just to be perfectly clear, the tenant is no longer specifying their own IP addresses? That's the first question. The second question is, can you talk about where isolation in the tenancy model is enforced with this model versus the existing OVS segment model?

Right. So a tenant will still be able to specify an IP address, but it will need to be passed to Neutron, or rather it will need to be taken into account in the Nova scheduling. The deployers who are asking for this, they're not as interested in multi-tenant networking with the L3 agent routers and tenant private networks; this is more of a shared provider network model. Although we are looking, and this last slide shows it, at being able to take a router and connect its external gateway to a routed network and have that work too. In that case, isolation is handled the same way it is today, because those isolated networks are tenant private and they're behind a router and all that. But this is more for a shared, I-just-want-an-IP-address-on-the-internet kind of model. And security groups, yeah, those are still there, right.

So with the flexibility of floating IPs that you're affording with some of the new work that's going into Neutron, do you anticipate any overhauls or changes to the way the metrics and the telemetry data for those things are going to be generated and so on?

That's possible, I don't know. Do you mean like the packet counters that are collected at the L3 agent?

The usage statistics: how many floating IPs are available, that kind of stuff.

I think the number of floating IPs available should stay pretty similar. For the packet counts, for example, we'll have to make sure that, wherever we end up realizing the translation, the iptables counters that are in the L3 agent right now are carried over to wherever it's done, maybe on the L2 agent or something. One more?

Yeah, so maybe you answered this in the second-to-last question, but what I'm seeing here is a really kick-ass underlay network; is there no way that we could have tenant-isolated overlays on top of this?

Well, yeah, the overlays, you can still have tenants create networks, and you can still have them create routers and connect those routers. This would be the shared external network that you set your router external gateway to.

Okay, so the external network is one per rack, instead of having a single L2 external network across all my racks?

Yeah, that's how this works, and, guess what, it also works if I just want to boot my VM straight onto the external network.

Okay, but we're still restricted to having to live migrate within the single external network.

Right.
But yeah, this is, I mean, this is...

Right, this does restrict live migration and any other move of a VM.

Right, but it's within a rack, so it's okay. I mean, this is what we've been waiting for for years.

Right.

This is kick-ass. Cool, thank you.

Thank you very much, and thanks to Kevin. He's really driven this a lot. I presented a lot of ideas, he's told me I'm crazy, and I come back with another idea, and we finally settled on something good, so I'm really excited about this, and thank you for coming.