Hey guys, good afternoon to everyone here. Today we are going to talk about DVR's progress from Juno to Kilo. For people who attended the Paris Summit, you might have attended a session on DVR where we gave a high-level overview of DVR and its architecture. Today we are going to cover some topics on DVR that we missed in the Paris Summit, because I wanted to go deeper into DVR and show you how the architecture fits into the existing model of L3 agents, the namespaces, the compute node, and the network node. There were a lot of questions that came up after our discussions and also through the community; people wanted to understand more about the internals of DVR and how it works, so we wanted to make sure we convey those internals to the audience here. We will also cover some of the topics we touched on from Juno to Kilo. We gave a future plan for DVR when we presented at the Paris Summit, and from there we took all the backlog items and have been working on them. We will try to cover which items we completed in this cycle and which items we foresee finishing during the Liberty cycle. So the agenda here would be, as I said: we are going to give a deep dive on DVR and talk about which services are supported by DVR with respect to Neutron. When we presented in Paris during the Juno cycle, VPN as a service was not working with DVR; that is one of the things we achieved during the Kilo cycle, so right now VPN as a service works with DVR. Then we will talk about the VLAN support for DVR, which was not in the Juno release and which we have done for the Kilo release. Multiple external network support is also there in the Kilo release. And we have legacy router migration to DVR for admins who want to migrate their legacy routers to DVR; we accomplished that during the Kilo cycle. Around 70% of the work we did in the Kilo cycle was to improve stability in DVR, because we had seen a lot of instability caused by multiple patches added on top of DVR, so we have been working to reduce those stability issues. Then we are going to talk about DVR performance in this session, to give you a heads-up on how much we have improved DVR performance with respect to Neutron, and then we will go over the future plans for DVR. Before diving deep into the DVR discussion, I just want to give you an overview of what DVR is and what legacy routing means. There might be some new audience members here, so for their benefit I wanted to go over an overview. If you look at the legacy routing scenario, the picture you are seeing here is a legacy router where all the routing happens on a centralized network node. In this case, when VM1 wants to talk to VM2 (shown here on compute node 1 and compute node 2, so they are on two different compute nodes) and they try to ping each other, all the traffic has to hit the network node and then come back to compute node 2 where VM2 resides. This is the case even if VM1 and VM2 reside on the same compute node.
Even if they are neighbors on the same node, they still cannot reach each other without hitting the network node. So that was one of the major issues we were seeing: inter-subnet traffic has to go to the network node and come back. Then again, for floating IPs, all the floating IP traffic you want to forward for your VMs has to go through the network node, which makes the network node a single bottleneck, because you can only run so many namespaces on the network node. The next one is the default SNAT, because the default SNAT is also done on the network node, where you provide VMs with external gateway connectivity; that traffic also goes through the centralized network node. The metadata agent and all those things likewise work through the network node. In the case of DVR, what we have done is basically, instead of having the router just on the network node, we have duplicated the router on the compute nodes. Wherever the VM is, we now start the router on that particular compute node, so you have a local router on the compute node to route traffic from one VM to another VM. The traffic does not go all the way to the network node just because it wants to talk to another VM that is its own neighbor or resides on a different compute node. This is what we call east-west traffic. All this east-west traffic does not need to go to the network node and come back; it just goes from compute node to compute node. And again for floating IPs, as I said, in order to reduce the bottleneck on the network node, we moved the floating IPs to be local on the compute nodes. For any north-south traffic you want to send back and forth, we implement the floating IP handling in a separate namespace within the compute node. So any traffic coming in from outside that wants to reach the VM can hit the compute node directly, reach your VM, and go out; it does not need to go to the network node and then to the compute nodes. The only thing we retained on the legacy network node is the SNAT, because we still wanted the SNAT portion to be centralized. Carl already spoke about this in this morning's discussion, but I am reiterating it here: the SNAT is still on the centralized network node. The reason we kept it there is that some of the services we had were centralized, basically VPN as a service. We cannot distribute VPN as a service everywhere while distributing the SNAT, so we wanted certain singleton services to run on a centralized node while other services can be distributed. Giving that some thought, we retained the SNAT on the centralized node. So only default SNAT traffic will hit the network node and go out; the rest of the traffic can go out directly from your compute node. Can you forward the slides? Before I get into the deep dive, for the benefit of the audience here: if you missed the Paris Summit overview of DVR, I have posted a link here that takes you to the video recording of that session, so you can take a look at the link and go through the overview.
So before diving deep into the DVR discussion, I just want to give a refresher on the configuration of DVR. Basically there is configuration we need to do in the neutron.conf file and also configuration in l3_agent.ini. The one in neutron.conf, router_distributed = True, is a global flag that allows the plugin to configure distributed routers; it has nothing to do with the agent, it is just on the plugin side. On the agent side, you can start the L3 agent in three different modes. In order to retain the old legacy router functionality, we provide an agent mode called legacy: agent_mode = legacy exactly replicates the old centralized router model and has nothing to do with DVR. Then agent_mode = dvr_snat is basically a combination of legacy mode and what we call the service node or network node: it can act as a network node that hosts both legacy routers and distributed routers. (The third mode, agent_mode = dvr, is what runs on the compute nodes.) Which routers end up where can be controlled by admins; we have admin-only privileges for that. Admins can override, because we do have APIs to create a router with distributed set to true or false, but that is only for admins. Tenants are not allowed to modify it; when tenants create routers, they follow the global flag that has been configured. If router_distributed = True is configured in your data center, then any tenant coming into that cloud infrastructure who creates a router will get a distributed router. But admins can override it: if they want to create legacy routers they can, and an admin can also migrate or convert a legacy router to a distributed router. Then there is another configuration file for the L2 agent. On the L2 side there is a flag enable_distributed_routing = True, and for tunneling we need enable_tunneling = True. For local_ip you provide the IP of the local compute node host, and for tunnel types, what we supported initially was VXLAN. We have also tested with GRE, so it works with VXLAN and GRE, and we have also implemented a VLAN version, so all three types are taken care of. We have a dependency on L2 population, so make sure that you turn on L2 population for these setups. All of these configurable items can be enabled by a single flag in DevStack. If you are using DevStack to test DVR, there is a single flag called Q_DVR_MODE; you can set Q_DVR_MODE to either dvr, dvr_snat, or legacy, and based on that it will configure all these config files for you, make your life easier, and let you test your DVR infrastructure.
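To pull that together, here is a minimal sketch of the configuration just described, assuming Kilo-era option names and file sections; treat it as illustrative rather than authoritative and check your own deployment before copying:

    # neutron.conf (server side): tenant routers become distributed by default
    [DEFAULT]
    router_distributed = True

    # l3_agent.ini on the network/service node
    [DEFAULT]
    agent_mode = dvr_snat      # hosts SNAT, can also host legacy routers

    # l3_agent.ini on every compute node
    [DEFAULT]
    agent_mode = dvr

    # OVS (L2) agent configuration
    [agent]
    enable_distributed_routing = True
    l2_population = True
    tunnel_types = vxlan       # GRE also works; VLAN support was added in Kilo
    [ovs]
    enable_tunneling = True
    local_ip = <this host's tunnel endpoint IP>

    # ml2_conf.ini: DVR depends on the l2population mechanism driver
    [ml2]
    mechanism_drivers = openvswitch,l2population

    # DevStack shortcut in local.conf
    Q_DVR_MODE=dvr_snat        # or dvr, or legacy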
Now, moving on to the deep dive portion of DVR. The picture you are seeing here is an all-in-one, single-node deployment, basically for test purposes. The blue circles you see are all namespaces; those are the specific namespaces created for DVR. The orange circle is the legacy router qrouter namespace. So you have a qrouter namespace, a FIP namespace, and an SNAT namespace. This node is an all-in-one node, so it supports both legacy and DVR; you need to start it in dvr_snat mode, so the L3 agent running on this node runs in dvr_snat mode. If you look at the agents running, you have the DHCP agent, L3, L2, metadata, and Nova, so it is kind of all in one single box to test; that makes it easy for you to test and validate. The one thing you will not see on a single node is VXLAN tunnels getting created and flow rules getting added. When you really want to test the east-west traffic, you need a two-node setup; on a single-node setup you cannot see what is happening, because those things are not there to observe. So the better thing to do is go to the dual-node setup. In the dual-node setup, what I am showing here is a network node running dvr_snat on the right side, and on the left side a compute node. On the compute node you have the L3 agent running, the L2 agent running, metadata running, and Nova running. Previously, with legacy routers, you did not run L3 agents on the compute nodes, but with DVR you must run L3 agents on all the compute nodes; that is how we configure the routers and other namespaces on the compute node, so it is a basic requirement (you can verify this with the agent listing shown below). And if you look at the compute node, we have one qrouter namespace and another qrouter namespace; together these represent three different networks. Say the red and yellow networks belong to tenant A and the pink one belongs to tenant B: tenant B has his own qrouter namespace and tenant A has his own qrouter namespace, but tenant A and tenant B share a single FIP namespace on the compute node. For the traffic to go out and come in for the VMs, you use one single FIP namespace per external network, and it is shared between the tenants; that is how it works. So for a single external network, what you always need to check is: is there a FIP namespace created on the compute node? If the FIP namespace is created and all the iptables rules are configured in it, then your floating IP traffic will flow for any VMs you have configured with floating IPs. Also on the compute node, the thing you need to make sure is there is the br-ex bridge. You need to create a br-ex bridge for external traffic if you intend to provide north-south connectivity; if you only want east-west, then you do not need to create a br-ex. For any SNAT traffic from a VM that does not have a floating IP, the traffic will go through br-int, across the patch/veth pairs to br-tun, reach the SNAT node through its br-tun, go to br-int there, and then go out through the SNAT namespace. That is how the traffic goes from the compute node to the network node and then out.
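As a quick sanity check on that requirement, a hedged sketch (the exact output columns depend on your client version): list the agents to confirm an L3 agent and an OVS agent are alive on every compute node, and see which L3 agents are hosting a given router.

    neutron agent-list
    neutron l3-agent-list-hosting-router <router-name-or-id>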
So I just showed you in the picture which namespaces are to be created, but once you are running DVR on a dvr_snat node, just run a sudo ip netns command and you should be able to see all the namespaces that are there. This one shows that I have a single-node configuration in this case, I have configured a floating IP for a VM, and I have an SNAT namespace, a qrouter namespace, a FIP namespace, and a DHCP namespace for the particular network that has been configured. If you see all these namespaces configured, you can go into each of them and see which ports are in each namespace; I will go more in depth and show you which ports live where. So for the single-node case, just as I showed, this is an internal diagram that shows all the virtual ports and all the veth pairs currently configured for DVR and for legacy routers; it is an all-in-one. If you look at the left side, the legacy qrouter namespace created between br-int and br-ex is the qrouter namespace in the case of a legacy router. In the middle, what you are seeing is a qrouter namespace and a FIP namespace: if one of your VMs requires a floating IP, you have a qrouter namespace and a FIP namespace. The last one you are seeing is an SNAT namespace; any of the VMs here that need public access without a floating IP just use the SNAT namespace. In this case you can see the qrouter namespace and the FIP namespace are two different namespaces, and the way you tie them together is through a veth pair. The pair you see in between, fpr-xxx and rfp-xxx: fpr denotes FIP namespace to router namespace, and rfp denotes router namespace to FIP namespace, so the prefix tells you exactly what the intention of the veth pair is. These two veth interfaces internally carry link-local 169.254.x.x addresses on a small dedicated subnet; they are not seen outside, but internally we use those IP addresses. The one you see down below, fg-xxx, is what we call the floating IP agent gateway port. The reason we created that port is that we are providing north-south connectivity from a compute node, and the compute node does not have a gateway port there to send traffic outside. Since we do not have a gateway port, we need to replicate some sort of gateway port: if SNAT is created or configured on your network node, we just create another port on the same external subnet, call it the floating IP agent gateway port, and bind it to the host where your compute node resides. So that fg port is the floating IP agent gateway port, and it consumes an IP address from your public subnet. We are actually working with Carl and other people in the community to get away from that logic, because we do not want to consume one public IP address per compute node. When BGP-based routing is enabled and routed networks are introduced, we will slowly get away from that, but for the time being we consume one IP address for the fg port.
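For reference, a hedged sketch of that inspection on a dvr_snat node; the UUIDs below are placeholders for your actual router and network IDs:

    sudo ip netns
    # Expect names along these lines:
    #   snat-<router_uuid>        centralized SNAT namespace (dvr_snat node only)
    #   qrouter-<router_uuid>     replicated router namespace
    #   fip-<external_net_uuid>   FIP namespace, one per external network
    #   qdhcp-<network_uuid>      DHCP namespace

    # Look at the rfp-/fpr- veth pair tying qrouter and fip together,
    # and at the fg- (floating IP agent gateway) port in the FIP namespace
    sudo ip netns exec qrouter-<router_uuid> ip addr show
    sudo ip netns exec fip-<external_net_uuid> ip addr show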
So this is basically the high-level overview of the namespaces: you see the SNAT namespace, you see the qrouter namespace (which is also what the legacy routers use), you see the FIP namespace, and you also see the DHCP namespaces. I think this gives you an overview of how DVR is achieved on a single node; in the next slide I will show you what is on the compute node. So here, I showed you sudo ip netns, and if you want to get into the SNAT namespace and see what is in there: in an SNAT namespace you see the qg port, which is the gateway port that was already there for the legacy router. Apart from that, for DVR purposes, for every subnet you add to the router, for however many networks you attach, there will be an additional sg port, an internal port that we create. The sg port denotes the SNAT gateway port; these are internal-facing ports with private IP addresses, so that we can route traffic from the SNAT namespace to any of the subnets sitting behind it. So if you have a 10.0 network you will have a 10.0.something port, and if you have a 10.1 network you will have a 10.1.something port on that SNAT namespace. Next, the DVR namespaces on a compute node: if you have just a network node and a compute node, this picture shows clearly what you can expect on the compute node. On a compute node you have br-ex and br-tun, and you have a qrouter namespace that routes traffic between the red and the green network; if for some reason you want to assign a floating IP to the green VM, then a FIP namespace is created, you have the fg port in the FIP namespace, you have the veth pair connecting the two namespaces, and your traffic then flows out through br-ex. This next one shows the case where you have, say, two tenants or two routers, and how they are connected: these two tenants, which in the first picture were shown all in one, are now connected through the FIP namespace, and the single FIP namespace is shared between the two tenants' qrouter namespaces. Again, this one shows the internals of the namespaces. If you look at the FIP namespace, as I showed you, it has the fg port, the floating IP agent gateway port, which has the public IP address, and then there is the internal veth interface that is created, which, as I said, consumes a link-local 169.254.x.x address; that is what is shown here. Then the qrouter namespace: the additional interface we added for DVR is the rfp port, which ties the FIP namespace and the router namespace together, and this is the one that shows the 169.254.x.x address on the router side. These are the things you can verify after you create a port or a router: go to br-int, and ovs-vsctl show will show you which ports are attached, and then you can correlate those ports with the ports in the namespaces, which will clearly tell you everything is working as expected.
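A hedged example of that verification; router ID and interface names are illustrative:

    # SNAT namespace: expect one qg- port on the external network
    # plus one sg- port per subnet attached to the router
    sudo ip netns exec snat-<router_uuid> ip addr show

    # Correlate the qr-/sg-/fg- ports above with what is plugged into the OVS bridges
    sudo ovs-vsctl show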
So I think that is all I had on the namespaces. On the services: as I mentioned, for the Kilo cycle we did finish off the VPNaaS support for DVR. We also worked on a blueprint for supporting east-west firewall as a service, but we could not achieve that in time because there was not enough consensus on the idea, so we will be continuing our effort on east-west firewall as a service during the Liberty cycle; the rest of the services are currently supported by DVR. This namespace diagram shows you how the services are implemented. If you look at LBaaS here: LBaaS does not have any correlation with routers, so you can associate a VIP port, and DVR currently supports LBaaS. The only issue we had with LBaaS was when a floating IP is associated with the VIP port: we need to make sure that if that floating IP is on a particular compute node, the floating IP agent gateway port is allowed to be created on that compute node. We have fixed that problem, so it should work as expected. In the legacy qrouter namespace on the centralized network node, you have firewall as a service and VPN as a service running within the qrouter namespace. What we have done for VPN as a service with DVR is that the StrongSwan or Openswan driver now runs within the SNAT namespace; it does not run within the qrouter namespace as it would in the legacy case. For firewall as a service, we have support for north-south, as I mentioned, not for east-west: for north-south, for the default SNAT traffic that goes from a VM through the SNAT namespace, you have firewall as a service in the SNAT namespace, and we also have firewall as a service in the qrouter namespace, applied on the rfp port for any north-south traffic going that way. I think that is all I have on the services and the namespaces.
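One way to see the VPNaaS placement just mentioned on a dvr_snat node is to look at which processes are attached to the SNAT namespace; this is a hedged sketch, the router ID is illustrative, and the exact daemon name depends on the driver you configured:

    # PIDs of processes running inside the SNAT namespace; with VPNaaS configured,
    # the StrongSwan/Openswan (IKE) daemon should show up here rather than in qrouter-*
    sudo ip netns pids snat-<router_uuid>
    ps -fp $(sudo ip netns pids snat-<router_uuid>)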
So Vivek will now cover the VLAN part of DVR. Thank you. As you might know, in the initial Juno release we only covered VXLAN support for DVR, and in Kilo we managed to plug the gap for VLAN. So as of the Kilo release, what you can expect from distributed routers is that you will be able to route packets across two networks that are both VLAN based; you can route packets across two networks where one is VLAN and the other is VXLAN; you can also have a distributed router route packets across two networks where one is VLAN based and the other is GRE based; and we continue to retain the ability to route across VXLAN networks and across GRE networks from the stable Juno code base as well. I will just quickly go through how DVR works with VLAN in this release. As you can see, if VM1 wants to communicate with VM2, and VM1 and VM2 are on two different VLAN networks, VM1 sends the packet to its default gateway, which is the router. That packet is captured by the local distributed router, the qrouter namespace, which is the replicated router, and it routes the packet. When it routes the packet, it puts the packet back onto the green network; the packet on the green network comes into the integration bridge, goes to the physical bridge, and, as you can see, on the common data network, on the wire, the packet carries the green network's segmentation ID and a source MAC that is a DVR unique MAC, what we call the local MAC (LMAC). It is not the source VM's MAC; it is a translated, per-node unique MAC. We do that because the router namespaces are replicated: if we transmitted the router interface MAC on the data network, what we would encounter in the underlay is MAC table thrashing. So we take the router interface MAC of the replicated router and translate it to a unique MAC assigned per compute node. The routing happens at the source, the originating node, and after that the packet is switched all the way to the destination VM; routing happens on the source node, and the packet gets switched to the destination node. Similarly, when VM2 wants to communicate with VM1, the routing from VM2 to VM1 happens in the qrouter on that node, and the packet is then switched all the way back to VM1. On traffic handling at L2: I just want to mention that L2 is a bit of a challenge in DVR because of BUM traffic handling, so there are certain things we take care of at the L2 level to ensure we do not mess up the data network because of the replicated routers. As you can see, we block ARP requests generated for the replicated router interfaces, we block packets destined for the DVR interface MACs, and we ensure that we translate router interface MACs to unique MACs in egress packets and do the reverse translation for ingress packets. Okay, so this is a bit of a deep dive that should be easy to follow. In a typical VLAN flow, two bridges participate on a KVM compute node: one is the integration bridge and the other is the physical bridge. These are the tables within those bridges that are responsible for forwarding a distributively routed packet. Say VM1 tries to send a routed packet to VM2: the packet comes to table 0 on the originating compute node, it goes to the local router for routing, the local router routes the packet and puts it back into table 0, and at that point the NORMAL action is taken and the packet reaches the physical bridge on the originating node. As you can see, the physical bridge transfers the packet that comes from the integration bridge to table 1, and here what we do is replace the source MAC, which would be the distributed router interface MAC, with the unique local MAC for that node. From there we do the usual translation of swapping the local VLAN to the segmentation VLAN configured for the tenant network, and then the packet goes out with whatever tenant VLAN is configured on the destination network. That is how egress to the cloud happens; the tables in blue are the new ones we introduced in order to support VLAN for DVR. For ingress from the cloud, the packet comes in on the Ethernet data trunk port and hits table 0 on the physical bridge; it is forwarded to a learning-blocker table, because since we are using unique local MACs we do not want those MACs to be learned by our bridges, so we skip the learning process and then forward the frame to the integration bridge. The integration bridge figures out that this is a DVR-routed packet because it carries a unique MAC, so it forwards it to a DVR-to-local-MAC table, which is again a new table we introduced for VLAN. That table takes the job of swapping out the unique MAC and putting back the right router interface MAC; it also strips off the VLAN and puts the packet directly onto the VM port. For the VM it is completely transparent that distributed routing happened; it sees the packet as though it came from its own default gateway. That is how we accomplish ingress to the cloud.
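If you want to poke at these tables on a running node, dumping the OpenFlow rules on the two bridges is usually enough; a hedged sketch, where the physical bridge name depends on your bridge_mappings and br-eth1 is just an example:

    sudo ovs-ofctl dump-flows br-int
    sudo ovs-ofctl dump-flows br-eth1
    # Look for rules matching dl_src / dl_dst on the per-node DVR MACs
    # (allocated from dvr_base_mac, fa:16:3f:... by default) - those are the
    # MAC-translation and learning-blocker tables described above.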
So I will leave it to Adolfo to cover stability and performance. My name is Adolfo; I did some of the DVR testing during the development, so I will talk about a couple of things that we have in there now. The first one is multiple external networks: as Swami mentioned before, we are actually supporting that with DVR. What that basically means is you can have multiple external networks; you will get a FIP namespace, that is, the floating IP namespace, on each one of the compute nodes where you configure it. The FIP namespace will not be duplicated per tenant, it is shared across tenants, but it will be duplicated across compute nodes, because it is DVR; that is the right default, and it is working now. Another thing we introduced was the migration of legacy routers to DVR routers. If you really think about it, it is actually quite complicated: we are moving namespaces around, ports around; we are taking the router and putting it on the compute nodes, so it is a bit of a challenge to do. However, the process itself is pretty simple from the CLI perspective. You do have to take a couple of things into consideration. One of them is that you have to disassociate any services associated with that legacy router, especially because some of those services may not work with DVR because of their nature or other features. If you try to migrate a legacy router to a DVR router and you do not disassociate those services, you will get an error message and you will not be allowed to do it, so you will be reminded of that. Another thing you have to be careful about: if you have floating IPs associated with the router, you might want to disassociate them before you do this, or you could get yourself into corner cases where you end up with something you do not want, so it is always just easier to disassociate the floating IPs first. Once you do that, it is pretty simple; obviously you have to have elevated credentials. You go to the router with a Neutron CLI command: you do an update and set admin_state_up to false, which basically takes the router administratively down, then you set the distributed flag on the router to true, and then you bring the router back up with admin_state_up (and sorry, that is a typo on the slide; it should actually be admin_state_up true). Once you do that, the router is now a DVR router and it will do whatever you need it to do; you do not have to worry about everything that happens behind the scenes as we move the namespaces and all the other ports around for you.
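A hedged sketch of that sequence with the Kilo-era neutron client, run with admin credentials; the router name is illustrative:

    neutron router-update demo-router --admin_state_up=False
    neutron router-update demo-router --distributed=True
    neutron router-update demo-router --admin_state_up=True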
Next slide. So I am going to talk a bit about stability and performance. One of the things we ran into when developing DVR and pushing it upstream is that DVR by definition is distributed, and most of the tests you are familiar with upstream run on a single-node setup. If you ever push a Tempest test, or if you ever do a Tempest run, they use a single-node setup. That is great, except it does not really exercise all of the parts of DVR, because you need multiple nodes. So one of the things we worked on was to create a multi-node setup upstream for running multi-node tests. There were also quite a few stability issues we had to work on, because the scheduler has to move a lot more things around to make everything work, and it becomes exponential the more compute nodes you add. So just so you know, DVR now has two test jobs upstream, a single-node job and a multi-node job. The multi-node job is actually still experimental, so it will not run unless you explicitly request it, and I am pretty sure you know how to request it if you need to. We are hoping that at some point the multi-node job will become a voting job and will help prevent some of the defects we have been getting; most of them are due to a feature being broken, or a defect being introduced, in a place where it would only show up if you had multiple nodes. Now I am going to talk about the performance and benefits of DVR, and I am actually kind of fond of this one since, as I said, I was a test engineer for this. So let's talk about performance data and the benefits that DVR brings you. We have two types of traffic, and all of the traffic I am talking about is routed traffic; switched traffic does not touch the router, so when I say flows I mean routed flows. There are two types of flows: north-south and east-west. North-south is your floating IP traffic, which goes out of your cloud to the internet or whatever sits outside of it; east-west is between the VMs. What I want you to take from this next set of slides is the following two benefits. On the north-south side you get a lot of performance benefit, because you no longer have to go through the network node to get out; you go directly from the compute node to whatever is outside, most of the time the internet, maybe another cloud. On the east-west side you also get a lot of benefit, because the router is now on the compute node with the VMs, so the routing of the traffic is just done in memory; you do not have to go out of the box and come back in to get routed. That only holds if the VMs are on the same hypervisor; if they are on different hypervisors you still get a small benefit, but it is not as big as when they are on the same one. This diagram tries to explain what I just said: on the right-hand side we have the legacy router setup, with east-west at the top and north-south at the bottom. On the right-hand side, with legacy routing, as you can see from the traffic flows (the lines represent the flows), they all have to go out: even for VMs on the same compute node, the traffic has to go out, go to the network node, and come back. For north-south it is the same thing: the traffic has to go out of the compute node, go to the network node, and then go out. On the left-hand side, the benefit is pretty obvious. One, if the VMs are on the same compute node, traffic does not leave that compute node; you do not touch the physical switching hardware at all, it is all done in the compute node. If they are on different compute nodes, even then you get to skip the network node; you go from compute node to compute node. And for north-south, the traffic goes directly from the compute nodes out, so everyone skips the network node with DVR. Now, this is just to show the benefits. Next slide.
I actually ran this test, and it is a pretty simple test. Please do not take these numbers as absolute values, because that is very dependent on what type of hardware you are running, but I did this test, I think on Friday, with one setup being CVR (legacy) and the green one being DVR. On the north-south test, you can see that the actual throughput of the VMs reached almost the maximum of my system, which is 10 gig here, while the legacy routers only reached 21%. Again, please do not take that too literally; this is a system I did not spend much time setting up. All I wanted was the exact same system with the tests being run at the exact same time, so everything is equal except the type of router you are using, DVR or CVR. Now the one on the right-hand side is the one that is pretty impressive. The green VMs happen to be on the same compute node; the red ones are also on the same compute node, but the red ones use a legacy router and the green ones use DVR. On this system the throughput was 10 times higher for the DVR setup. Again, these numbers depend on your environment and your system might be different, but with everything held constant on the same test, the DVR router outperformed the legacy router by almost 10 times. I encourage everybody to go home and try this test out, because it is very simple: just put the VMs on the same compute node, or wherever you want them, run traffic through them, and you will see the performance increase you get from DVR. That's it; do we have any other slides? Yeah, so before we wind up, we just wanted to give you an overview of the features we have yet to work on for the Liberty cycle. As Carl mentioned in the morning session, IPv6 support for DVR will be one of the top items we will be working on. Then there is HA for the DVR SNAT service: we already have patches out there, but we need to finish them off during the Liberty cycle. There are also patches for a manual move of SNAT from one node to another in case of a maintenance shutdown, when you want to move the SNAT namespace from one node to another; we need that feature, so we will be working on it. Again, as I said on the stability side, we want to make sure the multi-node CI is stable enough, and then we want to make it voting, so that everyone takes into account that DVR is there and no one breaks DVR. And distributed SNAT is another thing we want to venture into, because we want to distribute everything. So I hope you all enjoyed it; I hope I did not put anyone to sleep, and if I did, just wake up. Okay guys, any questions? We can answer them. Yeah, can you go to the mic? How many nodes have we tested? It is not in the single digits, if that is what you are asking; we have actually tested production-level setups, in the hundreds. Yes, the question here is: is the no-SNAT case working, or does that still go centralized? I do not want to do NAT; I want to route directly into the virtual networks. Can you come again? The no-SNAT case; there is dvr_snat, okay, but no SNAT, so no NAT, routing directly into the tenant networks: is that something you were working on or considering, or is it working? Because in some Neutron implementations it actually works; if I hook up the uplink network I can say no SNAT. Did you do that work? Oh, you are talking about enable_snat equal to true or false when you configure the gateway. No, we have not done that yet; we always assume there is NAT when you configure an external gateway.
Okay, thank you. I have a question on the north-south traffic, about the physical network. What are the assumptions on the physical network? Are we assuming that all the compute nodes in the data center are on one gigantic layer-2 bridged network across all top-of-rack switches? How is it handled if you do not have one giant external network that all your compute nodes can sit on? That is one of the options you have: you can have a DVR compute node alongside a legacy router. We do not really get into that much detail here; you play around with that to get the setup you want. So you can do east-west with DVR and north-south with legacy simultaneously? No, I do not think you can do that. You can still do it in the sense that, as I mentioned here, you can have a network node configured as dvr_snat and then you can still... I think we are running late right now, so let me take it offline and we can discuss it. Thanks, everyone, for joining the session.