Hello. Welcome, everyone. Today we are going to look at DVR SNAT HA for Neutron. I'm not sure how many of you are aware of DVR and its features, so we're going to start with an introduction to DVR to give the newer audience some insight, and then move into the details. My name is Swaminathan Vasudevan; I'm an active technical contributor in Neutron and have been working on DVR for the last two years. With me is Adolfo Duarte, who was one of the active contributors on SNAT HA, and Hardik Italia, who is co-presenting and will show a demo along with the presentation. So without wasting much time, let's get into the actual presentation. Just to set expectations: we will leave some time at the end for questions, so if you have any, feel free to come up to the mics on both sides and we will try to answer them.

The agenda for today: we will have an intro to DVR, then the current problems we face with DVR and the SNAT node, and the solution we have today to address them. Then we'll move on to DVR HA, and into the details of DVR SNAT HA: what the namespace contains, and how failover works using VRRP and keepalived. Then we will show you the configuration options and go over a short recorded demo; we didn't want to do a live demo, because most of the time it doesn't work, so we made sure to record one. If we have time, we'll also touch on the features we delivered for DVR in the Mitaka timeframe, basically the scheduling and control plane changes, and then go over the roadmap we will be working on for the Newton timeframe. So let's get started.

As an intro: Neutron has always had a routing capability, which we now call the legacy router; we gave it that name when we introduced the DVR routers. If you look at the picture here, you see VM1 and VM2 and a router, and the router is created on the network node; you also have compute nodes. This shows how a VM on compute node A talks to a VM on compute node B: if they are on two different networks, the traffic always has to go through the network node to be routed before the packet reaches the other VM. That is not only the case for VMs on two different nodes; it is also the case for VMs residing on the same host. If you have two VMs on the same host, even if they want to communicate with each other, the traffic has to go all the way to the network node and come back. On top of that, if you have a fixed IP on a VM and want to expose it to the outside world, you assign a floating IP to the VM, and all your floating IP traffic has to go through the network node as well.
So all of the traffic — your floating IP traffic, your SNAT traffic, your inter-VM and intra-VM traffic — flows through the network node. You are putting a lot of stress on the network node, and that was the reason that led to the invention of DVR. What the distributed router does is take the same architecture as the legacy routers and create routers on all the compute nodes, with the help of the L3 agents running on each and every node. So if you see here, we are running an L3 agent on the network node, on compute node one, and on compute node two, and the qrouter namespaces are replicated across all of them. We basically replicate the routers on all the nodes, but on an on-demand basis; they are not created by default everywhere.

What triggers the creation is the ports. I'm using a VM port as the example because that's the most dominant one, but there are other ports too; we call them DVR serviceable ports. A VM port is a DVR serviceable port, an LBaaS port is a DVR serviceable port, and a DHCP port is a DVR serviceable port. In all three cases you need a local router to be created on the compute node. So when one of these ports pops up on a compute node, and the port has a port binding associated with, say, compute node one, as soon as the L2 agent sees that, it communicates with the Neutron server, and the Neutron server instructs the L3 agent to create a router for that port right there. A local router is created, and if you want to send traffic from the red VM to the pink VM, it gets routed and delivered to the other VM locally; you don't need to send the traffic all the way to the network node and back. Likewise, for inter-VM traffic between nodes, the traffic gets routed on compute node one, flows across, and reaches the VM on the other node.

For floating IPs, if one of these VMs has a floating IP assigned, we create a floating IP namespace on the compute node, and the floating IP namespace is connected to the router namespace of every router. This floating IP namespace is shared between all the tenants, and it is created per external network: for every external public network you have, there is one floating IP namespace shared by all the tenants. In this example I have one external network, so I have one floating IP namespace, shared by all the routers, and one of the routers is connected to it (an illustration of this namespace layout follows below). So your VMs' floating IP traffic doesn't need to hit the network node; it bypasses it entirely. The one limitation we had with DVR is that even though we distribute the East-West traffic as well as the North-South floating IP traffic, the SNAT traffic still flows through the network node.
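Before moving on to SNAT: to make the namespace layout just described concrete, here is roughly what you would see on a compute node hosting a VM with a floating IP. The UUIDs below are made up, but the qrouter- and fip- prefixes are the actual naming conventions — one qrouter- namespace per router, one shared fip- namespace per external network:

    # List the network namespaces on the compute node
    $ ip netns list
    qrouter-7a44ab32-3b7e-4f9d-8c2b-d13a2cf26ca1
    fip-09fd58e8-2a4a-4b6e-9d26-5c7a61a4d7f3

    # The floating IP routes live inside the fip- namespace
    $ sudo ip netns exec fip-09fd58e8-2a4a-4b6e-9d26-5c7a61a4d7f3 ip route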
So what happens if these VMs have default SNAT configured? In the case of legacy routers, we don't have a concept of an SNAT namespace, but for DVR routers we do. What we did is split up the functionality that lives in the single legacy router namespace — which does the floating IP functionality, the NAT functionality, and the routing functionality — into three different namespaces: a router namespace for routing, a floating IP namespace for the floating IP translation, and an SNAT namespace for the SNAT translation. Based on its functionality, we kept each namespace only where it is required: the SNAT namespace only on the network node, and the floating IP namespace on the other nodes.

The L3 agent can operate in two DVR modes: dvr_snat and dvr. If you are running on a compute node and don't need SNAT there, just run the L3 agent in dvr mode; if you need SNAT on that node, run it in dvr_snat mode. And if you want to go back to the legacy behavior, configure it as legacy mode and it will still work with legacy routers. Those are the options you have.

As I mentioned, SNAT was centralized, and the issue there is a single point of failure. Neutron has L3 HA, which was introduced in Juno, and DVR was introduced around the same time, but we did not have SNAT HA for DVR routers; there was an issue running L3 HA together with DVR. That's the problem we tried to solve here. So now we provide high availability for the SNAT namespace residing on the network node, using exactly the same technology that L3 HA uses; we are not making any change to that protocol. Moving on to the next slide: these are the network node issues I talked about. One thing I should mention is the reason we went with centralized SNAT in the first place: VPN as a service is a singleton service, it has to run alongside SNAT, and we had it running prior to Juno. When we introduced DVR, we didn't want to break that feature, so we said that in order to support VPN as a service, SNAT needs to be centralized, and we still maintain that.

Going to the next one: before we introduced HA, the only answer we had for SNAT failures was to bring up another dvr_snat node and move your SNAT routers over. There is no CLI command that says "move my SNAT from node 1 to node 2"; we have been reusing the same API that routers have for a router move. If the router being moved carries SNAT, and you are moving from one dvr_snat node to another, we check that the target is a dvr_snat node, and then you simply remove the association to the old agent and associate the router with the new agent, as sketched below.
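For reference, this is roughly what that manual SNAT failover looked like before SNAT HA existed, using the standard L3 agent scheduling commands; the agent IDs and the router name are placeholders:

    # Find the L3 agents and identify the dvr_snat ones
    $ neutron agent-list
    # Detach the router from the failed dvr_snat agent, then attach it to a healthy one
    $ neutron l3-agent-router-remove <old-l3-agent-id> router1
    $ neutron l3-agent-router-add <new-l3-agent-id> router1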
Along the same lines, we do not allow routers to be migrated from one compute node to another: that is not required, because DVR already gives you a highly available, scalable solution by creating routers on each and every node. But we do allow you to move SNAT from one node to another, because of the restriction we had. That was the solution we had. Now, with the Mitaka release, we have SNAT HA, which is similar to L3 HA. Adolfo will take over from here and go into the high-level details of SNAT HA and how it works. Here you go.

Hi, everyone. My name is Adolfo Duarte. I work for HP Enterprise, and as Swami said, I'm one of the many contributors who worked on DVR SNAT HA. By the way, I see a couple of you out here; thank you very much. It took a lot of people working on this to get it out the door. Very much appreciated. The subject of DVR SNAT HA is a bit complex to explain in a couple of slides, so here is what we're going to do: I'll go over a high-level overview of how it works, then Swami will go over the same subjects again in a little more detail, and hopefully the repetition will help us understand it. And even if it doesn't, we wrote the slides so you can use them as a reference; if you still have questions we didn't answer, please come back to the slides. With that said, you're going to see a lot of little details, labels, and connections on these slides. Don't try to read them all at once right now; again, they are there as a reference for later. Hopefully, if I do my job right, I can give you a high-level overview of how the functionality was implemented; then Swami will take it in more detail, and later Hardik will show you the configuration.

Before I start, can I see a raise of hands: any network engineers or administrators, current or former? I see a couple out there. I was actually a network engineer in my previous jobs before I started working on OpenStack Neutron, so I thought I would approach this from that perspective, because to me it makes sense; it's things I know, terms I've used before. So I will be using some network engineering terms to refer to certain things here, and hopefully it'll be clear when I make that reference. All right. And by the way, I'm not reading text right now; these are just my notes, so I can make sure I cover everything I wanted to. OK, let's get started.

Let me go over a couple of facts about DVR SNAT HA right out of the door. First, as Swami said, it is based on L3 HA, which has been implemented since Juno; that is where the DVR SNAT high availability comes from. Just like legacy L3 HA, the redundancy is provided by keepalived processes. If you don't know keepalived, it's a Linux process, an open source package that provides redundancy; in particular it uses VRRP, but I'll go into more detail later. The point being, it works the same way legacy L3 HA does; we did the same thing for DVR SNAT HA. Now, the keepalived processes, which are doing this high availability redundancy, use an HA network, and this network is a tenant network.
In other words, it's created under the tenant; however, only the keepalived processes use it. Theoretically anyone has access to it, but it is meant only for the keepalived processes. With a high availability router, all the router namespaces are pre-created up front. This is a little different from DVR: in DVR, usually the router namespace is not created until you actually need it. In other words, you create a router, and you won't see a router namespace anywhere until you associate a VM, attach it to an external network, or do something that requires the interface. That's not the case with DVR HA. DVR HA takes the legacy L3 HA approach: the router namespace is created the moment you enter, for example, the CLI command to create a router with a given name. You will see the router namespace created on what we usually call the network controller, which is the node that runs the L3 agent.

Now, this diagram right here is DVR without SNAT HA; in other words, this is what's available before Mitaka. Again, I've tried to draw a network diagram, if you will. The bottom two squares, which I have labeled compute nodes — the usual OpenStack term — if you're a network engineer, think of those as your server racks; all the servers are down there. The top square, which is the network node in OpenStack terms, in network engineering terms is your router. All the servers in the bottom two racks are connected to the router on top. The orange and blue lines represent your networks. In OpenStack, the orange line represents your data network, the internal network that VMs use to communicate with each other. The blue line represents the external network; that's where the other side of your floating IP lives, where the traffic going out to the internet or any other external network flows. Think of the orange and blue lines as physical connections — just Ethernet, if that makes you feel comfortable.

Now, in DVR without SNAT HA, any traffic that requires SNAT services (SNAT stands for source network address translation) flows to the router, and you can see it in the green lines; that's the flow of traffic. Everyone is sending traffic to that one node because it's providing the SNAT services. Again, network engineers, think of it this way: it's your router interface. You have a router doing NAT, and everybody who needs NAT sends their traffic that way. That's DVR. The problem with this is, obviously, if the router dies, that's it; you've lost your service, right? So that is the point of DVR with SNAT HA. It's a pretty simple concept: we bring in another router, and they form an active/standby configuration. By the way, you can add more than one standby; this diagram only shows one, but you can add two, three, however many you want. I haven't tested up to 10, so I can't tell you that works, but at a minimum you can have an active and a standby. Now, in the case of failure, just like a regular router providing redundancy for another router in the networking world, what happens is that the standby is monitoring the active router; if you have more than one standby router, they're all monitoring the active router.
The moment that active router fails, for any of the many reasons it could fail, the standby routers elect a new active router. They do their magic — "I'm going to be active." "No, I'm going to be active." "No, you be active." — and somebody becomes active while the other routers stay on standby. The new roles are reported back up to the Neutron server so that all traffic can be redirected to the new active router. Network engineers, if you've seen VRRP, that's exactly what this is: it's basically VRRP in Neutron, but we are using it to provide redundancy for the SNAT services. That's a very important distinction that I will come back to. So that's pretty straightforward, right?

Now I'm going to go into a little more detail about how the keepalived processes keep track of what's going on. This is a new diagram; it's a little more logical. I added a third router here just to make it more fun. The previous slide had two routers on top; this slide has a third one. There's no difference; I just wanted more boxes to confuse people. What's basically happening is that each one of the routers is running a keepalived process. Now, for those of you who know how namespaces work in Neutron: the keepalived processes run in the SNAT namespace, okay? That's one of the differences between DVR HA and legacy HA; we introduced that for DVR HA. Again, the keepalived processes run in the SNAT namespace of the router. In this diagram, that green circle represents your HA network. Remember I mentioned there is an HA network created so that the keepalived processes can talk to each other? That's the tenant network, the HA network, and you can see it: if you do a Neutron net list, you'll see it. This comes from L3 HA, so it's nothing new; if you've used L3 HA in Neutron, you know what I'm talking about. If not, take my word that it's a tenant network used by the keepalived processes to talk to each other.

The protocol used between the keepalived processes is VRRP. All the rules of VRRP — who becomes active, who becomes standby — apply here. So in this scenario, once the standby keepalived processes detect that the active keepalived process has gone away — I won't go into detail on how, but it's basically like a ping; just like VRRP, they keep sending each other hellos — all the other processes say, oops, our active guy just went away, we'd better elect one. They go through the VRRP rules, and one of them becomes the active one. Once that selection is made, they reconfigure the router interfaces — still talking about the router interfaces in the SNAT namespace — to provide the correct traffic flow for the new active router. Once the roles are settled, the Neutron server is informed so that it can do whatever is necessary to make sure that all the compute nodes, network nodes, and anybody else who needs to send traffic requiring SNAT services sends it to the correct, newly active router. So it's pretty straightforward; this is just VRRP in the OpenStack space. That's a high-level overview of how it works; hopefully it'll become clearer as Swami goes into more detail.
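To make the VRRP mechanics concrete, here is a minimal sketch of the kind of keepalived configuration involved. This is an illustration of the VRRP pieces rather than the exact file Neutron generates, and the interface name and addresses are made up:

    vrrp_instance VR_1 {
        state BACKUP                # every instance starts as backup; VRRP elects the master
        interface ha-3a4b5c6d       # the HA interface inside the SNAT namespace
        virtual_router_id 1
        priority 50
        advert_int 2                # hello interval; missed adverts trigger a new election
        virtual_ipaddress {
            169.254.0.1/24 dev ha-3a4b5c6d   # the address the active node holds
        }
    }

The master instance holds the virtual IP address; when its adverts stop arriving, a backup wins the election, configures the interfaces, and the agent reports the new roles back to the Neutron server.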
Now, before I give the mic back, I want to go over what DVR SNAT HA is and what it is not, just to be clear, so that when you implement it or plan to use it, you know what you're going to need and what to expect. DVR SNAT HA provides high availability for the SNAT function of DVR. I'll say it again: DVR SNAT HA provides high availability for the SNAT function of DVR. Okay? DVR SNAT HA is available when two or more agents — these are L3 agents — are configured in dvr_snat mode. This is very important: what we usually refer to as the network controller in OpenStack is, in DVR terms, an L3 agent running in dvr_snat mode. If you have used DVR before, this is clear to you; if you haven't, don't forget it, okay? Very important. And you need two because you're providing redundancy; you can configure how many you want, but you're going to need at least two. And finally, the big deal — anticlimactic for all the people who worked on this: before Mitaka, if you tried to create a router with the distributed flag set to true and the HA flag set to true, you would get a message saying, no, you can't do that, we don't support that. Well, we finally got rid of that, and now when you enter that command, you get a router that is DVR HA (the exact command is shown just after this part of the talk).

Now let me go over what it's not. DVR SNAT HA does not provide redundancy for other services running on the network node. Let me be clear about why I put this in here: enabling DVR SNAT HA does not mean you automatically get high availability for other processes like DHCP, VPN, or LBaaS. I'm not saying you can't have that; I'm saying enabling DVR SNAT HA is not enough to provide redundancy for those. You have to do a little more work; those are separate processes. DVR SNAT HA is also not FIP HA. Again, a very important distinction, and I'm not trying to wiggle out of it; it's just a different discussion, different work, a different approach, different namespaces. We are not talking about FIP HA here; if you want FIP HA, that's a separate thing you have to do. DVR SNAT HA only provides redundancy for SNAT services. One good thing is that no new configuration options have been added to the configuration files — /etc/neutron/neutron.conf, the ML2 ini file for the agent, l3_agent.ini — no new options. You just have to set different values for the ones that are already established. And finally, upgrading any of the regular router types to a DVR HA router is not currently supported. There are many reasons for that; one of them is that we haven't tested it enough to say it's supported, and personally I'd rather not claim it's supported and then have somebody try it and break their system. There are also other discussions that have to happen around that. Okay, thank you. Now I will give it back to Swami.
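For reference, the command combination that was rejected before Mitaka, and now produces a DVR SNAT HA router, looks like this with the Mitaka-era CLI (the router name is arbitrary):

    $ neutron router-create --distributed True --ha True demo-router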
Thanks, Adolfo. I just want to go back to my familiar diagram, which exposes everything, so it's pretty clear how the traffic flows. In the first picture there was just a single node; now we have two SNAT nodes here, one primary, which is basically the active, and one standby. So on the next slide you see the traffic flow from the external to the internal network. The external traffic flows through the bridge here and goes through the SNAT namespace, headed toward the green VM. The green network is already connected to the SNAT namespace, so you don't need any special routing here: the traffic passes onto the green network, and based on the ARP details it knows where the VM resides, and the traffic goes to that node. Similarly for the flow in the reverse direction: for traffic originating from the red VM, the traffic goes into the local router, which has a default route saying that for the red network, the default gateway port resides on the node where the red interface is connected to the SNAT namespace. So the traffic goes into the bridge, flows through the tunnel, comes over here, arrives on the red network, goes into the SNAT namespace, and flows outside.

Okay, so what happens when you have an SNAT failure? The traffic that was flowing through the green node will now start flowing through the yellow node, because the SNAT namespaces have already been created there; the only thing is that the standby SNAT namespace does not yet have the IP on it. Once the IP is assigned, the traffic starts flowing through it. To go deeper into the namespace details, I have another picture that shows clearly what the namespaces contain. Here I have shown the qrouter namespace and the SNAT namespace. You have the red network and the green network, which have the qr ports, and then you have the sg ports: for every interface that you add to the qrouter, we create a port in the SNAT namespace with the sg prefix, which is basically an SNAT gateway port. Because of this port, all the traffic can reach the red network and the green network and vice versa, and you see the actual qg gateway port residing here as well.

The next slide shows the network node with HA. In this case, I'm showing you both namespaces. The one addition is that keepalived is running in this namespace, and you see an extra interface that's been created, the HA interface; the active one has an extra IP on it. This HA interface carries two addresses: there is a 169.x.x.1 and a .2, and whichever node holds the .1 is the primary, while the other one is the standby. So if you have any issues with SNAT HA, you can go into the namespace and see which one has the .1 and which one has the .2, which is the primary and which is the standby; there is also a CLI command to see that. This next slide is a dump that shows all the information I just shared with you, with the HA interface on the active node, and this one is the dump on the standby node. You can use these for reference; we will publish the slides, so you can take it from there.
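If you want to poke at this yourself, a quick way to check which node is active is to inspect the SNAT namespaces directly; the router UUID below is made up, but snat- is the real namespace prefix:

    # Run on each network node; the active one shows the extra VRRP address on the ha- interface
    $ sudo ip netns exec snat-7a44ab32-3b7e-4f9d-8c2b-d13a2cf26ca1 ip addr show
    # One keepalived instance runs per HA router
    $ ps -ef | grep keepalived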
And with that, I will hand it over to Hardik for the short demo and the configuration options. Thank you.

Thank you, Swami. For the configuration, the good thing is that you don't have to learn any new configuration files or new configuration parameters. As long as you know how to enable distributed virtual routing and L3 HA, you should be good to turn on the DVR SNAT HA functionality, right? So, as mentioned, no new configs; it's all existing config. All you have to set is the minimum and maximum number of agents that will participate in hosting the DVR SNAT routers, and the router type. If you look at the table, these are basically neutron.conf flags: one for L3 HA and one for router distributed. Based on those values, you control what kind of router is created by default: you can create a legacy router, a legacy router with L3 HA, and now a router with DVR and HA.

Going into a little more detail on the controller configuration: these options are pretty well documented, and it's neutron.conf again, nothing new here. You turn on router distributed, you also turn on L3 HA, you specify the CIDR used to create the HA networks the keepalived instances use to talk to each other, and you configure the maximum and minimum number of agents that will participate in hosting the HA router. Then there is the ML2 side, where again you just turn on distributed routing. For the L3 agent, the main point is that on the controller or network node you set the agent mode to dvr_snat, and the other options are just the VRRP configurations; on the compute node, the agent mode needs to be configured as dvr. There is also a slide on how to create a distributed and HA router; depending on your default settings, you may not even have to specify those flags. You'll find a consolidated sketch of all these settings right after the demo notes.

Now I'll play the recorded demo. Before I start: the demo was prepared with a three-node DevStack. Two of the nodes are controller/network nodes, where the two L3 agents run in dvr_snat mode, and one is a compute node running in dvr mode. To orient you: in the top corners, two screens are capturing a tcpdump, one on the active SNAT namespace and the other on the standby. In the middle at the top you have a VM, which will start sending the traffic you will see, and the bottom two windows are for executing some Neutron commands to show what we are doing. So I'll just play it. When we do the router show, it says distributed is true, which is DVR, and HA is true, so it's a DVR SNAT HA router. When we do neutron l3-agent-list-hosting-router, we now have an HA status column, which shows which node and which L3 agent is hosting the active instance and which is hosting the standby. As I mentioned, these are the two windows where I'm capturing packets in the SNAT namespaces, where we will see the traffic going out. Now, from the VM console, I'm pinging some external IP, and we can see the traffic start moving on the active SNAT namespace, right? Once it's there, I disable the HA interface — doing the failover the hard way — and we see that the ping stops. It takes some time, but okay, there it goes: it has flipped over to the other namespace. And when we go back to the hosting-router command, we see that the HA status has flipped from active to standby and from standby to active, okay? That's all, and I'll hand it back over to Swami.
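Pulling together the options Hardik walked through, here is a minimal sketch of the relevant settings; the values shown are illustrative, not the only valid ones:

    # /etc/neutron/neutron.conf on the controllers
    [DEFAULT]
    router_distributed = True
    l3_ha = True
    l3_ha_net_cidr = 169.254.192.0/18   # CIDR pool for the HA networks
    max_l3_agents_per_router = 3
    min_l3_agents_per_router = 2

    # OVS agent configuration (ML2 side): enable distributed routing
    [agent]
    enable_distributed_routing = True

    # /etc/neutron/l3_agent.ini on controller/network nodes
    [DEFAULT]
    agent_mode = dvr_snat

    # /etc/neutron/l3_agent.ini on compute nodes
    [DEFAULT]
    agent_mode = dvr

    # Verify which agent hosts the active and which the standby SNAT instance
    $ neutron l3-agent-list-hosting-router demo-router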
So I think that's all we had for SNAT HA. The next thing I wanted to cover is what we did for DVR during the Mitaka cycle. One of the major things was the scheduling changes. As I mentioned, we previously had two different scheduling paths, one for the routers and one for the DVR SNAT. Instead of keeping two different scheduling paths and making things more complex, we have narrowed it down to a single path, similar to the legacy routers, where we schedule the router to the dvr_snat node as soon as the router comes up. For the compute nodes, we are no longer scheduling at all: we create the routers on demand whenever a port comes in. That was one of the major changes, and it considerably simplifies the scheduling side of DVR. This slide is a summary of those changes. One thing that has been deprecated — or maybe not deprecated, but no longer used post-Mitaka — is the CSNAT binding table. We used to have a CSNAT binding table that kept track of the SNAT bindings to the agents. Now that we are back to a single scheduling path, both SNAT and the router are tracked by the router L3 agent binding table, not the CSNAT binding table.

The next change, made to improve scalability and control plane performance, is in how we create routers and notify the agents. As a community we did a lot of work here. Previously we were sending notifications to all the agents, which created a lot of control plane and scalability issues. Now we only send messages to the agents that need to know: we know where the VMs are, where the routers are located, and where the floating IPs are installed, so for any CRUD operation that affects DVR serviceable ports, we send a notification only to that specific host and not to every other host. Another change was made for the agent sync-up with the server: if you have a lot of routers configured, it takes a while for the agent to fetch all the details over RPC. So we added a throttling mechanism that fetches a batch of router information at a time and proceeds in sequence. Those were the changes; thanks to Oleg from Mirantis, who did a great job on all of this work. It was a big improvement in scalability and control plane performance during the Mitaka cycle. And last but not least, we made the DVR job voting in the check queue; the multi-node job is pretty much stable, and we'll make it voting in the next cycle.

That's all we had for today. The one thing I wanted to touch on again is that for Newton, we have some more work to do on DVR. We are planning to add support for BGP speakers, because today the BGP speakers have issues exposing routes to the next-hop gateway using the floating IPs; that will be addressed during this cycle. We will also work on IPv6 North-South reachability from the compute node. Those are the two things we will work on during the Newton cycle, and that's all we have today. Thanks to the audience who came in today, and if you have any questions, we can take them quickly.
Hey, Prashant — time has actually ended for the session. Any questions can be asked out in the hall or at the side of the stage. Sorry, thank you. Sorry, guys, I'll take questions outside if you want.