Hi, good morning. So I guess this must be the people who didn't go to the party last night. Nine o'clock in the morning, right after the party; this is the best part. My name is Tarek Khan and I'm here with my colleague Arun Thulasi. We're part of HP's Network Functions Virtualization business unit. We've been working with the cloud and OpenStack pretty much since it started, and for the last couple of years we've been focusing on how we can start using OpenStack for virtualizing network functions. I know a couple of people in the room have been working on this quite a bit, so for them some of what we're talking about may not be new, but I hope the others are able to get something out of it.

The topic today is a rather narrow one: the problem of getting accelerated network performance with OpenStack. We'll look at the options, and then close out with the solutions HP has and how we're trying to solve this problem for carriers, especially in a carrier-grade, production-quality OpenStack distribution.

With that, we'll get started. I wanted to open with a reminder of why we need to accelerate network performance; almost everyone here probably knows it, but we wanted to start there. Then we'll talk about what options exist. If you take everything away, at the heart of it there are just a couple of ways of going about this, with different implementations of each. Then we'll look at where OpenStack is today and some of the other efforts going on to provide this accelerated performance, and close out, as I said, with our solutions.

So, getting started. Has anyone not seen this slide? Probably not; every NFV discussion starts with it. What are we doing? Instead of building the monolithic, large network systems, which are essentially the mainframes of networking, where a single company provides the hardware, the quote-unquote operating system, and the applications, why not use the abstraction that cloud technologies provide and deploy just the functions, i.e. the network applications, on top of it?
So what it boils down to at the hardware level is that the systems you were using earlier were dedicated. Now they are not dedicated anymore, and you're introducing new components that weren't there in the traditional network devices. What that really changes is this: earlier you had full use of the resources. The network operating systems didn't have to worry about protecting memory, crossing from kernel space to user space, or making sure you don't make too many copies of the data packets. None of those problems existed. Now you have to share resources with other workloads, and as part of sharing resources you go back to some basics of how operating systems and servers work: interrupts arrive when something needs attention, and other processes are vying for processor and I/O time.

With OpenStack, Open vSwitch has pretty much become the de facto vSwitch. I know there are a number of other options available, but well over 90% of deployments use Open vSwitch in some shape or form. It provides a very flexible environment and a full-featured vSwitch, but to deploy it for OpenStack some complexity was put in place, essentially to get the agility and to be able to move things around. As you can see with the different tap ports, the VMs don't go straight to the vSwitch; they go through a Linux bridge, and essentially the more pieces you add, the more copies are introduced. As a result, when a packet needs to go from one VM to a VM on a different host, a minimum of nine copies have to happen. In a traditional operating system each of those copies requires an interrupt: the processor has to be interrupted and the data copied. When we're talking about data plane intensive applications, or some of the signaling applications powering our cell phones, the signaling packets are about 60 bytes in size. If each of those 60-byte packets has to go through nine copies to get from one side to the other, that creates latency and puts a lot of overhead on the processor, whereas a traditional TCAM- and ASIC-based switch used to do the same job with just three copies against these nine.
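To put that copy count in perspective, here is a back-of-the-envelope sketch (my own illustrative numbers, not from the slides) of how little time a host actually has per packet at line rate; 64 bytes is simply the minimum Ethernet frame size, close to the 60-byte signaling packets mentioned above.

```python
# Illustrative back-of-the-envelope numbers (not from the slides): how much
# time a host has per packet at line rate, and why extra copies hurt.

LINK_BPS = 10e9          # 10 GbE
PREAMBLE_AND_IFG = 20    # 8 B preamble/SFD + 12 B inter-frame gap per frame

def line_rate_pps(frame_bytes, link_bps=LINK_BPS):
    """Theoretical packets per second at line rate for a given frame size."""
    bits_on_wire = (frame_bytes + PREAMBLE_AND_IFG) * 8
    return link_bps / bits_on_wire

for frame in (64, 256, 1500):
    pps = line_rate_pps(frame)
    ns_per_pkt = 1e9 / pps
    print(f"{frame:>5} B frames: {pps/1e6:6.2f} Mpps, "
          f"{ns_per_pkt:7.1f} ns budget per packet")

# With roughly 67 ns per 64-byte packet at 10 Gb/s, spending nine copies
# (plus the interrupts that go with them) on every packet quickly blows the
# budget, which is why small signaling packets expose the software path.
```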
To talk about the options for addressing these nine copies, I'm going to hand it over to my colleague Arun.

Thanks. So Tarek talked about the distinction between what an enterprise environment typically requires and what carrier environments need. With the current Open vSwitch and the kind of packets we see in typical telco environments, we'd be lucky to get 30 or 40 percent of the available bandwidth through the vSwitch. Say you have a 10-gig NIC: you're going to get at best four gigs, maybe three and a half. The goal for us is to get as close to wire speed as we can, because when you open up your phone to get a stock quote and it's slow, people get unhappy.

So what options are available to us today? One is to take out the slow switch and put in what we'll call, in quotes, a faster switch; we'll look at what that faster switch is in the next couple of slides. The other option is to completely bypass the switch and use the underlying hardware and the capabilities it provides, which is what we call PCI passthrough and SR-IOV. They're similar to an extent, but they have some key differences, and each approach comes with a cost of its own. If you need a new switch, you'll probably have to acquire something different from what the community provides as Open vSwitch, or an enhanced Open vSwitch. On the other hand, if you want SR-IOV or PCI passthrough capabilities, you most definitely need new hardware, and in certain cases drivers that support these technologies. So each of these options comes with a cost you need to incur.

What does a faster vSwitch mean? Again, as Tarek pointed out earlier, the major reason we lose bandwidth is the number of copies involved, going from user space to kernel space, back from kernel space to user space, and so forth. Kernel bypass technologies, and this example uses DPDK, are a gateway to eliminating those copies and getting the kind of performance we desire regardless of packet size. If you're running jumbo frames you probably have fewer copies to make, but with these smaller packets the copies just multiply. What DPDK is trying to accomplish is to eliminate those additional copies. DPDK requires applications to be recompiled with certain options enabled; there are other technologies that do not require a recompile. But the underlying goal is the same: make sure the application can use a user-space driver that talks directly to an enabled NIC, whether that's DPDK, OpenOnload, or any other technology you choose. So that's one way to move from a slower switch to a faster switch.

On the hardware side we talked about two options, PCI passthrough and SR-IOV, and one of the biggest differences between the two is how a NIC is visible to the VMs on top. PCI passthrough effectively gives the VM access to the physical function of the NIC, so in essence, if you're going to have four VMs, they typically each need a NIC of their own. But going back to telco basics, each VM is going to require multiple NICs, and each host is required to host multiple VMs, so PCI passthrough by itself might not be an ideal choice for all your VNFs. That's the reason we have SR-IOV. SR-IOV extends PCI passthrough capabilities in that it allows your VM to talk to a virtual function, and there can be anywhere from one to 64 virtual functions on a physical NIC, with each virtual function exposed to a VM, typically as a vNIC. This lets you talk directly to the NIC without going through a vSwitch, and it also allows a number of different VMs to be hosted on the same server.
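As a concrete illustration of how those virtual functions show up on a Linux host, here is a minimal sketch using the standard sysfs attributes; the interface name is a placeholder, and it assumes an SR-IOV-capable NIC whose driver supports these attributes, run as root.

```python
# Minimal sketch: inspecting and enabling SR-IOV virtual functions through
# sysfs. The interface name below is a placeholder.
from pathlib import Path

IFACE = "eth2"  # hypothetical SR-IOV-capable interface
dev = Path(f"/sys/class/net/{IFACE}/device")

total_vfs = int((dev / "sriov_totalvfs").read_text())   # max VFs the NIC offers (often up to 64)
active_vfs = int((dev / "sriov_numvfs").read_text())     # VFs currently instantiated
print(f"{IFACE}: {active_vfs} of {total_vfs} virtual functions enabled")

# Instantiate a number of VFs; each then appears as its own PCI function that
# can be handed to a VM, e.g. as a <hostdev> entry in the libvirt domain XML.
wanted = 8
if active_vfs != wanted:
    (dev / "sriov_numvfs").write_text("0")        # reset before changing the VF count
    (dev / "sriov_numvfs").write_text(str(wanted))

# The resulting VFs are visible as PCI devices:
for vf in sorted(dev.glob("virtfn*")):
    print(vf.name, "->", vf.resolve().name)       # e.g. virtfn0 -> 0000:04:10.0
```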
So if you look at the example here, VM1 and VM0 effectively talk to the same shared resource, a single SR-IOV-capable NIC. It exposes up to 64 virtual functions, each of which is effectively embedded into the domain XML file of the VM itself.

In an ideal world we do not want customers to be restricted to just one technology or the other. You need a compute host that provides a variety of options to accelerate networking performance, and the VMs you deploy on top should be free to choose whichever mechanism suits them best. Do they want to go through SR-IOV, which as we understand has certain limitations and challenges around migration and so forth, or can they use an accelerated vSwitch (AVS) and the flexibility it provides? A typical compute host should offer both: the ability to go through the vSwitch as part of your acceleration strategy, or to directly access a virtual function of the NIC. As you can see in the slide, each VM effectively has its own networking mechanism.

We talked about the primary models; we did not touch on all the available options. There are options still gathering traction, RDMA, RoCE, and others, but these are what you can order today and use to deploy your current applications seamlessly.

So what is OpenStack doing to support the technologies we just saw? Where is the community on SR-IOV, PCI passthrough, or some kind of enhanced vSwitch? SR-IOV has gained good traction with OpenStack. You can do SR-IOV with upstream OpenStack today; it's available upstream and it's being enhanced as we speak in Kilo, which will bring additional improvements. Both Nova and Neutron are enabled to work with SR-IOV ports: you can create a VM, plug in the Neutron port you want, and Nova will support and recognize an SR-IOV port, and the same goes for Neutron.
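A rough sketch of that upstream workflow follows, using the Kilo-era python-neutronclient and python-novaclient; the credentials, network UUID, image, and flavor names are placeholders, and the exact auth arguments depend on your deployment, so treat this as illustrative rather than a recipe.

```python
# Rough sketch of the upstream SR-IOV workflow just described: create a
# Neutron port with vnic_type "direct" and boot a Nova instance with it.
# Credentials, IDs, and names here are placeholders.
from neutronclient.v2_0 import client as neutron_client
from novaclient import client as nova_client

auth = dict(username="demo", password="secret",
            tenant_name="demo", auth_url="http://controller:5000/v2.0")

neutron = neutron_client.Client(**auth)
nova = nova_client.Client("2", auth["username"], auth["password"],
                          auth["tenant_name"], auth["auth_url"])

# An SR-IOV port: Neutron's binding:vnic_type tells the SR-IOV mechanism
# driver to allocate a virtual function instead of a vSwitch tap.
port = neutron.create_port({"port": {
    "network_id": "NET_UUID",            # placeholder provider network
    "binding:vnic_type": "direct",
    "name": "vnf-dataplane-port",
}})["port"]

# Boot the VNF with that port attached; Nova's PCI filters pick a host with
# a free virtual function on the right physical network.
server = nova.servers.create(
    name="vnf-instance",
    image=nova.images.find(name="vnf-image"),      # placeholder image
    flavor=nova.flavors.find(name="m1.large"),     # placeholder flavor
    nics=[{"port-id": port["id"]}],
)
print("Booted", server.id, "with SR-IOV port", port["id"])
```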
The key challenge we see today, though, is live migration. In a cloud-native world you probably don't rely much on live migration: you expect the applications to be inherently aware, you expect them to be redundant, and you don't care about live migration. But as a telco, that transition does not happen overnight. You do have applications, ones you are heavily invested in, that require the platform to provide HA. So until you get to the day when your applications are all HA-aware, when you can throw in Chaos Monkey and your applications will still be up, you need a pathway, and live migration is key. Today, SR-IOV and live migration don't go hand in hand. That's something we, as members of the community and partners in the community, need to look at, so that there's a pathway offering the flexibility we talked about on the earlier slide: I bring up a VM, I can choose AVS or SR-IOV, and still get the functionality and availability I need.

There's also the question of NIC capabilities. Today there are only a handful of NICs supported upstream; I think Intel and Mellanox are probably the two well-known NICs that do SR-IOV and function well with upstream OpenStack. We definitely need more ubiquitous NIC support: if I'm talking about platforms like Open Compute and the like, I should be able to have other NICs identified and supported in the community. PCI passthrough, as odd as it sounds, is essentially a subset of SR-IOV in that it exposes one function where SR-IOV exposes multiple, leaving aside some of the other differences. But as long as we can get SR-IOV going, with the flexibility it offers, we should be okay getting the telcos moving.

And just to add, there are other challenges with SR-IOV right now. Since you are bypassing the vSwitch, your VM is essentially connecting to the physical network directly, and what that means is that in any deployment at scale you will need to include some kind of controller or SDN capability. With a vSwitch-based deployment, if you want to create a new VLAN, for example, you create it in the vSwitch and the integration is there. But now that all the network activity happens on the physical network, you need to make sure the physical network is controllable through Neutron. So you either control the switches directly with the vendors' Neutron plugins, or you abstract that away with some kind of SDN controller. There is work going on there, but as Arun said, there are pros and cons. In an ideal world, where we need accelerated performance near line speed with low CPU overhead, we want to use SR-IOV or another bypass technology; but almost every VNF we deploy also has an O&M port and other control ports where you don't need that, and for those you want a vSwitch-based technology. I'll swear by a vSwitch any day for those ports.

Going back to our discussion: as we address SR-IOV, we need to address OVS as well. One of the good things about deploying upstream OpenStack is that it is well integrated with OVS; I can get a deployment going in hours. But as we said earlier, the performance out of OVS does not meet carrier needs by any means; even enterprises would have challenges, and carriers certainly do. So there has to be a way for OVS to be enhanced to work with some kind of kernel bypass mechanism. It could be DPDK, it could be OpenOnload, it could be something else the community comes up with. For something as critical as Open vSwitch, which OpenStack needs in order to fully function, I think we as a community need to influence that position and have OVS ready for whichever mechanism we define as a community.
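One simple way to see which side of that divide a given host is on is to check whether its OVS bridges use the kernel datapath or a userspace ("netdev", e.g. DPDK-enabled) datapath. The sketch below assumes ovs-vsctl is installed on the host and simply wraps it; bridge names come from whatever is configured locally.

```python
# Simple illustration (assuming ovs-vsctl is installed): check whether each
# Open vSwitch bridge on a host uses the kernel datapath or a userspace
# ("netdev", e.g. DPDK-enabled) datapath.
import subprocess

def ovs_vsctl(*args):
    return subprocess.check_output(("ovs-vsctl",) + args, text=True).strip()

for br in ovs_vsctl("list-br").splitlines():
    dp = ovs_vsctl("get", "Bridge", br, "datapath_type")
    kind = "userspace/DPDK" if "netdev" in dp else "kernel"
    print(f"{br}: datapath_type={dp or 'system'} ({kind})")
```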
The second challenge with any of these new technologies is how they impact applications that are already deployed. Some telcos develop applications of their own that are constantly deployed, so it's easier to go in, make a change, and have it work with a new environment. But when you work with a partner, or with a legacy application, how easy is it to go in and either recompile the application or add a new driver to the host or the VM? These are challenges we're facing today: how do we ensure applications work well in this new framework we're building? There has to be a community-driven effort to minimize those kinds of restrictions on VNF providers, so applications can be enabled for these technologies without a recompile or other cumbersome procedures. Lastly, there are various different vSwitches on the market. There are customers who have taken OVS, for instance, and built their own additional technology on top. How do we bring that back into the community? How do we ensure that whatever work has been going on in different spheres flows back into the product itself, so that what external vendors get, any upstream user would eventually be able to get? With that, I'll pass it back to Tarek to talk about how we at HP are trying to address this problem.

Thank you, Arun. I wanted to take just a couple of minutes to talk about our solutions in this area. HP being an engineering company at heart, before we start working on something we have to agree on the strategy, and then everything has to align back to it; that's just a normal architecture principle, everyone follows it, right? So the vision we put together, and this is not just for telco, this is in general, and the attestation of 5,000 people showing up here is that people have acknowledged it, is that open source is going to play a big role in the IT of the future, and it's going to be a developer-led world. That essentially means that when you want to interact with any IT system, you've got to be able to use some kind of API to do it. This is something HP has completely internalized, and all our solutions are working towards it, with OpenStack obviously being a big part of it. The other thing we're trying to enable is bringing IT-style agility and cost structures, not necessarily the actual pricing, but the cost structures, into the telco. If you look at it, telco cost structures used to be very different, and they're still very different. The IT style essentially means the products and solutions being developed are built not for a small cross-section of customers but for a very wide one, where the development, engineering, and support costs are amortized across all of them.
So this is what we're trying to enable, and to enable it we have some core technologies. As most folks here can attest, the worlds within the network providers, the CSPs, are coalescing into a single organization that will provide both IT and network services. When all of those services are provided together, what you want is this: there are certain applications with very IT-style requirements, where you don't need the acceleration Arun talked about, and there are some applications, the ones I mentioned, call them bearer applications or data plane intensive applications, that require something very focused on getting the packets in and out as quickly as possible, at near line speed. The vision is to bring these together so that you have a single control plane, and across that single control plane, based on the requirements and characteristics of the workload, you can place it appropriately.

This slide shows just a subset of the different standards bodies and open source organizations working on this. We have been very active in them, and, again, the attestation that you're active too is that you're here. The way we work on it is this: for you to have an OpenStack distribution, you've got to put guidelines in place and ensure you're not creating yet another fork, which is what happened with Linux and a lot of other open source projects. So we are completely aligned with the guidelines the OpenStack community has essentially put together. Wherever there is a capability that needs to be developed, there are only so many ways of doing it: either you propose a new blueprint with the community to get it included, or you use the pluggable architecture OpenStack provides and create plugins, so that you're using the OpenStack API endpoints instead of coming up with yet another endpoint. And then there are certain things OpenStack has deliberately left open, like which flavor of Linux you use and which vSwitch you use; it's a pluggable architecture, and you've got to build some value around it. That is what we at HP are doing, and we are upstreaming everything that comes out of it.

This next slide is just an eye chart I wanted to leave up here: HP Helion is a brand, and it has a number of different products. There are a couple of products very focused on OpenStack that we're talking about today: an enterprise version of OpenStack and a carrier-grade version of OpenStack. In the carrier-grade version we're essentially trying to bring in some of the capabilities that are very important to these network workloads. The way we build our solutions is the way anyone else builds them: you start with core OpenStack, coming from the OpenStack trunk, and then you put some value around it. The value that each distribution provider, obviously including us, puts around it is the lifecycle management of the platform itself.
OpenStack, for example, has not said how you should install it, how you get it up and running, how you update it, or what kind of security policies you put in place on your compute nodes and servers, and so forth. So we put that together, and with those configuration management and lifecycle management capabilities we get Helion OpenStack, which is the enterprise version of our distribution. Then, taking that as the base, we add capabilities that matter specifically to telcos. Now, quite likely those capabilities are going to make it into the mainstream pretty quickly. Someone used the example of the model auto manufacturers follow: any new capability you introduce, you introduce in your premium line, and then those capabilities become available everywhere. My wife drives a Prius, and that Prius has adaptive cruise control and lane keep assist, which just a couple of years ago were only available on the highest-end luxury models. The same thing is happening here: the capability moves down. And then we put an umbrella of services around it, so anything that's not available in the product, anything that needs to be customized for a specific customer, we can do with both professional services and the global support organization we provide.

Now that I've set the stage for how this premium version of OpenStack is structured, I wanted to call out, at a very high level, the differences between telco and compute workloads. I know we could discuss each line on this slide and there are nuances around each, but at the heart of it: the best network is the one you don't notice is there, and that is the core difference between compute and networking workloads. When you use a compute workload, you're essentially going to an endpoint, a website; when you use the network, you're using it to go somewhere else. In fact, the next talk, to put a plug in for it, is about wireline services: how you go from point A to point B and create what's called a virtualized CPE. Which shows that while we may want to draw the network as one big cloud, the network really has a shape.
There are things that are distributed, whereas in an enterprise workload you want to aggregate; there you don't need to know the shape of the network, but here you need to know the shape to use it appropriately.

I've been having a couple of conversations, and I would love to get feedback from the carriers here, because the question of what "carrier grade" means comes up quite a bit. For Linux there's no question, because the Linux Foundation put out the Carrier Grade Linux specification, so for someone to call their Linux distribution carrier grade it needs to comply with that specification; the last version is 5, which came out, I believe, in 2011. So up to that point there is a carrier-grade specification, but beyond that there isn't one. So this is our definition of carrier grade, and I'd love your feedback on whether it aligns with yours. It really boils down to three topics. The first is talked about a lot: people align carrier grade with resiliency, five-nines availability or above. What we mean by that is availability both in the platform, where the compute and the OpenStack services need to be five nines or better and need to provide self-healing capabilities that VNFs and applications can leverage, and in the end service that people actually care about, which also needs to be five nines or above. The second is performance, which we touched on earlier: to be carrier grade you've got to get as close as possible to line-rate performance. The third goes along with that: these networks are the lifeline for so many things, so you've got to have manageability capabilities. How many of you got a chance to sit in the talk about upgrading OpenStack, and how easy or hard that is? In-service software update is a requirement; there are multiple ways of doing it, but you have to support it. There are other capabilities too: when you use these acceleration technologies you sometimes have to share memory or other resources, so you need enhanced security around that, so you're not giving up one thing to gain another. And there's advanced resource scheduling, which essentially means that because the network has a shape, you need to schedule your VMs and workloads where it makes sense: in certain cases you want to schedule them apart for high availability, in others close together for low latency. When you take these three things together, then we can say the platform is carrier grade.
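As a quick illustration of what the five-nines part of that definition translates to in practice, here is a small worked calculation (illustrative, my own arithmetic):

```python
# Quick illustration: what "five nines and above" availability allows in
# downtime per year, which is the bar the platform services are held to here.
MIN_PER_YEAR = 365.25 * 24 * 60

for nines, availability in [(3, 0.999), (4, 0.9999), (5, 0.99999), (6, 0.999999)]:
    downtime_min = MIN_PER_YEAR * (1 - availability)
    print(f"{nines} nines ({availability:.6f}): "
          f"~{downtime_min:8.2f} minutes of downtime per year")

# Five nines works out to roughly 5.3 minutes per year, which is why
# sub-second fault detection and in-service software update matter: a single
# slow failover or maintenance window can consume the whole budget.
```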
To bring this to market, HP partnered with Wind River to bring some of the Carrier Grade Linux capabilities to the compute nodes and the host operating system you're running, together with the OpenStack work we've been doing in the community, making sure that when we bring these two things together we stay true to OpenStack principles and upstream any modifications or updates we make to OpenStack.

The way we built the solution, as Arun talked about, is that if we're going to support these data plane intensive applications, you've got to start from the hardware, and the hardware needs to support these offload technologies. Then, when you start working on the software layers on top, the only way IT-style cost structures and agility become possible is by leveraging what's happening in the community, especially around open source. So you start with the open source projects, be it OpenStack, Carrier Grade Linux, or the KVM extensions you need to put in place: the real-time extensions, some patches that sit outside the Linux kernel, to make sure the kernel is preemptible where it needs to be. Once you have those enhancements in place, you get, number one, lower latency, but more importantly you reduce the jitter and get more predictable performance. Real-time extensions really mean you're getting rid of the buffering: the processor is addressing the requests coming in, in real time. Once you can do that, you need a vSwitch that provides both accelerated networking performance and the kind of resiliency that carrier grade requires. You need to be able to do things like this: if a fault happens in the NIC, you need to pass it all the way through so that the running application can make real-time decisions on whatever errors may be coming; you need to make sure you don't have memory leaks; and you shouldn't have to restart your vSwitch very often. For this release it is a closed-source solution, called the accelerated vSwitch; it is a DPDK-enabled, user-space vSwitch that reduces the number of copies in the host.
Then, using a DPDK poll mode driver in the VM, we're able to get as close to line speed as possible. As I said, you leverage this using DPDK with a NIC driver in the VM, and once you have the NIC driver you can run pretty much any operating system. That covers the acceleration side, but for the carrier-grade capabilities I touched on earlier you need other capabilities as well, which the middleware provides: high availability, sub-second fault detection, and in some cases sub-second fault recovery. By the way, there is a demo at the HP booth, so if you're around please stop by and you can see some of these capabilities working on live systems.

When you put all of this in place, and you schedule the VMs properly and assign a specific number of cores to the vSwitch and to the kernel, the benefits are phenomenal. You get a significant decrease in average latency, which matters, as we discussed, for a 60-byte packet. One of the 3GPP requirements is that when you register your cell phone it needs to happen within 200 milliseconds, and, I believe, Fred may know offhand, when we power our phone on it touches more than 70 systems before the carrier can validate that I am Tarek, I have paid my bill, and I'm allowed to use my phone. All of that needs to happen within 200 milliseconds, so if I can't reduce the latency, it's going to have an impact. So we definitely reduce latency, but more importantly we reduce the jitter, the variance in what you're getting. This is what you get by using real-time, or what we're calling carrier-grade, KVM; it's a set of patches you apply to the kernel.org KVM, and there's a very interesting story about why it's not part of the kernel trunk. Then you need a user-space, DPDK-enabled vSwitch to provide this level of performance.
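As a rough illustration of the attach-time budget mentioned above, treating the ~70 systems a registration touches as roughly sequential (which is a simplification of my own, not from the talk):

```python
# Illustrative only: the 3GPP attach example above, treating the ~70 systems
# a registration touches as roughly sequential to see how little per-hop
# latency budget is left.
ATTACH_BUDGET_MS = 200.0
SYSTEMS_TOUCHED = 70

per_system_ms = ATTACH_BUDGET_MS / SYSTEMS_TOUCHED
print(f"~{per_system_ms:.1f} ms per system on average")   # ~2.9 ms

# If each of those systems is a VNF whose packets take nine software copies
# instead of a near-line-rate path, both the average latency and the jitter
# eat into that ~3 ms budget very quickly.
```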
Now, some numbers. I completely understand that vendors put numbers out and everyone looks at them and says, sure, you did it in your lab, let's see how it works on my floor. With that said, in the testing we did, and by the way this is just an example showing that you can run any guest operating system; it doesn't mean a specific level of performance is only possible with a specific guest OS, just by using a user-space DPDK vSwitch and a standard kernel driver in the VM, you get something around seven to ten times the acceleration in performance for a standard packet size of, say, around 256 bytes. When you put the poll mode driver in the VM, which is how you use DPDK, a specific driver in the VM, a trivial change in terms of coding, four or five lines of code plus inclusion of a DPDK library, though yes, it does require a recompile of the application, then you get close to what we're quoting as around 40 times the performance. And as you can see, from 256-byte frame sizes and above, with the poll mode driver you're essentially running at or near line speed using just two cores. That is very important, because if you try to do this without DPDK, and my next slide shows it, with upstream OVS, just to get about six million packets per second at 256 bytes you end up using 20 cores just for the vSwitch, just to handle switching. So on a 24-core server, which is very common these days, you're left with just two or three cores to actually run the VMs. With this solution you dedicate only a few cores to the host kernel and the vSwitch, and the rest are available to run your VMs. So not only does it provide accelerated networking performance, it lets you increase VM density drastically, which has quite a bit of impact on your overall cost of ownership.
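A rough comparison using the core counts quoted above (illustrative, not a benchmark; the one core reserved for the host OS is my own assumption):

```python
# Rough comparison using the numbers quoted above: cores left over for VNF
# VMs on a 24-core host when switching ~6 Mpps of 256-byte traffic.
TOTAL_CORES = 24

scenarios = {
    "kernel OVS (upstream)":   20,  # cores consumed just for switching
    "user-space DPDK vSwitch":  2,  # cores dedicated to the vSwitch
}

for name, switch_cores in scenarios.items():
    vm_cores = TOTAL_CORES - switch_cores - 1   # reserve ~1 core for the host OS (assumption)
    print(f"{name:24s}: {switch_cores} cores for switching, "
          f"~{vm_cores} cores left for VMs")

# The point of the slide: the accelerated vSwitch does not just raise the
# packet rate, it frees most of the server's cores for VMs, which is what
# changes the cost-of-ownership math.
```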
I just wanted to close out with one more slide on some of the capabilities these enhancements bring on top of OpenStack, and again we're happy to show these at the demo downstairs: being able to detect a fault in sub-second time, isolate it, and then recover and repair within those kinds of time frames; being able to bring network failure detection down to 50 milliseconds, which by the way is the requirement for Linux to be carrier grade; and being able to do live migration not just of normal VMs but of DPDK-enabled VMs as well, which is quite challenging with some of the upstream capabilities available today. And with this, I know I'm a minute over, but just to close out: we talked about the challenges and how our solution is able to address them, and yes, any changes we're making to OpenStack to bring these about, we at HP are committed to taking upstream. So thank you very much, and we're open to questions.

So the question is, what is the topology of the network when you're quoting these throughput numbers? Where is the "from" and where is the "to", and in particular, are you doing layer 3 or are you just quoting throughput within the same subnet? These numbers are measured at the edge of the host operating system, and it's just packet in and back, so we're not doing layer 3. In reality these numbers would be slightly different, but here we're saying how much your vSwitch is able to handle.

From the audience: "slightly" meaning a factor of 10 or 20. In any real network it's not in and out on the same subnet, right? It's traversing routed hops. What do you envision happening once you have to leave your very local subnet and actually traverse the network?

The solution still allows for network latency; whatever happens on the larger network is independent. What we're trying to ensure is that when the host has a capability of, say, 40 gigs, we can get near wire speed out of the host itself, and that's what these kernel bypass technologies help us accomplish; there is no added latency for the packet getting out of the host. So whatever happens on the larger network is separate. And yes, to the follow-up question: we're assuming the L3 is handled non-virtualized, on a traditional Cisco or Juniper or Arista router. The L3 latency in itself is not what kernel bypass addresses. What kernel bypass addresses is: can I get the maximum throughput out of my host when I receive a packet and when I send a packet out? To your point, if you want throughput throughout the network, then there are additional components that need to fit into the network, like a non-virtualized L3 router, but that goes beyond the data center itself, whether those VNFs are in different subnets or in the same subnet. And if you can reduce the number of cores spent on switching, those cores are available to you.

From the audience, one other short question: what's your packet drop rate? Because we found, just doing a simple Spirent test, that the throughput at a loss rate of 10^-6 versus 10^-5 is vastly different, and without quoting the packet drop rate these numbers are useless.

It so happens our lead architect for this solution, Vinod Chegu, is sitting right there, and his team ran all these tests. This was done using RFC 2544, and by definition, with RFC 2544 you run the whole range of packet sizes with zero packet loss; that's the requirement of RFC 2544. And I appreciate the questions, because what we're trying to do here, and this relates to the latency questions you were asking, is this: what the equipment providers do today is run this entire thing on bare metal, so all the latency associated with a packet coming in and hitting the NIC edge is the part we're trying to optimize as much as possible. The people building these VNFs, they weren't VMs before, they were physical boxes sitting separately; now you're putting them in here, and the shorter you can make the path to the edge, the closer you get to replicating what was happening in the physical world. That's why we said one of the carrier-grade requirements is to get as close as possible to bare-metal speed, which is what we're focusing on. Then there is DVR as well, which should at least reduce some L3 latency, but that goes beyond the data center. And we're going to be around, so happy to continue the conversation with you. Any other comments or questions? OK, thank you very much.
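For readers unfamiliar with the RFC 2544 methodology mentioned in that last answer, here is a minimal sketch of the zero-loss throughput search it implies; run_trial() is a hypothetical placeholder standing in for a real traffic generator, and the search itself is the standard binary search over offered rate.

```python
# Sketch of an RFC 2544-style throughput search: find the highest offered
# rate at which a fixed-size frame stream passes with zero loss.
# run_trial() is a hypothetical placeholder for driving a traffic generator.
def run_trial(frame_bytes: int, rate_mpps: float) -> int:
    """Placeholder: send traffic at rate_mpps for the trial duration and
    return the number of frames lost (0 means the trial passed)."""
    raise NotImplementedError("wire this up to your traffic generator")

def zero_loss_throughput(frame_bytes, max_rate_mpps, resolution=0.01):
    lo, hi = 0.0, max_rate_mpps
    while hi - lo > resolution:
        mid = (lo + hi) / 2
        if run_trial(frame_bytes, mid) == 0:
            lo = mid          # passed with zero loss: try a higher rate
        else:
            hi = mid          # saw loss: back off
    return lo

# RFC 2544 repeats this for a range of frame sizes (64, 128, ..., 1518 bytes),
# which is why quoting throughput without the loss criterion is meaningless.
```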