All right, we're going to start. Good crowd. We're going to try to go quickly: 30 minutes of acronyms and dense technical information, and then we'll save about ten minutes at the end for questions.

I'm Anita Tragler. I'm a product manager at Red Hat focused on NFV and networking, looking at data path acceleration and network orchestration.

My name is Greg Smith. I'm a product manager in the broadband group at Juniper Networks, actually based here in Westford. My specialty is virtual BNG, and I went to high school five blocks from here, so I'm glad people are here in Boston. Welcome. How many of you are new to NFV, or just starting with NFV? Wow, there are some, and I'm assuming the rest of you are already familiar with NFV and have been working with it for some time.

This is the only introduction slide we have on NFV. With NFV we are taking the network functions that used to run on appliances, taking the software pieces, and re-running them on servers - COTS, off-the-shelf commercial servers. You can see that all of the top service providers in North America, Europe, and Asia are moving, or looking to move in the next three to five years, to virtualize all of their networking functions.

This is an overview of what ETSI NFV defines as the network architecture. At the top we have the orchestration pieces, which are very close to the service providers, and at the bottom we have the NFV infrastructure. We're going to focus mainly on the NFV infrastructure and the virtualization. OpenStack comes in as the virtual infrastructure manager (VIM), and the VNFs, like Juniper's vBNG, sit at the VNF layer right above the infrastructure. This slide shows where Red Hat and OpenStack fit in: you have your OS and your OpenStack platform, which runs your virtual infrastructure manager, the controller, and then you have all the agents that sit on your compute nodes.

So we're going to try to touch on seven key challenges, and we're going to take a very specific application. The last presentation was fantastic, but that was thinking about how we would do it someday; we're going to talk about how we would do it if we had to do it today or next week. The seven things, if you will: performance, connectivity, multi-site, multi-tenancy, programmability, high availability, and security. Like I said, we're going to have to throw it at you really fast, and we're going to take my favorite use case: broadband networks and the virtual broadband network gateway.

So here's your three-minute "what is broadband". There are hundreds of thousands of homes on the left, and each one connects over a one-to-one wire to what we call an access node; there are probably thousands of those. Think about a country like Italy: there are four million broadband users and about 4,000 access node locations - we'll call those the central offices. Then there's potentially an aggregation network, and then the POP, if you will; there are about 600 of those POPs, and that's where the vBNG sits. And then there's a core, maybe two or three core sites. So think about it like a tree aggregating up.

The key to broadband is what happens when you turn on the DSL modem or the ONT in your home. It sends out a DHCP discover that says "I need an IP address." That's a broadcast message with no destination IP - a layer 2 broadcast. The access node inserts two 802.1Q VLAN tags, QinQ VLANs, and forwards the broadcast message, still with no destination IP address, to the virtual BNG. The virtual BNG says: oh, you're on outer tag VLAN 100, inner tag VLAN 10 - who is that? That's 100 Main Street, Raleigh. Oh, it's Anita. It looks her up in the database: does Anita pay her bill? Yes. How much bandwidth does she get - voice, video, and data? Then we apply the services and we give you an IP address. That's what we call a broadband network gateway.
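To make the two-tag picture concrete, here is a minimal sketch of the kind of QinQ stack the access node builds, expressed with iproute2 - the interface names and VLAN IDs are illustrative, not from the talk. The outer service tag is 802.1ad, with an ordinary 802.1Q customer tag nested inside:

```bash
# Outer (service) tag: VLAN 100, 802.1ad; inner (customer) tag: VLAN 10, 802.1Q.
# Together the two tags identify one subscriber line, as described above.
ip link add link eth0 name eth0.100 type vlan proto 802.1ad id 100
ip link add link eth0.100 name eth0.100.10 type vlan proto 802.1q id 10
ip link set eth0.100 up
ip link set eth0.100.10 up
```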
So this stresses the OpenStack cloud in a couple of ways. Right now the physical broadband network gateway is about half a rack high; it can aggregate hundreds of thousands of broadband subscribers and terabits of bandwidth. You could probably do the city of Boston with about three of them, and they cost hundreds of thousands of dollars. So it's a lot of bandwidth if you're going to play in the broadband space. I mean, 5G is nice - I look forward to seeing 5G someday - but right now we drive a lot of bandwidth.

The other thing is the L2 connectivity, and we'll go back over that a couple of times. We need to see a VLAN from the home, through the network, through the cloud gateway router, through the ToR, through the host operating system and the virtual switch if it's there, and into the guest VM. Both of those layer 2 tags have to be there; if the layer 2 tags aren't there, it won't work. So that's a biggie.

And then the other one is that there are a lot of locations. That's one of the beautiful things about this application: it's pretty much impossible to put in the public cloud, because it's physically tied to the network. So, fortunately for us, Amazon Web Services isn't really viable at this time - maybe someday in the future. So: a lot of locations. And we'll talk about multi-tenancy: once you get the packets into the cloud through the BNG, you want to be able to do things with them, you want to be able to program it, and you want to be able to distribute security. So that's our application. Generally, every now and then I say "this is the hard thing I want to do," and Anita says "well, Red Hat can do that" and tells me how. So now we're going to share that with you.

The first thing I asked for is 40 gigabits bidirectional of 512-byte packets on a single socket, and I'd like to have 10 cores for my application, the vBNG. That's what we can do with SR-IOV today. The open question is: can we do that with virtio? That was the challenge I gave to Anita, and Anita said "pretty much yes, any day now" - and I think we're already doing it, right? Well, let's see. Let her show you how we would do it.
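As a rough sketch of what that ask translates to on the OpenStack side, a vBNG-class guest would typically get dedicated, pinned cores on a single NUMA node and huge-page-backed memory. The flavor name, RAM, and disk sizes below are assumptions for illustration; the extra-spec keys are standard Nova properties:

```bash
# Hypothetical flavor for the 10-core vBNG ask: pinned vCPUs, one NUMA node,
# 1 GB huge pages behind the guest (needed if the guest runs a DPDK data path).
openstack flavor create --vcpus 10 --ram 32768 --disk 40 vbng.large
openstack flavor set vbng.large \
  --property hw:cpu_policy=dedicated \
  --property hw:mem_page_size=1GB \
  --property hw:numa_nodes=1
```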
These are the data path options we have today for OpenStack. The first one on the left is Open vSwitch. That is the default data path: when you configure everything without making any changes and deploy your OpenStack, you get Open vSwitch on your compute node. It has a kernel data path; there's a userspace Open vSwitch piece for configuration, and then packets move through the kernel data path.

Then, when you want acceleration and higher performance - Greg says "hey, I want 10 million packets per second" - we say, okay, let's try something different. Let's move to SR-IOV or DPDK, the Data Plane Development Kit. Both of these are options to bypass the kernel and give you higher performance, avoiding the interrupts and the latencies of the kernel, and they do it in different ways. SR-IOV has a dependency on the NIC: you need virtual functions available from the NIC, so you're tied to your hardware - you have a hardware dependency in this case - and a lot of deployments today are entirely SR-IOV, with very few moving off it.

But Red Hat OpenStack now has full GA support for OVS-DPDK, which is Open vSwitch with a DPDK-accelerated path, and now we're seeing customers starting to move and take that path. The reason why is that they don't want to be dependent on the hardware NIC, even though the CPU overhead of SR-IOV is low, and they want some switching functions: they want to be able to do bonding in the host itself, they want to be able to do some security groups, and they want to be able to do live migration of their VNFs. With OVS-DPDK you have the advantage of having a switch, so you can do live migration, you can do bonding, you can do encapsulations without having to depend on your top-of-rack switch, and you can do some overlays. But with DPDK there's a cost to it: you have to allocate some cores and memory for your performance needs.
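For the SR-IOV path, the two moving parts are carving virtual functions out of the NIC on the compute node and asking Neutron for a passthrough vNIC. A minimal sketch - device, network, image, and flavor names are all assumptions, and the NIC driver must support SR-IOV:

```bash
# On the compute node: create 8 virtual functions on the physical NIC.
echo 8 > /sys/class/net/ens1f0/device/sriov_numvfs

# In OpenStack: request a "direct" vNIC (a VF) on a provider network,
# then boot the VNF attached to that port.
openstack port create --network provider-net --vnic-type direct vbng-data-port
openstack server create --flavor vbng.large --image vbng-image \
  --nic port-id=vbng-data-port vbng-1
```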
Let me just take this opportunity - I keep flipping ahead; we'll go quickly through these next few - to say that we talk a lot about OVS, and I work for Juniper; we have the Contrail vRouter. Pretty much everything we say about OVS is doable with the vRouter, in case there are any Contrail people in the room. So think of it generically as the virtual switch.

Now, this slide shows a more complicated layout of how we can achieve all of these options together. We have control plane and signaling VNFs that can sit on OVS the way it is today, with your default configuration. The way OpenStack deploys today, an OVS deployment will have either a Linux bridge or the OVS br-int integration bridge, and then either an external bridge or a tunnel bridge, which allows you to do encapsulation. Usually the external bridge takes you directly out to the internet; it gives you the floating IPs that allow you to have internet connectivity, and that's your data path. Usually the blue is the north-south path - the data path for high-throughput applications, data going in and out. Anything that's east-west traffic, going between VMs, uses tunneling like VXLAN or VLAN, and that's your tunnel bridge.

If you're using SR-IOV, you're typically looking at your fast-throughput data path. For the data plane VNFs we have two of them here: one is using SR-IOV, and the other is on OVS-DPDK, which has three NICs coming in. You have to separate the NICs because you don't want your telecom data traffic tied up with your management traffic, so you need separate NICs for your data, your control plane, and your management. The management is the OpenStack management, to bring up your compute nodes and to provide the services: NAT, DVR, DHCP.

In addition to deploying the data paths, you have to worry about performance tuning. For SR-IOV there is very little cost: you don't have to do much performance tuning on the host other than allocating a couple of cores for host management purposes. You could probably manage with one core, but we recommend two, and the rest of the cores are available to your VNFs. We can get almost bare-metal performance with SR-IOV: you can run 21 million packets per second bidirectional, which is line rate - the best we can get.
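A sketch of that host-side core split for an SR-IOV compute node - the core numbering assumes a hypothetical 20-core host, and the config tool and service name assume an RDO/Red Hat style install with the Newton-era `vcpu_pin_set` option:

```bash
# Keep cores 0-1 for the host OS, hand 2-19 to VNFs. The matching kernel boot
# arguments (set via grub) would look something like:
#   isolcpus=2-19 default_hugepagesz=1G hugepagesz=1G hugepages=32
crudini --set /etc/nova/nova.conf DEFAULT vcpu_pin_set 2-19
systemctl restart openstack-nova-compute
```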
Now, for OVS-DPDK, that's when you have to allocate some cores. In addition to allocating dedicated CPU cores for OVS-DPDK, they need to be isolated CPUs, and you have to have huge pages. You can see that on each NUMA node, on each socket, you have to take out at least a couple of cores for switching - for OVS-DPDK's PMDs, the poll mode drivers, which run at 100% CPU most of the time so that you can get the throughput you want.

One caveat is that you cannot have OVS and OVS-DPDK running together on the same node. But as we showed in the previous presentation, with Newton we have composable roles today, so you can do a combination of data paths: you can run OVS with SR-IOV, or SR-IOV with OVS-DPDK, on the same host, by creating roles and identifying roles for your VNFs - saying this set of VNFs will have a combination of SR-IOV and OVS.

And here's a sample of what we call a PVP test - a physical-virtual-physical throughput test that we do in all of our testing for benchmarking our vSwitch. We want predictable, linear performance; that's what the BNG needs. We start off with one queue and one core, and then we move up and assign more cores. Each PMD is one vCPU, one thread, so two PMDs make one core, assuming hyperthreading. You can see there's a nice linear graph for 64-byte frames, and that's what we'd like to have as we provide more cores to our DPDK vSwitch. With 64-byte frames we can do about 5 million packets per second on one core, and in order to achieve the 10 million packets per second required by the BNG, we're asking for at least two to three cores; we're roughly just below 10 million packets per second at two cores.
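Here is what that core and memory allocation looks like on the vSwitch side, assuming a newer (2.7-style) OVS-DPDK that takes these settings through the `Open_vSwitch` table - the mask and memory values are examples, not defaults:

```bash
# Pre-allocate 1 GB of huge-page memory per NUMA socket for the DPDK datapath.
ovs-vsctl set Open_vSwitch . other_config:dpdk-socket-mem="1024,1024"

# Pin PMD threads to cores 2-3 plus their hyperthread siblings 18-19
# (bits 2,3,18,19 set -> mask 0xC000C): two full cores of polling.
ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=0xC000C
```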
Yeah, so maybe just to sanity-check: that's the performance section. SR-IOV is the best thing today. Virtio is getting better, using the vRouter or OVS-DPDK; it's no free lunch - you have to give a couple of cores to the host operating system - but it's going to approach 10 million packets per second, and maybe that's enough. Anyway, we'll be here afterwards if you want to hear it in more detail.

So the throughput is getting there. The next question is layer 2 connectivity. I kind of already went over this, but basically there are two QinQ tags. First of all, I need VLANs in the guest VM - the virtual BNG is the guest. I need to see the VLAN that arrives at the physical NIC in the guest VM exactly as it arrives at the physical NIC. The basic reason is that the outer tag represents Main Street and the inner tag represents 100 Main Street, literally; the two tags together give me the exact address. I can look up Anita once I see those two tags, because I have the database. So I hope that's clear. You hear a lot of people say "layer 2? why do you need layer 2?" - almost all the clouds I run into are all layer 3. No: if you can't have the VLANs, it's a deal breaker for the virtual BNG. Fortunately, it's mostly solved at this point. QinQ tags are sometimes called S-VLAN and C-VLAN, sometimes stacked VLANs, sometimes inner and outer tags, but the bottom line is: two tags.

I'll just mention that what often happens is the tags get stripped - either by the driver in SR-IOV, or by OVS, or potentially by the vRouter - and the packets get dropped in the host operating system. People use VLANs for other meanings - VLAN one is guest one, VLAN two is guest two - but for broadband we need the raw VLAN as it came in.

Also, to add to this: this is a new feature added in Newton for OpenStack, and the whole point is to not strip out the VLANs, as needed by the BNG, in order to support VLAN trunking on both your OVS vSwitch and your OVS-DPDK vSwitch. You need VLAN trunking available straight into the VNF. The other issue Greg mentioned was VLAN stripping at the NIC. You have to verify the driver from your NIC vendor, because this is highly dependent on the driver; some NIC vendors don't support it, and that's where you run into problems. There's the ethtool command to turn the rx VLAN filter off, which sometimes does not work on some NIC drivers. This has been fixed, but it's something to look out for.
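A minimal sketch of both pieces - the Newton "VLAN aware VMs" trunk feature on the Neutron side, and the NIC-level check just mentioned. Trunk, port, and device names are hypothetical:

```bash
# Neutron trunking: the subport carries VLAN 100 into the guest with the
# tag intact, instead of being stripped in the host.
openstack network trunk create --parent-port vbng-parent-port vbng-trunk
openstack network trunk set \
  --subport port=vbng-subport,segmentation-type=vlan,segmentation-id=100 \
  vbng-trunk

# NIC-side: make sure the driver is not filtering VLAN tags on receive
# (support for this toggle varies by NIC driver).
ethtool -K ens1f0 rx-vlan-filter off
```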
Regarding L3 connectivity and QoS policy: this is a new feature that was added, again in Newton, to be able to set your ToS or DSCP. This works great if you have a single IP header. But when you're running cases where you have VXLAN or GRE - and I think Greg has some services where they're going to do VXLAN or GRE - then you have an outer IP header and an inner IP header, and today we do not have the capability to inherit from the inner header, or to set a unique value for the inner. These are important because you might want different QoS for your inner and your outer. This is work in progress; it's being looked at. It's supported in OVS; it needs to be added to Neutron.
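For reference, this is what the Newton DSCP-marking rule looks like today - policy and port names are hypothetical, and the DSCP value (26, i.e. AF31) is just an example. As noted above, this marks a single IP header; inner headers of VXLAN/GRE-encapsulated traffic are not inherited or set separately:

```bash
# Create a QoS policy with a DSCP-marking rule and attach it to a port.
openstack network qos policy create video-qos
openstack network qos rule create --type dscp-marking --dscp-mark 26 video-qos
openstack port set --qos-policy video-qos vbng-data-port
```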
Yeah, third challenge. We've covered high-performance throughput and layer 2 connectivity; now, multiple sites. This one we're just going to touch on, because there are other presentations that talk about it. As we saw before, you'd like to put these servers - these compute nodes, if you will - out near the edge, in your POPs or eventually in the COs. Again, in a country the size of Italy there are hundreds of POPs and thousands of COs. And it is critical that the compute node be at that physical location in the network: the layer 2 / layer 3 boundary, which is what the BNG represents, has very tight restrictions on where it can be. Without going too deep into the weeds here, the backhaul - that IP/MPLS aggregation network - is really expensive. Let's say my BNG was in New York City and I'm trying to connect from Boston: if you want to carry a layer 2 VLAN over a transport network, you need a really expensive piece of fiber, really expensive ROADMs and other fiber equipment at either side. It's much easier if it's IP, and that's where the BNG sits: in Boston you have a BNG, it turns the packet from layer 2 into IP, and now you're not tied to a VLAN over the long haul; it gets carried through the MPLS tunnel and pops out the other side. Anyway, a little too much networking perhaps, but the point is: many locations.

One of the things we think about when we look at locations is that the OpenStack compute node is actually doing money-making work for me. The control node is great - it manages my cloud, a necessary thing - but the more compute nodes I have, the more revenue I can generate from my hardware. So I think of the control nodes as overhead - not to diss the great work people do on OpenStack control nodes - but you want to minimize the ratio of control to compute. If you put everything in one location, which I'm showing on the right there - I think that's Montreal - you have 18 servers, two of them are control and 16 are compute. That's good: most of the hardware I buy is actually making money for me, pushing 40-gig of packets and connecting broadband subscribers. If you distribute it, you might run into a situation where you have to have a control node in every location; now I've got six control nodes and 12 compute nodes, so of my 18 servers a smaller fraction is making money. So the question I'm rhetorically asking Anita - but really throwing out to the OpenStack community - is: how can I have a remote location with the minimum of control node capacity necessary? These are service provider networks, so if someone would stand up and say "100 milliseconds maximum latency," then no problem: I'll go tell Bell Canada they need 100 milliseconds between Montreal and Toronto, and they'll either do it or they won't. So that's the challenge for the broadband network application: how can we distribute the compute nodes and minimize the amount of control node capacity we need at each location?

Right, so let's look at Greg's problem in a little more detail. Here we have two POPs, or points of presence, that Greg has recommended as his remote sites. He's putting some services on the remote side - DHCP termination, PPPoE termination, and a video-on-demand cache for Netflix or something - he has a router sitting there, and he's only serving a small region, a small area. So he doesn't have a lot of compute, and he doesn't want to put a whole lot of control nodes over there. And then he has a core site, his data center, where he puts additional optional services, maybe control plane services; but all his data plane and high throughput is moved to the edge so that he doesn't have to use a lot of bandwidth. Let's look at what we have available with OpenStack today.

This is very new: multi-site for OpenStack is a big topic of discussion, on how to solve this problem. There's another talk tomorrow by a solutions architect from Red Hat who goes in depth on multi-site, and I'd definitely recommend attending if anybody's interested. He goes into all the different types of scenarios - mini, ultra, and really small ultra-light OpenStack deployments for remote edge sites. You have two options here. One, as Greg pointed out, is to put a control node at the edge and make it a hyper-converged architecture where you get everything on one box: instead of deploying the three control nodes required for high redundancy, put a single node, hope for the best, and make it a very lightweight hyper-converged compute as well. The other option is to remove the control node, run headless, have your controller at your central site, and set a latency limit - the maximum latency you can support - and a limit on the number of POPs you can support at scale. This needs to be benchmarked; there are some numbers right now, but they're not concrete - we're still working on them. So this is definitely work in progress. And have a private VPN between your POPs and your data center.
Moving on: number four. So, why multi-tenancy? By multi-tenancy I just mean different kinds of VNFs, each with their own path through the cloud. What I'm proposing here is a high-throughput SR-IOV path that runs north-south - that's your leftmost dotted line - and then a lower-throughput (but getting much better now) east-west path through the cloud, with OVS or vRouter with DPDK. So you might be bringing a broadband packet in and passing some fraction of the broadband packets to DPI, which says "my kids are watching something they shouldn't be," comes back to the broadband function and says "drop that packet." So there's an east-west path - and there are multiple east-west paths, which, yeah, maybe you can explain, and then I'll go into the next one. But yeah: multi-tenancy - many different kinds of paths through the network and many different kinds of applications on your compute nodes - is also critical for the vBNG.

Right. And so this slide gives you the east-west services. You have your BNG sitting on the right, and it has three types of connections available, configured by your Neutron network services: firewall, NAT, DVR, DHCP. You have your NICs coming in carrying all these different kinds of traffic; it could all be on the same NIC or on different NICs, that's up to you. For your east-west traffic: you have your high-throughput traffic coming in, and some of that traffic is allocated by the BNG for value-added services that the BNG is offering - for example, deep packet inspection and firewall services, or video-on-demand. That subset of customers is shipped over on an east-west path to a different VNF in order to service them. This could be in a different location - at a central site - or it could be on a different host at the same site. And then you have VXLAN tunneling, or VLAN tunneling, to segregate your tenants and your services.
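A sketch of what that segregation might look like: a dedicated VXLAN overlay for one east-west service chain, with each VNF in the chain given a port on it. All names are hypothetical, and specifying the segmentation type explicitly typically requires admin rights:

```bash
# A VXLAN tenant network just for BNG <-> DPI east-west traffic.
openstack network create --provider-network-type vxlan east-west-dpi
openstack subnet create --network east-west-dpi \
  --subnet-range 10.10.10.0/24 east-west-dpi-subnet

# Attach the running vBNG to the overlay (repeat for the DPI VNF).
openstack port create --network east-west-dpi vbng-ew-port
openstack server add port vbng-1 vbng-ew-port
```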
Okay, this one's a little hard to read, but this is programmability. So you might want one path per customer - see, I can't even get the colors on that right. Customer A comes in over PPPoE - that's the vBNG - and customer A is going to get firewall, NAT, and DPI, while customer B only gets firewall and no DPI. And these things change all the time. The idea is that the service provider sells, say, a high-security firewall service to you, and then some operator in the service provider network clicks a button in their provisioning GUI - this is my dream, anyway, for NFV people to deliver someday. Then the BNG knows which subscriber that is, it provisions a new path through the cloud, and now we redirect the packets for that subscriber to the firewall service. That dynamic, that easy. The idea is multiple paths, easily programmable. Of course it's all possible with Python scripts today, but the question is how much architecture we need to put around it to make it service-provider-class.

Right, so Greg wants it at the click of a button. That's the hard part, I think: to be able to chain these services one after another, on demand, as services come up. For that we need OpenStack and Neutron to be able to talk to an SDN controller, with the policies being pushed by the service provider to the SDN controller, as well as the Neutron agent sending all configurations to the SDN controller. You have different options for an SDN controller; I think the last presentation covered it well. In fact - I don't know if they're still here, but someone was asking for a test case - I'm happy to give you that test case: give me a button that says "apply a firewall to any customer based on their address." I'll give you the customer database, I'll give you the filters to make the path; all you've got to do is configure the switches and the servers.

Also work in progress is adding support for NSH, which is a new header that allows you to have multiple layers of services that follow. This is a dynamic header that's injected by your SDN controller at the edge of the network, and based on the stack of services, the data path automatically knows what path that packet is going to take, because you push these headers onto the packet as soon as it enters at the edge: hey, customer A is going to have this set of services. We've got two minutes to get to questions - has anyone heard of NSH? Just heard of it, thinking about it? Okay.

Two more, so one minute each. High availability in one minute is incomprehensible, but I'm just going to boil it down to the server and the ToR. And it's funny, because in networking we call it a LAG, or aggregated Ethernet, and in the server world everyone calls it bonding - we're talking about the same thing: bonding two ports together and making them look like one port. A really important feature, definitely something you've got to think about. You run a multi-chassis LAG on your ToRs so they don't fail, and you run what I call a LAG on the server - or bonding on the server - to make sure that if your NIC dies or your wire gets cut, you won't die. There are three different ways to do it, depending on whether you're running SR-IOV or virtio. So pay attention to the LAG, and pick the place where you're going to do the bundling. That's really all I can say at this point.

Excuse me - and looking quickly at what we have today for OpenStack deployments: for SR-IOV, the bonding happens at the top-of-rack switch and at the guest; there is no switch in the host to do the bonding. But you do need, from your NIC vendor - possibly not your VNF vendor - trusted VF support, especially if you're running VLANs and VLAN tagging, because when you're doing a failover you need to overwrite the MAC so that your slave interface can take over. That is a limited feature on certain NICs, and we're constantly adding new NICs that support trusted VFs. But this is a must: if you don't have it, your failover probably won't work.
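The trusted-VF knob itself is a one-liner with iproute2 on reasonably recent kernels - the device name and VF index here are illustrative:

```bash
# Allow the guest behind VF 0 to change the VF MAC (needed for bond failover).
ip link set dev ens1f0 vf 0 trust on
ip link show ens1f0    # the "vf 0" line should now report "trust on"
```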
With OVS-DPDK we have DPDK bonds now, and you can support LACP as well. Now everything goes through DPDK, and you'll have separate NICs - your provider network separated out from your management network - but they're going through the same host, through the same OVS-DPDK bridges. In this case the bonding is happening in the vSwitch, and you can do DPDK bonds; or, if you're using just vanilla OVS with the kernel path, you can do OVS or kernel bonds.

Quickly, about high availability of the VNFs today: the way high availability works is that VNFs work together in a cluster, and you have standby and active VNFs synchronizing their state with each other. If one VNF goes down, the standby VNF doesn't hear from it and takes over all the sessions, so you don't lose your sessions. That's how it works today between VNFs. The only catch for the orchestration is that the standby and the active VNFs should not be on the same host, and this is hard, because OpenStack needs to know which one's a standby and which one's an active.
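Nova's anti-affinity server groups are one way to express at least the placement half of that requirement - the scheduler keeps the two instances on different hosts, even though it still doesn't know which is active. A minimal sketch with hypothetical names:

```bash
# Put the active and standby vBNGs in an anti-affinity group so the
# scheduler never places them on the same compute node.
openstack server group create --policy anti-affinity vbng-ha
GROUP_ID=$(openstack server group show vbng-ha -f value -c id)
openstack server create --flavor vbng.large --image vbng-image \
  --hint group=$GROUP_ID vbng-active
openstack server create --flavor vbng.large --image vbng-image \
  --hint group=$GROUP_ID vbng-standby
```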
The second part is that we've got a request to support live migration - or at least cold migration - of VNFs. This is for when you want to upgrade to the latest, greatest OpenStack version, or you want to upgrade or repair your host, or change the NIC drivers, and you want to offload all of the VNFs onto a separate host. Then you have to do at least a cold migration - we would like one day to support live migration - but basically: move all the sessions off to your standby VNFs, then migrate, copying all the VNFs on this host into network storage, copy them back onto the second host, and then move all the sessions back. That's what we would like to do, and this is work in progress as well.

Yeah, briefly, the last of the seven pillars: distributed security. In this case we're specifically talking about security groups - blocking IP addresses and ports within the host operating system, within OVS or within the vRouter - to make sure that you can't attack the control plane of the guest VM, that you can't run DDoS attacks there. Today you can do that with Linux bridge security groups, but it can impose a big throughput penalty, not to mention it's a real pain to configure, because sometimes our BNG wants control plane traffic of special kinds - UDP or other control plane traffic has to get through. So it's been a real pain and it's been a performance bottleneck, and again, I'm happy to say it's mostly solved.

Yep. So we're trying to get rid of the mess of multiple bridges. You have Linux bridges and then you have OVS bridges, and the reason you need the Linux bridge in the middle is to push your iptables rules - and that gives you a performance hit. With connection tracking available in OVS 2.5 - and 2.6 for the DPDK version - you can now do all your iptables-style security groups in OVS itself. This gives you a 20-30% performance boost. And eventually there are NIC vendors looking to offload this, so you could get an additional boost if security groups are moved to the NIC at some point. That's the plan and the end goal: to be able to do all of your security and distributed firewalling in the vSwitch, as close as possible to the VNF, instead of far away on some third-party appliance.
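Switching Neutron from the iptables/Linux-bridge hybrid driver to the native OVS conntrack firewall is a one-line agent config change - the file path and service name below assume an RDO/Red Hat style install:

```bash
# Use the OVS conntrack firewall (OVS 2.5+) for security groups, removing
# the extra Linux bridge hop between the VM and br-int.
crudini --set /etc/neutron/plugins/ml2/openvswitch_agent.ini \
  securitygroup firewall_driver openvswitch
systemctl restart neutron-openvswitch-agent
```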
And taking it away: this is a repeat of all seven pillars that we need. A lot of information, I know, but hopefully that was useful. The bottom line is, we're looking at what we would do if we had to deploy it today - and we do deploy it today; we're right at the point of deploying in real production networks now. How close are we? I think we could do it today, in a sort of basic way, with a lot of compromises, and we're about to get better and remove some of those compromises. We highlighted some of the compromises we've made and what we need to really get there. I think there will be a lot of improvements in the next year. Does anyone have any questions? Feel free to shout out. It's kind of a super specialized application, but when people say NFV, we thought we'd bring you a real network function: the broadband network gateway.

[Audience question about the performance of state synchronization between VNFs]

I'm not sure about the synchronization from one server to another - you mean the stateful high availability from one VNF to another? Not a lot, because all you're really passing is the state. That's not exactly iptables-related, but: if there are a hundred thousand broadband subscribers connected, they'll have a hundred thousand IP addresses, and you can hand a hundred thousand 20-byte messages across to the backup control plane fairly quickly. So it's on the order of, you know, less than a gig. The iptables piece is used more for blocking things that come in to attack the guest. Your iptables is just saying: if it's coming from a closed network, don't let it through; if it's got the wrong port number or the wrong protocol ID, don't let it through. For networking we consider that a trivial firewall that just does port blocking. What we're saying here is that you can also do connection tracking - TCP connection tracking. At the Linux level that's done by Linux bridge today - ancient technology; it'll choke you down to 200,000 packets per second through your bridge. It's bad. So that's gone: everything you could do before with Linux iptables you can now do with OVS, and now your throughput is up to 500,000 without DPDK - and I think you're offering 4 million with DPDK. And no, that's all completely within the server itself. Good question.

[Audience question: how does this level of performance compare to vRouter with DPDK?]

vRouter is five percent better - no, it's about the same. I'm not going to throw out a benchmark answer for you without having a real benchmark, but vRouter is just as good as OVS-DPDK, last I heard; 4 million seems doable, with a lot of variables there. Sorry, you'd have to ask a Contrail person. - That is true, they are comparable, but vRouter has a kernel piece as well. They're moving to a pure userspace, DPDK vRouter at some point, and that's work in progress. So right now you do have a kernel piece that's tied to the kernel, and they've tuned it to get much better performance than OVS kernel, but DPDK is coming next, and then you'll see performance at the level of DPDK. - But Contrail 3.2 already has DPDK? - Yeah, I'm not the Contrail product manager, you'd have to ask them, but I'm pretty sure it's yes. Good Juniper-style question, thanks. Yeah, I'm open to it for anybody - that Contrail question is for anybody.

[Audience comment] In your original list of seven must-haves, one thing I may suggest is monitoring of your pieces, because I've struggled a lot with that. In your entire stack, if one piece goes awry, 50% of the effort is finding out where the problem is and what the problem is. So in this list, I would like to see one thing on monitoring of this entire stack, for NFV specifically.

Great, great comment - you have a second to figure out where the fault is, right? I mean, we were struggling to get it to seven; we wanted ten. But that's a really tough one, especially depending on whether you're using SR-IOV or virtio: you have to watch the bridge, you have to watch the NIC, you have to watch the guest. Juniper has a new tool, AppFormix, that will solve a piece of the problem, but that's an open question. Good, good comment. - Red Hat has a project called Skydive that allows you to monitor all the Open vSwitches distributed across all your compute nodes. So if you're looking for monitoring, that's something - we need a traceroute, if you will, that touches everything, for OpenStack. - Right, and that's a good point: service assurance is a big one. Absolutely, good comment. Thanks. And latency - we're doing performance right now, meaning throughput, but latency is the next one, and jitter will be after that.

More questions or comments? Time's up. Well, I'm Greg Smith from Juniper - I have a card here. Thanks for your time.