Hi, welcome, everybody. We're here to talk about smart NICs and the move to next-generation 25 Gb, 40 Gb, 50 Gb, and 100 Gb networking in the data center. I'm Anita Tragler, a product manager at Red Hat focused on NFV and networking.

Hi, I'm Nick Tausanovitch, VP of Solutions Architecture at Netronome, focused on smart NICs and NFV. Anita is going to talk in detail about the options for server-based networking data path implementations, but I'm going to start with a description of the problem statement. Everybody's seen this chart of global network traffic exploding, going right off the chart. The point I want to make here is that the problem is even worse for NFV, because data centers implementing NFV do a lot of service chaining, where traffic bounces back and forth between services running in different VMs and containers. So east-west traffic is growing even faster than this graph shows. On the right side, we see that silicon is not getting any cheaper. In fact, costs are leveling off and we're no longer getting the benefits of Moore's law. That means we need to be very, very smart about the way we implement data path functions to get good cost efficiency in the data center. So, Anita.

Here are the data path options we have today for 10 Gb servers. With OpenStack, our default configuration is Open vSwitch with the kernel data path. It gives you a ton of features: switching, bonding, overlays. But if you want higher performance, its performance does not meet your needs, and you have to move to the right. What we have in NFV deployments today is a lot of SR-IOV. It's hardware-dependent on your NIC vendor, but you can get line rate, line rate at 10 Gb. The catch is that there are no switching options on the host; if you want any switching, you're dependent on your top-of-rack switch. Moving further to the right, newly GA for Red Hat OpenStack is OVS-DPDK, Open vSwitch with the Data Plane Development Kit. With that you get direct I/O to the NIC and some switching options, bonding, overlays, some overlay offloads, but you have to give up a few cores on your host, devoted entirely to the DPDK poll mode drivers. (Minimal sketches of these three setups appear below.) If you want to go beyond 10 Gb, and 25 Gb and 50 Gb are the new 10 Gb in the data center, and you have 25 Gb servers, we have to move to OVS offload as our new next-generation technology. So what are Red Hat and all our NIC vendors doing?
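To ground the first option above, here is a minimal sketch of the default kernel-datapath OVS setup with an overlay port. The NIC name eth0, bridge name br0, and the peer address 192.0.2.10 are hypothetical, not from the talk:

```bash
# Kernel-datapath OVS: create a bridge and attach the physical NIC
ovs-vsctl add-br br0
ovs-vsctl add-port br0 eth0

# Overlay support is built in: add a VXLAN tunnel port to a peer host
ovs-vsctl add-port br0 vxlan0 -- set interface vxlan0 \
    type=vxlan options:remote_ip=192.0.2.10
```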
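For the SR-IOV option, virtual functions are carved out of a physical function through sysfs; the PF name enp3s0f0 and the VF count of 4 are hypothetical, and passing a VF through to a VNF (e.g., via libvirt) is beyond this sketch:

```bash
# Ask the PF how many virtual functions it supports
cat /sys/class/net/enp3s0f0/device/sriov_totalvfs

# Create 4 VFs; each appears as its own PCI device that can be
# passed through to a VNF, bypassing the host switching stack
echo 4 > /sys/class/net/enp3s0f0/device/sriov_numvfs
lspci | grep -i "virtual function"
```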
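And for OVS-DPDK, the cores given up to poll mode drivers are set with a CPU bitmask. This sketch assumes an OVS build with DPDK support and a NIC at the hypothetical PCI address 0000:03:00.0:

```bash
# Enable DPDK inside OVS and dedicate cores 1-2 (bitmask 0x6) to
# PMD threads; these cores spin at 100% polling the NIC
ovs-vsctl set Open_vSwitch . other_config:dpdk-init=true
ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=0x6

# Userspace-datapath bridge with a DPDK-bound physical port
ovs-vsctl add-br br0 -- set bridge br0 datapath_type=netdev
ovs-vsctl add-port br0 dpdk0 -- set Interface dpdk0 type=dpdk \
    options:dpdk-devargs=0000:03:00.0
```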
We are working with multiple NIC vendors to get OVS offload both in the kernel and in OVS-DPDK. Our goal is to make sure we have everything upstream, all drivers upstream first, and to integrate with OpenStack; that's a must as well. And our goal is to standardize the API across the OVS kernel path, OVS-DPDK, and OpenStack.

So, a little bit about smart NICs. A smart NIC's sole purpose is to accelerate and offload the server networking data path, and Netronome is a vendor of smart NICs. There are many models for offload, as Anita just alluded to. At a high level, what we're doing is moving functions that are traditionally implemented in the server data path, either in Linux or in something like Open vSwitch, where we're doing complex functionality involving overlay processing, traffic classification, QoS, security rules and policies, and things like that which are not typically done in the NIC card today. The smart NIC actually does do those things, so we are moving that functionality from the server into the smart NIC, as shown in this diagram, and bypassing, either partially or completely, the server networking data path, which is eating up CPU cycles, bogging down the server, and in many cases creating a bottleneck that starves the VNFs trying to execute on the server. The smart NIC pulls all of that down into the NIC card itself. In this particular case we're showing it implemented in the Netronome Flow Processor, our purpose-built silicon designed to implement the smart NIC functionality. And Anita is going to go into more detail now about how this is done.

Over here we have a use case we're working on today with a number of NIC vendors: SR-IOV with OVS fallback. This merges the worlds of OVS and SR-IOV. You have SR-IOV VFs going directly to the VNF, but you also have a PF, a physical function, going into the OVS bridge. So you now have OVS in two locations, OVS in the kernel and OVS in the NIC, two OVS instances, and your OVS control is offloaded using TC flower. TC, the traffic control subsystem in the kernel, decides which flows get offloaded to the NIC's switch, while some flows continue to stay in the OVS bridge on the host. (A sketch of enabling this follows below.) Most of the features shown in white are already being worked on and available; some of the advanced ones, connection tracking and fallback to OVS, are not supported by all NIC vendors, so that's on a case-by-case basis.

Then there are the virtio options, if you're not interested in SR-IOV and you want to do live migration; Nick will go into a few more of these as well. One is if you want to use OVS-DPDK and do a partial offload: the whole flow is not offloaded, but you can take advantage of NIC vendors like Netronome providing QoS, security groups, conntrack, and overlay offloads. You can still do that and still get pretty good performance.
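To make the TC flower path concrete, here is a minimal sketch of turning on hardware offload in OVS and inspecting where flows landed. The VF representor name enp3s0f0_0 and the match address are hypothetical, and support depends on the NIC driver:

```bash
# Let OVS push datapath flows down to the NIC via TC flower
ovs-vsctl set Open_vSwitch . other_config:hw-offload=true
systemctl restart openvswitch    # the setting is read at daemon start

# See which flows were offloaded and which stayed in the kernel OVS
ovs-appctl dpctl/dump-flows type=offloaded
ovs-appctl dpctl/dump-flows type=ovs

# The same mechanism used directly: skip_sw insists on hardware
# offload, so the command fails if the NIC can't take the rule
tc filter add dev enp3s0f0_0 ingress protocol ip flower skip_sw \
    dst_ip 203.0.113.5 action drop
```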
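And for the virtio options, the usual OVS-DPDK wiring is a vhost-user port whose socket QEMU connects to, giving the guest a vendor-agnostic virtio device. The port name, socket path, and QEMU IDs here are hypothetical and build-dependent:

```bash
# vhost-user port on a userspace bridge; OVS creates the socket
ovs-vsctl add-port br0 vhost-user-1 -- set Interface vhost-user-1 \
    type=dpdkvhostuser

# QEMU side: connect to the socket and expose a virtio-net device,
# which keeps the guest vendor-agnostic and live-migratable
#   -chardev socket,id=char1,path=/var/run/openvswitch/vhost-user-1
#   -netdev type=vhost-user,id=net1,chardev=char1,vhostforce
#   -device virtio-net-pci,netdev=net1
```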
Another option, with another NIC vendor, is moving OVS completely into the NIC. There is no OVS on the host, and then you have to worry about tighter integration testing, because now OpenStack is talking directly to the NIC agent and the SDN controller.

Yeah, and then briefly, these are the models Netronome is very focused on. We have, basically, a user-space agent, a relay agent, that implements the virtio back end and talks to the VMs. The VMs see a nice, vendor-agnostic virtio interface, which is very good for live migration, but you get the very, very high performance of the smart NIC. That's our model on the left, and Netronome supports it today. Then we're moving to the model on the right, which provides the same functionality in terms of virtio and live migration, but the relay agent moves down into the silicon, so it's even more efficient. I won't go into depth here since we're running out of time, but these are the open source communities we have to work with. Everything in white is either delivered or shortly to be delivered; everything in yellow is still work in progress, and some of it is in the design phase.

Looking quickly at the advantages and limitations: yes, we can get line rate, especially if you're using SR-IOV. If you're using any kind of virtio, there will be some performance hit, but not as bad as pure OVS; it has to be benchmarked. You have all the advantages of the offload capabilities, and in some cases you have fallback, the option to fall back to OVS if you have control-plane traffic, a failure, or you've exceeded the capacity of the NIC. There are some limitations, of course. Using OVS offload, you may need to re-certify every time a new OpenStack or a new OVS releases, to make sure your NIC vendor is compliant with the latest one. Your bonding is limited: you cannot bond across NICs, only within the same NIC. Feature availability depends on your NIC firmware, so you need to make sure your NIC firmware updates at the pace of OVS and OpenStack. Virtio is needed for live migration; you can't do it with SR-IOV, and there are different options for that. And then you have varied flow capacity, based on your NIC vendor, how much memory or cache is available, whether you're running conntrack or other security options, and the five-tuple match. I think we're done. Any questions?

Right, as I said, it's not supported today; it's a future direction for Netronome, coming in an upcoming release. That's the situation with virtio: it's not yet decided whether the driver will be a separate version or not. You can come see me after the talk.