Hello, I'm Greg Alkenbart with Mirantis. I'm here with Abhijit Prabhune from Netronome, and we're going to talk to you about how to accelerate your VNFs. So first, a brief introduction to Mirantis. Mirantis provides one of the leading open source cloud platforms, using OpenStack, Kubernetes, OpenContrail, and Ceph to deliver clouds. We can help operate your clouds with up to a four-nines SLA and then transfer them to your internal IT team. As part of our OpenContrail development, we have integrated with Netronome and are here to present the performance results to you.

We focused during this development on VM-to-VM performance, because that's what is necessary for the distributed, large-scale VNFs that we typically support for our telco customers. As you can see here, we're using the virtio path, which provides maximum VM compatibility, and the virtio relay from Netronome is fixed to consume only two cores. Each VM is provided with four vCPUs and four gigs of RAM, and we're using a simple packet generator as the test tool.

So as you can see from our performance numbers, we were able to achieve close to 20 million packets per second. This is full duplex, so in and out counts as only a single packet: 15 million packets per second at 10 gig, 20 million packets per second at 40 gig. At the same time, we've tested the base DPDK vRouter on two cores at only 4 million packets per second. So using the Netronome accelerator, we're able to achieve roughly a 3 to 5x improvement in throughput. Latency is not shown here, but latency has also been significantly lowered. So, Abhijit?

Thanks, Greg. I'll add one more point to the performance results that Greg mentioned. If you look at the 10 gig numbers, we're actually maxing out the line rate of the 10 gig card with those two cores, and that's why we're also able to get much higher numbers when you switch to the 40 gig card.

A short introduction to Netronome. We've been delivering network accelerators for high-growth markets for about 15 years now. We focus on three market segments: NFV infrastructure, cloud IaaS networking, and security and analytics. We have pioneered network flow processing as the mechanism to deliver that and have established a leadership position in SmartNICs. We serve top-tier OEMs and public, private, and telco cloud service providers.

In order to deliver this, we offer acceleration cards for servers. There are two aspects to our product portfolio. The hardware portfolio consists of 2x10, 2x25, and 2x40 gig cards, and you saw the vRouter performance numbers that Greg showed for the 2x10 and the 2x40. On the software side, we offer OVS acceleration, vRouter acceleration, and CoreNIC, which basically makes the card behave like a traditional NIC.

By doing this, we relieve the performance bottleneck that happens on the host when you're deploying any kind of VM or VNF. It takes back the cycles that you would otherwise give to the NFV infrastructure and gives them back to the VMs, so you can run more VMs and generate more revenue. It also brings the speed of software innovation to hardware, in the sense that these cards are highly programmable and we can do any kind of programming on them. We have OpenStack-managed Open vSwitch, Contrail, and CoreNIC.

Specifically, we have partnered with Mirantis; we've been partnering for over a year, a year and a half now. We've done Mirantis OpenStack integration with OVS, Mirantis Cloud Platform integration with OpenContrail, and Mirantis OpenStack integration with standard Linux networking.
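A quick aside on the line-rate point above: the 10 gig and 40 gig figures can be sanity-checked with the standard minimum-frame arithmetic. This is a minimal Python sketch, not part of the talk, using the usual 64-byte frame plus 20 bytes of preamble and inter-frame gap; the quoted results are the ones from the slides.

    # Back-of-the-envelope check of the line-rate claim (illustrative only).
    FRAME = 64          # minimum Ethernet frame size, bytes
    OVERHEAD = 20       # preamble (8) + inter-frame gap (12), bytes

    def line_rate_pps(link_gbps):
        """Theoretical 64-byte packets per second on a link of the given speed."""
        return link_gbps * 1e9 / ((FRAME + OVERHEAD) * 8)

    print(f"10 GbE line rate : {line_rate_pps(10) / 1e6:.2f} Mpps")   # ~14.88 Mpps
    print(f"40 GbE line rate : {line_rate_pps(40) / 1e6:.2f} Mpps")   # ~59.52 Mpps

    # Quoted in the talk: ~15 Mpps at 10 gig, ~20 Mpps at 40 gig with the
    # accelerator, vs ~4 Mpps for the two-core DPDK vRouter baseline.
    print(f"Speedup vs baseline: {20 / 4:.0f}x")

At 10 GbE this works out to roughly 14.88 Mpps, which is why the ~15 Mpps result is effectively line rate; at 40 GbE the wire is no longer the limit, so the accelerator's ~20 Mpps and the baseline's ~4 Mpps give the roughly 3 to 5x improvement mentioned above.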
In all cases, you can get up to 10x CPU core savings, the 5x I/O throughput that you already saw with vRouter, and 100% VM and VNF onboarding and mobility, because we support both virtio and SR-IOV. Essentially, you can deliver an OpenStack-managed, homogeneous server infrastructure and get 3 to 6x better TCO.

I'll show a little bit about our performance on OVS as well. This is how we do our OVS offload: the OVS on the host gets transparently offloaded onto the SmartNIC, and you can get 20x to 50x efficiency gains depending on what you're measuring the performance against.

With that, I'd like to invite you to visit us at Booth 826 for Netronome or Booth C1 for Mirantis. And if you have time for questions, feel free to ask. Any questions?

Yeah, so the question is SR-IOV, virtio, and what is XVIO? XVIO is essentially us providing the virtio interface up to the VM while delivering performance like SR-IOV. That's the virtio relay that Greg showed: a small relay that sits between the VM and the SmartNIC and does the offload. So the performance numbers that Greg showed were with the virtio relay, absolutely, or with XVIO. We did not test the performance of the SR-IOV path; we expect it to be another five to seven million packets per second higher.

Yeah, so the question is, is it a host module, or is it a user space or kernel module? XVIO is a DPDK app. It sits on the host, presents the virtio interface up to the VM, and then does a transparent offload onto the SmartNIC. You can start with one core; the performance results presented for vRouter were for two cores.

You can do both, single queue and multi-queue. We were using multi-queue for the test, simply because DPDK pktgen can't really achieve 20 million packets per second with a single queue. So yes, four vCPUs were given to each VM for that reason, correct. The relay got two cores, and each VM had four cores for its packet generation.

Virtual function daemon? I'm not familiar with that; maybe you can stop by offline and I'll get the answer from my engineering team in the meantime.

We're trying to simulate lots of packet flows through the DPDK packet-gen application. Obviously, it does not really simulate the complexity of a real-world application, because all packets are 64 bytes, but other than that, there are lots and lots of flows. So this is unoptimized performance; we tried to achieve these results without doing anything weird. We could have gotten higher numbers. If, let's say, we had constrained the VMs to run on the same NUMA node as the NIC, we would have expected yet another 30% performance bump, but that's unrealistic, right? You don't want to do that. So this is unoptimized performance, and you can expect to really achieve 20 million packets per second, as opposed to twisting yourself into a pretzel to do so.

Yeah, I'll add to that. You bring up a very good point: there are a lot of different ways to benchmark, and we do benchmark in different ways. This is just one of the benchmarks that we showed here. We benchmark all the way from 1,000 flows, 10,000 flows, 250,000 flows, and so on for the solution. So absolutely, if you are more interested in that, maybe we can work it out.

So we'll hang around, folks. Unfortunately, we got the timing signal, so our sincerest apologies; we'll hang around here in the corner for a little bit answering additional questions, and please visit us at the booth. Thanks.
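A small footnote to the NUMA remark above: on a Linux host, the NUMA node a NIC is attached to is exposed through sysfs, so the pinning the speakers deliberately skipped can at least be checked in a few lines. This is a minimal sketch, assuming a Linux host; the interface name is purely illustrative.

    # Minimal sketch (assumes Linux; "eth0" is a placeholder interface name).
    # Reads the NUMA node the NIC sits on, so VM vCPUs could be pinned to the
    # same node -- the tuning the speakers skipped, which they estimated would
    # have added roughly another 30%.
    from pathlib import Path

    def nic_numa_node(iface: str) -> int:
        """Return the NUMA node of a network interface (-1 if not NUMA-aware)."""
        return int(Path(f"/sys/class/net/{iface}/device/numa_node").read_text())

    if __name__ == "__main__":
        print(nic_numa_node("eth0"))

Actually pinning the VM's vCPUs to that node is then a scheduler or flavor decision; per the talk, it was intentionally left out so the results reflect an untuned deployment.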