From the Mandalay Bay Convention Center in Las Vegas, it's theCUBE covering VMworld 2016, brought to you by VMware and its ecosystem sponsors.

Welcome back to theCUBE here at VMworld 2016. I'm Stu Miniman, joined by my co-host Keith Townsend. We're going to talk some networking here with a returning CUBE guest: Kevin Deierling, Vice President of Marketing at Mellanox. Thanks for joining us.

Great to be here.

All right, Kevin, we see you at a lot of different shows. The last time we caught up, I think, was at the Open Compute Project event, which is a little bit smaller, kind of a different crowd than what we're seeing here. Why don't you bring us up to date on some of the new announcements you've had in the weeks leading up to VMworld?

Sure, it's great to be here at VMworld. VMware is a big partner of ours; we've been supporting them for quite a while. What we announced here is that we now have complete inbox support for the entire next generation of our adapters, so that's 25, 50, and 100 gig. We say 25 is the new 10: it runs on the same infrastructure as 10 gigabit, and we're seeing that transition happen already. So we're really excited about that. We also announced that RoCE, the technology that's RDMA over Converged Ethernet, will be inbox with vSphere 6.5. That's really going to bring the VMware crowd up to parity with some of the other players in the hyperconverged infrastructure, software-defined storage, and software-defined data center space that already use RoCE. So we're excited about that.

All right, Kevin, explain to our audience why it's so important to have that inbox support and what it really means.

Yeah, for a long time we've had RoCE support through what's called the Partner Verified and Supported Program, or PVSP. Now that it's inbox, it's easy to install, you can just use it, and it's supported by VMware.
So if you have any issues, or you want to figure out how to use some of the advanced features for performance tuning, you can just pick up the phone and call VMware. It's inbox supported. They do a lot to validate the solutions, which means that when you plug it in, it's going to work.

Yeah, and for our audience that isn't familiar with RoCE, maybe you can explain. Of course, Mellanox comes from the HPC and InfiniBand background, some of the low-latency environments. We've been talking about iWARP and RoCE; I think I've been talking about them for a decade, so it's good to see this coming to fruition. What is it? Why is it so important?

Yeah, RoCE is really the way we've taken the heritage we have in HPC, high performance computing, and brought it to the Ethernet world. It stands for RDMA over Converged Ethernet. It allows you to bypass the kernel, so you get very, very low latency between VMs, and it doesn't matter where your application is running or where your storage is: you can connect to storage on another machine as if it were on your own machine. So it's very low latency, and best of all, it doesn't use the CPU, so the CPUs become available to run the workloads. Ultimately, when you buy server infrastructure or a storage platform, it's not about moving the data back and forth; it's about running workloads and applications. What we do is give the CPU and all of those cores back to the application, and we let you run more applications.

Okay, so that's a serious problem within the enterprise. As we get more and more bandwidth, it's really hard to utilize it at the server level. So you're telling me that something like Intel DPDK, which is designed to maximize Xeon processors for network utilization, means I no longer need those hacks? Is that the idea?

So RoCE is actually something I would say is complementary to DPDK. We actually love DPDK. We support DPDK.
We get the best DPDK performance in the world: we can do 93 gigabits per second using our DPDK drivers. It's really a different approach that allows user-space applications to access the network. There are lots of different ways to get the most bandwidth and efficiency out of the network. DPDK is one of them; RoCE is another. It depends on the workload. We see DPDK being deployed in telco environments, so when you're doing NFV workloads, people like to use DPDK. We love it. We partner with a company called 6WIND that developed the DPDK drivers for our adapters, and we have some nice advantages there. So it's not an either/or; it's really a both.

That's interesting. Some of the FUD I've heard is that NVMe, or SSDs in general, have blown away the network, so you have to have data locally on individual nodes. You're saying something different.

Absolutely, and it's a great point. NVMe is really a step above in terms of the performance you get out of traditional SAS and SATA, and it eliminates a whole bunch of software that used to be there for legacy reasons, to stay SCSI compatible. With NVMe, you get really great performance. The great thing is there's a new protocol called NVMe over Fabrics, and it runs over RoCE. With NVMe over Fabrics, you can look at the flash, whether it's in your box or another box, and you don't even know. You don't even care. It's a few tens of microseconds to go fetch data from another box and bring it back. In the old days with spinning disks, the hard disk drive latency was what really mattered: you had 10 milliseconds of seek time, so who cared what the network latency was? When you have something you can access in 20 microseconds, suddenly the network latency becomes important. What we've done with RoCE and NVMe over Fabrics is overcome that. With just three NVMe drives, we can get 100 gigabits per second. We can saturate the link.
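The arithmetic behind that latency point can be sketched quickly. The disk and NVMe figures are the round numbers quoted in the conversation; the fabric round-trip time is an assumed value for illustration, not from the interview.

```python
# Back-of-the-envelope arithmetic: with spinning disks the network was a
# rounding error in a remote read; with NVMe-class media it dominates.

HDD_ACCESS_US = 10_000   # ~10 ms for a hard disk seek (quoted above)
NVME_ACCESS_US = 20      # ~20 µs for a local NVMe read (quoted above)
NETWORK_US = 10          # assumed tens-of-µs RDMA fabric round trip

def network_share(media_us: float, net_us: float) -> float:
    """Fraction of a remote read spent waiting on the network."""
    return net_us / (media_us + net_us)

print(f"HDD era:  network is {network_share(HDD_ACCESS_US, NETWORK_US):.2%} of a remote read")
print(f"NVMe era: network is {network_share(NVME_ACCESS_US, NETWORK_US):.2%} of a remote read")
```

With a disk, the network is roughly a tenth of a percent of the total; with NVMe it is about a third, which is why kernel-bypass transports like RoCE suddenly matter for storage.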
So hey, Kevin, I've heard the Fibre Channel guys saying that NVMe is going to be a great use case for them. They've been talking about how AFAs are using a lot of Fibre Channel. What's your take on that?

Yeah. Right now there is no standard for NVMe over Fibre Channel. They're working on it; they're going to try to get it done. But it doesn't eliminate all of the excess software: Fibre Channel at the end of the day is encapsulated SCSI. And there is no Fibre Channel in the cloud, as we always say. All of our customers are using Ethernet or InfiniBand in the cloud. With the hyperscale customers we have, most people don't realize what our market share is in Ethernet. And I'm talking about high performance Ethernet, greater than 10 gig. Really, we're the dominant player there, and that's been driven by the hyperscale guys, and they're using it for storage. For example, Microsoft is using SMB Direct; everything they've done is using that. They've talked about using RoCE at 40 gigabits per second in their Azure public cloud platform, and now they're enabling that with SMB Direct. We were at Flash Memory Summit recently, again showing something like 160 gigabits per second out of a single server with Microsoft Storage Spaces Direct.

So with something like vSAN, how does RoCE integrate with vSAN and make my vSAN run faster?

Yeah, that's a great question. The announcement we made was actually VM to VM, so with VMware and vSphere we're going to be talking virtual machine to virtual machine. VMware will be the first to enable that. For vSAN, stay tuned. We've shown some demonstrations that we can do some neat things when we enable vSAN to run over RoCE, but that's something they haven't yet announced.

Yeah, and is it limited to vSAN? What about some of the other hyperconverged players? I know Mellanox was at the Nutanix conference. If they're using VMware, I guess that would work. What about other hypervisors?
Yeah, the great thing about Mellanox is we build the best performing Ethernet solutions in the world: switches, adapters, and cables. So oftentimes when we're engaged with partners like Nutanix (like all of the guys that are here; I was going to say PernixData, but that's now part of Nutanix), we can run over TCP/IP and you get the benefit of faster networks. 25 is the new 10, and we say faster storage needs faster networks: a single NVMe drive can saturate a 25 gig link. We can do that over TCP/IP. Then, above and beyond that, when you enable RoCE, we get more efficiency. The performance is the same whether we run over TCP/IP or over RoCE, but we give all of the CPU cores back to the applications when we run over RoCE. So today, with many of those hyperconverged infrastructure players, just to get the performance and the bandwidth out of the box, we use 25 gig with TCP/IP, and they love that.

That's pretty amazing, because when I read up on DPDK, you could dedicate a certain number of cores to improve network performance. So you're saying RoCE just eliminates the need to use any of these high-priced Xeon cores for that?

Exactly right. If you look at it, the most expensive portion of your server infrastructure isn't the chassis; it's the CPU and the memory subsystem. If you're using that to move data, then you've taken the most expensive part of your capital investment and devoted it just to moving the data. Presumably you have an application that you're trying to run, whether that's a database, analytics, a telco application, a load balancer, whatever that might be. What RoCE does is give you all of the data movement and all of the performance while freeing up the cores you would have had to dedicate to moving the data, and it allows you to run the application.
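The "single NVMe drive can saturate a 25 gig link" claim above checks out arithmetically. The drive throughput used here (~3.2 GB/s sequential read) is an assumed ballpark for a high-end 2016-era NVMe drive, not a figure from the interview.

```python
# Sanity check: can one NVMe drive fill a 25 GbE link?

LINK_GBPS = 25.0            # 25 GbE line rate, gigabits per second
DRIVE_GB_PER_S = 3.2        # assumed NVMe sequential read, gigabytes per second

drive_gbps = DRIVE_GB_PER_S * 8   # convert gigabytes/s to gigabits/s
print(f"Drive supplies ~{drive_gbps:.1f} Gb/s against a {LINK_GBPS:.0f} Gb/s link")
print("Link saturated:", drive_gbps >= LINK_GBPS)
```

The same conversion explains the earlier "three drives saturate 100 gig" remark: three such drives supply roughly 76 Gb/s of raw sequential bandwidth, and faster drives or mixed queue depths close the rest of the gap.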
All right, Kevin, what about the whole SDN discussion? Of course NSX is one of the centerpieces of what VMware is talking about at the show here. How do you interact with that? What do you see at the show about the networking participation?

Yeah, it's really fascinating. If you look at the history of VMware, the first thing they did was virtualize the computer, and we added capabilities into our Ethernet solutions that actually accelerate that virtualization, things like SR-IOV. Then you take the next step with the Nicira acquisition they did, what's now become NSX: they virtualized the network, and we've now accelerated that as well. So things like overlay networks, we've accelerated, and if you look at the next generation, which is the open virtual switch, we've accelerated all of that. We have a great partner in a company called Cumulus whose software runs on our Ethernet switches. We also have our own software, as well as OpenSwitch from HPE and the Microsoft SONiC solution. So we have multiple software stacks that run on our Ethernet switches, but really our partner for NSX, for the SDN controller side, is Cumulus. They support not just NSX but a whole class of different SDN partners that we work with. So now we've virtualized the network: you not only have software-defined storage, you have software-defined networking. Really great stuff.

So, service providers: let's talk about NFV. What's the play when you're talking about quickly deploying load balancers, layer 3 devices? How does RoCE help in that sense?

Yeah, there we actually have another company that we acquired recently called EZchip, and EZchip makes a network processor. The latest one is called the NPS, and it's great for all of those functions. It's really an intelligent networking solution; all of the edge routers out there today use it.
We've announced new products that combine some of the EZchip technologies with our adapter technologies. We announced a product called BlueField, which combines the networking with the processing capabilities. So now we have intelligent networks, and they're really designed to address the problems associated with NFV. For all of those workloads, instead of having the most expensive part of your server sitting there just dropping packets, we can do that in the network adapter. So if you're building a load balancer or a firewall, all of the intelligence can sit up in NSX, and then you just tell us to accelerate that. If you go back in history, the very first Cisco routers actually forwarded packets in software running on a MIPS processor. Today there's not a switch in the world, certainly not ours, that's forwarding packets in the data path using software. We see the same thing happening now with the hypervisors that VMware has defined and all these virtual switches: the first generation is done in software, but you take a performance penalty in terms of throughput and CPU utilization. We build accelerators that get you back to bare metal speed, so you get all the benefits of virtualization without the penalty of consuming the most expensive resource in your system.

Kevin, why don't you share any great customer stories you've had, interactions, things that are surprising you at this show, for those that haven't been able to walk the floor?

Sure. We've got a lot of customers here. If you go out on the floor, you can see us all over the place doing NVMe over Fabrics. We did a great announcement earlier with a partner called Mangstor. They did a SQL Server demonstration where they were showing the TPC-H benchmark, and again, they were running it over NVMe over Fabrics using RoCE.
And because they got so much more bandwidth without using the CPU cores, the cores were then available to run the benchmark workload, and they got great results. It was really impressive. And this is in a virtualized environment: normally when you think about databases and some of the big data workloads, you think about them running on bare metal. We're enabling them to run in a virtualized environment without any penalties. That was one of the exciting things we had here.

So the obvious use cases are low-latency applications. What are some of the surprises customers are finding once they get the solution in-house? What are the expanded use cases?

Yeah, I think it's a great question. One of the things we always face is people saying, well, why do I need 25 or 50 or 100 gig? I don't run high performance applications. It's surprising, but we show benefits across all applications. People don't realize it, but how long it takes to access their data determines how many jobs they can run on the same infrastructure. We call it total infrastructure efficiency: you're actually getting more out of your storage and your servers. That can be VDI, SQL Server, analytics, Oracle. Across the board, we see benefits. SAP, SAP HANA, for example; all of those things will benefit. So the workloads span all industries. Our largest customers are the hyperscale guys; the largest cloud data centers and public clouds in the world run on our infrastructure. For them it's really important, because if you think about what they're selling, they're selling workloads. They're selling virtual machines. If they can get more out of their infrastructure, they have more to sell, so it drops right to the bottom line of their profitability.
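That "more to sell" argument reduces to simple arithmetic: cores that were pinned to moving data become capacity a cloud provider can rent out. Both core counts below are assumptions chosen for illustration, not figures from the interview.

```python
# Illustrative "total infrastructure efficiency" arithmetic: RDMA offload
# returns the cores the TCP/IP stack was consuming to the application.

CORES_PER_SERVER = 24     # assumed dual-socket server
CORES_MOVING_DATA = 4     # assumed cores consumed by software packet processing

sellable_before = CORES_PER_SERVER - CORES_MOVING_DATA
sellable_after = CORES_PER_SERVER   # offload hands all cores back

gain = (sellable_after - sellable_before) / sellable_before
print(f"Sellable cores per server: {sellable_before} -> {sellable_after} (+{gain:.0%})")
```

Under these assumptions a provider gets 20% more sellable capacity per server with no new hardware, which is the profitability point being made.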
So they have really been on the vanguard of adopting the latest, greatest technologies that we offer. And now others are saying, hey, we want the same agility and efficiency that the hyperscale public cloud vendors get, but we want it in our own private data center. So now, with the announcements we've made with VMware enabling RoCE, and being able to deliver that on 25, 50, and 100 gig, we can give you the same efficiency and agility in a private cloud environment that you get in the public cloud.

Any special relationship with SAP HANA or Spark?

Absolutely. SAP HANA, absolutely; we work closely with them. In fact, you can go look: they say you need an RDMA capability like RoCE for SAP HANA. With an in-memory database, you can imagine that moving data efficiently and quickly, with low latency between the nodes, is important. So they actually specify that it's RoCE capable. On Spark, there are a lot of great things coming soon; we'll have some more announcements about that.

All right, Kevin Deierling, really appreciate all the updates; really a culmination of a lot of long work that's gone into the networking industry. We know these changes take many years, and it's great to see the proof in the pudding in terms of what's happening there. We're wrapping up day two here at VMworld 2016. You've been watching theCUBE.