from San Jose, in the heart of Silicon Valley, extracting the signal from the noise. It's theCUBE, covering OCP U.S. Summit 2016, brought to you by OCP. Now your hosts, Jeff Frick and Stu Miniman.

Hey, welcome back everybody. Jeff Frick here with theCUBE. We are live in San Jose, California at the Open Compute Project Summit 2016. Our third year being here, and like we said, this is where the cloud is. This is the hardware that runs the cloud, and we're excited to be back. The show is growing, there's a really great vibe. Here with Stu Miniman from Wikibon, and we're excited to welcome our next guest, Kevin Deierling, Vice President of Marketing for Mellanox. Welcome, Kevin.

Thank you, it's great to be here. We've also been at OCP for a few years, so it's really great to be here.

Yeah, you said you've been coming for a long time. Give us your perspective as someone who's been here for a while. How has the show changed, how has it morphed, how does the community seem to be growing?

Sure, so it's nice that it's been five years now. We're actually one of the leading providers of Ethernet equipment, so we've made multiple contributions over the years. We did three contributions this year of our OCP cards. What's really nice to see is the way it's developed: now we see a lot of end users, so we see the telco guys here for the first time. We saw Google join, and we see all the big guys that we're working with already, so it's really nice to see the way it's developed.

And you had a ton of announcements, right? So I want to run through some of the top-level announcements you guys are excited to share this week.

Yeah, so we did the announcements of our contributions that I talked about, of the adapters. We also announced a partnership with Cumulus, so Cumulus Networks is running their network operating system on top of our switch.
And we did a broader announcement that really encompassed Microsoft's new SONiC network operating system running on our switch, as well as what we call open composable networks. And so that's open platforms, open network operating systems, standard APIs, and really all of the management and automation to go along with that.

So Kevin, having been here a couple of years, it must be kind of gratifying to come in and see, I mean, networking's front and center. It helps to have somebody like Microsoft coming up in the keynote and laying out the stack. Talk about what Mellanox saw, why it's so important that the networking piece gets taken care of as part of OCP.

Yeah, so I think what's really interesting is we're seeing a change from a compute-centric infrastructure to a network-centric infrastructure. And if you look at the platform that Facebook's showing right here in their booth, there are actually four servers in there, and they're sharing multi-host NICs. So they have one of our NICs, and it's actually being shared by all four servers. So you can have 100 gig or 50 gig multi-host. We also have 25 gig, but it's really changed. It's turned things upside down. Microsoft really spoke to that when they were talking about their network operating system and having open platforms and the flexibility that gives to allow people to innovate.

Okay, how about the switching? Mellanox, of course, has the history with InfiniBand, and a big push on the Ethernet side over the last few years. Give us the update as to where that is, and especially for OCP.

Sure, so we came from that HPC InfiniBand heritage. We really are the leading provider in that space. We've been able to leverage that and take all the core technologies to Ethernet. Today, in the greater-than-10-gigabits-per-second space, we are the dominant supplier of Ethernet NICs. I don't think that's well recognized. Really, 40 gig is where the dominant share is today; that's mostly what's shipping.
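The multi-host arrangement described above is simple to sketch: one NIC's bandwidth presented to several servers. The figures follow the conversation (four servers sharing one 100 GbE NIC), but the even split is a simplifying assumption; real multi-host NICs can allocate bandwidth unevenly.

```python
# Sketch of multi-host NIC sharing: one physical NIC serving several
# servers. Four servers on one 100 GbE NIC matches the Facebook platform
# described above; the even split is a simplifying assumption.

NIC_GBPS = 100
SERVERS = 4

per_server = NIC_GBPS / SERVERS
print(f"Each of {SERVERS} servers gets up to {per_server:.0f} Gb/s "
      f"from one shared {NIC_GBPS} GbE NIC")
```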
But I think we're right on the cusp of the largest network infrastructure upgrade cycle in my career, okay? So this 25, 50, and 100 is going to be huge. Everybody says 25 is the new 10. And so on the adapter side, this is big. I think people don't realize what's happening. There are tons of drivers that are going to make that happen.

Yeah, I mean, Kevin, one of the things that surprised me at the show, I mean, Facebook basically said, you know, 100 gig, we're going to roll it out. It's going to be ubiquitous in our environments, I think by next January. And I think it was JR that actually laid out for us some of the pricing on that, which is impressive, because that's usually what's held us back on some of these big steps in networking, you know? Whether it's the optics, the cabling has to change, you know? There are so many pieces of that hardware stack, and cost usually holds us back. So why is this next wave ready to launch?

Yeah, so I think one of the key things is storage. If you look at these flash drives, a single NVMe flash drive can saturate a 25 gig link. So if you've got an NVMe flash drive in your server, you're throwing two thirds of it away if you're running on 10 gig. And the infrastructure upgrade here is really pretty transparent. Usually the difficult thing is the fiber that you've laid in your data center, but 25 gig we can run over the standard fibers. So whether you're using multi-mode or single-mode fiber, you don't have to rip and replace. You just swap out a transceiver, and you've got the infrastructure there. We're pricing our 25 gig gear, both switches and adapters, at a slight premium to 10, but very close. And 100, you know, it's a slight premium again to the 40 gig, but you can move from 10 to 25, from 40 to 100, with relatively little cost and get a huge two-and-a-half-X performance delta. So it's a big deal.
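The arithmetic behind those claims can be sketched with rough numbers. The NVMe drive throughput used here (~3 GB/s, typical of 2016-era PCIe 3.0 x4 drives) is an illustrative assumption, not a figure from the conversation; the exact fraction "thrown away" on 10 gig depends on the drive.

```python
# Rough sketch of the link-saturation math discussed above. The NVMe
# throughput figure (~3 GB/s sequential) is an illustrative assumption.

NVME_GBPS = 3.0 * 8          # ~3 GB/s sequential read -> ~24 Gb/s
LINKS_GBPS = [10, 25, 40, 100]

for link in LINKS_GBPS:
    # Fraction of the drive's bandwidth a single link can carry.
    usable = min(link / NVME_GBPS, 1.0)
    print(f"{link:>3} GbE link: {usable:.0%} of one NVMe drive's bandwidth")

# A 10 GbE link carries less than half of one such drive, while 25 GbE
# can carry it all. The 10-to-25 and 40-to-100 steps are both the
# "two-and-a-half-X" delta mentioned above.
print(25 / 10, 100 / 40)     # both 2.5
```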
So, you know, usually in this space, it's not just the speeds and feeds, but let's get the benchmarks done. Let's really, you know, go head to head on that. What have you guys done to show leadership in this space?

Yeah, so the area where we're really growing now is on the Ethernet switch side. We just published a report yesterday, the Tolly report, and that really goes down into some details about what happens when you're sending traffic across a switch, and we call it zero packet loss. We just don't drop packets. As you can imagine, if you're dropping packets in your network that you don't need to, because of some ASIC limitation, that's really bad at 25 and 50 and 100 gigabits per second. It's bad at 10 gig, but it really gets bad when you're sending massive amounts of data.

Yeah, so you did a session yesterday with Netflix, and of course, you know, in this space, Netflix is one of the ones we hear talk a lot, because they're doing some really cool things. You know, 100 gig Ethernet, you know, content delivery. You know, I've gone through and watched the new House of Cards season already, and thank you, I only had one issue from a network standpoint. So can you speak to, you know, what's Netflix seeing, what are they doing to move forward, and what's going to trickle down to the rest of the customers soon?

So I think what they recognized is that in this environment where you've got all kinds of users, hundreds of thousands of users, and you need to scale it very quickly, what they did is they said, for our content distribution network, we're going to go to 100 gig, okay? They're using a bunch of these flash drives, and they've deployed that in all the telcos and in the co-location facilities. So if you're using Comcast or Verizon or AT&T, then why would you go back?
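The earlier point about zero packet loss can be illustrated with the classic Mathis et al. approximation for steady-state TCP throughput, roughly MSS/(RTT·√p): even small drop rates cap what a flow can achieve, and the cap bites hardest at high link speeds. The MSS and RTT values below are assumptions typical of a data-center fabric, not figures from the conversation.

```python
import math

# Illustrative sketch: the Mathis approximation for TCP throughput,
# ~ (MSS / RTT) * sqrt(1.5 / p), shows why drops from ASIC buffer
# limits matter more at 25/50/100 gig. MSS and RTT are assumptions.

MSS_BYTES = 1460        # standard Ethernet MSS
RTT_SECONDS = 100e-6    # ~100 microseconds intra-data-center RTT

def mathis_throughput_gbps(loss_rate: float) -> float:
    """Approximate achievable TCP throughput in Gb/s for a given loss rate."""
    bits_per_sec = (MSS_BYTES * 8 / RTT_SECONDS) * math.sqrt(1.5 / loss_rate)
    return bits_per_sec / 1e9

for p in (1e-3, 1e-5, 1e-7):
    print(f"loss {p:.0e}: ~{mathis_throughput_gbps(p):.1f} Gb/s ceiling")
```

Under these assumptions, a loss rate of one packet in a thousand caps a single flow well below even 10 gig, while a drop rate low enough not to matter at 10 gig can still be the bottleneck at 100 gig.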
Their home base is actually hosted on Amazon, and a lot of people know that about Netflix, that they're hosting on AWS, but the data itself is distributed throughout the world, and now they're expanding throughout the world, and it really solved the problem. They can get, you know, much better scalability with our 100 gigabit Ethernet. That was a classic case. It was really good to have them up there on stage with us talking about the use case, because oftentimes we don't get enough end users; there are too many of us suppliers talking about the performance of our networks and things like that. It was great to have a use case that people can relate to, because I also watch Netflix and House of Cards. I'm not a binge watcher, but...

Yeah, so video is one of the use cases that, of course, is pushing really hard on the bandwidth. What other things, like the Internet of Things, are putting stresses on the network, and what's Mellanox doing to look at the application side of things?

Yeah, so I think there are a ton of things that are going to be generating data. People don't even realize that when they're using their phone, whether they're using Waze or Apple Maps, they're generating a ton of data, but video is one of the huge drivers. I mean, you guys here, here we are shooting a video, and this is going to get streamed out. I got back from the session last night, and on Twitter somebody had done a Periscope, and those feeds are already up there. So people are using their phones, they're driving a ton of data without even realizing that they're generating massive amounts of data. All that data needs to get to a data center, be processed, and then redistributed, and all that big data and faster storage that's out there, faster storage, needs faster networks.
One interesting aspect: that goes hand in hand with flash too, because we talk about the significant impacts of flash, not just for low-latency, high-value applications, which traditionally were the only ones you could afford it for, but really moving it into other applications, where now you can do things you just couldn't do before.

Exactly, so I think if you look at real-time analytics, we've gone from being able to process sort of batch analytics and make business decisions, to, all of a sudden, when somebody is searching the web and you want to feed them content that's relevant to them, that they're likely to buy, so advertising or promoted content, you need to make those decisions in real time. You need to do the analytics. So having that data on a flash system matters, and we're seeing the crossover right now happening between flash and hard disk drives. If you look at the overall total cost of ownership, you're actually spinning rust with motors and magnetics, and shaking and vibration wears it out, while flash really has a nice life cycle. So if you look at the total cost of ownership in terms of the power and everything, we've seen the crossover happening right now. All of a sudden, as flash is achieving parity with spinning disks, you're going to have much faster storage, and you can start doing things that weren't possible just a few years ago.

So Kevin, as we look at the disaggregation of what's happening in the stack, and there's also open source heavily involved, how does a company like Mellanox both stay involved in so many pieces and differentiate and make money at this? I mean, look at something like Microsoft SONiC. It's like, wait, you might have sold not only the switch, but you guys sell an OS that goes on top of that, and now there's Cumulus and others, so how do you guys take advantage of that?
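The total-cost-of-ownership crossover described above can be sketched as capital cost plus lifetime power cost per terabyte. Every number below is a hypothetical placeholder chosen only to show the shape of the comparison; real TCO models include many more terms (cooling, failure rates, rack space, replacement cycles).

```python
# Toy TCO sketch for the flash-vs-disk crossover discussed above.
# All inputs are hypothetical placeholders, not data from the interview.

def tco_per_tb(capex_per_tb, watts_per_tb, years, dollars_per_watt_year):
    """Capital cost plus power cost per usable terabyte over a lifetime."""
    return capex_per_tb + watts_per_tb * dollars_per_watt_year * years

# Hypothetical inputs: flash costs more up front but draws less power,
# and (per the discussion) has no motors or vibration to wear it out.
hdd = tco_per_tb(capex_per_tb=30.0, watts_per_tb=8.0,
                 years=4, dollars_per_watt_year=2.0)
ssd = tco_per_tb(capex_per_tb=80.0, watts_per_tb=2.0,
                 years=4, dollars_per_watt_year=2.0)

print(f"HDD ~${hdd:.0f}/TB vs SSD ~${ssd:.0f}/TB over 4 years")
```

With these placeholder inputs the two land within a few percent of each other, which is the "crossover" point the discussion describes: once lifetime costs reach parity, the much faster medium wins.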
So you said disaggregation, and you hear that a lot, where you're taking the software and the hardware apart, and we were talking about how open composable networks is how you put it all back together. And so that's a good example where you can take the best hardware platforms and the best software platforms, and there is no one best software platform; it depends on what you want to do. And so we see lots of opportunities here where, whether it's Microsoft SONiC or Metaswitch or Cumulus Networks or our own, or HPE's OpenSwitch initiative, those are all running in our booth, and you can put them together to meet your use cases, and it really gives the end user, whoever that is, the agility to innovate on top of that platform. We call that open composable networks: you can put the pieces together, compose them the way you want, and develop very quickly. So if you need some specific thing that only one of those vendors has, you have the freedom to go make that choice and build on top of it.

Kevin, what's the next big hurdle for you guys to overcome? You know, when you come back next year, what are you working on over the next six, 12 months that's kind of another breakthrough opportunity as we continue to move down this path to faster compute, better storage and faster networking?

Yeah, so I think, you know, one of the areas that we've really been driving is our silicon photonics, which is about being able to drive this data very, very cost-effectively for long distances. So that's related to the fiber. That's an area where today we're shipping at 100 gig. We know that we're not going to stop there. So we know that there's 200 gig and 400 gig right on the horizon; 400 gig's been defined. You know, if you look at the way things are evolving with these new open platforms, OCP for example, you don't always have to go back to the standards organizations.
I was talking yesterday about how, when we went to do 25 and 50 gig, the IEEE actually initially said no, we don't want to define that. There were a lot of big companies involved that probably didn't want it, and they were resisting, trying to slow it down in the IEEE. So five of us got together and formed our own consortium: ourselves, Microsoft, Google, Broadcom and Arista formed the 25 gig consortium. And we just said, we'll build interoperable solutions. Today you see them; they're hitting the market. So that's the big push. Like I said, I think this is the biggest infrastructure rollout for networking in my career. That's 25 gig.

Kevin, what about the competitive landscape? You know, we see a lot of the component players here. I did bump into somebody from Cisco that was at the show, so they do have involvement. Broadcom, yourselves. Give our viewers a sense of how you guys look at the competitive landscape.

Yeah, so I think one of the things that we think is interesting, if you look at Microsoft, they're one of the largest public cloud providers in the world. What you're seeing is people want to deploy private cloud installations as well. And Microsoft is really well positioned to take what they have with their Azure public cloud and also deliver that into the enterprise. So people are going to want to see hybrid cloud solutions, and really we're looking forward to that. I think it's public that they're using some of our RoCE gear; they've talked about that at 40 gig. And so we see, looking forward, that people are going to want to do hybrid cloud: take the technologies where we've really driven our business in the public cloud and move that into the enterprise and the private cloud. I think that's a huge growth opportunity.
I think with the traditional enterprise opportunities, those businesses like Fibre Channel, which is just slowly declining over time, people are going to adopt the new nimble, agile open platforms.

One last thing, Kevin, before we let you go. It's great insight. Consortia versus open source. You guys are playing in both. Consortia used to be kind of the only way, and then came the huge rise of open source. How do those things fit together? What are the advantages and disadvantages of the two approaches? And how have you guys been able to integrate both to really move the technology down the road?

It's a great question. So I think both are important. You have to do both. If you look at what happened with the 25 gig, we could move very quickly. We just made something happen. We said, hey, we're going to publish this and we'll go make it happen. But at the end of the day, the IEEE came back and said, wait, we do want to actually support this. And so now there is a workgroup within the IEEE, and they're defining all of the standards at a very detailed level. So I think at one level, you can come into an organization like OCP and we can make things happen. You saw Google come in yesterday and talk about the new form factors, so shorter racks, 48-volt power supplies. They can make something like that happen very quickly within an organization. And then you actually need to go flesh out all the details. And so I think both the consortia as well as the open source are good.

Yeah, interesting times, great times. I guess eventually we're just going to run into the speed of light. We're not quite there yet, but we'll keep moving closer.

There you go. If we can fix that, then I've got a startup idea.

All right, Kevin, well thanks for spending a few minutes with us on theCUBE. Thanks again for sponsoring and being here. Obviously, you guys are a gold sponsor for the show as well, an important presence here at the Open Compute Project. Thanks for stopping by.

Absolutely.
Super, appreciate it, great to be here.

Jeff Frick here with Stu Miniman and Kevin Deierling from Mellanox. We are at the Open Compute Project Summit 2016 in the heart of Silicon Valley. We'll be here all day, wall-to-wall coverage. Keep an eye out. We'll be back with our next guest after this short break. Thanks for watching.