Live from Las Vegas, Nevada, it's theCUBE at HP Discover 2014, brought to you by HP. Okay, welcome back. We're live in Las Vegas, we're here at HP Discover. This is theCUBE, our flagship program. We go out to the events and extract the signal from the noise. I'm John Furrier, the founder of SiliconANGLE, joined by my co-host Dave Vellante, co-founder and chief analyst at Wikibon.org. Our next guest is Greg Scherer, VP of strategy at QLogic. He's a CUBE alumni, he's been on before. Welcome back to theCUBE. It's a pleasure to be here. Great to see you again. So, day one of three days of coverage. Quickly, what's your take on some of the announcements from HP? Well, we had a hand in several of them, in particular the 20 gig offerings that HP has announced and that are now available, so we're very excited about that. Obviously I'm going to put that front and center. Yeah, go ahead, talk to us about it. What's so exciting about it? Well, you know, you hear an awful lot about faster and faster speeds. This whole Flex-20, as some folks have called it, from the switch module standpoint, the Virtual Connect F8 module, what comes out of the chassis is really a tremendous amount of bandwidth. We have 240 gigabits worth of uplink bandwidth that comes up out of the chassis, and all of it is either standard, IEEE-standard 40 gigabit Ethernet, or 10 gig uplinks, or FlexPorts that can actually be eight gigabit Fibre Channel as well. So it's a tremendously flexible fabric interconnect. The interesting part inside the chassis, though, is that we take two lanes of 10 gigabit and hardware-bond them together, so that from the fabric down to each one of the blades there's actually 20 gigabits worth of performance per blade. If you look at what a lot of blade chassis are used for, which is virtualization, that additional bandwidth lets you practically double the number of VMs per blade. And it's not teaming; it's not taking two ports and using a software teaming structure to pick which port to send your stream on. It's a hardware 20 gigabit port that just happens to be spread over two lanes of 10 gig. Not unlike 40 gig, which uses four lanes of 10 gig; logically you can think of it as half of 40 gig. It's very exciting because you save a tremendous amount of CPU usage compared to two ports plus teaming software. Teaming software has to hash the stream; it hashes the first N bytes of the header to decide, from an affinity standpoint, which path all of that hashed traffic is going to take. So it's CPU intensive, and you never get a perfect hash because traffic is never perfectly distributed. It's a little sloppy. Yeah. So you might get 1.2 to 1.5x the speed with a software team. On top of that, any one stream can't go faster than a single 10 gig port. Here you have a 20 gigabit channel, so any one stream can go up to 20 gigabits. So this is huge for virtualization, where utilization keeps going up and up. It's not like five or seven years ago when we had a bunch of free capacity to throw at this problem, right?
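To make the teaming comparison above concrete, here is a minimal sketch, assuming a two-port team and a CRC-based flow hash, of how software teaming typically pins each stream to one member port. It is illustrative only, not QLogic's or HP's actual implementation.

```python
# Minimal sketch of flow-hash port selection in a two-port software team.
# Illustrative only -- not QLogic's or HP's teaming implementation.
import zlib

TEAM_PORTS = [0, 1]  # two 10 Gb member ports (assumed)

def select_port(src_ip: str, dst_ip: str, src_port: int, dst_port: int, proto: int) -> int:
    """Hash the flow's 5-tuple so every packet of one stream uses the same member port."""
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{proto}".encode()
    return TEAM_PORTS[zlib.crc32(key) % len(TEAM_PORTS)]

# A single TCP stream always maps to one member port, so it can never exceed
# that port's 10 Gb line rate -- unlike a hardware-bonded 20 Gb channel.
print(select_port("10.0.0.1", "10.0.0.2", 49152, 443, 6))  # one stream -> one port
print(select_port("10.0.0.1", "10.0.0.2", 49153, 443, 6))  # another stream may land elsewhere
```

Because every packet of a stream hashes to the same member port, a single stream tops out at that port's line rate, and the per-packet hashing is part of the CPU cost described above.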
The other piece that's kind of nice, though: there are a number of solutions out there that are, say, quad-port 10 gig, so that you could take two ports of 10 gig to each of an A side and a B side from a fabric standpoint. But when you do that, even if you're not using teaming, the disadvantages are that any single stream is limited to 10 gigabits of performance, and on top of that, because you have separate streams coming in on separate adapters or separate ports, you use more CPU servicing them than you do with one large pipe. From an interrupt-affinity and processor-affinity standpoint, we've measured somewhere in the neighborhood of 35% less CPU utilization when you fully occupy one 20 gig pipe instead of two 10 gig pipes, and that's not even counting the teaming software. So again, to your point, for virtualization that's huge, because instead of using those cycles to drive your IO, you want to use them to drive more VMs. That's where the whole ROI comes in.

So you came over to QLogic fairly recently. Sad news out of QLogic this week: the former CEO and chairman, H.K. Desai, passed away suddenly. He was quite an individual. John, you actually met him. If you remember, back in October 2010 on our way to Barcelona, you and I stopped in, did a drive-by. I gave that little talk to the analysts, and he was there. That was really the only time I'd met him; I'd seen him speak a number of times. But he was kind of a legend in that world, wasn't he? Very much. You know, H.K. was a great guy, very much a technical visionary, but he also had such a personal touch and a personal stake in the business. I think it's just very much a shock for those of us who knew him and spent time with him. He was a very warm man as well as a very technology-driven guy. QLogic spun out of Emulex in 1994, and I remember H.K. did something that I thought was so poignant. At that point in time there were a number of analysts who were kind of needling him during one of his first public conference calls, saying, so what do you think? Do you think you're going to take market share from that company you came from, that you're going to beat them all over the map, and so forth? And H.K. stopped, looked down, looked up, looked them square in the eye and said, in my country, we don't speak ill of our parents. And he set the tone for a very gracious relationship across the competitive landscape, and that's how it's been to this day. I really attribute a lot of that, most of that, to H.K.'s graciousness. Well, he went for it, right? He made the call to go after Fibre Channel and to go after it hard, at a time when a lot of people questioned that decision, right? Yes. And it obviously worked out well. He built a great asset there. So I think from all of us inside QLogic, our point of view is that the best way to honor H.K.'s memory is to drive the company forward and to be successful, as a way to honor who he was and who he is to those of us who knew him.

So what are some of the big initiatives you're working on these days? Well, certainly the one we just talked about, the 20 gigabit, is something we're very excited about. I can't really talk about unreleased products, but you can probably guess we have a lot faster speeds coming. We're also working very closely with HP on other initiatives. Something that released fairly recently is switch-independent NIC partitioning, which works in both rack and blade environments. Explain that, yeah. So the whole concept is that it takes a standard network adapter, including storage, so a CNA, you can think of it as a full CNA with both iSCSI and FCoE as well as network partitions, or just a NIC, and instead of having, say, one 10 gig pipe or two 10 gig pipes, we can divide that up into multiple separate virtual NICs. In our current product offering that's eight different partitions. The thing about this, and it's not completely unlike, say, Flex-10 from that standpoint, is that it allows a lot of flexibility in partitioning that fat 10 gig pipe into manageable sizes, and then applications can take advantage of that. A lot of virtualization environments want a dedicated NIC for management, or a dedicated NIC for vMotion or VM migration. So in that respect, in many, many environments, NIC partitioning for all practical purposes looks like you've plugged in up to eight separate adapters, but they're managed as if they're one, and you don't have the cable sprawl of eight different cables coming out of the back of your computer.
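As a rough picture of what switch-independent NIC partitioning looks like from the host side, here is a hypothetical configuration sketch: one physical port carved into several partitions with different personalities and minimum-bandwidth weights. The partition names, personalities, and limits are invented for illustration and do not reflect the actual QLogic or HP management tooling.

```python
# Hypothetical sketch of switch-independent NIC partitioning (NPAR).
# Partition names, personalities, and limits are illustrative assumptions only.
from dataclasses import dataclass

MAX_PARTITIONS = 8  # the current offering discussed above exposes eight partitions per port

@dataclass
class Partition:
    name: str
    personality: str        # "NIC", "iSCSI", or "FCoE"
    min_bandwidth_pct: int  # guaranteed share of the physical 10/20 Gb port

def validate(partitions):
    """Basic sanity checks a management tool might apply to a partition set."""
    if len(partitions) > MAX_PARTITIONS:
        raise ValueError(f"at most {MAX_PARTITIONS} partitions per physical port")
    if sum(p.min_bandwidth_pct for p in partitions) > 100:
        raise ValueError("minimum-bandwidth weights cannot exceed 100%")

# One physical port presented to the OS as several 'adapters': management,
# vMotion, VM data, and an FCoE storage partition -- but only one cable.
port1 = [
    Partition("mgmt",    "NIC",  10),
    Partition("vmotion", "NIC",  20),
    Partition("vm-data", "NIC",  40),
    Partition("storage", "FCoE", 30),
]
validate(port1)
print(f"{len(port1)} partitions configured on one physical port")
```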
So, Greg, I'd love to get your opinion on this. We watch what's happening in the hyperscale space, and a lot of that is bleeding into the enterprise, so it's sort of a harbinger of things to come. What is the role of adapters specifically in the hyperscale environment today, and how will that evolve into the enterprise? Well, it's a great question, a loaded question. Yeah, it's kind of loaded, right? We're still seeing it evolve. Hyperscale, from my perspective, is really not one market but a number of different markets. There's a portion of the hyperscale market that maps very well to enterprise workloads. Some people refer to it as private cloud or managed services, maybe off-premises data centers that large private cloud providers set up for the enterprise. Right, not Amazon, not Google, not Facebook. That's right. But some of the carriers are getting into this business as well. If you look at AT&T and Sprint, a lot of these folks are aiming some of their future business in this direction. I see their needs as very different from the Amazons and Googles, and I think there are distinct segments even among those companies. So there are lots of third-party cloud providers and off-premises data centers. And what we see more and more is that the need for very specific features and capabilities in IO is going up; the bar is being raised. Look at things like virtual networks and tunneling: just five years ago that tunneling landscape didn't even exist. The idea, and this really goes along with the whole notion of hypervisors, virtual servers, and multi-tenant operation, is being able to move workloads across very vast geographies. So instead of having to make sure you have two hosts in the same subnet and VLAN because they're in the same rack, we want to be able to spoof that and make them look like they're in the same subnet and VLAN even though they might be on separate continents. That's really the job of virtual networks, and the IO controller has a huge part to play in handling all the offloads with an outer header, a header that is now separate from the header of the inner packet. Literally what happens is the controller both adds that outer header and sequences each packet during transmit, and then likewise on receive it looks at the inner header to do things like hashing to figure out which queue the packet needs to go into, communicating with the vSwitch to make sure that all the offloads are still present, like CRC offload, large receive offload, all those sorts of things. So in many of these environments that are very applicable to hyperscale, IO is taking on a bigger and bigger role.
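To illustrate the tunneling offload just described, here is a simplified sketch of overlay encapsulation and inner-header queue steering. The toy header layout and queue count are assumptions for illustration, not a real VXLAN wire format or the controller's firmware behavior.

```python
# Simplified sketch of overlay (VXLAN-style) encapsulation on transmit and
# inner-header queue steering on receive. Header fields are placeholders.
import struct
import zlib

NUM_RX_QUEUES = 8  # assumed number of receive queues on the adapter

def encapsulate(inner_frame: bytes, vni: int, outer_src: int, outer_dst: int) -> bytes:
    """Transmit path: wrap the original frame in a toy 10-byte outer header
    carrying the virtual network ID, the way an overlay tunnel does."""
    outer = struct.pack("!IIH", outer_src, outer_dst, vni)
    return outer + inner_frame

def rx_queue_for(packet: bytes) -> int:
    """Receive path: skip the outer header and hash the start of the inner
    header so packets of one inner flow always land in the same queue."""
    inner = packet[10:]
    return zlib.crc32(inner[:12]) % NUM_RX_QUEUES

pkt = encapsulate(b"inner-ethernet-frame-bytes...", vni=5001,
                  outer_src=0x0A000001, outer_dst=0x0A000002)
print(rx_queue_for(pkt))
```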
So the data center hosting guys and the cloud service providers that are sort of enterprise-like, and you're obviously selling your product to them, are the hyperscale crowd. Five years ago we used to think they were just running software on commodity components, but now they're pushing the ODMs and others to get highly customized. Are you getting dragged into that conversation? We absolutely are. And you can't talk about it. Do you sell to the NSA? Let's talk about open source, this wild disruptor: OpenStack, OpenDaylight, OpenFlow, all the open source initiatives. How are they changing networking? It's amazing, even down to OpenFlow. Originally people looked at OpenFlow and thought, OpenFlow is something that happens out on the network, and then we need a controller, not an IO controller but a software controller, to manage the control plane for the switching. But now we have 10 gig and 40 gig controllers in the works that, in the not-so-distant future, will actively participate in the OpenFlow environment themselves, because they manage switching directly. If you look at the eSwitching and virtual Ethernet bridges and so forth that are built right into what we used to think of as a NIC, these are now highly intelligent adapters that have to look at the workflows, adapt to them, and obey switching rules, which means they need to participate in the SDN environment, OpenFlow 1.1, and interact with a lot of the other environments that you mentioned.
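To picture what it means for an adapter's embedded switch to participate in an OpenFlow-style environment, here is a toy match/action model: a controller pushes flow rules down, and the eSwitch matches each frame against them to decide where it goes. The rule format and actions are invented for illustration; this is not the OpenFlow 1.1 protocol or any shipping adapter firmware.

```python
# Toy model of OpenFlow-style match/action rules as an adapter's embedded
# switch (eSwitch) might apply them. Not the OpenFlow wire protocol.
from dataclasses import dataclass, field

@dataclass
class FlowRule:
    match: dict       # e.g. {"dst_mac": "...", "vlan": 10}
    action: str       # e.g. "to_vm_queue:3", "uplink", "drop"
    priority: int = 0

@dataclass
class ESwitch:
    rules: list = field(default_factory=list)

    def install(self, rule: FlowRule) -> None:
        """Called when the SDN controller pushes a rule down to the adapter."""
        self.rules.append(rule)
        self.rules.sort(key=lambda r: -r.priority)

    def forward(self, frame: dict) -> str:
        """Match a frame's header fields against the rules, highest priority first."""
        for rule in self.rules:
            if all(frame.get(k) == v for k, v in rule.match.items()):
                return rule.action
        return "uplink"  # default: send unmatched traffic out the physical port

esw = ESwitch()
esw.install(FlowRule({"dst_mac": "aa:bb:cc:dd:ee:01", "vlan": 10}, "to_vm_queue:3", 100))
esw.install(FlowRule({"vlan": 99}, "drop", 50))
print(esw.forward({"dst_mac": "aa:bb:cc:dd:ee:01", "vlan": 10}))  # -> to_vm_queue:3
print(esw.forward({"dst_mac": "aa:bb:cc:dd:ee:02", "vlan": 20}))  # -> uplink
```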
Well, it's interesting. You hear all this software-defined talk, and everybody talking about commodity; we just had Bethany Mayer on talking about network function virtualization and specialized gear for the telcos, and like I say, the hyperscale guys are going more and more custom to get denser and denser servers and other components, certainly networking gear. How do you see that playing out? Is the enterprise going to follow a similar path for a period of time and go more toward commodity? Everybody's talking about running on commodity with SDN and SDDC and software-defined storage. Or will they have an appetite for more customization, do you think? I think it's both-and, not really an either-or. Commodity is not necessarily an awful word. We think commodity drives volume and simplifies the environment, and there are a lot of different aspects to commoditization. As it applies to simplification, we certainly need much simpler network and IO planes. That's really the goal of a lot of the software-defined, of SDN: to take away the complexity and put it into nice manageable environments where you have a single pane of glass that can manage more than just your external switch, but can also map your VM into that switching environment, so you can follow that VM, as if it were a physical server, all the way through your network to its endpoint and destination. So as I look over the whole landscape, I think it's natural to see things tier. Obviously hardware is getting less expensive. We're seeing 10 gig finally start to climb the adoption curve in enterprise servers; we saw that several years ago in the hyperscale and public cloud environments, and we're starting to see more and more of it in the enterprise data center. If we consider that part of commoditization, then absolutely. But as for this notion of software running on dumber and dumber hardware, I think both sides interact with one another. What we've seen with some of the software-defined, and I'll pick on virtual networking since we were just talking about that, is that the concept first started in software: we'll just create smart vSwitches, software switches, that handle all this tunneling encapsulation and de-encapsulation in software. Hugely flexible, and we won't burden anything else in the system. And then what we found is that we were burning a huge amount of CPU just encapsulating and de-encapsulating. So then it was, well, we need help, we need help from the hardware. So now we've added these offloads in lots of places in the network, both in the switches for legacy environments and at the endpoints, in the IO controllers themselves. And I think we'll see that again and again, a ratchet effect where software comes up with wonderful new features and then the hardware ratchets up to take on some of that functionality. I think we're going to see that continue.

Have you been following the whole Docker buzz, the new container technology? Oh, yes. It's sort of taking on VMware in a way. Any thoughts on that? Oh, boy, actually, it's a bit too early for me to have well-formed thoughts on that. Yeah, well, it's had a huge impact. Virtualization, of course, has had a huge impact on your business, and now it looks like more open source disruption is coming down the pike. All right, great. We have to leave it there. I really appreciate you coming on theCUBE. Oh, my pleasure. Great to see you again. Great to see you. Thank you. This is theCUBE at HP Discover, live in Las Vegas. We'll be right back with our next guest right after this short break.