Everybody, we're back. This is Dave Vellante at Wikibon.org. This is theCUBE, SiliconANGLE's continuous production of EMC World 2013. We're here in Las Vegas, day three, about 15,000 customers and partners, and it's really quite an event, probably the biggest and best EMC World that I've been to and the largest to date. I'm here with Stu Miniman, my co-host for this segment. Greg Scherer is here. He's the Vice President of Server and Storage Strategy at Broadcom. Greg, welcome back to theCUBE.

Thank you, it's good to be here.

Yeah, so this is quite an event. I think you guys, you have a presence here.

We do.

You just came in, and I think you were struck by the size of the show, as was I. I mean, we were talking, in the beginning days it was quite small, you know, a bunch of technical guys, and even recently, you know, five, six, seven, nine thousand people, but a big explosion this year.

Oh, it's tremendous. I mean, that coupled with holding it, you know, during an overlap time with NetWorld Interop down the street, it's really quite a venue. I see a lot of faces that have gone back and forth between the shows.

Yeah, well, Stu, you had some folks on from Interop, and there's cross-pollination.

Absolutely, Dave. We talked about how there's a lot of silo busting going on, and as software is taking over the world, you know, people need to understand those cross-domains. I mean, if you look at a recent hire, EMC brought in John Roese to be CTO because they specifically wanted somebody that wasn't from storage. And of course, John was CTO at Nortel and Huawei, so he really knew the Interop crowd over there, you know, talking about all these changes as EMC goes through its transformation. And, you know, Greg, you've been on theCUBE a couple of times. One of the things we've looked at is we think server virtualization really kicked off this wave of change in infrastructure; it rippled through storage, and it's also rippling through networking. So, you know, maybe can you talk to us a little bit about what you're seeing from that intersection of storage and networking today?

Oh, I'd love to. I mean, it's fascinating, Stu, because, you know, if you look at it, the ripples are still going on in terms of how fast things are moving. Where we started out with just combining a bunch of servers, you know, onto one virtual platform, it really put stress on the networking. And, you know, from an economic standpoint, that was the first practical platform within the enterprise where people decided that convergence between storage and networking was important. It was not because it was cool technology, but because the economics sort of demanded it. As we move into 40 gig and more, you look at the latest Romley-based Intel platforms with multiple cores per socket, that's only going to increase. These virtual servers are now capable of dozens, moving to hundreds, of virtual machines. And that's really pushing the envelope even further. If you look at a lot of the cloud-centric data centers, both public and private cloud, that drives the need for virtual networking. So outside of just the virtual server market, you know, we need the ability to instantiate virtual networks. So you see in the media a lot of the talk of NVGRE or VXLAN encapsulation or tunneling; we're at the very beginning of that in terms of even its use cases.
Today, if you want to migrate a VM from one server to another server, those servers have to exist within the same subnet and VLAN. Well, that's maybe practical on a very small scale, but if we're talking about global data centers where you have a data center in Norway and a data center in New York City, the likelihood of them being on the same subnet, it's not just unlikely, it's impossible. So the whole notion of virtual networking is really encapsulation, extending that notion of the network layer to where now I can appear in any virtual network that I want to anywhere in the world and therefore migrate my workloads accordingly.

So Greg, I wonder if we can actually step back and up-level for a second. We were talking about how one of the big trends is kind of the rise of software and, in many ways, the commoditization of the hardware underneath. You know, some of the folks at EMC World have probably heard of Broadcom but might not know all the places you guys sit. You sit in the server, you sit in the switch. If I go through the show floor as somebody that knows the components, it's like, oh, hey, there's Juniper, there's Arista using your chipset. You look at all the servers that are going in there, and, oh, I know where Broadcom sits in there. Can you give us kind of that high-level, quick overview of where Broadcom sits?

You bet, I'd love to. So Broadcom's actually organized into multiple different business segments. The business segment that I work with, in the data center and service provider space, we call ING, or the infrastructure and networking group. Literally, if you go through a data center, it would be hard to find a box that doesn't have Broadcom somewhere inside: network processors with our XLP line, PHYs and SerDes at the very low level in terms of being able to transmit data across copper at long distance or short reach, Ethernet controllers. So we have a presence here at the show where we're displaying our new line of cards, Ethernet NICs, specifically for Dell and HP servers, but we have a very broad line of both the chips themselves as well as boards. In addition to the NICs, though, we have a very, very big presence in the switching marketplace. Matter of fact, most 10 gig switches, if you open them up, there's Broadcom silicon inside.

Except for the guys that make their own chips, like one of the big players in the market. Maybe them. But if you look at the...

But even they have used Broadcom chips in some of their switches, especially in the low-latency market. They do, specifically. And so Broadcom really has a presence in many of those markets, I'd say virtually all of them; if you look under the covers, there's Broadcom silicon. We like to portray ourselves as really providing the foundation. I know SDN is a very overused term right now, but however you define SDN, in terms of where software meets the hardware in networking, Broadcom provides the foundation for that, from PHY and SerDes chips at the physical layer to Ethernet controllers, Ethernet switches, and network processors that work in conjunction with those.

Yeah, so, interesting point there. If you look at how SDN might change the marketplace, if market share shifts between the vendors, if you go to commodity switches coming out of Taiwan, in any of those cases, Broadcom's a winner.
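To make that earlier encapsulation point concrete, here is a minimal, illustrative sketch of VXLAN-style encapsulation in Python. The VNI, the dummy inner frame, and the remote endpoint address are made-up example values; a real VTEP lives in the hypervisor's virtual switch or the NIC, not in application code.

```python
import socket
import struct

VXLAN_UDP_PORT = 4789  # IANA-assigned UDP port for VXLAN

def vxlan_encapsulate(inner_frame: bytes, vni: int) -> bytes:
    """Prepend a VXLAN header (RFC 7348) to an inner Ethernet frame.

    The 24-bit VNI identifies the virtual L2 segment, so two VMs can share
    a subnet even when their hosts sit in different routed (L3) domains.
    """
    flags = 0x08  # 'I' bit set: a valid VNI is present
    # Header layout: flags (1 byte), reserved (3), VNI (3), reserved (1)
    header = struct.pack("!B3s3sB", flags, b"\x00\x00\x00",
                         vni.to_bytes(3, "big"), 0)
    return header + inner_frame

# The tunnel endpoint then sends the encapsulated frame as ordinary UDP/IP
# to the remote endpoint's routable address (example values below).
payload = vxlan_encapsulate(b"\x00" * 60, vni=5001)
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(payload, ("192.0.2.10", VXLAN_UDP_PORT))
```

Because the outer packet is plain UDP/IP, the virtual network can span any routed path between two data centers, which is what makes long-distance workload migration plausible in the first place.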
Yes, yes. Matter of fact, we really believe that if you look at the switching market in general, what's held people captive to specific vendors has really been the control plane software. Well, the very nature of OpenFlow, OpenStack, SDN is to take that control plane and commoditize it, push it out into the ecosystem to allow innovation to happen anywhere. I mean, there are many different avenues to this, and we could probably talk multiple segments on the whole network NFV, you know...

Network function virtualization.

Thank you. Network function virtualization, and even the whole notion of putting things like Chef and Puppet, which allow all different forms of applications to run through APIs directly on commodity switches. These are sort of attacking the same kind of marketplace, but they're both looking at the same issue, and that's to really allow innovation into the switching ecosystem, much like in the x86 compute server market, where anybody can write any application they want within the context of a server. That's really the basis for the operating system and the software development kits. That same paradigm is now moving into the network, to be able to run your application where it makes the most sense, especially for visualization and statistics gathering, to be able to see what's happening in your network.

We talk a lot on theCUBE about the hyperscale market and the things that we can learn from the Googles and the Facebooks and the Amazons of the world. We saw EMC this week announce ViPR, its big software-defined storage play. You're talking about software-defined networking; obviously networking has kind of been a leader here from a theme standpoint, and storage is sort of lagging in the whole software-defined space, though it looks like that's going to change. But my question to you is specifically this: what, in your view, is the industry generally and Broadcom specifically learning from the hyperscale space? What lessons are you learning and how are you applying them?

Boy, it's a great question, because if you look at the hyperscale space, within the context of the enterprise we still talk about, boy, storage and networking, when are they going to fully converge? In the hyperscale space, that question was answered many years ago, and the answer is it's all Ethernet. Storage is Ethernet, all your networking is Ethernet, your low-latency traffic is all Ethernet. So convergence happened there very early on. The other thing that is fascinating is that within the hyperscale environment, virtualization, server virtualization, multi-tenancy is really what was driven there, at massive scale. So the whole notion of tunneling and the NVO-type capability, network virtualization offload, that was really driven first within hyperscale, again because of their massive scale, where they're deploying tens of thousands of servers at a time as opposed to 10, or one rack's worth, or 20 or 30, 40, 50. Their scale is so large that they've developed problems that really couldn't be solved any other way. Within the hyperscale environment it's an all-L3 network; BGP is used exclusively from an ECMP and routing standpoint between top-of-rack switches.
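As a rough illustration of that all-L3, ECMP design: each flow's 5-tuple is hashed, and the hash picks one of the equal-cost uplinks from the top-of-rack switch toward the spine. The sketch below is conceptual only; real switch ASICs use their own hash functions and configurable field sets.

```python
import hashlib

def ecmp_uplink(src_ip: str, dst_ip: str, proto: str,
                src_port: int, dst_port: int, num_uplinks: int) -> int:
    """Map a flow's 5-tuple onto one of `num_uplinks` equal-cost paths."""
    key = f"{src_ip}|{dst_ip}|{proto}|{src_port}|{dst_port}".encode()
    digest = hashlib.sha1(key).digest()
    return int.from_bytes(digest[:4], "big") % num_uplinks

# Every packet of a given flow lands on the same uplink (no reordering),
# while many flows spread roughly evenly across, say, four spine links.
print(ecmp_uplink("10.1.1.5", "10.2.3.7", "tcp", 49152, 443, num_uplinks=4))
```

Keeping a flow pinned to one path avoids reordering, while the statistical spread of many flows across paths is what lets a routed leaf-spine fabric scale without any L2 tricks.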
We see more and more of that happening now, being brought back into the enterprise data center, just to simplify how things are done and to use fewer folks to manage the ecosystem, because that's sort of the central tenet of the hyperscale environment: you can't afford to do things the same way they've been done traditionally in the enterprise. It has to be simple, push-button, with very few personnel to oversee it. And that's being brought back in now. Innovation started in the hyperscale space and is moving back into private cloud and the enterprise data centers.

How long is that timeframe? Is it five years, seven years, four years? And is the time it takes for hyperscale to seep into the enterprise compressing?

Oh, I think it definitely is. And the reason it's compressing is the economics. It used to be, two decades ago, that we did things because the technology was cool, and it was really the innovation cycle that let us do interesting things. Now we're really looking at it more from the standpoint that the enterprise data center is being driven to survive based on the economics of: I need 10 gig, I need 10 gig everywhere, because I don't have time to physically create silos of servers where here's my one gig feeding into 10 gig uplinks that then feed into an aggregation layer for tier one, tier two, and tier three of the data center. Now we're building flat data centers, specifically because I can go to a server, push a button, and completely reconfigure what are application servers, what are database servers, and what are my web tier front end, and change that configuration at will, as opposed to sending out a crew to recable the servers.

We're seeing so many disruptions here. We always talk about flash: if we remove that bottleneck, then the network becomes the bottleneck. We've been talking about OpenFlow, open source, OpenStack.

Yeah, Dave, great point. And one of the challenges with networking, especially if you talk about the economics, is that it's a complex and layered piece. If I look at the move from one gig to 10 gig, there's the server bus, there's the switch backplane. The cabling is one of the things that gets overlooked all the time, and the cables and the optics are what cause so much of the cost and power. So Greg, you know, I think 10 gig, it's taken us over 10 years to really move.

Yes.

Switches, servers, have we gotten over 50% of all servers shipping 10 gig yet?

No, we haven't actually. The percentage in the rack is still quite low, on the order of 10% of rack servers. Now, this is in the enterprise data center. If we were to look at the public cloud or more of the hyperscale environment, and if you include China, which is lagging the US and, I'll call it, the Far East, we're definitely seeing that in the US it's 50% penetration within the public cloud. China's a much smaller percentage. The overall rack server market, though, is probably sub 10%.

Well, I know we had talked when Romley came out, and the hope was that we would get that push to go beyond blade servers into the rack-and-stack market. So 10GBASE-T's not catching on fast enough, is...

We're seeing signs of that now, and certainly Romley was a very slow start, a very, very slow ramp. You know, now we're moving into the tail end of this year, Intel's refresh, their Ivy Bridge cycle.
And again, there's a lot of optimism on many of our parts that we're going to start to see much more deployment. You know, something as simple as PCIe Gen 3 was very difficult with Romley. It was a new technology, and a lot of folks were on the edge from the standpoint of interoperating with Intel servers and interoperating with each other. Ivy Bridge is going to break that barrier, and we're going to see wide-scale deployment of PCIe Gen 3, which overall brings the speed of connectivity up and the cost of it down. So we have high hopes to see that penetration of 10 gig really increase significantly. 10GBASE-T is a part of that. There are certain market segments, some of the hyperscale environment, that don't have a huge need for 10GBASE-T because their cabling is top of rack; they're perfectly fine with direct attach copper, the DAC, Twinax cables. There are other environments, though, more the traditional enterprise, that are end of row and haven't completely migrated over to a 10 gig switching ecosystem. And 10GBASE-T offers one thing that's really irrefutable from a value proposition standpoint, and that's to allow each side to upgrade independently. And so we're starting to see more deployment of 10GBASE-T, more switch rollouts too. There have been switch rollouts from Dell, there have been switch rollouts from Cisco, major switch players that now have very concentrated, high port count 10GBASE-T switches.

Yeah, so that's kind of the edge lagging a little bit on 10 gig. I think in the general switch market you've got something that can do one or 10 gig, and even Arista last week announced that triple speed, 10, 40, 100, I think. So where are we with that kind of general cost dynamic of 10 gig, 40 gig, and 100 gig?

I mean, the cost dynamics, and it's always hard for me to talk specifically because there's always cost and price. I'm privy to the cost because we make the components. I'm not privy to the price because we don't sell it directly; it sells through other channels. But I can say that there's been a huge downward pressure on the overall prices, on the overall costs. On the prices, I think within the traditional OEM market there's been a tendency to keep a really broad gap between one gig and 10 gig pricing. We're starting to see that come down, and that's one of the reasons why I believe that in the Ivy Bridge cycle we're going to start to see significantly more 10 gig adoption. On top of the need from a technology and speed standpoint, cost is going to drive that. 40 gig is really a function of that: if you look at just the cost, 40 gig is really the price of four 10 gig ports, so it really is an MLD, four lanes of 10 bonded. The pricing is always a little bit different, but from a cost standpoint, that's what it is.

So, Greg, we're about to wrap up. I want to give you the last word, though. What's the coolest thing you saw at Interop?

Oh, boy. You know, the hard part is that...

Besides the Broadcom booth.

Oh, that's true, thank you very much. I think probably the coolest overall technology is really starting to put legs on some of the SDN capabilities.
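A quick back-of-envelope on why the PCIe Gen 3 point matters for 10 gig and 40 gig NICs. The sketch below uses raw per-lane rates less encoding overhead only; real throughput is lower once PCIe protocol and DMA overheads are counted.

```python
def pcie_gbps(gen: int, lanes: int) -> float:
    """Usable line rate per direction, ignoring PCIe protocol overhead."""
    if gen == 2:
        return 5.0 * (8 / 10) * lanes      # 5 GT/s per lane, 8b/10b coding
    if gen == 3:
        return 8.0 * (128 / 130) * lanes   # 8 GT/s per lane, 128b/130b coding
    raise ValueError("unsupported PCIe generation")

gen2_x8 = pcie_gbps(2, 8)   # ~32 Gb/s per direction
gen3_x8 = pcie_gbps(3, 8)   # ~63 Gb/s per direction

print(f"Gen 2 x8: {gen2_x8:.0f} Gb/s, enough for dual 10GbE but not 40GbE")
print(f"Gen 3 x8: {gen3_x8:.0f} Gb/s, comfortable headroom for a 40GbE port")
```

The same four-lanes-of-10 arithmetic underlies the 40 gig cost point above: a 40GbE port is essentially four bonded 10 gig lanes, so its component cost tracks roughly four 10 gig ports even when the list price does not.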
You know, there have been some companies that are being very practical about that and actually demonstrating some of the Chef and Puppet capabilities, putting real eyes and ears into the network from a purely third-party standpoint, to be able to see failures as they arise, microbursts, other things. It's pretty exciting to have that level of visibility, not through one vendor, but through a very broad range of applications.

I love that. Putting eyes and ears on your Puppet. So, you know, that's excellent.

Well, Greg, always great to deep dive with you on the networking. This is Stu Miniman with Dave Vellante, talking with Broadcom here from the Interop and EMC World shows in Las Vegas. Always lots of great tech. Great to catch up with you, Greg, and we look forward to continuing the conversation.

This is SiliconANGLE TV's live continuous coverage from EMC World 2013. We'll be right back with our next guest.
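As a small footnote to that last point about third-party eyes and ears on the network, here is a hedged sketch of what microburst detection from polled port counters might look like. The get_tx_bytes() sampler is a hypothetical stand-in; real tooling reads hardware counters, sFlow, or streaming telemetry at far finer granularity than software polling allows.

```python
import time

LINE_RATE_BPS = 10e9        # a 10GbE port
SAMPLE_INTERVAL_S = 0.001   # 1 ms polling window
BURST_THRESHOLD = 0.9       # flag any window above 90% of line rate

def watch_port(port: str, get_tx_bytes) -> None:
    """Poll a hypothetical byte counter and flag near-line-rate windows."""
    last = get_tx_bytes(port)
    while True:
        time.sleep(SAMPLE_INTERVAL_S)
        now = get_tx_bytes(port)
        bits = (now - last) * 8
        utilization = bits / (LINE_RATE_BPS * SAMPLE_INTERVAL_S)
        if utilization > BURST_THRESHOLD:
            print(f"microburst on {port}: {utilization:.0%} of line rate "
                  f"in a {SAMPLE_INTERVAL_S * 1000:.0f} ms window")
        last = now
```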