Hi, this is Stu Miniman with Wikibon.org, here with SiliconANGLE TV's live continuous coverage from Dell World 2012 here in Austin, Texas. And the segment we've got here is we're going to be talking about the maturation of Ethernet — 10 gig Ethernet, 40 gig Ethernet — and the latest generation of servers. And joining me is Brian Payne from the Dell Server Division. Thank you for having me. Thanks for joining. And Greg Shearer, who's with Broadcom; he does the strategy around kind of the interconnectivity and the server piece. Greg, welcome back. Thanks, Stu.

Both of you guys have been on theCUBE before. As you know, I like to talk about the latest trends, what's going on in the industry, and really just dig a little deeper as to what's happening in the marketplace. So if I could set the stage: if we look at 10 gig Ethernet, it's been around now for over a decade — it was ratified back in '02 — and we've seen a really kind of slow adoption rate. The numbers I heard were that in 2011, on the server side, about 20% of all servers had adopted it. Of course, earlier this year we had the Romley launch from Intel — we had Intel's CIO on this morning talking about some of the adoption — and of course Dell released the 12G servers to go along with that. So, Greg, since you guys at Broadcom are really on the component side, seeing a lot of that server adoption, bring us up to speed. What have we seen in 2012 in adoption of 10 gig? Where's it happening and what's going on?

So we're excited that we are starting to see the ramp of 10 gig starting to climb. Blade servers have been the predominant marketplace for 10 gig, mainly because of slot limitations and just general IO constraints; 10 gig has really led the way in blades to start. We've seen moderate increases in the rack, and that's where we're really hoping to see a much broader adoption. Some of the segments like public cloud have adopted 10 gig in a much broader way. But we're hoping to see, especially in the virtualization segment for the rack, a very, very broad market — to see that really increase as we head into 2013.

Okay, great. So Brian, how about you? Tell me from Dell's point of view — 12G launched earlier this year. What has customer adoption looked like? Where are they finding 10 gig really allowing them to do new things in their environments?

Yeah, so first of all, it's been just an exciting year with 2012, launching 12G and rolling that out. It's been unprecedented success; we've got a lot of growth going on in the marketplace. Last week I was in the UK talking with some of our customers. One example: we had a customer that was progressive, a financial customer, that was rolling out OpenStack to do their test and development. And they were saying, hey, they're on gig-E today, and they said, I've got to get to 10 gig quickly, because as it stands, I can't spin up new virtual machines quickly enough — I need more bandwidth to roll that out there. And that's just one example. As we look at the broader marketplace, looking at a recent research study about spending plans going forward — this was North American and European mid-size and large companies — absolutely at the top of customers' lists, in terms of planning more spending, was 10 gig adoption. So, to Greg's point, I think we're on the cusp of things picking up, and it's really virtualization that's driving it. The concentration of virtual machines, the need to do live migration and have the bandwidth available for that, is critical.
And so we're very excited about the potential. 10GBASE-T certainly helps in terms of making that leap to 10 gig a little bit easier for our customers.

Great — yeah, glad you brought up 10GBASE-T. So if we talk about the application, if you dig down, most people don't think about the cabling, the physical layer that really underpins all of this. But those of us in the industry have seen that one gig Ethernet was well over 90% running on just twisted pair — RJ45 connectors, billions of ports out there. And that technology not being available for 10 gig has been a limiting factor. But it's a complicated discussion there, because of the power and cooling of the environment, the cost of that environment. I actually published some research from Crehan Research, some of the market studies: if you look at adoption, 10GBASE-T was really low; in Q1 it actually ticked up a bit, and then it really started to accelerate in Q2, continuing into Q3. So Greg, can you tease out for us some of the details on what 10GBASE-T is, who's doing it and why?

So we're very excited to see 10GBASE-T really, I'll say, begin its ramp. As you say, it's one of those new technologies that's been around for a very long time. Right — the oxymoron, right, the new old technology. That's right. And just to put it in perspective, where 10GBASE-T started was in 90 nanometer technology, and the power was — you know, different people would fudge in terms of what the real power was from a worst-case standpoint, but there were multiple folks that had 15 watt solutions just for the physical layer device itself. And a 15 watt device, when you add that to the controller, it was very difficult to get a single-port controller into a PCIe slot that had a limit of 25 watts — so there was no way to do it with a dual-channel controller. And this is one of the beauties of silicon migration as we go from process to process. We went from something that was very large, to the point where in 10GBASE-T designs the logic was larger than the controller behind it, in terms of all the capabilities that it needed to support — much of that was to support distance. Along the way, though, there's been wonderful progress. Now we're in 40 nanometer, a second generation of 40 nanometer, on our way to 28 nanometer, and now we're looking at something that's a worst-case power of around five watts, and a nominal of probably around three watts, with a short-reach capability. One of the things that the standard has done is that if you are just running short-reach cables — and short reach is defined as 15 meters or less — at startup they run a TDR, a time domain reflectometer, to measure the distance. And at that point — it's really ingenious — we just cut the launch power in half. So you end up saving even more and go down, nominal, to probably just a little over a watt, which is very respectable. So we're excited because now we have speed agility, and in the rack this makes a tremendous difference. You no longer have to plan a 10 gig controller and a 10 gig switch together; you can upgrade either side independently.
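As a rough, back-of-the-envelope illustration of the power trend Greg is describing, here is a small Python sketch. The per-port wattages (roughly 15 W for the early 90 nm PHYs, about 5 W worst case and 3 W nominal today, and a little over 1 W in short-reach mode) are the approximate figures from the conversation, not datasheet values, and the 48-port rack is an assumed example.

```python
# Back-of-the-envelope view of 10GBASE-T PHY power per process generation,
# using the approximate per-port figures quoted in the conversation.
PHY_WATTS_PER_PORT = {
    "90 nm PHY (early, worst case)": 15.0,  # plus controller power: hard to fit a 25 W PCIe slot
    "40 nm PHY (worst case)":         5.0,
    "40 nm PHY (nominal)":            3.0,
    "short-reach mode (<= 15 m, launch power halved)": 1.2,
}

PORTS_PER_RACK = 48            # assumed top-of-rack example, not from the conversation
HOURS_PER_YEAR = 24 * 365

for generation, watts in PHY_WATTS_PER_PORT.items():
    rack_watts = watts * PORTS_PER_RACK
    kwh_per_year = rack_watts * HOURS_PER_YEAR / 1000.0
    print(f"{generation:<50} {watts:5.1f} W/port  "
          f"{rack_watts:7.1f} W/rack  ~{kwh_per_year:8.0f} kWh/yr")
```

The point of the arithmetic is simply that the PHY power has fallen by roughly an order of magnitude across process generations, which is what makes dense, rack-scale 10GBASE-T deployments practical.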
Yeah. So Brian, when I talk to customers on this issue, I find the big guys, the guys that are just going to roll out thousands of servers — 10GBASE-T is not going to be for them; power and cooling is really critical and the power's not there. Where do you see the cutover? Is this still just mostly for enterprises that are upgrading but keeping most of their existing plant? Where does this fit into what you're seeing in customers?

Well, so where does 10GBASE-T fit in versus optical, or even SFP+? Actually, it's one of these things that's difficult to predict, right? And so, as we worked with Broadcom and we looked out at their roadmap and at what they have planned, and we talked to our customers, they had a lot of uncertainty about which technology they were going to adopt. So we actually designed modularity into our servers. And so, actually, I no longer have to predict the future. We've designed the 12th generation of PowerEdge servers so that the network — which typically has been embedded, a LAN on motherboard, if you will, a LOM, embedded in the server, where we've made a choice for our customers — we actually made that a module. So now our customers have investment protection: they can buy a PowerEdge server and have the ability to change that technology at their own pace, as they make the migration, as they make their technology choices.

My understanding is it's not only just which interface you have, but it's one gig or 10 gig, and potentially higher speeds in the future, right? Exactly, exactly. And so in that respect, we have given our customers a lot of flexibility and a lot of investment protection. And what's cool about it, though, is that it didn't come with a loss of functionality. We still have it fully integrated into our embedded management. In fact, the embedded management is probably the most exciting thing our customers are telling us about when they think about PowerEdge servers. We just recently announced the ability to do SupportAssist, where now our Dell tech support can tie into that server and understand what's going on with the server when a problem happens. So when the customer calls them, they already know — they've already troubleshot it. And then with the embedded management, let's say — not that the Broadcom network controller chips fail — but let's say that something happened with that interface, something became unseated or there was a failure. We actually have the ability to pre-stage the replacement — the BIOS, the firmware and all the settings of that device — where all the customer does is simply slide the server out of the rack, or the blade out, pop in the new controller, and it will automatically reconfigure for them. So when I say there's not a loss of functionality — we've actually increased the functionality to give customers flexibility.

So Brian, what does Dell do to help customers with the adoption of 10 gig? I know you make it really flexible, but what advice do you give them, and are there services that you're putting in place to help them with that rollout?

That's a great point. So as we talk to our customers, a lot of the feedback we get is: talk to me about my workload and give me some perspective about what 10 gigabit Ethernet is going to do for me in that workload. What that translates into is that we'll build out reference architectures. We'll actually simulate an environment — we'll build reference architectures for a virtualized environment and try different workloads on it. We also did a lot of work around big data and building that out. And so we'll try gigabit Ethernet and 10 gigabit Ethernet, we'll evaluate and then make a choice, and then provide some commentary that customers can read, in white papers, on why we made the choice that we did.
And then we can even go as far, with our Active Systems, as pre-configuring that for our customers and delivering it to their dock. So it's really trying to put 10 gigabit Ethernet in the context of what they're trying to accomplish and making a recommendation on when it makes sense for them, based on the number of virtual machines they're hosting or the type of database they're hosting.

Yeah, so I have to pounce on that — I heard you mention big data, which of course is a big trend here. I've seen some of your competitors releasing new servers kind of optimized with a lot of storage internal rather than having a SAN. So what's Dell's position? What product lines are you seeing implemented for things like Hadoop or other big data analytics applications?

Yeah, so really kind of two parts of the Dell PowerEdge family. We have our PowerEdge rack servers, and earlier this year we launched our PowerEdge R720xd, which is a 2U server that is capable of hosting 50 terabytes of data storage. We just launched four terabyte drives in that, in the three-and-a-half-inch form factor. We also have the ability to support up to 25 two-and-a-half-inch drives in that system, and as you know, in the big data, kind of Hadoop rollouts, this is a great building block for those customers. And that comes with the kind of traditional Dell PowerEdge management features — our iDRAC, the Lifecycle Controller that gives you that parts replacement capability I talked about a minute ago. On the other side, though, is our PowerEdge C product line. In September we launched our PowerEdge C8000 family, which is a 4U chassis that has a lot of flexibility. It has flexibility for a bunch of compute, but it also has flexibility for GPU and other co-processing technology. And the last one, which ties back to your point, is the ability to put in a bunch of disk sleds — so you're able to pack a bunch of 12-drive disk sleds into this 4U form factor. So as you look at customers that are going for maximum density in their environment and looking for the bare essentials — kind of what we see going on in the hyperscale space — that's where the C8000 fits in and can be a platform for hosting that.

Is that part of the DCS group at Dell then? Or is that yet another group?

If you have PowerEdge over here and you have DCS over here, it's the best of both worlds kind of coming into this PowerEdge C lineup. And so as we want to bring hyperscale to the masses, that's where the PowerEdge C servers fit in.

And by the latest numbers, hyperscale is one of those really kind of bright spots in the whole overall server market — big growth, a small number of customers today driving huge volumes. And Dell's got the leadership position there. That's right, that's right. Kudos to you on that. Thank you.

So I guess if we're looking a little bit down the line, Greg — so 10 gig, maybe we're finally starting to get adoption, so everybody of course says, okay, what's next? 40 gig has been on the market for a while. Where are we with 40 gig? Are customers starting to adopt it, and when? What's your take?

We're actually seeing very strong 40 gig adoption on the switch side, for inter-switch links and the aggregation layer. We think that actually starts to migrate into the servers for certain applications. As we said, the 10 gig adoption rate is still not massively high in the rack, and here we're talking about 40 gig and even 100 gig, which we'll be in a position to sample to our major customers next year.
And we expect that to launch with the next major cycle. But some of the key technologies that I think are going to be required to really see broader adoption in 10 gig, and especially in 40 gig and beyond — Dell's really been a pioneer in NIC partitioning, or NPAR, capability. I was recently at a data center — this was a healthcare provider — and they were just in the process of migrating from one gig to 10 gig. And the front of their servers — they're a Dell customer, I'm happy to say. All right. The front of the servers was very nice, everything dressed up. You went around to the back of the cabinets and they had 10 and 12 ports of one gig coming out of each server, and it looked unbelievably complex. What may have started out really nice from a dressed-cable standpoint, after two years of reconfiguring and so forth, was just awful. And so these folks were really looking at: what can we do to still manage the way we used to with one gig, but get fewer 10 gig pipes so we don't have this cable mess? And NPAR technology is a great way to do that. Each 10 gig pipe is logically divided up so the OS believes it is seeing separate, individual NICs, except that now those NICs have the ability to have different quality of service — from fixed bandwidth requirements, to saying you can only get this much, to even oversubscription — using bandwidth weighting and prioritization. So I think that's one of the things, and that's fully integrated into the whole lifecycle management and iDRAC so it can be managed in an easy way. Because we all know new technology — which I tend to live on — is exciting to talk about, but if it's not simple to deploy it becomes really a liability, and I think this is an area where Dell is really shining in making that easy to manage and making the transition easy.

Brian, please jump on in if you want to comment on that kind of simplicity message.

Yeah, absolutely. Well, I think one of the things that's at play here is that, within the IT organizations that we deal with, there are a lot of organizational boundaries that are getting blurred. And if you think about usage models varying, different customers have different ways of coping. Whether it's, hey, I want to retain my kind of traditional way of partitioning my network — and so NPAR makes sense for those guys, and hey, we can enable that for them — or if they're ready to go the full convergence path, again, we have the flexibility to go offer that. So it's a combination of making it very flexible for our customers, and then the simplicity comes in in the management interface and the ability to deploy and just manage that server through its lifecycle. And that's where we're investing a lot of time and resources, to make the experience of managing PowerEdge servers so much simpler — there are absolutely no agents required, a lot of OS flexibility, that kind of thing. So when I think of simplicity, I think about what we're doing in our embedded management, where we're leading and pioneering kind of new capabilities in the industry.
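To make the NPAR idea Greg described a moment ago a little more concrete, here is a purely illustrative Python sketch of one 10 gig port carved into logical NICs with bandwidth weights and optional caps. The partition names and numbers are invented for the example; actual NPAR is configured through the adapter and Dell's embedded management tools, not through code like this.

```python
# Illustrative model of NIC partitioning (NPAR): one 10 Gb/s physical port is
# presented to the OS as several logical NICs, each with a relative weight used
# when the port is contended and an optional hard cap. Values are invented.
from dataclasses import dataclass

PORT_GBPS = 10.0

@dataclass
class Partition:
    name: str
    weight: int                   # relative share under contention
    max_gbps: float = PORT_GBPS   # cap; caps may oversubscribe the port in aggregate

partitions = [
    Partition("management",     weight=1, max_gbps=1.0),
    Partition("vm-traffic",     weight=5),
    Partition("live-migration", weight=2, max_gbps=5.0),
    Partition("iscsi-storage",  weight=2, max_gbps=8.0),
]

total_weight = sum(p.weight for p in partitions)
for p in partitions:
    guaranteed = PORT_GBPS * p.weight / total_weight
    print(f"{p.name:<15} guaranteed ~{guaranteed:4.1f} Gb/s under contention, "
          f"up to {p.max_gbps:4.1f} Gb/s when the port is otherwise idle")
```

The appeal in the story above is that the cabling collapses from ten or twelve one gig runs per server to a couple of 10 gig runs, while the operations team keeps the logical separation it is used to.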
Yeah, that convergence is definitely something we're seeing. Back at Dell Storage Forum I think we were talking about the EqualLogic that's embedded in the compute. Where do you see that going forward? Are we still going to have just lots of servers and storage? How much is that blurring of the line going to happen?

We are in the midst of a serious transition in the industry, with the resurgence of DAS in a lot of cases, but I think the most exciting thing that we're working on is: what can we do with flash? We talked a little bit about this at Dell Storage Forum. We're still on the mission of extending that fluid data architecture, where you get all the resiliency and the data integrity that you expect with a Compellent array, but getting the performance of putting that flash in the compute, on the Express Flash drives that we pioneered in the PowerEdge product line. So that's the exciting innovation. If I look out to 2013, and you start to see solutions rolling out from Dell, those are the kind of exciting things that you're going to see coming from us.

Okay, so my final area I want to cover is power. It's one of those things where a lot of times the green technologies get kind of poo-pooed, but both Broadcom and Dell have been making some big pushes in that. So Brian, your folks were telling me about the kind of fresh air functionality, something I'm not too familiar with. So what is that?

Yeah, absolutely. So fresh air capability is the ability for our servers to tolerate temperatures that go up to 113 degrees Fahrenheit, or 45 degrees C. Well, what does that enable you to do? Actually, it means that you can run your servers in a data center that doesn't have chillers, because if you look at a map of the world and you look at temperature fluctuations and what happens — our servers are capable of running at that 113 degrees for 90 hours out of the year, and at 110 for 900. And if you look at that, that means that in North America, Europe, and most parts of Asia, you can actually operate that way. Why does a customer care to do that? Well, if you think about cooling, providing a chiller for a megawatt of IT equipment is a $3 million capital investment. So there's some serious savings there. Another one, though — let's say you're not ready to make that leap and you're just thinking, well, hey, I've got the peace of mind of ride-through, where I know my PowerEdge servers can take these temperature fluctuations and operate reliably without any compromise to the warranty. So that's a capability that we support in the current generation of servers as well as our 11th generation of servers. We're just in the process of rolling out what we call the fresh air hot house — literally, a parking lot at our headquarters. It's going to be hosting three servers that are running live financial applications for one of our partners, Efron partners — one of our customers, I should say. Anyway, running their real-time data analytics without any cooling. It's just sitting in an asphalt parking lot with air moving through it.

Well, that's great. I talk to my friends down here in Austin and they like to be able to walk around in their flip-flops, so they shouldn't have to put on their jackets to be able to go into the data center. That's right, it gets hot here. It does, absolutely.
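A quick bit of arithmetic on the fresh air numbers Brian quotes — the hour budgets and the roughly $3 million-per-megawatt chiller figure come from the conversation, while the one megawatt facility size is just an assumed example:

```python
# Rough arithmetic on the "fresh air" figures quoted in the conversation.
HOURS_PER_YEAR = 24 * 365

# Allowed excursion hours per year at the elevated temperatures Brian mentions.
excursion_budget_hours = {"45 C (113 F) upper limit": 90, "lower excursion limit": 900}
for label, hours in excursion_budget_hours.items():
    print(f"{label}: {hours} h/yr allowed, about {100 * hours / HOURS_PER_YEAR:.1f}% of the year")

# Capital avoided by skipping chillers, per the ~$3M-per-megawatt figure quoted.
it_load_mw = 1.0                     # assumed facility size for the example
chiller_capex_per_mw = 3_000_000     # USD, figure quoted in the conversation
print(f"Chiller capex avoided for {it_load_mw:.0f} MW of IT load: "
      f"${it_load_mw * chiller_capex_per_mw:,.0f}")
```

In other words, the excursion budget amounts to roughly one percent and ten percent of the hours in a year at the two limits, which is why Brian's map-of-the-world argument covers most of North America, Europe, and much of Asia.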
Greg — power, closing remarks? Absolutely. So, power — Broadcom tends to focus more on the component side of power, but Triple-E, Energy Efficient Ethernet, is something that we've spent a lot of time working with. We talked about 10GBASE-T and some of the power issues that it's had in the past; we've put a lot of effort into 10GBASE-T specifically to downshift from a power standpoint. If the link becomes underutilized, we'll shut that down to a very, very low rate of data exchange, to the point where we save a tremendous amount of power and get well under a watt — nearly shutting the port off. And this is true for a lot of environments where maybe at night they don't have the same load. And I think that, both in the minor details like that — and this is true on both Broadcom switches and Broadcom controllers — but even at the larger scale, we feel very strongly in the enterprise environment about offloads: running offloads like iSCSI full HBA offload and FCoE full HBA offload. We're sort of the hybrid car — you can think of it as kind of the hybrid analogy, where we're the electric engine next to the big beefy V8 that sits up in Intel land — and it offloads protocols very efficiently. We can save as much as 90 watts per 10 gig port in a fully loaded IO environment just by running that workload in the controller as opposed to using software in the host to run it. So that's kind of the continuum, from the low-level Energy Efficient Ethernet at the component level up through the system level and doing protocol offloads.

Great. One of the things that really impressed me in the messaging at the show here — Michael Dell said, if you're worried about configuring your ports, setting your VLANs or configuring your LUNs, things are going to change. So there's flexibility built into the platforms, Dell working with partners to be able to do all those really deep things — most of that stuff's automated and very flexible to change. So, Brian and Greg, once again, thanks for joining us here on theCUBE. Appreciate the information, and we'll watch the 10 gig adoption as it grows. And so this is Stu Miniman with Wikibon.org, and we'll be right back with our live continuous coverage here from Dell World 2012, right after this brief break. First time on theCUBE, baby.