And now, SiliconANGLE TV and wikibon.org present a focused spotlight. Live from Las Vegas at VMworld 2011, hosts John Furrier and Dave Vellante present the Open Data Center spotlight, with support from Intel and EMC. Intel Inside.

We're back live at VMworld 2011. I'm John Furrier, the founder of SiliconANGLE.com and SiliconANGLE.tv, and I'm here with my co-host.

I'm Dave Vellante of wikibon.org, and this is the spotlight on the Open Data Center. Now, as you know, this week we've been doing in-depth spotlights to help practitioners really understand some key trends and key issues. And this is one of our more interesting spotlights in that we're looking at the future of the data center. You know, 10 or 12 years ago, companies like Google came along and realized that traditional IT wasn't going to be able to meet the needs they had going forward with mega-scale, cloud-scale data centers. One of the big issues, of course, was energy consumption and operational cost.

Well, here's the thing, and that's why I look at the data center. The data center is evolving. You know, rack and stack, networking, put the apps on the server, that's old school, but it's quickly evolved to the Facebooks of the world, the Googles of the world. And data centers are growing at the enterprise level at such a rate that a new data center model is evolving. It involves scale, it involves reconfiguration, kind of a data center operating system, Dave. And really the core issue, the heart of this, is power.

Yeah, so we've laid out, John, and you can see on this slide, some of the new data center requirements. And as I say, it was really driven by some of the early web companies, Google in particular, but then of course Amazon and Facebook got into the mix. And as you know, these companies are building mega-class data centers.
Microsoft's got data centers next to the Columbia River, where sources of power are very cheap. So they've really changed the way in which they design data centers. The design requirements are for petabytes instead of gigabytes or terabytes. We're talking hundreds of millions of users versus thousands. How many users are on Facebook? They have 700 million registered users, and the active user count is off the charts, which puts a lot of pressure on their data center requirements; they really have to harden their data centers.

So when I look at these new data centers, these new approaches, they really have to harden and abstract away a lot of the complexities around operations, which is power, which is scale, and also software. And software has become a big part of it. So the hardware vendors have to get smart about how they use software, how they use network management, and move that into the physical layer, Dave, and really make the data centers part of the operating system plan, using software, automation capabilities, and management.

Now, one of the other things they've done is essentially use a white-box strategy, throwing commodity hardware at the problem and creating self-healing systems. Something dies every day: disk drives die all the time, servers die all the time. But they can just fail over, because they spread the data around. So they have this self-healing, autonomic recovery, as opposed to, okay, I'm going to back it up, I'm going to have to go to tape; that just wouldn't work.

The whole self-healing automation thing has been around for a while; the concept's not new. What's new is the changing architecture of data centers: different locations, near warm climates, cold climates, and so the nature of the healing changes in that regard.
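The self-healing pattern described above, spreading replicated data across commodity nodes so the system recovers on its own when hardware dies, can be sketched in a few lines. This is a minimal illustration, not any specific vendor's implementation; the node names and replication factor are hypothetical:

```python
# Minimal sketch of self-healing replication across commodity nodes.
# When a node dies, every block it held is re-replicated from a
# surviving copy so the replication factor is restored automatically.

REPLICATION_FACTOR = 3

class Cluster:
    def __init__(self, nodes):
        # node name -> set of block ids stored on that node
        self.nodes = {n: set() for n in nodes}

    def store(self, block_id):
        # Spread copies across the least-loaded nodes (simple placement policy).
        targets = sorted(self.nodes, key=lambda n: len(self.nodes[n]))
        for n in targets[:REPLICATION_FACTOR]:
            self.nodes[n].add(block_id)

    def replicas(self, block_id):
        return [n for n, blocks in self.nodes.items() if block_id in blocks]

    def fail(self, dead_node):
        # Autonomic recovery: re-replicate lost blocks onto healthy nodes.
        lost = self.nodes.pop(dead_node)
        for block_id in lost:
            survivors = self.replicas(block_id)
            candidates = sorted(
                (n for n in self.nodes if n not in survivors),
                key=lambda n: len(self.nodes[n]),
            )
            if candidates:
                self.nodes[candidates[0]].add(block_id)

cluster = Cluster(["n1", "n2", "n3", "n4", "n5"])
for b in range(10):
    cluster.store(f"block-{b}")

cluster.fail("n1")  # a node dies; the cluster heals itself

# Every block still has REPLICATION_FACTOR copies on surviving nodes.
print(all(len(cluster.replicas(f"block-{b}")) == REPLICATION_FACTOR
          for b in range(10)))
```

The point of the sketch is the contrast the discussion draws: recovery is a routine, automated rebalancing step rather than a restore-from-tape event.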
Secondly, the actual hardware configurations of the data centers are changing, obviously trying to minimize the footprint of the physical hardware. And then complicating all that, Dave, are virtual machines. When you add virtual machines on top, it really starts to make the data center more complex. Now, the opportunity is to make the data center more robust and more scalable, ultimately delivering more value to the user, and all of this while using less power. Power is the scarce resource, and energy consumption is a real design criterion.

Yes, and the last thing I want to say about this slide is that automation is really the key. These trends are spilling into the traditional data centers, and you've mentioned power a number of times; in this slide we've got some data. CIOs are beginning to pay attention to power. Now, about three or four years ago we did some work in the Wikibon community, and we found that less than 5% of the CIOs we talked to had ever even seen the power budget. That number's up to about 15% now. This slide shows the percent of a data center budget that's allocated to power and cooling, and you can see it's headed up, up, up over time. Now, the mega-scale data center guys like Google saw this and said, look, we can't continue to throw this money at power. We need companies like Intel and Seagate, frankly. We need to really redesign the way in which we approach data centers.

Yeah, and our reporting at siliconangle.com is clear: when we went out and started talking to the vendors and the customers, there were two issues. You mentioned Google and Amazon. They, in a way, start their data centers from a clean sheet of paper, but a lot of the enterprises don't have that luxury, Dave.
They have existing incumbent infrastructures that were not designed with power in mind, and so that is a real fundamental shift: how do you undo the existing data center? A lot of the smart vendors understand this and will design for it. So there's the clean sheet of paper, and then there are also the existing data centers.

Now, here's a nuance on this next slide: the energy culprits in the data center. What you can see here is the demand. The IT equipment, the servers, the storage, and the other IT gear consumes, John, only about half of the power. The rest of it goes into cooling, UPS systems, power distribution, lighting, and other non-IT equipment. So the point is, and guys like Google realized this, that if you can attack the IT equipment and make it more efficient, you can really reduce consumption. You have a double-whammy effect. What a lot of people do is go put in a new cooling system, but that doesn't solve the fundamental problem.

And the mega-trend that's implicit on top of this is everything around big data. I've had in-depth conversations, and we've talked about it on theCUBE with Amr Awadallah, who was at Yahoo: most of the servers and storage out there are underutilized. So when you look at the topology of the data center, that's a whole other factor. When you add in the design criteria for energy, you're also looking at big data and these new applications that can take advantage of the resources. At the end of the day, a data center is a set of resources, whether that's cloud or on-premises gear. So to me, this is a really interesting area that is growing like crazy, and not a lot of people are talking about it. So it's really important.

It's an interesting area for me as well, John. It's something that I've been studying for a number of years, as has the Wikibon community. So there are several key issues that we're looking at.
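The split Dave describes, with IT equipment drawing only about half of the facility's total power, is what the industry's PUE (Power Usage Effectiveness) metric captures, and it explains the "double whammy": at a fixed PUE, every watt saved at the servers also saves a matching share of cooling and distribution overhead. A quick worked example (the kilowatt figures are illustrative, not from the discussion):

```python
# Power Usage Effectiveness = total facility power / IT equipment power.
# If the IT gear draws only half the total, PUE = 2.0, meaning every
# watt of IT load carries another watt of overhead (cooling, UPS,
# power distribution, lighting).

def pue(total_kw, it_kw):
    return total_kw / it_kw

# Illustrative facility: 1,000 kW total draw, 500 kW of it at the IT gear.
total_kw, it_kw = 1000.0, 500.0
print(pue(total_kw, it_kw))  # 2.0

# Cut IT load by 100 kW; overhead scales with IT load at the same PUE,
# so the facility-level saving is doubled -- the "double whammy."
saved_it_kw = 100.0
total_saving_kw = saved_it_kw * pue(total_kw, it_kw)
print(total_saving_kw)  # 200.0
```

This is why attacking the IT equipment itself beats bolting on a more efficient cooling system: the cooling fix improves only the overhead term, while efficient IT gear shrinks both terms at once.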
Four in particular that we're going to drill into in this spotlight, and we're showing them on this slide. Number one: how can practitioners ensure that they can meet these new scale requirements with open standards? That's very important. The second: can vendors bring commodity pricing to the data center and still make enough money to innovate? The vendor community has to do R&D. The third: what new skill sets are CIOs going to have to bring in and recruit to build the data center of the future? And finally: how will CIOs fund the transformation to that new data center?

Now, the big theme is lock-in. Users don't want lock-in. Vendors have heard that, but they still have to preserve their value. How do they do that? One of the ways we're looking at is metadata: how the metadata evolves, how I protect that metadata and exploit it in big data scenarios, and actually bring value to the marketplace without getting locked in.

And the other thing that I would observe there, and I would agree with that, is that software plays a key part of this. So when you look at all the innovations and all the R&D from the Intels and HPs of the world, you're seeing that they're taking existing concepts like network management and self-healing, the things we talked about, and applying new approaches to those paradigms in the new architecture. So software is where they're going to make their money. I think we heard from Tarkan Maner at Wyse, who said the hardware's getting cheaper and cheaper. That's okay. The value will come in on what I call nested value from the vendors. It's going to be software, and that's where they're going to see the differentiation. At the same time, the table stakes is energy. So you've got to manage the energy and create a hardened operational infrastructure, with software as the value. If vendors can do that, low pricing for the hardware while managing energy, they're going to have an opportunity.
And use open standards to build those new scale-out data centers. So John, thank you very much for helping me present this spotlight on the new data center.

Thank you, Dave. The data center is rapidly changing, and with hybrid cloud and private cloud, you're seeing the interplay between both, and it's really relevant. So we're going to go deep into these issues in this spotlight. We're going to bring in some subject matter experts and practitioners. So keep it right here. We'll be right back.