Okay, we're live back here at HP Discover 2012. I'm John Furrier, joined by my co-host Dave Vellante, and we're here with Cullen Bash, director of the Sustainable Ecosystems Research Group at HP Labs. You're involved in all the data center, energy, power and cooling work. I've interviewed Chandrakant Patel in the past, we've had a lot of conversations, and we've blogged extensively about all the innovations you guys are working on around big data, machine data, probes, footprint, energy. So welcome to theCUBE.

Thank you, it's a pleasure to be here.

So tell us what's going on at HP Labs before we get into some of the conversations around what you guys are doing right now. Obviously you had a recent announcement at the show, I think last week or the week before, around the data center. So just take us through the update.

Sure, last week we announced some of the research we've been doing on net-zero energy data centers. This is a project that's been going on for about three and a half years now at HP Labs, within our Sustainable Ecosystems Research Group. In that project we're looking to reduce resource consumption in data centers as much as possible. We take a holistic view, so we're looking at reducing IT resource consumption, energy resource consumption, power and cooling, all of that, as one big holistic problem.

So what are you seeing here at HP Discover 2012? What's the vibe, and what are you guys talking about?

We're talking about the future of computer architecture for big data: how do we architect new machines that can deal with the zettabytes of data generated every year, just growing and growing? I think we're expected to get to about eight zettabytes of data generated per year within the next four years. So how do we construct new computer architectures to deal with that that are very energy efficient and very high performance?
So obviously big data is one of those terms where everyone has a different definition. Some people in the customer base think it's pretty much hype, but we're big data junkies, we love it. We think it's a trend that's relevant at many different levels, obviously transforming business. So share with people your definition of big data, and then some of the things people might not know about it. I've already had some of these conversations with you, but I'll let you share what's happening around the machines and the probes, some of the real collective intelligence you can do with the network and the machines.

So first, the definition of big data. I'm a mechanical engineer, so I deal with energy, but for me big data is really all about unstructured data; it's about data that's being generated at a rate today that really is unprecedented. The things you can do with it are interesting, the things we do at the data center level with data. In fact, we're thinking about it beyond the data center, so let's think about cities. If we start thinking about how to make cities more sustainable, one way to do that is to collect information about what's happening in that city. This goes down to everything in the cyber-physical infrastructure: what's happening in our water distribution system, our energy distribution system, our network. If we think about sensing and collecting all of that data so we can act on it, to me, that's big data.

So there's a story floating around that HP's new data center uses 30% less energy; the BBC just came out with an article on it. Can you elaborate on that?

Yeah, in fact, that's part of the sustainable data center research I was talking about before. We're doing a couple of things in data center research, and it really boils down to three things, I think.
The first is trying to reduce demand for resources within data centers, and that's specifically what that 30% number refers to. In this piece of research, what we're doing is trying to more efficiently utilize IT infrastructure. Right now, in most data centers, the IT infrastructure is utilized at about 20 to 30% max, which means most of the machines, most of the time, aren't doing much work. So we're working on technology that allows us to consolidate workload onto fewer machines and increase utilization closer to 90 to 95% without impairing performance. Then, as we consolidate that workload, we can turn off the machines that aren't being used, and we can integrate the operation of the IT systems with the power and cooling systems. All of this gets us to that 30% number.

So, Dave runs Wikibon.org, which is an open source research firm. We do a lot of research around the data center, and some of it's really not polished because we publish everything for free, but one of the trends we're watching is DCIM. Can you talk about data center infrastructure management software? Because that's really where the heart of the early-adopter conversations are around this energy management, power and cooling. What's going on there?

Yeah, to me, you can't actively manage the data center in a very efficient way without knowing what's going on holistically in the data center space: what's happening in the power and cooling infrastructure, and what's happening in the IT infrastructure. So to me DCIM, which again is one of those things that has a lot of different definitions to different people, like big data, is really all about integrating the IT with the facility space.
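The consolidation Bash describes, packing workloads onto fewer machines so the idle ones can be powered off, can be sketched as a simple first-fit-decreasing bin-packing pass. This is an illustrative sketch, not HP's actual placement algorithm; the 90% utilization cap and the workload sizes are made-up numbers.

```python
# A minimal, hypothetical sketch of workload consolidation using first-fit
# decreasing bin packing. The 90% cap and workload sizes are illustrative.

def consolidate(workloads, capacity=0.90):
    """Pack CPU demands (fractions of one server) onto as few servers as
    possible, never exceeding `capacity` utilization on any one machine."""
    servers = []  # each entry is one server's summed utilization
    for demand in sorted(workloads, reverse=True):
        for i, used in enumerate(servers):
            if used + demand <= capacity:
                servers[i] += demand  # fits on an already-powered server
                break
        else:
            servers.append(demand)  # no room anywhere: power on a new server
    return servers

# Nine workloads idling at ~30% on nine separate machines...
loads = [0.3] * 9
packed = consolidate(loads)
# ...consolidate onto three servers at ~90%; the other six can be turned off.
print(len(packed))
```

The same greedy pass also shows why utilization, not machine count, is the lever: the consolidated servers run near the cap while everything else can be powered down.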
One way to think about this is that instead of having different software systems that deal with IT and facilities, these things really should be merged, and they should provide a complete, single-pane-of-glass picture for data center operators, be they IT operators or facility operators. Once we have that single pane of glass, that data all in one place, we can then start acting on it. We can do things like control how cooling resources are distributed with respect to IT needs. We can start controlling the way IT servers themselves behave with respect to their environment. That kind of thing isn't done today; these things are still largely disconnected. The IT systems are disconnected from the facility systems, and I think that has to change.

Yeah. John and I were in Barcelona a couple of years ago at one of the HP launches at a big hotel there, and we saw the pods. Was that something you were involved in early on?

Pods are really interesting to us. Pods are actually something we postulated back in about the 2004 timeframe. We thought that ultimately the data center should be the computer; it really should be thought of as a computer and managed as a computer, rather than as a bunch of individual machines and individual systems. The pod is nice because it finally brings that vision to fruition. It still requires DCIM, and I think it really is a great first test case for DCIM-type management software, because we can tightly control the power and cooling resources, and we can control the IT resources.

Yeah, when I interviewed Chandrakant in '09, I basically said, this is an operating system. This is a data center operating system. I think that holistic approach is probably one of the most under-discussed stories in the business.
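A minimal sketch of what that merged, single-pane-of-glass view might look like in practice: IT telemetry and facility telemetry joined per rack, with one decision loop acting on both. The rack names, metrics, and thresholds here are hypothetical, not HP's DCIM software.

```python
# Hypothetical sketch of the "single pane of glass" idea: join per-rack IT
# telemetry with facility telemetry so one decision loop can act on both.
# Rack names, metrics, and thresholds are illustrative only.

it_telemetry = {"rack-1": {"util": 0.92}, "rack-2": {"util": 0.15}}
facility_telemetry = {"rack-1": {"inlet_c": 27.0}, "rack-2": {"inlet_c": 19.5}}

def unified_view(it, facility):
    """Merge both telemetry feeds into one record per rack."""
    return {rack: {**it[rack], **facility.get(rack, {})} for rack in it}

def cooling_actions(view, hot_c=25.0, idle_util=0.20):
    """With IT and facility data in one place, decide a per-rack action."""
    actions = {}
    for rack, m in view.items():
        if m["inlet_c"] > hot_c:
            actions[rack] = "increase cooling"            # hot inlet: facility side acts
        elif m["util"] < idle_util:
            actions[rack] = "consolidate and power down"  # idle IT: IT side acts
        else:
            actions[rack] = "ok"
    return actions

view = unified_view(it_telemetry, facility_telemetry)
for rack, action in sorted(cooling_actions(view).items()):
    print(rack, "->", action)
```

The point is the join itself: neither decision above can be made from the IT feed or the facility feed alone.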
I mean, a lot of these, I'll say stone-age data centers, because data centers have been around for I don't know how many decades, but if you look at the classic data center, it's just standard problems: top-of-rack servers and old power supplies. We had the HP guy from BCS on here, and he was telling us he's walked into customer environments and seen machines he designed still running with the old power supplies. So you have huge power problems, but they're all isolated. This trend of DCIM, this systems approach to the data center, is critical, and not just in one facility but across geographies. So with that in mind, my question is: where are we in the market, in your view, on a scale of one to ten, ten being every customer ubiquitously looking at the data center as a holistic system? Is it a one? Is it a three? One and a half?

I think it's probably a five, and I think we have a long way to go, but customers more and more are starting to see the value of, and the need for, thinking of data centers as a holistic system. We're by no means operating data centers holistically yet. You mentioned pods; I think pods are one good way to get us there, but we need to do this in large-scale facilities too. And a lot of it's not just customers, by the way. Customers are interested in this, but it's also about being able to provide them with the tools they need to think about things holistically and to operate holistically.

The lines are blurring between infrastructure too, Cullen, particularly networking and servers, but now even storage is becoming part of that equation. How does that fit into the whole energy efficiency equation?

I think servers, storage, networking, that's all part of the IT infrastructure. All of it consumes power; all of it requires cooling. Networking gear in the past, and in some cases this is still true today, was designed with a cooling architecture that didn't fit server architectures.
So for example, if you placed networking gear into a rack, a lot of times it would exhaust hot air into the cold aisle of the data center; the airflow patterns were reversed. Well, if you put that networking gear in a rack with servers, that really messes up the entire airflow infrastructure of the data center. This is an example of systems not being designed holistically with a single end goal in mind. That's changing on the networking side, it's changing on the storage side, and I think that's a good development. We still have some ways to go on that front, though.

We talked about the pods earlier, and the vision was to get PUE down to, whatever it was, 1.2, and 1.0 was something just absurd. Today it's obviously substantially higher in most facilities. I think a lot of people four or five years ago weren't aware that most of the energy consumption went to non-IT equipment, to power and cooling. So are you seeing that you're attacking that problem, or is it still too early?

I think we are attacking that problem. At least I can talk about my research: what we're doing is really taking the view that if you have a data center in a certain location, you should use the local resources available in that location, both for powering the data center and for cooling it. So if we're in a reasonable climate, and actually Las Vegas is not that unreasonable either, we could use outside air to cool the data center. That seems ridiculous in the daytime, but at nighttime it gets quite cool. And if you couple that with multiple different ways of cooling the data center, what we call a cooling microgrid, you can really optimize how you cool the data center. We can optimize how we power the data center if we have different means of generating energy.

So I don't have to go to Iceland to use ambient air?

We're saying you don't need to go to Iceland and place the data center in Iceland.
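PUE (power usage effectiveness), the metric behind those pod targets, is simply total facility energy divided by the energy delivered to the IT gear. A quick calculation with illustrative numbers (these are not measurements from HP's facilities) shows why free-air cooling moves the figure so much.

```python
# PUE = total facility energy / IT energy; 1.0 means every watt reaches IT.
# The numbers below are illustrative, not measurements from any real facility.

def pue(total_facility_kw, it_kw):
    """Power usage effectiveness of a data center."""
    if it_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_kw

# A traditional room: 1 MW of IT load plus 1 MW of power/cooling overhead.
print(pue(2000.0, 1000.0))   # 2.0: half the energy never reaches the IT gear
# With free-air cooling and efficient power delivery, the overhead shrinks.
print(pue(1200.0, 1000.0))   # 1.2: the kind of figure the pods targeted
```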
There are plenty of opportunities to get very low PUE values anywhere in the world if systems are designed correctly.

What do you make of Facebook opening up its data center design knowledge and intellectual property? What's your take on that?

I think what they're trying to do is demonstrate best practices and share those best practices, and I applaud them for that. That's largely what we're trying to do here with the work we do at HP Labs, and the work we're doing in our research group around net-zero energy data centers. We're now sharing what we're doing, and that's our best practice.

So everybody has this sort of aura around Google and Facebook, right? They're cool. But at the same time, like I said, you're doing a lot of work. As they open things up, do you look at it and say, wow, that's interesting, I hadn't thought of that? Or, that's exactly what we're doing? How does what you're doing compare to what these web giants are doing?

A lot of the web giants, on the cooling side, for example, are utilizing local resources, like we've been saying. They're using things like outside air to cool their data centers. Some of them are starting to go toward renewable energy now, and that's something we've been advocating for a long time. So we do share best practices. I was at a conference last week where I met with my Facebook counterparts, for example, and we talk all the time. HP collaborates with Facebook, and they do with us.

So talk about water too. Water is an interesting topic, because obviously it's a scarce resource when you think about it. Any data you can share about how people are using water? Are they going away from water? Is that the trend?

Well, I think that's a great question, because people tend to focus so much on energy that they don't often realize there's an overlap between water use and energy use. Water and energy are very interrelated.
Water is used to create energy. If we think about a coal-fired power plant, or hydro, something like that, water is used to deliver those electrons to our data center, and we need to start bringing that into the equation.

And as people go global, there are some countries where water is a problem, like India, for example. If you want to serve India, water's not really a good option, is it?

Water's not a good option. So if you're looking at cooling technologies that heavily use water, like evaporative cooling, in India, that's not a good option. Again, it goes back to the philosophy of looking at what local resources are available, utilizing those, and not overusing them.

Let me ask you about a different tack. I know you're on the mechanical engineering side, and moving water around is a mechanical issue; in data center facilities, power and cooling includes water. But let's change gears here and talk about the user experience, with mobility. You have people who want to access data in the hinterlands of India or on the road. Geography's important, and you really can't have a data center everywhere. You obviously have networking, you have wireless technologies helping, but how does that change the data center philosophy?

I think one of the things we're trying to do with our research is extend the reach of IT services globally, to those who don't have them today. India's a really great example. There are people in India and Africa, many of whom are connected through mobile technology, but they don't have access to IT services; they have access to telephony and those types of services. So how do we get services to them? That's a large part of what we think about. You can't have data centers everywhere, but if you do have data centers in places like India and Africa, you have to place them and operate them in ways that are really self-contained.
We can't put a power grid throughout Africa to power data centers. We need to think about taking them off the grid, and we're looking at ways to do that through some of the work we're doing. We're trying to...

That's serious science too. I mean, that's a hard challenge.

It's a very hard problem, and it's something we've been working on for a long time; we feel strongly about it. Think about a pod, for example. A pod's a good example of this. Let's take a pod and place it in Africa. Well, now how do you power that pod? How do you cool that pod?

Solar, maybe? Maybe solar.

Maybe solar, maybe wind, maybe biogas. Biogas is very interesting, because there are farms in Africa and India where we could collect manure, turn that into biogas, convert that into electricity, and power the data center.

So this again changes the definition of mobility. It's not just mobile users; it's mobile data centers, mobile IT services. And it's challenging. I think this is one of the most interesting areas in the world right now, because people don't usually think about it.

Yeah, and it goes back to what we were talking about before. We have to be able to reduce energy consumption for IT; that's a big problem, and that's what we're working on. The other thing we're working on is this: if we think about solar powering a data center, that generation capability is not constant, it varies throughout the day. So how do we best mesh demand for energy with the limitations of supply?

My last question, because we're getting tight on time. I'm going to circle back to the big data conversation, because Splunk went public, and that's kind of the poster child for the data center world, I guess, in terms of big data, machine data, logs, log files. But talk more about big data outside of that kind of log file analysis. There are probes; you've got instrumentation going on in the network.
I mean, there's a lot of mechanical engineering and software involved in measuring these systems. If you're going to have an operating system, you've got to have subsystems, interconnections, the equivalent of building an operating system. So big data has to play a role in that, in real time.

It does play a role. We can't control resources if we can't sense how they're being used, and that extends from the IT layer all the way out to the physical layer. So what we need to do in terms of resource management is measure utilization, measure how applications are being used and how they're utilizing resources.

What's the coolest example you can share about big data from your world being used today?

If you look at the data we're collecting in data centers now, we have about 50-plus temperature sensors in each of our servers. Aggregate that at maybe 100 per rack, and 1,000 racks in a data center. How do you aggregate all that data and make any sense out of it at all? I think that's a cool application: being able to do analytics on that, and then sharing a very precise recommendation with the customer. Change this CRAC, this air conditioning unit, to a certain temperature, a certain flow rate, based on all of that data coming in. That's a value we can provide to customers and to users.

HP, obviously, great work. Really, we've been following HP Labs; it's just one of those understated resources within HP. You guys are doing great work; congratulations. For the folks out there: HP, great leader, recent announcement, the net-zero energy data center. They've got a new data center, walking the talk, eating their own dog food, drinking their own champagne. Great stuff. And I think DCIM is going to be a topic we're going to hear more and more about, outside of just the data center vertical. So congratulations, and thanks for coming inside theCUBE. We'll be right back after this short break.
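The sensor analytics Bash describes, roughly 100 temperature readings per rack across 1,000 racks reduced to a precise cooling recommendation, can be sketched as a simple aggregation pass. The simulated readings, the threshold, and the recommendation text below are all illustrative, not HP's analytics pipeline.

```python
import random
import statistics

# Hypothetical sketch: roll ~100 temperature readings per rack up to per-rack
# statistics, then recommend a CRAC (air conditioning) adjustment for any rack
# whose mean inlet temperature exceeds a threshold. All numbers are made up.

random.seed(42)

def rack_readings(n_racks=1000, sensors_per_rack=100):
    """Simulate raw sensor data: rack id -> list of inlet temperatures in C.
    Every 100th rack is a simulated hot spot."""
    readings = {}
    for r in range(n_racks):
        mean = 27.0 if r % 100 == 0 else 24.0
        readings[f"rack-{r:04d}"] = [
            random.gauss(mean, 2.0) for _ in range(sensors_per_rack)
        ]
    return readings

def recommendations(readings, hot_mean_c=26.0):
    """Aggregate each rack and flag the ones that need more cooling."""
    recs = {}
    for rack, temps in readings.items():
        mean_c = statistics.mean(temps)
        if mean_c > hot_mean_c:
            recs[rack] = f"lower CRAC setpoint near {rack}: mean inlet {mean_c:.1f} C"
    return recs

data = rack_readings()
recs = recommendations(data)
total = sum(len(t) for t in data.values())
print(f"{total} readings reduced to {len(recs)} actionable recommendations")
```

Averaging 100 noisy sensors per rack is what makes the output precise: individual readings swing by degrees, but the per-rack mean pinpoints only the genuine hot spots.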