Live from the Julia Morgan Ballroom in San Francisco, extracting the signal from the noise, it's theCUBE covering Structure 2015. Hey, welcome back everybody. Jeff Frick here with theCUBE. We are live in downtown San Francisco at the Julia Morgan Ballroom, and we are here for the rebirth of Structure, Structure 15. You know, the summer program didn't happen, but now the program is back on, so we're excited to be here. It's the first time we've had theCUBE at Structure, so we're out separating the signal from the noise, and we're excited to have Bill Doherty, the CTO of RagingWire. Welcome, Bill. Thank you, it's a pleasure to be here. Absolutely. So you guys have been in this cloud business since before it was called cloud. You're doing co-location. Yeah, it's always an interesting show for us at Structure, because this is really a conference where people think of the data center as an API. But underneath all of those wonderful clouds is an actual data center company that's running the four walls, keeping the lights on, dealing with the security 24 hours a day, and that's really where we fit in. Yeah, I think I saw a bumper sticker one time that said the cloud is somebody else's computer. That's right. Well, you know, and we always say that the cloud has to live somewhere. That's right. And where it lives is companies like ours. So you do co-location, and co-location by definition means it's other people's machines that are in your data center. You provide the infrastructure that wraps those machines. Absolutely, we build large-scale data centers and then rent out pieces of them to other companies. So it's a multi-tenant environment. We're providing stable power, cooling, 24-7 security, some compliance, and then network connectivity to the rest of the world. So you're in a great space, a growing space: cloud, whether it's my own cloud, AWS's cloud, or some other service provider's, right? A lot of applications are moving to the cloud. So what are some of the hot topics?
Obviously we know that in data centers, power, power, power gets talked about all the time, with all kinds of innovations around being close to hydroelectric dams or being in very cold climes to try to leverage the temperature. So what are some of the things changing as the data center space evolves? Yeah, so a couple of things. Obviously, efficiency of the building is key, and for a couple of reasons. One, we want to be good corporate citizens, good stewards of the planet, and so we're very focused on the efficiency of our buildings. Also, for us, other than people, electricity is the number one cost. So the more efficiently we can drive that building, the better it is for our margins and the better it is for our customers. In addition to that, right now we're really focused on water and water usage. Data centers suck up a tremendous amount of water, and here in California in a drought, that's a really bad place to be. So we're now experimenting with waterless cooling designs that we think are a little more efficient for us, but also really good in terms of the overall consumption of that building. So waterless cooling in terms of the air conditioning, but we've heard talk about water actually being integrated into the cooling itself, more like radiator-type systems versus air conditioning systems. And we see a little bit of that, but really the demand for that kind of platform hasn't hit us much. We still see a lot of customers that are not pushing the densities that you might expect. And for us, the way we design and build our buildings, we can go to very high density without water cooling. We can do up to 23 kilowatts in a cabinet, where most companies are still doing about five to seven kilowatts a cabinet, so we can grow and expand with our customers quite a bit without specialized infrastructure or plumbing.
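The density numbers quoted above can be put in perspective with a quick back-of-the-envelope calculation. This is a minimal sketch, not anything from the interview: the 1 MW power budget is an assumed round number for illustration, and only the per-cabinet densities (5-7 kW typical, 23 kW high-density) come from the conversation.

```python
# Back-of-the-envelope sketch: how many cabinets a fixed critical-power
# budget supports at typical vs. high-density loads. The 1 MW budget is
# hypothetical; the per-cabinet densities are the figures quoted above.

def cabinets_supported(power_budget_kw: float, kw_per_cabinet: float) -> int:
    """Number of whole cabinets a given power budget can feed at a given density."""
    return int(power_budget_kw // kw_per_cabinet)

budget_kw = 1000.0  # assumed 1 MW of critical IT load, for illustration only

typical = cabinets_supported(budget_kw, 6.0)        # mid-range of 5-7 kW/cabinet
high_density = cabinets_supported(budget_kw, 23.0)  # the 23 kW figure quoted above

print(f"At ~6 kW/cabinet: {typical} cabinets")        # 166
print(f"At 23 kW/cabinet: {high_density} cabinets")   # 43
```

The point of the arithmetic: the same power envelope serves roughly a quarter as many cabinets at high density, which is why a building designed for it can absorb a customer's growth without new plumbing.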
Okay, so what's the experience you see from customers as they get going? They've got an Amazon presence, they've got their old classic data center presence, everybody's talking about hybrid cloud and moving workloads and transferring workloads and direct-connecting to AWS. I mean, it's a really complicated and dynamic environment right now. So what's the real world behind the door of the data center? The real world is that it's a hybrid world. And it's not just hybrid cloud, it's just hybrid, period. You've got enterprises with decades' worth of legacy systems and applications and workloads that don't really go well into a public cloud. They're persistent workloads, and the enterprises also don't have the staff on hand to actually go rewrite them. So a very common pattern among our customers: you've got your own enterprise data center, you've got a colo with us, you've got one or multiple clouds, and we do connections into those clouds for you. And so it's a hybrid world, and from talking to our customers, it seems it's going to stay that way for quite a while. And what percentage of your customers (don't tell me anything you can't tell me) are kind of corporate customers doing their own colo, versus service providers that are kind of a middleman between you and, probably, the end-user company? You know, we've got a wide mix. So we have a lot of great cloud customers that have built with us because they really want the highest-quality data center to house their cloud. In addition to that, we've got some great enterprises: airlines, manufacturing, the fifth-largest bank in the world, a lot of e-commerce sites, as well as social media. And so we kind of run the whole gamut. Okay, and kind of where I was going with the question is really this concept, right, that one of the huge things about cloud is capacity on demand, capacity on tap. I'm curious, from your seat at the colo: say I'm a service provider and I want to have that availability for my clients.
How do I actually orchestrate that in the data center? Do I have a bunch of dark boxes waiting to be lit? Do I have them all lit, just waiting for someone to buy the capacity? How does it actually work? It varies by provider, but typically what we see is that in those large-scale environments, the machines are pre-deployed. They have to be, so you can dynamically spin them up or down. But in our model with them, a big chunk of their bill is variable, based on their actual energy consumption. So when they're spinning machines down and not collecting revenue, we're also not collecting revenue, because they're not consuming the power. So it's really power, power, power. Power, power, power, power. Excellent. So another big thing we talked about last time we had RagingWire on was peering. And really, you know, latency is becoming a more and more important issue, especially as everything goes to mobile and now you've got an API economy. So, you know, an app isn't just an app anymore; it's an app that collects a bunch of data from other apps, and it has to deliver it very quickly on my phone or else I'm off to the next app. Talk about the evolution of peering and direct connect and, you know, again, direct-connecting into AWS, and how the internet that's actually providing these services to us is really starting to morph in the background from what it was originally constructed to do. Yeah, connectivity is critically important to a data center. If you don't have good connectivity, you have a big refrigerator box. And so in all of our facilities, we're bringing in a very diverse group of carriers. We're also connecting out to the competition; in some markets we're doing cross connects into the data center down the street, because what we're really trying to do is enable customers.
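The consumption-based billing model described above (a fixed colo component plus a variable component metered on actual energy drawn) can be sketched in a few lines. All the names, fees, and rates here are hypothetical illustrations, not RagingWire's actual pricing.

```python
# Hedged sketch of a colo bill with a fixed space fee plus a variable
# charge metered on actual kWh consumed. All rates are hypothetical.

def monthly_bill(fixed_space_fee: float, kwh_consumed: float,
                 rate_per_kwh: float) -> float:
    """Fixed colo fee plus a metered energy charge."""
    return fixed_space_fee + kwh_consumed * rate_per_kwh

# A tenant that spins machines down draws fewer kWh, so the tenant's bill
# and the provider's power revenue fall together, as described above.
active = monthly_bill(10_000.0, kwh_consumed=150_000.0, rate_per_kwh=0.25)
idle = monthly_bill(10_000.0, kwh_consumed=40_000.0, rate_per_kwh=0.25)

print(active)  # 47500.0
print(idle)    # 20000.0
```

This is why "power, power, power" is the refrain: in this model the variable line item tracks demand on both sides of the contract.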
So if somebody wants the really, really high-grade connectivity of an Equinix, but they want their systems to live in a better data center, they're going to move into my facility in Virginia, have a great customer experience, and then we'll cross connect them into Equinix for the peering connectivity out to the rest of the world. Interesting. And then you said you guys are growing like crazy. So for people that aren't familiar with RagingWire, give us a little bit of who you are, where your data centers are, and where you guys are going. Yeah, we're growing like a weed. So right now our principal campuses are in Sacramento, California and Ashburn, Virginia; we have about a million square feet of data center space. We just broke ground in Dallas, Texas on 42 acres; we're going to build a million square feet in Dallas, Texas, and the first building will be online next September. We are under construction on two million square feet in Ashburn, Virginia. We just put an offer on land in Silicon Valley, and we're currently shopping in Chicago and New York and a couple of other markets. Now, if you look at the data center market in the United States, 70% of the data center absorption in this country is in six geographies, and our plan really is to own all six of those geographies. 70% is in six geographies? That's correct. Is that just because it maps to the metro areas, or is there something else? It maps to the connectivity of the internet? It's how the internet has grown up. And so because of latency, because of peering, because of these things, data centers feed off each other. Everybody wants to be next to everyone else. Which is how you get something like Ashburn, Virginia: in Loudoun County, Virginia, within three square miles, you have one of the densest data center markets, because AOL and UUNET and MCI all came out of there. So it's Ashburn, Virginia; New York and New Jersey, obviously, for the financial markets; and Dallas, Texas is huge.
It's growing at an insane rate because it's such a great business environment for companies to be in. And then obviously, with all the companies here in Northern California, Northern California remains a fantastic market. Right, so you're the CTO. What are some of the technologies that you guys are exploring, kind of new things that maybe the layman doesn't think about, that are really going to start to, continue to evolve data center economics? Yeah, we're spending a lot of time and effort right now on data center infrastructure management, DCIM, which you can really think of as the internet of things in a data center. So we instrument everything. Every generator, every UPS, every pump is instrumented. Also, we're taking power readings at every rack, because that's how we bill our customers, as well as temperature and humidity. And we collect all this data for operational purposes, but now we're working on ways to share that data with our customers. Because your locus of competition is really operational efficiency, right? You're not making your own boxes. You're not making your own power. You guys really just have to be more efficient than everybody else. We have to be more efficient and we have to be more reliable. One of the things that makes RagingWire unique is we have a patented infrastructure. It allows us to build to a higher level of availability than the typical data center. It lets us guarantee 100% uptime of our electrical infrastructure, and in nine years we've never violated that SLA. And I don't know a lot of data centers that have a nine-year track record of no outages. That's pretty good. Nine years, 100%? Nine years. Not bad. I don't know that I would. I'd sign up to buy it. I've got to knock on some wood here. Well, don't knock on these chairs; they'll all fall out from underneath you. Well, Bill Doherty, thanks for taking a few minutes of your time here on theCUBE. Thank you very much. Pleasure. All right, so I'm Jeff Frick.
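The DCIM approach described above (per-rack power, temperature, and humidity readings rolled up for operations and billing) can be illustrated with a small sketch. The rack IDs, readings, and data model here are entirely hypothetical; the idea of aggregating per-rack telemetry is what the passage describes.

```python
# Illustrative sketch of DCIM-style instrumentation: per-rack sensor
# readings rolled up into the aggregates operators watch. All rack names
# and readings are hypothetical.
from dataclasses import dataclass

@dataclass
class RackReading:
    rack_id: str
    power_kw: float
    temp_c: float
    humidity_pct: float

def rollup(readings: list) -> dict:
    """Aggregate per-rack telemetry into facility-level numbers."""
    total_kw = sum(r.power_kw for r in readings)
    hottest = max(readings, key=lambda r: r.temp_c)
    return {
        "total_power_kw": total_kw,
        "hottest_rack": hottest.rack_id,
        "hottest_temp_c": hottest.temp_c,
    }

readings = [
    RackReading("A-01", 5.2, 24.1, 45.0),
    RackReading("A-02", 6.8, 26.3, 44.2),
    RackReading("B-01", 21.5, 27.9, 43.8),  # a high-density cabinet
]
summary = rollup(readings)
print(summary["total_power_kw"])  # 33.5
print(summary["hottest_rack"])    # B-01
```

The same per-rack power figures feeding this rollup would also drive the metered billing discussed earlier, which is why the interview pairs instrumentation with billing.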
You're watching theCUBE. We're live at Structure 2015, and we'll be right back after this short break.