Okay, we are back live here in Silicon Valley, California, at Brocade headquarters for theCUBE, SiliconANGLE.tv's special broadcast for Brocade's Technology and Analyst Day. I'm John Furrier, founder of SiliconANGLE.com and SiliconANGLE.tv, joined by my co-host Stu Miniman, analyst at Wikibon.org, and we are here with Josh Snowhorn, vice president and general manager at CyrusOne. Welcome to theCUBE.

Thank you.

You guys are doing an IPO, a little spin-off. We can't really talk about it because you're in a quiet period, but we can talk about tech.

Yes, absolutely.

So first, tell us why you're here at Brocade, and then we can go into some of the really cool stuff you guys are working on in terms of computation, big data, bigger than big, and so on.

Sure. I was invited to come to Brocade to talk about what we're doing on the technology side. We've chosen you guys as the vendor, sorry, I don't work for Brocade, we've chosen Brocade as the vendor for our Seismic Internet Exchange platform, and we're pretty excited about what they're doing for us. We have some big data applications too; SiliconANGLE might even be a customer for us.

So let's talk about what you guys are doing. What technically do you have going on that you need to push the envelope on network performance?

Well, we're the largest provider of data center services to the oil and gas sector. So if you think about Chevron or Schlumberger or BP, PGS, Halliburton, those kinds of guys are doing seismic processing and oil and gas research around the world. They take all that data on tape and bring it back to our data center, or their own data center, to process it. It's really kind of old school the way they do things. We're trying to create a paradigm shift in the way they actually exchange data, where they can do it in the metro.
We're offering it at no cost so they can actually scale up and do hot-hot seismic processing at 10-gig or 100-gig levels.

Oil and gas is one of those verticals where big data is big, because there's a lot of computation, high-performance computing, simulation, a lot of that kind of stuff with active data. Having active data requires you to have the data in the network, right? Not on tape.

Low latency. The biggest reason for tape was security. That data represents trillions of dollars in market cap for these guys.

So where is the oil and gas industry if you had to put it in a bucket: early adopter, fast follower, or laggard in terms of tech? Most people don't think of oil and gas as being on the cutting edge of tech, but in reality there seems to be a lot of demand there on the big data side, from what we've been hearing. Share your opinion on what's happening in that sector.

I think they're early adopters, actually. They certainly are not limited by capital in their ability to go and deploy the latest next-generation hardware. They use water-cooled, crazy supercomputing environments. They are buying the most expensive, fastest gear they can. But that networking piece was something that was always missing, and they never really wanted to manage it. That's why we're doing it: take away that headache.

So what's their biggest concern right now? Just pure traffic?

Pure traffic. You're talking about exabyte scales of data right now, and growing. It is so massive. When these guys come in and do seismic processing, they have a 2N power environment with a lot of N+1 on the processing side. Really, it's about megawatt-scale classes of expansion, deploying this hardware and interconnecting it all.

So Josh, Brocade's announcement today talked a lot about scalability and bandwidth. Have you touched any of the new products they have here, and what's your experience been with Brocade on the Ethernet side?

I have.
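To put those 10-gig and 100-gig levels in context, a rough back-of-the-envelope sketch (the one-petabyte data-set size here is an illustrative assumption, not a figure from the interview) shows why the jump matters for seismic workloads:

```python
# Rough transfer-time sketch: how long to move a seismic data set
# over a dedicated 10 Gb/s vs 100 Gb/s link, ignoring protocol
# overhead and assuming the link is fully utilized.

def transfer_hours(data_bytes: float, link_bps: float) -> float:
    """Hours to move data_bytes over a link running at link_bps bits/sec."""
    return (data_bytes * 8) / link_bps / 3600

PETABYTE = 10**15  # illustrative data-set size; real seismic sets vary widely

for gbps in (10, 100):
    hours = transfer_hours(PETABYTE, gbps * 10**9)
    print(f"1 PB over {gbps} Gb/s: {hours:,.0f} hours")
```

At those assumptions, a petabyte takes roughly nine days at 10 Gb/s but under a day at 100 Gb/s, which is the difference between shipping tapes by armored vehicle and doing the exchange over the metro.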
We've actually chosen the MLXe platform, the MLXe-32 and MLXe-16. We're looking at our 10-gig density on the MLXe-32 at 768 10-gig ports per chassis. That is just absolutely massive. And because we're giving away our ports, it's really important for us to have that low cost-per-port metric and great performance. The fact that they're so big in the internet exchange market, with great success around the world, made us feel comfortable with it.

Okay, so a two-part question. One is, what does your growth pattern look like? And one of the things we've found with customers using Brocade, especially looking at that cost per port, is that Brocade's financing, where you only pay for what you use and can grow with that, has been kind of an intriguing model. So what's your growth, and are you using some of those Brocade financing options to make it more cost-effective for you?

I'll take the second question first. We did not finance it. We were fortunate enough to be capital-rich enough to deploy what we wanted to deploy. On the growth, it's sort of a wait-and-see kind of thing. We're not actually doing internet traffic per se; none of the top 50 web properties are our customers. That doesn't mean they won't be, now or later, but right now, on that seismic processing side, we have over 500 customers. All of them will get ports right away, all for free, so I expect it to scale rather quickly. They'll get very embedded with it.

Okay, can you walk us through it? Most oil and gas environments are kind of single-site, maybe doing metro. What's the breadth and scope of your environment? What are you doing differently than some of your peers?

On the networking side?

Yeah, yeah.

Well, there is no peer on the networking side for what we're doing. It literally doesn't exist. Seismic processing is really a single-site thing.
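The 768-port figure is straightforward chassis arithmetic, and the cost-per-port metric falls out of it; a quick sketch, assuming 24-port 10 GbE line cards in a 32-slot chassis (the per-card port count is our assumption) and a purely hypothetical chassis cost:

```python
# Chassis port-density and cost-per-port sketch.
# Assumptions (ours, not from the interview): 32 line-card slots,
# 24 x 10GbE ports per card, and an entirely hypothetical loaded cost.

SLOTS = 32                    # line-card slots in the larger chassis
PORTS_PER_CARD = 24           # assumed 10 GbE ports per line card
HYPOTHETICAL_COST = 500_000   # illustrative fully loaded chassis cost (USD)

total_ports = SLOTS * PORTS_PER_CARD      # 32 * 24 = 768 ports
cost_per_port = HYPOTHETICAL_COST / total_ports

print(f"{total_ports} x 10GbE ports per chassis")
print(f"~${cost_per_port:,.0f} per port at the assumed cost")
```

When the ports are given away for free, as described here, driving that denominator up is what makes the economics work.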
Take your tapes, move them by armored vehicle, do what you need to do, and hold on to that data. They keep it incredibly secure because of the value of it. What we're doing differently is offering the exchange for free and lighting up building-to-building within the metro. In addition, we're putting a Brocade or an Infinera box into their data center, so if their data center is full, they can do all their scaling back into ours.

What was that? You said Brocade or... was it Infinera?

Infinera.

Oh, Infinera.

That's the optical platform we're using to light the metro: click-and-build, terabit-scale optical networking.

Okay. And so you've moved off tape. What does your storage environment look like, then?

We don't have a storage environment. It's their storage environment. We offer space, power, and cross-connects, nothing else.

Okay, so that's all their hardware. Great. So my big thing about big data right now is that obviously it's early, and oil and gas and other verticals, financial obviously, and some web and government, are adopting it and doing the latest and greatest. Explain to the folks out there who might not understand what big data means, who might see it as more of a generic marketing word: what do you think people should know about big data?

What big data really requires is thousands and thousands of servers, a completely new networked environment to interconnect all of that, hundreds and hundreds of megawatts of power, and big data centers to actually house it all. The data could be anything; in our case, it's oil and gas, so it's seismic data. These guys are dragging sonar arrays or putting boomer machines in the water, trying to find shale gas or oil underneath the pre-salt layer off Rio de Janeiro, things like that. And all that...

They're also going to run simulations as well, right?

It's simulations.
It's supercomputing environments, but way beyond what the universities and places like that are doing, and obviously with no funding issues. So it changes everything.

I want to talk about the insights that big data can provide, because one of the things that's really interesting about big data is schema definitions. Data warehousing and business intelligence systems would require really hardcore databases, rows and columns well defined, but with big data you don't necessarily have that. What's your experience with dealing with those databases out there, the NoSQL stuff? Do you guys have any experience with that?

We aren't on the database side. Our customers all do that; they simply tell us how much power they need and how much space they need. We make sure we cool it and make sure the connectivity's there. That's it.

Okay, so maybe a different question around big data, because this is one that always comes up: I've got data, I'm putting it in all different places on commodity, industry-standard hardware, and I've got to move it across the network. So you mentioned network issues. What's changed from, say, five years ago to now in terms of networking, where there are some significant differences with some of these technologies?

I think what you're seeing on the networking side in the metro is 100-gig absorption, and certainly that's really a big driver. Even 10 gig, even LAG'd 10-gig links bonded together, really didn't cut it. 100 gig is really the next step up. Nationally, you have just a couple of backbones really going to 100 gig now, and it's going to become a cost factor. As the cost starts getting driven down, just like internet pricing has been driven down, you're going to start to see a lot of substantial growth that way.

We've been hearing some rumblings that there's no more bandwidth left in, like, Atlanta.
That these major metros are kind of saturated with constraints.

Sure.

Are you seeing any of that?

No, I don't think that's really true. I think people can light lambdas, and there's lots of fiber in the ground. Or we'll get to the point where people might start digging again when they can't squeeze any more down the fibers. That'll be interesting, because they'll have to raise billions of dollars to redeploy.

So Josh, before you joined your current role, you were at Terremark.

Yes.

My understanding is you were actually instrumental in the building of the NAP of the Americas.

Yes, I was.

Actually, I know the folks watching can't see it, but Wikibon last year did an infographic on five of the largest data centers, and the NAP of the Americas was the third-largest data center in the country at that point. Networking is critical, but power and density are among the biggest challenges we see in big data centers. So can you talk to us a little bit about what you're seeing in that space, and what you learned at the NAP of the Americas that you're taking into your new role?

Sure. Well, the NAP of the Americas was interesting. That was a hub environment: a 750,000-square-foot data center with all the undersea cables coming in. It was an international handoff that handled about 95% of all the traffic between North and South America. It's currently owned by Verizon; they acquired Terremark. What's limiting down there is really the scale of the building in a downtown environment and being able to get power to it. Where you see data centers going now is where there are low energy costs and where you can actually get network out to them; the network became the non-issue. People are willing to purchase network out to areas where the power costs are lower and we can get lower PUEs. We're building a data center in Phoenix where you have the dry desert environment; using evaporative technology, we can get a PUE of about 1.2. In Miami, you're certainly not doing that.
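PUE (power usage effectiveness) is simply total facility power divided by IT equipment power; a quick sketch of what a 1.2 means in practice (the megawatt figures below are illustrative assumptions, not CyrusOne numbers):

```python
# PUE = total facility power / IT equipment power.
# A PUE of 1.2 means only 0.2 W of overhead (cooling, power
# distribution losses) for every 1.0 W delivered to the servers.

def pue(total_facility_mw: float, it_load_mw: float) -> float:
    """Power usage effectiveness for a facility."""
    return total_facility_mw / it_load_mw

def overhead_mw(it_load_mw: float, pue_value: float) -> float:
    """Non-IT power (cooling etc.) implied by a given PUE."""
    return it_load_mw * (pue_value - 1.0)

# Illustrative 10 MW IT load drawing 12 MW at the utility meter:
print(pue(12.0, 10.0))         # a PUE of 1.2
print(overhead_mw(10.0, 1.2))  # ~2 MW of overhead at that PUE
```

By comparison, a facility running a PUE of 2.0 would burn a full extra megawatt of overhead for every megawatt of IT load, which is why siting in a dry climate with evaporative cooling moves the economics so much.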
So it's really changing where people are actually deploying their big environments.

At least in Phoenix, you don't have the lightning strikes as much as you had in Miami.

Now we have the crazy dust storms there.

Wow. Yeah. Okay, Josh Snowhorn of soon-to-be-public CyrusOne. What's the estimated IPO timing?

Well, we're waiting for tax rulings and things like that from the IRS, and certainly with the SEC, but sometime soon.

It's a good example. Explain the CyrusOne story for the folks out there real quick; we have a couple of minutes left.

Sure. CyrusOne was acquired by Cincinnati Bell a couple of years ago to become really their data center division. They merged all the data center assets from Cincinnati Bell into CyrusOne and put a lot of capital in to expand. They want to realize the benefits of all that investment and grow it out now. That's what they're working on.

So it's a spin-out.

It's a spin-out, absolutely.

Okay, great. Final question. What do you see as the future of the data center? Because you've worked in a lot of different data centers; you understand data centers. The big talk is the software-defined data center. That's the destination on the roadmap of network virtualization and software-defined networks, but ultimately the bigger play for a lot of these vendors is to make it really a software-defined data center.

Absolutely.

And there are a lot of issues involved: footprint, energy, SDNs, automation, all of the above. So just give us your vision of the future data center. What needs to happen as CIOs re-architect?

The first thing you have to think about on the data center side is to quit trying to build little buildings, getting full, and trying to squeeze more gear in there. The fact is everything's virtualized and everybody's squeezing every little bit of CPU out of every single server. So now you come to the point of looking at modularity within your footprint of where you're going.
Build very large data centers with massive amounts of energy and massive amounts of cooling. But what you need to understand is that every single data center is not just power and cooling; it's an ecosystem. And if you don't get your parties interconnecting within those environments, you're going to have failure.

Parties being... which parties?

Parties could be energy guys interconnecting, could be content players connecting to eyeballs, could be cloud guys selling services to those people within your data center, the cloud being local.

So a little bit of orchestration amongst the different constituents, if you will.

Absolutely. And that's where the fabric technology comes into play. You just interconnect them and make it as automated as possible.

So that's kind of where the policy stuff comes in.

Yeah, that's where the software comes in. You'll be hearing a lot about software-led infrastructure from SiliconANGLE and Wikibon over the next year, and stay tuned for some really compelling research soon to be announced by Wikibon on this. Josh, thanks for coming on theCUBE. We'll be right back with our next guest after this short break.

Thank you.