This is Dave Vellante, and this is SiliconANGLE.tv's continuous coverage of VMworld 2012. This is the official broadcast center of VMworld 2012, and we're here with theCUBE, where we bring you the smartest people that we can find in the industry. We extract the signal from the noise, and we have a spotlight now on something called software-defined storage. We've been hearing a lot about the software-defined data center. The software-defined network is a big term being bandied about in the industry today, but we're going to talk about software-defined storage. What is that? What does it mean to VMware customers, and how, specifically, are cloud service providers taking advantage of it for their customers? I'm here with Kevin Brown, who's the CEO of a company called Coraid, and one of Coraid's customers, Tim Dufour, who's the CEO of RackForce. RackForce, according to IDC, is the number one cloud service provider in Canada. Gentlemen, welcome to theCUBE. Thank you. Good to see you guys. So here we are, VMworld 2012. We were just talking off camera. We're back in San Francisco this year, so we get to sleep a little bit more, but we're here, there's a lot of energy, and you're starting to see this notion of the software-defined data center come together. So I'm going to start with the customer perspective. Tim, talk about your data center as a cloud service provider. What does that all mean to you? Is that a buzzword, or can you actually put it into practice? It's actually very important to us. The software-defined data center, to us, means driving the complexity out of very complex systems. We have a number of technicians that manage very complex storage and very complex servers, but at the end of the day, if you have a software-defined management system, you can take all those complex pieces and rapidly provision and provide services out to the client base. So Kevin, Coraid talks about redefining storage.
So how are you redefining storage? There are a couple of important ways. If you look at the transition from old enterprise systems to the cloud, you move from small data running on big boxes to big data running across lots and lots of small boxes. That's a very different computer science problem. So what we've done is take that same scale-out architecture that the big Google and Amazon networks have used and wrap it in an enterprise-class set of storage features. What we just launched is our software-defined storage offering, which takes that scale-out architecture and automates it with one-click provisioning and a whole scriptable, programmable set of capabilities that can tie from the infrastructure all the way up the stack to the application. For service providers, this lets them roll out services that very quickly deliver capabilities that look like the consumer web but have the performance and resilience of enterprise-class infrastructure. So when you think about service providers and the way they're adopting these technologies, they're sort of ahead of the game. Amazon essentially invented the concept of the cloud, and service providers are a bit ahead of the traditional IT enterprise, even though we're seeing IT start to get more serious about some of this simplified provisioning. But service providers have a much more challenging problem, because they've got hundreds if not thousands of applications. So, Tim, talk a little bit about what Pat Gelsinger this morning called the museum of legacy infrastructure. What does your museum look like, and how have you been able to shed some of those legacy processes? What did it look like before, and what does it look like now? Take us through that. Sure, it all starts with the data center itself, the data center infrastructure.
When we first started in 2001, we had a small server room, I guess you could say, and it became obsolete in about four or five years because it couldn't support the high density of the modern servers that were coming out. So we morphed into a number of different data centers over the years. Our latest data center was built in 2009, and we opened the doors in July. That data center supports up to 1,000 watts per square foot, so we can put in a lot of blade centers and a lot of high-density storage platforms. That's the underlying infrastructure, but moving up the stack to the servers: we're finding the servers, of course, getting smaller and more powerful, the Moore's Law type of idea. We also see that the workloads coming to us are extremely varied. We have customers that just want month-to-month test/dev environments, and customers that sign five-year contracts with very high reliability and security requirements. So we have a very varied workload, and that, again, is why the software-defined aspects help us to template and profile customers. So you build an infrastructure-as-a-service product based on Coraid storage technology? Exactly. Our cloud environments are very much enterprise-based. We didn't go the large-scale public-cloud-infrastructure style; we're much more enterprise-focused. So when a customer comes to us, we listen first. We try to understand what the requirements are, and then we fit our solution around the customer, rather than the customer fitting into a predefined solution. Somebody once said to me that companies don't buy from startups because they want to; they buy from startups because they have to. Why did you have to go to this type of solution from a company like Coraid versus going to one of the big whales?
Actually, I really appreciated the underlying architecture that they built. It's Ethernet-based, and as soon as I heard Ethernet-based scale-out infrastructure, that really struck a chord for me. Again, what we're trying to do is simplify complexity, and what we see with Coraid is a very simple architecture that is at the same time very scalable and very robust, and it drives a lot of the complexity out of the storage solution. So Kevin, let's talk about the software-defined data center as we've been hearing from VMware, the sort of top-down approach: abstract, pool, automate. How do you fit into that with your notion of software-defined storage? We're really coming at the same problem set from the bottom up, and that's where we can meet and collaborate. If you look at abstract, pool, and automate, that's been done very, very well at the workload level, with the applications, with virtual machines. It's starting to make its way down into the network, but in storage, it just doesn't exist. You can't do it today with the older architectures, think Fibre Channel, big boxes. It's very difficult to make that into a programmable, flexible, elastic architecture, and that's been one of the big problems for the enterprise. So we're coming at this with an architecture that looks a lot more like those building blocks, and then from that bare metal, taking that pool of resources, abstracting them, and then, in a RESTful-API, automated way, this is the way the cloud guys do it, making that available as a service all the way up the stack. So you take someone like RackForce: they're the number one cloud service provider in Canada, and they just won the VMware service partner of the year. We're very proud to be working with them.
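The "abstract, pool, automate" idea Kevin describes, pooling bare-metal building blocks and exposing provisioning through a scriptable, programmable interface, can be sketched roughly as follows. This is a minimal illustrative model, not Coraid's actual API; all class and method names here are hypothetical:

```python
# Hypothetical sketch: pool commodity storage shelves and carve a volume
# from the pool in one programmatic call, the way a software-defined
# control plane might. Names and shapes are illustrative only.
from dataclasses import dataclass, field

@dataclass
class StorageNode:
    name: str
    media: str        # e.g. "ssd" or "sata"
    free_gb: int

@dataclass
class StoragePool:
    nodes: list = field(default_factory=list)

    def add(self, node: StorageNode) -> None:
        self.nodes.append(node)

    def provision(self, size_gb: int, media: str) -> dict:
        """One-call provisioning: find matching capacity and carve a volume."""
        for node in self.nodes:
            if node.media == media and node.free_gb >= size_gb:
                node.free_gb -= size_gb
                return {"node": node.name, "media": media, "size_gb": size_gb}
        raise RuntimeError("no capacity for request")

pool = StoragePool()
pool.add(StorageNode("shelf-1", "ssd", 2000))
pool.add(StorageNode("shelf-2", "sata", 8000))
vol = pool.provision(500, "ssd")
print(vol)  # {'node': 'shelf-1', 'media': 'ssd', 'size_gb': 500}
```

The point of the sketch is the shape of the workflow: the caller asks the pool for capacity and never touches an individual box, which is what makes the provisioning scriptable all the way up the stack.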
The problem set that they have is: how do we take enterprise-class services, deliver them to our big enterprise customers, but do it with agility and simplicity, and be able to innovate in the way we deliver that as a service? Enabling that is a huge upside for us if we can help our customers do it, and we think it really moves the infrastructure forward. So there was a big move, certainly over the last two or three years: VMware announced the vStorage APIs for Array Integration, and all the traditional storage companies worked really hard to integrate. Why didn't that solve this problem? Well, it's a great question. We've done some of that same integration ourselves, where you can, for example, pause a workload before you take a snapshot. There are a number of those kinds of integrations that are useful, there's utility to them, but what they don't do is solve the fundamental provisioning and automation problem. With most of those cloud infrastructures, you have to assume someone has already provisioned storage for you. In a world of fast-growing data, in those types of environments, that's very difficult to assume, much less having the flexibility to fit different workloads, where I need flash drives over here, I need RAID 10 over there, I need very different configurations to achieve the enterprise application requirements. So the flexibility to not only do the top-level integration with the virtual machine, but to drive that all the way down and match the infrastructure to those requirements, that's something that doesn't exist today. So Tim, I wonder if you could talk about what Kevin was mentioning: simplifying, accommodating faster growth. Can you give us some sense of how fast you're growing and what that all means to you? Well, we became a cloud service provider in 2007, and year over year our cloud has been growing in the range of 50 to 70% per year.
So very rapid growth. And I just want to make another comment with regard to the automation piece: we have a number of skill levels in our staff, and to be able to automate and have this one-button provisioning, we see that as reducing a lot of operations overhead. I think we can rely on more junior techs to do a lot of the provisioning for us, and that's why software-defined storage is so important for us. So how do you scale it? Kevin was earlier talking about taking a page out of Google and scaling out. Talk about how you scale this solution. Right. So here's software-defined: does it scale? Yeah. So we have the actual chassis themselves, with very similar hard drives inside those chassis. We'll have a chassis for SSD and a chassis for SATA drives, and we'll build it like a building block, like a brick wall, if you will. We're going into production with RAIN, a redundant array of independent nodes. That gives us the ultimate in reliability and in performance as well. So are there any particular applications which you service better than others? Are you sort of doing an I/O blender, all general-purpose applications? Talk about the use cases here. Probably the most challenging applications for us are the very advanced, large e-commerce types of applications. They can't afford to have any downtime, even during maintenance periods, so we have to build our infrastructure such that even during maintenance there is absolutely no downtime. That's among the most challenging applications that we service, but at the same time we have customers in test/dev environments and things like that, and basically everything in between. How are you handling multi-tenancy? Multi-tenancy we handle in a number of different ways.
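The RAIN (redundant array of independent nodes) idea Tim describes, stacking similar building-block chassis like a brick wall so no single node is critical, comes down to replicating data across independent nodes. A minimal sketch of that placement logic, with an illustrative replica count and round-robin layout rather than any vendor's actual scheme:

```python
# Minimal sketch of RAIN-style placement: each block gets `copies`
# replicas on distinct nodes, so losing one node loses no data.
# Layout policy and replica count here are illustrative assumptions.
def place_replicas(block_id: int, nodes: list, copies: int = 2) -> list:
    """Round-robin placement of `copies` replicas on distinct nodes."""
    start = block_id % len(nodes)
    return [nodes[(start + i) % len(nodes)] for i in range(copies)]

nodes = ["node-a", "node-b", "node-c", "node-d"]
layout = {b: place_replicas(b, nodes) for b in range(4)}

# If any one node fails, every block still has a surviving replica.
failed = "node-b"
assert all(any(n != failed for n in replicas) for replicas in layout.values())
```

Scaling then means adding another node (another brick in the wall) to the `nodes` list, which is what makes the building-block approach attractive for both capacity and reliability.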
There's certainly, at the server level, what we call a hosted private cloud, where we take servers and dedicate the actual hardware, the actual servers, to the customer. We can now do that with the storage as well with Coraid: we can dedicate a disk array inside the storage chassis. So the customer has dedicated servers, they have dedicated disk, and we build a kind of security and performance wrapper around that as well. Basically it becomes a separate infrastructure for the customer, but it is hosted in our data center. How about disaster recovery? Where does that fit in? It's a topic that is really challenging for users. Data protection generally is always top of mind, but disaster recovery specifically. How do you guys handle it? And I want to follow up with Kevin. We certainly have a number of clients that require disaster recovery. What we actually see is that the geographic region we're in is very stable, and in Vancouver, for example, where it's less stable, we see the primary workload actually moving to our data center, with the backup ending up in the Vancouver data center. So we do have a number of disaster recovery clients, and that's where the big data tends to come in as well. That's why we needed to have this very large scale-out storage infrastructure. So you're hearing that from other clients? I mean, is that a key value proposition? Backup is one of the killer apps for cloud, because people can start small there. We see exactly the same trend, where people will start out with something in the cloud for backup and then flip it, or they'll start to really move things to these types of environments if the service provider can deliver the resilience and the performance and do a good job of outsourcing that.
So you take something like multi-tenancy; that's a hard problem. If you look at solving that, whether in a private or a public environment, it's a good example of a whole-stack problem. It's not a storage problem or a server problem or a network problem; you have to make all those pieces work together. So the software-defined data center, and software-defined storage as part of it, is all about how you programmatically link those together and then use the control plane to pull on data-plane elements, whether it's how you segment off some storage, how you create a VLAN, or how you tie that all the way up into a virtual machine workload. This is all the underlying plumbing that is necessary to make that very agile. Kevin, how do you compare a cloud service provider like Tim's, for example, with the traditional IT customer? Where are they on that spectrum of cloudness, if you will? It's interesting, because what we're finding is that in large companies you'll have a lot of traditional slow-moving pockets, and then you'll have someone who's got pain. That pain these days is often caused by rapid data growth. A lot of our customers are seeing data double year on year on year. So one, two, four, eight, 16, 32, right? What worked at one or two stretches at four or eight, and when you get to 16, 32, 64, it just won't work. So what we're finding is that even in conservative Fortune 50 customers, there are pools of people that have a real new problem, and they realize they need to look more like Amazon or Google, and they need to do it quickly. That's an opportunity either to work with someone like Tim to build an architecture suited for that, or for us to come into their own infrastructure. If they've already got petabytes of data, they may not want to move it; we can build that kind of infrastructure in-house.
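Kevin's "one, two, four, eight, 16, 32" point is just compound growth: at a 2x annual rate, capacity demand grows 32-fold in five years, which is why a fixed-size box that fits today fails soon after. A back-of-envelope projection:

```python
# Back-of-envelope for year-over-year doubling: starting from 1 TB,
# project required capacity over six years. The starting size and
# growth rate are illustrative, matching the "1, 2, 4, 8, 16, 32" example.
def projected_tb(start_tb: float, years: int, growth: float = 2.0) -> float:
    """Capacity needed after `years` of compounding at `growth` per year."""
    return start_tb * growth ** years

sizes = [projected_tb(1, y) for y in range(6)]
print(sizes)  # [1.0, 2.0, 4.0, 8.0, 16.0, 32.0]
```

The same arithmetic explains the scale-out argument: capacity has to be added in increments (more nodes) rather than re-bought as ever-larger monoliths.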
So we're seeing it in pockets where people are moving, but it's then growing very quickly. Complexity is the pain; Coraid is the aspirin, I'm hearing. Maybe a little more powerful than that. But I have to ask you, when you really peel back the onion on complexity, it seems like it's been this perpetual barrier to growth. Have we solved that problem, do you feel, as an industry? I think if you look at the stack, there are a lot of folks that have focused on just their piece, and some of the big vendors are buying up companies and trying to put it into a stack, but it's relatively fixed. We see those as a kind of modern mainframe: not a lot of flexibility, and certainly not commodity hardware economics. What we believe is that that's the right idea, to get things working together, but in the end it's going to have to be much more open, open standards with the ability to program in a much more flexible way, because if you're going to differentiate on services, you need to be able to customize them. You need to be able to really tailor that to the business need, and that's very hard to do from within a very fixed stack and a fixed frame. So that's where some of the innovation around software-defined, around API sets and so forth, is going to drive people. Tim, I would imagine from your standpoint you just can't be constantly mucking around with infrastructure; it would be such an inhibitor to your growth. Have you essentially solved that problem at the infrastructure level, or is it still a challenge? I think it's still a challenge, definitely. I think it's all about the process as well: the way we govern our change management and things like that.
I think the ongoing key for us is that there are always going to be new technologies and new types of infrastructure, but we have to be very careful in the way we do our change management and processes. Again, if you get the automation involved to drive out the complexity, that really makes our job a lot easier. Abstract the complexity, pool the resources, automate: that seems to be the direction the industry's headed in, and clearly that's what you guys are espousing, so congratulations on your successes. Really appreciate you guys coming by theCUBE, and good luck, enjoy the rest of the event. Thanks so much. Thanks for watching, everybody. We'll be right back with our next guest; keep it right there. This is SiliconANGLE.TV's continuous coverage, live from VMworld 2012.