And now, SiliconANGLE TV and wikibon.org present a focus spotlight. Live from Las Vegas at VMworld 2011, host John Furrier and Dave Vellante, a new model for cloud service providers with support from SolidFire. Hi everybody, we're back. This is Dave Vellante from Wikibon and this is SiliconANGLE's continuous coverage of VMworld Live 2011. We're in day three and we are in the cloud service provider spotlight. These spotlights are in-depth segments designed to help practitioners understand some of the key trends that are going on. We've been following the cloud service segment for a number of years now at SiliconANGLE and Wikibon. And we have a great pair of guests here from Virtustream. Rodney Rogers is the chairman and CEO. Rodney, welcome. Thanks for coming on theCUBE. And Matt Theurer is Senior VP and Solutions Architect. Thank you guys for coming on. We're going to talk about your business. We're going to talk about storage because I know that's a big challenge. Maybe, if we have time, we can talk a little bit about security. Rodney, I wonder if you could start by telling us a little bit about Virtustream, what you guys do, and what's unique about your company. Sure. So Virtustream is a cloud service provider that's a relatively young company. We started in January of 2009. We released our enterprise cloud about a year ago. And our claim to fame is that we provision compute and infrastructure at a sub-VM level. Most of our competitors, in fact all of our competitors out there, provision compute by the VM or VM hour. What we do is break the VM down to a more granular level. We combine it into something we call an infrastructure unit, which is a combination of compute, RAM, embedded network bandwidth, and embedded storage IOPS. And what that allows us to do is, in so many words, virtualize a VM. We have developed our management layer, our cloud OS, above the hypervisor level. We support multiple hypervisors.
We support ESX for our VMware-based solution and KVM for our open source-based solutions. But we use our resource pooling algorithms and policy management algorithms to aggregate our clients' consumption across all workload requirements. So we take this more granular slice of a VM and then aggregate that usage so we get to true consumption. We provision, meter, and bill our clients by what they actually consume versus what they're allocated to consume. And that, at the end of the day, is our core differentiator. I'll talk a little bit more about how that manifests. Just to understand that, you charge by the drink. We do. Even if there's an allocation there that they can take advantage of, if they don't use it, they don't pay for it. That's correct. VMs come in all shapes and sizes. Exactly. And essentially what a VM is is an allocated amount of compute. What we do is break that down, aggregate our clients' usage across all of their demand, and then bill them for that. How granular can you get? Well, we define the infrastructure unit by a measure of compute, RAM, network bandwidth, and IO: 200 megahertz of compute and 768 megabytes of RAM. And we do not publish the bandwidth and IO figures. Okay, and I buy those in chunks? Is that right? You buy by the IU. So we will go out to a client requirement and size their overall requirements via IUs. We have a couple of different flavors of them. We have a basic IU and we have an enterprise IU, or core IU, that includes such things as guaranteed response time, guaranteed throughput, embedded data replication, that type of thing. So we've been talking a lot on theCUBE about this theme of cloud service providers and the innovation that's going on in cloud services. You started your company in the dead middle of the worst recession in our lifetimes, most of our lifetimes. And at the time, I mean, literally we had clients in the Wikibon community cutting budgets by 70%.
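To make the consumption-versus-allocation distinction concrete, here is a minimal sketch of pay-per-use metering in the spirit described above. The IU definition (200 MHz of compute, 768 MB of RAM) comes from the interview; the sampling scheme, the max-of-dimensions rule, and the price are invented for illustration and are not Virtustream's actual method.

```python
# Sketch of consumption-based (pay-per-use) metering, as opposed to
# allocation-based (pay-per-VM) billing. The 200 MHz / 768 MB figures
# are from the interview; everything else is a made-up illustration.

IU_MHZ = 200.0      # compute per infrastructure unit
IU_MB_RAM = 768.0   # RAM per infrastructure unit

def ius_consumed(sample):
    """One interval's measured usage expressed in IUs: the larger of
    the compute and RAM draw, so a single IU count covers both."""
    return max(sample["mhz"] / IU_MHZ, sample["mb_ram"] / IU_MB_RAM)

def bill(samples, price_per_iu_hour, hours_per_sample):
    """Aggregate measured usage across all samples and bill for what
    was actually consumed, not what was allocated."""
    total_iu_hours = sum(ius_consumed(s) * hours_per_sample for s in samples)
    return total_iu_hours * price_per_iu_hour

# Two hourly samples from a VM that is allocated 4 vCPUs but mostly idle;
# the customer pays for 2.5 IU-hours, not the full allocation.
usage = [
    {"mhz": 400.0, "mb_ram": 1536.0},  # 2 IUs (RAM- and CPU-bound alike)
    {"mhz": 100.0, "mb_ram": 384.0},   # 0.5 IU
]
print(bill(usage, price_per_iu_hour=0.05, hours_per_sample=1.0))
```

The key point the sketch captures is that billing is driven by measured draw aggregated across the client's whole demand, so idle allocation costs nothing.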
Many of those are financial institutions, but really across the board, deep, deep cuts. And since then they've held them flat. So budgets went down sharply and then flat. Unlike the Dow, which of course is bouncing around like crazy now. But now I presume that your budgets have been going up. I mean, you're investing in a company, so you guys are innovating, and I think the traditional IT shops can learn a lot from you guys. Well, you know, we really believed in the IP, and it's a great point. It was a surreal feeling to see the Dow cross under 7,000 when we were all of three months old. It got a little hairy there. From that standpoint, it was a bizarre time, but we have some great investors in our syndicate; Intel is an investor. And so their technical diligence on us, and our ability to withstand that diligence, was very heartening in the midst of all this quagmire in the macro markets. At the end of the day, we've ultimately got our momentum on two fronts, and it really comes down to that level of efficiency, that granulation, giving us a level of efficiency that's not found elsewhere. We sort of service two worlds. For the native web app, web 2.0, greenfield application market, our infrastructure unit methodology and resource pooling allow us to fundamentally size, and thus price, more efficiently than typical commodity cloud or true public cloud providers. We're also true multi-tenant. On the enterprise cloud side, when you're dealing with enterprise applications or legacy applications, very high-transaction, memory-intensive types of applications, having the ability to control IOPS, which we do through our entire architecture, gives us the ability to SLA in a superior fashion than even private cloud providers.
So it's not only a price game, it's the ability to guarantee throughput and response time, so that it can give an enterprise CIO the peace of mind to put a heavyweight legacy back-office enterprise app in a production environment in the cloud. Alright, that's outstanding. So thank you for that description. Matt, I want to talk about storage. You've got to architect this stuff. I presume storage is a big challenge; storage is a challenge for everybody who comes to this event. Talk about your storage and some of the trends you're seeing, and then I want to get into what that means for your customers. Absolutely. So storage has probably been the single biggest pain point we found among our customers as consultants, and it was probably the single biggest design factor when we were designing our enterprise cloud. Why was it such a pain point? Well, there have been two trends in the industry. There's the massive explosion of just data, raw bits; the amount of data being generated on a daily basis is growing almost exponentially, right? And when you throw in virtualization, what that adds is performance growth: the number of IOs necessary to run these massively consolidated systems, right? The disk manufacturers have been great in expanding the space, right? You have one terabyte drives, two terabyte drives, three terabyte drives coming out. Unfortunately, we've had 15,000 RPM drives for 15 years, and that's what drives performance, right? And we've been stuck there. So, to solve the performance problem, you throw more and more and more spindles at it, right? And you end up very inefficient on space, so your ratio of space to IO is just way off. It's a huge access density problem. Exactly. You have this great declining cost per bit, but you can't take advantage of it because you're throwing hardware at the problem.
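The access density problem described above comes down to simple arithmetic: spindle count is sized for IOPS, not capacity, so you buy far more disk than you need. This sketch uses rough rule-of-thumb numbers (about 180 random IOPS per 15K RPM spindle, a 600 GB capacity point); they are illustrative assumptions, not figures from the interview.

```python
import math

# Rough rule-of-thumb numbers, not figures from the interview:
IOPS_PER_15K_DRIVE = 180      # ~180 random IOPS per 15,000 RPM spindle
GB_PER_15K_DRIVE = 600        # a typical 15K SAS capacity point

def spindles_needed(iops_target, capacity_target_gb):
    """Drives required to satisfy BOTH the IOPS and capacity targets.
    For transactional workloads, performance usually dominates."""
    for_iops = math.ceil(iops_target / IOPS_PER_15K_DRIVE)
    for_space = math.ceil(capacity_target_gb / GB_PER_15K_DRIVE)
    return max(for_iops, for_space)

# A workload needing 50,000 IOPS but only 10 TB of space:
n = spindles_needed(50_000, 10_000)
print(n)                      # → 278 spindles, bought purely for performance
print(n * GB_PER_15K_DRIVE)   # → 166800 GB of capacity you paid for anyway
```

Only 17 of those 278 drives would be needed for the space itself; the other ~94% of the purchased capacity is stranded by the IOPS requirement, which is exactly the space-to-IO ratio problem being described.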
Yeah, I mean, the data's there but you can't get to it. Yeah, okay. So, what we're looking at, of course, is the newer technologies, obviously flash, SSD, right? Which has somewhat inverted the problem: you have massive amounts of IO, not as much space. Though with the newer-sized SSD drives coming out, the 600-gig SSD drives, and I've seen announcements around one-terabyte SSD drives, you start getting much more in balance, and you can provide this massive amount of performance along with an appropriately ratioed amount of space. The other problem we have is that in a lot of traditional storage architectures, you're dealing with one or two storage processors with trays of disks behind them. When you start throwing SSD in there, those storage processors now become the bottleneck, right? They don't have the processing power or the bandwidth to handle it. Like squeezing a balloon. So, some of the new RAIN architectures, redundant arrays of independent nodes, that are coming out, which allow the storage processors to scale linearly along with the space and the IO capacity, are really where we feel the future of storage technology is. So, there's a real spectrum of flash devices coming out. You have, you know, the SSDs you stick in an EMC array. Right. You know, that's the controller bandwidth limitation that you talked about. You've got these all-flash arrays coming out, guys like SolidFire and Pure, and then you've got the FusionIO memory extension. Do you see yourselves using all of those, or are you leaning toward one architecture? Does it depend on the application? Well, you know, there's use cases for all of them.
So, something like a FusionIO, right, where you may have a card in a server, is a great way to alleviate concerns when you hit memory pressure and memory swap. So maybe you put a FusionIO card in your servers to alleviate that; a great use case for that kind of technology. Then you've got traditional arrays like an EMC putting in their SSDs and their flash cache, right, so now your caches are terabytes in size. And then, of course, the newer pure-SSD-based technologies, such as SolidFire, where you can have these RAIN configurations with pure SSD and massive amounts of throughput, with relatively low power and space consumption as well. Which, as a service provider, is a huge concern, right? The space and the power you're pulling in the data center; the utilities aren't building a whole lot of new power plants, right? So you end up power-constrained in your data centers. So, our analysis suggests that all-flash, lined up against a high-performance array, is actually going to be less expensive because of better utilization. I don't know if you've gone that deep and done that analysis yet. Have you seen the same things? We've done a fairly deep analysis. You know, I ran a statistical data set against about 40 terabytes of active data, and we found that we could drive the flash costs down below spinning-disk costs. Which is really interesting, right? You don't think in those terms. That was a bit different for me when I saw the numbers. But what's more interesting is what that means for you. Well, let me ask you this. How do you guarantee quality of service today? All right, so we have to do it through the entire chain, right? We have technology and software and tools that guarantee it from the hypervisor all the way through the fabric, all the way down to the storage system. And there's a variety of techniques that we have to use to do that.
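The interview doesn't detail the "variety of techniques" used to guarantee IOPS from the hypervisor down to the storage system, but a common building block for this kind of per-tenant rate guarantee is a token-bucket limiter. The sketch below is a generic illustration of that mechanism, not Virtustream's or any vendor's actual implementation; the rates and burst sizes are arbitrary.

```python
class TokenBucket:
    """Generic token-bucket IO limiter: an IO is admitted only if a
    token is available; tokens refill at the guaranteed IOPS rate,
    with a burst allowance equal to the bucket capacity."""

    def __init__(self, iops_rate, burst):
        self.rate = float(iops_rate)   # tokens (IOs) added per second
        self.capacity = float(burst)   # maximum stored burst
        self.tokens = float(burst)     # start with a full bucket
        self.last = 0.0                # timestamp of the last check

    def allow(self, now):
        """Return True if one IO may proceed at time `now` (seconds)."""
        # Refill proportionally to elapsed time, capped at the burst size.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

# A tenant guaranteed 1,000 IOPS with a burst allowance of 5:
tb = TokenBucket(iops_rate=1000, burst=5)
admitted = sum(tb.allow(now=0.0) for _ in range(10))
print(admitted)   # → 5: only the burst is admitted instantaneously
```

Applying a limiter like this per tenant at each stage of the chain (host, fabric, array) both caps noisy neighbors and, inverted into an admission guarantee, underpins the kind of IOPS SLA being discussed.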
And what we're finding, though, is that some of the newer technologies, where IO control and quality of service are now being built into the storage vendor's technology, are really complementing our own tools and technology, which guarantee it from the hypervisor all the way down through the chain. It's traditionally been a little more difficult to do that down at the spinning-disk level. And so, from a business model standpoint, can you guarantee that quality of service by customer? And how does flash change your ability to add new value? Well, I think it's accretive. From the perspective of the storage component, ultimately our focus is on that full control of IOPS, right? Storage is one component within that, along with the ability to control it through the host cards and through the network itself. The ability to get a more robust set of IOPS out of our storage component solutions will give us more flexibility in how we control that through the entire architecture. So, we're excited about the new innovations there. We feel ultimately that core definition of our infrastructure unit will afford us that superior SLA as all of the various component horsepower matures and becomes more widely available. You've got a couple of advantages going for you. You're on the cusp of this. You're small. That's the disadvantage, but you're on the cusp of these trends. You're probably faster than the big whales. No doubt. I tend to say we're young, not small. Okay. All right. It's a little marketing. Fair enough. And you're trying to disrupt. We are. So, talk about the company a little bit in terms of headcount. Any metrics you can give us? We have about 140 employees today. We're roughly split between the United States and the UK. Our original founding was in both regions, so we're a bit of a transatlantic play.
We have two data centers in the United States, one in Vienna, Virginia, and one in San Francisco, and two data centers in London, with offices in New York, Washington, Atlanta, and San Francisco; we're headquartered in the Washington area. Most of our software product development occurs in San Francisco and Atlanta. Okay. What was your headcount? 140. Okay, good. That's a good size. Yeah. We've grown quickly. We've gotten a lot of traction out of the gate. One of the things that is somewhat of an issue in the venture-backed cloud provider space is revenue traction. There's a lot of companies out there. There's been quite a group of companies founded from that 2006 timeframe onward; we were one of the later entrants to the space. There's a lot of buzz around IP, around product, around proof of concept; how that's translating to revenue is somewhat of an unknown in the space. We're very proud of the revenue traction that we've gotten. We can't publish it, but I wish we could. We really have done well in that regard. Our largest client is a mid-Fortune 500 private company, a large consumer products company; we run their entire compute topology on our cloud, from Exchange all the way up to a 2,700-user production instance of SAP, all in our xStream cloud. We're really excited and just have a lot of work to do to continue to realize our vision. Virtustream, great story. Rodney and Matt, thanks very much for coming on theCUBE and sharing your insights with us.