 Everybody, we're back. This is Dave Vellante of wikibon.org, and this is siliconangle.tv's theCUBE at VMworld 2012. This is our third year at VMworld. We're here in Moscone North in the hang space. Come by and see us. We've got a lot of action going on here. We're in the middle of the focus on Flash. We just had the hybrid Flash array folks on, a number of CEOs from that community. And now we're going to look at the all-Flash piece. There are a number of companies that are really going after the all-Flash opportunity. And I'm here with James Candelaria, who's the CTO of Whiptail, Scott Dietzen, who's the CEO of Pure Storage, and Thomas Isakovich, who's the CEO of Nimbus Data. Gentlemen, welcome to theCUBE. Thank you. Good to be here. So, as I said in the last panel, as analysts, we love to stick you guys into buckets. I know that drives you crazy, but you do sell all-Flash arrays. So, let me start left to right. James, with you: what do you consider an all-Flash storage array? What is it? I consider an all-Flash storage array an array that's optimized from top to bottom to deal with the intricacies of the media itself, right? I mean, we've been seeing the tier one vendors selling disk-based arrays for 25, 30 years. Flash requires a complete rethink of the technology stack, end to end. And there are unique properties of Flash that require unique management capabilities. Otherwise, you're going to end up with endurance problems and random write performance problems, which are going to make it unacceptable for enterprise use. So, would you guys agree with that? Anything you'd add, Scott? Certainly, that Flash demands a holistic rethink is not an issue. In fact, I think we got the strongest endorsement of that with EMC acquiring XtremIO for hundreds of millions of dollars. I mean, that is a company that had a huge amount of IP around managing disk for storage, doing Flash caching, and even a treasure trove around deduplication.
And yet, they went with a purpose-built Flash stack for their Flash solution. 400 million was what I had heard. But of course, Rich Napolitano on theCUBE said the number was actually lower than what was reported. But big numbers. Thomas, anything you'd add to that? Well, I agree. And I think you need to take a holistic approach to it, incorporating not just software, which is obvious. You need to rethink the way you develop file systems and the way you write IO. But I also think hardware fundamentally needs to change as well. Traditional off-the-shelf hardware was never designed to handle the performance potential of Flash, and it can't deliver the full efficiency and density possibilities of Flash. So as much as Flash is about performance, if you don't engineer the entire system architecture for it, you won't be able to fully take advantage of all of its capabilities. And while the headline is usually performance, we actually think the efficiency aspect of Flash is often overlooked and just as compelling, if not potentially more compelling in time. Power, density, rack space, heating and cooling savings are all very significant. So you guys would look at what 3PAR just announced recently, the all-Flash array, you'd look at that and you would say that's a bolt-on? That's my word, but would you agree with that? I mean, it's technically all Flash in that they didn't introduce any mechanical disk in the configuration, but they haven't done any of the software work necessary to harness the capabilities of that Flash. Okay, so let's dig into that a little bit. So James, you were referencing that. Can you double-click on that a little bit? What kind of software capabilities are we talking about here? Yeah, absolutely. I mean, you have to understand the media you're dealing with. So NAND, as a general medium, has an inherent inability to rewrite an already written page.
So one of the challenges you have is, if you wanna rewrite an already written page, you need to erase an entire block. So you need to optimize around that behavior continuously. Otherwise, you're gonna run into the write cliff that some vendors are so fond of talking about extremely quickly, and also wear your media out prematurely. Okay, so Scott, let me go to you. A lot of the situations that you look at for all-Flash arrays are database intensive. Is that the primary target? Is that the right target? Are you trying to expand beyond that? What are your thoughts there? Perfect segue. I actually think, sitting here at VMworld, it's not surprising: virtualization is the best workload for these all-Flash arrays. The reason is, even if it's a database workload that you've virtualized, virtualization drives up random IO. Anytime you share a server CPU across a collection of virtual machines, the IO pipe has to get multiplexed across those different machines. So you're sitting in the storage on the back end, and what you see is random IO. We call that the IO blender, a term that's been catching on. Disk is terrible at random IO. It's 95% inefficient: 95% seeking and rotating, 5% or less transfer time. If you had an employee that was spending 95% of their time on Facebook and only 5% of their time doing work, you wouldn't like it very much. So virtualization really lends itself, because of the random IO intensity, to all Flash. And then the killer feature, we believe, is deduplication. Because there's so much redundancy between VMs, we can shrink the data set down and actually get a cost point with all Flash that's below what people are spending for performance disk. So you get the best of both worlds: the price of a disk-based solution and yet the performance of Flash. So Thomas, you were talking before about the efficiency being the overlooked aspect. Would you add anything to what Scott just said?
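The erase-before-write constraint James describes can be sketched in a few lines. This is a toy model, not any vendor's controller logic, and the page and block sizes are made-up illustrative values:

```python
# Toy model of NAND flash: a page can only be programmed once per erase,
# and erases happen at whole-block granularity. Sizes are illustrative.
PAGES_PER_BLOCK = 4

class FlashBlock:
    def __init__(self):
        self.pages = [None] * PAGES_PER_BLOCK
        self.erase_count = 0  # proxy for media wear

    def erase(self):
        self.pages = [None] * PAGES_PER_BLOCK
        self.erase_count += 1

    def write(self, page_idx, data):
        """Rewriting an already-programmed page forces a
        read-save-erase-reprogram of the entire block -- the write
        amplification behind the 'write cliff' and premature wear."""
        if self.pages[page_idx] is not None:
            saved = list(self.pages)   # read the surviving pages out
            self.erase()               # erase the whole block
            self.pages = saved         # reprogram the old data...
        self.pages[page_idx] = data    # ...plus the new page

blk = FlashBlock()
blk.write(0, "a")       # fresh page: programs directly, no erase
blk.write(0, "b")       # in-place update: costs a whole-block erase
print(blk.erase_count)  # → 1
```

A real flash translation layer avoids most of these erases by redirecting updates to fresh pages and garbage-collecting stale ones in the background, which is exactly the software work being discussed.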
Sure, well, I think most definitely everything that's been said so far I agree with. One of the things that's challenging in virtualized environments is that you're gonna have many, many applications simultaneously hitting the storage at once. And so while there may not be any single application that needs 100,000 IOPS, or even 50,000 IOPS for that matter, as you look to scale the virtualized infrastructure, you need to make sure that the storage has a lot of scalability in it: in terms of its IOPS potential, in terms of its throughput potential, in terms of its capacity potential. So this is one of those areas where the performance of the storage directly correlates to how effectively you can scale the virtualized infrastructure. And so the more performance that you have there, the more you're gonna be able to scale that infrastructure, reduce administrative burden, reduce licensing costs, and all the complexity that can come with big-time virtualization deployments. So of course EMC and NetApp are playing in this field. You guys are going right after those companies, add in IBM, HP; their install bases are what you guys are going after, and you're disruptors, that's what you do. But they've responded: EMC's introduced VFCache, NetApp has Flash Accel and Flash Pools, all these capabilities. Somebody said to me one time, companies don't buy from startups because they want to, they buy because they have to. So I want to go down the line. Why do companies buy from you? What is it that's so compelling, again we'll start with James, that companies have to buy from you, that they can't get from traditional storage companies? Well, I think it comes down to tearing down the data silos, right? I mean, virtualization has always been about aggregating your pool of performance into one area where no one workload can overrule any other workload to its detriment.
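The "95% seeking and rotating, 5% transfer" point Scott made earlier comes straight out of disk service-time arithmetic. A back-of-envelope sketch, with timings that are my own illustrative ballparks for a 15K RPM drive on small random reads, not measured figures:

```python
# Rough service-time arithmetic behind "disk is 95% seeking and rotating".
# All timings are illustrative assumptions, roughly typical of a 15K RPM
# drive handling small random reads.
seek_ms = 3.5        # average seek time
rotate_ms = 2.0      # half-rotation latency at 15,000 RPM
transfer_ms = 0.3    # moving one small block off the platter

service_ms = seek_ms + rotate_ms + transfer_ms
disk_iops = 1000 / service_ms                        # IOs per second
mechanical_share = (seek_ms + rotate_ms) / service_ms

print(f"~{disk_iops:.0f} random IOPS per spindle, "
      f"{mechanical_share:.0%} of each IO spent on mechanics")
```

A couple hundred random IOPS per spindle is why aggregating many virtualized workloads onto disk forces short-stroking and large spindle counts, while a flash device with no moving parts sidesteps the mechanical term entirely.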
I mean, that's really one of the great things that we can do at Whiptail: provide a performance pool that's large enough to run your ERP, your virtual desktop, your data analytics platform, all simultaneously, without shortchanging any one of them and without having to invest in four or five different arrays. So that's really our major message here at the show: break down those data silos and let's go ahead and aggregate all of our workloads onto one platform. So Scott, what do you bring to the table that the traditional guys can't? Feel free to give examples and proof points. So we had a particular customer that was trying to virtualize a very IO-intensive database workload, and they couldn't get it to virtualize on traditional storage because the storage just wasn't fast enough. They tried out a Pure Storage array and they found they were able to drive an additional three-to-one database server consolidation, as well as improve their performance greater than fivefold. So they shrunk their data center footprint, they saved enough money that the storage turned out to be effectively free, and they provided a performance boost to their customers, right? So that's a combination of a bunch of software features coming together to pull that off. Flash helps with the random IO; it's the data reduction, the deduplication, that is essential to driving down the cost; and then traditional features like high availability, which is mandatory for business-critical virtual machine workloads, and features like snapshots and so on also have to be part of the picture when you're going up against the existing incumbent array vendors. Okay, Thomas, can you give us some examples? Dave, it's just such game-changing technology.
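The redundancy-between-VMs argument behind deduplication is easy to demonstrate. A minimal sketch, assuming fixed-size chunking and content hashing (real arrays use more sophisticated chunking; the sizes and VM count here are invented for illustration):

```python
import hashlib
import os

def dedupe_ratio(volumes, chunk_size=4096):
    """Ratio of total chunks stored to unique chunks kept.
    Fixed-size chunking is a simplification of what real arrays do."""
    seen, total = set(), 0
    for vol in volumes:
        for i in range(0, len(vol), chunk_size):
            # Identical chunks hash identically, so only one copy is kept.
            seen.add(hashlib.sha256(vol[i:i + chunk_size]).digest())
            total += 1
    return total / len(seen)

# Ten cloned VMs sharing the same 64 KB "guest OS image",
# each with 8 KB of VM-specific data appended.
base_image = os.urandom(65536)
vms = [base_image + bytes([n]) * 8192 for n in range(1, 11)]
print(f"{dedupe_ratio(vms):.1f}:1 data reduction")
```

Because every clone shares the base image's chunks, the unique-chunk count barely grows as VMs are added, which is what lets an all-Flash array amortize its raw media cost across many logical copies.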
I mean, if you look at it from a performance potential, a footprint potential, an efficiency potential, it's just totally game-changing. And the customers that we have that have adopted this technology, whether it's eBay or Allent Group or any one of the number of customers that we talk about on our website, they're getting five to 10x gains over what they were doing before, whether it's efficiency, power reduction, performance improvements. I mean, these are things that are fundamentally competitive game changers for our customers, and companies that aren't seriously thinking about Flash in their infrastructure, frankly, I think are just going to be competitively behind in their IT infrastructure compared to these companies that are really embracing it and taking advantage of all these benefits. Okay, but so EMC evidently agreed with that, went out and plunked down a bunch of dough for XtremIO. Did you guys look at that and say, oh, that's awesome, our valuations just went up, or did you look at that and say, oh boy, you know? I like it. I think it's validating. It's definitely validating, right? But to my earlier statement about people buying from startups because they have to, does that level the playing field a little bit with your first-mover advantage? So competition is really good for all of us, right? We need competition to grow this industry from its relatively small base today to be the dominant container for performance workloads in the future. So competition is absolutely going to drive that innovation, and EMC is a formidable player. The good news for us is they don't have a product in the market yet, and it's going to take them some time to get a product there. We'd better exercise that window of opportunity to grow and scale our businesses and advance our feature sets before they show up, because they are that strong a player.
How about this notion of quality of service, and being able to actually tie quality of service to the application and begin to either charge for that or even internally provide different service levels? Is that something that you guys are seeing in the customer base? Is there demand for that? Is that just sort of pie in the sky? Yeah, we do. I mean, certainly for the service providers that are looking to monetize different service level agreements with their customers, compared to whatever they're offering or capable of offering on disk-based technology, the quality of service controls are critical. And there are lots of ways that can be addressed through VMware's own feature set, provided through Storage DRS and other capabilities. VMware itself is actually at the forefront of enabling a tiered infrastructure without having to go to a proprietary, storage-vendor-based tiering construct. And we fully embrace that at Nimbus Data. We believe that VMware is providing that intelligence in the hypervisor to really assist in that QoS without having to go to a vendor lock-in storage solution around tiering. I also think that quality of service is much more interesting and much more necessary for the disk-based vendors, who can't provide adequate quality of service for the existing workloads, while a purebred Flash array doesn't typically have the same performance challenges that an all-disk array does. It becomes less and less important to bake into the actual product itself, and to Tom's point, I think it's more adequately solved at the equalizing layer, which is the hypervisor. Because IO is not the scarce resource. Exactly. Why ration something that's not scarce? Echoing the sentiment, customers are uniformly surprised when they've been in the disk world to see sub-millisecond latency for all of their operations.
And so our service provider customers, or large data centers where they have to mix workloads across the storage, they're kind of flabbergasted that they don't have to worry about contention anymore. Contention goes away as a problem with all Flash, because you have this flat-line latency through the entire operating range of the products. And then again, critical point: data reduction enables some of these service providers to offer all Flash at a price point that's compelling relative to what they have to spend for disk. So that's a great upsell for them; their cash outlay is not substantially different than what they're paying for disk-based technology, and yet they can deliver Flash to their customers and mark it up. So you're essentially saying, Scott, that you can deliver an all-Flash array at parity from a TCO standpoint with traditional subsystems. So it does depend on the workload. It depends on technologies like deduplication and compression for shrinking the data set. But for virtualization workloads, we haven't met one yet where our data reduction wasn't sufficient to deliver price parity with an existing 15K disk array. I think you can even do it today on a TCO basis regardless of the data reduction possibilities. If you look at our latest technology at Nimbus Data, for example, we can get about eight times the capacity in the same footprint that would otherwise be necessary for 15K disk. And if you think about those service providers that are focused on the cost of rack space, the cost of power, the cost of cooling, all those other elements, that efficiency is directly impacting their profitability as organizations. It's directly impacting their gross margin. So we think actually, right now today, on a TCO basis you can do it, and if data reduction plays a role, that's going to make it even more effective. Yeah, so actually we're out of time. So I'm going to take the last word, if you don't mind.
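The price-parity claim reduces to simple arithmetic: raw flash costs more per gigabyte, but data reduction divides the effective flash cost while short-stroking disks for IOPS multiplies the effective disk cost. Every number below is an illustrative assumption, not vendor pricing:

```python
# All figures are invented, 2012-era ballparks to show the shape of the
# TCO argument, not actual prices from any vendor on this panel.
flash_raw_per_gb = 10.00   # $/GB raw flash capacity
disk_raw_per_gb = 1.50     # $/GB raw 15K RPM disk
data_reduction = 6.0       # dedupe + compression on virtualized workloads
disk_utilization = 0.20    # 15K arrays often short-stroked for IOPS

# Effective cost per GB of data actually stored and served:
flash_effective = flash_raw_per_gb / data_reduction    # ~1.67 $/GB
disk_effective = disk_raw_per_gb / disk_utilization    # 7.50 $/GB
print(flash_effective <= disk_effective)               # → True
```

Under these assumptions the flash array comes out cheaper per usable gigabyte, which is the "effectively free" outcome Scott described; change the reduction ratio or utilization and the comparison shifts, which is why he hedged it as workload-dependent.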
I'm going to read something that David Floyer wrote, my colleague who follows this space closely. He's very high on flash-only, and he said there are really three reasons, and I'll share them with our audience. One is they provide lower, consistent response times and low variance in response time. This makes the design of data systems much simpler. And that simplification is a really alluring theme, I think, for the Wikibon constituency. The second is the management and costing of the data are completely separated from the management and costing of the data movement. And so this leads to, again, far simpler, more predictable, and lower-cost deployments. And the third is, to the point that we were just talking about, the overall cost of a system with flash-only components is lower and will continue to become lower than systems with traditional storage arrays. I'm guessing you guys aren't going to violently disagree with that, but that's our opinion. And thank you guys for coming on and sharing yours with theCUBE community. Appreciate it. All right, we'll be right back. VMworld 2012, we're live. We're down here in the hang space. Stop by, and keep it right there. We've got a lot of great guests coming on this afternoon. Right back.