Announcer: Live from San Francisco, California, it's theCUBE at VMworld 2014. Brought to you by VMware, Cisco, EMC, HP, and Nutanix.

Dave Vellante: Hi everybody, welcome back to San Francisco. This is Dave Vellante, and this is theCUBE. We're here at VMworld 2014. theCUBE is our live mobile studio; we go out to the events and extract the signal from the noise. This is our fifth year at VMworld, and we're back at Moscone. It was at least one year, I think, maybe even two, that the show was in Las Vegas, so it's good to be back in San Francisco. Tom Cook is here, CEO of Permabit and a Cube alum. Tom, it's great to see you again.

Tom Cook: Boy, I'm thrilled to be here, David. Thank you.

Dave: So it's a big, great show, with a lot of storage action. Storage has always been a problem in the VMworld community, and it continues to be a challenge for people. There's a lot of stuff on VVols, a lot of stuff on flash. The cost of flash is coming down. Flash, flash, flash has really been the big theme, and you guys are at the heart of that. You've got a technology to really enable data reduction, so the tailwind of flash has been a perfect storm for you. We'll talk about that in a minute, but you've got some news. We have an exclusive here on theCUBE: you've got an announcement coming up next month. You're going to show us a little leg here, right?

Tom: Right. I'm really thrilled to be here, David, because this is the place to make an announcement. So we are previewing an announcement of a new product we have called SANblox that we're releasing in September. It's an exciting deduplication and compression appliance that addresses any Fibre Channel array, and when I say any Fibre Channel array, that means any flash, any hybrid, or any spinning-disk array. We're really excited about it.

Dave: So, blox, B-L-O-X?

Tom: That's right. And the cool thing about it, let's drill down a little bit on performance: it delivers 180,000 mixed IOPS at a 70/30 read/write mix, with extremely low latency, and it addresses 256 terabytes of physical storage.
Tom: So with five- or six-to-one deduplication and compression, a single unit addresses well over a petabyte of data. And you can scale it by adding multiple units to your array, scaling much larger beyond that for both performance and addressability.

Dave: So it comes as an appliance and you drop it in; it's an inline approach. Can you describe it from a customer standpoint, because you know the questions they're going to ask. Let's unpack that a little bit.

Tom: Right. So you attach it through your switch. There's Fibre Channel connectivity, plus Ethernet connectivity for management. The other cool thing about the performance is that you have a choice whether or not to apply it to a LUN or a workload. If you have a workload that reacts well to deduplication, of course you would use it; if not, you can run it straight through to your array. The other great thing is that we don't get in the way of any of the array's management functions, and we're not competing for resources with it. So it's always an inline, real-time deduplication and compression device, and we think it's really exciting for the legacy vendors.

Dave: Okay, so I can point this at virtually any part of my storage infrastructure.

Tom: Absolutely, and you can have any number of disparate arrays behind it.

Dave: So this is an interesting move. Your business model to date has been: you've released an SDK, there's some integration that has to get done, and the array vendors have to say, okay, we're going to bet on the Albireo technology. But this allows essentially any customer, through an OEM relationship, to drop this in and make old legacy storage less expensive without the performance impact, because you guys have a very low-latency architecture. Let's talk about that a little bit.

Tom: You certainly got the punchline there. This came as a reaction to broad support and requests from our customer base, the OEMs.
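As an aside, the sizing Tom quotes works out as simple multiplication. The snippet below is purely illustrative; the helper function is ours, while the 256 TB and 5-6:1 figures come from the interview:

```python
def effective_capacity_tb(physical_tb: float, reduction_ratio: float) -> float:
    """Logical capacity exposed after deduplication/compression."""
    return physical_tb * reduction_ratio

# One appliance addresses 256 TB of physical storage (figure from the interview).
low = effective_capacity_tb(256, 5)   # 1280 TB
high = effective_capacity_tb(256, 6)  # 1536 TB
print(f"{low:.0f}-{high:.0f} TB effective")  # prints "1280-1536 TB effective"
```

At a 5:1 ratio, a single unit presents 1,280 TB of logical capacity, comfortably "well over a petabyte" as claimed.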
Tom: They were saying to us, geez, we're implementing with you, but we're still 18 months away from getting to market, and we're feeling the pressure today. What can you do to help us today? So we are making this appliance available exclusively through the OEM reseller channel. That means their sales forces and their resellers can sell it in a preconfigured package to their customers. We're really excited about it, because we think that overcomes any obstacle to adoption and really helps the incumbent array vendors compete with the forces that have become disruptive in the marketplace.

Dave: Will you sell directly to the channel, or are you selling through the OEM to the channel?

Tom: No, it's direct to the OEM, and then they represent it through their reseller programs into the channel.

Dave: So that's a very clean model. And the demand for this is coming from your OEM partners?

Tom: Exactly right. They were feeling the pinch in the marketplace from disruptive forces, the all-flash array vendors that have implemented data efficiency, and suddenly they couldn't meet that price/performance quotient. We know they now can. With most of them shipping hybrids and moving into all-flash, we certainly think this makes all the incumbent vendors very competitive. And just to give you a status update: we're not making any of our partnership announcements today, that'll come in September, but we've been working with major vendors and we've now qualified six major products in the marketplace. So you're going to see this coming to a theater near you real soon.

Dave: Well, it's interesting. I remember when you announced the strategy to focus on OEM. It's a great strategy, but we knew at the time it was going to take a long time. Sales cycles and OEM deals take a very, very long time. Maybe you could talk about that a little bit. Why is it such a long cycle? It's not only the relationship, it's many tiers of the relationship. The engineers have to bless it.
Dave: They have to test it. They have to integrate it. So it does take a long time. Talk about that a little bit. But it sounds like you're now really building up a strong base and it's about to explode.

Tom: It's a great segue into the question of why we did an appliance. We wanted to make it more consumable by our customer base, so they could bring it to market immediately. Otherwise, they always have to get the product into a roadmap, into a development schedule, and then onto a release train, and you're competing for resources there. That takes time. And of course, smaller companies have been able to outmaneuver the larger companies with major product lines in that cycle. This enables the major product lines to continue on that path, but to get an interim solution, a bridge solution, in today that lets them pass great efficiency on to their customers. We think that's a real win for customers and for the legacy array makers.

Dave: So is that how we should look at this? Are you essentially providing a bridge from the old to the new, where the new is going to be fully integrated through the SDK? Talk about the roadmap for the appliance.

Tom: That's entirely correct. We do expect that over the next two- to three-year period all arrays will be enabled with functional, real-time data efficiency, meaning deduplication and compression. Right now they're not, so there's a void in the marketplace, and we think it's a great place to put an appliance. The other thing to consider is that an appliance has certain advantages. Other people have built ASICs into their storage arrays to handle this. In this case, there's no competition for resources. That means we can operate at a very high level and provide great serviceability to customers right from the start.

Dave: That's an important point, actually. People sometimes miss that there's no free lunch here.
Dave: Either you have to throw processing resources at the data reduction, or you have to take it post-process, and a lot of people don't want to do things post-process; they want real time. So you're saying the appliance takes that load off the traditional array, which probably couldn't handle it anyway.

Tom: Well, certainly they can change their bill of materials over time, but in the near term this enables them to keep their current bill of materials and still have a more capable solution than anybody else in the marketplace.

Dave: One of the things I like about having you on, Tom, is that you've got a perspective on the industry. You've been in the industry for a while. You've got OEM partners, but you're independent, in that you're an arms dealer to these guys. So let's break down what's going on in the flash business, the storage business. You've got what I call the cartel, the big oligopoly, hanging on, building and extending their systems and their roadmaps, bringing out forms of hybrid. Then you've got the pure hybrids, and then you've got the all-flash guys. You've got a range of pricing now, from, let's say, $1 per gigabyte or less up to $2 per gigabyte for an all-flash array, or even higher. How do you see that hierarchy evolving, and what can we expect going forward?

Tom: I think there are a few trends going on. First, in the on-premise array marketplace today, there certainly have been disruptive moves, and people are having a great deal of success in the hybrid market space and in all-flash arrays. I think that changes over the next two-year period, because, by the way, I don't think it's the architectures that are winning there. I think it's data efficiency that's winning. Those disruptive forces have data efficiency, and that's what they've been using to win.
Tom: But what's going to happen over the next 12 to 24 months, and I'd say we're going to see it in the next three to four quarters, or slightly beyond that, is that all arrays will get equipped with data efficiency: very capable compression and deduplication. It'll be ours or it'll be homegrown, and we're going to bridge that gap with an appliance right now. So there's going to be a battle for that market space. We think this product levels the playing field so any incumbent can compete with those disruptive forces. Then, of course, longer term they've got to contend with the hyperscale vendors, who want to take data at rest, archive data, and slower-moving data and wrest that away. There's certainly competition there, so I think on-premise will end up being an extremely competitive battlefield that maybe fewer companies will share in and win in, and if it were me, I'd bet on the incumbents. I think they've got every tool in their belt to win.

Dave: You're basically making the statement that data efficiency is trumping architecture, and that's because the number-one reason you hear from customers for not buying into flash is cost. It's too expensive. You're saying that efficiency is leveling that playing field.

Tom: I think it is, and I think we're going to see hybrids be the predominant product sold in the marketplace.

Dave: Not all-flash arrays.

Tom: Not all-flash arrays.

Dave: I mean, that's sort of what the forecasts show.

Tom: And I think we're going to see hybrids for a good period of time, and I don't think anyone's going to beat the incumbents on broad-based application of hybrid arrays.

Dave: We had Tegile on today.

Tom: Yep.

Dave: And Rob Commins was saying they've got essentially a hybrid that's all flash.

Tom: Right.

Dave: Tiers of flash within the all-flash array.

Tom: Yeah.

Dave: And they're winning on data efficiency too.

Tom: Yeah.

Dave: He was talking about cheap and deep flash. I was like, wow, music to my ears. So that was pretty interesting. Okay.
Dave: So is there anything else you can share with us that you've seen at VMworld, some of the trends you've been observing in the customer meetings you've been having?

Tom: You asked a question earlier about the development of the OEM model and how it takes time. It's taken a little more time, in our view, than we thought it would when we started down this road. But now what we see is the market really quickening. So we think that over the next 12 to 24 months there's going to be data efficiency, deduplication and compression, everywhere. And it's the talk of the town here.

Dave: All right, Tom. That's excellent. Great segment, as always. Really appreciate you coming on theCUBE.

Tom: Thank you, David. Oh, one other thing, though, before we go. I want you to be the first in the new style trend here, and I want to hand this over to you. No storage bloat.

Dave: There you go. All right. Fantastic. Albireo SANblocks.

Tom: Actually, blox, with an X.

Dave: All right. Love it. Keep it right there, everybody. We'll be back with theCUBE at VMworld 2014. We're live. We'll be right back.