Live from San Francisco, California, it's theCUBE at VMworld 2014, brought to you by VMware, Cisco, EMC, HP, and Nutanix.

Good morning from San Francisco, everybody. This is Dave Vellante, and this is theCUBE. We're here at VMworld 2014. Rob Commins is here as the Vice President of Marketing at Tegile, a company that's doing hybrid arrays. Hybrid arrays are sort of a combination of flash and spinning disk, although Tegile has a little different point of view on that. Rob, great to see you. Welcome to theCUBE.

Hey, good to see you, Dave. Thanks a lot.

Yeah, thanks for coming on. So let's start with VMworld. What's going on here? Huge show, 20,000-plus customers. It's kind of the kickoff. It used to be that Labor Day kicked off the fall selling season; now it's VMworld. The kids are back at school. So give us the update on what you guys are doing here at VMworld.

Yes, this is our third year at VMworld. We're having a great time. We've got a big booth right front and center, right in the show, and people in the booth are playing a game of chance to win a 2014 Tesla Model S. So that's a lot of fun. People have been wrapped around the entire booth all week long, so we're having fun.

Customers only?

That's right, yep, yep. You have to sign an agreement; no employees or contractors can win that, but that's okay. But what we're having a lot more fun doing is actually demonstrating the extensibility of our technology. You mentioned hybrid. We actually do all-flash and hybrid, but we're also playing with the model and knocking a lot of guys on their heels by saying hybrid's not really just flash and disk. If you back up one layer from that, it's a performance layer and a capacity layer. And what we're demonstrating downstairs is performance with very, very fast flash, and then a capacity layer of a lower class of flash, what I like to call cheap and deep flash.
So it resolves that tension that's always been there between performance and capacity. So the day has arrived where we're now talking about cheap and deep flash. I'm very happy, actually. Now, you've actually said, well, we're not just hybrid, we do all-flash too. But in my experience, it's very rare that one size fits all in this industry. The closest example of that is NetApp with WAFL, and even there you can see it can only go so high and only go so low. So I want you to talk about the hybrid concept and why it's all of a sudden caught on and is ascending very rapidly.

Yeah, if you look at the numbers, the addressable market for hybrid is about eight times bigger than all-flash. All-flash is kind of neat, and people like talking about it, but hybrid lets you very pragmatically approach that tension between performance and capacity. What we do is create a performance layer that's treated not as a tier of storage, but as a cache, so things are happening in real time. The easiest example is virtual desktop: when you have a boot storm, that boot storm usually happens within about a 90-minute window in the morning. If you're using sub-volume tiering, which a lot of the traditional storage guys do, the data movement happens once or twice a day, and by then that boot storm is long gone.

8:00 to 9:30, people hit their email, boom.

Exactly. You can't wait until two in the morning for the auto-tiering algorithms to kick in. You've got to be right here, right now. And then with inline data reduction, deduplication and compression, not only does that drop the capacity the storage system needs, but it actually has a force multiplier on that cache. So if we get, on average, let's say a five-to-one data reduction rate, I may have a hybrid array that's got two terabytes of very high-performance flash in the front end.
The host thinks it has 10 terabytes of flash. So the cache hit ratios go up, everybody's performance goes up, and on average we see about a 96% cache hit ratio. So 96% of the time, the end user is getting a flash-class experience, and only 4% of the time are you dipping down into the cheaper and deeper layer.

Yeah, so you saw that so-called auto tiering hit the market; Compellent was kind of early on, they were the first, and then others hopped in. Essentially, what I'm hearing from you is it was a good idea for the time, but essentially a band-aid, and that it's got to be more real time.

Right. I think what's happening is virtualization is driving so much of what I'll call consolidation and multi-tenancy that these applications are moving very fast. It's funny, our company's called Tegile, from Technology and Agile; we put those two words in a box and shook it real hard, and that's how we came up with the name of the company. But the agility and velocity of these applications moving around wants a real-time cache versus a sub-volume tier that moves data around once or twice a day.

Let's come back to cost a little bit, because we seem to be at that tipping point everybody's been waiting for. In some cases, people are saying flash is cheaper than spinning disk, and it certainly probably is at the 15K RPM tier. That's sort of a dead market now, because flash is cheaper. Where are we? How close are we? Have we hit that tipping point? What's your point of view?

Yeah, it's kind of interesting, and it really depends on the customer's applications and how latency-sensitive they are. The people that have gone all-flash, and have taken what I call a religious stance that way, use those data reduction technologies, deduplication and compression, to drop the cost of flash. But what they're not doing is applying the same algorithms to disk.
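The cache arithmetic Rob walks through above (a 5:1 data reduction rate on 2 TB of front-end flash, and a 96% cache hit ratio) can be sketched in a few lines. The latency figures below are illustrative assumptions for flash versus disk, not Tegile specs:

```python
# Effective cache size: inline dedup/compression multiplies the logical
# capacity the host sees in the cache by the data-reduction ratio.
def effective_cache_tb(raw_flash_tb: float, reduction_ratio: float) -> float:
    return raw_flash_tb * reduction_ratio

# Average access latency for a given cache hit ratio.
# The 0.2 ms flash / 5 ms disk numbers are placeholders for illustration.
def avg_latency_ms(hit_ratio: float, flash_ms: float = 0.2, disk_ms: float = 5.0) -> float:
    return hit_ratio * flash_ms + (1 - hit_ratio) * disk_ms

print(effective_cache_tb(2, 5))   # 2 TB of raw flash presents as 10 TB at 5:1
print(avg_latency_ms(0.96))       # roughly 0.39 ms on average
```

The second function shows why the hit ratio dominates the experience: at 96% hits, the occasional trip to the capacity layer barely moves the average.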
Whereas at Tegile, we use deduplication and compression to drop the cost of flash, but we do the same thing to disk, so that gap is always there. We can get down to well below $1 a gigabyte with a hybrid system, whereas a typical all-flash system is at least double that, if not four or five times it.

Yeah, right. They're just starting to get down to sort of $2; that's the new benchmark, right?

That's right.

So the low-water mark for them is double. Okay, but now people would say, all right, you're applying those data reduction technologies to spinning disk, and that causes performance overhead. How do you deal with that?

Actually, what's really exciting about this is that force-multiplier effect I mentioned. When we give a 5x boost to the cache, that makes things typically seven to 10 times faster than a traditional storage system. If you look at the deduplication performance cost, it's only about 3% or 4%. So if I'm giving you 700% more performance, that 3% or 4% really doesn't matter.

Okay, and people will take the cost savings every time.

Right.

You gave a talk recently at the Flash Memory Summit in August, actually this month.

Right.

Well, give us an update. How was the Flash Memory Summit? We were talking a little bit off camera about it, but it seems to be coming along and the industry's getting together. It's kind of an industry love fest, I know, but there's a lot of innovation going on, and that was underscored at this event, right?

Yeah, I think what you're seeing is that people have been talking about flash versus disk as almost what I'll call a binary event.
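Backing up to the performance-overhead exchange for a moment: Rob's claim that a 3-4% dedup cost is noise against a roughly 7x cache gain is simple arithmetic, sketched here with his rough figures. The raw $/GB input in the second function is a placeholder, not a quoted price:

```python
# Net speedup once a cache boost is discounted by the dedup/compression overhead.
def net_speedup(cache_boost: float, dedup_overhead: float) -> float:
    return cache_boost * (1 - dedup_overhead)

# Effective $/GB once data reduction is applied to a medium; Rob's point is
# that applying this to disk as well as flash preserves the cost gap.
def effective_cost_per_gb(raw_cost_per_gb: float, reduction_ratio: float) -> float:
    return raw_cost_per_gb / reduction_ratio

print(net_speedup(7.0, 0.04))          # about 6.7x: the 4% cost barely dents the gain
print(effective_cost_per_gb(4.50, 5))  # e.g. a hypothetical $4.50/GB raw -> $0.90/GB
```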
You have disk and you have flash. But you're starting to see the flash market bifurcate a little bit: you've got super high-performance flash and, to use the term I like, cheap and deep flash. It's starting to segment, and people are starting to wrestle with what the issues, challenges, and opportunities are in leveraging different classes of flash. We're very excited about, for example, TLC media for cheap and deep, and then up on top in the performance layer, things like NVMe and PCIe storage. We can put those in our architecture as the performance layer and the capacity layer and really keep moving the needle on both performance and capacity.

You're saying your architecture is media agnostic, is that right?

Yeah, yeah. We like to call it extensible. We know that over time there's going to be better storage on dollars per gigabyte and dollars per IOPS, and we'll keep riding that technology curve and letting our caching algorithms resolve that tension between the two without requiring a lot of user intervention, so it's all essentially automated.

Well, I want to ask you, you've been in the storage business for a long time, and you've seen a lot of companies come and go, and trends and so forth. I like to refer to the big whales as the cartel.

The oligarchs, yeah.

You've got the oligarchs, right? So why can't the oligarchs compete more effectively in this space? They've got huge resources; they've got big product lines. Are they just hanging on to their existing systems to try to get as much as they can out of their non-recurring engineering costs? Are they trying to build these types of architectures? Does it have to be built from the ground up? Coming from your background, you understand these issues. Help us understand why they can't just say, okay, let's take our existing architecture and redo it.
Yeah, if you look at what I'll call the big guys, what you call the cartel, their systems were architected around the old client-server model, where customers deployed an application sitting on a network sitting on a storage system. But now, with virtualization, with multi-tenancy and this I/O blender effect, and with flash on the other side pushing the model, those systems were built around a disk-drive architecture, and they're simply not built for flash coming into the model from the ground up and the effect of virtualization from the top. And to re-architect an existing system to accommodate those things is incredibly hard. It's much easier to start fresh, and that's why you see so much velocity from new storage companies like Tegile taking so much market so fast: we're able to put the disk architecture to the side and optimize for the virtualized, flash-centric model that we live in today.

And that's all code.

Right, it's all software. That's why it takes so long.

Talk about the channel; we've got a little bit of time left here. You guys are a 100% channel company. There's a real land grab going on in the channel. I wonder if you could talk about the channel, your relationship with it, and why you're doing well there.

Yep, so we've got a really nice model. We're 100% channel, being a relatively new company. I like to say we have all the NFL cities here in the United States covered. We're growing very fast in Europe as well: UK, Benelux, the Nordics, Italy. Doing really well there too. We're starting in APAC as well.
And we have a neat model that the channel really likes, in which we have this thing called deal registration. If a channel partner registers an opportunity with us, not only are they protected with very nice gross-margin protection on that initial deal, but we call it a persistent registration: for the entire service life of that system at that company, they are protected, gross-margin-wise, to sell additional systems and to upgrade, scale up, or scale out that system over its three- or five-year service life.

So your deal reg is not just a one-shot deal where you're basically head-faking them, saying, okay, thanks for the new business, but the renewal stuff is all ours. It's a long-term relationship.

Right. And we have people that work with the customers to optimize performance and capacity over the service life of the array and grow it with new applications. Maybe they're adding a database or a virtual desktop infrastructure. We'll come back and we'll actually award that business right to the channel partner. But we're doing a lot of the groundwork ourselves.

Are you seeing specialization in the channel, particularly around workloads, whether it's Microsoft or VDI or VMware, or is it still largely a box-selling mentality?

Well, there are really two sides of the market. We see a lot of the box movers, and they do a good job. We love them. That's great, that's fantastic, thank you very much, move some gear. But you actually hit the nail on the head. There are three primary application areas that we sell into extremely well: virtual server consolidation, virtual desktop, and database consolidation. And what's fun with the first two that I mentioned, virtual server and virtual desktop, is the way the software oligarchs (we were talking about the hardware oligarchs before) sell their software: by CPU core. What we're finding is what I like to call the laptop effect.
We've all opened up our laptop, fired up Excel, fired up our email application, and every so often the thing just sits there. You can see the disk drive blinking away and you go, oh geez, I've got to grab a cup of coffee while I wait for the thing to come back. What's happening is the CPU and memory are out of resources, and the machine is using the disk in the laptop as an extension of those resources. What we do is work with our customers to actually force their hypervisor or database software into that situation, because when they reach out to disk on our array, it's coming from flash, so it's still fast. You don't have that drag like we do on our laptops. And when we do that, we're actually dropping the number of CPU cores that the hypervisor and the database run on, and that saves the customer an incredible amount of money. For example, Oracle Enterprise Database, before you put an application on top of it, is $75,000 per CPU core. So we have customers buying $100,000 arrays, dropping four or five CPU cores right out of their database, and saving $300,000 or $400,000 right now.

And multiply that by another 1.18 for maintenance.

That's right.

Excellent. All right, Rob, we're out of time. That's great. Last question: the bumper sticker on VMworld 2014. So, Tegile, you're really starting to get traction. You mentioned your international expansion, that's great. The product suite is coming together. You're building out the channel. So what's the bumper sticker on VMworld 2014 for you guys?

Yeah, we like to call it no compromise. What we mean by that is we give customers the flexibility of choice: all-flash, hybrid, block, file, deduplication and compression, remote replication, and local snapshots. It's a full bench, and it's all included in a single price.

All right, Rob, you sound excited, and I really appreciate you coming on theCUBE. It's great to see you again.

Thanks, Dave, appreciate it. Take care.

Keep it right there, buddy. We'll be back with our next guest.
This is theCUBE, we're live from VMworld 2014. We'll be right back.
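As a closing note, the Oracle licensing arithmetic from the conversation works out as follows. The $75,000-per-core figure and the 1.18 maintenance multiplier are the numbers quoted above; the five-core drop and $100,000 array price are from Rob's example:

```python
# Net saving from shrinking a database's core count, per the example above.
ORACLE_EE_PER_CORE = 75_000   # per-core list price quoted in the conversation
MAINTENANCE_UPLIFT = 1.18     # maintenance multiplier Dave mentions

def net_savings(cores_dropped: int, array_cost: float = 100_000) -> float:
    license_saved = cores_dropped * ORACLE_EE_PER_CORE
    with_maintenance = license_saved * MAINTENANCE_UPLIFT
    return with_maintenance - array_cost  # net of buying the array itself

print(net_savings(5))  # dropping 5 cores: 5 * 75,000 * 1.18 - 100,000, about $342,500
```

Even before the maintenance uplift, five cores at $75,000 is $375,000 of license avoided against a $100,000 array, which is the $300,000-to-$400,000 range Rob cites.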