At a small fraction of what their other solution costs, they're able to run it actually as fast as their physical instance for DR, and they're extraordinarily happy with the result.

So Chris, what architecturally makes your cost point better than what we've been seeing on the market before?

There are a number of confluences. One of them is that its origin and focus is in VMs, and that's allowed us to simplify a lot of things.

Okay, just to clarify, when you say VMs, is that VMware exclusively, or?

We're hypervisor agnostic. Right now the focus is primarily on VMware; that's where most of the business is. They're the lion's share of the market, absolutely, but we don't ignore that there are other hypervisors out there, and we'll be supporting others in the future. So being built for virtualization was a key simplification for us. It's this combination of inline deduplication, compression, and thin provisioning, delivered over NFS with flash.

With all the buzzwords you've got in there, all you were missing was cloud and you'd have a bingo.

Actually, if you look at our website, you will not see the C word anywhere. Or the S word, strategic; both of those have been banned from our vocabulary. We set out to minimize.

But you are innovative.

You might see innovative on there, yeah. Sorry, guilty.

It's okay. Interesting technology, definitely: flash, dedupe, compression, and you work out the ordering of the compression and the dedupe to make sure they all work together, on a per-VM basis.

So the typical example is really interesting in that case. There's a situation where a customer had a terabyte financial footprint; the Oracle logical size was a terabyte. If they had deployed it with a terabyte of SSD, it would have worked fine, but that would have blown through their budget, so they weren't going to do it.
In our case, because of the way our system is architected, the on-flash footprint of that terabyte database is 177 gigabytes.

How much flash do you have in your system?

We have a terabyte.

So you do, but a terabyte of flash in your system isn't going to blow out the price the way it would in some of those traditional SAN boxes?

Right, and there's a bunch of reasons why that's the case; I don't want to go techno-weenie on you. But that particular instance is a great illustration: our system is supporting this DR instance of their production database, it's consuming only 17% of the flash, and 100% of the workload is coming from flash.

That's fantastic. So going back to this question of the problems in the marketplace, there must have been a broader philosophy behind putting all of this time and effort into developing this product. What were the key problems you saw in this virtualization area, and how are you trying to tackle them?

The key issues that we saw, if you bring storage into the equation, come down to complexity, which is the rat's nest of connections between VMs and the different LUNs and objects underneath; cost, which is almost always in there somewhere; and the ability to bring the whole thing together and get stuff done quickly, so speed of execution. Speed of execution, complexity, and the cost of the system are usually the things that cause people lots of problems, so we set out to address all three of those issues.

We were talking to the folks at House of Brick recently about the problems they came across, and they said 50% of the problems were in how the storage was set up.

Yes, right. They're actually a large integrator helping customers virtualize Oracle, which is an interesting use case. And I would say that's on the low side. At VMworld, what's your estimate? 70%, 80% of the problems that you heard about in the sessions were about I/O.
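The arithmetic behind the 177-gigabyte figure above is easy to reconstruct. A minimal sketch follows; the split between deduplication and compression ratios is a hypothetical illustration, not Tintri's actual reduction pipeline:

```python
# Back-of-the-envelope math for the DR example: a 1 TB logical Oracle
# database landing in ~177 GB of flash. The individual dedupe and
# compression ratios below are assumed for illustration only; the
# transcript gives just the combined result (~17% of a 1 TB flash tier).

def flash_footprint_gb(logical_gb, dedupe_ratio, compression_ratio):
    """Gigabytes that actually land on flash after inline data reduction."""
    return logical_gb / (dedupe_ratio * compression_ratio)

logical_gb = 1024  # ~1 TB logical size
footprint = flash_footprint_gb(logical_gb, dedupe_ratio=2.4, compression_ratio=2.4)
utilization = footprint / 1024  # fraction of a 1 TB flash tier consumed

print(round(footprint))            # 178 (~the 177 GB cited above)
print(f"{utilization:.0%}")        # 17%
```

With a combined reduction ratio of roughly 5.8:1, the whole working set fits in flash, which is why 100% of the workload is served from flash while consuming only 17% of it.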
And most of those were related to storage. There's an adage you hear over and over when you talk to customers: if there's a performance escalation, storage is guilty until proven innocent. That's just the way it works.

And it's no longer so much about running out of capacity as it is about performance in these newly virtualized environments.

Something changed, it blew up, and now you're playing Where's Waldo.

So talking about that, when something goes wrong; and there's some interesting stuff that you have in your product here, because I always hear that in a virtual environment we shove a bunch of stuff in there, something goes wrong, and then it's, yeah, it's a storage issue, and okay, let me sort out my virtual environment versus my physical environment, and how do I fix the storage? So why is it different with your product?

It's completely different for multiple reasons. But the core reason is that there are no objects in our system other than virtual machines and virtual disks. There's no other hidden indirection layer.

So no LUNs?

There are no LUNs.

RAID groups?

No RAID groups. There are no weird stripes. There are no volumes. None of that stuff exists, even if you go into our most detailed, granular management interface.

You have some files in there, don't you?

Yes, but the objects that are controlled are vDisks, and we take full advantage of that all the way up and down the stack.

Yeah, I kind of like it, because there's always that problem when you put a layer of abstraction into an environment: if something goes wrong, somebody has to go fix it, and to the generalist the stuff just worked. But you've changed the way that you map to storage.

There's a famous saying, I can't remember by whom; some professor said that in computer science, an additional layer of indirection can solve any problem, except the problem of having too many layers of indirection.
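The object-model point above can be made concrete with a toy sketch: when the only managed objects are VMs and their vDisks, per-VM diagnosis needs no LUN or RAID-group translation. All class names, fields, and values here are illustrative, not Tintri's actual data model:

```python
# Toy illustration of the "no hidden indirection" idea described above:
# statistics attach directly to VMs and vDisks, so a performance question
# about a VM is answered without mapping through LUNs, RAID groups, or
# volumes. Names and numbers are hypothetical.

from dataclasses import dataclass, field


@dataclass
class VDisk:
    name: str
    latency_ms: float  # per-vDisk latency, directly visible


@dataclass
class VM:
    name: str
    vdisks: list = field(default_factory=list)

    def worst_latency(self):
        # Diagnosis drills straight into the VM's own disks;
        # there is no shared LUN whose stats mix many VMs together.
        return max(d.latency_ms for d in self.vdisks)


vm = VM("oracle-dr", [VDisk("data", 0.5), VDisk("log", 0.3)])
print(vm.worst_latency())  # 0.5
```

The contrast is with a conventional array, where the same question requires attributing a shared LUN's aggregate I/O back to individual VMs.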
But you're building this product, as I understand it, on the NFS connection with VMware, and I'm surprised you chose that, because of all the areas in VMware it's probably the least developed.

Certainly, I think as an observer it would be fair to say that NFS functionality has lagged behind block access.

Right. There's no question about that.

From talking to people inside VMware, that gap has been closing, and certainly internally, my understanding is that they're now treated as peers, as first-class equals.

If it had been caught by the networking side, it would have been a different story, wouldn't it?

I suppose. But the reason we chose NFS was not the protocol per se; we actually can, and probably will, do iSCSI connectivity. NFS just was well supported, it was a straightforward process, and it allows thin provisioning; it's the simplest path to thin provisioning up front. If some other connectivity offered something similar, fine; we're not wedded to NFS or dependent on anything in particular there. It's an existing client that we could connect with.

Yeah, but both NFS and iSCSI map more naturally to virtual environments than something like Fibre Channel, which took a while to develop over the years. And Fibre Channel solutions are doing great today in that environment; the performance enhancements have been there. But true, NFS has been there forever, and therefore we've seen some good growth, and the NFS solution is very robust. The client actually works very well.

Okay, so your founder came from VMware. Can you tell us a little bit about the rest of the team?

Sure. We have other folks that came from VMware and Citrix. I myself was at NetApp for 10 years; I was the vice president of product management there.
We have, you guys met this morning, Ed Lee, one of our architects, who came from Data Domain. He wrote the original RAID driver at Berkeley back in the day, so he has a lot of experience with that. Prateek Wadhur, our vice president of engineering, was also at Data Domain; he was the guy that made the trains run on time, and that product is so easy to use. We have other folks from Brocade who did some of the original architecture work there. So it's actually this combination of folks with deep virtualization experience and deep storage experience, which I think has led us to do something that is obvious in retrospect, but that nobody else has done. It was that combination that allowed us to bridge those two worlds. We've been expecting, looking for other people to do something similar, and have really been surprised that nobody else has connected the dots. But I think it's that deep expertise on both ends that led us to this point.

We were talking to some CIOs recently, one in particular, and we asked him, what would you do differently about virtualization? One of his answers was very interesting. He said, we should have reorganized ourselves and gotten away from the storage gurus, the database gurus, the server gurus, and really made them generalists, responsible for the whole system.

Got to break the silos.

Got to break the silos. Are those silos a constraint on your business?

Well, any time you're trying to bring in a new approach to something, and I'm avoiding using the paradigm word, it obviously breaks glass, and organizational issues are one of the key things you have to overcome. So it certainly is unfamiliar. And yet I would say, of the people that we've talked to, the folks that understand virtualization get it immediately, and the people that are writing the checks get it immediately. They understand how this can have a profound impact.
And at the end of the day, as we were discussing this morning, it's applications that drive the business. So if you can deploy an application where literally, from top to bottom, everyone is speaking the same language, where the only objects that exist all the way from the application layer down to the bits sitting on disk are the same objects, you can actually talk to each other.

We had a fascinating interchange; I wasn't there, but it was reported by our SE, who deployed our first system at one of VMware's largest customers. We haven't been able to announce this yet. Our SE asked the storage admin and the VM admin, who were both standing there, so how big are your VMs? And the storage admin said, well, our VMs are 500 gig, because that's the size of the data store that they create; it's 500 gig, away you go. And the VM admin, this guy Sandeep Sakandar, verbally slaps him; he basically goes, no, our VMs are typically more like 25 gig. So here you have an extraordinarily sophisticated customer, and these two people work together all the time, and they're not speaking the same language. And the storage guy is working hard; there's a lot of work associated with presenting that virtual data store object. Yet to the VM admin, that's a necessary but not sufficient first step in this whole process.

So you really see your product going in and the virtualization admin just taking care of that whole storage environment. Not necessarily putting the storage admin out of business, but letting him manage much larger environments.

Yeah, though there's still managing multiple nodes and capacity planning, and storage is not all going to be in VMs. I mean, we are probably a niche player; it happens to be a big niche and it's growing, but we're only in VMs, absolutely.
One of the challenges we see in IT in general is that if IT doesn't get things done fast enough, you've got lines of business just grabbing a credit card and going out to Amazon. So now you've got another technology that can help that virtualization admin really do their job more efficiently.

Yeah, and since you mention Amazon: we have another customer who's actually running us in production now, and they have pulled a portion of their workload out of Amazon and created their own internal private cloud, in large part because of the radical shift in economics that we present to them. It's now cost effective for them to deploy their own private cloud, where previously the cost was just out of sight and it made more sense to pay the monthly fees to Amazon to have this hosted externally. And they have a very interesting approach to HA in their environment, which answers the question: with a startup, why would you run a startup's product in a production environment? The reason is that they have about three of our systems and keep multiple copies of the VMs on each one of those nodes. So at the application layer they have application-level HA, which is really what you want.

And what applications are they running on there?

They're web-facing apps.

Web-facing apps, okay. Excellent. So you're obviously very new; you must be working on a few other things to enhance your product.

Yes.

What sort of things aren't there at the moment that you're working hard to get?

The two key future directions for us are, one, making multiple nodes even easier to manage. Right now, if you want to expand or scale our systems, you scale them the same way you do with ESX hosts: if you run out of performance capacity or storage capacity, you add another 8.5-terabyte data store. We're working on ways to make that easy with large numbers of data stores, which storage providers are particularly interested in; we've talked to folks like that.

So let's just do a quick reset.
This is Stu Miniman with Wikibon. I'm here with Chris Bennett of Tintri and David Floyer of Wikibon, talking about some new approaches to storage specifically for virtualization environments. And Chris, one of the things we talked about earlier that I want to touch on a little more is that it's not so much about capacity; it's about performance. And from a management standpoint, I heard something from you that I'd never heard before about latency and IOPS and how I can really get visibility. Can you fill us in on that?

Absolutely. There are a bunch of things on our UI that are really cool. It's really fun.

You like your job, I'm guessing. You're having a blast.

I'm having a great time. It's really fun.

I'm guessing the marketing guys don't like the stealth mode too much.

Yeah, that's not so much fun. So on our UI there are two fuel gauges. One is traditional for storage devices: how much of the capacity have you consumed. The other one is very innovative: a performance fuel gauge. And as long as both of those fuel gauges are green, you can keep adding VMs to this node.

So what am I measuring from a performance standpoint?

It's actually a complicated algorithm internally; it's some combination of how many flash ops, how many disk ops, and how much footprint you've already taken in the flash.

So it's IOPS, and I believe there's latency also?

Yes. And to finish answering your question: let's say you have a performance problem and you need to diagnose what's going on. You can drill into our UI, with just a couple of clicks, on that particular vDisk or that VM, and you can see what the latency is in real time, as well as what the latency has been for the last seven days. Per vDisk, you can see that information.

That's awesome. I know we've gotten a lot of feedback that storage is broken, and how do I go and find that?
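The performance fuel gauge described above is said only to be "some combination of flash ops, disk ops, and flash footprint." A toy sketch of what such a gauge might look like follows; the weights, thresholds, and capacity limits are all hypothetical, not the actual algorithm:

```python
# Hypothetical sketch of a "performance fuel gauge" combining the three
# inputs mentioned above: flash ops, disk ops, and flash footprint.
# The real algorithm is described only as "complicated"; every limit and
# threshold here is an assumption for illustration.

def performance_gauge(flash_iops, disk_iops, flash_used_frac,
                      max_flash_iops=100_000, max_disk_iops=2_000):
    """Return the fraction of performance headroom consumed (0.0 to 1.0+)."""
    pressures = (
        flash_iops / max_flash_iops,  # load on the flash tier
        disk_iops / max_disk_iops,    # disk ops are the expensive path
        flash_used_frac,              # working-set footprint in flash
    )
    return max(pressures)             # the tightest resource gates the node


def gauge_color(load):
    """Map headroom consumed to the UI's green/yellow/red indication."""
    if load < 0.7:
        return "green"
    return "yellow" if load < 0.9 else "red"


load = performance_gauge(flash_iops=40_000, disk_iops=300, flash_used_frac=0.17)
print(gauge_color(load))  # green: safe to keep adding VMs to this node
```

Taking the maximum of the three pressures captures the "both gauges green" idea: the node is only as healthy as its most constrained resource.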
Is it better now? Let's throw some more hardware at it. Let's do something else. Now we understand it at the application level, so the storage guy can actually say: it isn't me; you've done something in the database; here it is, it's half a millisecond; what more do you want? Any questions? Away you go.

We've had numerous customers, and I don't want to exaggerate, several customers in my presence, while they're seeing the demo, essentially say: I've been asking for this for years. I want to be able to see this kind of information, and I haven't been able to see it before.

So Chris, to lead us out, can you tell us where we can find more information on Tintri?

www.tintri.com.

And can you spell that for us?

T-I-N-T-R-I. And the I has a little thing over it, right?

It's actually an accent: Tintrí. Tintrí is Gaelic for lightning, and in Gaelic it has the accent on the final I.

There you go. OK, so check out tintri.com. We look forward to following your progress. Thank you for joining us here at theCUBE.

My pleasure.