Moscone Center, in the North Hall, in the Hang Space. So, okay, sorry, we're back. This is Dave Vellante of Wikibon.org. This is SiliconAngle.tv's continuous coverage of VMworld 2012. We're here in the Hang Space. Stop by, come visit us. This is a panel that we have constructed on flash, and specifically on hybrid flash, in other words, the combination of flash and traditional disk drives. A lot of people have posited that essentially all this tiering, all this granular tiering, goes away, and you've got flash up front and a bit bucket in the back, and we're going to talk about that.

To my right is Kieran Harty, who's the CEO of Tintri. He's a Cube alum. Kieran, welcome back to theCUBE. Thank you. To his right is Suresh Vasudevan, who's the CEO of Nimble, another hybrid flash, what we call hybrid flash array, company. And to his right is Rohit Kshetrapal, who's the CEO of Tegile. Gentlemen, welcome. Thank you. Thank you for taking some time.

Now, as analysts, of course, we like to put you in buckets. I know you hate that. Of course, then you want us to size the market and so forth. But generally, you're in that category, Kieran, where you've decided to go to market with a combination product. You've got some flash, you've got some spinning disk. So let's start there. Why the combination, how do you guys position yourselves? We'll go down the line, and then we'll open it up.

Our positioning: first of all, we don't view it as even starting from flash or from disk. We view it as solving the problems of storage in virtualized environments. We're a storage array that's built specifically for virtualized environments, not general-purpose storage. And we use flash because it delivers much more significant performance than disk-based systems. Our approach is that 99% of the I/O should actually be handled from flash, so both reads and writes should be handled from flash.
And only the very cold data should be stored on disk, data that hasn't been accessed for a few days. So in our case, the reason we actually store things on disk is an economic one, but there's nothing in the architecture that would prevent an all-flash array in the future. And we believe that a caching approach, one in which you're just handling reads from flash, is not the right approach. You need to handle both reads and writes from flash.

So, Suresh, I first learned about Nimble a couple of years ago at a Storage Networking World, and I was very much intrigued. Just picking up on what Kieran said, you've got the logic to manage those various components, right? Talk a little bit about that, and where you guys fit.

Sure. I would first echo the comment made earlier that we do believe hybrid storage systems are going to account for the bulk of mainstream application data storage as we move forward, and predominantly for two broad reasons. The first: we believe that while flash is very attractive on certain characteristics, disks have complementary and attractive characteristics of their own, and an architecture that can flexibly leverage both is ideally suited to delivering more efficiency. The key here is flexible leverage. We recently introduced a capability where you can change the ratio of flash to be extremely low or extremely high depending on your workload needs. And the key, again, is to non-disruptively change the mix of how much flash is made available to a workload. Having said that, we now have a very rapidly growing customer base, over a thousand deployments in the space of just two years. Much of that is because of our economics of delivering both high performance and high capacity without having to manage tiers manually.
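The policy Kieran describes, serving both reads and writes from flash and demoting only data untouched for a few days to disk, can be sketched as a toy block-placement policy. This is a simplified illustration under assumed names and thresholds, not any vendor's actual implementation:

```python
import time

COLD_AGE_SECONDS = 3 * 24 * 3600  # "a few days" without access: demote to disk


class HybridStore:
    """Toy hybrid flash/disk store: all writes land in flash, all hot reads
    are served from flash, and blocks idle for a few days migrate to disk."""

    def __init__(self):
        self.flash = {}  # block_id -> (data, last_access_time)
        self.disk = {}   # block_id -> data

    def write(self, block_id, data, now=None):
        now = now if now is not None else time.time()
        self.flash[block_id] = (data, now)  # writes always go to flash
        self.disk.pop(block_id, None)       # any stale disk copy is superseded

    def read(self, block_id, now=None):
        now = now if now is not None else time.time()
        if block_id in self.flash:          # hot path: served from flash
            data, _ = self.flash[block_id]
            self.flash[block_id] = (data, now)
            return data
        data = self.disk.pop(block_id)      # cold path: promote back to flash
        self.flash[block_id] = (data, now)
        return data

    def demote_cold(self, now=None):
        """Background task: move blocks idle past the threshold to disk."""
        now = now if now is not None else time.time()
        for block_id in list(self.flash):
            data, last = self.flash[block_id]
            if now - last > COLD_AGE_SECONDS:
                self.disk[block_id] = data
                del self.flash[block_id]
```

In this sketch the economics follow directly: flash capacity only has to cover the working set touched in the last few days, while disk holds everything else.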
Automated provisioning of when to use flash and when to use disk has allowed us to deliver both high performance and high capacity at a much lower cost than anybody else, and that's one of the fundamental benefits we bring to bear.

Now, Rohit, when I was preparing for this panel, with the emails going back and forth, a lot of people said there are big wars between these guys. But you guys are not each other's enemy. It's the big installed base, that's really the opportunity, right? Talk about that a little bit.

The good part is that there is a large part of the market that is ready for the technology in every way, shape, or form, right? Kieran talked about the virtualization aspect of it. Virtualization, in many ways, has dramatically changed what we need from storage over the last few years. It needs high IOPS, and if it needs high IOPS, there has to be a better method than just taking the data, striping it across lots and lots of disks, and having lots and lots of heads attack it at the same time. So in essence, what are we all saying here together? Flash has been around for 15 years. What does it do well? It does caching extremely well. We all know it's extremely fast. But the other aspect is that it doesn't store data all that well. So if we combine these two technologies, what do we get? We get the best of flash, and we get the best of the hard drive in this model. In essence, you've got a Prius that performs at a Tesla level, and at that level you can also store the data and do much better than the incumbents did in the past.

So I have some general questions, and the three of you can feel free to answer them, or we can continue to move on. My first one is, when I look at your websites and I look at the references, I see a lot of mid-sized, mid-market type customers. Is that the right target? Is that the right use case? Or is there some inhibitor to the larger enterprise customers? What do you guys think about that?
So we're definitely targeted at the mid-sized enterprise and above. What happens with larger enterprises is that the sales cycle is longer, so it may take a few years to get significant deployments, but we actually have quite a number of Fortune 500 companies using our systems: large animation studios, some of the biggest companies in the world. We've been incredibly gratified by how rapidly the technology has been adopted at that enterprise level. We're definitely not an SMB play; we're mid-sized enterprise and above.

How about you, Suresh?

Yeah, I would say that of our 600-plus customers, 95% tend to fall in the category of mid-sized enterprises. We now have a couple of the world's largest tech companies, one of the world's largest media companies, and large banks as well among our customers. I think for us the key is finding customers who have mission-critical needs, like the large enterprises, but who are ready to move beyond an incumbent. That is the ideal target profile. Think of it more as a go-to-market route rather than as where the product fits. That's the way we tend to think about it.

If I use that automobile analogy one more time, but in a different way: you have a payload, and you have a truck that carries the payload. What we're saying is that there are different levels in a product family that you need, from the mid-tier up into the large enterprise where we're focused, whether it's an airline system or a smaller organization that just has a new need for virtual desktops. All of those must be satisfied by our product lines in order to move from a small payload to a large payload, which may be a very big truck, in that model.

Clearly the market opportunity is EMC's and NetApp's customers, IBM, HP, the big masses out there. NetApp offers a lot of cache. Let's take them as an example. They've got Flash Cache, Flash Accel in the servers.
They've got these things they just announced called Flash Pools. Why do you not consider them, or do you consider them, a hybrid storage array, and why do you guys feel you have a superior approach?

So we would actually position it this way, and this applies both to NetApp and EMC: essentially they're taking an architecture that was built 20 years ago and they're bolting flash onto it. So they're either doing caching or they're doing some level of tiering. Flash Cache and Flash Pools are both products that do that kind of caching. It's essentially not changing the underlying system, so they're not able to take advantage of things like deduplication, compression, or MLC flash. Fundamentally, what we believe is that you have to start from the ground up. You can't start with an architecture that was designed for disk-based systems. You need to start with an architecture that's built for flash, built for multi-core processors, and built for virtualization.

I guess the way I would put it: if you think about any storage system, its fundamental building block is what I would call the data layout, how you map the data onto the underlying media. When you take an existing array and you add flash to it, you're rethinking the layout in the context of the added flash layer, but you're not rethinking the layout of the disk system that already existed. What we collectively are able to do is to ask, how do we design a data layout that optimizes both for the properties of what flash does well and for the properties of what disk does well? It's far easier to build a file system optimized for one media type, whether it's an all-flash array or an all-disk array; to appropriately optimize for both in a flexible manner is much, much more difficult, and that's why you get dramatically better efficiencies than a NetApp with a Flash Cache or an EMC with a FAST tier, and so on.
So it's sort of designed with the specific use case in mind, as opposed to stuffed on or bolted on. Let's look at what Kieran was talking about, and you mentioned it specifically: if we have a NetApp or an EMC, they have flash in their boxes too. So what's different?

Well, fundamentally, the traditional approach has gotten IOPS by taking data and striping it across every one of those disks; that's the way you get IOPS. Let's fundamentally flip that on its head and say, let everything go through flash as a read-write cache. Let's take things and put them in memory, and once they're in memory we get those IOPS, and then we go store the data on the disk itself. Now, we can go many steps in that layering. The first step is that you can do exactly what they do, by saying there's a hot spot and pinning that hot spot specifically into the SSD cache. The second layer is that reads and writes go through the caching engine as well. And the third part is the file table, the metadata: why can't we add that into the cache as well? When you do those three things, you get the read-write path and fast access to the file table from cache, you're able to do things like deduplication, and deduplication reduces the image size so you can run much more in memory. So we fundamentally flipped the NetApp or EMC architecture in three different ways from what traditional disk did first.

So I want to come back to that, Rohit. Over the last 15 years, function has sort of migrated out of the host server onto the disk array, and now we're seeing it migrate back up. David Floyer at Wikibon has been quite vocal about this, saying that the place to manage the metadata is the fast server, not the slow storage. How would you respond to that?

So there are different places to manage the data. One is the virtual image that's running in some way, shape, or form.
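The three layers Rohit walks through, pinned hot spots, a general read/write cache, and cached file-table metadata, can be sketched as a toy lookup path. This is an illustrative simplification with assumed names and structures, not Tegile's actual caching engine:

```python
class LayeredCache:
    """Toy three-layer lookup: pinned hot blocks, a read/write cache
    for everything else, and cached filesystem metadata."""

    def __init__(self, backing_store):
        self.pinned = {}    # layer 1: admin-pinned hot blocks, never evicted
        self.cache = {}     # layer 2: read/write cache
        self.metadata = {}  # layer 3: cached file-table entries (name -> block id)
        self.disk = backing_store

    def pin(self, block_id, data):
        self.pinned[block_id] = data

    def write(self, name, block_id, data):
        self.metadata[name] = block_id   # the file-table update stays in cache
        if block_id in self.pinned:
            self.pinned[block_id] = data
        else:
            self.cache[block_id] = data  # the write is absorbed by the cache
        self.disk[block_id] = data       # and persisted to disk behind it

    def read(self, name):
        block_id = self.metadata[name]   # file-table lookup from cache, no disk seek
        if block_id in self.pinned:
            return self.pinned[block_id]
        if block_id in self.cache:
            return self.cache[block_id]
        data = self.disk[block_id]       # miss: fall through to disk
        self.cache[block_id] = data      # and refill the cache on the way back
        return data
```

The point of caching the metadata alongside the data is visible in `read`: the file-table lookup never touches disk, so only a genuine data-cache miss pays a disk seek.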
If you're sitting in a single server infrastructure, that's normally a restricted environment running one application. But sitting here at VMworld, the whole point is that we've taken the data and distributed it from a centralized infrastructure: virtualization now allows us to say, I can distribute computing, I can distribute storage. Why go back to a model that says let me optimize just one component, the server, when metadata belongs in every section of this? Storage requires its metadata and file system to be optimized, and you may have transaction aspects where metadata is optimized as it relates to the server itself. So there are different places for metadata. The good part is we all understand metadata and agree it's absolutely key to making scalability happen.

So what about that, Suresh? Can you send hints from the storage to the server?

I would say two things. One, I absolutely believe that flash on the server, as a segment, as a use case, is going to grow dramatically. And it takes two forms. There are applications that are fully contained within the server, where application data management is now the burden of the application itself, not the flash card, if you will. Then there are mainstream applications where flash on the server will act as a read cache, and I think that increasingly, as read caches on the servers get larger and tend to use flash, what you'll see is that the storage system becomes burdened far more with handling writes, and with handling the things the server could not contain in its cache. So it needs fast performance, but much more, it needs to be the stable repository where things like data protection and disaster recovery occur. In that context, I believe the storage system is best placed to manage that cache and the metadata; it can reach in and say what you should do there. And I believe that will happen.
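Suresh's split, a server-side flash read cache in front of an array that stays the stable repository for writes, is essentially a write-through read cache. A minimal sketch under assumed names and a simple LRU policy, not Nimble's actual protocol:

```python
from collections import OrderedDict


class HostReadCache:
    """Toy server-side flash read cache in front of a storage array.
    Reads are served locally when possible; writes always go through
    to the array, which remains the durable repository."""

    def __init__(self, array, capacity=4):
        self.array = array        # the back-end storage system (here: a dict)
        self.capacity = capacity  # how many blocks the host flash can hold
        self.lru = OrderedDict()  # block_id -> data, least recently used first

    def read(self, block_id):
        if block_id in self.lru:
            self.lru.move_to_end(block_id)  # hit: refresh recency
            return self.lru[block_id]
        data = self.array[block_id]         # miss: fetch from the array
        self._insert(block_id, data)
        return data

    def write(self, block_id, data):
        self.array[block_id] = data         # write-through: array owns durability
        self._insert(block_id, data)        # keep the local copy coherent

    def _insert(self, block_id, data):
        self.lru[block_id] = data
        self.lru.move_to_end(block_id)
        while len(self.lru) > self.capacity:
            self.lru.popitem(last=False)    # evict the least recently used block
```

Because every write goes straight through, the server stays effectively stateless, which is the advantage Kieran raises next: losing the host cache loses performance, never data.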
If I could just add something to that: obviously you're going to get the raw fastest performance by being on the host, and flash will be on the host. But I think there are a lot of advantages in having stateless servers, and there are a lot of issues once you start managing state on the server itself. So we do agree with the notion that there will be read caches on the host, but the number of applications that will actually need that level of performance is likely to be fairly small for quite a long time. There are relatively few applications that need more than 100,000 IOPS. The real problems are how you make systems simpler to manage, and then the issues VMware was talking about today in terms of the software-defined data center: it's about agility, it's about moving faster, it's about being able to automate. Those are the problems; it's not just a performance problem on the host side.

Well, and you have a particular VMware angle. What about workloads, Kieran? With these hybrid arrays, are they well suited for general-purpose workloads, or database workloads? Are they Microsoft-specific, or are you living in a world of I/O blenders?

Absolutely. What we see is SQL Server, Oracle, VDI, SharePoint, Exchange: the mainstream applications. What it's not is the applications that may require a few hundred thousand IOPS, or a million IOPS, or something at that level, but that's really a small part of the market. We're interested in the mainstream market, where data management is really important and where ease of deployment and simplicity are absolutely critical.

And Suresh, what about you in terms of workloads?

We have an advantage: we capture every system stat every five minutes, and we're able to segment what our customers are doing with all of our systems.
Among our top workloads, the most prevalent is databases, often SQL Server dominated, but increasingly Oracle as well. Tied for numbers two and three are Exchange and VDI; those are now our second and third most frequent.

So you guys will be at Oracle OpenWorld? Indeed. Eventually Oracle becomes more and more significant for us as a workload, as an application, as a database if you will, as we start seeing penetration into larger customers; as I said, a small share of our base tends to be large enterprises today.

We're going to give you the last word. We're out of time, so please wrap it up for us.

I do think virtualization is ideal from a flash perspective. You're able to absorb that I/O load and take care of it. In a virtual desktop environment, it's important to reduce the image size and make that desktop affordable as you go to virtual desktops. So you may have a $250 storage cost per desktop today, but when you're able to dedupe it and bring it down, it becomes a $40 desktop for you. Now it becomes something you can actually implement. I agree with Suresh that databases are a big part of the engine as well, and databases have always required high IOPS, so that's just part and parcel of what naturally flows from our systems. Exchange is another one, and deduplication there also helps substantially. And then let's not forget the file share itself. File shares are very much required and are a large part of the entire solution, whether for virtual desktops or just general services. The last part of this is that big data analytics is also something we're all a natural fit for. So when we combine all of this, taking cache and making it metadata-based, cache-based, as well as pin-based, we're able to give you the scalability that the others have not been able to deliver.
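The $250-to-$40 per-desktop figure implies roughly a 6x capacity reduction from deduplicating largely identical VDI images. A back-of-the-envelope check, where the image size, $/GB, and shared-block fraction are illustrative assumptions rather than figures from the panel:

```python
# Back-of-the-envelope VDI storage cost, assuming N desktops cloned
# from a mostly identical golden image.

desktops = 100
image_gb = 25       # assumed size of each desktop image
cost_per_gb = 10.0  # assumed $/GB, chosen so the raw cost is ~$250/desktop

raw_gb = desktops * image_gb
raw_cost_per_desktop = raw_gb * cost_per_gb / desktops  # $250

# If ~84% of every image is common blocks shared across all desktops,
# dedupe stores that common data once, plus each desktop's unique 16%.
common_fraction = 0.84
deduped_gb = image_gb * common_fraction \
    + desktops * image_gb * (1 - common_fraction)
deduped_cost_per_desktop = deduped_gb * cost_per_gb / desktops  # ~$42

print(f"raw: ${raw_cost_per_desktop:.0f}/desktop, "
      f"deduped: ${deduped_cost_per_desktop:.0f}/desktop")
```

With these assumed numbers, 2,500 GB of logical data shrinks to about 421 GB stored, which is what moves the per-desktop cost into the range where VDI becomes affordable.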
Yeah, and there's the potential to bring together analytic applications and transactional applications, and to feed analytics into the transactional applications in near real time, before you lose the customer. Let's define that as real time. There are huge potential productivity impacts. That's what you guys are really bringing to the party, in my opinion: major disruption in business impact beyond just straight TCO, and that, I think, is what's exciting.

All right, Tintri, Nimble, Tegile, gentlemen, thank you very much for coming on theCUBE. I know it's a long session, but we jam-packed a lot of info in there. Keep it right there; this is SiliconAngle.tv's continuous coverage of VMworld 2012, and we'll be right back. Thank you very much. Thank you.