Live from San Jose, California, it's theCUBE, the Adaptive Flash launch, brought to you by Nimble Storage. Now here are your hosts, John Furrier and Stu Miniman.

Welcome back everyone, I'm here live in Silicon Valley for theCUBE's exclusive coverage of the Nimble Storage product launch. I'm John Furrier of theCUBE, with Wikibon analyst Stu Miniman. Eric Burgener, Research Director at IDC, is our guest analyst. This is going to be some fun; now we can really get into it. Welcome to theCUBE.

Thanks very much.

So obviously flash is a hot sector. Break down for the folks out there the different market segments in flash. There are always conversations like, oh, we have flash, we have a little bit of flash, we're hybrid, we're all-flash array. Just quickly lay out the horses.

Yeah, so the legacy area for flash has been the dedicated applications that require extremely high performance. But as virtualization has encroached, HDD just does not handle the kind of IOPS throughput needed in those environments, so we needed to move to something better matched to the workload. You've got mixed workloads now. You still have some of those dedicated high-performance application environments that need pure flash, but there are also environments that need a better mix of performance and capacity, and I think that's where the hybrid arrays come into play.

So where do you see them deployed primarily?

Databases, high-performance database environments, mixed virtual workloads, and of course virtual desktop infrastructure, a very popular application environment for flash.

So no one-trick pony; there are a lot of different use cases and a lot of different approaches. What does Nimble have that's unique relative to the rest of the horses on the course, as Dave Vellante always says?

Yeah.
So one of the things that's very interesting about what Nimble is doing is the ability to host mixed workloads in these environments and continue to provide the performance necessary as you scale the configurations. That requires a lot more than just raw flash performance. You're looking at things like flash management: how you're doing wear leveling and ensuring endurance. What kind of quality of service do you have, so that you can put the performance where you need it and ensure that as spikes occur you don't develop the noisy neighbor problem, where other applications running on that same platform get impacted. And one final point on that: it's one of the key areas that's going to allow arrays incorporating flash to provide a much better total cost of ownership relative to HDD-based boxes. You're never going to get close on dollars per gig as long as you've got a single application on that flash array, but once you can put mixed workloads in those environments, you can really start to bring the total cost of ownership down.

So Eric, over the last six years we've seen tremendous innovation and adoption of flash. I remember when it first came out, people thought of flash as just this single market, and of course there's flash on servers, there's flash retrofitted into older technologies, there's lots of things in between, and in many ways the lines are blurring even between hybrid and all-flash. Some of the old architectures are being rejiggered. What advice are you giving to practitioners out there? How should they look at this vast array of traditional vendors and startups to choose the right architecture?

Yeah, so server-side flash, if you look at how that plays in the data center, is really all about the lowest latency requirements.
You don't have a network hop in the way, but you've also got problems to deal with on the high-availability and resiliency side if you're looking to deploy it in a data center, so that has some limitations. Even if you try to create an environment where you mirror changes to another server, you've still got a network hop to deal with. And as soon as a network hop gets introduced, you can just as well look at a shared array, where you have the same types of latencies but much better high-availability and resiliency characteristics: you can point that storage at different servers if things fail, and that's a huge advantage in the data center. So there are two basic deployment methods, in the server or on the network. In the server is going to be low latency but doesn't really provide that kind of resiliency. Big data workloads, things where you're trying to do very rapid analytics in real time but don't necessarily need the data to be persistent, are a great fit for server-side. The shared-network arena is really the bailiwick of enterprise applications like databases, mail, and a lot of the mixed workloads running in virtual infrastructure.

And I like how you laid that out. We look at the server-side stuff, and there are ways that we're going to network and lower that latency, doing performance things like RDMA and Microsoft SMB3; we're really seeing high-performance, low-latency networking coming, shortening that cable in between. When you look out, what do your forecasts show for the market size of the server-based piece versus the storage array market? Because it's usually a different price per gigabyte and different overall capacity that goes into those.

So that's absolutely true. Over the course of the next couple of years, we see the internal flash market. That's the one we use to characterize the server side.
That's going to be in the range of two to three billion dollars over the next several years. If you look at the array market for products like this, Ashish Nadkarni, one of the other analysts who spoke earlier today, showed one of our forecasts, but let me give you two quick numbers to convey the difference from a server-flash market of a couple of billion. By 2017, we're going to see a $1.6 billion market for all-flash array products, growing at about 58% a year. But on the hybrid side, we're going to see a market of $14.3 billion, about nine times the size. We expect it'll grow at about a 21% CAGR, so not as fast as the pure-flash side, but clearly a much larger market. And I think a lot of that is driven by the total cost of ownership the hybrid arrays provide, which today is better than what you can get on all-flash. The all-flash people are doing things to try to close that gap, but it's a pretty big gap today, and the hybrid architectures seem able to deliver performance where you need it and capacity where you need it, at what today is a better mix on a dollar-per-gig price.

So in that big piece of the market, the hybrid market, do you differentiate between older architectures that have redone their software stack and more modern architectures?

We do, and that's actually an excellent point. The newer architectures that were built specifically for flash we call flash-optimized storage architectures, and those are the types of products we're tracking when we talk about a hybrid array. We're not necessarily referring to a product that, let's say, shipped five years ago and that you can put a couple of SSDs in. That doesn't really meet the definition.
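The market figures quoted above can be sanity-checked with a quick back-of-the-envelope script (a sketch using only the numbers from the interview; the crossover projection is purely illustrative and assumes both growth rates hold indefinitely, which real forecasts do not):

```python
import math

# 2017 projections quoted in the interview, in billions of dollars.
all_flash_2017 = 1.6   # all-flash array market
hybrid_2017 = 14.3     # hybrid flash array market
all_flash_cagr = 0.58  # ~58% annual growth
hybrid_cagr = 0.21     # ~21% annual growth (CAGR)

# "About nine times the size": 14.3 / 1.6 = 8.9375.
ratio = hybrid_2017 / all_flash_2017
print(f"hybrid is ~{ratio:.1f}x the all-flash market")  # ~8.9x

# Hypothetical crossover: years until all-flash catches hybrid
# if these growth rates persisted (illustrative only).
years = math.log(ratio) / math.log((1 + all_flash_cagr) / (1 + hybrid_cagr))
print(f"crossover in ~{years:.0f} years at these growth rates")  # ~8 years
```

The "about nine times" framing checks out, and the growth-rate gap shows why analysts at the time still saw hybrid as the larger market for years to come.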
And by the way, there's a huge distinction in the performance that you will get from the same SSD device deployed in a legacy architecture versus what you can get out of that exact same device deployed in a flash-optimized architecture. In talking with customers and vendors about the tests they've run, we're seeing a 5 to 10x difference in throughput for the same device in the newer architecture versus the older one. So there's a huge advantage from a cost point of view in leveraging flash-optimized architectures if cost is a concern, which for most people it is.

How do you size the market? Flash is actually getting more pervasive in the enterprise, down into the mainstream, and these guys are really well positioned with this whole mid-market story. Prices keep coming down; how is reliability doing on that front? And there are different versions of flash. How do you, as researchers, try to tackle that? How do you bite off the analysis there?

Well, we're taking a look at shipments, card-level shipments going into servers. That's basically what we characterize as internal, and there's a new, very nascent segment there that I would call memory channel storage, where you're attaching flash chips directly to the memory channel, so they're not configured as SSDs. That's all host-based, and we look at it as internal. On the external side, there's an overall external storage market which includes everything, and flash is some component of that. We talked about the numbers a little earlier: the overall storage market in that same time frame is going to be in the $30 billion range, so not quite half of what ships by 2017 is going to be on the flash side, but it's getting pretty close, and it's clearly on the upswing, growing much faster than the rest of the market.

You're in a good sector over there at IDC.
It's a very fun place to be. There's a lot going on there right now.

There are some other declining businesses out there. We've got to ask you a final question for the folks watching: translate what this Nimble story is about, obviously with the big product launch here that you're supporting.

What's very important about this announcement is the expansion of the flash capacity capabilities across the product line. They've already got a flash-optimized architecture; in fact, they were one of the very early entrants in that arena. Using a hybrid approach, they had the ability to configure less flash in these environments, and as you start to create denser mixed-workload configurations, you're going to need more of that IOPS-intensive performance. The other key thing, as you start to do mixed-workload consolidation, is that you need a lot more capacity in these systems. One of the other things Nimble announced today is the ability to cluster four of their systems under a single system image: you can get a maximum of a couple of hundred terabytes of total capacity and up to 64 terabytes of flash.

So you can do some tuning and drive some efficiency within the environment.

Absolutely. They provide a lot of flexibility to configure how much SSD and how much HDD you really want in these environments based on what you're doing, which gives the customer another lever to adjust the cost-effectiveness of the solution.

What's the pressure at the top of the stack? Is there any particular application? You mentioned workloads; obviously the more diversity in the workload, the more flexibility you need to be adaptive, but what's driving the pressure from the top? Is it the apps, DevOps? Is it cloud?

I think it's more on the application side. VDI is a very IOPS-hungry environment, and as people deploy that technology more often, you really do need a lot more IOPS and lower latencies.
That's what it takes if you're going to provide a desktop that performs close to, let's say, an Apple with an SSD on your desk. I think that's one of the critical ones. Big data and analytics is also critical. That fits the server-side offerings a little better, because you don't necessarily need all the resiliency characteristics that you do for things like databases. But clearly there are huge transactional databases, trading-floor apps, that require 100-microsecond latencies, and that's another application environment driving it.

Eric, thanks for your commentary; IDC's got a good grip on the market. Thanks for coming and sharing your perspective. This is theCUBE. We'll be right back with our next guest. Exclusive coverage of the Nimble Storage launch here in Silicon Valley. We'll be right back after this short break.