Live from Las Vegas, Nevada, it's theCUBE at IBM Edge 2014. Brought to you by IBM. Now here are your hosts, John Furrier and Dave Vellante.

Welcome back to Las Vegas, everybody. This is theCUBE, we're here live at IBM Edge. We're going to talk Flash with Woody Hutzel. He is a manager of the Flash Systems portfolio and strategy at IBM. You've been in the Flash business for quite a long time. Woody, great to see you, welcome to theCUBE.

Oh, thanks very much for having me.

So, you know, I think you said off camera that you've been in the Flash memory business, or the memory systems business, for about 14 years now.

That's correct. Not always Flash, though. It started as RAM-based storage.

Right, okay. So, I go back even further. I'm kidding. Back into the early 1980s with the big mainframe solid state disk. So we've seen this come around. Of course, the way you made those persistent back then was quasi-persistent, I guess, with battery backups, right?

That's correct.

So it must be interesting to you. I mean, 14 years is a good time frame. You've got a good observation space. And it was always an interesting business, but a tough business, you know? Niche-y.

Very niche-y. The very highest-performance, super-high-performance database work. Cool, but it never really exploded. And then all of a sudden you get this persistent Flash layer, consumer devices, and boom.

Talk about the experience.

I think the great thing about working in the memory system business is that all along, whether we were charging $5,000 a gigabyte for RAM or today under $10 a gigabyte for Flash, the impact on the customer's environment has always been really dramatic. So, at the time, why would somebody pay $5,000 a gigabyte? Well, it's because it could truly transform their business. It would allow them to pursue things, or implement strategies, or support customer workloads that they never could have done before.
And now, with the cost declining so dramatically because of the low cost of Flash storage, we can support that many more customers and have a much broader impact.

Yeah, and so those sound like pretty dramatic numbers, and they are, but the other piece that's in a way just as dramatic is the delta between spinning disk and that memory. That's really started to compress too. You're seeing spinning disk cost per bit come down at maybe what, less than 30% a year, 20, 25% a year, and Flash coming down, what, 50-plus percent?

Yeah, I think it depends on the market you're looking at, but the raw Flash is certainly declining very rapidly. I think for the Flash we tend to use in the enterprise, the projections I see are probably more in line with 25 to 30%, but nonetheless, it's very encouraging.

No, is that because you're just trying to keep prices up, or what?

That's one of my first goals, yeah. You know, but practically speaking, we're already at a point where even if the price per gigabyte of disk is still lower than the price per gigabyte of Flash, we can still make economic sense of deploying Flash in the data center, because we allow them to replace racks of disk drives with very small instances of Flash, and as a result, lower their acquisition costs and lower their long-term total cost of ownership.

And I know, certainly, you're not the only company to do this, but you're trying to focus the market not on cost per gigabyte, but on the value and the total cost of ownership, and that's the right thing to do. But having said that, if Flash comes down close enough, as you just pointed out, the economics are just going to overwhelm you. And then certainly, all things being equal and the price the same, you're going to go with the better technology: the faster, lower power, smaller. And that's what's happening.
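The "replace racks of disk drives with very small instances of Flash" argument can be sketched as simple arithmetic: when a workload is constrained by IOPS rather than capacity, disk forces you to buy many spindles you don't need for space. The per-device figures below are illustrative assumptions for the 2014 era, not IBM's actual numbers.

```python
# Sketch of the "replace racks of disk with a small slice of Flash" economics.
# All capacities and IOPS figures are illustrative assumptions.
import math

def devices_needed(capacity_tb, iops, dev_capacity_tb, dev_iops):
    """Devices required to satisfy BOTH the capacity and the IOPS target."""
    return max(math.ceil(capacity_tb / dev_capacity_tb),
               math.ceil(iops / dev_iops))

# Hypothetical workload: 20 TB of hot data that must sustain 200,000 IOPS.
capacity_tb, iops = 20, 200_000

# 15K-RPM disk: ~0.6 TB and ~180 IOPS per spindle (assumed figures).
disks = devices_needed(capacity_tb, iops, dev_capacity_tb=0.6, dev_iops=180)

# Enterprise Flash module: ~1 TB and ~50,000 IOPS each (assumed figures).
flash = devices_needed(capacity_tb, iops, dev_capacity_tb=1.0, dev_iops=50_000)

print(disks, flash)  # far more spindles than Flash modules
```

Even at a higher price per gigabyte, the Flash configuration can come out ahead on acquisition cost, power, and floor space once the spindle count is driven by IOPS rather than capacity.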
I mean...

Absolutely. So what we're seeing when we talk to our large customers, who buy Flash and then in turn price it to their internal customers, is that the price for our tier zero Flash system, the highest performing products in the market, is marginally higher than the tier one disk solutions that they're selling internally to their users. So what they're telling us is, why not? Why are we not already moving to all Flash for our performance-intensive applications?

Right, so why not? Certainly Flash is replacing... I think we forecasted in 2009, David Floyer actually did the forecast. Back in 2009, he said that by 2014, so-called high spin speed disks, high performance disks, will be dead; Flash will kill them. Now, not quite there yet.

Not quite there.

Not bad for 2009. And you can see that day coming. But that said, why don't Flash systems completely replace these tier one systems? It's the functionality of the tier one systems, isn't it, that isn't in the Flash? Or talk about that a little bit.

So I think actually what we're starting to see today is that increasingly Flash systems are taking the place of tier one systems in the market, the way we think of a traditional storage monolith. Essentially, what we think of as the tier one market is reorganizing, and it's reorganizing because the people who got into tier one originally may have done it for a variety of reasons. Some may have gotten in for purely performance reasons, some may have gotten in for storage services reasons, and some may have gotten in for centralization reasons. And what I think is already happening today is that those who got in for performance reasons are already moving off to all-Flash appliances. So that transition is already occurring.
So what we've done at IBM with our FlashSystem product is that, for the people who are services-centric, who want the rich feature sets like snapshots and replication and data reduction, we now pair our FlashSystem product with the Storwize software stack and can offer a rich mix of services and performance in the same solution, thus enabling yet another category of tier one customers to move to a new environment.

Yeah, I want to talk about that. I want to come back and talk about that, because your strategy is somewhat unique there. But if you had to do the "why buy" pie for tier one and you had to split it, let's just make it simple: those that buy for performance, those that buy for services, knowing that some guys buy for both. But if we had to just simplify it, what percent of the market would you say is the performance guys versus the services guys?

So I think there are a number of trends at work. What I would suggest, if I look at it more from a trend basis, is that the trend is more are buying it for performance, because in some senses a lot of the services are moving to other layers in the stack. So for example, a lot of services are happening at the application layer, or sometimes it's within the database management system, or sometimes it's within, say, the virtualization stack. As a result, we see a lot of customers who say, I don't need as many services in the storage layer, I can get them in the application stack. And so, as a result, there are even more dynamics at work here.

That's interesting. So that brings up a whole other discussion. Where should the storage management be? In the application, or the storage array, right?
And I think there are very clear benefits when the application is providing those storage services. From the standpoint of integration, if the application has knowledge, recovery can oftentimes be more facile and more reliable. But it's not conducive to a general-purpose horizontal infrastructure that can support applications across the portfolio. If I want to do Oracle and Microsoft and IBM DB2 or whatever, you know, SAP, I'm not going to do that. I'm going to have to still provide those services within each of those applications. So again, that brings up another subject. Where do you see customers leaning? Is it sort of mixed, or is there a clear trend there?

Yeah, I would say it's pretty mixed. And I think it's because some people have preferences. Some prefer storage services to happen at the application layer, some prefer it to happen at the storage layer. And then some of it is just fundamental changes in the technologies. So yeah, I think we're going to see a mix going forward.

Well, if you see what Oracle does with ASM, I mean, that's going to happen. That makes a lot of sense. You see some of the things that Microsoft is doing with Exchange; it makes a lot of sense to have some of those storage services in there. But some of the more general-purpose services don't make sense now, or may not make as much economic sense. I want to come back to that notion, one of the things you're seeing, and I'd love to get your take on the competition: a lot of the startups making a lot of noise.

Yes.

Features, immature stacks. And then you've got other companies. Take EMC: they buy XtremIO, immature stack, a lot of data services strewn about that they're trying to pull together. And then you've got IBM. Acquires Texas Memory Systems and says, okay, we can put this thing behind an SVC, which has a mature stack. Yeah, maybe there's going to be a little bit of overhead there, but it's negligible.
The customers I've talked to, and I've talked to many, including on theCUBE (we talked to Sprint, and then many, many others), say that given the performance gains you get, the overhead is so negligible that it's not an issue. So talk about that strategy. Is it a long-term strategy? Is it a stopgap until you can build a more robust stack on the software-defined side? What's the direction?

Right, yeah. So I mean, I think the strategy today is to take advantage of what are already very mature software stacks, so there's not really a strong incentive to go out and develop something new, because the things we have today are already proving to be, as you mentioned, very low overhead when mixed and matched with FlashSystem. As a result, when you compare us to competing solutions, we actually deliver lower latency, higher I/Os per second, and as a result, better application acceleration than these companies that are, quote, "starting up" and have, quote, "developed for flash."

Okay, so if I hear you, if I can interpret that, let me put forth a premise and you tell me if it's right or not. Because virtualization, storage virtualization, virtualization of the underlying hardware, is fundamental in a software-defined strategy, you've got the SVC technology to do that, or the V7000. That's going to be a fundamental staple of your software-defined strategy, so there's no reason to try to build a parallel stack. You're going to lean on the SVC to do that.

Absolutely.

Okay.

Yeah, that technology is rock solid.

And what do you tell the client who says, yeah, but I don't want another appliance, you know, to manage? Right? Why don't you guys build that in software? What do you say to them?

So I think the evidence is mounting that a software-defined flash solution, essentially where you have servers acting as storage, is not the best approach for flash.
Because frankly, flash provides such exceptionally low latency that it's best implemented with a hardware-accelerated architecture. So that's one of the things we've really done with FlashSystem: we've taken out all of the noise, decreased the latency as much as possible, so that what the customer is experiencing is the best flash has to offer. And then where the customer needs services, we can add in the SVC layer.

So I love when you talk to folks with "strategy" in their titles. You know, strategy was out of favor for a while; now it's coming back in a big way. Because you have time to, well, you make time to look at what's happening and the trends in the marketplace. And one of the things that we've observed, John, many times, is that for 15, 20 years we've seen function move away from the host server out to the SAN, and for a lot of good reasons: protection, we needed to offload the host, we had to get the data off site, whatever it was. Now you're starting to see function move back, and flash is a big catalyst there. So I want to get your opinion on this notion of what Stu Miniman, or David Floyer, coined as the "server SAN." The idea that you have some direct-access device, let's say it's flash, interconnected through, I don't know, some high-speed interconnect, let's say it's Ethernet, or maybe something else, doesn't have to be Ethernet. And you pool that flash, and essentially you have a software-defined stack on top of it. As a direction for the storage industry, do you see that as viable? Do you see that as a niche?

So I think it's certainly viable, but I would say it's very nascent, right? I mean, this is an interesting topic. I think there's a lot of venture money going into the server SAN concept. I think it makes the most sense when you have commodity elements at every layer of the stack.
Whether that's a commodity server with commodity software, with commodity flash. Then I think in that environment you can make sense of it.

Running an open source NoSQL database.

Yeah, sure. But what I do think is that it is not a model that promotes high performance. As soon as you go to a server SAN, in order to protect data you're either increasing your cost, if you're using flash, because you're replicating data between solutions, or you're doing some sort of software RAID, and that software RAID is going to hit your performance and increase your latency so much as to not make it really attractive as an implementation. So I think there are some technical problems to solve to make it attractive for flash, and certainly IBM's interested in that area.

Woody, for the folks out there who are worried about data growth, what do you say to them? I just posted a picture that Matt Eastman from IDC shared, of a SanDisk card going from 128 megabytes to 128 gigabytes. The point being, for the folks worried about data growth, this is what's happening. That's just a small example, but in the data center, for the folks that are worried about data growth, what do you say to them? What's the warm blanket of comfort that you can offer them, in terms of the tech that's coming down and some of the architectures they need to think about in terms of what they build?

Sure, I mean, I think there are a couple of important ways to look at that. In the flash area specifically, we've been on a very aggressive roadmap when it comes to new NAND flash generations, and each of those generations is resulting in a doubling of NAND flash density. And that in turn means that we're able to deliver solutions with higher capacity per rack unit.
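The density claim compounds quickly: if each NAND generation doubles density, capacity per rack unit doubles with it. A minimal sketch of that projection, where the starting capacity and the generation cadence are illustrative assumptions, not IBM roadmap figures:

```python
# If each NAND generation doubles density, capacity per rack unit
# doubles with it. Starting capacity is an illustrative assumption.

def capacity_after(generations, start_tb_per_u=10):
    """TB per rack unit after a number of density-doubling NAND generations."""
    return start_tb_per_u * 2 ** generations

# Assuming ~10 TB per rack unit today and a new generation roughly every
# 18 months, four generations (~6 years) would project to:
print(capacity_after(4))  # 160 TB per rack unit
```

That compounding is what lets a few rack units of flash absorb workloads that once required rows of disk.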
And so what we're seeing as we look out on our roadmap are capacities of flash that we think will satisfy the majority of data center requirements even in very small form factors, down to a couple of U or less, depending on the size of the operation. Now the other part of it, of course, is disk, and the part after that is tape. You will always continue to have a need to store a lot of your data in a place where you don't need to access it very frequently. Disk will be one tier for that, and tape another.

I mean, that picture doesn't even follow Moore's law. We should be able to do better than that in nine years, shouldn't we?

That's right, yeah.

Should be 256. Okay, so let's talk a little bit about where to use flash. What are you recommending to customers?

So I think there are a couple of aspects of the market that are really interesting today. Historically, the market has been entirely an application acceleration market. And specifically within application acceleration, we're looking at especially accelerating database environments, and of course all sorts of applications that are database resident. So maybe that's an SAP environment, but that could be an Oracle database, a DB2 database, a SQL Server database. It could be a key-value store, right, in a NoSQL environment. It could be metadata for IBM Elastic Storage. So the point is, where you have small-block, random access to storage, flash is a fantastic accelerator. We're also seeing increasing use cases for flash in virtual desktop environments, and IBM has partnered with Atlantis Computing to deliver a complete solution in that environment. But I think there's this other aspect to the market that's becoming pretty interesting, which is flash as a tier one replacement. So you're stepping a half step away from the purely application-acceleration focus and saying, how can we make flash so affordable that it doesn't make sense not to buy flash for your next iteration?
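The affordability question comes down to effective cost per gigabyte once data reduction is applied. A minimal sketch of the arithmetic, where the raw prices are illustrative assumptions (only the "under $10 a gigabyte" figure comes from the conversation) and real reduction ratios depend on the data:

```python
# Effective cost per gigabyte stored, after compression/data reduction.
# Raw prices and the reduction ratio here are illustrative assumptions.

def effective_cost_per_gb(raw_cost_per_gb, reduction_ratio):
    """Cost per gigabyte of user data after data reduction is applied."""
    return raw_cost_per_gb / reduction_ratio

flash_raw = 10.0  # $/GB for enterprise flash (assumed, "under $10" per the talk)
disk_raw = 3.0    # $/GB for tier one disk (assumed)

# At a 5x effective-capacity ratio, flash undercuts un-reduced tier one disk:
print(effective_cost_per_gb(flash_raw, 5))  # 2.0 $/GB effective
```

That reversal, effective flash pricing dropping below disk that is stored uncompressed, is what makes flash plausible as a tier one replacement rather than only an accelerator.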
So one of the key elements in that equation, then, is to pair your flash product with a data reduction solution. In our case, with our FlashSystem V840, we have a real-time compression capability that can deliver up to 5x effective capacity, and as a result decrease the cost per capacity to the customer. And that then attacks, again, a different segment of the market, in my opinion.

Yeah, right. And then again, if we're talking about the SVC as before, you're dramatically expanding the TAM for SVC. It used to be largely sort of a default tier two system, right? Get your data off of tier one, put it onto tier two, stick it behind an SVC, a tier one avoidance kind of strategy. Now you're saying, hey, we can plug a FlashSystem into SVC and, with the full function of the stack, attack tier one apps.

Absolutely, and that solution is shipping today. Customers are adopting it, and it is meeting the need of those customers who need both performance and the services-rich stack.

Okay, Woody, thanks for coming on theCUBE. Really appreciate it. This is theCUBE live in Las Vegas for IBM Edge. This is our flagship program. I'm John Furrier with Dave Vellante. We'll be right back after this short break.