Okay, we're back. This is Dave Vellante and this is theCUBE, where we extract the signal from the noise and bring you the best guests we can find. We're here at 590 Madison Avenue, IBM's New York headquarters, the executive briefing center where they bring in all their customers. We're here with Krishna Nathan, who runs development for the storage group at IBM. Krishna, welcome.

Thank you very much, David. Glad to be here.

Yeah, appreciate you coming on. I want to talk about how things are changing in your world. Obviously, IBM has a deep, rich history of organic development; you're world renowned for that. And you, I think, do a great job of complementing that development with acquisitions. But I'm specifically interested in the kinds of things you guys are doing to bring together the systems and storage worlds, which for years have been kind of separate. We've sort of seen these worlds coming together, and they're starting to now, aren't they? And in your world, probably, they have been for quite some time. So I wonder if you could share with our audience how that's all come about.

Sure. There's been this evolution over the last five or ten years, and it's been accelerating. For a long time, storage was seen as something very separate from systems. Systems had their own path of evolution and technology acceleration; storage was a separate thing, largely driven by the fact that storage was fundamentally a mechanically driven system. Now, with the advent of things like flash, you're getting away from the bottleneck that spinning media always imposed on storage, and it's opening up a whole new set of possibilities for tighter integration between the storage subsystem and the rest of the system. So what can you do? You can more tightly couple your applications to the storage system.
In the past, there was always a level of intermediation, if you will, between the application and the storage. We're starting to see cases now, loosely called software-defined storage or software-defined environments, where the application itself can control the storage. So rather than going to your storage admin and saying, I want another LUN, I want more space, I need you to restripe my data because I'm not getting the performance I need, the application is now able to talk to the storage medium itself to optimize it. So it's a much tighter coupling from an end-to-end system perspective, and it's made easier by something like flash.

So we talk a lot about the consumerization of IT, and now it's software-defined; we call it Wikibon software-led infrastructure. A lot of this is coming from what I would call the hyperscale guys, that crowd, Google and Facebook, who are essentially running highly automated installations. When something breaks, they just let it break and then they throw it in the wood chipper. Do you see that as a harbinger for traditional data center design, or could those two worlds stay separate for a long time?

So I think there will always be a requirement at the very high end, tier zero, OLTP transactions, airline reservations, these sorts of things, where the premium will be on high availability, on performance, et cetera. But having said that, that's not where the bulk of the storage lives. That's not where the bulk of the data people store lives. And I think for that vast majority of the data, these kinds of models will emerge. So to answer your question, I don't think everything is going to go one way, and I don't think the other stuff is going to go away, but increasingly we're going to see this sort of a model where it's kind of good enough, right? It's good enough and it works. And the applications where it fits will adopt it.
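The application-driven provisioning described here, where an application asks the storage layer directly for capacity and performance rather than filing a request with a storage admin, can be pictured with a small sketch. The `StoragePool` class and its method names are hypothetical, invented purely for illustration; they are not an IBM or any vendor API.

```python
# Hypothetical sketch of application-driven ("software-defined") storage:
# the application requests capacity and performance characteristics
# directly, instead of asking an admin to carve out a LUN.

class StoragePool:
    """Toy model of a programmable storage pool (illustrative only)."""

    def __init__(self, capacity_gb):
        self.capacity_gb = capacity_gb
        self.volumes = {}

    def provision(self, name, size_gb, min_iops=0, stripe_width=1):
        """Create a volume with a requested size and performance hints."""
        if size_gb > self.capacity_gb:
            raise ValueError("insufficient capacity")
        self.capacity_gb -= size_gb
        self.volumes[name] = {
            "size_gb": size_gb,
            "min_iops": min_iops,
            "stripe_width": stripe_width,
        }
        return self.volumes[name]

    def restripe(self, name, stripe_width):
        """The application itself widens striping to chase performance."""
        self.volumes[name]["stripe_width"] = stripe_width


pool = StoragePool(capacity_gb=1000)
pool.provision("db-logs", size_gb=200, min_iops=50000)
pool.restripe("db-logs", stripe_width=8)
print(pool.volumes["db-logs"]["stripe_width"])  # -> 8
```

The point of the sketch is the control path: the same process that runs the workload also issues the `provision` and `restripe` calls, which is exactly the intermediation-removal being described.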
And then you'll have other applications, your traditional high-end, performance-based applications, where you'll stay with a paradigm similar to what we have today. That'll evolve too, but not necessarily along the same lines.

So I want to follow up on that. Good enough has been a great market in this world for a long time. And the beauty of what you guys are announcing with Texas Memory Systems, really a re-branding, it's not anything new from a technology standpoint, is that you're dropping in essentially a block-based device. It doesn't change the application environment, it's very non-disruptive from that standpoint, and you get an instant boost of performance and business value. That's great. But there is the potential to take that even further, to get rid of what I call the horrible storage stack, with all due respect to those who helped build it. What I mean by that is not only getting rid of the spinning disk, but actually bypassing protocols that are chatty, and dramatically, even further dramatically, increasing performance. Do you see that as too much, too specialized, and we just need good enough? Or do you see that as having the potential to truly transform industries?

So, two things; let me address it two ways. First of all, for the last 50 years, ever since computer science was discovered, invented, whatever you want to call it, storage has always been the laggard, right? You didn't want to page out, because storage was slow. You didn't want to do these things. They taught you, listen, this is the slow guy here, so we built systems mindful of the fact that there was a slow piece. Now, with the advent of flash, that's going away. So what we in the industry are going to have to do is take a look at all those bottlenecks that were purposely introduced into the system, so that we can take advantage of the fact that you no longer have this latency. So I think that will profoundly change the stack.
And it'll change the stack across the board. It's not just a question of good enough: it'll make things cheaper and easier to maintain, right? There'll be a TCO and a TCA value proposition. So I think that'll happen. Now, the other thing about how flash will simplify things is that you're going to see, I think, a simplification of the tiers. Today, storage is highly complex. You have tier zeros, tier ones, tier one-and-a-halfs, tier twos, tier threes. You archive your data onto tape. And it's a mess. Regardless of how smart you are in doing the management, it is a mess. So flash, I think, brings the promise of being able to reduce this to just two tiers. You'll have flash at the high end for all the things where you need performance, and you'll have some form of either rotating medium or perhaps even some new hybrid tape-disk thing. But you'll just have two tiers. So I think it simplifies it from both points.

Yeah, I agree, by the way. If I could restate what you're saying: all active, high-performance, high-I/O data is going to be in flash, and everything else goes into the bit bucket, which you want to be as low cost as possible. And when you move it, generally speaking, for the majority of cases, it's going to be a one-way trip to that bit bucket.

Right, right, it's a write-only thing, right?

So what about this notion, I touched on it before, but I want to ask you specifically, what about this notion of atomic writes? Steve Mills showed a chart today about how you can actually save a lot of money, because if you optimize, the way I would say it, the infrastructure across servers and storage, you can lower the number of cores required, and that's going to cut your database license costs. There's another step, which is to actually do direct-memory atomic writes. Is that something IBM is working on?
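The two-tier model described above, flash for hot data and a cheap capacity tier for everything else, comes down to a placement policy. Here is a minimal sketch of such a policy; the 24-hour "hot" window is an arbitrary example threshold, not a product default.

```python
# Illustrative two-tier placement policy: recently accessed ("hot")
# data lives on flash; everything else goes to a low-cost capacity tier.
# The one-day threshold is a made-up example value.

import time

HOT_WINDOW_SECONDS = 24 * 3600  # assumed cutoff: touched within a day = hot

def choose_tier(last_access_ts, now=None):
    """Return which tier a piece of data belongs on under a two-tier model."""
    now = time.time() if now is None else now
    age = now - last_access_ts
    return "flash" if age < HOT_WINDOW_SECONDS else "capacity"

now = 1_000_000_000
print(choose_tier(now - 3600, now=now))        # read an hour ago -> flash
print(choose_tier(now - 7 * 86400, now=now))   # week-old data -> capacity
```

Real tiering engines weigh I/O density and access patterns rather than a single timestamp, but the shape of the decision, hot on flash, cold on capacity, is the same.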
I know you're working on it, but is that something we can expect you guys to actually deliver?

Yes, so let me put it in context. Today we still think about it as DRAM and then flash, and flash is often treated as part of the memory subsystem but almost like a block device; that's how you use it. The next evolution is a commingling between what happens in DRAM and what happens in flash. So yes, DRAM will always be more expensive, it'll cost you more and you're going to get better performance, and so on and so forth, but flash is much closer to DRAM than it is to hard disks. And if you start treating it like DRAM from a system perspective, then you have more stuff you can put into this augmented DRAM-plus-flash space. So yes, I think the answer is yes: systems will be looking for this sort of atomicity in terms of how you handle data, treating flash as just more DRAM.

And of course the downside is that you're going to have to rewrite applications, or write applications in a different way, to take advantage of that. But the upside is enormous benefits in business productivity.

Correct, correct.

Yeah, so you buy that.

I do buy that, but keep in mind, I think it's overly optimistic to hope that no application will have to be rewritten, because even the applications have fallen prey to the same thing I said about system design. The applications themselves said, hey, the disk is really slow. So whether we admit it or realize it or not, we did things in a particular fashion. And if you go back and open that thing up, open that can of worms, you'll say, well, I can do it differently.

Well, application development in a large way has had a heat shield, and it's called storage, and now that's changing, isn't it?

Right, exactly. All right, good.
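The idea of treating flash as "just more DRAM" rather than as a block device can be approximated today with memory mapping: the application updates persistent storage with byte-granular loads and stores instead of going through a block read-modify-write cycle. The sketch below uses an ordinary file as a stand-in for a flash device; real systems would additionally need ordering and durability guarantees to make such writes truly atomic.

```python
# Sketch of byte-addressable access to "flash": memory-map a file
# (standing in for a flash device) and update it in place, rather
# than reading and rewriting whole blocks.

import mmap
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "flashlike.bin")
with open(path, "wb") as f:
    f.write(b"\x00" * 4096)          # pre-size the backing store

with open(path, "r+b") as f:
    with mmap.mmap(f.fileno(), 4096) as mem:
        mem[128:133] = b"hello"      # byte-granular update in place
        mem.flush()                  # push the change to the backing store

with open(path, "rb") as f:
    f.seek(128)
    print(f.read(5))                 # -> b'hello'
```

This is only the programming-model half of the story; the atomicity discussed in the interview also requires hardware and firmware support so that a store either fully persists or never happens.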
Well, listen, Krishna, thanks very much for coming on and spending some time.

Thank you, you're welcome. It was really a pleasure.

All right, keep it right there. This is Dave Vellante. We'll be right back with our next guest from Madison Avenue at IBM FlashAhead. This is theCUBE.