Hi everybody, we're back. This is Dave Vellante of Wikibon. I'm here with Stu Miniman of Wikibon as well. This is theCUBE, SiliconANGLE's continuous production, live from IBM Edge. We're here at the Mandalay Bay Hotel in wonderful Las Vegas. Stu, we spend more time in Las Vegas in May and June than anywhere.

Yeah, Dave, everybody's been asking if we have, you know, a theCUBE bungalow where we can all just stay here. Our equipment lives here, but luckily we don't. Even my kids now, when I talk to them at night, they say, hey, Daddy, how's Lost Wages? You still have your shirt?

Anyway, Ron Reif is here. Ron is the storage software business strategy head for the group formerly known as Tivoli, now the Cloud and Smarter Infrastructure group. Ron, welcome to theCUBE.

Oh, it's good to be here. Thanks. Good to see you again.

So, let's see, I'm not going to ask you about the group name. You told me off camera that that's something others make the decision on. But congratulations on the transformation.

Thank you.

Let's just leave it at that. So, big talk about SDS here. I want to get into that, but before we do, let's talk about your role. You're in a strategy role, and, I mean, a lot of companies kind of don't want to be in a strategy role. IBM is a great place to be in a strategy role because you actually have impact on decisions, M&A. You've got ways to leverage the portfolio, internal and external collaboration. Very exciting role. So what specifically are you doing, within the division, within the group, in a strategy role?

Well, my job in business strategy is to go spend time with clients and understand their business. We've been talking a lot this week about economics. My job is to understand the economics of their business problems. What is it that's causing them grief in IT, specifically in the area of storage? Then think about the solutions that IBM has from a technological standpoint, research that we could bring to bear to solve those problems, and then work out the economics: can we invest enough money into it, create the solution, and charge clients in such a way that they get value, get their problems solved, and we end up making money? So I do sort of business strategy, investment strategy, just for storage software.

And you have to figure out how to do that and get a return, as you said.

Get a return.

And so there's external factors involved in that. There's tectonic shifts, there's drivers, there's other influencers, there's competition. You factor all that in. Now, is it the Ron Reif methodology? Does IBM have some kind of methodology? Is it a little bit of both?

We have processes and procedures that we follow, but there's a lot of an art form to it, trying to understand what the bad guys are up to, listening very closely to what the analysts are telling us, understanding...

Bad guys being hackers, or competitors?

No, no, competitors, no hacking. We talked a lot about information security this week; we're not involved in any of that. But it's just keeping our ear to the ground on what's happening in the marketplace and what technologies are coming out of research. So I spend a lot of time with clients, a lot of time with analysts, a lot of time with our researchers, our developers, just trying to put together the entire puzzle and help guide decisions on where we put our investment dollars. Can we get the technology out the door quickly enough to be competitive in the marketplace and make a good return?

Okay, so what's the market telling you?
You spend a lot of time with customers, reading the analysts' reports, talking to analysts. What are the big trends that you're seeing that are driving some of your advice to your colleagues and some of the decisions that you guys are making within the division?

There's a macro trend. It's really sort of simple, but it's one that you have to back off and look at for a moment. For decades and decades, we've been dealing with data growth. My whole career at IBM, and even before then, we've been worried about data: is it gonna get too big? Can we afford to store it? For decades, data was growing, but the hardware vendors that were building storage systems were able to improve areal density a little more quickly than the data was growing. So from a client's perspective, an IT manager's, the solution to the problem of data growth was that they bought a current-generation storage device that was denser at a better price per gigabyte, and they were able to keep up, keep that budget relatively flat while data was growing. In the last several years, it's been fairly well documented that data is growing faster than the hardware vendors' ability to keep up with areal density improvements. And so what's happening is, if the old way of doing storage administration doesn't change, CIOs and IT managers are gonna end up spending more and more and more of their IT budget on disk hardware, because they have to have more of it. And so it's driving shifts. One of those shifts is this move toward software defined storage.

So, part of me says, why don't you guys all just sit back, let it happen, and make a ton of money? But we all know that would kill the market.

Kill innovation. Kill our clients, right? They wouldn't be able to afford it. Something has to change.

So venture capital would come in and solve the problem. You see Hadoop, which in many respects was designed to solve that problem, among others. Okay, so that's a major shift that you see. So let's talk about software defined. Let's start with: what is it? What is it to IBM?

Yeah, so one of the big difficulties in the industry has been settling on a definition. We'll come up with an idea and communicate it one way, and vendor X or competitor Y will come up with a different way, and everybody's trying to co-opt the term. Our view is very similar to what IDC recently communicated as a full taxonomy of what they call software-based storage: it's a software layer that gives you the full set of storage services, but does it on commodity hardware, and it's not in any way tied to back-end disk capacity. It's just a means of managing storage with commodity hardware. In a lot of ways it's very similar to what has happened in the server marketplace with server hypervisors. Today when you talk about a server, you're not talking about a physical machine; it's a virtual machine that can live on any kind of commodity-based hardware. What's happened is that all the function has moved into that software layer, which is relatively inexpensive compared to the rest of the IT budget, and it's allowed the hardware to commoditize. It's given CIOs and IT managers a lot more flexibility and the ability to drive down the overall cost of servers in their budget. The same thing's happening in storage.
So Ron, I'm wondering if I could poke on that, because I don't know that I agree that software-defined and commoditization... I think there's a connection, but I don't think they necessarily go hand in hand. When I think about what server virtualization did, it did not necessarily commoditize. Everybody's not running white boxes. Server virtualization changed the architecture of hardware, and sure, we collapsed down and consolidated, got greater efficiencies, but IBM's still selling lots of servers, and they're actually selling bigger servers for the most part, beefier cores, everything like that. If I look at both networking and especially storage, one of the big differences is, if we go to some kind of software layer, it's not going to collapse how much hardware I need. As you were saying before, we can't keep up with storage growth, so software-defined is not going to reduce how much capacity I need; it's going to keep increasing. So, I guess...

Well, it kind of depends on what you mean by commodity, right? But let's give Ron a little latitude here. Let me try to weigh in on this. You basically said that servers, let's say, commoditized, and what allowed that was a virtualization layer, an abstraction layer. So basically you're talking about abstracting the underlying hardware.

That's right.

Okay, and that leads to... you can essentially then run that on what we're calling commodity storage.

Right, so I think it's a fair question. In the server world, you're right, not everybody's running on white boxes, but you have the flexibility to run on a white box if you want to, or an IBM System x system, or...

And what I'd say is, you know, of course really any workload can go on x86 these days, so there's standardization across there.

And the value has moved up into the software layer, so there's not a lot of differentiation in the servers. They provide the MIPS, the megahertz that are needed for compute, but that's about all they provide. They provide that and maybe reliability; that's the differentiation you can bring. The same thing's gonna happen in storage. You're right that you're not gonna stop the need for gigabytes, but what are you gonna get out of the storage device other than gigabytes and maybe availability and IOs per second? Not much, because all the rest of the services come out of the software layer.

Yeah, and I'd agree we're talking about the services and the value that you add, not, you know, the sheet metal in the hardware.

The other thing we're finding with that software-based approach is that in order to get tier one kinds of services, traditionally IT managers have had an overabundance of what you would think of as tier one disk arrays. 70% of your shop might be tier one, so you can give those services to the applications, and the rest of it might be tier two or tier three. With a software defined layer, we're finding that clients are shifting the mix: 30% tier one, 5% flash, and the rest of it tier two and tier three, and the overall dollars per gigabyte of the infrastructure ends up dropping like a rock. And that's part of what we think of as the commoditization of the disk arrays.
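To put rough numbers on that tier-mix shift, here's a back-of-the-envelope sketch in Python. The per-gigabyte prices are invented placeholders, not figures from the interview or from IBM; only the mix percentages come from the conversation.

```python
# Back-of-the-envelope blended $/GB for the two tier mixes described above.
# All prices are illustrative placeholders, NOT real vendor figures.
PRICE_PER_GB = {"flash": 10.00, "tier1": 5.00, "tier2": 2.00, "tier3": 0.50}

def blended_cost(mix):
    """mix maps tier name -> fraction of total capacity (fractions sum to 1)."""
    assert abs(sum(mix.values()) - 1.0) < 1e-9
    return sum(PRICE_PER_GB[tier] * frac for tier, frac in mix.items())

# Traditional shop: ~70% tier one, remainder split across tiers two and three.
traditional = {"tier1": 0.70, "tier2": 0.20, "tier3": 0.10}

# The software defined mix from the interview: 30% tier one, 5% flash, rest lower.
software_defined = {"flash": 0.05, "tier1": 0.30, "tier2": 0.35, "tier3": 0.30}

print(f"traditional:      ${blended_cost(traditional):.2f}/GB")       # ~$3.95/GB
print(f"software defined: ${blended_cost(software_defined):.2f}/GB")  # ~$2.85/GB
```

Even with flash priced at a premium, shrinking the tier one share drives the blended cost down sharply, which is the "drops like a rock" effect Ron describes.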
All right, so let's keep picking at this definition here, and we'll sort of talk about what it means to IBM. So we've got the commodity piece, but that seems to be almost an attribute. I think the ability to pool those hardware resources is key, and that leads to commoditization of the hardware. But then you've got this discussion about... people say separate the control plane from the data plane. So you've got essentially data services that could be somewhere, for instance inside of an SVC or a V7000, that might be in a disk array itself, but then you've got some kind of other so-called control plane, which is what, an orchestration layer? Is that right? Is that potentially OpenStack, or maybe it's VMware, or an IBM hypervisor?

Yeah, so we think of that software defined layer as the control plane. It's the layer that provides all of the storage services you need, from snapshotting to data placement to replication, thin provisioning, compression. Everything that you would need to provide for the data is in that control plane. It also includes a set of APIs out the top end that integrate with orchestrators, like OpenStack, or SmartCloud Orchestrator from our standpoint, or VMware if they're provisioning storage along with a virtual machine. So everything that you need is in that software layer. Underneath that is the physical capacity. Kind of like VMware has pretty much everything you need for a virtual machine, and underneath that are some Intel cores that provide the compute capacity.

All right, so let's double-click on that. So the services are part of that software layer. Is that right, under that definition?

The way we think about it, yeah.

Okay. And so those services could reside, I presume I'm correct, within an SVC? Or maybe you're saying no, they ultimately need to be abstracted.

Well, yeah, it's a good question of where exactly it runs. First, it needs to be software. Second, we think it needs to be able to run on really any commodity, in our case Intel-based, engine that you wanna choose. Now, we've talked this week about the Storwize software platform, which is our software defined storage layer. That software code base, we package it with a variety of different Intel-based commodity engines. One of them is the SAN Volume Controller engine; it's a gateway that doesn't have any of its own capacity, it federates capacity from, you know, we would say commoditized storage, any storage that you might want to choose, white box on up to high end.

It's magic.

It's magic. We also package that software with engines that have their own flash associated with them, so that you can augment the performance of whatever storage you federate together. We package it with engines that have their own spinning disk as well, to augment the capacity of whatever you federate together. We have entry, mid-range, and enterprise configurations. It's a lot of different engines, but one software layer that you can use, with whatever storage hardware you like.
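As an illustration of that control plane and data plane split — storage services living in a software layer that federates interchangeable backends — here's a minimal sketch. The class and method names are invented for this example and don't correspond to Storwize, SVC, or any real interface.

```python
# Illustrative only: a toy control plane federating heterogeneous backends.
# Names are invented for this sketch and do not reflect any IBM product API.
from dataclasses import dataclass

@dataclass
class Backend:
    name: str           # e.g. "white box JBOD" or "vendor X high-end array"
    free_gb: int
    iops_capacity: int

class ControlPlane:
    """The software layer: it owns the services (placement, provisioning);
    the backends underneath supply only raw gigabytes and IOPS."""
    def __init__(self, backends):
        self.backends = list(backends)

    def provision(self, size_gb, min_iops):
        # Data placement service: pick the least capable backend that still
        # meets the requested service level, so premium arrays aren't wasted.
        candidates = [b for b in self.backends
                      if b.free_gb >= size_gb and b.iops_capacity >= min_iops]
        if not candidates:
            raise RuntimeError("no backend satisfies the requested service level")
        chosen = min(candidates, key=lambda b: b.iops_capacity)
        chosen.free_gb -= size_gb
        return {"backend": chosen.name, "size_gb": size_gb, "min_iops": min_iops}

pool = ControlPlane([Backend("white box", 10_000, 5_000),
                     Backend("high-end array", 50_000, 100_000)])
print(pool.provision(size_gb=500, min_iops=2_000))  # lands on the white box
```

The point of the sketch is that the backend choice is a detail the control plane hides: swap in cheaper arrays and the services above don't change.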
Okay, and then like I said, you've got the northbound APIs into...

I think those are key, those northbound APIs, especially in a cloud world.

So let's talk about that a little bit, and I want to tie this into where you guys are at today. So let's talk provisioning.

Okay.

Okay, presumably I'm able to provision not just capacity, but also performance.

Right. Think about the old days, when you were actually dealing with a physical device. The act of provisioning was, hey, I want this many gigabytes on that device in this RAID configuration, and it's got these attributes, and I need that much horsepower for my app. It took a storage scientist to figure out where to put stuff.

Storage scientist? Why didn't we come up with that name? Could have been heroes.

But, you know, in the world of software defined, you're dealing much more in SLAs. So the kinds of questions you ask when you're provisioning are: what service level do you want? And that service level can include performance, resiliency, disaster recovery, the needs of the workload. We put it in the form of a service catalog entry inside our SmartCloud Virtual Storage Center, the software layer. You ask what service level you need, you ask how much capacity in that service level you want, and you ask who needs access to it. Those are really the only three questions you need to ask when you're provisioning at a software layer. The software layer then takes care of figuring out what commodity hardware you have in the environment that will meet those service levels, and it builds the virtual volume.

But specifically I could then, as an example, provision 50 gigabytes of capacity and a thousand IOPS, make that my policy, maybe give it a retention policy...

And a replication policy and so forth.

And I can do that through an API call?

That's right. We make it available in two ways. A human being can sit down and say, I want 50 gigabytes of database capacity, and make it accessible to that guy. Or the API calls come from things like OpenStack or VMware or orchestration software that are provisioning a virtual machine and a virtual network and some virtual storage for a workload, all at once.

Okay, and I can change that policy essentially on the fly?

That's right.

Through an API call?

That's right. The policy is bound to a virtual volume that has your data in it. You can change the policy, and the system will relocate the virtual volume based on the policy changes.

And I can do that today?

Sure.

Okay, I would say that's software defined.

It's SmartCloud Virtual Storage Center. So let me give you a use case, an interesting use case. Imagine that you're a data center owner. You've got some physical disk. You've picked from high-end vendor X, and maybe mid-range vendor Y, and low-end vendor Z. You've got three different vendors, multiple tiers, and you've got some workloads spread across those physical disks. The SmartCloud Virtual Storage Center, a software defined layer, will analyze both the physical capability of the hardware you've handed it and the IO patterns of the workloads, what they need. And it'll make a connection between the workload and the physical capability of the storage. It'll spread workloads across the physical infrastructure to optimize the capabilities of everything you've given it, so nobody gets too hot, nobody gets too cold. And then once the data is placed in the right location, it'll watch for blocks of that data that are getting hot, and it'll dynamically, transparently move those blocks over to flash storage, for example, to keep everything running optimally. Across vendors, across tiers, and even across sites today, all in a software defined way, regardless of what hardware you happen to choose. We're seeing that CIOs can then choose cheaper storage, more commoditized storage, and everything operates well because of the software defined layer.
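For a sense of what that three-question, policy-bound-volume model could look like over an API, here's a hedged sketch. The host, paths, and payload fields are hypothetical, invented for illustration — this is not the SmartCloud Virtual Storage Center API, just the shape of the interaction Ron describes.

```python
# Hypothetical REST interaction for SLA-based provisioning. The endpoint and
# field names are invented for illustration, NOT a real product API.
import requests

BASE = "https://sds-controller.example.com/api/v1"

# The "three questions": what service level, how much capacity, who gets access.
volume = requests.post(f"{BASE}/volumes", json={
    "service_level": {"min_iops": 1000, "replication": "async",
                      "retention_days": 90},
    "capacity_gb": 50,
    "access": ["host-db01"],
}).json()

# Change the policy on the fly: the policy stays bound to the virtual volume,
# and the layer relocates data across tiers/backends to meet the new SLA.
requests.patch(f"{BASE}/volumes/{volume['id']}/policy",
               json={"min_iops": 5000})  # e.g., promote hot blocks toward flash
```

An orchestrator like OpenStack or VMware would drive the same kind of calls when it provisions a virtual machine, network, and storage together.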
So, Ron, the last question I have for you... I'm actually looking at your blog, and you talked a little bit about across sites. So how do cloud, and kind of object storage, fit into the software defined storage message?

So, you know, cloud... we ran an interesting study recently with Enterprise Management Associates. Talking to customers who had made steps into cloud, we were trying to figure out what interesting tidbits we could learn to teach other people who are moving into cloud. One of the things we discovered was that a big hindrance to doing cloud at all is physical storage. They needed to move to virtual storage. So this idea of a software defined storage layer has become critical for cloud deployments because of the flexibility.

Excellent. All right, Ron Reif, we're getting the hook. We've got to go. I really appreciate your time. Thanks for coming on theCUBE. It's good to talk to you.

Always good to be here.

All right, everybody, keep it right there. We're right back with our next guest. We're live from IBM Edge in Las Vegas. This is theCUBE. We'll be right back.