Hello and welcome to theCUBE Studios in Palo Alto, California, for another CUBE Conversation, where we go in-depth with thought leaders driving innovation across the tech industry. I'm your host, Peter Burris. Well, as I sit here in our CUBE Studios, 2020 is fast approaching, and every year as we turn the corner on a new year, we bring in some of our leading thought leaders to ask them what they see the coming year holding in the particular technology domain in which they work. And this one is no different. We've got a great CUBE guest, a frequent CUBE guest: Eric Herzog, CMO and VP of Global Channels, IBM Storage. Eric's here to talk about storage in 2020. Eric?

Peter, thank you. Love being here at theCUBE. You guys do a great job of educating everyone in the marketplace.

Well, thanks very much. But let's start really quickly: a quick update on IBM Storage.

Well, it's been a very good year for us. Lots of innovation. We brought out a new Storwize family in the entry space, brought out some great solutions for big data and AI with our Elastic Storage System 3000, and added support for backup in container environments. We've had persistent storage for containers, but now we can back it up with our award-winning Spectrum Protect and Spectrum Protect Plus. So we've got a great set of solutions for the hybrid multicloud world, for big data and AI, and for the things you need to get cyber resiliency across your storage estate.

All right, so let's talk about how folks are going to apply those technologies. You've heard me say this a lot: the difference between a business and a digital business is the role that data plays in a digital business. So let's start with data and work our way down into some of the trends. In your conversations with customers, because you talk to a lot of customers, is that notion of data as an asset starting to take hold?
Most of our clients, whether they're big, medium, or small, and no matter where they are in the world, realize that data is their most valuable asset: their customer databases, their product databases, what they do for service and support. It doesn't matter what the industry is. Retail, manufacturing, and obviously we support a number of other IT players in the industry that leverage IBM technologies across the board. They really know that data is the thing they need to grow and nurture, and they always need to make sure that data is protected, or they could be out of business.

All right, so starting with that point: in the tech industry, storage has always kind of been the thing you did after you did your server and after you did your network. But there's evidence that as data starts taking more center stage, more enterprises are starting to think about the data services they need, and that points more directly to storage hardware and storage software. Let's start with that notion of the ascension of storage within the enterprise.

So with data as their most valuable asset, storage is the critical foundation. As you know, if the storage makes a mistake, that data is gone. If you have a malware or ransomware attack, guess what? Storage can help you recover. In fact, we've even got technology in our Spectrum Protect product that can detect anomalous activity and help the backup admins or the storage admins realize they're having a ransomware or malware attack, so they can take the right corrective action. So storage is the foundation across all the applications, workloads, and use cases, and since data is the end result of those applications, workloads, and use cases, if the storage has a problem, the data has a problem.

So let's talk about what you see, in that foundation, as some of the storage services we're going to be talking most about in 2020.
So I think one of the big things...

Well, I'm sorry, data services that we're going to be talking most about in 2020.

So I think one of the big things is the critical role of storage in helping protect data. When people think of cybersecurity and resiliency, they think about keeping the bad guy out, and, since it's not an issue of if but when, chasing the bad guy down. But I've talked to CIOs and other executives, and sometimes they get the bad guy right away; other times it takes them weeks. So if you don't have storage with the right cyber resiliency, whether that be data-at-rest encryption, encrypting data when you send it out transparently to your hybrid multicloud environment, malware and ransomware detection, or things like air gapping, whether air gap to tape or air gap to cloud, and if you don't think about that as part of your overall security strategy, you're going to leave yourself vulnerable, and that data could be compromised and stolen.

So I can almost say that in 2020 we're going to talk more about how the relationship between security, data, and storage is going to evolve, almost to the point where security becomes a feature or an attribute of a storage or data object. Have I got that right?

Yeah. Think of it as storage infused with cyber resiliency, so that when an attack does happen, the storage helps keep you protected until you track the bad guy down. And until you do, you want that storage to resist all attacks, and you need that storage to be encrypted so they can't steal the data. So that's the thing: when you look at an overarching security strategy, yes, you want to keep the bad guy out, and yes, you want to track the bad guy down, but when they get in, you'd better make sure that what's there is bolted to the wall. It's the jewelry in the floor safe underneath the carpet; they don't even know it's there.
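The anomalous-activity detection Eric describes, flagging a likely ransomware attack from backup behavior, can be sketched as a simple statistical check on nightly change rates. This is a hypothetical illustration of the idea, not IBM's actual Spectrum Protect algorithm:

```python
import statistics

def is_anomalous(history, latest, threshold_sigmas=3.0):
    """Flag a backup job whose changed-data volume spikes far above its history.

    history: changed-byte counts from prior nightly backups
    latest:  changed-byte count from the most recent backup
    Ransomware that encrypts files in place shows up as a sudden,
    massive jump in changed data.
    """
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        # Flat history: fall back to a simple multiple-of-normal check.
        return latest > mean * 2
    return (latest - mean) / stdev > threshold_sigmas

# Nightly incrementals normally change about 5 GB; tonight 400 GB changed.
normal_nights = [5.1e9, 4.8e9, 5.3e9, 5.0e9, 4.9e9, 5.2e9]
print(is_anomalous(normal_nights, 4.0e11))  # huge spike: True
print(is_anomalous(normal_nights, 5.4e9))   # normal night: False
```

A real product would weigh more signals, such as compressibility of the changed data and file-rename patterns, but the shape of the check is the same: compare tonight's backup against an established baseline and alert the backup admin on a large deviation.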
So those are the types of things you need to rely on, and your storage can do almost all of that for you from the moment the bad guy gets in until you catch him.

So the second thing I want to talk about along this vein: we've talked about the difference between hardware and software, software-defined storage, but it still ends up looking like a silo for most of the players out there. And I've talked to a number of CIOs who say, you know, buying a lot of these software-defined storage systems is just like buying a piece of hardware, except now there's a separate piece of software to manage. At what point do you think we'll start talking about a set of technologies capable of spanning multiple vendors and delivering a broader, more generalized, but nonetheless high-function, highly secure storage infrastructure that brings with it software-defined, cloud-like capabilities?

So what we see is, first, the capability of transparently traversing from on-prem to your hybrid multicloud seamlessly. It can't be hard to do; it's got to happen very easily, with the cloud as a target. And by the way, most mid-size enterprises and up don't use one cloud, they use many, so you've got to be able to traverse those many clouds and move data back and forth transparently. The second thing we see coming this year is taking the over-complexity of multiple storage platforms coupled with hybrid cloud and merging them: an entry system, a mid-range system, and a high-end system, traversing the cloud, with a single API and a single data management platform, at performance and price points that vary depending on your application, workload, and use case. Obviously you use entry storage for certain things and high-end storage for other things, but you have one way to manage all that data. And by the way, for certain solutions, we've got this today with one of our products, Spectrum Virtualize.
We support enterprise-class data services, including moving data out to the cloud, not only on IBM storage but on over 450 other arrays that are not IBM-logoed. That's taking that seamlessness from entry, mid-range, and on-prem enterprise, traversing it to the cloud, and doing it not only for IBM storage but, quite honestly, for our competitors' storage as well.

Now, once you have that flexibility, it introduces a lot of conversations about how to match workloads to the right data technologies. How do you see workloads evolving, some of these data-first workloads, AI, ML, and how is that going to drive storage decisions in the next year, year and a half, do you think?

Well, again, as we've talked about already, storage is the critical foundation for all of your data needs, so depending on the data need, you've got multiple price points and the ability to traverse out to the cloud, as we've discussed. The second thing we see is that there are different parameters you can leverage. For example, AI, big data, and analytics workloads are very dependent on bandwidth. So you can take a scalable infrastructure that scales to exabytes of capacity and terabytes per second of bandwidth, all within a giant global namespace. For example, with our Spectrum Scale solutions and our Elastic Storage System 3000, we have the capability of racking and stacking two rack U at a time, growing the capacity seamlessly, growing the performance seamlessly, and providing the high-performance bandwidth you need for AI, analytics, and big data workloads. And by the way, guess what? You can traverse it out to the cloud when you need to archive it. So we're looking at AI as a major force not just next year but in the coming years; it's here to stay. And the characteristics IBM sees are ones we've had in our Spectrum Scale products for years, which really came out of the supercomputing and high-performance computing space.
Those are similar characteristics to AI workloads, machine learning workloads, big data, and analytics workloads, so we've got the right solution. In fact, the two largest supercomputers on this planet have almost an exabyte of IBM storage focused on AI, analytics, and big data. So that's what we see traversing everywhere. And by the way, we also see these AI workloads moving from just the big enterprise guys down into small shops as well. So that's another trend you're going to see: the easier that storage foundation underneath your AI workloads is to deploy, the easier it is for the big company, the mid-sized company, and the small company all to get into AI and get the value. The small companies have to compete with the big guys, so they need something too, and we can provide that, starting with a simple two-rack-U unit and scaling up into exabyte-class capabilities.

So all these new workloads, and the simplicity with which you can apply them, are nonetheless still driving questions about how the storage hierarchy has evolved. Now, this notion of the storage hierarchy has been around for what, 40, 50 years: tape and disk. But there are some new entrants here, and there are some reasons why some of the old entrants are still going to be around. So I want to talk about two. How do you see tape evolving? Is there still a need for it? Let's start there.

So we see tape as actually very valuable. We've had a real strong uptick in tape consumption the last couple of years, and not just in enterprise accounts; in fact, several of the largest cloud providers use IBM tape solutions. When you need to handle incredible amounts of data, across primary, secondary, and, I'd say, archive workloads, and you're looking at petabytes, exabytes, and even zettabytes, you've got to have a low-cost platform, and tape still provides by far the lowest-cost platform.
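The cost argument for tape at archive scale is easy to make concrete. The dollar figures below are rough illustrative assumptions for raw media, not IBM pricing, and the comparison ignores drives, libraries, power, and operations:

```python
def media_cost(total_tb, cost_per_tb):
    """Rough media-only cost of storing a given capacity on one tier."""
    return total_tb * cost_per_tb

# Illustrative $/TB of raw media (assumptions, not quotes).
tiers = {"tape cartridge": 5, "nearline disk": 15, "enterprise flash": 80}

archive_tb = 100_000  # a 100 PB archive
for name, price in sorted(tiers.items(), key=lambda kv: kv[1]):
    print(f"{name:>16}: ${media_cost(archive_tb, price):>12,.0f}")
```

At 100 PB, even a few dollars per terabyte of difference between tiers turns into millions of dollars, which is why archive-heavy cloud providers keep tape in the hierarchy.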
So tape is here to stay as one of the key media choices that help you keep your costs down while still easily moving data out to the cloud or pulling it back.

So tape is still a reasonable, and in fact a necessary, entrant in the overall storage hierarchy. One of the new ones we're starting to hear more about is storage class memory: the idea of filling in the performance gap between external devices and memory itself, so that we can have a persistent store that can service all the new kinds of parallelism we're introducing into these systems. How do you see storage class memory playing out in the next couple of years?

Well, we already publicly announced in 2019 that in the first half of 2020 we'd be shipping storage class memory. It will work not only in some coming systems we'll be announcing in the first half of the year, but also in some of our existing products: the FlashSystem 9100 family and the Storwize V7000 Gen3 will be able to use storage class memory as well. It's also a way to leverage AI-based tiering. In the old days, flash would tier to disk and you created a hybrid array. With storage class memory, it'll be a different type of hybrid array in the future: storage class memory tiering to flash. Now, obviously storage class memory is incredibly fast, and flash is incredibly fast compared to disk, but it's all relative. In the old days, a hybrid array of flash and disk was faster than an all-hard-drive array. Now you're going to see hybrid arrays that are storage class memory and flash, and with our Easy Tier function, which is part of our Spectrum Virtualize software, we use AI-based tiering to automatically move the data back and forth depending on when it's hot and when it's cold. Now, obviously flash is still fast, so if flash is the secondary medium in a configuration like that, it's going to be incredibly fast but still lower cost.
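The hot/cold data movement behind that kind of hybrid array can be sketched in a few lines. This is a deliberately naive rank-by-heat placement, not the actual Easy Tier algorithm, which also weighs access history and migration cost:

```python
def retier(extents, scm_capacity):
    """Place the hottest extents on storage class memory, the rest on flash.

    extents: dict of extent id -> recent I/O count (the extent's 'heat')
    scm_capacity: how many extents fit in the storage class memory tier
    """
    ranked = sorted(extents, key=extents.get, reverse=True)
    scm = set(ranked[:scm_capacity])
    flash = set(ranked[scm_capacity:])
    return scm, flash

# Five extents with measured I/O counts; the SCM tier holds only two.
heat = {"e1": 900, "e2": 15, "e3": 450, "e4": 3, "e5": 700}
scm_tier, flash_tier = retier(heat, scm_capacity=2)
print(scm_tier)  # the two hottest extents land on SCM: {'e1', 'e5'}
```

Run periodically against fresh I/O statistics, a loop like this is what "automatically moving data back and forth as it gets hot and cold" amounts to, with the expensive, fast medium always holding the busiest extents.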
The other thing is that in the early years, storage class memory will be an expensive option from all vendors. It will of course get cheaper over time, just the way flash did. Flash was way more expensive than hard drives; now it's basically the same price as the old 15,000 RPM hard drives, which have basically gone away. Storage class memory will do the same over several years. And by the way, that's very traditional in storage. You and I have been around a long time, and I've worked at hard drive companies. In the old days, I remember when the fast hard drive was a 5,400 RPM drive, then a 7,200 RPM drive, then a 10,000 RPM drive, and in the hard drive world there were almost always two to three different spin speeds at different price points. You can do the same thing now with storage class memory as your fastest tier and flash as a still incredibly fast second tier. So it'll allow you to do that, and it will grow over time; it's going to be slow to start, but it'll continue to grow. At IBM, we're there already, having publicly announced we'll have products in the first half of 2020 that support storage class memory.

All right, so let's hit flash, because there's always been this concern: are we going to have enough flash capacity? Is enough product going to come online? But there's also this notion that, since everybody's getting flash from the same places, there's not going to be a lot of innovation, not a lot of differentiation, in the flash drives. Now, how do you see that playing out? Is there still room for innovation on the actual drive or module itself?

So when you look at flash, that's exactly where IBM has focused: taking raw flash and creating our own flash modules.
Yes, we can use standard solid state disks if you want, but our FlashCore Modules, which have been part of our FlashSystem product line for many years, are custom. We announced a new set in mid-2018 that delivered, in a four-node cluster, up to 15 million IOPS with under 100 microseconds of latency, by creating our own custom flash. At the same time, when we launched that product, the FlashSystem 9100, we were able to launch it with NVMe technology built right in, so we were one of the first players to ship NVMe in a storage subsystem. And by the way, we're end to end, so you can go Fibre Channel over fabric, InfiniBand over fabric, or Ethernet over fabric to NVMe all the way on the back side at the media level. Not only do we get that performance and that latency, we've also been able to put up to two petabytes in only two rack U. Two petabytes in two rack U is incredible density. Those are the things you can do by innovating in a flash environment, so flash can continue to see innovation, and in fact you should watch for some of the things we're going to be announcing in the first half of 2020 around our FlashCore Modules and our FlashSystem technologies.

Well, I look forward to that conversation, but before you go, I've got one more question for you. Look, I've known you for a long time, and you spend as much time with customers as anybody in this world. Every CIO I talk to says, I want to talk to the guy, or the gal, who brings me the great ideas; I want those new ideas. So when Eric's org walks into their office, what's the good idea you're bringing them, especially as it pertains to storage for the next year?

So actually, it's really a couple of things. One, it's all about hybrid multicloud. You need to seamlessly move data back and forth; it's got to be easy to do, entry platform, mid-range, high-end, out to the cloud and back, and you don't want to spend a lot of time doing it. You want it to be fully automated, so storage
doesn't create any barriers. Storage is the foundation that goes on- and off-prem and supports multiple cloud vendors. The second thing is what we already talked about: because data is your most valuable asset, if you don't have cyber resiliency on the storage side, you are leaving yourself exposed. Then clearly big data and AI. And the other thing that's been a hot topic, and is related, by the way, to hybrid multicloud, is the rise of containers, for primary and for secondary storage. How do you integrate with Red Hat? What do you do to support containers in a Kubernetes environment? That's a critical thing. We see the world in 2020 being trifurcated. You're still going to have applications that are bare metal, right on the server. You're going to have tons of applications that are virtualized, whether VMware, Hyper-V, KVM, or OVM, all the virtualization layers. And you're going to start seeing the rise of the container admin; containers are not just going to be the purview of the DevOps guy. We have customers talking about doing 10,000, 20,000, 30,000 containers, just like they did when they first went into the VM world. And now that they're going to do that, you're going to see customers that have bare metal, virtual machines, and containers, and guess what?
They may start having container admins who focus on the administration of containers, because when you start doing 30, 40, 50,000 containers, you can't have the DevOps guy manage that; you're deploying them all over the place. So we see this as the year that containers start to go really big time, and we're there already with our Red Hat support and what we do in Kubernetes environments. We provide primary storage support for persistence in containers, and by the way, we also have the capability of backing that up. So we see containers really taking off, and how they relate to your storage environment often ties to how you configure your hybrid multicloud configs.

Excellent. Eric Herzog, CMO and VP of Global Channels, IBM Storage. Once again, thanks for being on theCUBE.

Thank you.

And thanks for joining us for another CUBE Conversation. I'm Peter Burris. See you next time.