From our studios in the heart of Silicon Valley, Palo Alto, California, this is a CUBE Conversation.

Hi, I'm Peter Burris, and welcome to another CUBE Conversation from our studios in beautiful Palo Alto, California. One of the biggest challenges every user faces is how they're going to arrange the resources responsible for storing, managing, delivering and protecting data. That's a significant challenge, but it gets even worse when we start talking about multi-cloud. So today we've got Eric Herzog, who's the CMO and VP of Worldwide Storage Channels at IBM Storage, to talk a bit about the evolving relationship between what constitutes a modern, comprehensive storage portfolio and multi-cloud. Eric, welcome to theCUBE.

Peter, thank you, thank you.

So start off: what's happening with IBM Storage these days? And let's get into how multi-cloud is affecting some of your decisions and some of your customers' decisions.

So we started talking about multi-cloud over two years ago. When Ed Walsh joined the company as general manager, we went on an analyst road show. In fact, we came here to theCUBE and shot a video. We talked about how the IBM Storage division is all about multi-cloud. And we look at that in three ways. First, if you're creating a private cloud, we work with you from a container perspective, whether you're VMware based or doing it in a more traditional private cloud; the modern private cloud is all container based. Second is hybrid cloud: data on-prem, out to a public cloud provider. And the third aspect, and in fact you guys have written about it in one of your studies, is that no one's going to use just one public cloud provider; they're going to use multiple cloud providers. So whether that be IBM Cloud, which of course we love because we're IBM shareholders, we also work with Amazon, we work with Google, and in fact we work with any cloud provider.
Our Spectrum Protect backup product, which is one of the most awarded enterprise backup packages, can back up to any cloud. In fact, for over 350 small and medium cloud providers, the engine for their backup as a service is Spectrum Protect. Again, completely heterogeneous: we don't care what cloud you use, we support everyone. And we started that mantra two and a half years ago when Ed first joined the company.

Now, I remember when you came on, we talked a lot about this notion of data first, or data driven, I think, was what we talked about.

Right, data driven.

And we made the observation that enterprises were going to take a look at the natural arrangement of their data, and that was going to influence a lot of their cloud architecture and certainly a lot of their storage decisions. How is that playing out? Is that still obtaining? Are you still seeing more enterprises taking this kind of data-driven approach to thinking about their overall cloud architectures?

Well, the world is absolutely data centric. Where does the data go? What are the security issues with that data? How close is it to the compute when I need it? How do I archive it? How do I back it up? How do I protect it? We're here in Silicon Valley. I'm a native of Palo Alto, by the way, and we really do have earthquakes here. And they really do have earthquakes in Japan and China, and there's all kinds of natural disasters. And of course, as you guys have pointed out, as have almost all the analysts, the number one cause of data loss besides human error is actually still fire, even with fire suppression in the data center.

And we have fires out here in Northern California too.

That's true. So you've got to make sure you're backing up that data, archiving the data. Cloud can be part of that strategy. When does it need to be on-prem? When can it go off-prem?
So it's all about being data driven. Companies look at the data, profile the data, and determine: what sort of storage do I need? Do I go high-end, mid-range or entry? They profile that data, figure out what they need to do, and then do the same thing with on-prem versus off-prem. Certain data sets, for security or legal reasons, you're probably not going to put out to a public cloud provider, but other data sets are ideal for that. So all of those decisions are driven by: what's the security of the data, what's the legality of that data, what's the performance I need from that data, and how often do I need the data? If you're going to constantly go back and forth, pulling data back in, a public cloud provider, which charges both for data in and data out, may actually cost more than buying an array on-prem. So everyone's using that data centricity to figure out how they spend their money and how they optimize the data for their applications, workloads and use cases.

So if you think about it, the reality is that by application, workload, location and regulatory issues, we're seeing enterprises start to recognize an increasing specialization of their data assets. And that's going to lead to a degree of specialization in the classes of data management and storage technologies they utilize. Now, what is the challenge of choosing a specific solution versus looking at more of a portfolio of solutions that perhaps provides a little more commonality? How is the IBM customer base dealing with that question?

Well, for us, the good thing is we have a broad portfolio. When you look at the base storage arrays, we have file, block and object, they're all award-winning, and we can go big, we can go medium and we can go small.
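The back-and-forth cost Eric describes can be sketched as a simple break-even model: cloud charges for capacity plus every byte pulled back out, while an on-prem array is a fixed cost amortized over its life. This is a hypothetical illustration only; the per-GB rates, array price, and amortization period are made-up assumptions, not any vendor's actual pricing.

```python
# Rough break-even sketch: public cloud with egress charges vs. an on-prem array.
# All prices are hypothetical assumptions, not any provider's actual rates.

def monthly_cloud_cost(stored_gb, egress_gb, store_rate=0.02, egress_rate=0.09):
    """Cloud charges for capacity plus every GB pulled back out."""
    return stored_gb * store_rate + egress_gb * egress_rate

def monthly_onprem_cost(array_price=120_000, amortize_months=60, opex=500):
    """On-prem array amortized over its service life, plus fixed opex."""
    return array_price / amortize_months + opex

stored = 50_000                            # 50 TB resident
for egress in (1_000, 20_000, 100_000):    # light vs. heavy pull-back
    cloud = monthly_cloud_cost(stored, egress)
    onprem = monthly_onprem_cost()
    cheaper = "cloud" if cloud < onprem else "on-prem"
    print(f"egress {egress:>7} GB/mo: cloud ${cloud:,.0f} vs on-prem ${onprem:,.0f} -> {cheaper}")
```

With these illustrative numbers, a data set that rarely leaves the cloud is cheaper there, but heavy pull-back flips the answer to on-prem, which is exactly the profiling decision described above.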
And because of what we do with our array family, we have products that tend to be expensive because of what they do, products that are mid-priced, and products that are perfect for Herzog's Bar and Grill, or maybe for 5,000 different bank branches, because that bank is not going to buy expensive storage for every branch. They have a small array there in case the core goes down. Of course, when you or I go in to get a check or transact, if the core data center is down, whether that's Wells Fargo, B of A or Bank of Tokyo, they're all still transacting on that small array. Well, you don't want to spend a lot of money for that. You need a good, reliable all-flash array with the right RAS capability, the right availability capability; that's what you need. And we can do that.

The other thing is we have very much cloudified everything we do. We can tier to the cloud, we can back up to the cloud, and with object storage we can place it in the cloud. So we've made the cloud, if you will, a seamless tier in the storage infrastructure for our customers, whether it be backup data, archive data or primary data, and made it very easy to do. Remember, with the downturn in '08 and '09, a lot of storage people lost their jobs. And while IT headcount is back up to where it used to be, in fact it's actually exceeded it, if there were 50 storage guys at Company X and they had to let go of 25 of them, they didn't hire 25 storage guys back, but they've got 10 times the data. So they probably have two more storage guys; they went from 25 to 27, except they're managing 10 times the data. So automation, seamless integration with clouds, and being multi-cloud, supporting hybrid clouds, is a critical thing in today's storage world.

So you've talked a little bit about how data format issues still impact storage decisions. You've talked about how disasters, or availability, still impact storage decisions, and certainly cost does. But you've also talked about some of the innovative things that are happening.
Security, encryption, evolved backup and restore capabilities, AI, and how that's going to play. What are some of the key things your customer base is asking for that are really driving some of your portfolio decisions?

Sure. Well, beyond making sure we integrate with every cloud and make it seamless, the other aspect is AI. AI has taken off: machine learning, big data, all of those. And there it's all about having the right platform from an array perspective, but then marrying it with the right software. So for example, our scale-out file system, Spectrum Scale, can go to exabyte class. In fact, the two fastest supercomputers on this planet have almost half an exabyte of IBM Spectrum Scale for big data analytics and machine learning workloads. At the same time, you need to have object store: if you're generating a huge data set in the AI world, you want to be able to put it out there. We also now have Spectrum Discover, which collects metadata, the data about the data, and allows an AI app, a machine learning app or an analytics app to access that metadata through an API. So that's one area: cloud, then AI, is a very important aspect.

And of course cyber resiliency and cyber security are critical. Everyone thinks, "I've got to call a security company": the IBM Security division, RSA, Check Point, Symantec, McAfee, all these things. But the reality is, as you guys have noted, 98% of all enterprises are going to get broken into. So while they're in your house, they can steal you blind before the cops show up, like in the old movie: what are they doing? They're loading up the truck before the cops show up. Well, what if the cops didn't show for 20 minutes, but they couldn't steal anything? Or the TV was tied to your fingerprint, so they couldn't use the TV, so they couldn't steal it? That's what we've done. So whether it be encryption everywhere, we can encrypt backup sets.
We can encrypt data at rest. We can even encrypt arrays that aren't ours with our Spectrum Virtualize family. Airgapping: if you get hit with ransomware or malware, you can airgap to tape. We've also created airgapping out to a cloud snapshot. And we have a product called Safeguarded Copy, which creates what I'll call a faux airgap in the mainframe space, but provides that protection so it's almost as if it were airgapped, even though it's on an array. So that's for ransomware and malware. Beyond that, our backup products, when they see unusual activity, will flag the backup or storage admin and say there's unusual activity. Why? Because ransomware and malware generate unusual activity on backup data sets in particular. So it flags it. Now, we don't go out and say, "by the way, that's Herzog ransomware or Peter Burris ransomware," but we do say something is wrong, you need to take a look.

So we've integrated that sort of cyber resiliency and cyber security into the entire storage portfolio. That doesn't mean we solve everything, which is why, in an overall security strategy, you've got the Great Wall of China to keep the enemy out, you've got what I call chase software to get the bad guy once he's in the house, and the cops coming to get the bad guy, but you've also got to be able to lock everything down. And we do that. So a comprehensive security and resiliency strategy involves not only your security vendor but also your storage vendor. And IBM's got the right cyber resiliency and security technology on the storage side to marry up with whichever security vendor customers choose.

Now, you mentioned a number of things associated with how an enterprise is going to generate greater leverage, greater value, out of the data it already has. You mentioned encryption, and you mentioned being able to look at metadata for AI applications.
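The "unusual activity" flag Eric describes can be illustrated with a toy anomaly check on backup-job statistics. This is a hedged sketch of the general technique (mass encryption tends to inflate nightly change rates and defeat deduplication, since encrypted files look like new, incompressible data), not how Spectrum Protect actually implements its detection; all thresholds and field names here are made-up assumptions.

```python
# Toy anomaly flag on nightly backup stats. Ransomware-encrypted files look
# like brand-new, incompressible data, so the changed-data volume spikes and
# the dedup ratio collapses versus the recent baseline.
from statistics import mean, stdev

def flag_unusual(history, tonight, z_limit=3.0, min_dedup=0.5):
    """history: past nightly changed-GB figures; tonight: (changed_gb, dedup_ratio)."""
    changed_gb, dedup_ratio = tonight
    mu, sigma = mean(history), stdev(history)
    z = (changed_gb - mu) / sigma if sigma else 0.0
    reasons = []
    if z > z_limit:
        reasons.append(f"changed data {changed_gb} GB is {z:.1f} sigma above baseline")
    if dedup_ratio < min_dedup:
        reasons.append(f"dedup ratio {dedup_ratio:.2f} has collapsed")
    return reasons  # non-empty -> alert the backup/storage admin

baseline = [40, 35, 42, 38, 41, 39, 37]   # normal nightly change, in GB
print(flag_unusual(baseline, (45, 2.8)))  # quiet night -> []
print(flag_unusual(baseline, (400, 0.1))) # encryption event -> two warnings
```

Note that, as in the interview, the check doesn't name the malware; it only says "something is wrong, take a look."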
As we move to a more software-driven world of storage, where physical volumes can be made more virtual so you can move them around different workloads and associate the data more easily, tell us a little bit about how data movement becomes an issue in the storage world. Storage has always been associated with "it's here," but increasingly, because of automation, because of AI, because of what businesses are trying to do, it's becoming more associated with intelligent, smart, secure, optimized movement of data. How is that starting to impact the portfolio?

So we look at that as data mobility, and data mobility can be a number of different things. For example, we already mentioned we treat clouds as transparent tiers. We can back up to cloud; that's data mobility. We also tier data. We can tier data within an array with our Spectrum Virtualize product. We can tier block data across about 450 arrays, most of which don't carry an IBM logo. We can tier from IBM to EMC, EMC could then tier to HDS, and we do that on arrays that are ours and arrays that aren't. So in that case, what you're doing is looking for the optimal price point and feature set, and you move data around, all transparently. And it's all got to be automated. In the old days, we thought we had Nirvana when tiering automatically moved the data once it was 30 days old. What if instead we automatically move data with our Easy Tier technology through AI? When the data is hot, it moves it to the hottest tier; when the data is cold, it puts it out to the lowest-cost tier. That's real automation leveraging AI technology. Same thing with something simple: migration. How much money have all the storage companies made on migration services? What if you could do transparent block migration in the background, on the fly, without ever taking your servers down? We can do that. And what we do is so intelligent that we always favor the data set.
So when the data set is being worked on, migration slows down. When activity on the data set slows down, guess what? Migration picks up. But the point is data mobility, in this case from an old array to a new array. So whether it be migrating data, tiering data, or moving data out to the cloud, whether it be primary data, backup data, or object data for archive, the bottom line is we've infused not only cloudification into our storage portfolio but mobility as well. That does of course include cloud, but tiering most likely stays on premises: you could tier to the cloud, but tiering from an all-flash array to a cheap 7200 RPM array saves a lot of money, and we can do that using AI technology with Easy Tier. All examples of moving data around transparently, quickly and efficiently to save cost: CAPEX, using 7200 RPM arrays to cut costs, but also OPEX, the storage admin. There aren't 100 storage admins at Burris Incorporated. You had to let them go; you've hired 100 people back, but you hired them all for DevOps, so you have 50 guys in storage, right?

Actually there are, but I'm a lousy businessman, so I'm not going to be in business as long.

One more question, Eric. Look, you're an old-style road warrior; you're out with customers a lot. Increasingly, and I know this group has talked about it, you're finding yourself trying to explain to business people, not just IT people, how digital business, data and storage come together. When you're having these conversations with executives on the business side, how does this notion of data services get discussed? What are some of those conversations like?

Well, I think the key thing you've got to point out is that storage guys love to talk speeds and feeds. I'm old school; I can still talk TPI and BPI on hard drives, and no one does that anymore, right?
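The "always favor the data set" behavior described above can be sketched as a simple rate governor: the background copy engine yields bandwidth whenever foreground application I/O is busy, and speeds up when the host goes quiet. This is a hypothetical illustration of the general idea, not IBM's actual migration algorithm; every number and threshold is an assumption.

```python
# Toy migration throttle: background copy always yields to foreground I/O.
# Capacities and thresholds are illustrative assumptions only.

def migration_rate(host_iops, link_capacity_mbps=800,
                   busy_iops=20_000, floor_mbps=50):
    """Scale background copy bandwidth down as host activity rises."""
    load = min(host_iops / busy_iops, 1.0)    # 0.0 = idle .. 1.0 = saturated
    rate = link_capacity_mbps * (1.0 - load)  # yield to the application
    return max(rate, floor_mbps)              # but always creep forward

for iops in (0, 5_000, 15_000, 25_000):
    print(f"host {iops:>6} IOPS -> migrate at {migration_rate(iops):.0f} MB/s")
```

The floor keeps the migration from stalling entirely under sustained load, so the cutover from old array to new still completes without ever taking the servers down.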
But when you're talking to the CEO or the CFO or the line-of-business owner, it's all about delivering data at the right performance level for your applications, workloads and use cases, with the right resiliency and the right availability for those applications, workloads and use cases. So you don't talk the storage speeds and feeds you would with a storage admin, or maybe the VP of infrastructure at a Fortune 500. You talk about the data: keeping the data secure, keeping the data reliable, keeping the data at the right performance. So if it's the type of workload that needs performance, let's take the easy one, flash. Why do I need flash? Well, Mr. CEO, do you use logistics? Of course we do. Do you use SAP? Oh, how long does that logistics workload take? Oh, it takes us about 24 hours to run. What if I told you you could run that every night in an hour? That's the power of flash. So you translate what you and I are used to, the storage nerdiness, into business value. In this case, running that SAP workload in an hour versus 24 has a real business impact, and that's the way you've got to talk about storage these days. When you're talking to the storage admin, yes, you want to talk latency and IOPS and bandwidth, but the CEO is just going to turn his nose up. But when you say, "I can run that MongoDB workload, I can do this or do that, and what was 24 hours now takes an hour or half an hour," that translates to real value out of the data. And that's what they're looking for: how to extract value from the data. If the data isn't performing, you get less value. If the data isn't there, you clearly have no value. And if the data isn't available enough, you're down part of the time, and that matters if you're doing truly digital business. So take Herzog's Bar and Grill: actually, everything's done digitally.
So before you get that pizza or that cigar, you have to order it online. You go to my website, which has a database underneath; of course it can handle the transactions, right? I've got to take the credit card, got to get the orders right. If that's down half the time, my business is down. And that's an example of taking IT and translating it into something as simple as a bar and grill. And everyone's doing it these days. So when you ask, do you want that website up all the time? Do you need your order-entry system up all the time? Do you need this or that? Then they actually get it. And then obviously you make sure the application's running quickly, swiftly and smoothly. And storage is, if you will, that critical foundation underneath everything. It's not the fancy windows, it's not the fancy paint, but if that foundation isn't right, what happens? The whole building falls down. And that's exactly what storage delivers, regardless of the application workload: that critical foundation of performance, availability and reliability. That's what they need. When you have that, all applications run better, and your business runs better.

Yeah, and the one thing I'd add to that, Eric, is that increasingly the conversations we're having are about options. One of the advantages of a large portfolio, or a platform approach, is that from the things you're doing today, you'll discover new things you didn't anticipate, and you want the option to be able to pursue them quickly.

Absolutely. A very, very important thing.

So: applications, workloads, use cases, multi-cloud, storage portfolio. Eric, thanks again for coming on theCUBE. Always love having you.

Great, thank you.

And once again, I'm Peter Burris, talking with Eric Herzog, CMO and VP of Worldwide Storage Channels at IBM Storage. Thanks again for watching this CUBE Conversation. Until next time.