From theCUBE Studios in Palo Alto and Boston, connecting with thought leaders all around the world, this is a CUBE Conversation.

Hey, welcome back everybody. Jeff Frick here with theCUBE, coming to you from our Palo Alto studios today for a CUBE Conversation. We've got a couple of CUBE alumni, veterans who have been on a lot of times. They've got some exciting announcements to tell us about today, so we're excited to jump into it. Let's go. First, we're joined by Eric Herzog. He's the CMO and VP of Worldwide Storage Channels for IBM Storage, a many-time CUBE alum. Eric, great to see you.

Great, thank you very much for having us today.

Absolutely. And joining him, I think all the way from North Carolina, Sam Werner, VP of Offering Management and Business Line Executive, Storage, for IBM. Sam, great to see you as well.

Great to be here, thank you.

Absolutely, so let's jump into it. So Sam, you're in North Carolina. I think that's where the Red Hat people are. You guys have Red Hat. A lot of conversations about containers. Containers are going nuts. We know containers are going nuts, and it was Docker and then Kubernetes and really a lot of traction. Wonder if you can reflect on what you see from your point of view and how that impacts what you guys are working on.

Yeah, you know, it's interesting. Everybody hears about containers constantly. Obviously it's a hot part of digital transformation. What's interesting about it, though, is most of those initiatives are being driven out of business lines. I spend a lot of time with the people who do infrastructure management, particularly the storage teams, the teams that have to support all of that data in the data center. And they're struggling, to be honest with you. These initiatives are coming at them from application developers, and they're being asked to figure out how to deliver the same level of SLAs, the same level of performance, governance, security, recovery times, availability. And it's a scramble for them, to be quite honest.
They're trying to figure out how to automate their storage. They're trying to figure out how to leverage the investments they've made as they go through a digital transformation. And keep in mind, a lot of these initiatives are accelerating right now because of this global pandemic we're living through. I don't know that the strategy has necessarily changed, but there's been an acceleration. So all of a sudden these storage people, still trying to get up to speed, are being thrown right into the mix. So we're working directly with them. You'll see in some of our announcements, we're helping them get on that journey and provide the infrastructure their teams need.

And a lot of this is driven by multi-cloud and hybrid cloud, which we're seeing a really aggressive move to. Before, it was kind of this rush to public cloud, and then everybody figured out, well, maybe public cloud isn't necessarily right for everything. And it's kind of horses for courses, if you will, with multi-cloud and hybrid cloud, another kind of complexity thrown into the storage mix that you guys have to deal with.

Yeah, and that's another big challenge. Now, in the early days of cloud, people were lifting and shifting applications trying to get lower CAPEX, and they were also starting to deploy DevOps in the public cloud in order to improve agility. And what they found is there were a lot of challenges with that. They thought lifting and shifting an application would lower their capital costs, but the TCO actually went up significantly. Where they started building new applications in the cloud, they found they were becoming trapped there, and they couldn't get the connectivity they needed back into their core applications. So now we're at this point where they're trying to really transform the rest of it, and they're using containers to modernize the rest of the infrastructure and complete the digital transformation. They want to get into a hybrid cloud environment.
What we found is enterprises get 2.5x more value out of their IT when they use a hybrid multi-cloud infrastructure model versus an all-public-cloud model. So what they're trying to figure out is how to piece those different components together. So you need a software-driven storage infrastructure that gives you the flexibility to deploy in a common way and automate in a common way, both in a public cloud and on premises. And that's what we're working on at IBM and with our colleagues at Red Hat.

So Eric, you've been in the business a long time, and it's amazing how it just continues to evolve, this kind of unsexy thing under the covers called storage, which is so foundational. Data was once maybe a liability, because I had to buy a bunch of storage; now it is the core asset of the company. And in fact, the valuations of a lot of companies are based on the value of that data and what they can do with it. So clearly you've got a couple of aces in the hole, you always do. Tell us what you guys are up to at IBM to take advantage of this opportunity.

Well, what we're doing is launching a number of solutions for various workloads and applications, built with a strong container element. For example, a number of solutions about modern data protection and cyber resiliency. In fact, a year ago last week, Sam and I were on stage and one of our developers did a demo of us protecting data in a container environment. So now we're extending that beyond what we showed a year ago. We have other solutions that involve what we do with AI, big data, and analytic applications that are in a container environment. What if I told you that, instead of having to replicate and duplicate and have another set of storage right with the OpenShift container configuration, you could connect to an existing external exabyte-class data lake?
So that not only could your container apps get to it, but the existing apps, whether they be bare metal or virtualized, all of them could get to the same data lake. Wow, that's a concept: saving time, saving money, one pool of storage that'll work for all those environments. And now that containers are being deployed in production, that's something we're announcing as well. So we've got a lot of announcements today across the board, most of which are container-related and some of which are not. For example, LTO-9, the latest high-performance, high-capacity tape; we're announcing some solutions around that. But the bulk of what we're announcing today is really about what IBM is doing to continue to be the leader in container storage support.

And it's great, because you talked about a couple of very specific applications that we hear about all the time. One obviously is on the big data and analytics side, as that continues to chase that nirvana of ultimately getting the right information to the right people at the right time so they can make the right decision. And the other piece you talked about was business continuity and data replication, to bring people back. And one of the hot topics we've talked to a lot of people about now is this shift in the security threat around ransomware, and the fact that these guys are a little bit more sophisticated and will actually go after your backup before they let you know that they're into your primary storage. So these are two really important market areas where we see continued activity with all the people we talk to every day. You must be seeing the same thing.

Absolutely, we are. You know, containers are the wave. I'm a native Californian and I'm coming to you from Silicon Valley, and you don't fight the wave, you ride it. So at IBM, we're doing that. We've been the leader in container storage. We, as you know, way back when, invented the hard drive, which is the foundation of almost this entire storage industry.
And we were responsible for that. So we're making sure that, as containers are the coming wave, we are riding that wave and doing the right things for our customers and for the channel partners that support those customers. Obviously, with this move to containers, there are gonna be some people searching for a new vendor, and that's something that's gonna go right into our wheelhouse because of the things we're doing. And some of our capabilities, for example with our FlashSystem arrays and our Spectrum Virtualize software, mean we're actually gonna be able to support CSI snapshots not only for IBM storage; our Spectrum Virtualize product supports over 500 different arrays, most of which aren't ours. So if you've got that old EMC VNX2, or that HPE 3PAR or Nimble, or all kinds of other storage, if you need CSI snapshot support, you can get it from IBM with our Spectrum Virtualize software that runs on our FlashSystem, which of course cuts CAPEX and OPEX in a heterogeneous environment, but gives them that advanced container support that they don't get because they're on an older product from another vendor. We're making sure that we can pull our storage, and even our competitors' storage, into the world of containers and do it in the right way for the end user.

That's great. Sam, I want to go back to you and talk about the relationship with Red Hat. I think it was about a year ago, I don't remember exactly when, that IBM purchased Red Hat. Clearly you guys have been working very closely together. What does that mean for you? You've been in the business for a long time, you've been at IBM for a long time, to have a partner kind of in bed with you in Red Hat, bringing some of their capabilities into your portfolio.
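An aside on the CSI snapshot support Eric mentions above: in Kubernetes, a storage driver that supports CSI snapshots is exercised through the standard VolumeSnapshot API, regardless of which array sits underneath. A minimal sketch in Python, building the manifest as a plain dict (the claim name and snapshot class name are hypothetical placeholders, not tied to any IBM product):

```python
# Build a minimal CSI VolumeSnapshot manifest as a plain dict.
# In a real cluster you would submit this through the Kubernetes API or
# write it out as YAML for `kubectl apply`; here we only construct it.

def make_volume_snapshot(name: str, pvc_name: str, snapshot_class: str) -> dict:
    """Return a VolumeSnapshot manifest for the given PersistentVolumeClaim."""
    return {
        "apiVersion": "snapshot.storage.k8s.io/v1",
        "kind": "VolumeSnapshot",
        "metadata": {"name": name},
        "spec": {
            # The snapshot class selects the CSI driver (and thus the
            # backing array) that takes the actual point-in-time copy.
            "volumeSnapshotClassName": snapshot_class,
            "source": {"persistentVolumeClaimName": pvc_name},
        },
    }

snap = make_volume_snapshot(
    name="orders-db-snap-1",
    pvc_name="orders-db-pvc",          # hypothetical claim name
    snapshot_class="block-snapclass",  # hypothetical snapshot class
)
print(snap["metadata"]["name"])
```

The point of the abstraction is exactly what Eric describes: the application asks for a snapshot in one standard way, and the driver behind the snapshot class decides how the underlying array actually performs it.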
It's been an incredible experience, and I always say "my friends at Red Hat" because we spend so much time together. We're now leveraging a community that's really on the front edge of this movement to containers. They bring that, along with their experience around storage and containers, together with the years and years of enterprise-class storage delivery that we have in the IBM Storage portfolio. We're bringing those pieces together, and this is a case of truly one plus one equals three. An example you'll see in this announcement is the integration of our data protection portfolio with their container-native storage. We allow you, in any environment, to take a snapshot of that data. This move towards modern data protection is all about doing data protection in a different way, which is about leveraging snapshots: taking instant copies of data that are application-aware, allowing you to reuse and mount that data for different purposes, and being able to protect yourself from ransomware. Our data protection portfolio has industry-leading ransomware protection and detection in it, so we'll actually detect it before it becomes a problem. We're taking that industry-leading data protection software and integrating it into Red Hat container-native storage, giving you the ability to solve one of the biggest challenges in this digital transformation, which is backing up your data now that you're moving towards stateful containers and persistent storage. So that's one area where we're collaborating. We're also working on ensuring that the storage arrays Eric was talking about integrate tightly with OpenShift, and that they also work with OpenShift Container Storage, a cloud-native storage portfolio from Red Hat. So we're bringing these pieces together, and on top of that, we're doing some really interesting things with licensing.
We allow you to consume the Red Hat storage portfolio along with the IBM software-defined storage portfolio under a single license, and you can deploy the different pieces you need under that one license. So you get ultimate investment protection and the ability to deploy anywhere. So I think we're adding a lot of value for our customers and helping them on this journey.

Yeah. Eric, I wonder if you can share your perspective on multi-cloud management. I know that's a big piece of what you guys are behind, and it's a big piece of the real world as we've gotten through the hype: now we're into production, it is a multi-cloud world, and you've got to manage this stuff, it's all over the place. I wonder if you could speak to how that challenge factors into your design decisions and how you guys are thinking about the future.

Well, we've done this in a couple of ways in things that are coming out in this launch. First of all, IBM has produced, with a container-centric model, what we call the multi-cloud manager: the IBM Cloud Pak for Multicloud Management. That product is designed to manage multiple clouds, not just the IBM Cloud, but Amazon, Azure, et cetera. What we've done is taken our Spectrum Protect Plus and integrated it into the multi-cloud manager. What that means, to save time, to save money, and to make it easier to use: when the customer is in the multi-cloud manager, they can actually select Spectrum Protect Plus, launch it, and then start to protect data. So that's one thing we've done in this launch. The other thing we've done is integrate the capability of IBM Spectrum Virtualize, running on a FlashSystem, to support OCP, the OpenShift Container Platform, in a clustered environment. So what we can do there is on premises. Say there really was an earthquake in Silicon Valley right now, and that OpenShift cluster is sitting on a server.
The server just got crushed by the roof when it caved in. So you wanna make sure you've got disaster recovery. What we can do is take that OpenShift Container Platform cluster and support it with our Spectrum Virtualize software running on our FlashSystem, just like we can with heterogeneous storage that's not ours; in this case, we're doing it with Red Hat. And then what we can do is provide disaster recovery and business continuity to different cloud vendors. Not just to IBM Cloud but to several cloud vendors, we can give them the capability of replicating and protecting that cluster to a cloud configuration. So if there really was an earthquake, they could then go to the cloud, recover that Red Hat cluster to a different data center, and run it on-prem. So we're not only doing the integration with the multi-cloud manager, which is multi-cloud-centric, allowing ease of use with our Spectrum Protect Plus, but in the case of a really tough situation, a fire in a data center, an earthquake, a hurricane, whatever, the Red Hat OpenShift cluster can be replicated out to a cloud with our Spectrum Virtualize software. So in both cases, these are multi-cloud examples, because in the first one, the multi-cloud manager is designed for and does support multiple clouds. In the second example, we support multiple clouds with our Spectrum Virtualize for Public Cloud software, so you can take that OpenShift cluster and replicate it not just with one cloud vendor, but with several. So we're showing that multi-cloud management is important, and we're leveraging that in this launch with a very strong element of container-centricity.

I just want to add, and I'm glad you brought that up, Eric, that this whole multi-cloud capability goes beyond the Spectrum Virtualize family. I can say the same for our Spectrum Scale family, which is our storage infrastructure for AI and big data.
We actually in this announcement have containerized the client, making it very simple to deploy in a Kubernetes cluster. But one of the really special things about Spectrum Scale is its active file management. This allows you to build out a file system not only on premises for your on-prem Kubernetes cluster; you can actually extend that to a public cloud, and it automatically will extend the file system. If you were to go into a public cloud marketplace, and it's available in more than one, you could go in there and click deploy. For example, in the AWS Marketplace: click deploy, and it'll deploy your Spectrum Scale cluster. You've now extended your file system from on-prem into the cloud. If you need to access any of that data, you can access it, it's automatically cached locally, and we'll manage all of the file access for you.

It's an interesting kind of paradox between the complexity of what's going on in the backend and really trying to deliver simplicity on the front end, again, this ultimate goal of getting the right data to the right person at the right time. You just had a blog post recently, Eric, in which you talked about how every piece of data isn't equal. And I think it's really highlighted in this conversation we just had about recovery: how you prioritize and how you think about your data, because the relative value of any particular piece might be highly variable, which should drive the way that you treat it in your system. So I wonder if you can speak a little bit to helping people think about data in the right way, as they have all their operational data, which they've always had, but now they've got all this unstructured data coming in like crazy, and all data isn't created equal, as you said. And if there is an earthquake or there is a ransomware attack, you need to be smart about what you have available to bring back quickly, and what's maybe not quite so important.

Well, I think the key thing, let me go to a couple of modern data protection terms.
These are two very technical terms. One is recovery time: how long does it take you to get that data back? And the second one is the recovery point: what point in time are you recovering the data from? And the reason those are critical is that when you look at your data sets, whether you replicate, you snap, or you do a backup, the key thing you've got to figure out is: what is my recovery time, how long is it going to take me, and what's my recovery point? Obviously in certain industries you want to recover as rapidly as possible, and you also want to have the absolute most recent data. So once you know what it takes you to do that from an RPO and an RTO perspective, recovery point objective and recovery time objective, then you need to look at your data sets and ask: what does it take to run the company if there really was a fire and your data center was destroyed? So you take a look at those data sets and you see which ones you need to recover first to keep the company up and rolling. So let's take an example: the sales database or the support database. I would say those are pretty critical to almost any company, whether you be a high-tech company, a furniture company, or a delivery company. However, there is also probably a database of assets. For example, IBM is a big company, we have buildings all over. Well, guess what? We don't lease a chair or a table or a whiteboard, we buy them. Those are physical assets that the company has to pay for, do write-downs on, and all this other stuff. They need to track it. If we close a building, we need to move the desks to another building, right? Even if we're leasing a building, the furniture is ours, right? So does an asset database need to be recovered instantaneously? Probably not, so we should focus on other things first. Let's say I'm a bank. Banks are both online and brick and mortar. I happen to be a Wells Fargo person, so guess what?
There are Wells Fargo banks, two of them, in the city I'm in, okay? So now the assets are the money. In this case, don't think of the brick and mortar of the building or the desks in there; now you're talking financial assets, or their high-velocity trading apps. Those things need to be recovered almost instantaneously. And that's what you need to do when you're looking at data sets: figure out what's critical to the business to keep it up and rolling, then what's the next most critical. And you do it basically the way you would tier anything. What's the most important thing? What's the next most important thing? It doesn't matter whether it's how you approach your job or how you used to approach school: what are the classes I have to get an A in, what classes can I not get an A in, depending on what your major was, all that sort of stuff. You're setting priorities, right? And with data sets, since data is the most critical asset of any company, whether it's a global Fortune 500 or whether it's Herzog's Cigar Store, all of those assets, that data, are the most valuable. So you've got to make sure you recover what you need as rapidly as you need it, but you can't recover all of it instantly; there's just no way to do that. So that's why you really rank the importance of the data. You do the same with malware and ransomware: if you have a malware or ransomware attack, certain data you need to recover as soon as you can. In fact, there was a recent example. Jeff, you're in Silicon Valley as well; you probably read about the University of California, San Francisco, which ended up having to pay over a million dollars of ransom because some of the data related to COVID research was held hostage. UCSF is the healthcare center for the University of California in Northern California. They were working on COVID, and guess what, the stuff was held for ransom. They had no choice but to pay, and they really did pay. This was around the end of June of this year.
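To make the ranking Eric describes concrete, here is a minimal sketch of tiering recovery priorities by RTO and RPO. The dataset names and minute values below are hypothetical, purely for illustration of the idea:

```python
# Sketch: order datasets for recovery by RTO (how fast it must come back),
# breaking ties by RPO (how much recent data loss is tolerable).
# Names and numbers are made up for illustration.

def recovery_plan(datasets: list) -> list:
    """Return dataset names ordered so the most urgent recoveries come first."""
    ordered = sorted(datasets, key=lambda d: (d["rto_min"], d["rpo_min"]))
    return [d["name"] for d in ordered]

datasets = [
    {"name": "asset-db",    "rto_min": 2880, "rpo_min": 1440},  # desks, chairs: can wait days
    {"name": "sales-db",    "rto_min": 15,   "rpo_min": 5},     # keeps the company running
    {"name": "trading-app", "rto_min": 1,    "rpo_min": 0},     # near-instant recovery needed
]
print(recovery_plan(datasets))  # trading app first, asset database last
```

Sorting by RTO first answers "what must come back first to keep the company rolling"; RPO then captures how fresh each recovered copy has to be, which in turn dictates whether a snapshot, a replica, or a tape archive is the right protection mechanism.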
So, okay, you don't really want to do that. So you need to look at everything, from malware and ransomware to the importance of the data. And that's how you figure this stuff out, whether it be in a container environment, a traditional environment, or a virtualized environment. And that's why data protection is so important. And with this launch, not only are we doing the data protection we've been doing for years, but we're now taking it to the heart of the new wave, which is the wave of containers.

Let me add just quickly on that, Eric. So think about those different cases you talked about. For your mission-critical data, you're probably going to want snapshots that can be recovered near-instantaneously. And then for some of your data, you might decide you want to store it out in the cloud. With Spectrum Protect, we just announced our ability to now store data in Google Cloud, in addition to the already supported AWS, Azure, IBM Cloud, and various on-prem object stores. So we already provided that capability. And then in this announcement we're talking about LTO-9. You've also got to be smart about which data you need to keep for long periods of time according to regulation, or which is just important to archive. You're not going to beat the economics, nor the safety, of storing data out on tape. But like Eric said, if all of your data is out on tape and you have an event, you're not going to be able to restore quickly enough, at least the mission-critical things. So those are the things you need to keep in snapshots. And that's one of the main things we're announcing here for Kubernetes environments: the ability to quickly take application-aware snapshots of your mission-critical data in your Kubernetes environments, so it can be very quickly recovered.

That's good. So I'll give you the last word and then we're going to sign off, we're almost out of time. But I do want to get this in: it's 2020, and if I didn't ask a COVID question, I would be in big trouble.
So, you know, you've all seen the memes and the jokes about COVID really being an accelerant to digital transformation, not necessarily a change, but certainly a huge accelerant. I mean, you guys have a product roadmap that's baked pretty far in advance, but I wonder if you can speak to, from your perspective, as COVID has accelerated digital transformation, and you guys are so foundational to executing that, what it has done in terms of what you're seeing with your customers and the demand. And are you seeing this kind of validation of the accelerated move to these better types of architectures? Let's start with you, Sam.

Yeah, you know, and I think you said this, but I mean, the strategy really hasn't changed for the enterprises, but of course it is accelerating. And I see storage teams more quickly getting into trouble trying to solve some of these challenges. So we're working closely with them. They're looking for more automation. They have fewer people in the data center on premises, so they're looking to do more automation and simplify the management of the environment. We're doing a lot around Ansible to help them with that, and we're accelerating our roadmaps around that sort of integration and automation. They're looking for better visibility into their environments, so we've made a lot of investments around our Storage Insights SaaS platform that allows them to get complete visibility into their data center. And not just their data center; we also give them visibility into the storage they're deploying in the cloud. So we're making it easier for them to monitor, manage, and automate their storage infrastructure. And then of course, if you look at everything we're doing in this announcement, it's about enabling our software and our storage infrastructure to integrate directly into these new Kubernetes initiatives.
That way, as this digital transformation accelerates and application developers are demanding more and more Kubernetes capabilities, they're able to deliver the same SLAs, the same level of security, and the same level of governance that their customers expect from them, but in this new world. So that's what we're doing. If you look at our announcement, you'll see that across the sets of capabilities that we're delivering here.

Eric, we'll give you the last word, and then we're gonna go to Eric's cigar shop as soon as this is over.

So it's clearly all about storage made simple in a Kubernetes environment, in a container environment, whether it's block storage, file storage, or object storage. IBM's goal is to offer ever more sophisticated services for the enterprise and, at the same time, make them easier and easier to use and to consume. If you go back to the old days, a storage admin managed X amount of gigabytes, maybe terabytes. Now the same admin is managing 10 petabytes of data. So the data explosion is real across all environments: container environments, even old bare metal, and of course the not-quite-so-new-anymore virtualized environments. The admins need to manage that more and more easily, with automated point and click. Take AI-based automated tiering: with our Easy Tier technology, data is automatically moved to the fastest tier when it's hot, and pushed down to a slower tier when it's not as hot, but it's all automated. You point and you click. Or take our migration capabilities, which we built into our software. I buy a new array, I need to migrate the data: you point, you click, and we do automatic, transparent migration in the background, on the fly, without taking the servers or the storage down. And we always favor the application workload, so if the application workload is heavy at certain times of day, we slow the migration.
At night, for the sake of argument, if it's a company that is not truly, heavily 24-by-seven and things slow down, we accelerate the migration. It's all about automation. We've done it with Ansible in this launch, and we've done it with additional integration with other platforms. So our Spectrum Scale, for example, can use the OpenShift management framework to configure and grow our Spectrum Scale or Elastic Storage System clusters. We've done it with our Spectrum Protect Plus, as you saw, with integration into the multi-cloud manager. So for us, it's storage made simple: incredible new features all the time, but as we add them, we make sure it's easier and easier to use. And in some cases, like with Ansible, it's not even the real storage people; God forbid that DevOps guy messes with the storage and loses that data. By using something like Ansible and that Ansible framework, we make sure that essentially the DevOps guy, the test guy, the analytics guy doesn't lose the data and screw up the storage. And that's a big, big issue. So it's all about storage made simple in the right way, with incredible enterprise features that are still easy to use. We're trying to make everything essentially like your iPhone, that easy to use. That's the goal. And with a lot fewer storage admins in the world than there have been, and incredible storage growth every single year, you'd better make it easy for the same person to manage all that storage, because it's not shrinking. Someone who's sitting on 50 petabytes today will be at 150 petabytes next year, and five years from now will be sitting on an exabyte of production data. And they're not gonna hire tons of admins; it's gonna be the same two or four people that were doing the work, and now they gotta manage an exabyte. Which is why this storage made simple is such a strong effort for us, with integration with the Kubernetes frameworks and what we've done with OpenShift.
Heck, even what we used to do in the old days with vCenter Ops from VMware, VASA, VAAI, all those old VMware tools: we've made sure there's tight integration, easy to use, easy to manage, but with sophisticated features to go with that simplicity. That's really how you manage storage. It's not about making your storage dumb; people want smarter and smarter storage. So you make it smarter, and you make it easy to use at the same time.

Right. Well, great summary, and I don't think I could do a better job, so I think we'll just leave it right there. Congratulations to both of you and the teams for these announcements. I'm sure a whole lot of hard work and sweat went in over the last little while, and continued success. And thanks for the check-in, always great to see you.

Thank you, we love being on theCUBE, as always.

All right, thanks again. He's Eric, he was Sam, I'm Jeff, you're watching theCUBE. We'll see you next time. Thanks for watching.