From the SiliconANGLE Media office in Boston, Massachusetts, it's theCUBE. Now, here's your host, Dave Vellante.

Hi everybody, welcome to this CUBE Conversation. My name is Dave Vellante. We're here in the CUBE studios in Marlborough, Massachusetts. We're going to talk about storage and some of the trends that are going on in storage. Things have changed quite dramatically. It's not just about what media you're using today; you've got a lot of other considerations. Cloud, on-prem, and in comes the edge, and that really drives new considerations for customers. Sandip Aurora is here. He's the director of North American Storage and Big Data Solutions at Hewlett Packard Enterprise. He's going to talk to me about some of these trends, the customer point of view, and what HPE is doing to solve some of these problems. Sandip, thanks very much for coming on theCUBE.

Dave, thanks for having me. Super excited.

So you heard my little narrative up front about some of the big-picture trends. What do you see as some of the tectonic shifts in the storage marketplace?

Yeah, Dave, so listen, we've traveled around the continent here, and I spend a lot of time with customers in North America. What I hear from customers is that the center of their universe revolves around being able to map to the cloud journey, and what that means for their data. I look at their cloud operating model and I map that to HPE's own point of view. Our point of view is bringing the intelligent data platform to our customers. And when we talk about mapping the cloud operating model to a customer, what does that really mean for us? When I talk to customers, they tell me three things. It means extreme cost efficiency, super ease of use, and resource optimization: how do you utilize your resources in the best manner?

So let me ask you on that one. Big data is in your title.
And one of the things that we observed early on in the big data days was that it was about bringing five megabytes of code to a petabyte of data. That sounded great, and it was great, but it also caused problems, because now storage is everywhere. I mentioned the edge. I'm sure you're seeing that with customers. There is no more perimeter; storage is just everywhere, wherever you want it to be. So when you talk about the cloud operating model, are you talking about bringing that experience to your data wherever that data lives?

Yeah, it's a great question. It used to be that you had an accounting system, and that had a database, and that was delivering you a ton of data that you could analyze and store and read and write. Now you've got data that's being produced at the edge. You've got point-of-sale systems. You've got autonomous vehicles. You've got data that's being produced on the cloud itself, and you've got data that's being produced at the core. So what we're talking about is not just the automation of bringing that data in, but also how that data is being utilized. And to us, the way we map that challenge is through intelligence.

So let's talk about the breakdown of those three things: cost efficiency, ease of use, and resource optimization. Let's start with cost efficiency. Obviously there's TCO, but there's also the way in which I consume, right? People, I presume, are looking for a different pricing model. Are you hearing that?

Yeah, absolutely. As part of the cost of running their business and being able to operate like a cloud, everybody's looking at a variety of different procurement and utilization models. One of the ways HPE provides a utilization model that can map to their cloud journey, a public cloud journey, is through GreenLake.
The ability to consume data on demand and consume compute on demand, across the entire portfolio of products HPE has, is essentially what a GreenLake journey looks like.

Let's go into ease of use. What do you mean by that? I mean, people think cloud, they think swipe the credit card and start deploying machines. What do you mean by ease of use?

For us, ease of use translates back to how you map to a simpler operating and support model. The support model is the key for customers to be able to realize the benefits of going to the cloud. To get to a simpler support model, we use AIOps. And for us, AIOps means using a product called InfoSight. InfoSight is a product that uses deep learning and machine learning algorithms to look at a wide net of call-home data from physical resources out there, and then take that data and make it actionable. The action behind that is predictiveness: the prescriptiveness of creating automated support tickets and closing automated support tickets without anybody ever having to pick up a phone and call IT support. That InfoSight model is now being expanded across the board to all HPE products. It started with Nimble. Now InfoSight is available on 3PAR. It's available on Synergy. And a recent announcement said it's also available on ProLiant. We expect that InfoSight becomes the glue, the automation and AI glue, that goes across the entire portfolio of HPE products.

So this is a great example of applying AI to data. It's like call home taken to a whole new level, isn't it?

Yeah, it absolutely is. In fact, it uses the call-home data that we've had for a long time with products like 3PAR, which was amazing data, but it wasn't being acted on in an automated fashion. InfoSight takes that data and creates an automation task around it, and many times that automation task leads to a much simpler support experience.
All right, the third item you mentioned was resource optimization. Let's drill down into that. I infer from that there are performance implications, maybe governance, compliance, physical placement. Can you elaborate? Add some color.

Yeah, I think it's all of the above that you just talked about. It's definitely about applying the right performance level to the right set of applications. We call this application-aware storage. Being able to understand which application is creating the data allows us to understand how that data needs to be accessed, which in turn means we know where it needs to reside. One of the things HPE is doing in the storage domain is creating a common storage fabric with the cloud. We call that the fabric for the cloud. The idea is that we have a single layer between the on-premises and off-premises resources that allows us to move data as needed, depending on the application needs and the user needs.

Okay, so that brings me to multi-cloud. It's a hot buzzword now. Some people don't like it, but it's a reality. You've got data on-prem. You want to look like the cloud operating model. You've got data in the cloud, and the edge confuses things even more. So what is your perspective on multi-cloud? And then I have a follow-up for you.

Yeah, for us, multi-cloud means the ability to run your business, whether it's on-premises or off-premises, based on the requirements of the application and the business user. We don't want to force a model down our customers' throats. We want them to have optimization across both models. And the way we do that is using a couple of different products. We've got a product known as Cloud Bank, which maps to StoreOnce. StoreOnce is our purpose-built backup appliance, where a customer can store a backup copy of the data on-premises, and then a backup copy of that on a public cloud like Azure, AWS, or Google.
Similarly, we've got products with Nimble and 3PAR that allow us to have tight integration with both public and private cloud domains. And in the future, the idea is to bring all of that together, where the automation and the orchestration allow customers not to worry about what product they're using, but more about the requirements of the application.

Because sometimes I'm going to want to bring data back. Whether it's a peak and I want to put data into the cloud for bursting, or I want to bring it back for more control, whatever it is, when it comes back, I want to have that cloud operating model. That's where the AIOps fits in, that you were just describing.

Yeah, absolutely.

Okay, so let's get into more specifically what HPE is doing. You referenced some of the things that you and your partners are doing, but what specifically are you doing from the standpoint of products? You mentioned what I call a data plane and a control plane. What do you have there that we can actually buy?

Yeah, absolutely. What we have, as I talked about earlier from an AIOps point of view, is our product called InfoSight. And InfoSight is available to all customers that today use 3PAR, Nimble, or ProLiant servers. As long as you have a valid support contract, it's available to them.

So I remember when HPE acquired Nimble, you said one of the things you were going to do was take that technology and push it across the portfolio. That's something that you've really done in a pretty short timeframe.

We have. And what it does is give us the opportunity now not just to look at call-home data from storage, but also to look at call-home data from the compute side. Then we can correlate the data coming back to have better predictability and outcomes on your data center operations, as opposed to doing it at the layer of infrastructure.

And you also set out a vision of this orchestration layer. Could you talk more about that?
Are we talking about across all clouds, whether it's on-prem or at the edge or in the public cloud?

Yeah, we are. We're talking about making it as simple as possible, where the customers are not necessarily picking and choosing. It allows them to have a strategy that goes across the data center, whether that's a public cloud, building their own private infrastructure, or running on traditional on-premises SAN infrastructure. This cloud fabric vision of ours allows customers to do that.

And what about software-defined storage? Where does that fit into this whole equation?

Yeah, I'm glad you mentioned that, because that is the third tenet of what HPE truly brings to our customers. Software-defined is something that allows us to maximize the utilization of the existing resources that our customers have. So what we've done is partner with a great number of really strong software-defined vendors such as Commvault, Cohesity, Qumulo, and Datera. And we work very closely with the likes of Veeam and Zerto. The goal is to provide our customers with a whole range of options for building a software-defined infrastructure on the Apollo series of products. Apollo servers, or storage products for us, are extremely dense storage products that allow for both cost and resource optimization.

What's the nature of these technology partnerships? Do you do engineering integration, or is it just kind of going to market together?

Yeah, we bundle our partners into three main categories. We've got a set of Complete partners. These Complete partners are a relationship where we do joint reference architectures, we create a joint pricing list, and we bring them into the family. We've got a set of partners as part of the Pathfinder program; those are partners that HPE has made strategic investments in. And then the third set is partners that HPE resells.
So depending on which partner it is, they fall into a different bucket, and we have all sets of resources, including engineering collaboration, to make sure that the customer is buying a solution as opposed to a product.

That's great, Sandip. Thank you. But before we go, how do people learn more?

The way you learn more is to contact your partner and come to Discover. So we'll hopefully see you all at Discover.

And thank you for watching.