Hello everyone and welcome to theCUBE. We're covering the recent news from Hewlett Packard Enterprise, making moves in storage. And with me are Omer Asad, Vice President and General Manager for Primary Storage, HCI and Data Management at HPE, and Sandeep Singh, who's the Vice President of Storage Marketing at Hewlett Packard Enterprise. Gentlemen, welcome back to theCUBE. Great to see you both.

Dave, it's a pleasure to be here.

Always a pleasure talking to you, Dave. Thank you so much.

Oh, it's my pleasure. Hey, so we just watched HPE make a big announcement. And I wonder, Sandeep, if you could give us a quick recap.

Yeah, of course, Dave. In the world of enterprise storage, there hasn't been a moment like this in decades: a point at which everything is changing for data and infrastructure. And it's really coming at the nexus of data, cloud and AI. That's opening up the opportunity for customers across industries to accelerate their data-driven transformation. Building on that, we just unveiled a new vision for data that accelerates the data-driven transformation for customers, edge to cloud. And to pay that off, we introduced a new data services platform that consists of two game-changing innovations. First is the data services cloud console, a SaaS-based console that delivers cloud operational agility for customers, and it's designed to unify data operations through a suite of cloud services. Our second announcement is HPE Alletra. HPE Alletra is a cloud-native data infrastructure portfolio to power your data edge to cloud. It's managed natively with data services cloud console, and it brings that cloud operational model to customers wherever their data lives. These innovations are combined with our industry-leading AIOps platform, HPE InfoSight. Together, these innovations radically simplify data and infrastructure management and bring that cloud operational model to customers.
And they give customers the opportunity to streamline data management across the lifecycle. These innovations are making it possible for organizations across industries to unleash the power of data.

So that's kind of cool. I mean, a lot of the stuff we've been talking about for all these years is sort of this unified layer across all clouds, on-prem, AI injected in. I can tell you're excited, and it sounds like you can't wait to get these offerings into the hands of customers. But I wonder if we can back up for a minute. Omer, maybe you could describe the problem statement that you're addressing with this announcement. What are customers really struggling with? What are their pain points?

Excellent question, Dave. So in my role as the general manager for data management and storage here at HPE, I get the wonderful opportunity to talk to hundreds of customers in a year. And as time has progressed, as the amount of data under organizations' management has continued to increase, what I've noticed is that there are three main themes that continuously emerge and are now bubbling to the top. The first one is that storage infrastructure management itself is extremely complex for customers. There has been leaps-and-bounds progress in managing a single array, or managing two arrays, with a lot of simplification of the UI, and some modern UIs are present. But as the problem gets to scale, as customers acquire more and more assets to store and manage their data on premises, management at scale is extremely complex. Yes, storage has gotten faster. Yes, flash has had a profound effect on performance, availability and latency of access to the data. But infrastructure management, and storage management as a whole, has become a pain for customers. And this is a constant theme as storage lifecycle management comes up, storage refreshes come up, and deploying and managing storage infrastructure at scale comes up.
So that's one of the main problems that I've been seeing as I talk to customers. Now, secondly, a lot of customers are talking about two different elements. One is storage: storage deployment and lifecycle management. And the second is the management of the data that is stored on those storage devices. As the amount of data grows, the silos continue to grow, and customers don't get a single view of the lifecycle of their data. And lastly, one of the biggest things we see is a lot of customers asking: how can I extract value from the data under my management, because they can't seem to parse through these silos? So there is an incredible amount of productivity lost when it comes to data management as a whole, which is fragmented into silos, and from storage management. And when you put these two together, and especially add two more elements to it, hybrid management of data or multi-cloud management of data, the silos and the sprawl just continue. And there is nothing stitching this together at scale. So these are the three main themes that constantly appear in these discussions, in spite of a lot of modern enhancements in storage.

Well, if I could comment, guys: I've been following this industry for a number of years, and you're absolutely right, Omer. I mean, if you look at the amount of money, time and energy that's put into data architectures, people are frustrated; they're not getting enough out of it. And I'd note that the prevailing way in which we've attacked complexity historically is to build a better box. And while that system was maybe easier to manage than the predecessor systems, all it did was create another silo. And then the cloud, despite its apparent simplicity, was another disconnected silo.
So then we threw siloed management solutions at the problem, and we're left with this collection of point solutions with data sort of trapped inside. So I wonder if you could give us your thoughts on that. Do you agree, and what data do you have around this problem statement?

Yeah, Dave, that's a great point. And actually, ESG just recently conducted a survey of over 250 IT decision makers, and it brings one of the perfect validations of the problems that Omer and you just articulated. What it showed is that 93% of the respondents indicated that storage and data management complexity is impeding their digital transformation. On average, organizations have over 23 different data management tools, which just typifies and is a perfect showcase of the fragmentation and complexity that exists in data management. And 95% of the respondents indicated that solving storage and data management complexity is a top-10 business initiative for them, and actually top-five for 67% of the respondents. So it's a great validation across the board.

Well, it's fresh in their minds, too, because pre-pandemic there was probably a mixed picture. It was probably, well, there's complacency, we're not moving fast enough, we have other priorities. But they were forced into this. Now they know what the real problem is; it's front and center. I like that you're putting out there in your announcement the sort of future state that you're envisioning for customers. And I wonder if we could summarize that and share with our listeners the vision that you unveiled. What does it look like, and how are you making it real?

Yeah, overall, we feel very strongly that it's time for customers to reimagine data management. And our vision is that customers need to break down the silos and complexity that plague their distributed data environments.
And they need to experience a new data experience across the board that's going to help them accelerate their data-driven transformation. We call this vision unified data ops. Unified data ops integrates data-centric policies across the board to streamline data management; cloud-native control and operations to bring that agility of cloud, and that operational model, to wherever data lives; and AI-driven insights and intelligence to make the infrastructure invisible. It delivers a whole new experience to customers to radically simplify and bring the agility of cloud to data and data infrastructure, streamline data management, and really help customers innovate faster than ever before. We're making the promise of unified data ops real by transforming the entire HPE storage business to cloud-native, software-defined data services, and that's through introducing a data services platform that expands HPE GreenLake.

I mean, the key word I take away there, Sandeep, is invisible. As a customer, I want you to abstract that complexity away; that underlying infrastructure complexity, I just don't want to see it anymore. Omer, I wonder if we could start with the first part of the announcement. Maybe you can help us unpack data services cloud console. People are immediately going to think it's just another software product to manage infrastructure, but to really innovate, I'm hoping that it's more than that.

Absolutely, Dave, it's a lot more than that. What we have done, fundamentally, at the root of the problem, is we have taken the data and infrastructure control away from the hardware, and through that, we've provided a unified approach to manage the data wherever it lives. It's a full-blown SaaS console, which our customers get onto. And from there, they can deploy appliances, manage appliances, lifecycle appliances, and they don't stop at that, but then go ahead and start to get context around their data.
And all of this is visible through a SaaS console, as every customer onboards themselves, their equipment and their storage infrastructure onto it. Then they can go ahead and define role-based access for different parts of their organization. They can also apply role-based access for HPE GreenLake management personnel, so they can come in and perform operations for the customer through the same console, as just another access-control methodology within it. And then, in addition to that, data mobility is extremely important to our customers: how do you make data available in different hyperscaler clouds if the customer's digital transformation requires that? So again, from that single cloud console, that single data console, which we are naming the data services console, customers are able to curate the data, maneuver the data, and pre-position the data into different hyperscalers. But the beautiful thing is that the entire view of the storage infrastructure, the data with its context stored on top of that, the access-control methodologies and the management framework, is operational from a single SaaS console, to which the customer can decide to give access to whichever management entity or authority comes in to help them. And then what this leads us into is combining these things into a northbound API. So anybody that wants to streamline operational manageability can program against a single API, which will then control the entire infrastructure on behalf of the customer. So in summary, Dave, what this is: it is bringing that cloud operational model that was so desired by each one of our customers into their data centers. And this is what I call an in-place transformation of the management experience for a customer, seamlessly moving them to a cloud operational model for their infrastructure.

Yeah, and you've turned that into essentially an API with a lot of automation, that's great.
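As a rough illustration of the northbound API idea Omer describes, automation targets one fleet-wide console API rather than each array's own management interface. This is a hypothetical sketch only: the endpoint, field names and client class are invented for illustration, not HPE's actual API, and the stub stands in for an authenticated HTTPS call.

```python
from dataclasses import dataclass, field

# Hypothetical stand-in for a "northbound API" client fronting the whole fleet.
# Endpoint names and fields are invented for illustration.

@dataclass
class ConsoleClient:
    fleet: dict = field(default_factory=dict)  # simulates the SaaS backend state

    def post(self, path: str, body: dict) -> dict:
        # A real client would make an authenticated HTTPS call here; this
        # stub just records the request against fleet-wide state.
        if path == "/volumes":
            self.fleet[body["name"]] = body
            return {"status": "accepted", **body}
        raise ValueError(f"unknown endpoint: {path}")

# One call against one API, regardless of which array ends up hosting the volume.
client = ConsoleClient()
result = client.post("/volumes", {"name": "oracle-logs", "size_gib": 512,
                                  "workload": "oltp"})
print(result["status"])
```

The point is the shape of the interaction: scripts express what they want once, and the console decides where in the fleet it lands.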
So, okay, that's kind of how you're trying to change the game here; you're charting new territory. You talk to hundreds and hundreds of customers every year. I wonder if you could paint a picture from the customer perspective. How does their experience actually change?

Right, wonderful. This allows me to break it down into bits and bytes further for you, and I love that. So the way you look at it is: as we talked about earlier, storage management from an array perspective, or maybe a two-array perspective, has been simplified; it's a solved problem. But when you start to imagine deploying hundreds of arrays, and these are large customers with massive amounts of data assets, storage management hasn't scaled along as the infrastructure scales. Yet if you look at the consumer world, you can have hundreds of devices, and the ownership model is completely simple. So the inspiration for solving this problem actually came from the consumerization of IT, and that's a big trend here. So now we're changing the customer's ownership model, the customer's deployment model, and the customer's data management model into a true cloud-first model. Let me give some examples of that, right? So first of all, let's talk about deployment. Previously, deployment has been a massive challenge for our customers. What does deployment look like in this new data services console world? Devices show up, you rack them up, you plug in the power cable, you plug in the network cable, and then you walk out of the data center. The data center administrator or storage administrator will be on the data services console on their iPad, or iPhone, or whatever device they choose. And from that console, from that point on, the device will be registered and onboarded, and its initial state will be given to it from the cloud.
And if the customer has some predefined state from their previous deployment model already saved with the data console, they don't even need to do that. We'll just take that state, apply it, and induct the device into the fleet. That's just one example. It's extremely simple: plug in the power cable, plug in the network cable, and the data center operations manager just walks out. After that, you could be on the beach, you could be at home, you could be driving in a car. And I advise people not to fiddle with their iPhones when they're driving, but you could do it if you wanted, right? So that's just one part, from a deployment methodology perspective. Now, the second thing that Sandeep and I often bounce ideas on is provisioning of a workload. It's like a science these days. Is this array going to be able to absorb my workload? Is the latency gonna go south? Does this workload's latency profile match this particular device in my data center? All of this is extremely manual. And if you talk to any of the customers, or even analysts, deploying a workload is a massive challenge. It's guesswork that you have to model and basically see how it works out. With HPE InfoSight, we're collecting hundreds of millions of data points from all these devices. So now we harness that and present it back to the customer in a very simple manner, so that we can model on their behalf through the data services console, which is now workload-aware. You just describe your workload: hey, I'm gonna need this many IOPS, and by the way, this happens to be my application. And that's it. On the back end, because we're managing your infrastructure, the cloud console understands your entire fleet. We are seeing the statistics and the telemetry coming off of your systems. And because you have now described the workload for us, we can do that matching for you.
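The matching step Omer just described, stating a workload intent and letting the console pick a system from fleet telemetry, might be sketched roughly like this. The field names, thresholds and system IDs below are all hypothetical placeholders, not the actual InfoSight data model:

```python
# Hypothetical sketch of intent-based matching: the caller states an intent,
# and the console filters fleet telemetry for systems that can absorb it.

def pick_system(intent: dict, fleet_telemetry: list) -> str:
    """Return the id of the system best able to absorb the described workload."""
    candidates = [
        s for s in fleet_telemetry
        if s["free_iops"] >= intent["iops"]
        and s["p99_latency_ms"] <= intent["max_latency_ms"]
        and s["free_capacity_gib"] >= intent["capacity_gib"]
    ]
    if not candidates:
        raise RuntimeError("no system in the fleet satisfies this intent")
    # Prefer the system left with the most IOPS headroom after placement.
    return max(candidates, key=lambda s: s["free_iops"] - intent["iops"])["id"]

# Invented telemetry for two systems and one workload description.
fleet = [
    {"id": "midrange-01", "free_iops": 40_000, "p99_latency_ms": 1.2,
     "free_capacity_gib": 8_000},
    {"id": "mission-critical-01", "free_iops": 250_000, "p99_latency_ms": 0.3,
     "free_capacity_gib": 20_000},
]
intent = {"iops": 100_000, "max_latency_ms": 0.5, "capacity_gib": 2_000}
print(pick_system(intent, fleet))  # mission-critical-01
```

The design choice worth noticing is that the caller never names an array; the placement decision belongs entirely to the console's model of the fleet.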
And what intent-based provisioning does is: describe your workload in two or three clicks, or maybe two or three API constructs, and we'll do the provisioning, the deployment and the bring-up for you, on your behalf, on the right pieces of infrastructure that match it. And if you don't like our choices, you can manually change it as well. But from a provisioning perspective, a process that took days can now come down to a couple of minutes and a description. And lastly, global data management, distributed infrastructure from edge to cloud, invisible upgrades, only upgrading the right amount of infrastructure that needs the upgrade: all of that just comes rolling along with it. So those are some of the things that the data services console, as SaaS management at scale, allows you to do.

And actually, if I can just jump in and add a little bit to what Omer described, especially with intent-based provisioning: that's really bringing a paradigm shift to provisioning. It's shifting from LUN-centric to app-centric provisioning. And when you combine it with identity management and role-based access, what it means is that you're enabling self-service, on-demand provisioning of the underlying data infrastructure to accelerate app workload deployments. And you're eliminating guesswork and providing the ability to optimize service-level objectives.

Yeah, it sounds like you've really nailed that provisioning challenge in an elegant way. I've been saying for years, if your primary expertise is deploying logical unit numbers, you'd better find some other skills, because the day is coming when that's just going to get automated away. So that's cool. There's another issue that I'm sure you've thought about, but I wonder if you could address. You've got the cloud, and the definition of cloud is changing. The cloud is expanding to on-prem, on-prem is extending to the cloud, it's going out to the edge, it's going across clouds.
So security becomes a big issue; that threat surface is expanding. The operating model is changing. So how are you thinking about addressing those security concerns?

Excellent question, Dave. In today's modern world, almost every customer that I talk to has deployed some sort of cloud console; they're either customers of the hyperscalers, or, by and large, SaaS-based applications are pervasive across their base. And as you know, we were the first to introduce automatic telemetry management through HPE InfoSight. That's one of the largest storage SaaS services in production today, which we operate on behalf of our customers, and which, Dave, has about an 85% connectivity rate. So from that perspective, keeping customers' data secure and keeping customers' telemetry information secure, we're no stranger to that. Again, we follow all the security protocols that any cloud operational SaaS service would: reverse tunneling, firewall compliance, and security audit logs that are published to our customers and to their chief information security officers. So all of those, what I call crossing the T's and dotting the I's, we do that with security experts and security policies, for which each of our customers has a different set of rules. And we have a proper engagement model where we go through that particular audit process for our customers. Then secondly, Dave, the data services cloud console is actually built on a fundamental cloud deployment technology that is not new: Aruba Central, the management console from Aruba, which is also an HPE company. It's been deployed, and it's managing millions of access points in a SaaS framework for our customers. So the fundamental building blocks of the data services console, from a basic enablement perspective, come from the Aruba Central console.
And what we have done is taken those generic cloud-based SaaS services, built data- and storage-centric SaaS services on top of them, and made them available to our customers.

Yeah, I really like the Aruba example; you picked them up several years ago. And same thing with InfoSight, the way that you bring it to other parts of the portfolio. Those are really good signs to watch of successful acquisitions. All right, there's a lot here. I want to talk about the second part of the announcement. I know you guys are serious about branding, and there's a new product brand; maybe you could talk about that.

So again, delivering the cloud operational model is just the first piece, right? The second part of the announcement is delivering the cloud-native hardware infrastructure, which is extremely performant, to go along with this cloud operational model. So what we've done, Dave, in this announcement is we've announced HPE Alletra. This is our new brand for our cloud-native infrastructure to power your data, and its appliances go from core to edge to cloud, right? And what it does is it takes the cloud operational model, and this hardware is powered by that; it's completely wrapped around it. HPE Alletra is available in two models right now. The HPE Alletra 9000 is available for mission-critical workloads, for those high-intensity workloads with a 100% availability guarantee, where no failure is ever an option. And the HPE Alletra 6000 is available for general-purpose, business-critical workloads, generally addressing the mid-range of the storage market. Both of these systems are 100% NVMe, front and back, and they're powered by the same unified cloud management operational experience that the data services cloud console provides. And what it does is it allows our customers to simplify their deployment model.
It simplifies their management model and really allows them to focus on the context, the data and their app diversity, whereas data mobility, data connectivity and data management in a multi-cloud world are completely abstracted away.

Yeah, and Dave...

Go ahead, please.

Just to jump in: HPE Alletra, combined with data services cloud console, delivers a cloud experience that makes deploying and scaling application workloads as simple as flipping a switch.

Nice. It really does. And I'm very comfortable in saying: as with HPE InfoSight, where we were the first in the industry to put AI-based telemetry and support-enabled metrics to work, here, with the data services console and the hardware that goes along with it, we're completely transforming the storage ownership and storage management model. And for our customers, it's a seamless, non-disruptive upgrade, fully data-in-place, and they transform to a cloud operational model where they can manage their infrastructure wherever they are, through a complete consumer-grade SaaS console. It is, again, the first of its kind when you look at storage management, and storage management at scale.

And I like how you're emphasizing that management layer, but underneath, all the modern hardware technologies too, which is important, because the performance has got to be good. So now, can we bring this back again to the customers? What are the outcomes that this is going to enable for them?

So I think, Dave, the first and foremost thing is that as they scale their storage infrastructures, they don't have to think. It's really as simple as: send it to the data center, plug in the power cable, plug in the network cable, and up it comes. And from that point onwards, the lifecycle and device management aspects are completely abstracted by the data services console.
All they have to focus on is: I just have new capacity available to me, and when I have an application, the system will figure out for me where it needs to be deployed. So no more guesswork, no Excel sheets of capacity management, no chargeback models; none of that stuff is needed. And for customers that are looking to transform their applications, looking to refactor their applications into a hyperscaler model, or maybe transform from VMs to containers, all they need to think about and focus on is that. The data will just follow those workloads, from that perspective.

And Dave, just to add to Omer's response here: as I speak with customers, one other thing I'm hearing from IT is that the line of business really wants IT to deliver that agility of cloud, yet IT also has to deliver all of the enterprise reliability and availability, all of the data services. And what's fantastic here is that through this cloud operational model, IT can deliver the agility that line-of-business owners are looking for. At the same time, they have been under pressure to do a lot more with less. And through this agility, IT is able to get time back, to be able to focus more on strategic projects, and at the same time get time back to spend more time with their families. That's incredibly important.

Right. Well, I love the sort of mindset shift that I'm seeing from HPE. We're not talking about how much the box weighs, you know; we're talking about the customer experience. And that kind of leads me, Sandeep, to how this fits in. To me, I'm seeing the transformation before our eyes, but how does it fit into HPE's overall mission?

Well, our mission overall is to be the edge-to-cloud platform-as-a-service company, with HPE GreenLake being the key to delivering that cloud experience. And as Omer put it, to be able to deliver that cloud experience wherever the customer's data lives.
And today we're advancing HPE GreenLake with the as-a-service transformation of the HPE storage business to a software-defined cloud data services business overall. And for our customers, this translates to a cloud operational and ownership experience that unleashes their agility, their data and their innovation. So we're super excited.

Guys, I can tell you're excited. Thanks so much for coming on theCUBE and summarizing the announcements. Congratulations and best of luck to both of you, and to HPE and your customers.

Thank you, Dave, it was a pleasure.