Hello, and welcome back, everyone, to theCUBE's live and ongoing coverage of HPE Discover Barcelona. I'm your host, Rebecca Knight, along with my co-host and analyst, Rob Strechay. We are welcoming back to theCUBE Patrick Osborne. He is the SVP and GM of hybrid cloud and data storage at HPE. Welcome, direct from Boston, Massachusetts. Yes, the center of the universe. All of us, yeah, exactly. We're here in Barcelona. How is Boston operating without us? I don't know. I know, I know. Thank you for having me. It's great to be back. We heard Antonio Neri on the main stage this morning. There's a head-spinning number of announcements, a couple of new things from HPE storage as well that he hinted at, but we have you here to tell our viewers more about what you're announcing. Absolutely. So obviously AI is disrupting everything at this point, and it's a huge opportunity not only for the industry, but for our enterprise customers. We use AI all the time in our products to provide a differentiated product experience; if you're not doing that, you're going to be disrupted. But what we found is that the customers doing AI-native application development need a new level of performance and scale, a lot of attributes in the storage layer that are foundational for all of your data preparation, data analysis, and ultimately building, tuning, and inferencing models. So we made some announcements today around GreenLake for File Storage, an unstructured data platform delivered through our GreenLake cloud platform that provides a great level of performance and scale for our customers for these workloads. And how are you seeing the adoption of AI on that platform? How's that adoption going? Yeah, it's going really well. So we have a number of different customers at different points in their life cycle, right?
So when you think about AI systems and how you're going to deploy them on-prem, you have your developers and your data architects. We have customers that are just beginning their journey, so they're going to start with a set of systems that allow them to do tuning and inference, sort of kick the tires, and they'll start at the terabyte and petabyte scale. Then we have customers that are really well on their way to using models and doing inference and training. They're a little bit more advanced, so they're at the tens of petabytes. And then we have some of our largest customers here. We saw people providing GPUs as a service on stage, and we have people regularly coming to us asking about data sets in the hundreds of petabytes, and literally terabytes per second of throughput. So when you think about the maturity curve, most folks start small and begin their journey, and then we have folks who are fully at scale and really need that performance and scalability. Well, this morning we heard Antonio talking about sort of the three sets of customers in terms of what their needs are, and also, as you said, where they are in their journey. How do you think about these customers and approach them, in terms of working with them in custom ways, but also in ways that you can give them solutions off the shelf too? Absolutely. So I sort of bucketize our customers into two areas. Bucketize, I love it. Yeah, we're making new words here. It's a very Boston thing to do. So I've got customers who are modernizing their workloads. We think about themes: we talk about private cloud, we talk about modernization, bringing workloads to a hybrid deployment. And then we have all these new workloads, which are essentially based on a new set of application constructs, heavy on Kubernetes, containerized workloads, scale.
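Those throughput figures are easy to sanity-check with back-of-envelope arithmetic. The sketch below, with invented numbers rather than HPE sizing guidance, shows how periodic model checkpointing at large scale translates into the kind of sustained storage bandwidth Patrick describes:

```python
# Hypothetical back-of-envelope math, not HPE sizing guidance.
def required_throughput_gbs(checkpoint_tb: float, window_s: float) -> float:
    """Aggregate write throughput (GB/s) needed to land a model
    checkpoint of `checkpoint_tb` terabytes within `window_s` seconds."""
    return checkpoint_tb * 1000.0 / window_s

# e.g. a 20 TB checkpoint flushed during a 60-second pause between
# training steps needs ~333 GB/s of sustained write bandwidth.
print(round(required_throughput_gbs(20, 60), 1))
```

Scale the checkpoint size or shrink the window and the requirement quickly crosses into terabytes per second, which is why the largest customers lead with a bandwidth conversation rather than a capacity one.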
And for the customers in that area, these are customers struggling with operationalizing these workloads in the data center. The workloads are all very new, but they need reliability, availability, and performance; they need things like backup and data protection. Oftentimes we have customers that say, hey, I've got 40% growth in my Kubernetes environment, but I don't know how to map my application developers to how much storage they're using. So there are a lot of things that we solved in the Mode 1 application world that are now coming up very quickly in this AI-driven enterprise world as well. Yeah, I think one of the keys is the new term of platform engineering and how people are building it out; what was IT is now transforming into that. And we're seeing that there's a balance, and we've been talking about this, that there's kind of an equilibrium in where net new applications are going between cloud and on-prem, and it's almost a 50-50 split that we're seeing. One of the things that people always ask us about is, hey, you see the market, what about cost efficiency and security? What kind of answers do you give to your customers and prospects when they're looking at that, and looking at GreenLake for File and at AI workloads? Yeah, so definitely for AI workloads, the conversation usually starts with a data conversation, because the first step is creating a data lake and really doing a lot of data prep, understanding what you're doing. And so that's a bit of an efficiency question: I need to understand where my data is, how to tag it, how to understand it. Then when we start talking about that in the context of performance and scale, we start to understand what those parameters are. Do I have a throughput problem? Do I have a capacity problem? And some of the solutions that we talked about today address all those areas.
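The chargeback problem mentioned above, mapping application developers to the storage they consume, reduces to a simple per-namespace aggregation. This is a hypothetical sketch with made-up teams and sizes; in a real Kubernetes cluster the records would come from listing PersistentVolumeClaims through the API rather than a hard-coded list:

```python
from collections import defaultdict

# Invented sample records: (namespace, requested capacity in GiB).
# In practice these would be read from the cluster's PVC objects.
pvc_claims = [
    ("team-recsys", 500),
    ("team-recsys", 250),
    ("team-fraud", 1200),
    ("platform", 80),
]

def storage_by_namespace(claims):
    """Sum requested capacity per namespace for a chargeback report."""
    totals = defaultdict(int)
    for namespace, gib in claims:
        totals[namespace] += gib
    return dict(totals)

print(storage_by_namespace(pvc_claims))
```

Grouping by namespace works because platform teams typically give each application team its own namespace, so the namespace becomes the natural chargeback unit.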
So it's a very efficient architecture in that you can start off fairly small and scale into the petabyte range, not only from a capacity perspective but from a performance perspective. Many of these workloads are very different in the way the data is laid out. It's a lot of unstructured data; you're talking hundreds of millions, if not billions, of small files, constantly reading files and writing out checkpoints, so it's a very data-intensive workload across the entire AI pipeline. You have to make sure that you have a completely disaggregated, NVMe-based architecture, and we provide that. And then I think the interesting thing is around data provenance and security as well, right? That's a continued topic in this area. Yeah, I was going to say that whole part of it, the data sovereignty aspect, has to be a huge reason why people are looking at GreenLake for File and putting their AI workloads on there: hey, I can't go to the cloud with this data, or I don't want to, or it's intellectual property and I'm really worried about the security of it. Is that a lot of the conversations you see? Yeah, many of our mid-range and higher-end enterprise customers struggle with this on a daily basis. So from an adoption perspective, they want to use their data centers or a well-known co-location partner. We made some announcements last year for our private cloud with Equinix, for example: providing those types of solutions where you can have provenance over your data, but it's cloud-adjacent from a networking latency perspective, and you're able to use some of that tool chain. But I've met with a ton of customers this week who are bringing these workloads on-prem, because that's where the data is. So ultimately, at the end of the day, these AI data sets have a huge amount of gravity to them, and you want to bring the workloads to the data, and that's what we're providing with GreenLake for File.
But while giving them a cloud operational model, right? When it comes to AI, it sounds as though so many customers are wary of making a giant and expensive mistake in terms of how they deploy it. How do you walk through the decision tree with customers? You've already laid out the things that are on their minds in terms of where they need things to be and how they need things to work, but how do you sort of hold their hands and walk them through this process? So they talked a little bit today in the keynote about how we have a ton of expertise in advisory services, so we do that. We often work with our customers to show them the journey that we've taken ourselves: we use AIOps in our products all the time to make a better product experience, and we're on essentially our fifth version of the data lake and the AI and machine learning that we use in the products. So we often share some of our best practices. I think from a cost and stepping-in perspective, what we're here to provide is essentially a reference architecture, a system, the ability to take what they presented on stage and make that very easy for a customer to consume. The two personas that we like to talk to are essentially the AI container developer and whoever is the application and data architect, and they don't really want to care about infrastructure at the end of the day. They want to be able to understand their data and the outcomes they're driving toward, and get that intelligence. So it's our job to provide that in a very easy, packaged solution, all accessible through APIs. And then we offer it in a GreenLake fashion, right? So if you don't want to lay out a large amount of capex for GPUs, for storage, for the fabric, for the software, we can go through it in a stepped consumption model, with customers starting small, scaling up into the mid-range and then beyond, which is very popular. I think that's how they consume in the cloud anyways.
And I think when we talk about hybrid, and I talk about Antonio being very early into hybrid and really leaning into it, it is about that consumption model and how I get to that and how I normalize it to be more of an ARR, I'll get that right today, model versus a subscription model. How are you seeing that, and how has the momentum also been on the other side with the Alletra? I mean, is it Alletra? Alletra. I don't know why, I can't get that name right either, but how's the momentum? Yeah, it's been great. So the model validates itself. We're trying to bring the predictable to the unpredictable, really. And that happens a lot in storage in general, especially in the context of AI as a form of workload being deployed on some of these new application paradigms driven by Kubernetes; there are a number of different parts to it, but it's super unpredictable. And so for us, storage as a service has been super popular. We've grown that, in essence, 45% year over year. The customers really like that model, where they can provide a sort of minimum commitment level and then be able to burst and expand depending on their needs. Because, for example, as you say, they dip their toes in the water of AI and all of a sudden now you've got a larger model you're dealing with and a lot of results; machine learning produces quite a bit of data. And so that unpredictable piece they can deal with sort of ad hoc during their journey. And then there's also providing all of the, I guess you'd say, unsexy infrastructure capabilities around that: how do I DR it? How do I back it up? How do I map consumption of the developers to how much storage they're using? I mean, these are classic problems that need to get solved for our customers. Your colleague just called those the dirty jobs of AI. Yes, exactly. The ones no one wants to do, but that really need to get done. 100%, yeah.
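The commit-and-burst consumption model described above comes down to simple arithmetic: pay a floor for committed capacity, then a separate rate for anything used beyond it. The rates and tiers below are invented for illustration, not actual GreenLake pricing:

```python
# Hypothetical sketch of a storage-as-a-service bill. Rates invented.
def monthly_charge(used_tib: float, commit_tib: float,
                   commit_rate: float, burst_rate: float) -> float:
    """Charge = committed capacity at the commit rate, plus any
    usage above the commitment billed at the (higher) burst rate."""
    burst = max(0.0, used_tib - commit_tib)
    return commit_tib * commit_rate + burst * burst_rate

# Under the commitment: pay the floor. Over it: floor plus burst.
print(monthly_charge(80, 100, 10.0, 15.0))   # 1000.0
print(monthly_charge(140, 100, 10.0, 15.0))  # 1600.0
```

The floor gives the provider predictable revenue; the burst rate lets the customer absorb an unpredictable AI workload without renegotiating a contract, which is the "predictable to the unpredictable" trade Patrick describes.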
And it's often the case too that when we talk to our customers, these environments become business- and mission-critical really fast, right? They move from sandbox and testing into being a full-blown part of their enterprise workflow. It's very, very important, so we need to support them as such and give them the ability to scale. Yeah, I think even Antonio touched on it earlier today. It also becomes a component of corporate governance, because if you don't know where your data is coming from, how the models are built, which data you then trained the models on, and you can't prove that, especially over here, if you're using anything with PII to train a model, you're quickly going to run afoul of GDPR and other regulations here locally. How are you seeing people lean in toward you and say, hey, this is the performance we need in these different places? Because to your point, with being cloud-adjacent or being more toward the edge, what are some of the demands you're seeing in that more hybrid model? Yeah, so a couple of things. I think one of the areas where we see a lot of growth is definitely on the edge, right? There are more applications being deployed in places outside of the traditional data center, colo, and cloud, and they're doing some really interesting things. We have a number of customers here talking today about retail, grocery stores, new fan experiences; all of those use modern containerized applications. They use video for inference, security, and buying patterns, and in the grocery store they do real-time, just-in-time inventory and analytics. So there are a lot of really complex things happening outside. You've got that edge-to-core model, which is really interesting.
And then, from a hybrid perspective, I think one of the big things for us is being able to allow customers to place the data where they need it, depending on the workloads. And the workloads are definitely, in the case of AI, coming to the data. So allowing our customers to choose whether they want that on-prem or up in the cloud is super important. That's why we provide all of these data services, abstracted from the physical storage, to be able to provide movement and placement of data. And that's been very successful for us. Sustainability is a real theme at this conference. We heard Antonio talk about it, and a lot of people on theCUBE have talked about it. AI is an energy-hungry technology. How does your organization, HPE Storage, approach this issue? And are you hearing this more and more from customers as a key concern as they deploy AI? Yeah, so especially here in the EMEA theater, right? I get the luxury of being an exec sponsor for a number of customers, both in specific sectors like financial services and definitely in the service provider market, and they are very, very concerned with sustainability. It's usually in the first one to two pages of their quarterly or yearly perspective. So we need to give them visibility into our infrastructure and how it's operating: power, cooling. We have our own sustainability dashboard that we provide for all of our storage and data management products. We also have a ton of observability, right? Think about OpsRamp and providing that level of observability, so they can gain data out of that and understand how to map application usage and data usage to the infrastructure.
And then there's a lot of onus on us to make sure that the upstream suppliers we work with are providing sustainable products, not only in the way they're manufactured but in how they're recycled. Being in the Boston area, one of HPE's big offices there handles the circular economy, the recycling of equipment. And now we have customers asking us for a seven-to-ten-year TCO on storage. That means making sure that we come in and sustainably upgrade equipment and recycle it in a responsible manner. So it's all part of a big ecosystem that HPE provides for our customers. Yeah, I think one of the big keys is how you bring it all together. And I think one of the other things that people worry about, especially in AI, is data protection. How are you addressing that for those workloads and being able to protect them? Yeah, so a couple of the things our customers really like about the GreenLake cloud platform, specifically the data services that we provide: it'll give you a view when you drop into the cloud console, because at the end of the day, people sometimes think about GreenLake as a consumption model, but for me, on the product side, it's a product experience. This is our hybrid cloud console, so you start at greenlake.hpe.com and then you drop right into your data services cloud console. And for every workload that you provision, all of the storage and file shares, any type of application provisioning that happens, we automatically build in all this templating of SLAs. What's my RTO and RPO objective? How fast do I want to back up? Is this a platinum-level app? Is this a silver-level app? So when those are deployed, that gets automatically applied, and then we can view the adherence to that SLA, and it's all audited.
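The SLA templating described here, protection tiers like platinum and silver carrying recovery objectives that get applied automatically and audited, can be sketched as a small policy table plus an adherence check. The tier values below are hypothetical, not HPE defaults:

```python
# Hypothetical protection tiers; RPO/RTO values invented for illustration.
SLA_TIERS = {
    "platinum": {"rpo_minutes": 15, "rto_minutes": 60},
    "silver":   {"rpo_minutes": 240, "rto_minutes": 480},
}

def meets_rpo(tier: str, minutes_since_last_backup: int) -> bool:
    """A workload adheres to its RPO if its newest recovery point is
    no older than the tier allows."""
    return minutes_since_last_backup <= SLA_TIERS[tier]["rpo_minutes"]

print(meets_rpo("platinum", 10))   # True: backed up 10 min ago, within 15
print(meets_rpo("silver", 300))    # False: 300 min old exceeds the 240 limit
```

Attaching the tier at provisioning time, rather than per workload by hand, is what makes the adherence check auditable: every provisioned share has a declared objective it can be measured against.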
So when you talk about provenance and making sure you have a data signature, we can also audit and show customers that everything is being backed up and is recoverable in the case of a disaster, whether you have a disruption on-prem or a ransomware issue. All of these are services that we provide in the platform that customers can set up; it's easy to use and it's super automated. Excellent. Well, Patrick, thank you so much. Well, I was going to say, next time, if we're in Spain again, we'll make you do it in Spanish, because I know you used to live here as well. So I did, I did. That'll be my next trick. Very cool, yes. We will absolutely do that. Yes, you can count on it. Patrick, thank you so much for coming on. Thank you for having me. Thank you for returning to theCUBE. I'm Rebecca Knight, for Rob Strechay. Stay tuned for more of theCUBE's live coverage of HPE Discover Barcelona. We'll be back right after this. You're watching theCUBE, the leader in high-tech enterprise coverage.