Welcome back, everyone, to the next-generation evolution of storage. I'm John Furrier, host of theCUBE. We're here with Pure Storage; Prakash is in the house. CUBE alumni, general manager of the Digital Experience business unit. Great to see you. This is like the seventh time on theCUBE. Welcome into the studio. Thanks for coming in. Yeah, thanks for having me back. The title of this series is next-generation storage, the evolution of storage, but basically the waves are coming in, one after another. The tsunami of more data is not stopping. This is a key part of how companies are re-architecting. User experiences are changing, the apps are changing, and cloud continues to grow, which means that consumption, how people buy technology and consume services, is right there. You're heading up a business unit that's doing the as-a-service model for storage. Explain your business unit, because you have a really successful and growing product and we'll get into it. But explain what you guys do. Yeah, we're building out storage as a service. A lot of our customers are using it as a distributed cloud. If you think about storage, traditionally it was: I buy a box, use a box, right? And when the box is old, buy another box. Well, there's a lot of inefficiency and waste that goes into that. So if we apply cloud-like concepts to it, what if you could get the cloud wherever you're at? Because applications sit everywhere, whether it's on-premises or in the public cloud, and you should be able to get that cloud experience anywhere. So we deploy a storage endpoint, the way we like to think about it. In the customer's data center, they don't buy the gear. We guarantee performance and capacity SLAs. They can do reserve commits, pay-as-you-go on demand, that whole type of model. But they don't have to worry about managing, running, or operating the assets or asset life cycles.
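The reserve-commit plus pay-as-you-go model described here can be sketched as a simple billing calculation. This is an illustrative sketch only; the function name, rates, and tiers are hypothetical, not Pure Storage's actual pricing.

```python
# Hypothetical sketch of a "reserve commit plus pay-as-you-go" bill.
# All names and rates are illustrative, not actual vendor pricing.

def monthly_bill(used_tib: float, reserve_tib: float,
                 reserve_rate: float, on_demand_rate: float) -> float:
    """Charge the reserve commit in full; bill usage above it on demand."""
    committed = reserve_tib * reserve_rate
    overage_tib = max(0.0, used_tib - reserve_tib)
    return committed + overage_tib * on_demand_rate

# A customer commits to 100 TiB at $10/TiB and bursts to 130 TiB,
# paying $15/TiB for the 30 TiB of on-demand overage.
print(monthly_bill(130, 100, 10.0, 15.0))  # 100*10 + 30*15 = 1450.0
```

The point of the model is that the committed floor is predictable while bursts above it are billed only when they happen.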
They can just use storage and get the benefits of the cloud operating model and consumption model wherever they sit. So it's a dream scenario, basically, if you're a customer, because storage is gear. You've got to buy stuff, deploy it, connect it. In the past it never worked this way because it's hard, right? Things break and you've got to come in and fix them, you've got to migrate, a spinning disk would die and you've got to swap it out. Some systems were tied to each other. AJ just talked about some of the challenges between disks and controllers, but it's evolved with flash. You guys have an advantage. Customers want to consume this way. They don't have to take a risk; they can consume like cloud, pay for what they get. Why are you guys successful? What makes it so unique that you guys can pull this off? Well, I wish I'd invented it, but I've only been with the company five years. About 13 years ago, when the company started, some decisions were made architecturally to build this concept called an Evergreen architecture, which means: what if you could build a system that never got old? Usually when you buy an asset, it ages out, right? What if you could build a fountain of youth where that same asset would be newer 13 years later? The way you have to do that is you monitor every component, and anytime a component wears, you can swap that component with no degradation to performance, no data migration, and no disruption to the customer. That concept was built into our technology. So when we decided to build the storage-as-a-service business, we realized we had a unique advantage, in that we can actually deliver a service that, just like a SaaS service, just like your Salesforce CRM, gives you new features all the time. We can do the same thing in storage.
Your hardware gets better over time, your software gets better over time, your security gets better over time, right? So that's the concept that we bring to it, and we're unique because of this Evergreen architecture we have. So the efficiency and flexibility are there. You get that easily. Customers pay for what they use. What are some of the questions that might come up around security, mean time between failure, the normal stuff that people talk about in storage? Well, think about it this way. There was a point in time in storage where everyone said, well, the network will protect me. As long as the network's secure, everything's fine, right? And once storage is working, don't touch it, right? What would happen is people would say, okay, I'm only going to patch my storage once a year, during change-control windows, because it was clunky and cumbersome. In this day and age, in a week there are like seven to ten major Linux vulnerabilities, and every storage operating system is built on Linux. What do you think is going to happen, right? So imagine that: if you apply SaaS-based security principles, you probably need to update your storage system daily, right? Really think about it, to secure your environment. So are customers going to keep up with the operational overhead to do that? Probably not. So what we do now, because our Evergreen architecture allows us to update components non-disruptively, and that applies to our software stack as well, is software upgrades with no performance impact or customer downtime. We can just push updates to those storage endpoints directly from Pure1, our cloud management plane, and customers can benefit from rapid software innovation cycles. And we've changed our software release cycle to actually ship monthly as well.
So our Purity feature releases are coming out monthly, where customers are actually getting new capabilities monthly. I wish I'd had more time with AJ, but one of the things about the software stack: you're seeing that as an advantage in all the hot areas, whether it's GPUs or GPU clusters; the software stack to build that developer and/or agility angle is huge. Talk about your software stack and why that's so important to the Evergreen model and also to getting new features. I'm sure AI is going to have some unique things around inference and training, making data available, controlling data. I'm sure there's going to need to be an upgrade on the stack for that piece. Well, that's interesting. Probably in the next few months, we're doing a Purity software update to introduce GPUDirect support for FlashBlade, right? That's it: okay, now we support GPUDirect. Those are the types of things we can bring. But that's the key: you guys are just pushing software updates. Is there not a lot of hardware involved, or is there hardware swapping as it develops? The good news is we build our hardware where the hardware life cycles, typically with Evergreen, have about a two-to-three-year generational life cycle. There's always new memory, new DRAM, new NAND flash, et cetera, that has this kind of two-to-three-year tick-tock evolution. The good news is people don't have to wait, because as it becomes available, we can swap it in to continually improve your energy efficiency and density. Because you're phoning home, you're getting a sense of it. So you guys are getting ahead on the predictive side. So that follows its own innovation cycle, and customers don't have to worry about, hey, I'm on an old piece of hardware or an old piece of software.
They both get better independently over time, and the security updates also get better over time because we're constantly pushing those. So that's kind of a layer, and I know you brought up AI. What's fascinating with this type of approach is, obviously, people talk about GPUs: burn GPUs, let's go, get me more GPUs. But how do you actually get a good outcome? We've been talking about this since statistics and predictive analytics were hot, before AI was hot: there are only two vectors, right? More data or more compute to get a better answer. The more data you input, the better your prediction model will be; the more simulation you can run against a data set, the better your answer will be. Those are the two vectors you're playing with. Now, if you can't get the data into the compute fast enough, you're going to have a problem; enter flash. I don't know how you do any of this without flash. You're going to just have a bunch of GPUs with a lot of horsepower that are bottlenecked by the bandwidth of disk. That aside, the next element is the optimization of enterprise data: my internal data, external public internet data. You've got to cleanse the data, you've got to bring it together. And if you've created storage silos, and we see storage as a very fragmented market space today, where these are my archive systems, these are my mission-critical application systems, these are my analytics and observability platforms, and that type of thing, then bringing everything together for AI will require data consolidation. So storage fragmentation is the enemy, and if you can, go ahead and consolidate. For AI, I mean, horizontal is the better play. AJ mentioned that earlier. That is critical. What's available, what's addressable?
AI is based on data quality and availability. I mean, high availability, highly available: these are storage terms applied to AI now. So AI and storage are now symbiotic in their relationship, and that connection is going to get even tighter. It's the enabler that allows you to do that. So how are customers thinking about that? Take us through the subscription. Okay, I'm a subscriber. I'm using the solution. Thank you very much. It fits my budget, I use it. Now I'm in the AI planning phase. How does Pure help me? Would I just subscribe to an AI module? Let's say you're already on Evergreen//One. Well, you're already on a model where, as you scale, you don't even need to buy more. You can actually just start using. And because we're the vendor managing the hardware that sits at that endpoint, we always maintain about a 20 to 25% buffer of headroom. So if you're using more, we're always landing more hardware than you actually need, so you have the ability to grow as you need. And in the consumption model, you'll have volatility, right? You might say, okay, I'm gonna spike and do a big training run, and I'll just pay on demand for that because that's not my steady run rate. In the consumption model, you're not buying anything. You're not owning anything. So for AI, you can just say, hey, this month's gonna be hot. It's fine, because I've built my training models. Now that I'm running at a steady state around inference, that's where I'm gonna set my reserve commits, and off I go, right? So that model allows customers to get started immediately with AI, just jump into it and go. And obviously, we haven't talked a lot about it, but just like many years ago in cloud, people were talking about, okay, I'm going to the cloud. And then people were like, well, maybe I'm coming back, because it's too expensive.
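The 20-25% headroom buffer described here can be sketched as simple capacity-planning logic: the vendor keeps enough installed hardware ahead of usage that growth never waits on a procurement cycle. The function and threshold below are a hypothetical illustration, not an actual vendor algorithm.

```python
# Sketch of a headroom policy: keep installed capacity at least
# (1 + buffer) times current usage. Thresholds are illustrative.

def capacity_to_ship(used_tib: float, installed_tib: float,
                     target_buffer: float = 0.25) -> float:
    """Return extra capacity (TiB) to land so installed >= used * (1 + buffer)."""
    needed = used_tib * (1.0 + target_buffer)
    return max(0.0, needed - installed_tib)

print(capacity_to_ship(100, 120))  # 100*1.25 - 120 = 5.0 TiB to ship
print(capacity_to_ship(100, 130))  # already above the buffer -> 0.0
```

Because the buffer is maintained continuously from telemetry, a usage spike lands on hardware that is already on the floor.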
There's this repatriation trend because people realized running full tilt was expensive. AI is very powerful, but it also can be very expensive. Is that repatriation, or is that net-new use cases that they wanna have on premises, because that's where their data is, that's proprietary? I think it's both, right? We see both happening, just because of cost models. And in AI, just like there was DevOps and then FinOps for cloud cost management, I think there's gonna be AI FinOps as a market segment, where it's like, okay, I know AI can do that, but am I willing to pay a billion dollars to do that? There's a cost to doing this. Or if it goes viral or something hits, am I ready for it? What does the cost envelope look like? And by the way, is that what I want, and am I prepared for it? So again, I think cost is huge; we've seen that a lot. I have to ask, because this comes up a lot from customers that I talk to. They say: I'm experimenting on-prem, and maybe I'll do a little bit in the cloud, but once I'm up and running, I want my data to be here on premises. I want my GPU clusters there. I want bare metal, a GPU cluster, and my storage on premises; that's the use case today from a development standpoint. What do you guys see? So I see all of those types of scenarios happening. Now, this hybrid multi-cloud world we always talk about is the perfect scenario, but it does require some deliberate choices. One, you need consistency between your on-premises and cloud environments, across operations and technology. And we have that now with our Cloud Block Store product. You can run Cloud Block Store in the public cloud and a FlashArray on-prem.
We've even had customers run Cloud Block Store on AWS and Azure, with Oracle running on one and Oracle running on the other, active-active, right? So we provide replication technology across hyperscalers today. All of those things exist from a storage standpoint. But then you also need to think about your application deployment, which is typically a VM or a container, right? VMware created that common space for VMs that can be deployed anywhere, and Kubernetes has pretty much become the standard for how you can deploy in a cloud-agnostic way. And with that, we have our Portworx capabilities that allow us to do that. So where I see this going is: if you think services and consumption first, then layer on principles of flexibility and the cloud operating model, then make technology choices that are more container-first, you're gonna get to the point where you're ready to give your business agility. Prakash, I think that's really a key point. And I would just say we're seeing some validation in the marketplace, because you mentioned Portworx. Even as Kubernetes becomes like Linux, the conversation has shifted from how do you stand up Kubernetes clusters to, okay, what does the end-to-end workflow look like? What does my platform engineering look like? Which is essentially a pretext to app developers, who are gonna need to store stuff on storage. Okay, so I see that. The question I have for you is: okay, we believe that to be true. So let's go to the next level. Pure has always been a product leadership company: good innovation strategy, investment in R&D, good expertise in flash. You mentioned all the Evergreen stuff, and AJ did as well. Okay, great. But I want to subscribe to a platform now, because remember, I've got platform engineering conversations happening, so assume best-of-breed is table stakes. Check, you guys have done that.
What does the product look like at a platform level? If I get this holistic view, is there a subscription for that, or is it a collection of subscriptions? How does the customer motion look for you guys when they're thinking, okay, I'm gonna start looking at my entire end-to-end process? I've got GenAI coming, I'm gonna have data engineers soon, I've got platform engineering in full throttle, Kubernetes is now under the covers, and I'm looking at pipelines and cloud-native services. The way we think about everything is: in storage, you care about performance, capacity, availability, and resiliency, right? We'll give you SLAs for all of the above. Meaning, what service level do you want? So the platform is the service level at that point, right? Because should you really be worried about what hardware to deploy for the service level? No, you just need a service level. And we've instrumented that service level so customers actually have visibility into that SLA right in the product. It's monitored, and if we miss the SLA, there's a service credit built into the product. Two, we've enhanced those things: we have a zero-data-migration guarantee. We're not gonna bait and switch, saying, okay, here's our new hardware platform and it doesn't qualify, or whatever. We're not doing those things that a lot of traditional storage vendors do. Some guarantees aren't really guarantees. Yeah, you know: oh, here, I'm gonna take this old product and rebrand it as a new product so it doesn't apply under the terms and conditions. We don't do any of that. So that's the second thing. Third, because we're running these services within customers' data centers, not only are we giving you SLAs, but whatever power and rack space we use, we actually pay for. So customers know it's a real service, because you're gonna treat it just like you treat a cloud, right?
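The "service credit built into the product" idea can be sketched as a small function: if measured availability falls below the committed service level, a credit is applied to that month's bill. The credit tiers below are hypothetical, not Pure's actual contract terms.

```python
# Hedged sketch of an SLA service credit. The tier percentages are
# made up for illustration and are not any vendor's real terms.

def service_credit(measured_availability: float, sla: float,
                   monthly_fee: float) -> float:
    """Return the credit owed for the month (0 if the SLA was met)."""
    if measured_availability >= sla:
        return 0.0
    shortfall = sla - measured_availability
    # Illustrative tiers: bigger misses earn bigger credits.
    if shortfall < 0.001:      # missed by less than 0.1%
        rate = 0.10
    elif shortfall < 0.01:     # missed by less than 1%
        rate = 0.25
    else:
        rate = 0.50
    return monthly_fee * rate

print(service_credit(0.9995, 0.9999, 10_000))  # small miss -> 1000.0 credit
print(service_credit(0.9999, 0.9999, 10_000))  # SLA met -> 0.0
```

Instrumenting this in-product, as described, means the customer sees the same measured availability the credit is computed from.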
So you see yourselves as the easy button for platform engineers, to just plug in what they need, because they're thinking about the developers who are going to need to store at scale. So it's not like the developers will call them and say, I want more storage. Previously, developers just needed to provision. They're like, okay, I just want to do things, because money is no object, right? It's a good experience to provide, but money is an object. I run an engineering team right now, with platform engineers. Well, so it's funny. I have this guy, his name is Vivi, on our team, and his title is DevSecFinTestOps. He's a kingdom builder. Yeah, so I was like, wait a minute, why did we create a role called DevSecFinTestOps? If I got that right, I think that's his title. The head of my developer platform engineering, who runs systems, is also responsible for my cloud costs. Why is that? Because when he provisions a service for developers, he needs a rate. So by tying everything to an SLA, you actually allow developers to say, when I'm making this provisioning request for storage, do I want the $3 version or the 50-cent version, right? And they can tie it to the SLAs for their applications. So it puts responsibility for the outcome directly in the hands of developers. I think that's the key to that "money is no object" quote, because if the platform engineer does his job, it will seem to the developer like storage is just there. That's the job of the platform engineer, to your point. Yeah, before it used to be: oh, let's go build it and then we'll optimize it later. That's yesteryear. Modern developers actually need rates in products. Prakash, it's always great to have you on theCUBE. What's your vision for your business unit? What's next? You're always working on some new things. You've got the generative AI wave coming; more storage is still needed.
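The "$3 version or the 50-cent version" idea, where every provisioning request carries its own rate and SLA, can be sketched as a rate card lookup. Tier names, rates, and the record shape below are all hypothetical.

```python
# Sketch of SLA-tiered provisioning: the developer picks a tier, and the
# request carries its own rate and SLA. All values are illustrative.

RATE_CARD = {
    # tier: (monthly $/TiB, availability SLA)
    "premium":  (3.00, 0.9999),
    "standard": (1.50, 0.999),
    "economy":  (0.50, 0.99),
}

def provision(app: str, tib: int, tier: str) -> dict:
    """Return a provisioning record that carries its own cost and SLA."""
    rate, sla = RATE_CARD[tier]
    return {"app": app, "tib": tib, "tier": tier,
            "sla": sla, "monthly_cost": round(rate * tib, 2)}

req = provision("checkout-db", 20, "premium")
print(req["monthly_cost"])  # 20 TiB * $3.00 = 60.0
```

The design point is the one made in the interview: pricing lives in the provisioning path, so the developer chooses the cost envelope at request time instead of discovering it on a bill later.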
It's going to be different. We've been saying on theCUBE that the script will flip on data management as scale kicks in and you've got automation. Automation basically is what AI is doing: you're automating stuff, generating things. That generative AI starts doing work, whether it's CodeWhisperer on Amazon or Copilots every place else. You can have a lot more augmentation for the humans to help provision and manage all that good stuff. What's your vision for your effort? Yeah, it's interesting, because I have one for customers and then one for our own internal team. For our own internal team: running a service has always been about SREs. Generative AI can replace SREs; I'm pretty convinced of that. Building and training generative AI models to replace SREs for running and operating a remotely distributed service just moves more capacity into innovation on the development side. And then, as we're developing for customers: previously, people used to think about application-specific storage. Now you've got general-purpose storage that can do a lot, but with where the technology space is going, where we think in a few years we'll have a 300-terabyte all-flash drive in a similar form factor to this, at that point you'll need to start doing micro-application understanding. And I do think these service SLAs will become more application-aware, where you can say: let's apply these retention policies and SLA policies to these applications. And the storage system itself will be aware of which application is working on it and optimize the way it interfaces, the block sizes it writes, and even extend the life of the media based on the application type. So I think it's fascinating that, for the new service economy, storage will have to become more application- and context-aware.
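The application-aware SLA idea sketched in this vision could look like a policy table: the system maps a detected application type to retention, tuning, and service-level settings instead of an admin hand-picking them. Everything below, application types, policy fields, and values, is speculative illustration of the concept, not a real product interface.

```python
# Speculative sketch of application-aware storage policy. All application
# types and policy values here are hypothetical.

POLICIES = {
    "oltp-database": {"retention_days": 35,   "block_size_kib": 8,    "sla": 0.9999},
    "analytics":     {"retention_days": 90,   "block_size_kib": 512,  "sla": 0.999},
    "archive":       {"retention_days": 2555, "block_size_kib": 1024, "sla": 0.99},
}

def policy_for(app_type: str) -> dict:
    """Look up the storage policy for a detected application type."""
    # Fall back to a conservative default for unknown applications.
    return POLICIES.get(app_type, {"retention_days": 30,
                                   "block_size_kib": 64, "sla": 0.999})

print(policy_for("oltp-database")["block_size_kib"])  # 8
print(policy_for("unknown-app")["retention_days"])    # 30
```

This mirrors the claim in the interview: the workload type, not the hardware, becomes the unit the SLA attaches to.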
And I think, to your point, to make that happen, you've gotta know the underlying data, because that's what feeds the AI right there. The data needs to know exactly what to do, when and where, and that's going to be driven by the application. Hence, the developer is going to have to be data-savvy: not just know there's a database out there, but know a lot about it, or rely on systems that can treat data with intelligence. Yeah, every developer starts with observability nowadays. Great to see you. Thanks for coming on theCUBE and on our next-generation storage series; appreciate it. Thank you. Okay, we'll be back with more on this next generation of storage after this short break. I'm John Furrier, host of theCUBE. Thanks for watching.