Welcome to this special CUBE presentation, Next Generation Storage Solutions. I'm John Furrier, host of theCUBE. From the early days of magnetic tapes and floppy disks to the advent of hard drives, solid state drives and beyond, this session series offers an exploration of storage technology evolution. We'll be spotlighting the key milestones such as cloud storage, distributed systems, and the integration of artificial intelligence, discussing their impact and demonstrating how businesses can leverage these innovations to enhance efficiency and competitiveness. Moreover, we'll delve into critical aspects of data security, exploring new protection techniques, modern operations, storage automation, and key sustainability focuses. In a moment, we'll be joined by AJ Singh, the Chief Product Officer of Pure Storage. You're watching theCUBE, the leader in emerging tech coverage. Here's a really simple idea. What if your data storage didn't require constant upgrades, migrations, and disruptions? Instead of the same tedious routine, your system is always updating and modernizing and scaling with your business, so you can focus on more important things. And with no product end of life, it can also help the planet by using 85% less energy. That's just what we did. Pure Storage, sustainable data storage, pure and simple. Visit purestorage.com/simple to learn more. Welcome back everyone to the next generation of storage. I'm John Furrier, host of theCUBE. We're here in our Palo Alto studio with AJ Singh, Chief Product Officer with Pure Storage. AJ, great to see you. Great to see you too, John. Thanks for coming in. Yes, I'm happy to be here. So you're the Chief Product Officer at Pure. You guys have had an incredible run. Congratulations to you and the company. Thank you. We've been following you guys from day one when Flash first came out.
You know, really kind of going against the grain at that time, but now Flash has crossed over on economics, sustainability, and obviously it's the next-gen mechanism for storing data. But now there's more to it. You've got the keys to the kingdom over there at Pure Storage as the product lead. As you look at what's happening in the market, what is the storage market evolving to? We're seeing innovation happening in storage. Again, it never goes away, because you've got to store the data somewhere. But it's evolving very quickly, certainly with generative AI. Absolutely, John. I mean, if you think about storage, especially the unstructured data side of it, it's truly exploding. In 2023, roughly 120 zettabytes of data were put out there, growing at roughly 25% a year. So that's 30 to 40 zettabytes added every year. On the other hand, if you think about the budgets on the storage teams, the budgets are not quite keeping pace. So what ends up happening is a lot of storage teams today end up on this treadmill of just keeping the lights on, keeping storage going. And so they're dealing with these issues of, what I would say, still mostly a legacy mindset, legacy storage. Flash has traditionally penetrated the high-end, high-performance side of the marketplace. But now Flash is truly penetrating every corner of storage other than tape. Across all tiers of disk storage, Flash is here to stay. What's the main difference between legacy storage and Flash storage as you look at this next wave coming? Because there's a lot of architectural conversations happening, core, cloud, on-premise, edge, a lot of things are going on. What's the true difference right now?
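As a back-of-the-envelope check, the growth figures quoted here (roughly 120 zettabytes in 2023, growing about 25% a year) work out as follows. This is just an illustrative compounding sketch using the quoted numbers, not a forecast:

```python
# Illustrative compound-growth check on the figures quoted above:
# ~120 ZB of data in 2023, growing roughly 25% per year.
base_zb = 120.0   # zettabytes in 2023 (as quoted)
growth = 0.25     # ~25% annual growth (as quoted)

total = base_zb
for year in range(2023, 2028):
    added = total * growth  # data added in the coming year
    print(f"{year}: {total:.0f} ZB total, ~{added:.0f} ZB added next year")
    total += added
```

At 25% of 120 ZB, the first year alone adds about 30 ZB, which matches the "30 to 40 zettabytes added every year" figure as the base compounds.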
Yeah, I mean, if you think about it, the true difference really comes down to this: every five years with legacy storage, you've got to go do a forklift upgrade. Typically that means you have to bring things down and do a data migration, so the reliability aspects are not there. A lot of times people are just used to that storage, but there's a better world out there. You can now run storage with 100% uptime for 10 years straight with Flash. And even with new technology, you can upgrade the technology with no downtime; we call it non-disruptive upgrades. So that's new compared to legacy. Flash storage, we feel, is also more reliable. Think about hard disk return rates, right? Hard disk return rates are typically two-ish percent. It's well documented; you can Google hard drive return rates, two-ish percent. SSD return rates, which is Flash, by the way, and I'll talk a little bit about how we differ from SSDs, are about 1%. With our DirectFlash Module, return rates are roughly 0.15 to 0.2%. So you can imagine: 10x more reliable compared to hard disk, 5x more reliable compared to SSDs. And anytime you have a failure, you've got to do a truck roll, somebody coming in, pulling out the failed drive, rebuilding onto a new drive. It's a lot of labor. So you don't need to put in that much labor, you don't have the same level of failure rates, and overall it's just a much better outcome for you as a consumer of enterprise storage. We were talking before he came on camera about some of the research we did at Wikibon, now called theCUBE Research, we just renamed it. That really was the beginning of Flash; we had made some forecasts. You have data now; you're seeing that kind of playing out.
Can you share what's going on in Flash storage, the price performance, the cost crossover? Because it's clearly going to have some sustainability advantage, but talk about some of the trends that you're seeing now, where it's come from, and where it's going. Absolutely. That was a great piece of work done by Wikibon, and if you recall, what that data showed was, if you look at dollar per terabyte of hard disk versus Flash: hard disk is in its seventh decade of innovation. It's done a great job, a real workhorse of the industry, but the amount of optimization you can eke out of it is now limited. Of course the industry is doing HAMR and MAMR and all that good stuff, and those curves are still declining, but on a log scale they're relatively flat. Flash, on the other hand, is more in its third decade of innovation, and as the Wikibon charts showed, it's a much steeper curve on price drops. It's driven by consumer Flash, you know, laptops, PCs, right? They've already gone to Flash, so there's tons of volume and tons of innovation that the Flash vendors are doing, and so that curve is declining much more steeply. And what the Wikibon chart showed was that in 2026, dollar per terabyte of SSD is going to get cheaper than hard disk. Now, we introduced in 2023 our first disk-takeout product, where we have a lower total cost of ownership with our DirectFlash versus SSD. So the DirectFlash Module curve is shifted three years to the left of that Wikibon SSD curve. And predominantly the biggest driver of the shift is, if you think about SSD for a minute: when the Flash vendors first came out, they were like, hey, I've got this great new technology, can I use it for storage? You know, no moving media and all of that. And people said, yeah, but you know, it's got a proprietary API, and all my software is written for disk, so if you can have it talk disk to me, I'll use it.
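The crossover argument above can be sketched numerically: a steeper decline curve eventually crosses below a flatter one, even from a higher starting price. The starting prices and decline rates below are invented for illustration, not Wikibon's actual data; only the shape of the argument is from the transcript:

```python
# Illustrative $/TB decline curves: HDD falling slowly, SSD falling
# faster, so SSD eventually crosses below HDD. The numbers here are
# made up for illustration, not the actual Wikibon forecast.
hdd_price, hdd_decline = 15.0, 0.05   # $/TB, ~5% cheaper each year
ssd_price, ssd_decline = 25.0, 0.20   # $/TB, ~20% cheaper each year

year = 2023
while ssd_price > hdd_price:
    hdd_price *= 1 - hdd_decline
    ssd_price *= 1 - ssd_decline
    year += 1
print(f"SSD $/TB drops below HDD around {year}")
```

With these illustrative rates the crossover lands in 2026, matching the year quoted; a curve shifted three years left (as claimed for DirectFlash) would already have crossed in 2023.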
So the Flash vendor went back and said, oh, here's a new module called a Solid State Disk. It talks disk northbound and Flash southbound, but what it is, is a complex translation engine in between, which has a CPU and uses a ton of memory. It's like somebody who wants to talk to me in English, but I first translate everything through Spanish and back into English. With DirectFlash, we said, let's eliminate this translation layer and just go straight to Flash. In some sense, a company like Pure Storage, we were born in NAND, and our engineers know NAND cold. They're looking at how you rewrite a cell, how you manage write amplification. In fact, all the NAND vendors come to us and say, hey, how's it going in the field with our NAND supply? The rest of the competition, the SSD consumers, have an arm's-length relationship to NAND. They're like, ah, NAND, I only want to consume it if it looks like disk. So they're really not close to NAND, and that's the reason our curve is left-shifted, you know? Yeah, you've got better leverage there. Yeah. Better technology. You guys have always been innovative. I want to get to the market share in a second, but I want to stay on the innovation thread for a second. From day one, it's always been an innovative culture. Even all-Flash at that time was controversial. It'll never work. It'll never get escape velocity. It kept getting there. As you look at the innovation now, what is the innovation strategy with Flash? Because I see a lot of tactical benefits. I see speed, cost, scale, check, check, check. Sustainability, a very strong story, as power is going to be the constraint. Yes, yes. As we see AI coming, it's only going to get more important. Yes, yes. You guys do well there. What's the innovation strategy for Pure Storage as you look at that next wave?
Yeah, so the way we think of innovation at Pure, compared to the rest of the industry, is that we tend to invest about 20% into R&D. So we are really betting on the next decade. We think multi-cloud storage is the next horizontal that is going to become an industry of its own. And we find a lot of our competition tends to be at single-digit percent investment in storage. We feel like for some of them, the cloud has been a little bit of a distraction: all the storage is going to the cloud, maybe we should be focusing on some other things and not on storage, let's put it into private equity mode, and all that stuff. We are convinced that storage, multi-cloud storage, Flash is the media transition that's happening, and we are leading that media transition. And Flash is needed not only in enterprises but even at hyperscalers. Hyperscalers also have a ton of hard disk, and they need to change that for all the sustainability, reliability, and less-labor-required types of reasons. So there's a ton of room for innovation there. But as you continue to innovate in this space, our view is to start, of course, with a simple portfolio. With a fresh portfolio built on Flash, you have the opportunity to do that. A lot of the industry has traditionally had low-end storage, medium-end storage, and high-end storage, crossed with all-disk storage, hybrid storage, and all-flash storage. That's a three-by-three complexity nightmare, right? Ask any customer: I've got bits and pieces of all of these, siloed data. It's not a pretty picture. But when you start fresh with Flash, you have the opportunity to make a break from the past. And so in our case, we have just one Purity operating system, one DirectFlash Module that goes into many different arrays, one scale-up architecture and one scale-out architecture under the hood, for latency and throughput applications, and one management system.
And this Purity runs on Azure, AWS, and on-prem, so it's Purity cloud in some sense. And then we have one cloud-native architecture with Portworx, and a cloud operating model with our Evergreen approach. The piece parts come together. In fact, we had a research meeting just the other day with our research team, Rob Strechay and the team, now with theCUBE Research, formerly Wikibon, as you mentioned. He pointed out that obviously the innovation strategy is strong, but you guys have the price performance there. So speeds and feeds are coming back. That's a whole other conversation. But there's a lot of TAM available. There are a lot of big markets still in storage. So tons of headroom. So your comment about private equity was a little bit tongue-in-cheek, but the companies that don't have innovation don't have a lot of prospects, and hence the private equity, meaning they have to kind of sell, basically. Exactly. I mean, I would always look at any company, and if they have anything associated with private equity, it means they're kind of giving up on innovation. I mean, hey, not to throw the private equity guys under the bus. No, they're arbitrage. Yeah, they're optimizing it for EBITDA, right? Lay people off and sell and get some dollars back. It's a financial transaction, not an innovation transaction. It's not an innovation transaction, right? On the other hand, we see a ton of opportunity in TAM. I mean, if you just look at the number of exabytes consumed in the industry, it's literally hundreds and hundreds of exabytes that are still on disk between hyperscalers and enterprises. It's a huge opportunity. We see it as a $50 to $60 billion TAM for us, for Flash to consume. And we feel like we are leading the disruption. And actually, talking about disruption for a minute, there are other parallels in the industry, right? Media-based disruption. Think about the audio industry, right?
We had LPs, CDs, the Walkman, then the iPod, then streaming media, and with every media shift, a new leader emerged, right? We had the Sony Walkman, then the Apple iPod, and then it was Pandora and Spotify on streaming media. The same thing happened on the video side, right? It was VHS, DVD, streaming media. In the VHS and DVD days, we had Blockbuster. And the functionality got better too; everything got better. Yeah, and now streaming media is so much better, right? So we feel like the same transition is happening in the storage industry as it goes from hard disk to Flash. And we're leading the charge there with a three-year lead. Let's get into some of the market share data. IDC seems to suggest you guys have been gaining share over the last decade. What's driving that? Obviously, besides the innovation strategy, is it the fact that you outlast the competition and you guys have the better product? What's the reason for the share increase? Yeah, actually the biggest driver of the share increase is that customers start to see that we deliver really compelling outcomes. At the end of the day, customers care about outcomes, and I'll elaborate on some of those. Typically we tend to be two to five X less power compared to other all-flash systems, and 10X less power and space compared to disk systems. So on power and space we are much better, because of DirectFlash; we don't have that inefficient translation layer sitting in the middle. But we are also 10X more reliable. We talked about return rates, right? All that extra translation layer fails all the time. So SSD return rates versus our return rates, we're five X better. As a result, we tend to have fewer failures and we're easier to operate, with less labor: typically five to 10X less labor to operate.
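Those return-rate ratios translate directly into drive swaps, which is where the labor claim comes from. A rough sketch using the rates quoted in the conversation (the fleet size is an arbitrary example, not a figure from the transcript):

```python
# Expected annual drive replacements for a fleet at the annual return
# rates quoted in the conversation. Fleet size is an arbitrary example.
fleet = 10_000  # drives (hypothetical fleet)
annual_return_rate = {
    "hard disk":   0.02,   # ~2% per year (as quoted)
    "SSD":         0.01,   # ~1% per year (as quoted)
    "DirectFlash": 0.002,  # ~0.15-0.2%; upper bound used here
}
for media, rate in annual_return_rate.items():
    print(f"{media:12s}: ~{fleet * rate:.0f} replacements/year")
```

On those rates, a 10,000-drive fleet means roughly 200 truck rolls a year on hard disk, 100 on SSD, and about 20 on DirectFlash, which is the 5x-to-10x labor gap being described.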
You talk to a customer, and they say, oh, that Pure Storage array, it kind of just runs. I actually forgot about it. I mean, those are some bold numbers. What were the numbers again? Yeah: two to five X less power, 10X more reliable, five to 10X less labor to operate. And a really consistent, simple product line, not that three-by-three matrix I talked about, right? So that's another reason. Well, operational simplicity is one of the things we're seeing with the cloud and the next wave of cloud. It's not yesterday's IT environment; platform engineering, a lot of the important words you mentioned are in there too. So how do you deliver on those claims? I mean, that's pretty significant performance. Even the sustainability number, which may seem the most modest, is still damn good compared to the 10X. Big numbers. How do you deliver those? Yeah, so there are really four of what we call our core sustainable advantages. What I mean by sustainable is that it's not something you can do with 10 engineers for six months. It's a 200-engineer, four-year kind of effort. The first one we talked about: DirectFlash. We don't have this inefficient translation layer that burns a ton of power and takes a lot of space. That's why we do two to five X better on power. The same translation layer, by the way, causes a bunch of outages, so it's not as reliable. And from an outcome standpoint, there's public data on the 1% return rate for SSDs. We're roughly 0.15 to 0.2%, so five X better on that. Again, the same translation layer. And on the labor to operate, there's our Evergreen model. We have engineered into our product that you can upgrade, and it's a thing of beauty to watch. You walk up to the array and you can just pull out the controller. The array is running; put in the new controller and the thing just keeps running. There's no migration.
So an array that somebody bought 10 years ago is, 10 years later, the latest and greatest technology. And over those 10 years they've had no downtime. So it's truly an always-on, running storage. That checks the box on the performance side, checks it on the operations side, and also on the power side too. Yes, it consumes five X less power. So that's big. Talk about that, because this is one of those things that I won't say was overlooked, because people always talk about it, but it kind of falls into the save-the-planet category, and people kind of roll their eyes. But it's not so much save the planet. It's much more: we don't have the space in the data center or the power envelope to do it. Absolutely. If you think about power, IT consumes two to 4% of global power from a macro perspective. And typically storage is 25% of that. So storage could be 25% of 4%; that's 1% of global power. A lot of that is on inefficient spinning media, right? I mean, hard disk. With Flash, we can cut it by a factor of 10. So imagine that 1% becomes 0.1%. And you can imagine the hyperscalers in places like Ireland that said, no more data centers; we just don't have the power to give you. So they're like, where do I put my next data center? How do I get the power for it? Iceland. Iceland, yeah. And built-in cooling. Built-in cooling. But this is as much about power and cooling. I just heard the other day there was a customer that said, I can either pay my power bill or pay my IT person to run it. If I pay the power bill, I can't pay for my IT guy. So this is real economic hardship. Absolutely. The economics of power. Yes, and actually, especially in Europe, it's becoming really big. And in the US too, in California, it's starting to become big and pick up here as well. Okay, I've got to ask you, you're the product lead.
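The power arithmetic in that exchange works out as follows, taking the upper-bound figures quoted (IT at ~4% of global power, storage at ~25% of IT power, and a ~10x reduction from moving off spinning disk):

```python
# Back-of-the-envelope version of the power math quoted above.
it_share_of_global = 0.04    # IT at ~2-4% of global power; upper bound
storage_share_of_it = 0.25   # storage at ~25% of IT power (as quoted)
flash_reduction = 10         # ~10x less power than spinning disk

storage_share_of_global = it_share_of_global * storage_share_of_it
print(f"Storage today: ~{storage_share_of_global:.1%} of global power")
print(f"All-flash:     ~{storage_share_of_global / flash_reduction:.2%}")
```

That is the "1% becomes 0.1%" claim: 25% of 4% is 1% of global power, and a factor-of-10 cut takes it to 0.1%.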
You've got the keys to the kingdom at the company, as I mentioned, because product is a fun position: you get to see the customer side and you see the engineering side of it. How would you describe why Pure's product is so good? In the elevator or at a party, what's the bumper sticker? What's the elevator pitch on why the product's so good? Why is Pure so innovative? Why should I invest? What's in it for me? Why is Pure a great product? So, I mean, Pure ultimately is really trying, as I say, to store, manage, and protect the world's data. And the reason the product is so amazing is that we've got the best Flash technology in the marketplace, three years ahead of everybody else. It takes substantially less power, substantially less space, is much more reliable, and we have the Evergreen model built into it, so it's built to just run forever. Think of it as self-driving storage, right? Compared to the days of constantly hands-on management, think of it in that sense. We are like self-driving. On the Evergreen model, for the folks that don't know what that is, what does that mean when you say you have the Evergreen model? What does that specifically mean? Yeah, the Evergreen model basically says that once you buy storage from us the first time and put your data on it, 10 years later the storage will still be running, and you will have the latest and greatest technology. That means you get software updates, all the latest features, snapshots for ransomware recovery, a bunch of software upgrades, plus the hardware: the controller went from PCIe Gen2 to Gen3 to Gen4, controller upgrades, all happening on the fly with no downtime. And it's the best in terms of recycling, because any of the old parts we recover, we recycle, so it's very friendly from a climate standpoint. But the benefit you get is non-disruptive upgrades. It's always on, always running.
Awesome. Where is the product portfolio going next? Obviously, you've got the roadmap, you've got the leading edge, you've always been innovative. Love the innovation strategy, plus you've got product leadership, but best of breed is table stakes now. It's a platform market now: you've got cloud, storage is going to be horizontal, you mentioned that earlier. I would say multi-cloud, or what we call super cloud layers, control planes. You're starting to see companies build these data planes around their distributed environments, whether it's multiple clouds or edge environments. I'm sure power at the edge is going to be an important part of this; I can see great things there. Am I kind of on track with where you're going? Absolutely, I think you've said it pretty well. Essentially, we've got this really simple portfolio that runs on-prem with these amazing numbers, and runs on the cloud. On the cloud, by the way, we can save customers 50% of the storage cost running in the cloud itself by coming in between, right? So think of it this way: we've taken this simple portfolio and we're elevating it into a platform, right? And the platform, think of it as a storage cloud that effectively runs in the public cloud, on the edge, and on-premise, with a control plane that is all API-driven, where you can track your data. It has metadata in the control plane; you can track your data, you're creating snapshots. So essentially, you had a CMDB, a configuration management database; think of this as a data management database, a DMDB, right? Tracking your global data. Now think of the AI use case. The biggest challenge in AI today, if you think about it for a minute, is that in the old model, the data is locked up in silos all over the place.
And so if I have to suddenly train a model, I've got to go copy the data everywhere, then train the model, and now I need to drive inferences from the model, and then I've got to figure out where all that data is out there and cleanse it. But with this approach, your data is always on. And it's always on with a truly unique technology in Flash that is highly sustainable, very friendly, very reliable, so that your data is available and you can start to drive inferences from that data now, right? And get those applications that'll help you win in the digital transformation. I think the Evergreen model also points to the fact that as Gen AI comes in, you'll need software updates to manage this inference, when to make data available, governance. A lot more intelligence in how to manage the data is going to come down and be programmable, not so much human-driven. API-driven, both for traditional and cloud native. You know, we haven't talked much about Portworx, but Portworx is effectively our solution for the cloud-native side, and it's all integrating into the platform. So the platform will truly be for both traditional and cloud-native applications and being able to... We've been covering Portworx. I've got to say, that's a secret weapon you guys have with Portworx. It's doing great. Doing great, and it fits into the platform. I think that's a nice tie into where I see Kubernetes going, because as we pointed out on theCUBE, Kubernetes is getting boring, which means it's working. Like Linux: no one talks about Linux anymore other than that it's a standard, everyone runs it, there's no Linux conference anymore. So we see KubeCon evolving into the conversation of platform engineering and end-to-end data engineering. It's coming very fast and we've got great research on that. We've been reporting on it. Data engineering is the next persona for IT. And this is where the data platform comes in, right, for both cloud native and traditional, that enables you to...
Now imagine, back to the AI example: of course you need the data to be able to train models and all of that, but then the training apps or the inference apps, a lot of times they're written for the first time, and they're written on containers using Kubernetes. So we've got many cases where Portworx is under training apps and inference apps, and our core platform is on the training side. AJ, it's always great to chat with you on theCUBE. It's like we're having a little riff session here. It's like we need a whiteboard. Great leadership. Last question for you. As you lay all this out, it's very clear that you're enabling a positive disruption for your customers, or a transformation. I call it a disruptive enabler: it's disrupting the status quo, but you're also enabling a transformation for the future. What are the customers seeing for benefits? What can they expect next-gen storage to enable them to do, and what are they doing today? Take me through a customer anecdote or a slice of life of a customer today, and then what this will enable for them. Yeah, so think of the slice of life of a customer today, right? It's kind of, I've got a limited budget. I'm running on this constant treadmill of upgrading my storage, of forklift upgrades. I'm constantly going, hey, I've got a failed drive, pull it out, put in a new drive. So it's a constant treadmill, right? And it's quite fragmented. A lot of grinding. A lot of grinding, different use cases, fragmentation by use case. Now, think of the model of this storage cloud, a consumption-centric model; we have Evergreen//One, which is focused on that. It's a true horizontal cloud where, one, the failure rates are dramatically less, and two, you're not running this treadmill of upgrades all the time. It just seamlessly upgrades, so you're always on the latest technology.
And you are leveraging public cloud storage resources, edge storage resources, and on-prem storage resources, with data that is annotated and tracked. So imagine this data, with data services, that you can now truly work with to get the insights you need to gain a competitive edge in the digital transformation. Data is gold, and we are truly unlocking this gold for our customers. Yeah, and I think the idea is that storage is always there, available for developers. Yes. Available for the app developers and then the platform engineers. Yes. Key to have that platform. And again, the horizontal speaks to the availability of data. Absolutely, absolutely. AJ, great to have you on. Thanks for coming on to the next generation storage session. Appreciate it. Thanks John, appreciate it. Okay, in a moment, Prakash will be on, the general manager of the Digital Experience business unit at Pure Storage. He'll be here in studio. You're watching theCUBE, the leader in high-tech enterprise coverage, extracting the signal from the noise. We'll be right back. Welcome back everyone to the next generation evolution of storage. I'm John Furrier, host of theCUBE. We're here with Pure Storage. Prakash is in the house, a CUBE alumni, general manager of the Digital Experience business unit. Great to see you. This is like the seventh time on theCUBE. Welcome back to the studio. Thanks for coming in. Yeah, thanks for having me back. The title of this series is the next generation evolution of storage, but basically the wave coming in, generative AI, is another tsunami of more data. It's not stopping. This is a key part of how companies are re-architecting. User experiences are changing, the apps are changing, and cloud continues to grow, which means that consumption, how people buy technology and consume services, is right there. You're heading up a business unit that's delivering storage as a service.
Explain your business unit, because you have a really successful and growing product, and we'll get into it, but explain what you guys do. Yeah, we're building out storage as a service. A lot of our customers are using it as a distributed cloud. If you think about storage, traditionally it was like, I buy a box, I use the box, right? And then when the box is old, I buy another box. Well, there's a lot of inefficiency and waste that goes into that. So what if we apply cloud-like concepts to it? What if you could get the cloud wherever you're at? Because applications sit everywhere, whether it's on-premise or in the public cloud, and you should be able to get that cloud experience anywhere. So the way we like to think about it, we deploy a storage endpoint in a customer's data center, and they don't buy the gear. We guarantee performance and capacity SLAs. They can do reserve commits, pay-as-you-go on demand, that whole type of model, but they don't have to worry about managing, running, or operating assets or asset life cycles. They can just use storage and get the benefits of the cloud operating model and consumption model wherever they sit. So it's a dream scenario, basically, if you're a customer, because storage is gear: you've got to buy stuff, you've got to deploy it, got to connect it. In the past, this has never worked because it's hard, right? It's hard to do, because things break and you've got to come in and fix it. You've got to migrate, spinning disks would die, you've got to swap them out. Some systems were tied to each other. AJ just talked about some of the challenges between disks and controllers. But it's evolved, and with Flash you guys have an advantage. Customers want to consume this way. They don't have to take a risk. They can consume like cloud, pay for what they get. So why are you guys successful? What makes it so unique that you guys can pull this off?
Well, I wish I'd invented it, but I've only been with the company five years. About 13 years ago, when the company started, some decisions were made architecturally to build this concept called an Evergreen architecture. Which means, what if you could build a system that never got old? Usually when you buy an asset, it ages out. What if you could build a fountain of youth, where that same asset would be newer 13 years later? The way you have to do that is you monitor every component, and any time a component wears, you can swap that component with no degradation to performance, no data migration, and no disruption to the customer. So that concept was built into our technology. So when we decided to build the storage-as-a-service business, we realized we had a unique advantage in that we can actually deliver a service just like a SaaS service: just like your Salesforce CRM gives you new features all the time, we could do the same thing in storage. Your hardware gets better over time, your software gets better over time, your security gets better over time, right? So that's the concept that we bring to it, and we're unique because of this Evergreen architecture we have. So the efficiency and flexibility are there. You get that easily. Customers pay for what they use. What are some of the things you see around the questions that might come up around security, mean time between failures, the normal stuff that people talk about in storage? Well, think about it this way. There was a point in time in storage where everyone was like, well, the network will protect me. As long as the network's secure, everything's fine, right? And once storage is working, don't touch it, right? And what would happen is people would be like, okay, I'm only going to patch my storage once a year, within change control windows, because it was clunky and cumbersome.
This day and age, in Linux, I don't know, in a week there are like seven to 10 major Linux vulnerabilities. Every storage operating system is built on Linux. What do you think is going to happen, right? So imagine that if you apply SaaS-based security principles, you probably need to update your storage system daily, right? If you really think about it, to secure your environment. So customers, are they going to keep up with the operational overhead to do that? Probably not. So what we do now, because our Evergreen architecture allows us to update components non-disruptively, that applies to our software stack as well. So we can do software upgrades with no performance impact or customer downtime. So we can just push updates to those endpoints directly from Pure1, our cloud management plane, right? And customers can benefit from rapid software innovation cycles. And we've changed our software release cycle to actually now ship monthly as well. So our Purity feature releases are coming out monthly, where customers are actually getting new capabilities monthly. So this is something I wish I had talked to AJ about, but I didn't have a lot of time. You mentioned the software stack, and you're seeing that as an advantage in all the hot areas, whether it's GPUs or GPU clusters; the software stack to build that developer and/or agility angle is huge. Talk about your software stack and why that's so important to the Evergreen model, and also to getting new features. I'm sure AI is going to have some unique things about inference and training, making data available, controlling data. I'm sure there's going to need to be an upgrade on the stack on that piece. Well, so that's interesting. It's probably coming soon, probably in the next few months: we're doing a Purity software update to introduce GPUDirect support for FlashBlade, right? That's it. Okay, now we support GPUDirect, right?
So those types of things are things that we can bring. That's the key. You guys are just pushing software updates. There's not a lot of hardware, or is there? Is there swapping out? Well, the good news is we build our hardware so that hardware life cycles with Evergreen typically have about a two-to-three-year generational life cycle, right? There's always new memory, new DRAM, new NAND flash, et cetera, that has this kind of two-to-three-year tick-tock evolution. The good news is people don't have to wait, because as it becomes available, we can swap it in to continually improve your energy use and density. Because you're phoning home, you're getting a sense of it. So you guys are getting ahead on the predictive side. So that follows its own innovation cycle. Customers don't have to worry about, hey, I'm on an old piece of hardware or an old piece of software. They both get better independently over time, and the security updates also get better over time because we're constantly pushing those. So that's kind of a layer. And I know you brought up AI. What's fascinating with this type of approach, right, is that obviously people talk about GPUs: burn GPUs, let's go. It's like, go get me more GPUs, I'll order them. But there's more to it to get a good outcome. We've been talking about this since statistics and predictive analytics were hot, before AI was hot: there are only two vectors, right? More data or more compute to get a better answer. The more data you have to input, the better your prediction model will be, or the more simulation you can run against a data set, the better your answer will be, right? Those are the two vectors you're playing with. Now, if you can't get the data into the compute fast enough, you're going to have a problem. Enter flash.
I don't know how you do any of this without flash, right? You're going to just have a bunch of GPUs with a lot of horsepower that are bottlenecked by the bandwidth of disk. But that aside, the next element is the optimization of enterprise data: my internal data, external public internet data. You've got to cleanse the data, you've got to bring it together. And if you've created storage silos, and we see storage as a very fragmented market space today, you know, these are my archive systems, these are my mission-critical application systems, these are my analytics and observability platforms, that type of thing, then bringing everything together for AI will require data consolidation. So storage fragmentation is the enemy, and if you can consolidate... For AI, I mean, horizontal is the better play. And when AJ mentioned that earlier, that is critical. What's available, what's addressable? AI is based on data quality and availability. I mean, high availability, highly available, these are storage terms applied to AI now. So AI and storage now have a symbiotic relationship. That connection is going to get even tighter. It's the enabler that allows you to do that. So how are customers thinking about that? Take us through the subscription. Okay, I'm a subscriber. I'm using the solution. Thank you very much. It fits my budget, I use it. Now I'm in the AI planning phase. How does Pure help me? Would you just subscribe to an AI module? Let's say you're already on Evergreen//One. Well, you're already on a model where, as you scale, you don't even need to buy more. You can actually just start using. And because we're the vendor managing the hardware that sits at that endpoint, we always maintain about a 20 to 25% buffer of headroom. So if you're using more, we're always landing more hardware than you actually need.
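The 20 to 25% buffer of headroom described here can be sketched as a simple capacity check. This is a hypothetical illustration only: the threshold comes from the conversation, while the function name and the capacities in the example are invented.

```python
def needs_expansion(used_tb: float, installed_tb: float, buffer_frac: float = 0.25) -> bool:
    """Flag an endpoint for more hardware when free headroom drops below the target buffer."""
    headroom_frac = (installed_tb - used_tb) / installed_tb
    return headroom_frac < buffer_frac

# An endpoint with 80 TB used out of 100 TB installed has only 20% headroom,
# below a 25% target, so the vendor would land more capacity ahead of demand.
print(needs_expansion(80, 100))   # True
print(needs_expansion(70, 100))   # False
```

The point of the policy is that the vendor, not the customer, watches this signal and ships hardware before the customer runs out.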
So you have the ability to grow as you need. And in the consumption model, you'll have volatility, right? You might say, okay, I'm going to spike and do a big training run, and I'll just pay on demand for that, because that's not my steady run rate. In the consumption model, you're not buying anything. You're not owning anything. So for AI, you can just say, hey, this month's going to be hot. It's fine, because I've built my training models. Now that I'm running at a steady state around inference, that's where I'm going to set my reserve commits, and off I go, right? So that model allows customers to get started immediately with AI. Just jump into it and go. And obviously, we haven't talked a lot about it, but just like many years ago in cloud, people were talking about, okay, I'm going to the cloud. And then people were like, well, maybe I'm coming back because it's too expensive. There's this repatriation trend because people realized that running full tilt was expensive. AI is very powerful, but it also can be very expensive. Is that repatriation, or is that net-new use cases that they want to have on-premises, because that's where their data is, that's proprietary? I think it's both, right? We see both happening, just because of cost models. And in AI, just like there was DevOps and then FinOps for cloud cost management, I think there's going to be AI FinOps as a market. Kind of happening now. That's going to happen: okay, I know AI can do that, but am I willing to pay a billion dollars to do that? Yeah. There's a cost to doing this. Or if it goes viral or something hits, am I ready for it? And what does the cost envelope look like? And by the way, is that what I want? Exactly. And am I prepared for it? So again, I think cost is huge.
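The reserve-commit-plus-on-demand pattern described above (steady-state inference on a committed baseline, training spikes billed on demand) can be sketched with invented prices. Both rates and the billing shape here are assumptions for illustration, not Pure's actual pricing.

```python
# Hypothetical rates: a committed baseline is cheaper per TiB than on-demand burst capacity.
COMMIT_USD_PER_TIB = 10.0
ON_DEMAND_USD_PER_TIB = 16.0

def monthly_bill(committed_tib: float, used_tib: float) -> float:
    """Pay the reserve commit in full, plus on-demand rates for any usage above it."""
    burst = max(0.0, used_tib - committed_tib)
    return committed_tib * COMMIT_USD_PER_TIB + burst * ON_DEMAND_USD_PER_TIB

print(monthly_bill(100, 90))    # steady inference month: 1000.0
print(monthly_bill(100, 150))   # training-spike month: 1000.0 + 50 * 16.0 = 1800.0
```

This is the shape that makes "this month's going to be hot" affordable: the spike costs more per unit but requires no new commitment.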
You've seen that a lot. I have to ask, because this comes up a lot from customers I talk to. They say, I'm experimenting on-prem, and then maybe I'll do cloud. Once I'm up and running, I'll do a little bit in the cloud and on-prem, but I want my data to be here on-premises. I want my GPU clusters to be there. I want bare metal. I want a GPU cluster. I want my storage on-premises; that's the use case today from a development standpoint. Yeah. What do you guys see? So I see all of those types of scenarios happening. Now, the hybrid multi-cloud world we always talk about is, you know, the perfect scenario, but it does require some deliberate choices. One, you need consistency between your on-premises and cloud environments, in operations and technology, and we have that now with our Cloud Block Store product. So you can run Cloud Block Store in the public cloud, you can run a FlashArray on-prem, and we've even had customers run Cloud Block Store on AWS and Azure, with Oracle running on one and Oracle running on the other, active-active, right? So we provide replication technology across hyperscalers today. So all of those things exist from a storage standpoint. But then you also need to think about your application deployment, which is typically your VM or container, right? VMware created that common space for VMs that can be deployed anywhere, and Kubernetes has pretty much become the standard for how you can deploy in a cloud-agnostic way. And with that, we have our Portworx capabilities that allow us to do that. So where I see this going, right? If you think services and consumption first, you layer on principles of flexibility and the cloud operating model, and then you make technology choices that are more container-first, I think you're going to get to the point where you're going to be ready to give your business agility.
Prakash, I think that's really a key point, and I would just say we're seeing some validation in the marketplace, because you mentioned Portworx. Even with Kubernetes, as it becomes like Linux, the conversation has shifted from, how do you stand up Kubernetes clusters, to, okay, what does the end-to-end workflow look like? What does my platform engineering look like? Which is essentially a pretext to app developers who are going to need to store stuff on storage. Okay, so I see that. The question I have for you is, okay, we believe that to be true. So let's go to the next level. Pure has always been a product leadership company: good innovation strategy, investment in R&D, good expertise in flash, and you mentioned all the evergreen stuff with AJ as well. Okay, great. But I want to subscribe to a platform now, because remember, I've got platform engineering conversations happening, so assume best-of-breed is table stakes. Check, you guys have done that. What does the product look like at a platform level, at this holistic view? Is there a subscription for that? Or is it a collection of subscriptions? How does the customer motion look for you guys when they're thinking, okay, I'm going to start looking at my entire end-to-end process, I've got GenAI coming, I'm going to have a data engineer soon, I've got platform engineering at full throttle, Kubernetes is now under the covers, and I'm looking at pipelines and cloud-native services? So the way we think about everything is, in storage you care about performance, capacity, and availability, right? And resiliency. We'll give you SLAs for all of the above. Meaning, what service level do you want, right? So the platform is the service level at that point, right? Because should you really be worried about what hardware to deploy for the service level? No, you just need a service level. And we've instrumented that service level so customers actually have visibility to that SLA right in the product. It's monitored, and they can see it.
If we miss the SLA, there's a service credit built into the product. Two, we've enhanced those things. We have a zero-data-migration guarantee, so we're not going to bait and switch and tell you, okay, here's our new hardware platform and what you have doesn't qualify, or whatever, right? We're not doing those things that a lot of traditional storage vendors do. Some guarantees aren't really guarantees. Yeah, you know, oh, here, I'm going to take this old product and rebrand it as a new product so it doesn't apply under the terms and conditions. We don't do any of that, right? So that's the second thing. Third, because we're running these services within customers' data centers, not only are we giving you SLAs, but whatever power and rack space we use, we actually pay for. So customers know it's a real service, because you're going to treat it just like you treat a cloud, right? So you see yourselves as the easy button for platform engineers: they just plug in what they need, because, man, they're thinking about the developers who are going to need to store at scale. It's not like the developers are calling them saying, I want more storage. Previously, developers just needed to provision. They're like, okay, I just want to do things, because money is no object, right? I run an engineering team right now; I have platform engineers. It's a good experience to provide, but money is an object. So it's funny, I have this guy, his name's Vivi, on our team, and his title is DevSecFinTestOps. He's a kingdom builder. Yeah, so I was like, wait a minute, why did we create a role called DevSecFinTestOps, right? If I got that right, I think that's his title. But the head of my developer platform engineering that runs systems is also responsible for my cloud costs. Why is that? Right, because when he provisions a service for developers, he needs a rate.
So by tying everything to an SLA, you actually allow developers to say, when I'm making this provisioning request for storage, do I want the $3 version or the 50-cent version, right? And they can tie it to the SLAs for their applications. So it puts responsibility for the outcome directly in the hands of developers. And I think that's the key to that "money is no object" quote, because if the platform engineer does his job, it will seem like the developer has free access to storage. That's the job of the platform engineer, to your point. Yeah, before it used to be, oh, let's go build it and then we'll optimize it later. That's yesteryear. Modern developers actually need rates in products. Prakash, it's always great to have you on theCUBE. What's your vision for your business unit? What's next? You're always working on new things. You've got the generative AI wave coming; more storage is still needed. It's going to be different. We've been saying on theCUBE that the script will flip on data management as scale kicks in and you get automation. Automation is basically what AI is doing: you're automating stuff, generating things. As generative AI starts doing work, whether it's CodeWhisperer on Amazon or Copilots everyplace else, you can have a lot more augmentation for the humans to help provision and manage all that good stuff. What's your vision for your effort? Yeah, it's interesting, because I have one for customers and then I have one for our own internal team. For our own internal team, if you're running a service, running a service has always been about SREs. Generative AI can replace SREs. I'm pretty convinced of that. So building and training models for generative AI to replace SREs for running and operating a remotely distributed service just moves more capacity into innovation on the development side.
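The "$3 version or the 50-cent version" provisioning choice mentioned earlier can be sketched as a rate card tied to service levels. All tier names, prices, and latency targets below are hypothetical, invented purely to illustrate developers choosing by rate and SLA.

```python
# Hypothetical rate card: tiers, prices, and latency targets are placeholders, not real pricing.
RATE_CARD = {
    "premium":  {"usd_per_gib_month": 3.00, "latency_ms": 1},
    "standard": {"usd_per_gib_month": 0.50, "latency_ms": 10},
}

def monthly_cost(tier: str, gib: int) -> float:
    """Estimated monthly bill for a provisioning request at a given service level."""
    return RATE_CARD[tier]["usd_per_gib_month"] * gib

# A developer weighing a 500 GiB volume sees the trade-off up front:
print(monthly_cost("premium", 500))   # 1500.0
print(monthly_cost("standard", 500))  # 250.0
```

Exposing the rate at provisioning time is what moves cost responsibility to the developer making the request.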
And then as we're developing for customers, previously people used to think about application-specific storage. Now you've got general-purpose storage that can do a lot. But with where the technology space is going, where we think in a few years we'll have a 300-terabyte all-flash drive in a similar form factor to this, at that point you'll need to start doing micro-application understanding. And I do think these service SLAs will become more application-aware, where you can say, let's apply retention policies and these SLA policies to these applications. And the storage system itself will be aware of which application is working on it, and optimize the way it interfaces and the block sizes it writes, and even extend the life of the media based on the application type. So I think it's fascinating: to do the new service economy, storage will have to become more application- and context-aware. And I think, to your point, to make that happen you've got to know the underlying data, because that's the AI feeder right there. The data needs to know exactly what to do, when and where, and that's going to be driven by the application; hence the developer is going to have to be data-savvy. Not just know there's a database out there, but know a lot about it, or rely on systems that can treat data with intelligence. Yeah, every developer starts with observability nowadays. Great. Well, great to see you. Thanks for coming on theCUBE and on our next generation storage series. Really appreciate it. Thank you. In a moment, Dave Vellante will be speaking with Steve McDowell, Chief Analyst at NAND Research. I'm John Furrier with theCUBE in Palo Alto. You're watching theCUBE, the leader in high-tech enterprise coverage, extracting the signal from the noise. Thanks for watching. Here's a really simple idea. What if your data storage didn't require constant upgrades, migrations, and disruptions?
Instead of the same tedious routine, your system is always updating, modernizing, and scaling with your business so you can focus on more important things. And with no product end of life, it can also help the planet by using 85% less energy. That's just what we did. Pure Storage, sustainable data storage, pure and simple. Visit purestorage.com slash simple to learn more. Hello, and welcome to this special conversation with Steve McDowell, who's the Principal Analyst and Founding Partner at NAND Research. And we're talking about the next generation of storage. Steve, welcome. Hey, it's good to be here, Dave. Hey, so tell me first about NAND Research. Congratulations for getting that up and off the ground. Yeah, so NAND Research, we're just celebrating our first anniversary. We're a small analyst firm. We focus on the horizontal technologies around the data center, everything from servers and storage to the software stacks that live right above that. So it's an interesting space to play in. So let's talk a little bit about IT complexity. Cloud was going to solve all IT complexity, but you know how it is in this industry: we solve complexity with complexity. So where do you see the state of IT complexity today, and then we'll get into how that's going to change? Wow, so it's as complex as it's ever been. I've been in this industry now for 30 years, and every time we think we've solved the complexity problem, more comes along. There was a period, maybe eight, nine years ago, when the world believed that cloud was going to be the answer, right? Let's take all of this infrastructure complexity and push it onto somebody else's plate, so I can just write Amazon a check and focus on the business of digital transformation. But as we all know, it hasn't played out quite as well as people predicted, right?
There's some workloads that just naturally want to live on-prem, right? Maybe whether it's for governance issues or latency issues or whatever, right? So what we've ended up doing was creating more complexity because now we have this whole kind of hybrid cloud infrastructure. I think you call it super cloud where, you know, now I'm a poor IT guy, I got to manage stuff in the cloud, maybe multiple clouds and on-prem and it just gets harder and harder. But what cloud has also done is it's reset the expectations of what an IT experience can be. It's told, you know, not just the IT guys, but their CFO and finance guys that, you know, I don't have to buy all of this equipment, right? Maybe there's a self-service, maybe there's a consumption-based model. So we're living in an extremely complex world that's a blend of clouds and then even on-prem, it's a blend of I own my rack, I rent my rack, I have a consumption-based, you know, as a service. So there's a lot going on, a lot going on for the poor IT guy. So let's explain that a little bit for people because some people might say, well, you could always, you know, lease. So it was OpEx, not CapEx, but you're talking about something different. You're talking about the actual experience, that cloud experience coming to on-prem and hybrid. How has that evolved? And then I want to get into specifically how storage is really taking more of that burden. Oh, sure. And it's, you're absolutely right. I mean, I could always lease. And, you know, the rise of consumption-based coincided with a couple of things. One is they changed some tax regulations in some countries that made leasing less financially attractive, right, from a tax basis. At the same time, you know, cloud has taught us that a managed experience is a pretty good thing. 
You know, if I can take storage, for example: install a rack of storage and not have to rack and stack it and configure it and worry about the network and all of the technical details of the physical piece of it, and just worry about the data piece of it as I'm driving my data or digital transformation projects forward. That's a good thing, right? So I think we've seen the consumption-based experience evolve from kind of a lease, which still exists in some contexts, but what's really growing and what's really popular right now is the managed experience. So I'm getting compute as a service, I'm getting storage as a service. Okay, and so let's double-click on storage a little bit. Where is storage's role in the value equation, and is storage doing more of the work? And if so, how? Storage is evolving rapidly, and we could talk about any number of things, from flash consuming hard drives to the role of data and how storage is consolidating. At a macro level, what's happening is that data is becoming more critical to the digital enterprise than ever before, right? It used to be about putting processes onto compute, and that was our digital transformation. But now we're talking about data transformation. Some of this coincides with the rise of advanced analytics; some of this coincides with the rise of AI, which is: I have data all over the place, and I need to consolidate that data, whether it's into a data lake or something similar. I need to consolidate that data and manage that data, because that data is now driving my organization. That's A. B is the cybersecurity aspect, right? We're in an environment now, and it has kind of rapidly taken over, where ransomware and bad actors are targeting my data in a way that they never did before. It used to be they'd lock up a machine and demand some ransom.
Now they encrypt all of your data. So the role of data storage in protecting that data is also evolving. We're seeing things like immutable snapshots and ransomware detection in the storage device itself. So there's a lot going on in storage, and it's taking a more front-and-center role. So obviously flash is not new, but it continues to evolve. Where are we at in terms of flash's impact, the all-flash data center? I mean, we go back to last decade, but there are still opportunities for organizations to apply flash in new ways. You can obviously do a lot more. You can compress more without the performance penalty. You can share data better, in a more facile way. So where are we at in that whole curve, and the crossover with spinning disk? I think we're about to hit kind of the second evolution of flash, right? Ten years ago, Pure Storage was, I think, key in pushing the industry in the direction of all-flash, because legacy storage vendors really had no desire to go there, right? It's disruptive to them as much as it is to IT, but the value of flash proved itself. So the first explosion of flash, 2014, 2015: we started pushing flash into high-performance workloads, right? It was still expensive, but you could not beat the performance or the density and all of the stories around that. But it wasn't going to replace disk drives, right? Not as it existed. As these things do, flash has evolved. So we're now looking at technologies like QLC NAND, for example, which has less performance and some other, you know, kind of SLA characteristics, but that makes it much more attractive versus hard disk drives. So we're starting to see QLC flash push down into nearline storage, right? And if we look at the density curves and the price curves over time, right?
We're not far off, maybe by the end of the decade, from seeing greenfield storage installations being all-flash, and that's an exciting time. We couldn't have visualized this even five years ago, but the densities, the price, the performance, and the reliability that we've put in around QLC make it a viable alternative. So I've got your report from NAND Research, tackling IT complexity with simplified enterprise storage. And in there, you have this nice chart laying out the workloads that are appropriate for QLC NAND: nearline, big data, analytics, data lake, et cetera, data protection. And then TLC NAND: OLTP, right, is the wheelhouse; those are the write-intensive workloads. And what I'm hearing from you is that flash is expanding its total available market, if you will, in terms of use cases. And that's a function of economics, is that correct? And also the tech itself? Economics and capability, yes. So the price of NAND, for some capacities, has already crossed over that of hard disk drives. And we can see a point in time where, at the same capacity point, the NAND will be as cheap or cheaper than spinning hard drives. And the other thing to think about, when you think about the economics of flash storage, is that it's not just the acquisition cost, it's also the operational cost. Flash, when it's not serving data, doesn't consume power, or it's so minuscule that it doesn't count, right? And the same can't be said of hard drives. So it's an economic point, but it's also a technology point. When we look at technologies like QLC flash, right, it arrived with a little bit of baggage. They said, you know, QLC doesn't have the endurance, doesn't have all of these things that TLC, which is the high-performance flash, does have. And some storage vendors have taken innovative approaches and said, well, we can engineer around that. And we're seeing that.
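The point that flash economics include operational cost, not just acquisition cost, can be made concrete with a rough total-cost-of-ownership sketch. Every number below (prices per TB, watts per TB, electricity rate, capacity) is an assumed placeholder, not sourced data; the sketch only shows how a power-hungry medium can lose on TCO even with a lower sticker price.

```python
def tco_usd(acq_usd_per_tb: float, watts_per_tb: float, tb: float,
            years: int = 5, usd_per_kwh: float = 0.12) -> float:
    """Acquisition plus electricity cost over the service life (illustrative figures only)."""
    kwh = watts_per_tb * tb * 24 * 365 * years / 1000
    return acq_usd_per_tb * tb + kwh * usd_per_kwh

# Assumed figures for a 100 TB deployment over 5 years:
hdd = tco_usd(acq_usd_per_tb=20, watts_per_tb=5.0, tb=100)    # 4628.0
flash = tco_usd(acq_usd_per_tb=40, watts_per_tb=0.5, tb=100)  # 4262.8
print(hdd, flash)
# Flash costs twice as much up front here, yet comes out cheaper once power is counted.
```

With these made-up inputs the crossover comes entirely from the energy term, which is the shape of the argument being made in the conversation.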
If you look at Pure Storage and their E family, you know, FlashArray and FlashBlade, which use QLC NAND, right? They've put a lot of software engineering on top of the QLC to solve the things that gave QLC a sketchy reputation to begin with. So it's proven itself as enterprise-class, right? We've all been shipping QLC now for enough time that we can trust it as an industry. So it's both technology and economics, and they're converging at this point. Right, thank you for that. So from your report, I want to talk about sustainability. You've got this stat: data centers account for about 1% to 2% of global energy consumption, with storage accounting for about 25% of data center usage. And most of that storage energy consumption is from spinning disks. When you're spinning disks around, you're obviously using more energy. So how does flash address that problem, and where do you see that headed? Well, flash is a semiconductor. So it consumes energy, but a minuscule amount relative to something mechanical. If you look inside your hard drive, you have a motor in every one of these hard drives that's spinning at 7,200 or 10,000 rotations per minute. So those things are consuming power, even at idle, right? A hard drive doesn't completely stop. So from a sustainability perspective, there's no question that flash is more sustainable. Just in terms of raw electricity usage, we can measure this, we can look at this. So there's that. There's also not just the power consumed, but the heat generated. Flash doesn't generate the heat that a mechanical hard drive does. So if I'm looking at data center scale, there's less that I have to heat and cool. So there's a bit of a ripple effect. And then thirdly is the density story.
I can pack a whole lot more bytes into a flash array than I can into a hard-drive-based array, just in terms of the physical footprint, which again comes back to the environmental impact inside the data center. I don't have to heat it, I don't have to cool it, and I don't have to have as many racks. So it's just an overall better story. Yeah, these are key points. I mean, of course the CPU and GPU were the big culprits, but with all this activity going on in AI and the world of GPUs, GPUs are not only hot from a market standpoint, they run hot. So whatever we can do to attack even that 25%, because it feels like, from an economic standpoint, like you said, this is now the next wave. So it's sort of a no-brainer to point flash at that problem. And I'm sure there are many smart people working on the heat of the CPU and the GPU and liquid cooling and the like, but it seems like flash as a replacement for spinning disk is imminent, at least in a lot of use cases. I think it's inevitable. And nobody's going to go in and wholesale rip out all of their existing drives, right? We're following the replacement cycles. So the hard-drive-based arrays that are being sold today, those will be replaced by flash in five years, right? And I think we're already in the kind of gray area now where, if I'm looking at nearline storage, things that are relatively hot and accessed, I'm going to consider QLC, right? I'm going to do that math and make that choice, whether I go hard drive or flash, right? It's not a choice I would have made three or even five years ago for a lot of these workloads, but I think by the end of the decade, if the trends hold, all new greenfield, with maybe a few exceptions, is going to be flash storage. So, Pure was always trying to be the first at things. They were the first with the evergreen model.
They were, I think, the first storage company to actually partner with NVIDIA, as I recall. And you've got a chart in here that shows improvement curves of capacity over time: HDDs, SSDs, and then you've got a curve on Pure, which is much steeper. Why is it that Pure is able to do that? How are they doing in the market? Where do they fit? What's your perspective on Pure? So Pure is an interesting one. If you look at the OEMs that sell storage today, most of them have a legacy business, right? If you look at Dell, right? They consumed EMC; they already had a storage business. They've got, I don't know how many, product lines that are all run a little bit differently. They can sell you storage for whatever application you want, but it's a little disjointed. And when I'm looking at where to evolve my product line, if I'm a Dell or an HPE or a Lenovo, I have to take into account all this legacy stuff. What I like about Pure is, they have a few models, but at the end of the day they really have one set of technologies. They have one operating experience. And I don't know that any other storage vendor can say that, certainly not a tier-one storage vendor. So they've taken that, and they started with a philosophy, and their philosophy wasn't about, let's push flash into the market. I think they said, flash solves a real problem, and we can use that as a lever to simplify the experience. And for as long as I've been watching Pure, a decade plus now, it's all been about simplifying. It's not just simplifying flash storage, it's simplifying the whole storage experience. So where I'm a fan of Pure is how they apply that philosophy of simplification to everything, whether it's consumption, whether it's even procurement. There's a reason that Pure is probably the only storage company, I think, that publishes their NPS score, which is in the 80s. And that's insane for an infrastructure company.
But it's because it's in their DNA to provide a simplified experience. And again, it comes back to where we started the conversation: IT is complex, right? And the IT guy, man, he struggles every day. Anything you can do to solve this problem is going to resonate, and it's going to be goodness. And that's where Pure's focused. And it's less about the technology and more about the approach, right? Well, that NPS score, off the charts; I'm just looking that up, and "that's insane" is right. I just Googled it: Pure has an NPS score of 86. And Apple, Apple's in the 70s, I think. That's unbelievable. So, and you're right. I mean, Pure has always been focused on simplicity. Storage, as you well know, you've been in the business as have I for a while, simple was never how you described storage back in the day. So, Steve, it was great having you on. Congratulations on your one-year anniversary, and I really appreciate you sharing your perspective and research. No, thank you, Dave. It's good to see you, always. All right, cheers. And thank you for watching The Next Generation of Storage, made possible by Pure. Now, to learn more, please check out the resources tab and the links in the description of this video. Bye for now.