Hey everyone, welcome back to theCUBE, the leader in live tech coverage. We're coming to you live from Las Vegas, the Venetian Expo. It's HPE Discover 2023. Lisa Martin here with Dave Vellante. Our first day of coverage of three days. We're going to be having a ton of conversations, as we always do; the CUBE's canon of content just continues, Dave.

It always comes back to the data. It always does come back to the data.

We're going to be having a great conversation with Brandon Whitelaw, who joins us, VP of Strategic Partnerships at Qumulo. Brandon, great to have you on the show. Talk to us about what's going on at Qumulo, the industry, leadership. Give us all that good stuff.

Yeah, thanks so much. It's great to be here. As I mentioned before, long-time listener, first-time caller. And it's a great show. It's my first time at Discover. I've been to other competitors' shows in this exact venue before, and it's a fantastically put-together show, really insightful. We're loving it. Great traction with customers coming to the booth and being able to talk about what we have related to being able to scale anywhere and help with the data challenges that customers are facing. Qumulo is now five years in a row in Gartner's Magic Quadrant, in the Leader quadrant, most innovative in that space. And it's really predicated on the ability to help customers, in a purely software-defined way, take advantage of hardware innovation as rapidly as great companies like HPE can put it out, to go after their challenges related to putting their data wherever they need it, whether it be, as we heard today, on the edge, in the data center, or in the cloud, or where I really think the future is, which is in a hybrid topology.

And that's one of the things that Qumulo has done. Early on, you guys sort of leaned into that cross-cloud. It was an early example of what we called supercloud.
We actually pointed to Qumulo as a consistent experience, whether you're on-prem or across clouds. But what about the challenges of unstructured data? I thought they were all solved by 2023. What are the issues that customers are having? Is it just that the amount of data keeps growing, and the old three Vs or five Vs or whatever it was? What's your take?

Volume, velocity, yeah. I mean, I think the interesting thing about it is that it continues to evolve more and more into primary, high-value workloads. Once upon a time, unstructured data was kind of your back-office IT home directories and file shares. For the most part, we see that transitioning toward better programs for collaboration, like Office 365, OneDrive, Google Drive, et cetera. Where we see most of it now is customers really struggling to figure out how to modernize these workloads, to be able to buy the base and rent the peak, so to speak, of private and public cloud, and to have the same simple, seamless, scalable experience for that data anywhere they want to put it, wherever it's demanded of them. Whether they have artificial intelligence data scientists wanting to use a particular algorithm that comes out, Google one day, Amazon the next day, and they have to have their data there and ready for it, or they have to have it on-prem, where they know they can have a consistent, controlled experience and leverage all the goodness that GreenLake has to offer. So we help customers do any of that without compromise. In fact, I saw it from both sides. I saw it from over a decade of having best-in-class on-prem scale-out technology, and then for three years at AWS, seeing customers trying to move it there from the other direction. And the reality is that you have to be cloud smart.
Where does it make the most sense? And that can change from month to month, depending on what's going on in the department you're talking to. And so being able to do that without compromise, not like a toy implementation in the cloud, but the full actual experience, one that's effective from a cost, performance, and scale perspective, really helps.

And where are these customer conversations these days? Is this at the C level, in terms of managing these growing volumes of unstructured data, being able to have access to the right data, but from a cost perspective and from a sustainability perspective? Where are you talking with customers these days?

Well, they're really getting it on both sides. Core IT is getting squeezed, as always, to do more with less. So they want simplicity, they want efficiency, they want the ability to consolidate more workloads and have fewer things to manage. And on the other side, that same organization is trying to funnel as much money as possible into agile, flexible, next-gen workloads and have those deployed, again, wherever it makes the most sense at that time. And 94% of enterprise accounts are in the cloud, and some 80-odd percent are multi-cloud, and the complexity of having a different experience everywhere, and a different way to manage and store that data, is really, really challenging, and it adds expense. So ultimately, being able to do that in a more modern approach reduces cost quite a bit and bridges the gap between that strategic CIO level and the storage team, who still need to govern, secure, and control it and provide real-time file system analytics on any side of that equation. That helps across the board.

You talked about buy the base, rent the peak. Yeah. So I like that. I like that. Or, even more so, GreenLake-like. Yeah. So do you approach that with partners, or are you tempted to, or do you basically do it through a Qumulo interface with clients?
Yeah, so we are 100% partner-driven in how we go to market, and what we find is that there are actually two different types of engagements, based on where customers are in their journey of being able to leverage cloud appropriately. Usually we see the trusted, guiding IT professionals who help resell traditional hardware, who are fantastic partners with HPE, helping with the majority of that. And then we also see cloud SI consultants coming in to help figure out the best way to optimize cloud where necessary. And because of how we're licensed, it's pure software, we don't have any proprietary hardware, we can take advantage of the newest, greatest thing that HPE comes out with tomorrow, or the public clouds. And it makes it so that you really have a tremendous amount of agility and flexibility to help with that. And then the partners, of course, can help guide it, because, especially when doing a hybrid topology, as sad as it is to say as a storage guy, it's almost never actually just about the storage. On-prem, it's kind of like Indiana Jones: how do I swap out one thing for the next without changing anything else? But with cloud, especially hybrid cloud, it reminds me more of The Martian: how do you go about figuring out how to science this thing so you have it in the right place at the right time, and you find the right topology? And you need experienced cloud providers to understand the application side of things. So we leverage both.

I love that you just dropped two awesome movies, The Martian and Indiana Jones. In the spirit of movies and Easter eggs, you mentioned one a minute ago; I mentioned it in the teaser for your segment. We know scale-out. We talk about it all the time. Qumulo's Scale Anywhere. That's the new tagline. What does that mean? And why does it matter?

We see it as really an evolution, and you guys have seen the ebbs and flows of IT, right?
Of centralization, decentralization, and all the things that have happened. I was part of the journey of the beginning of scale-out, back in the day, going from a scale-up filer to actual scale-out, and that's really the core foundation that Qumulo started with. But what we found as an opportunity is to take that same experience and go a level further, which is: if I zoom out a bit from the data center, and now I'm looking at edge and cloud and multi-cloud, it's really being able to scale that same experience and capability in any of those domains.

So it's like a true global namespace, but not confined within a data center.

Yeah, so the same experience, the same performance, the same flexibility and agility in any of the three major hyperscalers that you get on-prem. Again, not a toy implementation. Joking, I sometimes compare it to, I don't know, the food you get in a movie theater. Generically, you don't go out of your way to go to the movie theater to eat the nachos there. But if you're there, you eat it, because you're not allowed to bring other things in. And that's a lot of the file options in cloud today, even from established on-premises vendors. They're very limited in scale, performance, and cost. I kind of jokingly say that if you were to take any of those solutions, wrap it in a box, and try to sell it on-prem, you'd get laughed out the door. But in our circumstance, it's not like that. It's actually just the same, in fact, in some circumstances, faster performance in cloud than on-prem. And with what HPE has done from a flexibility, agility, and cost-modeling perspective, they're really combining the best of both worlds, and from a technology perspective, we're a perfect fit there.

Why do you think that is? Is it just the maturity of the stack? I mean, storage is hard, right? You know, it takes years, maybe even sometimes a decade, to mature your stack. We know this.
But cloud's been around for a while, you know? So why do you think it is that their stacks are somewhat deficient, or actually largely deficient, compared to, you know, not only Qumulo but pick your storage company that's been around for a long time? They've got better products, more functional features. Like I said, they'd get kicked out of the enterprise if they actually tried to offer the movie theater food. I hope that takes off as an analogy.

So I think what it comes down to is a tale of two cities. You have the legacy vendors who started with their software stack 20, 30 years ago, who really are in the same situation a lot of their customers are in, where they look at it and say: the best I can do is lift and shift the existing thing I have, change as little as possible, and put it there. And because I can't control the underlying hardware stack, which is actually where I get most of my money anyway, I'm going to try to get kind of the same experience, but, you know, fit that square peg in a round hole as much as possible. And it's really limited because of that. On the other end of the scale, the hyperscalers predominantly invested in block and object, admittedly, misguidedly, thinking that all the file data would just transition one of those two ways. Not realizing that, with the advent of AI and ML, which is predominantly done on file, and if you look at bioinformatics and media and entertainment, this is all file, it's all been growing the whole time cloud's been growing. In fact, it's the fastest growing segment of those three while cloud has been growing.

And the largest.

Yeah. And so where we see this huge opportunity is that 95% of the unstructured data on-premises is file, but less than 5% in the cloud is. So file's really the final frontier. It is the anchor tenant, as customers have told me, the anchor tenant on-prem that is keeping them from being able to move workloads, because there's just such a gap there.
Now, recently, in the last couple of years, the hyperscalers have started to really focus on this through partnerships. For example, we have the Azure Native Qumulo offering, which is Azure's only natively integrated, portal-launched file storage offering of its kind. And on top of that, we have more coming related to AWS and Google. But the opportunity really presents itself because block and object, being mostly application-connected without humans touching them, seemed to just be easier along the way, I think, and file is complex. The file system owns a lot of things that a block or object system normally doesn't. And that's why you have so many variants. Unfortunately, there's not one size fits all there. Customers have preferences, and use cases change.

There's no compression algorithm for experience, as a famous cloud CEO once said. What about AI? I mean, given the thrust in AI, and AI was invented last year, I joke, late last year. But given all the focus on AI, and the marriage of AI and file as you just pointed out, how do you guys, you know, Qumulo, HPE, how do you think about that, and where do you see it going?

Well, a couple of things. One, fortunately, although I was not here and can take no credit for it whatsoever, Qumulo was very early in that game, actually, in being a consumer of AI: a deep machine learning algorithm to help optimize how we cache data between spinning disk and SSD. Tremendously high cache-hit rates, and that has actually translated very well into how we operate in the cloud as well. So that's 10-plus years of maturing that model and using it to make the storage better. It's actually a really early example of eating your own dog food in that way. You know, a 4K heat map of the entire file system is very hard to do unless you plan for it from day one. So that was there. I think in addition to that, what we see is a lot of customers now in this arms race to figure out how to implement and integrate this into their business and get access to the right, appropriate tools for it.
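The general technique Brandon describes, tracking a fine-grained heat map of reads and promoting hot blocks to the faster tier, can be sketched roughly as below. This is a minimal illustration of heat-map-driven tiering in general; the class, thresholds, and decay policy are all invented for the example and are not Qumulo's actual algorithm.

```python
# Minimal sketch of heat-map-driven cache tiering (illustrative only).
from collections import defaultdict

BLOCK_SIZE = 4096  # track heat at 4K granularity, as mentioned in the interview


class HeatMapTiering:
    def __init__(self, promote_threshold=3, decay=0.5):
        self.heat = defaultdict(float)      # block id -> recency-weighted hit count
        self.ssd = set()                    # blocks currently promoted to SSD
        self.promote_threshold = promote_threshold
        self.decay = decay

    def record_read(self, offset):
        """Record a read; returns True if it was served from the SSD tier."""
        block = offset // BLOCK_SIZE
        self.heat[block] += 1.0
        if self.heat[block] >= self.promote_threshold:
            self.ssd.add(block)             # hot block: serve from SSD next time
        return block in self.ssd

    def decay_heat(self):
        """Periodically cool all blocks so stale data falls back to spinning disk."""
        for block in list(self.heat):
            self.heat[block] *= self.decay
            if self.heat[block] < 1.0:
                self.ssd.discard(block)
```

The point of the sketch is the planning-from-day-one part: the heat map has to sit in the read path from the start, because retrofitting per-4K-block accounting onto an existing file system is the hard bit.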
And a lot of times, what it comes down to is this hybrid topology. How do I go test, and be agile, and try new things very quickly in the cloud? And then, if it goes production and gets really big, sometimes it makes sense to pull that back down on-prem, into a GreenLake cloud, where you can control the cost characteristics, the performance characteristics, maybe the data sovereignty and locality characteristics. And so, again, having that hybrid topology, sometimes in reverse. I had a customer tell me their incubation chamber is always cloud, but the moment a workload goes production, they bring it on-prem, because they know it comes down to variability. If variability is very high, cloud works great. If variability is low and consistent, you can almost always save more money back on-prem. I think a lot of people confuse that for repatriation. It's really just smart, cloud smart, I guess, is what that is.

There's a lot of that going on. So do you actually specifically optimize for GPUs? Have you been doing that for a while?

Yeah. I mean, the nice thing is that it's kind of an accidental architectural benefit, so to speak, from before this was a massive workload for most customers. Most traditional file systems struggle with the simultaneous read threads needed by all these GPU cores. You know, the core count, even on a 1U HPE box out here, can be 25,000, 30,000 CUDA cores looking for threads. And traditional file systems have a limitation on that: they hold a memory handle on every single one. We don't. And so it allows us to scale to the absolute peak performance of the whole system without that kind of artificial constraint, which many other systems do have a challenge with. So that's one thing.
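Brandon's variability rule of thumb from a moment ago, high variability favors cloud, steady load favors on-prem, is easy to put into a back-of-envelope model. All the dollar figures below are invented placeholders, not real cloud or GreenLake pricing; the point is only the shape of the comparison.

```python
# Toy break-even model for "buy the base, rent the peak" (prices are made up).

ONPREM_MONTHLY = 10_000      # hypothetical flat monthly cost of owned capacity
CLOUD_HOURLY = 40            # hypothetical on-demand rate for equivalent capacity
HOURS_PER_MONTH = 730


def monthly_cloud_cost(avg_utilization):
    """Pay-per-use: cost scales with the fraction of hours you actually run."""
    return CLOUD_HOURLY * HOURS_PER_MONTH * avg_utilization


def cheaper_venue(avg_utilization):
    """Which venue wins at a given average utilization."""
    if monthly_cloud_cost(avg_utilization) < ONPREM_MONTHLY:
        return "cloud"
    return "on-prem"
```

With these placeholder numbers, a spiky incubation workload running about 10% of the time favors cloud, while a steady production workload running about 80% of the time favors owned capacity, which is exactly the "incubate in cloud, productionize on-prem" pattern the customer described.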
The second thing, actually, where we see a huge amount of opportunity in Azure, and AWS as well, is that, surprisingly to me, while 99% of AI and machine learning on-premises runs through a file system, a high-performance NVMe file system, in cloud it's almost none of it. In fact, almost all of it is built, again, with the tools they had at the time: staging data in object, holding it in object, and then staging it onto the local NVMe disk. And, by the way, they're paying per second for that GPU server. The most expensive byte of data you can buy in the cloud is that local disk in that GPU server. And they're waiting, sometimes hours, to load data into it before they can run the job. So having a shared, high-performance file system that can serve the whole data set across multiple GPUs actually accelerates time to learning, so you can move on to inference and the other stages of that workload. It really helps out quite a bit.

And I can get that in the cloud. If I want to access the Qumulo stack, I can do that, right?

Yeah, absolutely. Same exact experience, native sync between the two. Like I said, you can train in cloud if that makes sense for a period of time, and bring it back on-prem for inference, or use the most recent models. Anywhere that data needs to be to get access to those tools.

So the premise that HPE is putting forth is a couple of things, but basically the LLMs they announced, LLMs in GreenLake, is essentially that people are going to want to train privately, and it's going to be done more efficiently with HPE supercomputers, right? Presumably you want a piece of that action. What are your thoughts on that?

Yeah, look, I think it makes sense, because, surprisingly, the number one challenge with these models in the hyperscalers today is access to the GPUs. There is such a massive challenge in just getting time on these things, and the cost is through the roof, as you can imagine, because they can, because of the demand.
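The staging penalty Brandon describes, the per-second-billed GPU server sitting idle while data is copied from object storage into local NVMe, can be put into rough numbers. The rates below are invented for illustration, not actual cloud prices.

```python
# Toy arithmetic: the cost of GPU idle time during data staging (rates invented).

GPU_SERVER_PER_HOUR = 30.0   # hypothetical hourly price of a GPU instance


def staging_waste(dataset_gb, copy_gb_per_sec):
    """Hours (and dollars) the GPU server sits idle while the dataset is
    copied from object storage onto local NVMe before training can start."""
    hours = dataset_gb / (copy_gb_per_sec * 3600)
    return hours, hours * GPU_SERVER_PER_HOUR
```

At these placeholder rates, a 50 TB training set copied at 2 GB/s ties up the server for roughly seven hours before a single batch is processed. A shared file system the GPUs can read from directly collapses that staging window, which is the "accelerates time to learning" argument in concrete terms.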
And so I think with supply chains now getting back to where they should be, and things along those lines, people are realizing that the cloud is great for experimenting, trying new models, and getting access on demand. But when it really comes time to, like, put that thing to 100% and just train that model as hard as you can, some of the hardware innovation that HPE has available just makes total sense. And, you know, look, I'm sure the first time people went through some training runs in the cloud and then got the bill at the end, it was a bit of a, you know. And so, to have a pre-budgeted IT experience, again, it's a differentiation: whether on-prem or in cloud, we're flat-rate pricing. There are no hidden expenses. You're not charged per read or write, like a lot of things are. And so it really aligns value to where it matters most, which is: how do you just accelerate outcomes?

Last question, Brandon. Take us out with your favorite customer story, one that you think really articulates the value of what Qumulo is delivering, especially with its partnership with HPE. And if you can drop an analogy, your game is strong, I encourage it.

You know, one of the ones that's most dear to my heart is actually working with a children's hospital in Ohio. Unlike regular hospitals, they actually have a much longer retention requirement for medical imaging, and they also wanted a way to get access to AI machine learning models to help with early tumor detection and things like that. And for HIPAA requirements, it's got to be their own data set. They're not masters of IT per se; their focus is obviously helping kids. They needed to find cost-effectiveness in storing the data and, at the same time, use it to train their models so they can get better at the science.
And being able to go between Azure and on-prem with GreenLake has allowed them to move that data wherever is stage-appropriate, when it's needed, to not slow down radiology, not slow down surgery, but also to train models to develop new methods of detection specific to their demographic, the kids they're trying to help. And it's just fantastic to see the results already. And you'll hear a lot more about it, I think, publicly, by the end of this year, but it's a great partnership.

Great partnership. You just talked about how that really shines a light on the availability, the agility, the flexibility, but also on those business outcomes. And when you look at it in healthcare, the business outcomes are life and death. Thank you so much, Brandon, for joining Dave and me on the program, talking to us about what's new with Qumulo, the HPE partnership, and the outcomes that you're helping customers achieve. Great to have you.

Thanks for calling in. Thank you so much. Yeah. I appreciate it.

All right. We want to thank you for watching this segment with Brandon Whitelaw and Dave Vellante. I'm Lisa Martin. But don't go anywhere, because our next segment features a CUBE alum, Greg Ernst from Intel. He's going to be joining us to talk about HPE and Intel and what a successful AI deployment entails. You're watching theCUBE, the leader in live tech coverage.