Hi everybody, we're back in San Jose at GTC24, just unpacking the AI era. It's still pretty packed here, maybe not as much as it was the other day, but it's good to be here. John Mao is here. He's the VP of alliances at VAST Data. Thanks so much for taking some time.

Thanks, Dave. Huge show, it's good to be back. Actually, I don't think I've been to a GTC since I joined VAST four and a half years ago.

Well, yeah, because 2019 was the last one.

Right.

And they've certainly outgrown 2019.

It's been phenomenal.

Yeah, it's pretty good. You guys have a good presence here. And we were just at a luncheon that you held for analysts, thank you for that, that was good. You had one of the leaders from Genesis Cloud there explaining their model. And you guys, quietly, hidden in plain sight if you will, have carved out a really prominent position with some of these alternative clouds. I wonder if you could explain for the audience: why do people need a different cloud for AI? Why can't the conventional clouds that we all know and love support these workloads the way you envision it?

Yeah, so last year, 2023, was really a banner breakout year for VAST Data, especially in these kinds of specialized AI clouds, like you mentioned, Genesis Cloud being one of them. We have great partnerships with CoreWeave, with Lambda Labs, Core42, and many, many more. I think there's been an opportunity in the market for these specialized clouds because everyone started in the hyperscaler clouds, but there are some needs, where we're talking now thousands and sometimes tens of thousands of GPUs for some of these very large models being trained today, that just need a lot more horsepower, both on the data side and obviously on the compute side.
So there's been an emergence of these types of vendors, and we've been working hand in hand with many of them, designing new architectures to enable this type of infrastructure.

So Jeff Denworth said something at the launch. I showed up late, so I didn't catch the whole thing, but he said, look, the traditional cloud guys will thick-provision a GPU cluster. And I'm like, hmm, wait a minute, that doesn't sound cost effective. What does that mean, and how are these alternative clouds different? What does the customer get?

So I think a lot of the specialized cloud providers are providing dedicated bare metal. They're not typically heavily shared infrastructure. Whether they virtualize or not is kind of not the point; there have been lots of technology advancements to give you max peak performance of GPUs in lots of different ways. But I think the biggest difference is that it's really purpose built. I mean, you were at the keynote on Monday, I'm sure. Networking's a huge part of it, right? So when it comes to building these AI supercomputers, you have to look at it as literally building a supercomputer for some of these new workloads. Every single facet of the data center, from physical layout to network design to how you segment compute and storage networking, all of those things are big factors in making sure these jobs run well and run as fast as possible.

So it sounds like these companies are thinking from the ground up; they're purpose-built data centers for AI, as opposed to general purpose.

Yeah, they're definitely not general-purpose clouds. But it's been pretty fun, because we've been working with CoreWeave, for example, and it's allowed us to really innovate. We just made an announcement a couple of weeks ago where we've been partnering with NVIDIA on the BlueField DPU technology.
So it's their accelerated networking card plus compute, all in one technology. And we've been working with partners like CoreWeave to build a system that's more secure, more scalable, and more efficient from a power consumption perspective. When you're running at the scale of some of these AI cloud providers, it matters a lot, right? Security matters a lot, multi-tenancy matters a lot, QoS matters a lot for their business. So it's been pretty fun.

How are they able to compete? I mean, the hyperscalers have so much cash, they've got a cheap cost of money. They're lining up to buy GPUs, we heard on Monday; they all signed up for Blackwell, they're ready to go, they all got shout-outs. How do these guys compete from a CapEx standpoint? Do they just get amazing VC funding, or do they have all kinds of strategic backers?

I'm not a finance guy, but I do know that there has been a massive infusion of investment globally. Some of it is private money, some of it is government money. A lot of nations now are looking at building sovereign clouds specifically for AI, clouds that are going to stay within their borders, and a lot of times that's funded by government money. So I think there's money coming from the private sector, from governments, from all over, and there are probably more creative ways of financing it that I don't quite understand.

Well, it makes sense what you're saying, because these governments have to protect their own interests. They have local laws, they don't want people breaking those laws, but at the same time, if it's really hard to comply, they want to make it easier. So that makes a lot of sense. Okay, let's get to VAST. What's your role in these alternative data centers? You're obviously providing file systems, they're doing a lot of amazing HPC, and HPC and AI are colliding. Give us the rundown there.
Yeah, so historically, people could think of us as a high-performance, scale-out file and object storage system, and we do that very well. We've built our business over the last four years on that foundation. But starting last year, we began to introduce, roll out, and ship new capabilities, and I don't know if that's been clear to everybody. We talked about the vision, but we introduced capabilities like the VAST database, and people have always wondered, why are you getting into that space? As AI has come onto the scene, people are starting to understand that you need those kinds of structured semantics to take unstructured data, images, videos, et cetera, put some structure on it, and feed it into these large GPU clusters to unlock the real insight. So the reason we've had so much success recently is that we're starting to introduce new, up-the-stack data services necessary to solve a lot of these large, complex AI and data pipelines end to end. It's not just about how quickly you can read and write, the IO performance of the system; that's obviously table stakes. It's how you simplify the data management, how you accelerate the data preparation and the other stages that happen within a very complex AI pipeline.

Yeah, we had Renen and Jeff on our Breaking Analysis program. We were talking about the next data platform, and some of the attributes you think about. You mentioned some, but you're talking about petabyte scale. You're also talking about all different data types, different query types. And today's data platforms really aren't built for that. They're largely built for analytics, which is great, but they weren't built for AI.
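The idea John describes — putting structured, queryable semantics on top of unstructured objects so a pipeline can select and feed data to GPU clusters — can be sketched as a toy example. The schema and helper names below are hypothetical illustrations, not VAST's actual database API; an in-memory SQLite table stands in for the real catalog:

```python
import sqlite3

# Toy catalog: structured metadata over unstructured objects.
# (Hypothetical schema for illustration only -- not VAST's actual API.)
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE objects (
    path TEXT PRIMARY KEY,   -- location of the object in the store
    kind TEXT,               -- e.g. 'image', 'video'
    label TEXT,              -- tag produced by a preprocessing stage
    size_bytes INTEGER)""")

samples = [
    ("s3://bucket/img/0001.jpg", "image", "cat", 120_000),
    ("s3://bucket/img/0002.jpg", "image", "dog", 98_000),
    ("s3://bucket/vid/0001.mp4", "video", "cat", 5_400_000),
]
con.executemany("INSERT INTO objects VALUES (?,?,?,?)", samples)

def training_manifest(label, kind="image"):
    """Return object paths matching a label, e.g. to feed a data loader."""
    rows = con.execute(
        "SELECT path FROM objects WHERE label=? AND kind=?", (label, kind))
    return [r[0] for r in rows]

print(training_manifest("cat"))  # -> ['s3://bucket/img/0001.jpg']
```

The point of the sketch is the workflow, not the storage engine: a preprocessing stage writes structure (labels, sizes, locations), and the training stage queries that structure instead of scanning raw objects.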
Now you see the Snowflakes and the Databricks, which are great companies, but they're sort of, and I know it's a pejorative, bolting on AI. What works for them is that there's a lot of data in there. But if I understand it correctly, listening to Renen and Jeff, you guys saw this coming. You didn't know when. You had a good business just selling big honkin' object storage, and then boom, the market hits and it just comes your way.

We definitely didn't predict when this would happen, whatever anybody tells you, but you could see it on the horizon. I also think, as far as why we're different and why we think we have a unique advantage, we started a little bit late and we built everything from the ground up, from the infrastructure up, optimized for where we are today in this AI era. So we're not building on top of legacy infrastructure approaches. We've custom built literally from the hardware on up since day one, and it's put us in a very good position to win.

It's interesting you say that, John, because you saw pre-cloud companies, formed in, let's say, the 2006, 2007 timeframe, or even before, 2009, before the cloud was a thing, before that classification of business existed, and they had to pivot. Some of them did quite successfully. Versus, again, take a Snowflake, which was really born in the cloud, born cloud native, and was able to excel and reach escape velocity. So the fact that you were a little late was actually advantageous in terms of compressing the time to AI, if I can use that concept.
No, and I think we've not only been able to do that for the traditional on-prem data center, but we're also moving in the other direction, into the hyperscaler clouds, to help solve these problems for customers there as well.

Okay, so you're helping those guys move into the AI era as well, as you were saying.

Right, right.

So that's going to be an interesting collision down the road. You've got the rabbits out front grabbing early market share, gaining some traction, and probably driving a lot of innovation. What about the ecosystems that are forming around these AI clouds? What are you seeing there?

I think there's obviously a big focus on software partnerships right now. NVIDIA's got a very broad range of software solutions, and there's a lot of emphasis this week, especially on the inferencing side. Inferencing, I believe, is a big focus for this year. A lot of people right now have bragging rights around training, but inferencing is where the economics are going to play out; that's where you're going to see things explode, because that's where the monetization happens. So there's an entire ecosystem of software. There's a lot of focus on NIMs, the announcements with NVIDIA, and we're partnering with a lot of different ecosystem players that are helping operationalize inference workloads as well. So yeah, I think increasingly it's going to be about the full lifecycle management of inferencing: taking trained models, getting those deployed, through to the monetization of those. That's going to be a big part of the ecosystem.

What's the partnership with NIMs? Can you explain that so we can unpack it a little bit?

So NIMs is an entire solution where you want to be able to use existing data for fine-tuning. A lot of these foundational models are becoming really, really smart, really good.
But if you want to apply domain-specific relevance from your business, from your organization, you have to feed it with some business-specific content, business-specific information. So those are frameworks that are emerging to make that simpler, to go tap into the enterprise and the troves of data they have in their environments, and make the model more tailored for the business, not even just at the organization level, but potentially at a departmental level within a large organization.

So you're saying I can basically point that infrastructure at the data that's inside of a VAST object store and essentially create my own LLM, if you will, that's very domain specific. And this could be applied, whether it's financial services, and I could see use cases in healthcare, in virtually any industry.

I think so. I mean, they gave some examples of it, but yeah, absolutely. I was talking with a large system integrator the other day; they are by definition an information environment, from how they handle contracts to how they develop knowledge. That is their secret sauce. So being able to capture that content that's been stored and archived forever, and feed it in to fine-tune and improve these models, is going to be very, very powerful.

So this show, I think, actually marks a milestone, because there was no GTC after COVID; this was the first in-person one since, which is pretty amazing, actually. At most conferences, I'm sure you remember, late 2021, even 2022, it was "isn't it great to be back in person," and we all cheered. Now it's 2024, and people aren't saying that anymore. But I think this conference, and I'd love to get your thoughts as sort of a final question, really marks a new milestone in the AI era. It touches virtually every industry.
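The pattern John describes — tapping an enterprise's troves of documents to ground a general model in domain-specific content — can be sketched, very roughly, as retrieval plus prompt construction. This toy Python example uses naive keyword overlap in place of real embeddings or an inference endpoint such as a NIM; all document names and helper functions are illustrative assumptions:

```python
# Minimal retrieval-augmented prompt construction (toy sketch; a real
# deployment would use embeddings and a hosted inference endpoint,
# not keyword overlap -- all names here are illustrative).
docs = {
    "contract-2023-07.txt": "Payment terms are net 30 days from invoice date.",
    "handbook.txt": "Employees accrue 1.5 vacation days per month.",
}

def retrieve(question, k=1):
    """Rank documents by naive keyword overlap with the question."""
    q_words = set(question.lower().split())
    scored = sorted(
        docs.items(),
        key=lambda kv: len(q_words & set(kv[1].lower().split())),
        reverse=True)
    return [text for _, text in scored[:k]]

def build_prompt(question):
    """Assemble a grounded prompt from retrieved enterprise context."""
    context = "\n".join(retrieve(question))
    return f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"

prompt = build_prompt("What are the payment terms?")
```

The grounded prompt would then be sent to a general-purpose model, which is how business-specific content makes a foundation model's answers relevant to one organization, or even one department, without retraining it from scratch.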
It's not just a bunch of technology companies here presenting. The old bromide that every company is a technology company, it's really proving out here. I've never seen a more talked-about event. You see things like Dreamforce and some of those other big conferences, CES and others, but this one is still going on and on, on the TV shows and the business channels. It really is, I think, marking a new era for our industry.

The cliche, and I think Jensen used it last year, was that the iPhone moment has arrived. And I think it's so true. Anecdotally, I haven't actually spoken with or met with a university or a research center this entire week. It's all been private-sector enterprises that are trying to figure out how to infuse their business with AI and either save costs, or make more money, or increase productivity, whatever it would be. So there's a clear inflection point happening in the industry, and it's pretty exciting to see.

All right, John, thanks so much for coming on theCUBE.

Thank you for having me. Appreciate it.

All right, keep it right there. We'll be back with more action from San Jose. Dave Vellante and John Furrier from GTC24, you're watching theCUBE.