Okay, welcome back everyone to theCUBE's live coverage here of VAST Data presents Build Beyond. I'm John Furrier, host of theCUBE. We've got a great panel here. It's a partner panel, VAST and NVIDIA. We've got John Mao, VP of Global Business Development at VAST Data, and Tony Paikeday, senior director of product marketing, artificial intelligence, at NVIDIA. Guys, welcome to this segment on the partner update. John, great to see you.

Thanks for having us.

And Tony, thanks for coming in remotely from Atlanta.

Thanks for having us.

So I've got to ask, what's the relationship between NVIDIA and VAST? You guys have a deep relationship. It all started back with an early investment, and NVIDIA is also part of the platform. Explain the relationship.

Yeah, the relationship goes back to the early days when Renen was starting the company and a few engineers were looking at building a new type of architecture. This was even before the term NVMe over Fabrics had come to fruition. So it goes back to then. Since then we've obviously built a new product, a new architecture in the market. Today we're not only a customer of NVIDIA and a portfolio company, but we also have a great technology partnership from a go-to-market and sales perspective.

Tony, NVIDIA has had amazing success over the years, even more so recently with the AI wave. All the investment in the software stack at NVIDIA is playing out on the global stage. The stock price reflects that. The overall performance of NVIDIA has been spectacular. Congratulations. But here, you're a big part of the VAST system. What are the technology underpinnings? What specifically is going on in the VAST platform that you guys are powering?

You know, if you think about it, this is a great coming together of a leadership-class data platform with leadership-class accelerated computing.
I think what we're doing together, and what I'm really excited about, is our ability to take extremely complex AI workloads and deliver them with the fastest time-to-solution possible, based on a best-of-breed architecture. Our customers time and again have asked us: simplify this, give us better performance at scale, better utilization, better productive work capacity for data scientists and developers. And I think that's what this coming together is all about. The work that VAST does mirrors our own design philosophy and how we look at the full AI developer stack and what's required to give these teams productive work capacity that can scale linearly with the size of the system being used.

You know, in every market, Tony, there's the market map and the stacks and the different components. What is the AI stack? Because we're seeing the market, even Jeff Denworth, the co-founder, was saying we're shrinking stacks and pulling it all together in this platform. What does the AI stack look like today, and how does that look in the VAST platform? What does that enable for customers? Can you take us through that stack?

Yeah, I'll start from the bottom up. First of all, there's the infrastructure layer, where you have NVIDIA's accelerated computing solutions. I'm responsible for the DGX platform. That is our flagship, best-of-breed infrastructure for AI development. If you've heard of the DGX SuperPod, as an example, that is, if you will, the leading capability-class machine for large language models and generative AI. So that's the bottom of the stack, if you will, the foundation. Then you have, obviously, libraries, drivers, communication primitives. There's a whole lot of engineering that goes on at the software layer to optimize every bit of data that gets processed by that stack and to move it between endpoints as fast and with the lowest latency possible. Then, if you think above that, there's infrastructure management.
There are capabilities that allow workloads and their users to request access to resources at the physical infrastructure layer. So you have to have an incredibly automated, intelligent layer, or apparatus, that can sit above all of that and go between developers, who simply want to execute a training job, and the infrastructure that has the resources to do that job for them, so that the developers don't have to worry about that stuff. And then finally you have, if you will, the developer workflow console. We call it Base Command Platform. This is where data scientists do their work, where they execute training runs, where they point to data sets, point at compute resources, connect the two, execute a run, and hopefully get a better and better model that's ready for production. And then above all of that, and really where the developer experience begins, is what we call NVIDIA AI Enterprise. That is a suite of best-of-breed software that we've created in partnership with our ecosystem. It includes things like optimized frameworks that let you get the most out of the infrastructure; pre-trained models that you can grab off the shelf for specific use cases like large language models, cybersecurity, or customer service; and even accelerated data science tools that let you, for instance, prep and engineer your data and have a more curated, more streamlined training workflow. So that's how we think of the stack, there are optimizations up and down it, and obviously we work with VAST at every layer.

Awesome, thanks for that. John, now on the VAST side, obviously what jumps out at me is the DataEngine. Okay, we see that. I love the word engine, and the engine's in there. It feels like an operating system, it feels like something's happening, you've got compute. This seems to be where things get connected.
Explain the VAST impact of the NVIDIA relationship. Is it the DataEngine, is it the DataSpace, or is it all of the above?

I think it's all of the above, right? For the casual observer, people might think that AI is a single application that you kind of download and run on a server, but in reality AI is a series of different applications that really constitute a pipeline or a workflow. And so all of the different product announcements we made today play different roles, right? We announced the DataBase, for example, and we've done a lot of work optimizing and integrating with solutions like Apache Spark, something that NVIDIA has also done a great amount of work accelerating for data science and analytics. The DataEngine, I think, is the most exciting thing for me personally, because it's going to allow us to build this closed-loop system that lets us take data, infer on the data, feed the inference data back into the system, and keep going recursively. That gets super, super exciting. And then the DataSpace is a big one. I've talked about this with Tony in the past, but when I go talk to our customers, everybody wants that flexibility, that hybrid model of being able to compute where they choose. The DataSpace is our ability to now stretch our namespace in a hybrid manner to anywhere the customer chooses.

How integrated is NVIDIA in the VAST platform? Give us the order of magnitude. Where are the key elements? Is it in everything? Explain in simple terms, what's the NVIDIA piece?

Well, at the core of our technology, a lot of the NVIDIA networking components are a big part of our infrastructure solution today. So we are big users of all of the networking components, both on the Ethernet and InfiniBand side. We were also one of the first OEMs to really embrace the BlueField DPU inside of our systems.
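The closed-loop system John describes, take data in, infer on it, feed the inference results back into the system, and repeat, can be sketched in plain Python. This is a toy illustration of the recursion, not the VAST DataEngine API; the `infer` function and record format are assumptions made for the example:

```python
# Toy sketch of a closed-loop data pipeline: run inference over a
# dataset, write the results back as new records, and iterate.
# (Illustrative only -- not the VAST DataEngine API.)

def infer(record):
    # Stand-in for a real model call: tag each record with a score.
    return {"source": record["text"], "score": len(record["text"]) % 10}

def closed_loop(dataset, rounds):
    for _ in range(rounds):
        # Inference pass over everything ingested so far.
        inferences = [infer(r) for r in dataset]
        # Feed the inference output back into the dataset so the
        # next pass can infer on it recursively.
        dataset = dataset + [{"text": str(i["score"])} for i in inferences]
    return dataset

data = [{"text": "hello"}, {"text": "world"}]
grown = closed_loop(data, rounds=2)
print(len(grown))  # each round doubles the record count -> 8
```

The point of the sketch is the feedback edge: each pass's output becomes the next pass's input, which is what makes the pipeline a loop rather than a one-way flow.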
And I think more importantly, above that layer, there's the integration we've done with Tony and team on things like the DGX SuperPod. We're very proud of the fact that we figured out how to supercharge NFS as an enterprise storage protocol for some of these parallel, high-performance, model-training types of workloads. And there's more, right? I'm personally excited about some of the software that Tony talked about in the analytics space and the AI space that's going to become more and more pervasive.

You know, I love this conversation, because you talk about hardware, and we love hardware by the way, we're hardware geeks. But software really is the magic, and the ability of these abstractions and the benefits to the customers are the enabling piece. So can you guys talk about how this renders itself to the customer? What's the value of the relationship between NVIDIA and VAST for the customer? Is it that you're making the software available to enable developers? Is it ease of deployment? What are some of the benefits that you guys bring to the table together for customers? Tony, what's up?

Yeah, if I could set context a little bit: the workloads customers are working on are only getting bigger, large monolithic workloads like large language models, but also customers who, as John pointed out, build their AI center of excellence on a DGX SuperPod, as an example. And so they need solutions that let them manage their workflow and go from data to insights faster, but there's a challenge with traditional offerings in this space, which we together have been addressing, right? Typically, as this kind of infrastructure scales, so does the complexity within these systems and the likelihood of higher rates of failure. And think about that cost of failure when you're talking about a very large language model: if it crashes on the 21st day of a 22-day training run, that's a lot of work and a lot of cost to a business.
So being able to take that kind of risk and complexity out of the system, being able to optimize their workload and code so that it runs with the maximum performance possible, that's a lot of software work, and a lot of what our team and the VAST team do to help customers get that kind of performance. There's also the allocation of resources to a customer's jobs efficiently and rapidly, versus stalling out developers who are waiting on machine cycles and losing productivity. You know, clearly a lot of enterprises that acquire these large AI development platforms have been conditioned to accept marginal returns on fairly large investments. And I think what we're working on here, and what we're delivering to customers now, is a completely different mindset: being able to maximize the productive work capacity of the infrastructure through a lot of software innovation. In fact, for ourselves, and probably for VAST too, we're honestly more a software company than we are a hardware company, and that's reflected in the stack.

Yeah, I love that. By the way, John, what's your angle on this for customers? What are you seeing?

I mean, a lot of it is, if you look at people getting into these kinds of workloads in a serious way, I think we showed a slide today in the keynote, it's spaghetti, it's a mess in terms of complexity, right? And so a lot of the things that we're building toward are really to bring a level of simplification, eliminating some of these trade-offs that Renen talked about today in the keynote as well, and to bring a new order of magnitude of efficiency, so that the researchers can focus on the science and less on the infrastructure and the plumbing that makes all of this happen, right? That's really the vision that we have here at VAST.
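The failure cost Tony describes, losing a 22-day run on day 21, is exactly the risk that periodic checkpointing bounds. A minimal sketch of the checkpoint/resume pattern in plain Python; the training step, file location, and state format are stand-ins for illustration, not NVIDIA's actual tooling:

```python
import json
import os
import tempfile

# Minimal sketch of checkpoint/resume for a long-running job.
# A real stack would snapshot model weights and optimizer state;
# here we persist just a step counter and a running loss value.

CKPT = os.path.join(tempfile.gettempdir(), "train_ckpt.json")

def save_checkpoint(state):
    with open(CKPT, "w") as f:
        json.dump(state, f)

def load_checkpoint():
    # Resume from the last snapshot if one exists.
    if os.path.exists(CKPT):
        with open(CKPT) as f:
            return json.load(f)
    return {"step": 0, "loss": 100.0}

def train(total_steps, ckpt_every=10):
    state = load_checkpoint()
    while state["step"] < total_steps:
        state["step"] += 1
        state["loss"] *= 0.99          # stand-in for a real training step
        if state["step"] % ckpt_every == 0:
            save_checkpoint(state)     # bounds the work lost to a crash
    return state

if os.path.exists(CKPT):
    os.remove(CKPT)                    # start fresh for the demo
final = train(total_steps=50)
print(final["step"])  # 50
```

If the process dies mid-run, restarting it re-enters `train` from the last saved step, so at most `ckpt_every` steps of work are lost rather than the whole run.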
You know, one of the things I love about the AI piece, Tony, is that when you see the scale of the data, new things emerge: new data types, new workflows, and new benefits you would never have imagined. It's like you're in the stratosphere of value creation. And Renen talked about that in our interview, when I asked him about the early days with the media companies. They had distributed teams, maybe it was a COVID thing, maybe that's just how they work, but that mirrors the workforce and the world we live in. You're going to have large pockets of data everywhere, and people working on different data sets, and data is the new IP. We're hearing that on theCUBE all the time: integrating with other data. So you have data talking to data, data being programmed on and being automatically triggered and having functions; we saw that. This is a programmatic value proposition at scale. What's your reaction to that? How do you see that evolving? This is kind of a pinch-me moment in the industry, because this is kind of a first generation for the enterprise to see the scale come on board quickly.

Yeah, you know, I think you captured it eloquently. The reality is that we need to bring compute to where the data lives. Data is the source code of the enterprise today. It's what fuels AI development. And essentially our customers have been looking for platforms that make it easier to bring the computational power to where that data lives. And that data can live literally anywhere. Customers, for instance, those who've been repatriating a lot of intensive workloads from cloud to on-prem or a co-location or other facilities, have done that for that very reason: to bring the computational capacity to where the work is required. If you have to desalinate ocean water, it's a lot easier to bring your plant right to where the water is than to pipe it across the country.
So it rings true for us, and absolutely, that's been our focus. There's no point in fighting data gravity. You're going to lose every single time.

John, I love the desalination analogy, because really, the cleaner the data, the better the AI. It's a huge part of the equation.

Huge, we hear this a lot. I mean, there's a whole job description popping up around it. Some people call it data development; data engineering is an entire role in itself for most organizations now. That data preparation, all the augmentation, transformation, the cleaning of the data, is such a huge part of these AI pipelines. I think there's a lot more that needs to be done in the industry, but we're taking huge steps. We're solving a lot of these problems because of that spaghetti mess I talked about earlier.

Yeah, and I think the combination of VAST and NVIDIA is a really good one. You guys working together, I can see the magic there. And one of the things I'd like to get your reaction to, both of you, before we end the segment, is the movement around this data developer, which we've been talking a lot about. We see it clearly. Now, you guys are seeing it, it's in your material, it's a persona. It kind of looks like the DevOps early days, infrastructure-as-code thinking, but we saw that change with DevSecOps, where security teams said, hey, we'll create guardrails. Sound familiar? Guardrails, AI, you hear it all the time, but shifting left has been the big movement for developers, okay? So you're seeing a similar pattern emerge for data, where the data teams are moving out of their silos, from being a database administrator or data warehouse team, smashing down those departments and becoming native to the organization. So do you guys see a similar pattern where, like security, data is going to be an organizational, operational role that funnels the data up and lets engineers and developers write code inline?

I'll take a first cut.
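The data-preparation work John describes, augmentation, transformation, cleaning, typically comes down to a pipeline of small, composable steps run before anything reaches training. A plain-Python sketch of that shape; the specific steps and field names are illustrative assumptions, not any particular product's pipeline:

```python
# Illustrative data-prep pipeline: normalize, filter, and
# deduplicate raw records before they reach training.

def normalize(records):
    # Lowercase and strip whitespace so duplicates can be detected.
    return [{"text": r["text"].strip().lower()} for r in records]

def drop_empty(records):
    # Filter out records that carry no usable content.
    return [r for r in records if r["text"]]

def dedupe(records):
    # Keep the first occurrence of each distinct value.
    seen, out = set(), []
    for r in records:
        if r["text"] not in seen:
            seen.add(r["text"])
            out.append(r)
    return out

def prepare(records):
    # Compose the steps in order, like stages of an AI data pipeline.
    for step in (normalize, drop_empty, dedupe):
        records = step(records)
    return records

raw = [{"text": " Hello "}, {"text": "hello"}, {"text": ""}, {"text": "World"}]
clean = prepare(raw)
print([r["text"] for r in clean])  # ['hello', 'world']
```

Keeping each stage as its own function is what makes the pipeline easy to extend, which matters as the "spaghetti" of ad hoc prep scripts grows.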
I mean, it's a big topic, you know? I think not only is data the fuel for a lot of these workloads, but at the same time there are a lot of conversations in the industry around the liability of the data being used, too, right? So not only do you have to prep it and version it, you need to be able to explain where that data came from. The explainability, the traceability, is a huge, huge topic being discussed in the industry. So yeah, I think the data engineering role is going to become a bigger and bigger function. If we can help automate or make that process easier, I think it's going to help accelerate a lot of these workloads as they move into the future.

Software supply chain, data supply chain. I mean, Tony, do you see this kind of paradigm coming?

Yeah, absolutely. It's table stakes. I think it's already happening now. A lot of organizations are structuring themselves in this way, because data is the lifeblood of not only their enterprise but their intellectual property, their explainability for a model that they're going to put in production that could put customers in an awkward position or put relationships in harm's way, all of those things.

Yeah, this is really kind of a perfect storm. What we're riffing on here off the cuff is that people are storing data for legal reasons, compliance reasons, and innovation reasons, all at the same time. It's all one use case, hence a platform. This seems to be kind of a revolution in my mind, because it takes it to the next level: you'd have data for compliance, you'd have data for legal, and then innovation was an R&D function, but now they're all one. Explainable AI, managing the compliance. I think just...

Yeah, I mean, this idea of systems of record, or a single source of truth, for organizations is becoming more and more important. People have aspired to solve this problem historically.
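The traceability John raises, being able to say exactly where a dataset came from and which version a model was trained on, is often handled by recording provenance metadata alongside the data itself. A hedged sketch in plain Python; the record fields, the example source path, and the hashing scheme are assumptions for illustration, not a specific product's lineage format:

```python
import hashlib
import json
from datetime import datetime, timezone

# Sketch of a provenance record: hash the dataset contents so a
# model can be traced back to the exact bytes it was trained on.

def fingerprint(records):
    # Canonical JSON -> stable SHA-256 over the dataset contents.
    blob = json.dumps(records, sort_keys=True).encode("utf-8")
    return hashlib.sha256(blob).hexdigest()

def provenance(records, source, version):
    return {
        "source": source,                # where the data came from
        "version": version,              # which revision was used
        "sha256": fingerprint(records),  # exact-content fingerprint
        "recorded_at": datetime.now(timezone.utc).isoformat(),
        "row_count": len(records),
    }

data = [{"text": "hello"}, {"text": "world"}]
# "s3://bucket/raw" is a hypothetical source path for the example.
record = provenance(data, source="s3://bucket/raw", version="v3")
print(record["row_count"])  # 2
```

Because the fingerprint is derived from the bytes, any later change to the dataset produces a different hash, which is what makes the "which data trained this model" question answerable after the fact.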
And there have been lots of different attempts at doing it, but I don't think it's been very feasible to run some of these workloads over past, legacy approaches. And so yeah, it's becoming more and more important for these organizations, for sure.

You guys are building beyond. My final question, and first of all, thanks for coming on from Atlanta, I appreciate it, Tony. Final question as we wrap: what do you guys talk about when you get together? What's next? I'll let Tony go first.

It's funny how much we've done virtually, because we've done a ton of work together without necessarily being co-located. But I think when John and I talk, inevitably it turns to this whole aspect of how we're democratizing this stuff, making it mainstream. John's been in it with us from years and years ago, and he knows, as I know, that over the course of the last decade, so much of this started and was rooted in science and academia, and no offense to either one of those if you happen to be in that space. But the reality is that right now, we have customers day in, day out who are seeing this iPhone moment, who are seeing generative AI and seeing the pragmatic use cases that drive down TCO and increase ROI. And these customers are coming to us saying, make this simpler, make it easier. Help me derive insights from these oceans of unstructured data that I've got, where I couldn't before. And I think John and I are both super excited about the opportunity and the work that we're already doing.

Well, we're looking forward to hearing those new announcements and the new features. So congratulations on your relationship. Thanks for coming on, Tony. Thanks for coming in.

Thank you.

This is theCUBE at the VAST Data Build Beyond event here in Palo Alto, I appreciate it. Okay, Jeff Denworth is coming up next with Dave Vellante with a customer perspective. I'm John Furrier, we'll be right back after this short break.