Welcome back everyone, live coverage of theCUBE here in Las Vegas for VMware Explore. VMworld, I said it again; it's VMware Explore now, not VMworld. I've done it too. 13 years we've been covering VMworld, and now it's VMware Explore, its second year under the new name, and we're here with Rob Strechay breaking down all the action across two sets. Got a great conversation about storage, platforms, AI, and all that jazz here on theCUBE: Keith Norbie, Senior Director, Worldwide Partner Solutions and Go-to-Market for NetApp, and Matt Hull, VP, Global AI Solutions for NVIDIA. Welcome to theCUBE. Matt, great to have you on.

Thanks for having us.

Keith, great to see you, CUBE alumni. Welcome back to the CUBE alumni family. NVIDIA is a great topic to talk about, given your CEO was on stage with Raghu. It was really an authentic moment, two geeks on their journey in Silicon Valley, kind of looking at each other like, how did we get here? One's running a trillion-dollar company, and the other, you know, didn't make quite that much. Post-Broadcom, we'll see. But NVIDIA is doing well. Congratulations.

Thank you. Generative AI, AI is here, and it's here for real. We've seen it coming for a long time, and we're very happy to have the opportunity to work with VMware to bring it to the enterprise, and to bring it en masse.

You know, I'm glad you came on, because we talk about this all the time. I remember seven, eight years ago, we interviewed NVIDIA folks, and back then everyone knew NVIDIA for the graphics card. Everyone wanted a GPU for their gaming machine. I still get that question a lot. But the message from NVIDIA back then, without even a hesitation, was: we're a software company, not a hardware company. The hardware enables the software. That was a really pivotal moment, at least for us, in recognizing that that's the case. Now you come here, and the software stack's critical. NetApp, similar concept. I remember the conversation from most of a decade ago: we see ourselves as a software company. All the insiders saw this.
Hardware and software were together. This is kind of the theme of this segment. NetApp's been there, done that throughout the years. Now AI drops in your lap. It's not a new thing for you. You're not just jumping on the bandwagon. You've got some meat on the bone.

Yeah, listen, it's no accident that I brought Matt onto this segment with us, because I want to talk about not just this 20-plus-year journey that we've had. I remember back when Steve Herrod talked about the software-defined data center and some of the things we were building off of that. The last chapter of our partnership with VMware at this show has been about us trying to solve multi-cloud, and we've done that through all three hyperscalers, building this foundation of abilities to do new things, new workloads, and doing that for customers with partners. Now you look at Gen AI, and we've had a four-year partnership with NVIDIA that didn't just start last fall. It's been building for four years, and I want Matt to be able to talk about that a little bit: what we've done together, the fact that we've put this into partner competencies and programs, and what we've done in terms of being that trusted partner. In this time where you may have uncertainty about the direction VMware might go, or about what Gen AI might become, where people seek clarity, NetApp is this trusted partner, this trusted source, working with NVIDIA to really help bring clarity to the confusion, clarity to what people are looking for in answers.

So Matt, talk about the relationship. A lot of people are jumping on the bandwagon right now, a lot of AI washing. But you work with a lot of partners that have been there from the beginning, really investing.
And then the world changes quickly in the past year, and boom, if you were already pipelining into that wave, in the flow, so to speak, with AI, working with it, you've got a real pop.

Absolutely. So we knew AI was real. We knew generative AI was something that was going to change the world. We were all surprised by the ChatGPT explosion; we're very happy about it. But no, we've been planning for this for a long time. AI at its core is about taking lots of data, processing that data, and applying the right algorithms and software on top of it. So we've been partnering with NetApp for four years. We've been looking at the network: how do you move the data in and out of the NetApp appliances? So we purchased Mellanox. As you mentioned, Jensen talks a lot about the software layer. We've been myopically focused on making sure that as you build out this optimized engine, the infrastructure, we have the right software on top of it, from the orchestration to the IT management, all the way up to how you run your AI models, the foundation models, all the stuff that's making all the buzz nowadays. So the underpinnings of this are really, really important. It's about how you take the data, how you process it very quickly with our CPUs and our GPUs, and then the software on top of it to make it all work.

Yeah, and NetApp's position there is what? Their role?

NetApp's role is the world-class leader in accelerated storage. You have to feed the beast, as they say, right? Our GPUs are very hungry, and the faster you can get data into them, the faster you get results. So it's very important for us to make sure that we're feeding the beast, that we're getting the data into the GPU, and it comes down to the storage itself, it comes down to the networking, and so this has been a big focus for us for a long time.
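Matt's "feed the beast" point is, at bottom, a data-pipeline problem: storage and network have to stage the next batch of data while the accelerator is busy with the current one, or the GPU sits idle waiting on I/O. A loose, hypothetical sketch of that overlap idea, in plain Python with simulated I/O and compute (the function names are illustrative, not any actual NVIDIA or NetApp API):

```python
import queue
import threading
import time

def load_batch(i):
    """Simulate reading one batch from storage (stand-in for a network volume read)."""
    time.sleep(0.01)  # pretend I/O latency
    return list(range(i, i + 4))

def train_step(batch):
    """Simulate accelerator compute on one batch."""
    time.sleep(0.01)  # pretend GPU work
    return sum(batch)

def prefetching_pipeline(num_batches, depth=4):
    """Overlap storage reads with compute so the 'beast' is never starved.

    A background producer thread keeps up to `depth` batches queued ahead
    of the consumer, so I/O and compute run concurrently instead of
    strictly back to back.
    """
    q = queue.Queue(maxsize=depth)
    sentinel = object()  # marks end of the stream

    def producer():
        for i in range(num_batches):
            q.put(load_batch(i))  # blocks if the queue is full
        q.put(sentinel)

    threading.Thread(target=producer, daemon=True).start()

    results = []
    while True:
        batch = q.get()
        if batch is sentinel:
            break
        results.append(train_step(batch))
    return results
```

With the sleeps standing in for real latencies, the overlapped loop finishes in roughly half the wall time of a strictly sequential load-then-compute loop, which is the same effect fast storage and networking buy a hungry GPU at much larger scale.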
Yeah, and obviously we're at VMware Explore, and I know NetApp has been with GCVE, has been with AVS, and with VMC on AWS, and obviously those clouds all have lots of GPUs. It would seem like this is one of those places where, hey, I have on-prem data that's in a NetApp array, and I want to take it somewhere to burst and use a lot of GPUs. Is that one of the use cases you're seeing from your customers on the NetApp side these days?

Yeah, absolutely. In fact, I talked a lot about partners and the role of partners in bringing clarity to this model, and what's been true about the AI models is that data scientists will start the experimentation in the cloud, because of the availability of services the devs have there, and then bring it on-prem to take it mainstream. But what's true now is that there are a lot of dynamics between on-prem and cloud, especially when you see AWS and what they announced with Bedrock and all the different offers across cloud and on-prem. So I think there's just going to be a multitude of these options, both what VMware announced today at the show and what we've all been building, and that's the point of us having this partnership. We've seen a lot of things come and go, and Gen AI is the new iPhone-style interface revolution for all these models. As you now have LLMs leveraged through this new interface, this new output, together we've got the experience to have the partners work through it and help customers land it, because that is what everyone's asking for right now. I think you guys nailed the research on this: the spend is being reprioritized, and now the partners are left to help come up with the answers on how that spend gets utilized in ways that deliver back to the business.
And interesting too, the storage is also moving up the stack through relationships where the apps need to get at that data. Again, feeding the beast is a great term. I love the aggressive, typical NVIDIA way of going at it: fast, speed, power. The apps are going to want to tap the data too, right? So you've got to build that intelligence from the storage up, and if you go back 10 years, that was a hard problem.

Yeah, definitely.

Now it's, okay, table stakes.

Yeah, you look at the evolution of, like you said, the network as a transport, the speed of storage, the speed of the GPUs, the scale of the platform itself. Because it's not really just one thing. It's not just the speed of one GPU, or a DGX, or a farm, or a cluster, a super cluster, or a SuperPOD. It's about how you build the total system to the design of the AI spec, the use case you're trying to deliver, and how that's evolved from a very narrow use case to Gen AI being a broader use case, because of the ability to come in with the LLM and the Gen AI interface and basically abstract that out, so the common person can get to a very elaborate use case.

Talk about the customer perspective. What's in it for the customer? I'm watching this and I'm like, okay, I know you've got partner networks, you've got end-user customers. This is a win if you can get the AI equation into the hands of the customer, where they can feed the beast and build their solutions. When you hear about modern apps, the whole story is, hey, these apps are coming, they're going to be natively data-driven, and they're going to need to be managed. Where is the customer benefit? Take us through what they see in this.

Yeah, I think it's pretty clear. The benefit is agility, speed, and new capabilities. If you just look at some of the examples revealed with ChatGPT and others, that was just a very small glimpse.
Internally, even at NetApp, as with every other company, you're probably seeing XYZ-GPT nicknames, derivatives of that, come out and say, hey, we're going to write code for you, but done so in an IP-protected way internally. That's a big thing you've got to start doing now. It's creating content. It's coming up with new answers, or even just doing email generation. So it runs from very innocent, very tactical things all the way to very strategic big things, but those trivial things we've all consumed a lot of time with, those are getting solved. And then there are also these waves of things that strategically haven't quite been thought of yet, that will still evolve. And that's why I think, with all the stuff, Matt, that we've worked on together, we're just in the early innings of where all this goes.

Very early, very early. If you look back four years ago when we started our partnership, we were talking to researchers, researchers in the back closet working on these weird AI projects, and it was really hard. How do you take the compute? How do you take the storage? What are the algorithms? What's the software stack? What are the outcomes you're trying to derive? And look at where we are. We're here with VMware, right? The operating system of the enterprise. We have made it extremely easy for every organization out there to dive into AI. We have full-stack solutions. Whether it's in the true enterprise or in the developer stages, you can buy an AI solution and get started immediately. So we're seeing the world change. It's not just the developers, it's not just the researchers. We're in mainstream enterprise. We're in mainstream IT. And I think you'll see this become quite pervasive, and it's because we focused on the solutions. It's not just looking at a chip, it's not looking at one piece of the equation. It's focusing on the business outcomes.
How do you build the engine that powers the software that drives those business outcomes? We've been working on this for a long time. Jensen talks about it: you can't just be a chip vendor, you need to think data center scale. And that's where the partnership comes in: best-in-class storage, best-in-class compute.

Great investment. And NetApp's always on the cutting edge, always surviving, thriving, onto the next milestone. Look at the marketplace. It's vertical lines straight up. Growth with AI is coming, every app can be disrupted, more data's coming in, more policies, more intelligence is needed. It's just a perfect storm. Then you've got all the legal compliance, a lot of stuff going on. So there are some blockers, but I think those are going to get blown away. So how do you guys see the next couple of years playing out from an NVIDIA-NetApp perspective? Where does the partnership go? How do you see it evolving?

Well, I'll tell you, for us the cornerstone is that we launched Partner Sphere. So Vegas has their Sphere, we have our Partner Sphere. Part of what we've got to do is get Partner Sphere on the Sphere. We're working on that; that's the next roadmap item by NetApp Insight.

What is Partner Sphere? Is that a good one?

Partner Sphere is our program that works with all partners, no matter what type of partner they are. They could be sell-to, sell-through, sell-with. We have three competency focus areas: hybrid cloud, AI, and public cloud. So those partners that just have data science practices, specialized on AI and data scientists and Gen AI, that don't have the other things and just want to focus on that, that's great. By the way, it distinguishes those partners, because we know there's only a rare set of partners with those data science capabilities, and they rise to the very top.
I would say there are probably at least maybe 10, and it gives them the distinction within NetApp to rise to the top and be qualified for the work.

Yeah, and you were saying earlier, I think, that that's not something new, that that's a competency you've had for years now.

Yeah, so we started building it months and years ago in terms of the new program. We recently launched the program, but as you know, it takes a long time to build a program like that up, to work through the modeling, et cetera. But we had the forethought, the vision, and Jenn Flinders had the discipline to keep it in there and make it part of what we thought going forward. I think that was thanks to the partnership, and to what we saw was going to be a big future with AI.

Yeah, and it would seem that the data has to be there, obviously, to feed the beast and bring those things together. Is that where you see the partners playing that role? Because we've been hearing a lot from VMware this week about how they're investing in the partner ecosystem, and that was a big piece of the keynote this morning: not only with NVIDIA and people like yourselves at NetApp, but with the other partners out there as well, for delivering. Is that what you're seeing, alignment between even the three companies?

Yeah, when we started this thing out, we were focused on these things called service delivery partners. Ultimately, these are partners that help turn, say, three years of confusion into three to six months of clarity. In other words, they would shorten these things up, because now organizations would know what to do, and it helped Matt and me out quite a bit in being able to land the business, and it also helped the customer. But now where it's at is, you see things like this: we have a bunch of financial services customers on Wall Street, and they need to model the Gen AI things we're talking about.
So we have a partner who's actually building this competency in their ATC, and as they do so, they take on a lead role in helping us do what we need to do in a joint architecture, and adding VMware in the middle of that sits right on top of it too.

Matt, great to have you on. I want a quick plug before you go about what you guys are working on at NVIDIA, and then any commentary on NetApp. First, give a plug; everyone likes to know what you've got going on. Great success, huge fans of the company. Give a quick plug for NVIDIA.

Yeah, thank you. We have a lot going on at NVIDIA, obviously. You guys talked about partnerships; it all comes down to partnerships. Jensen's had the foresight to look at this beyond the chip, to look at the full solution stack, from the software, from the business outcomes, all the way down to the really cool chips that we manufacture. So it's been all about the partnerships, finding the right partners to fill the gaps in our portfolio and making sure that we can come together with tested solutions. That's what we're doing with our OEM partners, with VMware, with NetApp, with our DGX systems. It's really about making it very simple for enterprises to adopt AI, making it very easy for them to start developing AI algorithms. What we're also seeing is that companies that get started with one use case multiply. They realize, hey, there's benefit here that I can derive in this group or that group. We're also starting to see what I like to call things converging into an AI center of excellence. An enterprise starts with little pockets of AI in its marketing department or its operations department, and what you're seeing is companies starting to centralize those resources: the talent pool, the data, the compute, the way things are done. It's really becoming mainstream in the enterprise.
So we're gonna see AI, we're gonna see accelerated computing, we're gonna see NVIDIA, more so in the enterprise. We're gonna do it with NetApp, we're gonna do it with VMware, and it's all about the partnerships.

Better together. Partnerships are a great way to play the long game, as they say in the business. How would you categorize the relationship with NetApp? If someone said, hey, how are you guys doing, how's the partnership, what are they known for, what are they good at? What's the partnership sentiment and vibe?

Yeah, NetApp is extremely invested in AI. I think Keith and team took a risk some years ago and said, hey, we're going to double down on this. We see where things are heading; we bought into the vision that NVIDIA has. So they're a first mover in that respect, and obviously there isn't another storage company out there that has the brand recognition that NetApp has for what they do. So it's been a great partnership to make sure that the enterprise can start to democratize AI and adopt it.

And that's a partnership made in heaven. Peanut butter and jelly, as they say.

We knew you would get that in here.

30 seconds to get the plug in for NetApp, go.

You know, for us, we've got Insight coming up in several weeks, back here in Vegas. It'll be a combination of a lot of things, our first time back in Vegas as a company for an event. And this relationship with VMware is special. We've lived at this show. Our partner program kind of brings together all these solution types, you know, the AI thing that we've been building, and VMware and its competency for cloud. Like I said, we did first-party services, which is not easy; we're the only ones in the industry who've done that across all of the hyperscalers, and now we can bring it all together. So VMware announces their Gen AI thing, and we're going to be in the middle of that too, right?

Well, we appreciate your partnership with us.
Same with NVIDIA, coming on and sharing your perspective with our audience, sharing the data. Feed the beast; they're all hungry out there. Everyone's hungry for the data. Thanks for sharing. And Matt, great to have you on theCUBE. Great to see you every year. It's like a tradition.

Thank you, guys.

All right, that's it. Live coverage here. We're live on the floor in the hub where the action is. I'm here with Rob Strechay. Dave Vellante's getting briefings in the analyst room. We've got Lisa Martin, and we've got other analysts here covering, Dave Nicholson and others. Great stuff. Stay right there. We'll be right back after this short break.