Welcome back to theCUBE's coverage here in San Francisco at NVIDIA's GTC conference for the era of AI. Dave Vellante's with me here. We've got a lot of great guests. We're unpacking the revolution that's happening in this new industrial revolution. We've got Anthony Dean, Field CTO for AI with Dell Technologies. Anthony, great to see you. Thanks for coming on.

Thank you.

So Michael Dell got a shout-out in the keynote. Of course, we linked to it. Everyone else started linking to it. And then, obviously, he's on the booth. I just saw him perusing through the booth. Sam Grocott, who runs product marketing, and all the CTOs are there. We've been chatting with a lot of the Dell folks. And it's been on the radar, at least on our radar: heavy AI activity. Matt Baker's on LinkedIn all the time, sharing his great discussions on RAG. We're both kind of from the same school of thought; we see great value in that. And we even heard Jensen say in our private one-on-one earlier today that small language models are going to be on mobile and PCs as well. With chatbots and agents, you're going to start to see inference, training, context windows, these concepts. I won't say constrained, but use cases will drive a lot of it. You guys are doing so much. Let's jump into what you're up to now. Share with me the vision of Dell's AI position, how you guys are talking about it, and what you see customers using.

Look, as Michael said once before, you can't have end-to-end solutions unless you have both ends. So, leveraging the power of our portfolio and the relationships that we have with great companies like NVIDIA, we can deliver solutions. I hate to say that word, but we can deliver answers to some very complicated problems. Whether it's helping a developer or data scientist on a workstation take those models and port them over to a larger inference environment.
And so we're actually demonstrating just today some RAG models, some Data Lakehouse work. We've got a bunch of things. And in fact, we're the first-ever Ethernet-certified SuperPOD. I don't know if Sam or anyone else has talked to you about that.

No, not yet.

Yeah, so what that means is our...

All right, that's great breaking news on theCUBE. That's right. SuperPOD, that's NVIDIA's...

It's NVIDIA's tippy-top configuration that handles the world's most difficult problems. And we think of NVIDIA piercing through protein folding and all these other really difficult science problems; AI has become an adjunct to this high level of intense compute. Now, here's the thing. We've been doing artificial intelligence with our PowerScale platform for a very long time. It's a parallel operation: streaming writes, streaming reads. And now, if you look at it, you're going to find a lot more Ethernet in the data center than you find InfiniBand. So we're looking to democratize AI by making that more readily accessible without new skills.

I love how you mentioned Ethernet, because Dave and I have been talking about InfiniBand versus Ethernet. Even Jensen gave the shout-out to Ethernet, as long as it can have interconnects that are smart: high-performance, highly available, scalable interconnects. That brings up the notion, though, that these are systems that have been working together with these switches. So you've got GPUs, switches. I mean, it sounds like a PC to me, you know, a server on steroids. It's a clustered system. And you guys have been doing HPC; it's not like you're new to this game. But at Supercomputing last year, it was clear to me that you guys are bringing HPC and AI almost mainstream. The reason I'm calling it a clustered system is because HPC is usually a workload that's very unique. But mainstream enterprise needs with AI look a lot like those workloads, but have to be more resilient.
Explain the difference between HPC and where the mainstream enterprise is going.

Yeah, so HPC is, you know, working on the most intractable problems: the discovery of the universe, protein folding, weather modeling. These are very, very difficult. But this is, let's call it, extreme sports, the Formula One of IT, if you will, and it takes a great degree of skill and intelligence to be able to solve those problems. The tech is there to serve. This is bare-bones stuff. The data that's being put into scratch systems, they don't back it up. Why? Because they'll just rerun the model. But as we look at the requirements for an enterprise, being able to roll back in time and find out where we made a decision is really important for regulators and really important for a lot of enterprise customers. So relying on these traditional enterprise features of snapshots and replicas is super important.

So the data center has changed, obviously. Now you're seeing a resurgence of distributed computing, with public cloud, on-premises, and edge playing out. Now AI, as we heard from Jensen and as you guys have been talking about, is pulling forward the future with an accelerated computing platform with NVIDIA. So you get SuperPODs. You get the AI foundry. Are you guys doing anything with the AI foundry? The NIMs, the NVIDIA inference microservices, I'm sure.

NIMs is really very new. In fact, if you talk to the NVIDIA guys, I'm sure they'll share with you that they would like to get their hands on it as well. But it's very exciting stuff to have this kind of portability, and the whole idea of an AI factory. And this is really important to focus on. Look, we look at pictures of robotic arms. We look at pictures of empty warehouses and trucks moving around. But you don't have a factory unless you have output. And you can't get output unless you have input. And that input is the data. And data is the most important thing.
I want to make sure, we want to make sure at Dell, that everyone understands that their data differentiates them. And this is an interesting topic, because every email that you write is part of the intellectual property of the company that you run.

So in the conversation we had with Jensen, I asked about mobile and PCs. Because now, you know, large language models run on the big machines; you have the big data center, the big spine, as he said. So now you've got models for phones, and PCs are kind of similar, okay? For size reasons, PCs have a big part of it. He also mentioned that there's been a lot of NVIDIA technology in Dell machines, millions of units. So you almost have pre-existing Dell machines out there that have technology from NVIDIA, one. And then two, LLMs are going to spawn off SLMs, highly specialized models that are fine-tuned. And he said you can see fine-tuning, context windows, RAG, which is retrieval-augmented generation, prompting, and then agents. This seems to me to be a sweet spot for Dell, because you guys have all that data from where the customers are using it. So, you know, chatbots today turn into agents tomorrow. So this whole copilot wave that's coming is going to be a huge opportunity for Dell, because you have a lot of data. Dell data, but you also have customer data. And just the Dell data, whether it's product information or tools that you've built. I mean, Jensen said there's a goldmine of data in the enterprise in these areas. Tell us what you guys see there and what we can expect.

Look, one of the things that is incredibly important, no matter what industry you're in: you are in a relationship with someone else. And with that, that means being able to talk to your suppliers about the quality of the ingredients that you're getting, the variations. And so having that data and being able to have a real-time response is critically important.
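The RAG pattern that comes up throughout this conversation can be sketched in a few lines. This is a minimal, illustrative sketch only: it uses a toy keyword-overlap retriever as a stand-in for a real embedding-based vector store, and the document strings and function names are hypothetical, not any Dell or NVIDIA API.

```python
def retrieve(query, documents, top_k=2):
    """Rank documents by word overlap with the query (a stand-in for
    embedding similarity search against a real vector store)."""
    q_words = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda doc: len(q_words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def build_prompt(query, documents):
    """Augment the prompt with retrieved context before it would be
    sent to a language model -- the 'retrieval-augmented' step."""
    context = "\n".join(retrieve(query, documents))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

# Hypothetical enterprise documents standing in for a company's private data.
docs = [
    "PowerScale supports parallel streaming reads and writes.",
    "The cafeteria opens at 8am.",
    "Ethernet-based SuperPOD configurations were certified for AI clusters.",
]
prompt = build_prompt("What networking is certified for AI clusters?", docs)
```

The point of the pattern is exactly what the conversation emphasizes: the model itself is generic, and it's the retrieved enterprise data injected into the prompt that differentiates the answer.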
So when I look at it from the Dell lens, inside of our own production systems, how we build physical products and software products, we have to connect and share that information. And that's a really important attribute of this ongoing AI journey we're on.

What is the customer conversation you're having in the field? Where are they in their journey? How's their mindset? Enthusiasm's probably high. Maybe they have an attitudinal scale that drops to a little skepticism, a little challenge, a competitive challenge there. Show me more. But generally, we can see people see the light. Yeah, I don't know. What's the conversation?

Enthusiasm is high; the board of directors wants to issue a check for more kit. But the reality is, the minute you start saying, "So tell me about your data readiness for AI," the conversation comes to a screeching halt. That's because data is in silos. And the reason why it's in silos is because every business process follows a certain organizational structure: who stores it, studies it, reports on it. And that's the reason why we're bringing solution architectures together, like the Dell Data Lakehouse, to provide uniform, open, and fully controlled democratization of data.

Talk about the relationship between NVIDIA and Dell. Obviously, again, Michael got a great shout-out in the keynote. He actually, I think he might have said end-to-end, PCs, taking orders. I mean, it's kind of nice. Yeah, he did say Michael would be ready to take your order. So that comment had another meaning, but obviously, NVIDIA, I mean, Dell has been increasing NVIDIA cards in machines. It's not like a stranger relationship. But what is the relationship? And with this new AI factory vision coming, it's going to create a renaissance in the data center. It's going to create a new renaissance in servers, clustered servers. I call them clustered systems.
You're going to see a lot more action for standing up big iron, in a big way, which is basically clustered servers connected with switches.

Yeah, and look, we've got a press release or two about some really large purchases. But look, we're seeing such a brand-new business model emerge, where these small companies with big venture-backed capital are landing mobile data centers in the parking lots of customers, because they're out of space, talent, whatever it is. And so I would say that the enthusiasm's high, but the capacity is limited. And so there are some really interesting things happening in this market that none of us would have anticipated.

The constraints in the past used to be the size of the motherboard, the chassis, the box on the rack, your 1U. Now it's power. Like, you can have all the GPUs on a rack you want; if there's not enough power...

Yeah, power. So there are all new kinds of constraints happening.

Yes, yes, absolutely. And in fact, sovereignty is a big part of it too. These mobile or floating data centers are now in the oceans. They're out of the reach of any national government.

Final question: what's the vibe of the show? You've been around; you're now two days in. A lot of dinners, a lot of meetings, a lot of socials, a lot of hallway chat. What's the vibe? For people who aren't here, share with them what your take is. What are some of the conversations? Peg the excitement level in terms of hype and reality. Just share what's happening.

Look, every business is going to be upended by artificial intelligence. I don't think there's anyone in any industry, whether it's public or private, that's not going to do things differently. And so they're as much excited as afraid of missing out. And so the conversations we're having in the booth are both simple, basic things like, "Wow, that's what an NVIDIA board looks like. I can't believe you put four of those in a 3U chassis."
So we had conversations like that, but we also have more sophisticated conversations, like: how does RAG actually work? Where does Dell intellectual property fit? Can I get the storage system in the Azure Marketplace? And the answer is yes, yes, and yes.

Yeah. And then: I need more help in actually figuring out what data to use. I mean, one thing Jensen did say is that the path to the enterprise is through the IT platforms, which is IT departments, and integrators.

Yes.

That's going to be distribution for him for the AI factory. You guys definitely do both of those things.

Yes. And our consulting practice has done nothing but ramp up a tremendous amount of skills around the entire journey, from prep all the way to outcome.

Well, a rising tide floats all boats. Anthony, thanks for coming on theCUBE. We appreciate you. All right, I'm John Furrier with Dave Vellante here at GTC, the conference for the era of AI. This is what it is; that's what the big marquee on the front of the building says. We'll be back with more coverage for theCUBE after this break.