Hello everyone, I'm John Furrier with theCUBE. We are here in San Jose, not San Francisco, San Jose, for NVIDIA GTC24 with Zeus Kerravala of ZK Research, with keynote analysis. Got all the action going on, got trains going by, but nothing more impressive than a packed house at SAP Center. Jensen Huang was giving the keynote, looking more like a Taylor Swift concert. Taylor can't hold a candle to Jensen's awesome performance. Zeus, it's great to have you on.

Yeah, it's been a couple of weeks since I've been on theCUBE. I thought it was interesting how Jensen called it a developer conference, because it is certainly much more than that. In fact, the evolution of GTC has been nice too, because it really was a developer conference for a while, but now it has become the industry's AI trade show.

It's interesting how it sounds like an HPC event. Everyone's going crazy. It's very Steve Jobs-like on one hand, very cult-like in the coolness of this next-generation AI wave, but it also felt like it had supercomputing built in, so it's like cloud supercomputing. re:Invent meets the Apple developer conference. Really incredible momentum. To me, it was a point in history where I think a systems revolution is definitely happening.

Well, it's all things accelerated computing, and that's really, I think, the nuance that people miss a lot. AI is just another form of accelerated computing; so is high-performance graphics, so is robotics, computer vision, all those things. It's all wrapped up in AI, but nobody does accelerated computing better than NVIDIA.

Let's get into the highlights. Blackwell, as was announced, replaces Hopper and pairs with the Grace CPU in the AI superchip, plus the NVLink switch, the key ingredient as part of this DGX cloud and system. 120 kilowatts in one rack, and an exaflop in one rack, is a monster. They call it the spine. It's a neural network. I mean, this is the future system. I joked on Twitter.
It looks like a data center, and everyone was like, you're wrong. I was only kidding. What's your take?

Well, no, it is a data center in a box. I think one of the interesting aspects of Blackwell is the question of what a GPU even is now, what a chip is. Because it's actually two dies bound together, but then if you NVLink them together, now you've got multiple ones, and then if you put them behind a bigger NVLink switch, you create a bigger one. So there's a real blurring going on here about where the chip ends and where the system starts. In fact, I think you really can't differentiate between them anymore.

I think AI systems are going to come fast. I call them clustered systems. I think this all started, in my opinion, with AWS. When AWS created that hyperscale performance, Google and Azure followed, and Oracle's now trying to follow. The idea of James Hamilton, when he put together those data centers, was to really innovate on the network. If you look at what's going on here in these engineered clustered systems, the connection point between GPUs, the NVLink switch, makes it happen. So he said on stage, GPUs talking to each other. This is now a new system, so I think you're starting to see the evolution of not just the GPU, but everything around GPUs, plural. Now, GPU racks, plural. It's an AI factory, they call it. I call it clustered systems, supercloud, whatever you want to call it. This has changed. It's a different market.

It is, and I think it's becoming more network-centric. It's going to be interesting to see. The tailwinds for NVIDIA are obvious; you just look at the stock price. Who else is going to benefit from this, John? I do think Pure Storage on the storage side. I think Arista on the network side. I think Cisco's got a lot of possible upside as well. And then, of course, NVIDIA builds its own networking gear. But the network plays a critical role in AI, and I think that's been an underappreciated aspect of AI systems really up until now.
When I saw Jensen on stage, I want to get your reaction to this next comment, because I think this is a key point that's not really being talked about. I felt very much like that was a re:Invent-wannabe pitch. Like he's saying, we're the cloud. We're the system. We're the AI factory. Very much the kind of narrative Andy Jassy used to use to talk about Amazon's cloud. Jensen's not saying he's going to be an Amazon wannabe, but it's the same narrative. It's the same movie. We've seen it before. Do you see the same thing there? Because look, Michael Dell's out there in the audience, all these hardware vendors are out there, CoreWeave's out there, and a separate cloud could emerge.

Yeah, I think the tone of it to me was that they are trying to become the center of this AI universe, right? And then they're going to have this huge ecosystem that surrounds them. And frankly, when you look at the expo hall, it's already become that way. But I think along with that comes the responsibility to become a little more visionary. And so when you think back to those early Andy Jassy keynotes, and certainly every Steve Jobs keynote, it wasn't just about product. There was a lot of vision. And I think Jensen's trying to take that role now for AI, to be not only the manufacturer of the infrastructure that makes AI, but the visionary who sets the direction of where this goes.

I think he had a home run with the vision. Big fan of what he's doing. I love his speech. I love what they're doing. I love this AI system. I like how he called it a new category. That's kind of a shot across the bow, again, to the existing incumbents. But you hear words like AI foundry and the ecosystem names. I've got to say, that sounds very cloud-like to me. You're enabling not just developers. I mean, 1.3 million robotics developers alone. All the systems coming together. The horsepower is there. The enterprise move combined with the developer chops.
I mean, you've got to look at this and say, if you're Amazon Web Services, you've got to go, hmm, what's our relationship with NVIDIA?

Well, right. They had a joint announcement here, and I think they want to be the enabler of more NVIDIA. Let's face it, NVIDIA's stuff is not cheap. So not everybody can go out and buy a DGX and stick it in their data center. And so, if you're not a hyperscaler, I think your path to that will largely be through an Amazon or Google or Microsoft. Now, you could argue that their value gets squeezed a little bit and they all wind up looking the same. But that's where some of the other things that they bring to bear need to be part of it.

Do you think there's a TAM expansion opportunity for NVIDIA without telegraphing to Amazon any competitive threat? Because I can see a scenario where NVIDIA can say, I want to partner with Amazon. You're a public cloud; I just want to be the AI cloud. I mean, it's a TAM expansion, isn't it?

Oh, yeah, yeah. I think the idea here for NVIDIA would be to make sure everybody has access to AI everywhere, whether it's on-prem through their own systems or through the cloud. And the cloud's going to allow them to scale into the countries where they can't reach, down to smaller businesses. Even some of the stuff that you're talking about, digital twins, anything that moves. Any system should have a digital twin. Well, if you're a mid-sized bank, you can't afford to build your own digital twin, but you can buy that out of the Amazon cloud and use it as a way to rebuild your back office in the future.

Well, here are my notes, the five takeaways from the keynote. One, it's an industry revolution around AI, a whole other way of doing business. Number two, a new way software is going to be built, with generative AI. That's the Blackwell connection. Software writing software. And the new computer will create this new paradigm, this new software.
And it's how it's distributed. It's the whole NIMs thing. NVIDIA Inference Microservices, was it? Microservices, I think, yeah. NIMs create a whole new application. That's the foundry idea. And, of course, robotics, the physical world, the convergence between digital and physical is here with AI.

I don't know if the industry, though, John, has figured out the monetization aspect of AI, because I think NVIDIA has. But I think when it gets down to some of the other companies, you look at the contact-center companies, the security companies that are trying to use AI to make their products better. When I talk to them about how they're going to monetize it, they're not really sure. So one thing that concerns me here, from an industry perspective, is that people are building AI, spending a lot of money building AI into their products, but they haven't figured out how to make money off it. And if they can't, then we wind up with a deflationary effect for everybody but NVIDIA, and that's ultimately bad for everybody.

I mean, it checked all the boxes on all the hype. It had the NIMs, which is the new way to do that, the API for the cloud, they call it. Then they have this thing called the NeMo Retriever, which is really taking advantage of the whole RAG market, retrieval-augmented generation, saying if you use NVIDIA, we're the home for RAG, for retrieving. I love that. It was my pet announcement, pun intended. Of course, DGX Cloud was the key. So this whole AI foundry ecosystem is really built on this: Retriever, the engine, the API for AI. So they're calling the NIMs the API for AI.

Yeah, historically they used APIs, but this is, I think, an easier way to do it.

All right, final point. What do you expect for the week? We're going to be here all week. We'll see you around. Dave's coming in tomorrow. This is just Monday. What are you expecting to see the rest of the week?

Partner announcements.
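For readers new to the RAG pattern mentioned here: retrieval-augmented generation first fetches the documents most relevant to a question, then feeds them to the model as context before generating an answer. Below is a toy sketch of just the retrieval step, using a bag-of-words similarity as a stand-in for a real embedding model; the documents and function names are illustrative, not NVIDIA's NeMo Retriever API.

```python
from collections import Counter
import math

def embed(text):
    # Toy "embedding": bag-of-words term counts.
    # A real RAG stack would call a neural embedding model here.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse term-count vectors
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, docs, k=1):
    # Rank documents by similarity to the query, return the top-k
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

docs = [
    "Blackwell GPUs connect over NVLink into one large system.",
    "The keynote was held at the SAP Center in San Jose.",
]
context = retrieve("How do Blackwell GPUs connect together?", docs)
# The retrieved context is then prepended to the prompt sent to the LLM
prompt = f"Context: {context[0]}\nQuestion: How do Blackwell GPUs connect together?"
```

In a production deployment the embedding, retrieval, and generation would each sit behind served models, but the flow, rank documents first, then prompt with the winners, is the same idea the Retriever announcement targets.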
Now that NVIDIA has released all of theirs. Jensen talked about some on stage, Cohesity, Amazon, Microsoft, Google, but I expect to see a lot more. One of the aspects I'd like to see more of is sustainability. I think NVIDIA, maybe unfairly, gets beat up because GPUs use a lot more power than traditional CPUs, but there is an argument to be made that the more GPU-enabled systems you have, the fewer CPU-enabled systems you have, which brings down the overall cost of cooling and power. He alluded to that on stage, but I would like to see NVIDIA hit the sustainability point a lot harder.

Zeus, good to see you. We'll check in with you later, on the Analyst Angle later in the show. I'm John Furrier with theCUBE. You're watching theCUBE. More coverage, stay with us. Go to thecube.net, siliconangle.com; we'll be on Twitter, LinkedIn, all the channels. Look for us here, and let's go inside.

As always, great being here.