Welcome back, everyone. Live coverage of VMware Explore. Almost said VMworld again. Second year in a row, Dave. Thirteenth year covering VMware's flagship event. It's been an honor to do that. And as the chapter closes on the historic run of VMware and the next chapter emerges, we're super excited to continue to bring the reporting and the coverage to the community. And our next guest has been a big friend of theCUBE, a great guest, and a CUBE alumni. Raghu Raghuram, thanks for coming on. Now the CEO running the show. Big man on stage, Dave. All the questions are right here. Yeah, it's always great to have you, man.

Yeah, absolutely. I mean, thank you, and thank you for covering us for so many years. Like you said, I really enjoy your show and your writings.

Raghu, you've been a good friend of theCUBE since before you were CEO. You're always in the hallway, always chatting. Very transparent, very community oriented. Love that. We always talk tech. Remember when you guys went to the cloud with AWS? That was a fun moment. But there was a time in VMware's history when virtualization was under threat. I remember Hyper-V was free. It was like, oh my God, that's our core business. And then next thing you know, you've got HCI, you've got vSAN, you've got historic pivots. "VMware's going to be dead next year." It's always that case, and it's just never true; the team has done such a great job. And now, as you guys put this next multi-cloud vision together, AI steps up and gives you a gift. Yes. Take us through how AI has completely given you a tailwind on an already good strategy that you put in place a few years ago.

Yeah, I mean, look, our multi-cloud strategy, and by the way, even the previous pivots, did not just come out of the woodwork, right? We always look at two extremes. One is, what are the engineers geeking out about? Second is, what are the customers trying to do but are not getting done, right?
And we saw this multi-cloud phenomenon fairly early, because customers were all saying, okay, I'm going to put this here; I bought this company, and that company's over there; I've got my on-prem. So we said, look, there's going to be a bunch of problems with this, and we're going to solve them, right? And with AI, we are seeing the exact same thing. I mean, it makes perfect sense that if you're a smaller company, you would take all of your data, put it in the cloud, and then do your AI there. But that's going to take a long, long time for any enterprise of any magnitude, because that data is spread out all over. So what you've got to be thinking about is: how do I bring the compute and the model to the data, not the other way around all the time, right? And once you think about that, it's a multi-cloud problem. That is exactly how we came at it.

In the analyst briefing, we heard Chris Wolf or Amanda, I forget who, say it: we're good at IO. Yeah. And that's also a factor in AI. Huge. All the costs right now are GPU based, with a lot of IO and data movement going on. Why is that an important nuance that people should talk more about?

Yeah, so if you look at the system costs, and let's set aside the data scientists and all those things for a second, if you look at what goes into an AI system, clearly there are GPUs, and by the way, it's multiple GPUs in a single box, and it's communication between those GPUs, and it's the pathways from storage into the GPUs, because these AI computations need a lot of bandwidth, right? They need a lot of data. And then most of these workloads are going to run in a cluster of multiple servers. So all of that is expensive networking technology, complicated storage technology, and complicated GPU-sharing technology. That's why we excel.
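As an editorial aside, the bandwidth point can be made concrete with back-of-envelope arithmetic. This is a hedged illustration; the model size, precision, and link speeds below are assumed round numbers, not figures from the interview:

```python
# Rough illustration: time to stream a model's weights from storage to GPUs.
# All figures are assumed round numbers for illustration only.

def load_time_seconds(params_billions: float, bytes_per_param: int, gb_per_sec: float) -> float:
    """Seconds to move a model's weights over a path sustaining gb_per_sec GB/s."""
    total_gb = params_billions * bytes_per_param  # e.g. 70B params * 2 bytes = 140 GB
    return total_gb / gb_per_sec

# A 70B-parameter model in fp16 is roughly 140 GB of weights.
slow = load_time_seconds(70, 2, 2.0)   # ~2 GB/s commodity storage path
fast = load_time_seconds(70, 2, 50.0)  # ~50 GB/s aggregate NVMe/RDMA path

print(f"slow path: {slow:.0f} s, fast path: {fast:.1f} s")
```

The tenfold-plus gap is the point: in AI clusters, the storage and network paths, not just the GPUs themselves, set the bill.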
And that's why we save people a ton of money, because we found better ways to do all of those things.

And it's not a scale game either. I mean, scale to a point, but not hyperscale. You can do a lot in these clusters.

Exactly. So here is the thing that people don't understand. If you have a smaller model, but you're willing to train it for a little longer, and you have very good data, meaning domain-specific data, your accuracy can be as good as the larger models, and you're going to be a lot cheaper. That is really the secret sauce here.

Yeah, very domain-specific. John and I did a power law: basically the consumer stuff is a very few large models, and then there's a long tail of domain- and industry-specific ones. I wanted to ask you about your keynote. I was struck that you had the waves, and we've seen the waves before, but you had a little twist on them. You said, basically, we started with the PC, which delivered a 100x improvement in price performance, then the web was another 100x, then mobile with the app experience was another 100x, and then Gen AI.

You missed the cloud.

Well, that was web and mobile. Otherwise, what would that have been, some IT savings? Right, okay. So with Gen AI, you're expecting another 100x. We're talking about human productivity, maybe not literally 100x in human productivity, but it's going to be that type of wave that's coming. I wonder if you could double-click on that.

No, absolutely. So if you think about our daily lives, what characterizes some of the hardest things we do as humans? It is things that involve a lot of creativity, and I don't mean just artistic creativity, which is phenomenally hard, but also the everyday creativity needed for all of our businesses to run, right?
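Picking up Raghu's point above about smaller models trained longer: a common rule of thumb puts training compute at roughly 6 x parameters x tokens. A hedged sketch of the cost side, with model and token counts that are assumptions for illustration:

```python
def train_flops(params: float, tokens: float) -> float:
    """Approximate training cost via the common ~6 * N * D FLOPs rule of thumb."""
    return 6 * params * tokens

big = train_flops(70e9, 1.0e12)   # hypothetical 70B model on 1T tokens
small = train_flops(7e9, 5.0e12)  # hypothetical 7B model trained on 5x the tokens

# The smaller model sees five times the (domain-specific) data
# yet still costs half the training compute of the big one.
print(small < big, small / big)
```

With good domain data, that cheaper run can match the bigger model's accuracy on the tasks the business actually cares about, which is the secret sauce being described.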
Knowledge creation, knowledge consumption, distilling things, summarizing things, bringing the right information to the point of decision-making. All those things require a lot of effort, and that is where I would see the 100x improvement, because lots of those things are not necessarily going to require a lot of computer science, right? So think about PDF documents. I mean, we brought up Amy for a reason. One of the biggest categories of PDF documents everywhere is legal documents, right? So being able to extract all the contract terms, for example, from the contracts I've done with all the Fortune 1000 customers, that's a gold mine for me, and it's impossible to do today, right? You could put an army of interns against it, but it's a boring task. If you deliver it, that's a 100x improvement, right? Those are the kinds of things that we're talking about.

On the application side, you had a graph up there about the predictive, old-school, I won't say old-school, machine learning. You said old school. Machine learning is three years old and it's already old school. At this pace, every month there's something massive happening. It's super exciting. But let's take classic data science and machine learning. You put a premise out there that AI is application specific in every department, and then your general counsel example came out, which we know is about the legal issues with ChatGPT, a nice little sub-message there. Why are these apps so important? What about the apps will be changed? I see the toil, the undifferentiated heavy lifting, I get that. But what specifically do you see AI-native apps having now that they didn't have before? What's your vision there on the app space?

Yeah, I would say, look, the analogy, and I just briefly alluded to it, is the database, right? Today, when you're building any sort of enterprise application, it's a given that there's a database inside of it, right?
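To ground the contract-terms example above: in practice a language model would do the extraction, but a toy regex stand-in shows the shape of the task. The field names and sample text below are invented for illustration, not anything from VMware's products:

```python
import re

# Toy stand-in for LLM-based contract-term extraction. A real system would
# prompt a model over each document; the regexes below just mimic the output
# shape. Field names and sample text are invented.

PATTERNS = {
    "termination_notice_days": re.compile(r"terminated on (\d+) days' notice"),
    "payment_due_days": re.compile(r"payable within (\d+) days"),
}

def extract_terms(contract_text: str) -> dict:
    """Pull structured terms out of free-form contract prose."""
    found = {}
    for field, pattern in PATTERNS.items():
        match = pattern.search(contract_text)
        if match:
            found[field] = int(match.group(1))
    return found

sample = ("This agreement may be terminated on 30 days' notice. "
          "Invoices are payable within 45 days of receipt.")
print(extract_terms(sample))  # {'termination_notice_days': 30, 'payment_due_days': 45}
```

Run across thousands of filed contracts, even this crude version turns an intern-army chore into a queryable table, which is the kind of 100x being pointed at.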
Or many databases sometimes. It's going to be the same way with AI, because what is AI at the end of the day? What are we trying to do with it? It is distilling the core of a company's IP, on top of which you build a business process, or on top of which you build a customer interaction, et cetera. That is why every application in a business is going to use some part of what makes the company unique, which is their IP. That's why we see this pattern being repeated over and over again. On the point about the previous generations of models versus these generative AI models: AI has served us fantastically well. You go into any bank, and all the fraud detection is done using AI models, right? But there is a model very, very specifically trained for fraud detection. And when new data comes along, or you decide to add some other dimension of how you're checking for fraud, it's effectively a new model. It's not something that can be built once and reused. Whereas enterprise knowledge can be built and reused. That is where the reusability comes in.

Raghu, you told the analysts yesterday, and it aligns perfectly with our data with our partner ETR, that customers expect a 50/50 split: half of their generative AI activity in public infrastructure and half in private infrastructure. You've got the advantage, as you talked about in your keynote when you brought up the general counsel, of the fear, uncertainty, and doubt around legal actions, compliance, IP leakage, et cetera. But the cloud's pace of play is very fast, and they have the innovation. What do you have to do to ensure that that mix is actually met, and that the cloud doesn't do what it did before for a period of time, which is overwhelm the traditional incumbents?

Yeah, so it's important to point out something. This is not cloud versus on-prem.
It is about your AI infrastructure, or your AI application depending on an AI infrastructure, providing the kind of safeguards you need in order to avoid those sorts of problems. And this could be done by a public cloud provider. It could be done by a standalone company. It could be done by anybody. VMware's unique advantage is that we bring to bear two things that I think are collectively pretty unique. One is we can take the AI model, which is in other words the compute, to where the data is. We are not constrained. We can run in your data center, any data center you want. We can run on your manufacturing shop floor, et cetera, et cetera, right? Or you can do the training on AWS or somewhere, take the weights, and put the model on a manufacturing shop floor on top of our platform. So that is very, very unique. The cloud providers today prefer to focus on the large models, which, rightfully so, can be done only in the public cloud. And the second is that we have paid attention: we are not serving the consumer and the enterprise both; we have focused only on the enterprise. So right from the get-go, we said we are not going to have a solution that doesn't pass muster with our legal team. In fact, we could not use it internally otherwise. When OpenAI's ChatGPT came around, as you can imagine, everybody started jumping on it and started writing code. And our legal team said no, no, no, not so fast. And instead of stopping there, we said, okay, what do we do to solve this problem? That is where this Private AI came about.

So are we going to have a data supply chain problem, like the software supply chain problem? Because what you're saying is an interesting point: you have to watch the flows.

Yeah, there are absolutely parallels. You can have provenance of the models, right? What is it that people are worked up about?
One of the things that people are worked up about with these large models is that they don't know how they were trained. There is no transparency, right? You don't know where the data came from. In all of these open-source models, look at the latest Platypus model from Boston University. They're saying, look, we will tell you where the data came from, what the model weights are, how it was trained. And therefore you feel safe using it. You hit the nail on the head: this is going to be an issue.

Yeah, we've been double-clicking on that. So thanks for that little advice there, and we're going to bring that into the programming. I want to ask you about innovation, because you highlighted legal and compliance threats in your keynote. The innovation going on: you mentioned Boston University, number one on Hugging Face. Chris Wolf mentioned that as well. These models are popping out of nowhere. The organic intoxication of the developer community around this technology is off the charts. They are going crazy around this. So you've got the bottom-up organic innovation surge, and top-down board mandates. From the boardroom to the dorm room, things are happening. CEOs are putting out mandates for AI, but it's stalled at this blocker level. You mentioned legal, and people are slowing down a little bit, even though it's organically growing. How do you see that being busted down? What would you advise, or how do you see it unfolding? It's kind of a corporate blocker right now because of the issues; it's got to be architected. So obviously architecture is involved, and you can talk about a system, but you've got the surge of organic growth. Maybe a company comes out and solves the data supply chain problem. That's a new company, and new brands are going to emerge. So entrepreneurship is booming, but corporate is also booming. Do they meet in the middle?
Yeah, look, I was involved in the early days when the web became mainstream, because I was working as a product manager at Netscape at the time, right? Same dialogue, right? And by the way, in the early days of the cloud, people said, hey, you can't do privacy, sorry, you can't do security in the cloud, et cetera, but now that problem has been crossed, right? So I think it's just a matter of time. Number one, obviously I'm biased, but we certainly think the solutions that we've brought to the table today are exactly one of those blocker busters, if you will.

Interesting. I know you've got to go, we're going to end in seconds, I know your team's yelling at us, but the web and the cloud were horizontally scalable. You're saying VMware has a unique opportunity because you're, in a way, horizontal across environments, where AI can be vertically specialized in the apps where the domain is. So for the first time, there's visibility into an operating model.

Exactly.

Is this the future, that the internet is now an operating model for the cloud?

Yep, definitely. I mean, I think there are a lot of parallels to all these examples that you just cited. And we are in the process of discovering it, right? I mean, let's not forget, ChatGPT burst into our consciousness last Thanksgiving or last Christmas or something, right? So it's very early days.

And your strategy and execution have enabled you to be agnostic to the physical location.

Naturally. That's what we do best for our customers.

Well, I know we've got to go. We didn't mention anything about Broadcom and what's next. I know the next chapter, what you guys laid out today on stage in a great keynote. It was a great surprise to see Jensen up there, kind of marking the historic moment of what VMware has done. And congratulations to your team. As this chapter closes, a new one will open. What's your final word about this upcoming chapter?
As you close the historic VMware chapter and open up the next, yeah, I'm super optimistic. I think it's going to be exciting. You heard Hock Tan in the recorded message, and in personal conversations he would tell you the same thing: he's very committed to continuing the tradition of engineering excellence and engineering innovation. He's going to put more money into the business than, quite honestly, we would be able to as a standalone company. And that's one of the rationales for the acquisition. So I'm very optimistic that the combination of those two, the engineering approach and the increased availability of R&D investment, is going to result in good things.

Thank you so much for coming on theCUBE, and your support is really appreciated. I'm John Furrier with Dave Vellante. He's a reader and viewer of Dave's Breaking Analysis. Thank you for doing that. Thank you so much. Thank you. We'll be right back with more live coverage of VMware Explore after this short break.