Hi everybody, welcome back to VMware Explore 2023. I almost said VMworld again. VMware Explore 2023. My name is Dave Vellante and we are live with Sarbjit Johal. I call him a CUBE analyst because he's like part of the family, but he's the principal and founder at StackPane. Great to see you, my friend. Thanks for coming on.

Good to see you, Dave.

Sarbjit, I've got to start with the big picture. Every 10 or 15 years or so, this industry goes through a platform shift. It hasn't been through a real silicon platform shift in the enterprise in a long time, but it's happening now. Help us squint through what's going on in the industry. What do you make of this platform shift? How would you describe it? How should we think about it?

Yeah, as we talked about briefly just before we started rolling the cameras, there's a big, huge shift happening, and when we are in it, we don't tend to see it. But if you zoom out, or see it from outside, the x86 platform is being challenged, mainly due to AI developments, generative AI. And even before generative AI came into the picture, we had been focusing on TPUs, putting more processing in the NIC cards, more processing in the storage arrays. So we need more intelligence at the network level, more intelligence at the storage level and, of course, more intelligence at the compute level. And anywhere you need intelligence, you need compute there, right? That's the fact, and that's what's happening: new chips are emerging that are very specific to the workload. Of course, NVIDIA has done a great job of pitching and architecting their software solutions, their platform for developers, to take advantage of this specialized-chips-for-specialized-workloads concept. So the multi-cloud world is turning into a multi-chip world, almost.
It's interesting. When it comes to silicon, something David Floyer taught me decades ago is that the economics of silicon are a function of volume. That's Wright's law. We all know Moore's law; Wright's law, roughly stated, says that when you double the cumulative volume of your output, you cut your cost by a constant, which in semiconductors is probably around 15%. You combine that with Moore's law and the doubling of transistors, and the economics really favor you if you have the volume. And as we've talked about, we published the ARM scenario, the NVIDIA scenario, years ago. You did a podcast with Crawford Del Prete of IDC and myself, and we were talking at the time about the combinatorial factors of the CPU, the GPU, the NPU, the accelerators, and all the communications technologies in between, the kind Broadcom makes, what you were just referring to. That price-performance curve blows away the historical Moore's law. And that's why the x86, which has done everything, the core processing, the memory management, all the storage offloads, the networking offloads, is being challenged. That was the wonderful gift of x86: a general-purpose processor that could do everything well, to your point. Now it's all specialized. What does that mean for VMware in the context of AI?
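The Wright's law arithmetic Dave describes can be sketched in a few lines. This is a minimal illustration with assumed numbers (the $100 first-unit cost and the 15% learning rate are hypothetical, the 15% figure being the ballpark quoted above):

```python
import math

def wrights_law_cost(first_unit_cost, cumulative_volume, learning_rate=0.15):
    """Unit cost under Wright's law: every doubling of cumulative
    production volume cuts unit cost by `learning_rate` (~15% for semis)."""
    b = -math.log2(1 - learning_rate)  # progress exponent
    return first_unit_cost * cumulative_volume ** (-b)

# Doubling cumulative volume from 1M to 2M units cuts unit cost by ~15%,
# which is why high-volume silicon vendors pull away on price performance.
c1 = wrights_law_cost(100.0, 1_000_000)
c2 = wrights_law_cost(100.0, 2_000_000)
print(round(c2 / c1, 2))  # → 0.85
```

The same doubling keeps compounding, which is the point about volume: whoever ships the most units rides the curve fastest.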
Just before we go there with VMware, there's some nuance. With earlier x86 hardware we had machine code and assembly, and compiled code is faster; interpreter-based languages were slower. So the language matters, how close you are to the machine matters, power consumption matters, but most importantly, speed matters. Speed is the new scale, that's the new slogan. And because speed matters, the software that sits between the silicon and the programmers matters a lot. That's CUDA from NVIDIA, and AMD is trying to go open source on their side to compete with CUDA, which is NVIDIA's own proprietary thing. The software and hardware combination is much stronger than the hardware alone. So on your comment about volume, yes, volume is very important with chips, but if you put the software with it, your margins will be more than 15%. Having said that, recently some of the CNBC folks, some of them do a great job, some of them are just reading the news...

That is so true. I have a lot of respect for many of the reporters.

Some are good, actually, Jon Fortt and a few others. Somebody pulled up a chart, and the numbers speak for themselves. It was a 20-year chart: in the beginning of any computing wave, the hardware people benefit more, and their stocks go up like crazy, all the way up. Then they start coming down, and when software picks up that hardware and uses it creatively, that segment or family of hardware becomes kind of commoditized, until the next big thing comes. So I think NVIDIA will do great for the next few quarters, and maybe a couple of years, but then the silicon will take the backstage. That's my reading.
Coming back to VMware, VMware is going to a chip company, right? That's interesting, and I actually said that yesterday in the Q&A. I think they will benefit from it overall, because most enterprises are trying to make sense of AI, they want to bring AI in-house, they need standardization, and there are only two pure software vendors in our world, Red Hat and VMware. VMware is much more pervasive in the enterprise. We've talked about that many times: just like Microsoft, almost every enterprise has VMware in one flavor or another, some spending more money and some less. So I think they will benefit, and they will gain more share if they can pull off this hardware leverage.

So a fundamental premise that Broadcom has put forth to the regulators, and all systems are go here except for a couple of regulators in Asia that still have to approve. The UK Competition and Markets Authority gave formal approval. And I didn't fully understand this when I was flying out, but when we wrote our last Breaking Analysis, I was hammering the FTC for dragging their feet. Well, they just chose not to respond, and the window in which they could respond expired, so effectively the US approved it. They approved it by not saying anything, which is kind of typical.

Lina Khan is busy, and her team is underfunded.

You're giving her that excuse, are you?

Yes. It's all about economics, guys; at the end of the day, economics rules.

I think Lina Khan's a little too busy. But anyway.

Their budget is very small.

Good. It'd be even more dangerous with a bigger budget, but we'll leave that for theCUBE podcast. The premise is that by Broadcom owning VMware, it will create even more competition, not less.
And then there's the stuff about Fibre Channel cards and some other minor factors that Broadcom has agreed to take care of. No problem. And Broadcom sells to NVIDIA, a big customer. So I just don't see that as an issue. The broader issue, the macro, is a fourth cloud, and VMware is that fourth cloud. When you look at the overlap in terms of penetration, VMware's cloud is highly penetrated, as are AWS and Azure. However, all three big clouds do their own silicon. They all have ARM-based designs. Amazon's Nitro, which is their SmartNIC and virtualization platform, enables Graviton, which is an ARM-based chip, and Trainium and Inferentia and all the new AI chips. Those are all custom-built silicon, based on ARM, built by TSMC, I believe. Again, the volume advantage. VMware's equivalent is essentially Project Monterey, which we haven't heard much about. So the economics are going to be somewhat challenging, and it will be interesting to see how Broadcom deals with that. But notwithstanding that, I think it's very clear that VMware is a viable fourth cloud. Do you buy that premise?

I do, actually. I think with the combination of VMware and a chip company, it's a long shot, but they will pick some ARM designs and start building their own chips. They may do that. I'm not sure.

Broadcom?

Broadcom with VMware, right. And Broadcom can sell it to others too, not only VMware. The problem with VMware is that they don't have their own public cloud. They give it to you to put in your data center, so they're a pure software stack, if you will. So how close can it come to the chips? It can, if Broadcom wants it to. I think they can do that. Project Monterey, I think, is under reevaluation. That's my read. It hasn't gotten the traction they had hoped.

Yeah, but now it may, with Broadcom.

Say again?
Now, with Broadcom, it may get that traction, because Broadcom is a hardware chips company, right?

Yeah. I think part of the issue is that the value proposition of Graviton and Nitro is to save money; Amazon makes a big deal about that. And because it's multi-tenant, you've got massive scale and pretty high utilization, so a small improvement in price performance makes a big difference. In the data center, your utilization is a lot lower, so it may not have as much of an across-the-board impact.

Yeah, and there's optics as well; there's reality and there's optics. Even Amazon had to come out and say, hey, we have NVIDIA chips and NVIDIA instances. At the New York Summit they had to say that, even though they have Inferentia chips and their model-training chips. They have both Graviton and Inferentia, but they still had to mention NVIDIA. So optics matter a lot. The reality, of course, is that on-prem utilization of hardware is always very low, we know that. But just compare the two worlds: if the cloud providers are running at, say, 80% capacity on CPU (they never publish those numbers, by the way), and in the data center you're in the tens of percent, maybe 20%, on storage or CPUs, then even if the cloud price per unit is higher, you can still come out better on the capacity you actually use. But coming back to the software stack, I think the game ends there, because you need the programmability of your chips. Between the programming language and the chips there's a platform which is a much-needed buffer, a shock absorber for a new stack, if you will. You have to have that. VMware doesn't have that, but they can go open source.

But to your earlier point, NVIDIA made those investments years ago.
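The utilization arithmetic in that exchange is worth making concrete. A minimal sketch with hypothetical prices (the $1.00 cloud and $0.70 on-prem unit costs are assumptions for illustration; the 80% and ~20% utilization figures are the ones quoted above):

```python
def effective_cost_per_used_unit(unit_price, utilization):
    """Price performance only pays off on capacity you actually use,
    so the effective cost of a used unit scales with 1/utilization."""
    return unit_price / utilization

# Hypothetical numbers: a pricier cloud core at ~80% utilization beats
# a cheaper on-prem core sitting at ~20% utilization.
cloud = effective_cost_per_used_unit(1.00, 0.80)    # 1.25 per used unit
onprem = effective_cost_per_used_unit(0.70, 0.20)   # 3.5 per used unit
print(round(cloud, 2), round(onprem, 2))  # → 1.25 3.5
```

This is also why a small price-performance gain from custom silicon matters more to a hyperscaler than to a lightly utilized data center: the gain is multiplied across capacity that is actually in use.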
Wall Street hated it. That was the time everybody should have bought, but NVIDIA made the bet, and a lot of people didn't. Or when we published our "NVIDIA is going to own the data center because of AI" piece with David Floyer two years ago, I think the valuation was around $300 billion. Now it's over a trillion. But, all right. Multi-cloud, big strategy. So really three big things: Broadcom, multi-cloud and AI. Multi-cloud, enabled by Tanzu. They're bringing Aria under the Tanzu umbrella, trying to simplify that. Still not the traction they want in the market; OpenShift is dominant in that space. Do they have to be vertically integrated there? Do they have to have their own application development stack, or should they go all open source?

I think a mix is better. I had a discussion with Betty Jeannotte, their chief marketing officer on that side; she's very technical. Tanzu is very configurable, a modular approach, so you can mix third-party plugins with the VMware-provided base plugins. It's expandable. The community can participate, partners can participate, and all these DevOps tool chains can plug into it. So there are ways to plug it into the SDLC, the software development lifecycle management stack. It's VMware's shift left toward developers. Again, it's one of those things. Number one, it's optics. Number two, if you're not building on it, how will you operate on it? If you're building in the public cloud, most probably you will operate in the public cloud. And there's a lot of development there. I was talking to somebody here just a few minutes ago, from an insurance company. They have nothing in the cloud, nothing other than SaaS. No public cloud infrastructure, no IaaS.
I was like, yeah, there are so many companies like that in the mid-market, mid-to-lower end.

By the way, what was their rationale? Cost?

It's cost, and their staff. Employee tenure in the Midwest, in the middle of America, people stay at a company for 20, 25 years, and they don't adopt new stuff. I'm sorry to say that, but there are a lot of factors. Economics.

Wow. They really don't have any cloud? Like, no shadow cloud?

He told me no, and he's been there many years.

Wow, I was shocked. All right, let's hit on AI. Did you see the private AI power law that we did?

Oh yeah, that's great.

Right? So the vertical axis is size of model: OpenAI, Llama, Titan, Bard. The horizontal axis is domain specificity, so there's a long tail, and you've got the torso getting pulled in by open source. VMware announced Private AI, betting on the NVIDIA stack. They're talking Hugging Face; everybody's using Hugging Face. And they brought out their general counsel.

Yeah, that was clever, for the FUD. It's good. There's legitimacy there.

What are their prospects? What do they have to do to succeed, in your view?

I think there are two things when it comes to VMware and generative AI. One is giving you, as an enterprise, the facility to have access to GPUs and the software stack to cook your own models, your private models, or to augment the large language models. I'll just use that term, augmentation; there are so many terms being used. You're trying to tame the AI, to infuse your data into it so it doesn't go back out, but stays yours. So that's one thing: give you the infrastructure and platform so that, as an enterprise, you can build whatever you need to.
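The "augmentation" pattern described here, infusing your private data into a general model's context rather than retraining it, is commonly implemented as retrieval augmentation. A toy sketch, with a bag-of-words similarity standing in for a real embedding model and all document text and names hypothetical:

```python
import math
from collections import Counter

def embed(text):
    """Toy bag-of-words 'embedding'; a real system would use a model."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def augment_prompt(question, private_docs, k=1):
    """Retrieve the k most relevant private docs and prepend them as
    context, so the base model sees your data without being retrained."""
    q = embed(question)
    ranked = sorted(private_docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    context = "\n".join(ranked[:k])
    return f"Context:\n{context}\n\nQuestion: {question}"

# Hypothetical private corpus; the most relevant doc becomes context.
docs = ["Our claims system runs on VMware vSphere.",
        "The cafeteria menu changes on Mondays."]
print(augment_prompt("What does the claims system run on?", docs))
```

The point of the pattern is the one made above: the enterprise's data rides along in the prompt and never leaves to retrain the shared model.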
The other is VMware itself, for its own ops, because VMware has a lot of software: vCloud and all this infrastructure as software, as code you can say, with all the CLIs and SDKs. So they have hooked up a Hugging Face model in that context, and they are making the practitioners of VMware productive. That's the creation part when you are a VMware shop: the people who practice it can become more productive. And on the other side, they are giving you the facility to have GPU as a service as well.

I could go on, but I've got to go. Thanks so much for coming on. Great to have you, as always. Awesome analysis, and see you next time.

Thank you very much.

All right. Okay, keep it right there. Dave Vellante, John Furrier, Lisa Martin and Rob Strechay, live from VMware Explore 2023, formerly known as VMworld. You're watching theCUBE. We'll be right back after this short break.