Welcome back, everyone, to theCUBE's live coverage here in Barcelona for MWC 2024, the event formerly known as Mobile World Congress, now just MWC. I'm John Furrier, your host with Dave Vellante, extracting the signal from the noise as we always do. Day three, we're kicking off with our featured guest here, Charlie Kowalski, who's the president of Broadcom. As you know, the world is moving to AI, and the number one thing everyone is looking at right now is how to make that possible, and it goes down to the silicon, it comes down to the chips, it comes down to the hardware, it comes down to the software. The systems revolution is the big story here at MWC, and we're pleased to have Charlie Kowalski here as president of Broadcom. Charlie, thanks for coming on theCUBE. Really appreciate your valuable time. Thank you, John and Dave, for having me here. As you can see, it's super busy and hectic, so I appreciate the time you're spending with me. You guys have been just phenomenal on the business front. The fundamentals are off the charts, obviously the valuation: revenue up from about $4 billion 10 years ago to pushing over $35 billion; R&D less than a billion 10 years ago, now well over $5 billion, and with VMware that's only going to get bigger. You have the broadest, or close to the broadest, silicon capability on the planet, from architecture, processors, memory, protocols, signal processing, and connectivity. You guys are leading the systems revolution that's happening now, and you're in charge. You and Hock Tan are running the ship here. Now you've got VMware, you've got everything from the chips up the stack to the application layer all coming together, and look at NVIDIA, look at all the success out there. Everybody wants AI. You guys are in the front driver's seat of this. What's it like? I mean, do you wake up and pinch yourself in the morning and say, wow, we're really doing this? And what are you doing here at MWC? Okay, so what am I doing here? Let me start with that.
So MWC came to Barcelona in 2006. So we've been here, and I've been here, actually, since 2006: 18 years, almost two decades. It's incredible to see the transition from 2006 till today. At the time, in 2006, let alone in 2019, nobody talked about AI as much. Today it's the only topic people are talking about, forgetting a little bit about 5G and what happens beyond 5G. So what are we doing here? You're right, the first word in Broadcom is broad. And you're right, the portfolio we have is broad, especially with the addition of our sisters and brothers from VMware, who are actually just close by. We are very excited about the work we've been doing for decades with these service providers. And the reason we're here is the time and effort we've been spending, literally for decades, with these operators and service providers. And what's exciting this year is, as I said, and as you said, we have a new family member, VMware. VMware has invested over two decades in bringing virtualization and software capability, a full stack, for the operators: not just in their data centers from an IT point of view, but equally, if not more importantly, across the network and the capabilities it can bring there. And so we're very excited about this new level of engagement for the Broadcom family with our sisters and brothers from VMware. But if I focus on the hardware side, which is the business I'm responsible for, look, the exciting piece every morning when I get up, for me, is like being a little kid in Toys R Us. We have everything in terms of technology, starting from handheld devices and wireless, all the way to the first base stations, backhaul, access networks, metro networks, core networks, and ultimately data centers and cloud. One of the statistics we've been very proud of: more than 99% of all service provider traffic goes through at least one Broadcom chip.
And so for us not to be here with our customers and partners and operators would be a big miss. So we're very, very excited to be here. So in 2006, Mobile World Congress, as it was called at the time, was actually at the original venue in Barcelona. Now we're not quite in Barcelona, we're just in the outskirts of town, in a place whose name I can't really pronounce that well. But anyway, back then, Charlie, the world was CPU centric. And several years ago, I saw a video you did where you were predicting that the world was moving from a CPU-centric environment, the 2006 PC revolution, to a connect-centric environment. And that struck me, and I started to learn more about what Broadcom was doing, and obviously that bet has paid off. But for our audience, can you explain that premise, how it's transpired, and what it means for the future? Yeah, so first of all, thank you for noticing that and bringing it up. This is actually something that's very dear to my heart personally, especially with my background and my career. I actually believe, depending on the workload, there are data center and infrastructure worlds where the CPU, the processor, is the center and the heart of the system. When you talk AI, everything has changed, because with AI you have these elephant workloads that cannot run on a single processor. Whether it's a GPU, TPU, NPU, the flavor of the day, let's call them for the next 15 minutes XPUs. So if you take these XPUs and you take some of these elephant workloads, you can't run them on a single XPU. You can't even run them on eight XPUs or GPUs. You have to scale into the thousands today, thousands of these. In order to do that, you've transcended a single system where you've scaled up. Now you have to scale out to many, many of these systems and racks. And you have to interconnect them, and guess what? If you don't have the right network strategy and connectivity strategy, this will not work.
This will not scale. I want to double down on that, because one of the things we're hearing here at MWC this year, obviously, is the emphasis on telco connectivity. I've got to connect stuff. That's been around for a while, okay, check. AI is the focus. And they say, okay, I've got to tune the software for every single system I build for that AI. There's no common system yet, because the workloads are different. Generative AI is completely diverse in its capabilities. So the question is, what does AI need from a chip standpoint to make it work? Because everyone's putting clusters together: NICs, switches, components. They're essentially building their own new systems. Not from scratch, you guys are supplying that. But you're starting to see clusters, GPU clusters, micro-clouds emerging. We saw that at Supercomputing 23, and we'll be there next year. We'll probably see more of HPC and AI together, which is again another sign. These new systems need to be built. What is AI actually using? Why are people doing this? And what's the right approach? So look, super fun question. Honestly, I'm spending the majority of my time on this, both with the engineers inside Broadcom, and, even more exciting, with the engineers at our customers, especially the hyperscalers. So on AI, especially generative AI, I would say more than 80% of the spend today is a handful of companies, a handful of hyperscalers. And this is a phenomenal challenge for an engineer. It's actually equally phenomenal for a business person. So if you combine both, you say, we've got a huge scale challenge here on the engineering side, but also, how do we make sure this actually gets monetized? We're working on these two things. Today, they're a little bit orthogonal. The more you try to do on the engineering, the more you're actually going to have to spend tens of billions of dollars per single supercluster.
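The scale and spend being described here can be sketched with back-of-envelope arithmetic. This is purely illustrative: the workload size, per-XPU throughput, utilization, and power figures below are assumptions chosen for the sake of the example, not Broadcom or vendor numbers.

```python
# Why "elephant workloads" force scale-out into thousands of XPUs,
# and why power then becomes the next constraint.
# All inputs are illustrative assumptions.

def xpus_needed(total_flops, per_xpu_flops, utilization, days):
    """Accelerators required to finish a training run in `days`."""
    sustained = per_xpu_flops * utilization
    return int(-(-total_flops // (sustained * days * 24 * 3600)))  # ceil

# Assumed frontier-scale run: ~1e25 FLOPs.
# Assumed XPU: 1e15 FLOP/s peak, 40% sustained utilization.
n = xpus_needed(1e25, 1e15, 0.4, days=90)

# Assumed per-XPU power in the "hundreds of watts", plus a 15% share
# for the switches, NICs, and optics that interconnect them.
cluster_mw = n * 700 * 1.15 / 1e6

print(n, f"{cluster_mw:.1f} MW")  # thousands of XPUs, megawatts of power
```

Under these assumptions the run needs on the order of three thousand XPUs and a few megawatts; scale the same arithmetic to the 100,000-XPU clusters discussed in this conversation and you land near 80 MW, which is why power keeps coming up.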
So when you look at this, the way we look at it, and the feedback we've consistently been getting from our customers, specifically the hyperscalers, comes down to three things. One, a lot of the systems that exist today are proprietary and closed. Having an engineering background, I'll tell you, the best innovation over the last 300 years, three centuries, has come through open platforms. So the first thing that needs to happen in these generative AI and large clusters is open. This is really key. And a lot of the investments we're making, whether it's on the Ethernet side, the PCIe side, optics, interconnects, NICs as you mentioned, are for us to deliver an open platform based on either standards that exist or standards we're collaborating on with the industry. The Ultra Ethernet Consortium is one example; there are many others. So first, open. Two, you talked about the clusters, and the clusters require scale. I mentioned this in the previous question you asked: we're not talking anymore about a cluster of 4,096 XPUs. Today, some of the largest clusters we're seeing are in the range of 24,000 to 32,000 XPUs, okay? The problem is, that's not enough. What do people want today? What are we hearing? They want to scale beyond 100,000. They want to scale beyond 500,000. Well, guess what? The second thing is scale. How do we scale this? One more thing I want to tell you. All of this is fun until you look at the power of a single XPU, forget everything else around it. It's in the hundreds of watts today. If you go to 100,000 of these, do the math. Low power becomes key. So I want you to remember these three things: open, scale, and low power. OSP. I'm going to come back and ask you about these things. Remember that. Okay, we've studied for the quiz. First of all, I love that you use the term supercluster. Yes. Supercluster, a great way to describe it. So let's talk about the investment you're making. My question is, where are you guys investing?
Because when you have the clusters and you have NICs in there, the question will come up, what about performance? Is there any degradation of performance when you've got all these NICs and all these XPUs together? What's the performance criteria? How do you guys look at that, and how do you answer the engineering question around the performance of the NIC? Is it faster Ethernet? I know that's been increasing in speed. I love the open question, but what's the performance challenge there? Is it a problem, or how do you guys solve it? So I want to take you to the foundational technologies, because that's what we invest in ultimately, and we take all these foundational technologies and build the products we're talking about. Foundational technology for Broadcom is around connectivity. So this goes back to connect-centric, or network-centric, as the future? Absolutely, the platform is going to be the network, not the XPU, down the road as you get to these things. So, foundational technologies like SerDes. We have the best SerDes technology in the world, delivering today 100 gigabits per second with capabilities that actually exceed the standard. They're standards-based on IEEE, and we can deliver 2X the margin the standard needs. Why is this important? When I take a 100 gig SerDes and I can deliver, let's call it, 45 dB of capability, it means I can now run these networks on four-meter copper cables. When you run on four-meter copper cables, where the standard calls for two, you can completely change the way you interconnect these platforms, at much lower power and significantly lower cost. So that's one example of foundational technologies. Now take Ethernet, take the Tomahawk family. With the Tomahawk family, two orders of magnitude is what we've increased capacity by in a decade. So today we're the only company shipping in volume a single monolithic chip at 51.2 terabits per second.
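As a rough illustration of how a switch chip like that decomposes, and how far a fabric of them can scale out: the chip bandwidth below echoes the roughly 51 terabit figure quoted above, but the lane and port splits and the two-tier leaf/spine (Clos) math are generic networking arithmetic, not a description of any specific Broadcom product or deployment.

```python
# Decompose a 51.2 Tb/s switch chip into 100G SerDes lanes and ports,
# then ask how many endpoints a non-blocking two-tier leaf/spine
# fabric of such switches can attach at full bisection bandwidth.
CHIP_TBPS, SERDES_GBPS = 51.2, 100

lanes = int(CHIP_TBPS * 1000 / SERDES_GBPS)   # 512 SerDes lanes per chip
ports_400g = lanes // 4                        # 128 ports at 4 lanes each

def clos_endpoints(radix):
    """Two-tier Clos of radix-R switches: up to R leaves, each with
    R/2 ports down to XPUs and R/2 up to spines, so R*R/2 endpoints."""
    return radix * radix // 2

print(lanes, ports_400g, clos_endpoints(ports_400g))  # 512 128 8192
```

With 128-port leaves, two tiers top out around 8,192 endpoints; reaching the 24,000 to 32,000 XPU clusters mentioned earlier takes a higher radix, a third tier, or both, which is where the pressure on switch silicon comes from.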
Now, the more interesting piece, remember OSP, we're going to come back to that: that's 90% less power than what we did 10 years ago. So when you think of this foundational technology and you say, okay, what would be the next step? 200 gig. Can we deliver the same thing at 200 gig? Can we go from 51.2T to double that? We take that technology and apply it to the NICs. We take that technology and apply it to Jericho AI, and so forth. Now, to focus a little bit on the power: even though we dropped the power by 90%, and we're going now from five nanometer to future, lower-power process technologies, guess what? We need to out-innovate in a different dimension. So we're starting to bring our optics capability into the equation, and we're saying, if we bring in silicon photonics, we can co-package it with the Tomahawk 5. The power that dropped 90% will drop another 40 to 50% on top of that. So it's quite disruptive and exciting in terms of what's coming. Thank you for that explanation. The business model and the R&D model of Broadcom is very, very focused, and it has implications for the sustainability of your customer base, and also your sales and marketing costs. Can you share with the audience your philosophy? Because it's very different from what we often see in the technology industry. So we've termed and coined this capability and strategy and model the sustainable franchise. In Broadcom today, by the way, it's applicable to hardware systems and software. And by hardware, I mean semiconductor silicon. That's the genesis of this, this is where it started. So what does it mean to have a sustainable franchise? We have 26 of them today, nine on the software side and 17 on the hardware side. It comes down to, first, believe it or not, and many people don't think of it this way, the market. We spend time making sure that when we want to invest in a market, we invest for at least a decade.
Many people don't know this about Broadcom, actually. When we say we are in a certain business, we commit to 10 years plus. That is extremely important for us. Now, most people say, I want to invest in a space that has a hockey stick. Many people do that; that's not what we do. Sometimes it happens to be one, like AI and generative AI, which is great. But you know what, that's not what we're looking for. Even if a market in 10 years is declining low single digits, that is actually exciting for us. For many people, it's boring, it's legacy. Big, big, big mistake. So first, the market, 10 years plus. Two, most importantly, technology leadership. That is the most important thing we do at Broadcom. In the 26 categories, and specifically the 17 on the hardware side, we have to be the technology leader over that span of 10 years. Because if you look at a span of 10 years, in one generation company A might be the leader, and in the next generation, another company. So just to clarify, your philosophy on the market is, it's got to be big, but it doesn't have to be growing. It's got to be established. Established, big enough to do some things in, but you're not chasing growth curves. There might be growth curves you'll hit, like AI. Correct. Because it just happens to be what you're focused on. But the franchise is a durable business model, sustainable. Okay, got it. So, market, and technology leadership. That's the second one. So technology is super key. And as you said, over five billion dollars is what we invest annually, and VMware is making this larger. On top of that, we make sure technology comes from IP, and IP comes from the brains of the engineers we have. We invest heavily in our engineers, heavily. And we make sure that whatever technology we bring to that market is the best technology in the world. The third thing we do is seamless execution. And with that, we have to deliver specific business parameters for that leadership and technology in that durable market.
And each category, over a span of X amount of years, way less than five, has to be number one and sustain that leadership from a technology and a business point of view. So it comes down to these three things. The number one is the scoreboard. You've got to maintain number one. The number twos get cut, or they get a chance to get back to number one again. Look, I think we've done this for almost two decades, and we've done it across different areas. I think with the right engineers, and by ensuring you have the right level of investment, out-investing anybody else, sometimes we out-invest the entire industry in a space, it actually works incredibly well. And I've heard, and it relates to what John was just asking, I've heard Hock Tan tell Wall Street that each of those business units, what did you say, 27? 26. That they're independent, they have to stand on their own. You don't allow them to intermix, because it gets fuzzy. A lot of organizations would say, well, that's inefficient, we want to centralize everything, but you take a different philosophy: each has to drive its own P&L and be a leader. Is that right? Now, to be fair, I do have a separate dedicated team that reports to me that we call central engineering. That central engineering team does the foundational things I talked about, like SerDes, libraries, process technology. Now, once you take these foundational things and, let's say, you want to build a switch, that becomes a sustainable franchise. They're responsible for delivering it, and we have a term called no crutches. You have to be number one on your own, with your engineers. If you don't, you don't deserve to be on the platform. And that central engineering team feeds the P&Ls. It's not a P&L in and of itself. Correct, correct. Yes, you've got a great business model, great fundamentals.
Again, you lay out the land, give people the foundation, give them the investment, give them the target market to go after, give them the scoreboard metric, and go, and they do it. Correct. And you feed that. Correct. The formula works. Yes. Okay, now what's interesting in today's market is that you have all the piece parts. I mean, you simplify your business to Wall Street, we've got chips and software, I get that, and there are a lot of divisions. But the bigger picture right now is that people are actually changing their business models around their infrastructures and how they do business with their technology. The computer industry is completely resetting. I agree. So the world just spun in your direction, and now people are looking at, okay, how do I change from my data center to cloud? They did that. How do I go from distributed computing, cloud, hybrid, on-premise, and now edge and device mobility? I mean, it's the perfect storm. That's where AI seems to be hitting. So back to AI. How does someone take advantage of Broadcom? If I'm an entrepreneur, I'm a business, I need to create a new enablement model to build the next generation, the next 20 years of my business. How do you speak to that specific situation as AI unfolds? What is the enablement that Broadcom brings to the table? I knew you were going to bring us back. I knew it. We love AI. We've been drinking the AI Kool-Aid for a long time. So this is the awesome part. Actually, you're right. We're going through a big inflection point that happens sometimes once a decade, or once every two decades. So with AI, here's the model we have with the structure I described before. First, on the semiconductor side, we build merchant silicon, and that's important. And the merchant silicon is about the connect-centric, or network-centric, capability. And it is about, okay, let me see if you remember: open, scalable, and low power.
And I tell you, those capabilities that we deliver today can be applied to hyperscalers, who are absolutely investing tens of billions of dollars each, but they can also be taken all the way down to the enterprise level, to companies that want to do their own inference on-prem or build their own platform on-prem. We can do that with the merchant silicon piece, and most of the products we have play that role. So I would say more than 80 to 90% of the products we have are merchant: open, enabling scale at low power. But on the other end of the spectrum, when somebody has massive volume on such a platform, sometimes they say, I don't want the merchant play, I want a custom play. And that's the engagement model we've changed, where we said, okay, let me bring my foundational technologies, you bring your foundational technologies and software, and let's sit down and say: if you're going to build a cluster that is, call it, 32,000 processors, or 100,000, how do we do that as an open platform that is actually very scalable at very low power? Because I tell you, power is the number one issue we're having right now. And bringing that foundational technology in a custom or hybrid play is another big differentiator that I think we have the ability to offer. Do you think a company that is proprietary, and I'm trying to think of industry examples, can embrace open in a seamless way that is not disruptive? I'm trying to think of an example. Look at IBM; they're almost the exception that proves the rule. They were proprietary, and now they're much more open with Red Hat, and it's taken decades for them to get back. Some of the hyperscalers, you could say, are pretty good. What do you think about that? Is it just a company's DNA that's proprietary? Will they get stuck in that mud for a long time, or do you feel like the industry is so vibrant and open that they can respond?
I think innovation, as it sparks initially, starts with a proprietary approach, because it will be innovated, let's say, by maybe a handful of companies. In many cases, it's a single company. So as that spark happens, traditionally it is proprietary. The challenge is when it becomes such a disruptive force, just like we're discussing. I think bringing a million, or two million, or five million engineers across the ecosystem, over a span of 10 years, remember the sustainable franchise, will out-innovate any single company in the world. It takes time. It's not six months or a year, it's many, many years, and that's what we're committed to. This is why you see us invest heavily with our peers in the industry and our partners and customers. We were a founding member of the Ultra Ethernet Consortium. You know what? That's going to help us innovate. Of course, if we take the power that we have in Ethernet and go it alone, we will create for the next three years something so cool that nobody else has. But long term, is that the right thing to do for all of us? No. I remember when I was growing up in the industry, Open Systems Interconnection was a big part of the revolution that took proprietary to open. It kind of stopped at TCP, but standards at the chip-level hardware created a massive innovation wave, and wealth creation, frankly. So we're kind of in that moment again. So the question I want to ask you is this. As these clustered systems, as we call them, are emerging, and you see them everywhere, people are standing up custom clouds, and power is the constraint, so they're engineering it, and you guys are a big part of that. So congratulations. As this next level comes, where's the investment from Broadcom, and what do customers need to do to build these next-gen clusters? Is it the NIC or the switch? Because now you guys do both. NIC and switch, it's connectivity based. In the cluster it's the NIC; they work together.
It used to be that the switch was the king of the castle. Now you've got NICs, as you mentioned, connecting thousands and tens of thousands of XPUs. What's your focus there, NICs or switches, or both? What's the- Look, absolutely both. As part of open, scalable, and low power, to enable these three things, I think we've got to do it across all of this. So if I may take you back to, I think, the second question you asked, the way I'm looking at this, I'm looking at the entire infrastructure. As I said, you go back to an XPU, you have to scale up. When you scale up, there are things we're doing inside that XPU. Today, everybody, I would say, in order to build an XPU, has to go to 2.5D, and we're starting to see a path toward the chiplet. By the way, that is another disruptive area. Then you go to 3D, where you go chip on chip, or die on die. After you scale up, you have to go to scale out. Guess what? When you scale out, it's all about the switch. You have to have the best-performing switch with the lowest latency, no packet drops, and the ability to take these elephant workloads and deliver them with low latency. Then you have to interconnect them. Then you have to go to the front-end networks. So when you look at these four components, scale up, scale out, front end, and the interconnect for each of these, these are the areas where I believe, not just the NIC, by the way, and the switch, I believe the XPUs have to play a part. I believe the optics have to play a part, and the interconnects. All of these, what I call foundational-to-cluster-level, system-level cluster technologies, have to come hand in hand. I've got to ask you about chiplets. You brought up chiplets. Dave's such an interesting guy; I saw that, he lit up. But chiplets are not new, right? I mean, chiplets were around 40 years ago. Yeah, that's not new. But it's interesting. They have time-to-market benefits.
There are cost benefits. My question, to somebody who really understands deeply, technically, the connect centricity: in a monolithic system, my understanding is you've got a big shared SRAM, and all those XPUs don't have to know what the others are; they're sharing that SRAM very, very fast. With chiplets, you've got relatively slower connections, is what I understand, and they're asynchronous. So there's a huge market for that, but I see a market for both monolithic and chiplet. Am I understanding that correctly? I wonder if you could give us your perspective. So the way we're looking at this is really to help with at least two of the OSP things. And remember what OSP stood for? Open, scalable, power. You got it. Perfect. You guys found it. We're following the bouncing ball right now. So, on scale and power especially, and by the way it could apply to the O as well, but on scale and power: as we go from five nanometer to three nanometer, and three nanometer is done for us, now we're actually starting to work on, and actually have full product designs in, two. With these foundational technologies I talked about, it doesn't make sense to take all of them to every single node and keep building monolithic chips. Why? Because as you go toward two, you can't keep running that transistor faster and faster. So it actually becomes a big penalty from a cost point of view to take all these technologies all the way to two or sub-two. What happens at that point is it starts making sense to say, you know, for certain technologies, let's say SerDes, maybe it makes sense, especially mixed signal, to keep them in a certain node where the power is optimal and the cost is the lowest, and I build it once. And then the core, whether it's a switch or a processor or whatever it might be, I can keep optimizing, because it's more digital, and I'll take it to newer and newer process technologies.
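One commonly cited way to put numbers on that cost penalty, complementary to the mixed-signal point just made, is die yield: under a simple Poisson defect model, yield falls roughly exponentially with die area, and chiplets can be tested individually before packaging, so scrap is per-chiplet rather than per-assembly. The defect density and die sizes below are illustrative assumptions, not foundry figures or Broadcom's actual cost analysis.

```python
# Why disaggregating a large monolithic die can cut cost: a simple
# Poisson yield model. All parameter values are illustrative.
import math

def die_yield(area_mm2, defects_per_mm2=0.001):
    """Fraction of dies with zero defects under a Poisson model."""
    return math.exp(-area_mm2 * defects_per_mm2)

monolithic = die_yield(800)   # one 800 mm^2 die: ~45% good, 55% scrapped
chiplet = die_yield(200)      # one 200 mm^2 chiplet: ~82% good
print(f"{monolithic:.2f} {chiplet:.2f}")
```

Known-good chiplets from a mature, cheaper node can then be co-packaged with a leading-edge digital core, which is exactly the build-it-once argument for keeping SerDes and mixed signal at their optimal node.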
And so as you do this, it completely changes that cluster you talked about, and, not necessarily going all the way up beyond the data center, the exciting piece is it takes us inside the chip. It actually disaggregates that capability, and it allows us to focus on the right technology in the right platform. Hence, power will be significantly less, and it allows us to scale in a much better way. It gives you more flexibility, clearly, from what you said. Thank you for that. And I love the scale-up concept there, because if you look at distributed computing, cloud was horizontally scalable compute, that's great. And obviously you've got scaling up with the apps in the cloud. But if you go with distributed computing, edge, and on-premise cloud operations, that's essentially distributed computing. We know that. We love that. Okay, so now coming back to the market, hyperscalers are the big customer right now. Absolutely. Then there's the traditional enterprise. We're talking about IT people running general-purpose infrastructure, the switches and stuff they had before. Racks of servers, top-of-rack switch, stuff that was old school, now going to clustered systems. The big challenge we're seeing with the AI focus is that the old infrastructure might be inadequate or ineffective for some of these foundational models. If you look at the open-source movement of Llama, and now Mistral, although Mistral has got some issues, they're coming up in capabilities and adoption against some of the proprietary ones like OpenAI's. So that means there's more demand to host those. So the question is, how do the hyperscale cluster models being invested in today come down to the mainstream enterprise? Because that's going to replace those racks with clusters. Correct. And it's going to look a lot like the cloud. Correct. But it's got to be easy enough that Dell and HP and Lenovo, all your customers that were doing servers and NICs, can build the new thing that's coming. Yes, I agree with that.
Totally. Actually, as a matter of fact, another driver for this is data privacy and governance. Certain workloads cannot leave the premises. So how do you solve that? So that piece we're excited about; I think it's coming, as you said. The thing I'm also excited about is not just the enterprise. Actually, if you go back to October of '23, five months ago, Comcast announced that they're collaborating with us, and we're very proud of that, to enable AI inside the CPE at home. Which, by the way, you have chips in: the set-top boxes. Set-top boxes, gateways, next-generation PON systems. And I tell you, there are two benefits coming out of that. So we're seeing this capability move all the way to the edge, not just to the enterprise, and we actually have that in production today. Two advantages. The first: using that capability and these models to completely change OPEX. OPEX can be significantly lowered by using that technology all the way down to the CPE, because there's so much data today that is not mined. You can train on it, run inference, and, instead of having technicians dispatched reactively, proactively eliminate those dispatches. Two, ARPU, and that's part of why I'm here, going back to your first question. This capability, what we call NPU on-premise, which we have today in production, and this is not a PowerPoint or sampling, we have it, can enable new services for the service providers and operators today. They're excited about it. We're collaborating with them. We're enabling that at the edge today. And you're seeing uptake there today. So it's our power law of AI. Like we said, that very domain-specific AI is going to happen on-prem. That's where the data is. It just makes so much sense. Exactly. And the enablement is just going to be off the charts. So if you connect the dots: as the evolution of this embryonic market continues, the enterprise gets reset with a new architecture called clustered systems.
That's my word. I think that's a good word. I like that word, by the way. It replaces the server; you're just clustering it. And power is the constraint. Let's call it the evolution of the server. The evolution of the server, which is a good thing. We want more power, of course. Open, scalable, low power. See? You got it. Quick study here. And then the software will run it. So the enablement is that the apps will run on multiple devices, whether it's set-top boxes or CPE. Including devices. Yeah, I mean on-premise equipment. So all this is going to be good. The question on the business model front, because this has been kind of a masterclass on both product and business: what should people be thinking about on the business model side? As an enterprise starts to rethink its architecture, what would you advise the CISO, the CIO, the CEO? We're going to enable this digital infrastructure. It's a transformational journey, but it's now changed. The game has changed. So let me first start where tens of billions of dollars are being spent by a single hyperscaler, and there are several of these. Let's start with the business model there before we go to the enterprise. Because if you're running that budget and you're spending 30-plus billion a year, you'd better have a way to monetize this and some level of ROI. So starting with that, our view might be a little bit different from the industry's. Our view is actually a big picture: we see that this market really has two prongs. One is consumer led and driven. The other is enterprise. Let's start with the consumer side. The consumer side is when you have a large consumer base, hundreds of millions, sometimes billions. And the model to monetize there is the engagement with the consumer. As you're able to provide higher-quality content and reels, let's say, to your consumers, the engagement model, which ultimately translates down to eyeballs, is successful today.
So we've seen that with a big check mark. And that's good, because I think that segment will continue to invest significant amounts of money for the foreseeable future. Good for all of us, including Broadcom. Now let's pivot to the other segment, which is enterprise. Our view: we are not there yet. People are investing in big, big clusters, but the uptake is not there. Now the question is, will people continue to invest billions and tens of billions in this space, or will somebody in '24 or '25 come back and say, look, we have not figured it out, let's back off? Honestly, we don't know right now.

Nor do we. But the fact that you started with consumer is so important, and you have visibility into that, because that's where the volume occurs. That's where the innovation always occurs. We've seen it time and time again. And by the way, here at the telco show, we asked some of the telcos what they think the disruption of AI is, and they said you've got to go to the device with the data and bring that back. So the life cycle of the data flows again, back to the evolution of the server. By the way, it's tens of thousands of servers now. So it's selling a lot of servers. Your clients are probably pretty happy about this wave coming.

Absolutely, and we're happy about it too.

It's an amazing conversation, because you're saying Broadcom doesn't chase the S-curve and the waves. And yet you are, maybe not the number one AI company right now, but certainly one of the top: NVIDIA, Broadcom, and the hyperscalers. You're on the wave. You're right in that mix, and you happen to be thrown into that wave. You're not chasing the wave, you're in the wave.

Yeah, and for the record, by the way, we do a lot of collaboration and great work with NVIDIA. They're actually a great customer for me, and one of my fastest growing customers. So we collaborate and work with everybody.

Going back to the OSP, remember, Broadcom. It's a big wave, Dave, you've got to surf on there.
So, final question, we've got to wrap up; we're way over time. It's like a podcast, a master class. Thank you for coming on. The final question is a personal one. I know you have multiple engineering degrees, and you've mentioned engineering multiple times. We are in probably the greatest renaissance of engineering right now. A new generation is coming in, AI is attracting a lot of young talent, and there are new problems to solve. What's your vision for engineers out there who want to solve problems? What are some of the problem spaces where you see opportunities for people to come in and bring their engineering minds and talent? It could be materials, software, signaling. What is the big opportunity?

So I tell you, I have four children, and I am happy to say that two of them will be computer engineers. One just graduated and is doing his PhD in AI, actually. The others are still in college, and one is finishing high school. I think this is the best time to be in engineering. We're biased, but this is the best time. If you go down to the materials, the level of innovation that needs to happen now as we hit two nanometers and sub-two-nanometer is phenomenal. Then, how do you take these types of wafers and start stacking these chips in 3D? Because look, a single chip now at the reticle limit is 800 square millimeters. It's done, it's over: we have so many dice that are already at 800. We need to stack them. So packaging, or advanced packaging I should say, will be the next level to invest in and work in. We're investing heavily in those first two elements. The third piece, I would say, is what types of software models we can now innovate in, where we can do the training in the cloud, for example, or on-prem, but take the inference all the way down to these types of devices. I think that area is probably the most exciting and unknown.
We still don't know what we don't know, because we don't know what types of innovation and models are coming. Think about 24 years ago, when the dot-com bubble happened. How many companies were out there? And if you fast-forward two and a half decades, only a handful of those companies not just survived but thrived in a much bigger way, with trillion-dollar-plus valuations. The same thing, I think, will happen over the next two decades. And we at Broadcom are super excited to be playing not just in the materials and semiconductor wave, but with-

So you see a great entrepreneurial opportunity coming.

Totally, and for us that includes VMware and hopefully even future things down the road.

Charlie, thank you so much for your time. Again, we went over; this is like a master podcast class. Thank you for spending your very valuable time and sharing it with us on theCUBE. We appreciate it.

Thank you.

Charlie Kowalski, president of Broadcom, kicking off day three of live coverage. I'm John Furrier with Dave Vellante, bringing you the great content here from Mobile World Congress. Stay with us for more live coverage after this short break.