From theCUBE Studios in Palo Alto and Boston, bringing you data-driven insights from theCUBE and ETR. This is Breaking Analysis with Dave Vellante. We attended both Nvidia GTC and Broadcom's Investor Day this week, where the AI era was on full display. In our view, GTC24 was the single most important event in the history of the technology industry, surpassing Steve Jobs' iPod and iPhone launches. The event was not the largest, but in our opinion it was the most significant in terms of its reach, vision, ecosystem impact and broad-based recognition that the AI era will permanently change the world. Meanwhile, Broadcom's first Investor Day underscored both the importance of the AI era and the highly differentiated strategies and paths that Nvidia and Broadcom are each taking. We believe Nvidia and Broadcom are the two best positioned companies to capitalize on the AI wave and will each dominate their respective markets for the better part of a decade. But importantly, we see them each as enablers of a broader ecosystem that collectively will create more value than either of these firms will in and of themselves. Hello and welcome to this week's theCUBE Research Insights powered by ETR. In this Breaking Analysis, we'll share our perspectives on the state of AI and how Nvidia and Broadcom are leading the way with dramatically different, overlapping strategies that may eventually put them on a collision course down the road. Now, you'd never know it from all the AI hype that's going on, but the macro is actually softening. ETR shared this chart earlier this week with its private clients. It's preliminary, but it shows the expected IT spending growth rates from more than 1,500 IT decision-makers for three time periods: October of 2023, January of 2024 and the most recent survey, which is in the field. Note the material drop in Q1 from 2.8% to 1.5% year-over-year growth and a more than 100 basis point drop in the expectations for the full year 2024.
Now, as we reported earlier this year, the tech spending outlook is very much back-loaded toward the second half and even the fourth quarter, and it further deteriorated from the January survey. Now that said, we're seeing countervailing effects, and the AI trade is actually broadening. The exceptional performance of the market in 2023 was extremely narrow, with just a few names powering the tech rally. Both NVIDIA and Broadcom were part of that momentum, along with Microsoft and some other names. 2024 is shaping up to have much broader participation in the AI tailwind. Companies like Ansys, Cadence, Synopsys, Dell, Micron, IBM, Supermicro, Pure Storage, Vast Data, ServiceNow, CrowdStrike, ARM, UiPath, Lam Research and many others are benefiting from the froth around AI generally and GTC24 specifically. And here we show the recent movement post-GTC for Ansys, Cadence, Synopsys and Dell, each of whom got major shout-outs from Jensen Huang in his keynote on Monday. You can see they all moved up nicely, and we believe it's from a combination of the GTC froth and strong fundamentals, including demand outstripping supply for these companies. In the case of Ansys, Cadence and Synopsys, they are critical design software providers for silicon chips. In the case of Dell, the company's supply chain leverage is putting it in a very good position to secure GPU supply. Its execution is throwing off cash that it's returning to investors in the form of dividends and stock buybacks. As well, it's benefiting from the AI tailwind Jensen Huang underscored in his keynote when he said there is no company better positioned to deliver end-to-end AI systems than Dell. Wow. And Michael Dell was in the front row, and Jensen said, hey, there he is, give him your order. And Dell's booth was packed. Now we also show in this chart Cisco. This is a company that's not considered an AI leader despite the fact that it has a robust portfolio that includes many AI innovations.
Cisco has silicon to power AI networking. It's got AI in its own networks. It's got AI in its collaboration software with WebEx. It's got data from the recently completed Splunk acquisition. It's got ThousandEyes, which provides network intelligence data and telemetry information that can feed Splunk. And if they can put Splunk and data at the core of their business, they can really power AI throughout their portfolio. But the macro is hurting Cisco right now, as the company has disappointed investors in the last two quarters. Snowflake is another company that in theory should be benefiting from AI because it houses so much critical analytic data. It has partnerships with the likes of NVIDIA. It has acquired AI expertise in the form of Neeva. But investors were shaken by Frank Slootman stepping down as CEO, and the street is waiting for Snowflake's new leader, Sridhar Ramaswamy, to prove that Snowflake is still on track. Look, AI is everywhere, but it's still in the experimental phase in the enterprise. Our premise is that while the macro has been challenging, AI will be infused into virtually all sectors and will eventually be a rising tide for all areas of tech. However, at the moment, while the ROI of AI is clear for consumer internet companies like Google and Meta, the business impact for enterprise AI has not been as evident. Until it shows up in the quarterly numbers, we expect CFOs to remain conservative with their budget allocations. The point is, while the rally is broadening, it is still not a tide that lifts all ships. Nonetheless, a broadening of the rally signals that investors are concerned about the valuations of the leading companies being too high, and they're making the case for putting money to work with other AI beneficiaries that could give them better returns. As well, others like the Cisco and Snowflake examples we shared will eventually benefit, in our opinion.
These are signals, in our view, that there's still plenty of upside in this AI run. We understand it's just a matter of time before the other AI winners become highly valued and the market gets toppy, but we're not there yet in our view. Now, NVIDIA GTC was a huge event, as we said, and we want to share some of the big takeaways. It was a remarkable moment for our industry. The event was held at the San Jose Convention Center, a venue that was filled to the max back in 2019, the last time GTC was an in-person event. GTC24 was far too large for this venue. The keynote had to take place at the SAP Center, which is about a mile from the main event, and the main venue was packed, the exhibit halls full for days; Monday was the biggest day. Perhaps as many as 30,000 people attended over the course of the week. But what made GTC so important was the reach, not only in the tech community, but with many presenters outside of the tech community. It was stunning to see how many folks at GTC were from automotive, healthcare, financial services; virtually every industry was represented. So, big takeaways, and there are many; we're not going to touch on all of them today, but we'll give you a few high-level ones, starting with Jensen's keynote and his vision for the industry, what he calls accelerated computing and the building of AI factories. The new vision he put forth during the week was a world of generated content. He said the world of generated content is here, and the images in his keynote, he said, were all simulation, not animation, and the simulations were very impressive, presumably prompted by AI. He also put forth a vision of a token economy, where the computer sees tokens and these tokens can be generalized. His contention is that if you can generalize, then you can talk to it and generate images, perform gene analysis, control robotic arms and more.
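Jensen's token-economy idea can be loosely illustrated in code: if text, robot motions, and gene sequences are all mapped into integer tokens, a single sequence model sees them the same way, which is what makes the generalization possible. Here's a minimal Python sketch; the vocabularies and mappings are invented for illustration and are not NVIDIA's actual tokenizers:

```python
# Toy illustration of the "everything is tokens" idea from Jensen's keynote.
# All vocabularies below are invented for illustration -- real models use
# learned tokenizers with tens of thousands of entries.

def tokenize(sequence, vocab):
    """Map any discrete sequence into integer tokens via a vocabulary."""
    return [vocab[item] for item in sequence]

# Three very different modalities, each reduced to the same representation:
text_vocab   = {"the": 0, "robot": 1, "waves": 2}
motion_vocab = {"lift_arm": 10, "rotate_wrist": 11, "open_hand": 12}
gene_vocab   = {"A": 20, "C": 21, "G": 22, "T": 23}

text_tokens   = tokenize(["the", "robot", "waves"], text_vocab)
motion_tokens = tokenize(["lift_arm", "rotate_wrist", "open_hand"], motion_vocab)
gene_tokens   = tokenize(["A", "C", "G", "T"], gene_vocab)

# A single sequence model that only ever sees integers can, in principle,
# be trained to continue any of these streams -- that is the generalization.
for name, toks in [("text", text_tokens), ("motion", motion_tokens), ("gene", gene_tokens)]:
    print(name, toks)
```

The point of the sketch is that once the modality-specific mapping is done, language, movement and gene analysis all look like the same problem to the model.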
Jensen said that Blackwell, its new superchip named after mathematician David Blackwell, was built for the Gen AI moment. He described a reinforcement learning loop where training, learning and generation become part of the same process, and if done well and enough times, it becomes, he said, reality. He implied that this workflow would perform both training and inference, and one of the critical enablers is NVLink, a high-speed interconnect that is proprietary to NVIDIA. He said Blackwell took three years and 25,000 people to develop and cost $10 billion to create. It has incredible complexity; I believe the number was 600,000 parts, lots of software to do switching, all the links, the cables, all the plumbing. And he also said they built the entire system to work; he actually showed pictures of racks and data centers. So they get the system to work, then they disaggregate it and sell it all to customers. That system view, I think, has some system vendors a little bit concerned, but we'll save that for another day. Jensen laid out a vision for developers and sort of a new development paradigm with NIMs, which are inference microservices delivered as containers, and NeMo, a platform for developing end-to-end generative AI using retrieval-augmented generation (RAG) at very high performance. And he discussed the AI Foundry, which packages all this technology into systems, as I was saying, and essentially changes the way data centers are built and thought of. He also showcased Omniverse, talked about digital twins at length, and had robots up on stage with him. In a private meeting, Jensen told us that within eight to 10 years, robots will be generally fully functioning. And he said this will be done with, quote, just a bunch of tokens, I think meaning tokenizing movement.
He said, we've got lots of data, and there's a finite vocabulary around things that humans do, like walking, sprinting, skipping, throwing, hugging. We know what that looks like, and that can be simulated with AI very accurately by being tokenized and made generative. It's a different vision than you hear from most AI leaders. A key, he said, is to be able to ground Omniverse in the laws of physics. He said AI doesn't understand the laws of physics, though he said Sora kind of does, and he described a cycle: you start by imitating, then you learn, then you adapt further, then you generalize, then you reinforce, and then you ground the AI in physics. As well, the ecosystem was on display at GTC. Virtually every company with a play in AI was there basking in the AI glow, hence the broadening of the AI tailwind we're seeing post-GTC. And you left the event truly believing that we are seeing tangible proof points of a new era emerging and coming into full swing. The days of Moore's Law driving industry innovation are over; a new performance curve has emerged, and new styles of computing will absolutely dominate virtually all sectors for the next decade. It'll be infused into software and hardware and networking and industry. That was the strong feeling one got when assessing the impact of GTC 2024. Now, switching gears a little bit, a different but very compelling AI vision was put forth by Broadcom. Broadcom held its first-ever investor day in San Jose on Wednesday morning. It was attended by both buy-side and sell-side analysts, and only a few industry analysts were invited. We had the pleasure of being there. This was Charlie Kawwas' event to educate the investment community on Broadcom's unique approach in the market. Charlie Kawwas runs the semiconductor division and businesses at Broadcom. And we learned a lot about Broadcom's strategy.
Its philosophy, its technology, its execution ethos, its engineering talent, and we saw demos of some of the most leading-edge silicon technology on the planet. In fact, there were five demo areas. We spent most of our time in this one, the purple one, demo zone number five for AI accelerators, and I'll come back to that. They had a demo zone for AI systems, one for AI connectivity, one for AI optics and one for the overall foundational AI technologies, like SerDes, that are the mainspring of Broadcom's innovations. Broadcom has 26 divisions, or P&Ls, and 17 of them are in the semiconductor group. Semis account for nearly $30 billion, I think it's $28 billion, in revenue last year. It's growing in the double digits, like 13%, I believe, last year. And R&D spending is $3 billion annually. The company took us through the history of Broadcom, how it came to be, with roots in iconic names like Bell Labs, HP and HP Labs (they did some amazing stuff out of HP Labs), LSI Logic, Avago, Broadcom itself, Brocade, and more recently, software assets like CA, Symantec, and of course, VMware. Charlie Kawwas took us through the company's unique three-pronged strategy, which is shown here. They don't look for hockey sticks to jump on an S-curve. Rather, Broadcom looks for durable markets with sustainable franchises that have a decade or more of runway. In those markets, they go for technology leadership, they invest in R&D, they really focus, they build a massive engineering team, and then they execute to a very firm plan. So look, this is a nice PowerPoint, but what was most impressive to us is that the five presenters that came on after Charlie Kawwas each demonstrated a deep understanding of their markets and the history of their businesses. So that's the durable franchise, steeped in a deep understanding of the history and durability of the business.
And then each presenter demonstrated technology leadership, showing us when they entered the market and how they were first with innovations like PCIe. In AI, they started doing AI chips in 2014. They showed us their roadmaps and their execution timelines, really strong proof points. There were many: AI networks from Ram Velaga, server interconnects from Jas Tremblay, optical interconnects from Near Margalit. Then the foundational technologies; this is not a P&L, but a mainspring of technology that all the divisions can take advantage of, things like SerDes, and they're shared across the P&Ls. And we heard from Vijay Janapardi and Frank Ostojic about the custom AI accelerators; that was the Zone 5 we talked about in the purple, this one here. Now the big news, and why the stock popped in addition to the credibility of the presentations, was the announcement of a third custom silicon customer. Customer number one, for sure, we know is Google; it's been a custom silicon customer of Broadcom for probably a decade. Number two was Meta, we think. It's been a customer for probably the past four years. And you remember, Hock Tan joined the board of Meta just recently. And then Charlie Kawwas announced they have a third custom silicon customer. Of course, we were all trying to guess who it was. I think it's ByteDance, but it wasn't disclosed. The reason I think it's ByteDance: in the Q&A, one of the analysts, fishing, asked Kawwas whether the technologies are restricted from being sold to places like China. And his answer was something to the effect of, currently there are no restrictions on their technologies in terms of where they can sell. So I think ByteDance, the owner of TikTok, can't be ruled out. And there are other reasons why we feel that way. But the big takeaway from this part of the discussion was the value of custom silicon.
And there were two points that Charlie Kawwas made that we'll share. First is the business case for consumer AI at companies like Google and Meta, and, I'm suspecting, TikTok's ByteDance. They have very strong business cases. The bigger the AI cluster they can build, keep up and running and keep loaded and utilized, the better AI they have, the better they can predict what their customers are going to want to consume, the better they can serve content to those customers, and the more ad revenue they can generate. So there's kind of a proportional relationship between the size of the AI cluster and the amount of money these consumer internet companies are going to make. And again, that's another reason we feel it's more likely ByteDance than, say, an Apple, which doesn't seem to be front and center in AI yet, or a Tesla, which probably doesn't have the volume. Amazon's consumer business, again, we think social media is where the ROI is. So that's why we think TikTok, or ByteDance. Now the second big point that Charlie Kawwas left us with is that the consumer internet giants have very specific workloads. And if you can customize silicon for those workloads, you can significantly cut costs and power. There was one example where they showed a chip that saves 80 watts. You can imagine that over a million chips; it's just significant for these data center operators. So we think customer number three, again, is ByteDance. As I say, it could be Amazon, could be Apple, could be Tesla, but we think ByteDance makes the most sense. And the reason this is important is because these are durable customers with long life cycles, and it's a highly differentiated and very difficult business to displace. And it's good margin, really good margin, because it's hard. And this AI-driven era is going to really contribute to Broadcom's AI revenue. So, Broadcom really wasn't hyping the AI wave early on.
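To put that 80-watt figure in perspective, here's a rough back-of-envelope calculation. The chip count comes from the "over a million chips" comment above; the continuous utilization and the electricity price are our own illustrative assumptions, not Broadcom's numbers:

```python
# Back-of-envelope: what an 80 W per-chip saving means at data center scale.
# Chip count is from the "over a million chips" comment; hours and
# electricity price are illustrative assumptions.

watts_saved_per_chip = 80
chips = 1_000_000
hours_per_year = 24 * 365              # assume continuous operation

megawatts_saved = watts_saved_per_chip * chips / 1e6
mwh_per_year = megawatts_saved * hours_per_year

price_per_mwh = 70                     # assumed $/MWh; varies widely by region
annual_savings = mwh_per_year * price_per_mwh

print(f"{megawatts_saved:.0f} MW saved")                     # 80 MW saved
print(f"{mwh_per_year:,.0f} MWh/year")                       # 700,800 MWh/year
print(f"${annual_savings:,.0f}/year at ${price_per_mwh}/MWh")
```

Even under these rough assumptions, an 80-watt saving per chip works out to roughly 80 megawatts of capacity and tens of millions of dollars in annual power cost at million-chip scale, which is why custom silicon is so compelling to the consumer internet giants.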
In fact, at one point on one of the earnings calls, Hock Tan said, to us, it's just another workload. But the company increased its AI contribution forecast, saying that 35% of its semiconductor revenues are going to come from AI in 2024. That's a 10 percentage point increase from the previous forecast, and it's up from less than 5% in 2021. So they had forecast 25%, they're now up to 35%; it's becoming a major portion of Broadcom's revenue. There's so much to unpack from Broadcom's meeting. We have 89 slides of content and dozens of pages of notes from the event. But one of the key points we want to leave you with is shown here. This was one of the most stunning proof points. Broadcom's premise is that we're moving from a world that is CPU-centric to one that is connectivity-centric. We've talked about this before. The emergence of alternative processors beyond the CPU, like the GPU, the neural processing unit, the LPU or language processing unit (call them XPUs), requires high-speed connections between them. And that is Broadcom's specialty, and the power of its business model is shown in this slide. This is an example of the type of system needed to support AI workloads. Broadcom technologies are shown in red, so that's their served available market. The company's technology surrounds the AI infrastructure. Now imagine this configuration scaling up and out to many, many clusters. And the key takeaway is shown on all of Broadcom's slides. It's like Charlie Kawwas' mantra: open, scalable, and power efficient. Broadcom is focused on open technology standards. Two examples are PCIe, where they were an early innovator, and Ethernet; they were a founder of the Ultra Ethernet Consortium, which will allow the massive scale-out of these clustered systems. As John Furrier says, we're entering the era of clustered systems in AI, and it's a new type of clustering.
It's got internal switching that's very, very fast, and it's got external switching as well. And there are only two companies on the planet that are really in a position to provide that: NVIDIA and Broadcom, NVIDIA taking a proprietary approach, Broadcom an open approach. So here's a very nuanced but interesting tidbit that was shared by Jas Tremblay about Blackwell. Blackwell is NVIDIA's giant GPU, just as a reminder. The southbound connectivity is proprietary NVLink, a very important technology that NVIDIA had to develop in order to make Blackwell work. But the northbound connection is PCIe. So even NVIDIA, which has a highly proprietary architecture built around InfiniBand from the Mellanox acquisition, must play in the open standards world. It was kind of interesting in the meeting because, you know, Broadcom and NVIDIA kind of go at it around Ethernet and InfiniBand, but Kawwas mentioned that Broadcom has a great relationship with NVIDIA, and NVIDIA is its fastest-growing customer. So you can begin to see how compelling Broadcom's business model is, because it sells to consumer internet companies, device manufacturers like Apple, hyperscalers, and enterprise players like Dell and HPE and others. As well, the clear takeaway is that Broadcom does the really hard silicon work that others can't do. The example given, one that a lot of folks can relate to, is that anybody can ski the green trails, the green circles. But those double blacks, the extreme skiing with the warning signs and the skeletons, Broadcom does that really well. They love that business. There's a huge barrier to entry there, and the margins are high. So look, there's so much more we could talk about from this meeting, and we'll share more over the coming weeks and months as we digest this material.
I mean, especially this InfiniBand versus Ethernet discussion, which was fascinating: Jensen essentially saying that Ethernet is pretty much useless for AI workloads, and Broadcom countering with many, many proof points of AI leaders like Meta, Google and others that are adopting Ethernet. And of course, Ultra Ethernet is what's going to allow the industry to scale out on open standards. Open, over the long run, generally wins. We'll see how much runway NVIDIA has. Personally, I think it's significant because of the CUDA software and the lead they have there. But again, open systems have always proven that the innovation flywheel can catch up with proprietary. So we're going to leave you with the following thoughts. The AI tailwind is broadening, and we're seeing unprecedented AI demand for some companies. That's clearly the case with NVIDIA. Everybody's lining up to get Blackwell: the big three hyperscalers are getting theirs, the specialized clouds like CoreWeave and Genesis and others are very well funded, and Dell is leveraging its supply chain to get those GPUs, as are HPE and Supermicro. So there's strong demand for those GPUs, and companies like Synopsys and Cadence and Ansys, which provide design software, and Lam Research and obviously TSMC are seeing unprecedented demand for these high-performance and highly complicated AI chips and AI infrastructure. Others are more susceptible to the macro right now. We gave two examples, Cisco and Snowflake. They've got some other issues that they're working through, but they're going to have to prove out their AI innovations and demonstrate to their customers that they can drive value. We think they will, but they've got to show that they can do that and execute if they're going to participate in this AI updraft.
NVIDIA and Broadcom have dramatically different strategies and routes to market, but both have sustainable moats with what we think are very long runways. There's a lot of discussion, and I think confusion, around training versus inference. NVIDIA is doing a lot of training, although on their earnings call, Jensen said that over 40% of their revenue came from inference. Now, a lot of that inference was ChatGPT, but nonetheless, NVIDIA's got solutions for inference. As costs come down and the requirements shift toward inferencing, though, NVIDIA is being challenged by some of the competition. So you're seeing a lot of folks with knives out, trying to attack NVIDIA, particularly in inferencing, because frankly, in training they've got the high watermark, and that will continue. But it's going to be interesting to see: will today's training infrastructure become tomorrow's inferencing infrastructure, with depreciated assets that'll do inferencing just fine? How will NVIDIA approach that? NVIDIA announced a number of software tools that they'll license for $4,500 per GPU per year to build AI, to build LLMs, to do RAG, and a lot of that's going to go toward inference. So that's a relatively low-cost entry with the software-defined model that NVIDIA has. Broadcom, really, is 100% going to participate in both training and inference, because it doesn't care what you do. It's just going to supply the connectivity for any workload. Now, again, as we said, consumer AI ROI is very clear. And the really important point here is that consumer markets always lead, and eventually the enterprise adopts. AI ROI has to be tangible in the enterprise for the macro headwinds to really subside. And that's something we're watching very closely. All right, that's it for now. Thanks to Alex Myerson and Ken Schiffman on production, and Alex does our podcast.
Kristen Martin and Cheryl Knight helped get the word out on social media and in our newsletters, and Rob Hof is our EIC over at SiliconANGLE.com. Remember, all these episodes are available as podcasts wherever you listen. All you've got to do is search Breaking Analysis podcast. They publish each week on theCUBEresearch.com and SiliconANGLE.com, and you can email me at david.vellante@siliconangle.com or DM me @dvellante. Comment on my LinkedIn posts, and please check out ETR.ai; they have the best survey data in the enterprise tech business. This is Dave Vellante for theCUBE Research Insights powered by ETR. Thanks for watching, everybody. We'll see you next time on Breaking Analysis.