From theCUBE Studios in Palo Alto and Boston, bringing you data-driven insights from theCUBE and ETR. This is Breaking Analysis with Dave Vellante. NVIDIA wants to completely transform enterprise computing by making data centers run 10x faster at one-tenth the cost. And NVIDIA's CEO, Jensen Huang, is crafting a strategy to re-architect today's on-prem data centers, public clouds and edge computing installations with a vision that leverages the company's strong position in AI architectures. The keys to this end-to-end strategy include a clarity of vision, massive chip design skills, a new ARM-based architecture approach that integrates memory, processors, IO and networking, and a compelling software consumption model. Even if NVIDIA is unsuccessful at acquiring ARM, we believe it will still be able to execute this strategy by actively participating in the ARM ecosystem. However, if its attempts to acquire ARM are successful, we believe it will transform NVIDIA from the world's most valuable chip company into the world's most valuable supplier of integrated computing architectures. Hello everyone and welcome to this week's Wikibon Cube Insights, powered by ETR. In this Breaking Analysis, we'll explain why we believe that NVIDIA is in the right position to power the world's computing centers and how it plans to disrupt the grip that x86 architectures have had on the data center for decades. The data center market is in transition. Like the universe, the cloud is expanding at an accelerated pace. No longer is the cloud an opaque set of remote services, as we say, somewhere out there, sitting in a mega data center. No, rather the cloud is extending to on-premises data centers. Data centers are moving into the cloud, and they're connecting through adjacent locations that create hybrid interactions. Clouds are being meshed together across regions and eventually will stretch to the far edge. This new definition or view of cloud will be hyper-distributed and run by software. 
Kubernetes is changing the world of software development and enabling workloads to run anywhere. Open APIs, external applications and expanding digital supply chains, together with this expanding cloud, all increase the threat surface and the vulnerability of the most sensitive information that resides within data centers around the world. Zero trust has become a mandate. We're also seeing AI being injected into every application, and it's the technology area we see with the most momentum coming out of the pandemic. This new world will not be powered by general-purpose x86 processors. Rather, it will be supported, in our opinion, by an ecosystem of ARM-based providers that are effecting an unprecedented increase in processor performance, as we have been reporting. And NVIDIA, in our view, is sitting in the pole position and is currently the favorite to dominate the next era of computing architecture for global data centers, public clouds, as well as the near and far edge. Let's talk about Jensen Huang's clarity of vision for this new world. Here's a chart that underscores some of the fundamental assumptions that he's leveraging to expand his market. The first is that there's a lot of waste in the data center. He claims that only half of the CPU cores deployed in data centers today actually support applications. The other half are processing the infrastructure all around the applications that run the software-defined data center, and they're terribly underutilized. NVIDIA's BlueField-3 DPU, the data processing unit, was described in a blog post on SiliconANGLE by analyst Zeus Kerravala as a complete mini server on a card. I like that. With software-defined networking, storage and security acceleration built in. This product has the bandwidth and, according to NVIDIA, can replace 300 general-purpose x86 cores. Jensen believes that every network chip will be intelligent, programmable, and capable of this type of acceleration to offload conventional CPUs. 
He believes that every server node will have this capability and enable every packet and every application to be monitored in real time, all the time, for intrusion. And as servers move to the edge, BlueField will be included as a core component, in his view. And this last statement by Jensen is critical in our opinion. He says AI is the most powerful force of our time. Whether you agree with that or not, it's relevant because AI is everywhere, and NVIDIA's position in AI and the architectures the company is building are the fundamental linchpin of its data center enterprise strategy. So let's take a look at some ETR spending data to see where AI fits on the priority list. Here's a set of data in a view that we often like to share. The horizontal axis is market share or pervasiveness in the ETR data. But we want to call your attention to the vertical axis. That's really where we want to pay attention today. That's net score, or spending momentum. Exiting the pandemic, we've seen AI capture the number one position in the last two surveys. We think this dynamic will continue for quite some time as AI becomes a staple of digital transformations and automations. And AI will be infused in every single dot you see on this chart. NVIDIA's architectures, it just so happens, are tailor-made for AI workloads. And that is how it will enter these markets. Let's quantify what that means and lay out our view of how NVIDIA, with the help of ARM, will go after the enterprise market. Here's some data from Wikibon Research that depicts the percent of worldwide spending on server infrastructure by workload type. Here are the key points. First, the market last year was around $78 billion worldwide and is expected to approach $115 billion by the end of the decade. This might even be a conservative figure. Now we've split the market into three broad workload categories. The blue is AI and other related applications, what David Floyer calls matrix workloads. The orange is general purpose. 
Think things like ERP, supply chain, HCM and collaboration, basically the Oracle, SAP and Microsoft work that's being supported today, and of course that of many other software providers. And the gray, that's the area Jensen was referring to as being wasted: the offload work for networking and storage and all the software-defined management in data centers around the world. Okay, you can see the squeeze that we think is going to occur around that orange area, the general-purpose workloads that we think are really going to get squeezed in the next several years on a percentage basis. And on an absolute basis, it's really not growing nearly as fast as the other two. NVIDIA with ARM, in our view, is well positioned to attack that blue area and the gray area, those workload offloads and the new emerging AI applications. But even the orange, as we've reported, is under pressure as, for example, companies like AWS and Oracle use ARM-based designs to service general-purpose workloads. Why are they doing that? Cost is the reason, because x86 generally and Intel specifically are not delivering the price performance and efficiency required to keep up with the demands to reduce data center costs. And if Intel doesn't respond, which we believe it will, but if it doesn't act, ARM we think will get 50% of the general-purpose workloads by the end of the decade. And with NVIDIA, it will dominate the blue, the AI, and the gray, the offload work. When we say dominate, we're talking like capture 90% of the available market if Intel doesn't respond. Now Intel, they're not just going to sit back and let that happen. Pat Gelsinger is well aware of this and is moving Intel to a new strategy, but NVIDIA and ARM are way ahead in the game, in our view. And as we've reported, it's going to be a real challenge for Intel to catch up. Now let's take a quick look at what NVIDIA is doing with relevant parts of its pretty massive portfolio. 
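As an aside, the growth implied by those market figures is easy to sanity-check. Here's a quick sketch of the implied compound annual growth rate, going from last year's $78 billion to the roughly $115 billion end-of-decade figure; the ten-year horizon is our assumption for illustration.

```python
# Back-of-envelope CAGR implied by the server infrastructure market
# figures above. The 10-year horizon is an assumption for illustration.
def cagr(start_value: float, end_value: float, years: int) -> float:
    """Compound annual growth rate over the given number of years."""
    return (end_value / start_value) ** (1 / years) - 1

implied = cagr(78e9, 115e9, 10)
print(f"Implied CAGR: {implied:.1%}")  # roughly 4% per year
```

In other words, the headline growth is modest; the interesting story is the shift in the mix of workloads within that total, not the total itself.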
Here's a slide that shows NVIDIA's three-chip strategy. The company is shifting to ARM-based architectures, which we'll describe in more detail in a moment. The top line of the slide shows NVIDIA's Ampere architecture, not to be confused with the company Ampere Computing. NVIDIA is taking a GPU-centric approach, no surprise, obvious reasons there, that's their stronghold. But we think over time it may rethink this a little bit and lean more into NPUs, the neural processing unit. We look at what Apple's doing, what Tesla is doing, and we see opportunities for companies like NVIDIA to really go after that. But we'll save that for another day. And NVIDIA has announced its Grace CPU, a nod to the famous computer scientist Grace Hopper. Grace is a new architecture that doesn't rely on x86 and much more efficiently uses memory resources. We'll again describe this in more detail later. And the bottom line there, that roadmap line, shows the BlueField DPU, which we described as essentially a complete server on a card. In this approach, using ARM will reduce the elapsed time to go from chip design to production by 50%. We're talking about shaving years down to 18 months or less. We don't have time to do a deep dive into NVIDIA's portfolio, it's large. But we want to share some things that we think are important. And this next graphic is one of them. This shows some of the details of NVIDIA's Jetson architecture, which is designed to accelerate those AI-plus workloads that we showed earlier. And the reason this is important, in our view, is because the same software stack supports systems from very small to very large, including edge systems. And we think this type of architecture is very well suited for AI inference at the edge as well as core data center applications that use AI. And as we've said before, a lot of the action in AI is going to happen at the edge. So this is a good example of leveraging an architecture across a wide spectrum of performance and cost. 
Now we want to take a moment to explain why the move to ARM-based architectures is so critical to NVIDIA. One of the biggest cost challenges for NVIDIA today is keeping the GPU utilized. Typical GPU utilization is well below 20%. Here's why. The left-hand side of this chart shows essentially racks, if you will, of traditional compute and the bottlenecks that NVIDIA faces. The processor and DRAM are tied together in separate blocks. Imagine there are thousands of cores in a rack. Every time you need data that lives in another processor, you have to send a request and go retrieve it. It's very overhead intensive. Now, technologies like RoCE, RDMA over Converged Ethernet, are designed to help, but they don't solve the fundamental architectural bottleneck. Every GPU shown here also has its own DRAM, and it has to communicate with the processors to get the data, i.e. they can't communicate with each other efficiently. Now the right-hand side shows where NVIDIA is headed. Start in the middle with systems on chip, SoCs. CPUs are packaged in with NPUs, IPUs, that's the image processing unit, dot, dot, dot, XPUs, the alternative processors. They're all connected with SRAM, which you can think of as a high-speed layer, like a Level 1 cache. The OS for the system on chip lives inside of this, and that's where NVIDIA has this killer software model. What they're doing is licensing the consumption of the operating system that's running this system on chip and this entire system, and they're effecting a new and really compelling subscription model. Maybe they should just give away the chips and charge for the software, like a razor-blade model, talk about disruptive. Now the outer layer is the DPU and the shared DRAM and other resources like Ampere Computing (the company this time) CPUs, SSDs and other resources. These are the processors that will manage the SoCs together. This design is based on NVIDIA's three-chip approach using the BlueField DPU, leveraging Mellanox, that's the networking component. 
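To put a number on that utilization point, here's a minimal back-of-envelope model, our own illustration rather than NVIDIA data: if a GPU's kernels are gated by fetching data out of DRAM attached to other processors, effective utilization is simply compute time divided by total elapsed time. The millisecond figures below are hypothetical.

```python
# Toy model of GPU utilization when compute is gated by remote data
# movement (illustrative numbers, not measured figures).
def gpu_utilization(compute_ms: float, transfer_ms: float) -> float:
    """Fraction of elapsed time the GPU spends actually computing."""
    return compute_ms / (compute_ms + transfer_ms)

# e.g. 2 ms of kernel work stalled behind 9 ms of cross-node data retrieval
util = gpu_utilization(2.0, 9.0)
print(f"Effective utilization: {util:.0%}")  # ~18%, consistent with sub-20%
```

The point of the SoC design with shared SRAM is to shrink that transfer term, which is why it moves the needle on utilization far more than faster networking alone.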
The network enables shared DRAM across the CPUs, which will eventually all be ARM-based. Grace lives inside the system on chip and also on the outside layers, and of course the GPU lives inside the SoC in a scaled-down version, like for instance a rendering GPU, and we show some GPUs on the outer layer as well for AI workloads, at least in the near term. You know, eventually we think they may reside solely in the system on chip, but only time will tell. Okay, so as you can see, NVIDIA is making some serious moves, and by teaming up with ARM and leaning into the ARM ecosystem, it plans to take the company to its next level. So let's talk about how we think competition for the next era of compute stacks up. Here's that same XY graph that we love to show: market share or pervasiveness on the horizontal axis plotted against net score on the vertical, net score again is spending velocity, and we've cut the ETR data to capture players that are big in compute, storage and networking. We plugged in a couple of the cloud players. These are the guys that we feel are vying for data center leadership around compute. AWS is in a very strong position. We believe that more than half of its revenues come from compute, you know, EC2, we're talking about more than $25 billion on a run-rate basis, that's huge. The company designs its own silicon, Graviton2, et cetera, and is working with ISVs to run general-purpose workloads on ARM-based Graviton chips. Microsoft and Google, they're going to follow suit. They're big consumers of compute. They sell a lot, but Microsoft in particular, you know, will likely continue to work with OEM partners to attack that on-prem data center opportunity. But it's really Intel that's the provider of compute to the likes of HPE, Dell and Cisco and the ODMs, which are not shown here. Now, HPE, let's talk about them for a second. They have architectures. I hate to bring it up, but remember The Machine? 
I know it's the butt of many jokes, especially from competitors, and, you know, frankly HP and HPE deserve some of that heat for all the fanfare they put out there before quietly, you know, pulling The Machine or putting it out to pasture. But HPE has a strong position in high-performance computing, and the work it did on new computing architectures with The Machine and shared memories might still be kicking around somewhere inside of HPE and could come in handy someday in the future. So HPE has some chops there. Plus, HPE historically has been known to design its own custom silicon. So I would not count them out as an innovator in this race. Cisco is interesting because it not only has custom silicon designs, but its entry into the compute business with UCS a decade ago was notable, and it created a new way to think about integrating resources, particularly compute and networking, with partnerships to add in the storage piece. Initially it was with EMC prior to the Dell acquisition, but, you know, it continues with NetApp and Pure and others. Cisco spends money investing in architectures, and we expect the next generation of UCS, I don't know, UCS 2.0, will mark another notable milestone in the company's data center business. Dell just had an amazing quarterly earnings report. The company grew top-line revenue by around 12%, and it wasn't because of an easy compare to last year. Dell is simply executing despite continued softness in the legacy EMC storage business. Laptop demand continued to soar, and Dell's server business is growing again. But we don't see Dell as an architectural innovator per se in compute. Rather, we think the company will be content to partner with suppliers, whether it's Intel, NVIDIA, ARM-based partners or all of the above. Dell, we think, will rely on its massive portfolio, its excellent supply chain and its execution ethos to compete. 
Now IBM is notable for historical reasons. With its mainframe, IBM created the first great compute monopoly before it unwittingly handed it to Intel, along with Microsoft. We don't see IBM necessarily aspiring to retake the compute platform mantle it once held with mainframes; rather, Red Hat and the march to hybrid cloud is, in our view, IBM's approach. Now let's get down to the elephants in the room: Intel, NVIDIA and China Inc. China is of course relevant because of companies like Alibaba and Huawei and the Chinese government's desire to be self-sufficient in semiconductor technology and technology generally. But our premise here is that the trends are favoring NVIDIA over Intel in this picture, because NVIDIA is making moves to further position itself for new workloads in the data center and compete for Intel's stronghold. Intel is going to attempt to remake itself, but it should have started doing what Pat Gelsinger is doing today seven years ago. Intel is simply far behind, and it's going to take at least a couple of years for them to really start to make inroads in this new model. Let's stay on the NVIDIA versus Intel comparison for a moment and take a snapshot of the two companies. Here's a quick chart that we put together with some basic KPIs. Some of these figures are approximations or they're rounded, so don't stress over it too much. But you can see Intel is an $80 billion company, 4x the size of NVIDIA, yet NVIDIA's market cap far exceeds that of Intel. Why is that? Well, of course, growth. In our view it's justified due to that growth and NVIDIA's strategic positioning. Intel used to be the gross margin king, but NVIDIA now has much higher gross margins. Interesting. Now when it comes down to free cash flow, Intel is still dominant. As it pertains to the balance sheet, Intel is way more capital intensive than NVIDIA, and as it starts to build out its foundries, that's going to eat into Intel's cash position. 
Now what we did is put together a little pro forma in the third column of NVIDIA plus ARM, circa, let's say, the end of 2022. We think they could get to a run rate that is about half the size of Intel, and that could propel the company's market cap to well over half a trillion dollars if they get any credit for ARM. They're paying $40 billion for ARM, a company whose revenue is sub two billion. The risk is that because the ARM deal is based on cash plus tons of stock, it could put pressure on the market capitalization for some time. ARM has 90% gross margins because it pretty much has a pure license model, so it helps the gross margin line a little bit in this pro forma. And the balance sheet is a swag. NVIDIA has said that it's not going to take on debt to do the transaction, but we haven't had time to really dig into that and figure out how they're going to structure it. So we took a swag at what we would do in this low-interest-rate environment, but take that with a grain of salt. We'll do more research there. The point is, given the momentum and growth of NVIDIA, its strategic position in AI and its deep engineering are aimed at all the right places, and with its potential to unlock huge value with ARM, on paper it looks like the horse to beat, if it can execute. All right, let's wrap up. Here's a summary. Look, the architectures on which NVIDIA is building its dominant AI business are evolving, and NVIDIA is well positioned to drive a truck right through the enterprise, in our view. The power has shifted from Intel to the ARM ecosystem, and NVIDIA is leaning in big time. Whereas Intel, it has to preserve its current business while recreating itself at the same time. This is going to take a couple of years, but Intel potentially has the powerful backing of the US government, too strategic to fail. 
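For what it's worth, the deal math cited above is easy to check: a $40 billion price for a company with under $2 billion in revenue implies a price-to-revenue multiple of at least 20x. Both inputs below are the rounded figures from the discussion, not reported financials.

```python
# Price-to-revenue multiple implied by the ARM deal figures above:
# $40B purchase price against "sub two billion" in revenue (upper bound).
purchase_price = 40e9
arm_revenue_ceiling = 2e9
multiple = purchase_price / arm_revenue_ceiling
print(f"Implied revenue multiple: at least {multiple:.0f}x")  # at least 20x
```

That rich multiple is why the mix of cash and stock in the deal matters: NVIDIA is effectively paying for ARM's strategic position and license model, not its current revenue.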
The wild card is: will NVIDIA be successful in acquiring ARM? Certain factions in the UK and EU are fighting the deal because they don't want the US dictating to whom ARM can sell its technology. Think, for example, of the restrictions US sanctions placed on Huawei's access to ARM-based chips from many suppliers. NVIDIA's competitors like Broadcom, Qualcomm et al. are nervous that if NVIDIA gets ARM, they, the competitors, will be at a competitive disadvantage. And for sure China doesn't want NVIDIA controlling ARM, for obvious reasons, and it will do what it can to block the deal and/or put handcuffs on how business can be done in China. We can see a scenario where the US government pressures the UK and EU regulators to let this deal go through. Look, AI and semiconductors, you can't get much more strategic than that for the US military and US long-term competitiveness. In exchange for maybe facilitating the deal, the government pressures NVIDIA to guarantee some volume to the Intel foundry business, while at the same time imposing conditions that secure access to ARM-based technology for NVIDIA's competitors. And maybe, as we've talked about before, having them funnel business to Intel's foundry. Actually, we've talked about the US government enticing Apple to do so, but it could also entice NVIDIA's competitors to do so, propping up Intel's foundry business, which is clearly starting from ground zero and is going to need help outside of Intel's own internal semiconductor manufacturing. Look, we don't have any inside information as to what's happening behind the scenes with the US government and so forth, but on its earnings call NVIDIA said they're working with regulators and are on track to complete the deal in early 2022. We'll see. Okay, that's it for today. 
Thank you to David Floyer, who co-created this episode with me. And remember, I publish each week on wikibon.com and siliconangle.com. These episodes are all available as podcasts, all you got to do is search Breaking Analysis podcast. And you can always connect with me on Twitter @dvellante, or email me at david.vellante@siliconangle.com. I always appreciate the comments on LinkedIn, and on Clubhouse please follow me so you can be notified when we start a room and riff on these topics. And don't forget to check out etr.plus for all the survey data. This is Dave Vellante for theCUBE Insights powered by ETR. Be well, and we'll see you next time.