Hello, everyone. Welcome to the podcast for theCUBE, CubePod 39. I'm John Furrier with Dave Vellante. We're here after a two-week break. We had Thanksgiving and then we were at re:Invent. We were just too burnt out to do a podcast, but we had 41 interviews on site in Las Vegas as part of AWS re:Invent, as well as two or three dozen interviews in Palo Alto as part of our SuperCloud 5, the battle for AI supremacy. It's SuperCloud's two-year anniversary. Still a thing, Dave. Great to see you. Great to see you, John. Absolutely. Despite what the naysayers say. Yeah, we love Charles Fitzgerald. Lovin' keepin' the flame alive. Look, a big week. When we did our last podcast, I was in Seattle, right off to Adam Selipsky. I think that was the day the Sam Altman drama went down. So much has happened since then. This past week, we're coming off re:Invent, which is a monster show. It sets the agenda for the industry. And again, the transformation of all the AWS stuff. Great success for theCUBE, and theCUBE Research, formerly Wikibon. Congratulations, Dave. You got personally called out by the CEOs of both companies for your reporting and your research. So congratulations on that. Great, great work on theCUBE Research, building that team out. It's going to pay dividends as the content continues to be great. What we're going to do in this podcast is review what happened at re:Invent. Focus on the silicon chips. The war continues; we saw the battle, Nvidia on stage at re:Invent. ARM just had a big event. Intel has an event coming up. And then the theme of data flipping the script, which is going to be the topic of our next SuperCloud 6 event, coming either end of January or beginning of February. We're going to do data plus AI, where the data focus is flipping the script. This is a really nuanced point that came out of re:Invent. It's kind of part of the AMD announcement.
You're starting to squint through all the chip makers and all the hardware people. It's going to come down to who can handle the data better, whether it's a CPU, GPU, NPU, or, what do you want to call it, TPUs. This is a new architecture being built in the industry, and we're on top of it from day one. So you're going to hear about data flipping the script. And I want to get your take on the Breaking Analysis, Dave. You've got two major themes that continue to roll through and have an impact on the agenda in the industry and, frankly, change how industry analysts cover the industry. I want to get into that data piece. I think you called it the sixth platform. The sixth data platform. The sixth data platform. Google announced Gemini, which debuts integrating the Gemini models into applications with Google AI Studio and Google Cloud Vertex AI, available December 13th. Huge announcement, huge game changer, multimodal, and a CUBE alumni is working on that project: Eli Collins, formerly of Cloudera. I just pinged him on LinkedIn. We've been going back and forth, so he's going to come on theCUBE. And obviously AMD had a big event. We're going to dig into that, because there were a lot of announcements we were covering on the data center side, Dave, from the ARM side as well as Intel. I know you have an opinion. I got some inside reporting from AMD directly, from people close to the company and close to the situation, that hasn't been reported yet. Intel's got an event coming up. I pinged them. Apparently it's going to be at the New York Stock Exchange with Pat Gelsinger, and they're going to do a remote interview there. And then Microsoft's $13 billion OpenAI investment is under FTC scrutiny, Dave. So again... Oh, there's a shock. Okay. So... Lina is at it again. There's other stuff too.
So again, this is important, because of what happened with the OpenAI situation two weeks ago. Again, since our last pod, the drama has been off the charts. So we're going to dig into that. This is quite relevant material, because at re:Invent, remember, there was this whole idea of model choice. Google and Amazon are saying model choice. Microsoft, not so much. And we're going to talk about the Broadcom-VMware situation. I posted a picture of the sign that says Broadcom, not VMware. Their headquarters is moving to Palo Alto. We've reported that; now it's confirmed. They're divesting their end-user computing and Carbon Black businesses, as we had predicted. And again, it's the two-year anniversary of SuperCloud. Dave, it's still a thing, still the subject of debate in the industry with Charles Fitzgerald, who, I think, is recognizing now that it is a thing, but he's going to die on that hill. So we'll have fun with it, and it's going to be a great time. So let's kick it off. SuperCloud, Dave, it's a thing. SuperCloud 6 is coming up. We're going to continue to do that. We've had five SuperClouds now, and it's not just our thing. We bring in the community, and we're getting all these really deep technical experts to talk about it. Whether you call it multi-cloud management or cross-cloud, or David Linthicum calls it metacloud, there's definitely a SuperCloud trend, and it's gaining traction. We've seen guys like Cloudflare, Matthew Prince, actually use the term SuperCloud. Sky computing, I mean, it's basically everywhere. So yeah, Fitzgerald just loves to hammer that. But even Microsoft is doing what I would consider SuperCloud, and Microsoft is his camp. So we had the battle; we had the SuperCloud 5 event, a studio event in Palo Alto that we ran in conjunction with our on-location coverage at re:Invent, those 41 editorial interviews on site.
And the theme was the battle for AI supremacy. Well, Dave, there are many battles going on. There are battles at the chip level, battles at the middle-of-the-stack level, the models, and how all that's going to interact. So, I mean... This battle is at the board level too. Board level, I mean, so many battles. Because the stakes are high. The AI game is so powerful that the stakes are high, and everyone's realizing it. What they're trying to figure out is: which side of the street do we need to be on, to be on the right side of history? That's what's happening right now, and you're going to see all the companies and developers and architects really thinking through, how do I deploy my resources to maximize the benefits of AI today, and going into tomorrow, without foreclosing the headroom opportunities that are going to emerge? And that's going to come down to the combination of which hardware you buy, which chips you use, and how you manage your data. That, to me, is the big takeaway from re:Invent. The flywheel that Swami and I were talking about in our interview was notable, and that was kind of original content, where we were actually riffing in real time. There's a data flywheel happening, Dave, and it doesn't look like yesterday's flywheel. And this is really interesting. And so the silicon chips, the relationships between chips and hardware, what's around the chips, GPUs, CPUs, NPUs, you know, the neural processing unit is becoming quite the offload accelerator option in the architecture, and the chips that interconnect around it are going to be the thing. We saw that at re:Invent when Nvidia was on stage, when Jensen was on stage with Adam Selipsky; they were specific about their architecture. The gains they're getting by cobbling together the GPUs are significant. So I think we had that right at Supercomputing when we were talking about it, but this is going to be the battle. What does the cluster look like? How is it architected?
And does it support the ability to span out clusters to support the new data model, the sixth data platform, as you're reporting? Right, and AMD made a big push this week. They announced that they're shipping their latest AI chip. They said it's comparable in performance to the H100. Of course, it doesn't have the software richness of the CUDA architecture. But one of the stats Lisa Su threw out, I don't know whose stat this was, maybe it's their stat, was $400 billion by 2027. And I was like, wow. Okay, so Nvidia's trailing-12-month revenue is probably like $45 billion. AMD's is probably low 20s. So if that's the case, the market's actually that big. Well, what's the TAM? The TAM's going to be at least half a trillion. She's saying $400 billion by 2027 for AI chips. Okay, so say they get 10% of that. Those are massive numbers. Yeah, so here's my little stake in the ground, John. If AMD is, let's say, $20, $22 billion today trailing 12 months, can they double by 2027? No question in my mind. The bigger, more interesting question to me is, what happens to Nvidia? Because I think they're going to have two-thirds of the market. I mean, they could be $200, $250 billion, if that's the case. You know, Intel's going to get its piece. AMD's going to get its piece; the cloud guys are going to get their piece. But Nvidia could have half that market, half that $400 billion. Well, I'll tell you, AMD was the first one to really come out with a dedicated AI-engine chip. The Phoenix chip came out earlier in the year. That had NPUs, I believe. And this new announcement with the new MI300 is interesting. And the Ryzen chips as well have that software stack capability, now mostly for Windows workloads. But the question, Dave, is the question I have. I couldn't make the event; I wasn't feeling well coming back from re:Invent, but I did watch it online.
Can the AMD software stack, which is called ROCm, really handle the prime-time LLM stuff? Or is it just going to be dedicated to, say, Windows workloads or Microsoft Office? Not that that's a bad low-hanging-fruit option. It's just, is the market moving too fast? And this is going to be the question we're going to watch and squint through. I think there's so much demand for these types of systems. I mean, when I've talked to some deep AI experts, I won't share who, one person told me this. He said, look, even Intel's chips, which, you know, are far less capable relative to Nvidia, we use them because we can get work done with them. It's just taking longer. And it's more expensive overall relative to Nvidia, the Jensen "spend more, you'll save more" kind of thing. But there's so much demand that I think Intel can do well. I think AMD can do well. Obviously, Nvidia is going to do well. You also have Google, AWS, Microsoft, Meta, all building AI chips, because the demand is just enormous. Yeah, so if you think about the developer angle, though, Dave, what's interesting is that a lot of the success Nvidia is having comes from positioning well with CUDA, right? CUDA is out there as the abstraction layer to really build around and get the maximum out of the chips themselves. Now you've got the CPU, GPU, and NPU, the NPU being the neural processing unit; you know, TPUs are out there, that's Google's version. And other people call them APUs; whatever you want to call it, it doesn't matter. It's now the holy trinity of chip design. You've got to architect that offload. And so, you know, this is a field day for the OEMs. Look at who was on stage at the AMD event: you had Dell, Meta, Microsoft, Oracle, even Lenovo and Supermicro. You know, all these people are dedicated to buying chips.
Okay, so the question is, does everybody win in this rising tide? Even Intel. To me, my walk-away from this was that Intel and AMD are totally head-to-head; major win for AMD, as AMD and Intel go head-to-head on the whole data center theme. You know, they didn't talk much about what we were reporting prior to re:Invent, and during re:Invent, about Nvidia's DGX Cloud. Yeah, but so here's the problem for Intel. I mean, yes, Intel can do well; a rising tide lifts everybody. The problem for Intel is that for so many years, decades, Intel had a monopoly, whereas AMD was, as we like to say, taking croissants off the breakfast table. And so AMD has got upside opportunity, whereas Intel's grip on its historical Wintel monopoly is loosening, not even slowly, quite dramatically, as it fights this multi-front war with Foundry. That's going to be one of my topics. I think next week on Breaking Analysis it's: can Intel make it in Foundry? And I've got Ben from Creative Strategies coming on. He's a real expert on this. So the problem is that AMD and Nvidia are rising, the cloud guys are rising, and Intel's hanging on for dear life, going from whatever, mid-60% gross margins, down to, you know, run of the mill. Now, who knows? Maybe they can bounce off that bottom, but that's the problem I see for Intel: bye-bye monopoly. Yeah, and Intel came out shooting a shot across the bow of AMD. So we weren't that impressed. We had that story up on SiliconANGLE as well. Dell debuted new PowerScale storage systems, IBM unveiled a next-gen quantum processor, a lot of stuff. You know, Gecko Robotics raised $100 million for critical-infrastructure AI-powered robots. And you take a company like Dell, Lenovo, HPE, I mean, they've basically been buying chips for years. They're used to paying up, whether it was Intel and now Nvidia.
So for them, to the extent that they can have higher ASPs, that's goodness. I mean, Dell knows how to thrive in a low-margin world, as do many of those other companies. So they benefit, in my mind, from the AI tailwind. And even Intel, you know, can do well here. And between the US government, the CHIPS Act, and the massive demand for semiconductors that doesn't seem to be waning, you know, that could save Intel. I hope it does, because we need Intel. Well, the chip wars are happening, and we're seeing Amazon continue to have good results from the re:Invent vibe, outside of the snarky comments from the Fitzes of the world, who were kind of down on Amazon. I thought they did a great event. The whole write-up around Q hallucinating, I think Platformer's Casey Newton, he's not a real strong enterprise writer. I think he was just jumping on the bandwagon. His sources weren't that strong. But come on, they got hold of some internal document. Yeah, they were doing QA. It was just, hey, we've got an issue, we've got to fix it. Like, what? First of all, the fact that AWS was able to get a legitimate Q demo out by re:Invent was, I think, pretty astounding. Now, to me, it was a demo, because my sources indicate that Q was really trained on Titan, which is not, you know, an advanced foundation model like OpenAI's, like Google's models, like Anthropic's. Okay, that's fine. They are going to port Q over to Anthropic; I'm sure that's well in hand already. Anthropic and other LLMs inside of Bedrock. So the fact that they were able to get that demo out, and the demo was actually really good. But it is a demo, so of course there are going to be problems. And you get internal documents. I mean, that doesn't concern me. You know, the bigger issue is, when will we actually see something like that go into production and become a platform that developers can use to build intelligent apps? Yeah.
Well, you cut my rant off, but I appreciate that comment. But yeah, exactly. The misread on that story was that it was internal; they're testing it. That's what you do. You test for bugs. It's a big story: oh my God, I got a huge scoop. Have you ever seen software with no bugs? I mean, that was stupid. When you actually read the story, it's a bullshit headline, and then you read the story and go, oh, okay, they got an internal document, there are bugs in it, and they're fixing them. Okay. That was a waste. They're racing for scoops. On one hand, you've got the press racing for scoops. On the other hand, you've got people waving their hands looking for attention: we're the next big thing. That kind of thing's going on. Look, at the end of the day, the world is going to move quickly with AI. The flight to quality is going to be there. We're going to see the good stuff go on. But anyway, back to Amazon. I counted 44 generative AI announcements at re:Invent. Now, half of them were in preview or were announcing general availability, but still, there were 20 legit new gen AI announcements at re:Invent. I don't see how that's a miss. I mean, we've been to so many shows this year where it's just AI washing. Our research team just compiled, Dave, and you know, because you and I are working on this with them, all the recent announcements; they had a zillion announcements by category. Like over 200. I mean, it was pretty astounding. I tweeted about it. I was blown away. I mean, it's always like that, but still. Well, let me segue from the chips, because the next bullet item was the flipping of the script, one of the things we'd been reporting going into re:Invent. I mean, you wrote a post with George about the whole Uberization of the enterprise. That thing was still resonating and getting tons of traffic. I think that's a seminal piece.
We're going to look back at that this year as one of the stakes in the ground that changed things. But we were going in and challenging the Amazon executives around the notion that if you believe this gen AI stack is here, and we do, what's that going to do to the data management script? And we were saying it's going to flip the script. Well, guess what? The data is flipping the script. This is a big deal. And there's some data to point to this. So just recently, VAST Data, which we launched on theCUBE, they did an event with us in our studio. They launched in Palo Alto, and we actually ran their launch event for them. They just got financing at a $9.1 billion valuation. They only raised $118 million, because they're going to do $1 billion in revenue, Dave. So do the math on the cap table. For $118 million, they gave up 1.3% of the company. The cap table barely changed. Dave, they got a 3x step-up in valuation in a matter of months. So they only did it to get Fidelity, I think, on the cap table. They didn't need the money. I mean, what's $118 million? They've only raised like $380 million total, and they've got like 700 people working for them. Well, that's insane. Their bookings: at Supercomputing they had hundreds of meetings. They probably blew it out at re:Invent. Their approach is resonating with customers. And so that's, again, a data point that points to this new sixth platform. So this is, to me, again, the right side of history. You're starting to see the signs. The markers being laid down right now show what success looks like in the new era. There are going to be a lot of losers on the other side. But you'll appreciate this work that we did. So think about the modern data platform; I think Snowflake is the poster child of the modern data platform. Basically, they were the first, at least, to popularize the separation of compute and storage. Tons of VC money poured into that.
But if you think about the cloud databases and the five modern database platforms, you're talking about Snowflake, Databricks, and the big three cloud guys: AWS, Google BigQuery, and you can put Microsoft in there. You could argue they're not, but it's Microsoft; let's put them in there. So all of those were built on a shared-nothing architecture, because it allowed you to separate compute from storage. In a shared-nothing architecture, as you know, each node in the system works independently. You don't share memory or storage with the other nodes. So why is that important? It gives you flexibility at scale, infinite scale. I can throw as much compute at it as I want. I've separated the compute from the storage. I don't have to buy it in clusters like an Exadata. Again, Oracle has evolved its platform, but originally it was shared-everything, not shared-nothing. So you get the scale flexibility of shared-nothing, but if you want coherence and consistent performance across the nodes, it's very, very challenging. The point is the modern data stack was built on shared-nothing, and, to your point, the new data stack is going to be built on shared-everything. So scale-up, John, is back; that's the point I want to make here. And the reason, and this gets kind of esoteric, is that, to your Uber point, we're moving to a world where we're trying to turn the strings that database languages understand, ASCII codes, objects, files, tables, into things that represent real-world businesses like Uber: people, places, and things, riders, drivers, ETAs, transactions. It's all this unstructured and structured data coming together in a coherent, semantic way that can be operated on in real time. That necessitates a completely new data architecture and data center architecture. So it's going to be a whole new world, in my opinion.
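Dave's shared-nothing point can be sketched in a few lines of code: each node owns a private slice of the data, keys are routed by hash, and no memory or storage is shared between nodes. This is a toy illustration of the general pattern, not any vendor's implementation; the node count and key names are made up.

```python
# Toy shared-nothing sketch: each node owns a disjoint hash partition of
# the key space; no memory or storage is shared across nodes.
from hashlib import md5

class Node:
    def __init__(self, node_id):
        self.node_id = node_id
        self.store = {}                 # private storage: nothing shared

    def put(self, key, value):
        self.store[key] = value

    def get(self, key):
        return self.store.get(key)

class ShardedCluster:
    def __init__(self, num_nodes):
        self.nodes = [Node(i) for i in range(num_nodes)]

    def _route(self, key):
        # Hash partitioning: each key deterministically maps to one node.
        h = int(md5(key.encode()).hexdigest(), 16)
        return self.nodes[h % len(self.nodes)]

    def put(self, key, value):
        self._route(key).put(key, value)

    def get(self, key):
        return self._route(key).get(key)

cluster = ShardedCluster(num_nodes=4)
cluster.put("rider:42", {"eta_min": 7})
print(cluster.get("rider:42"))          # only the owning node answers

# Scale-out is the easy part. The hard part Dave is describing is getting
# a coherent, consistent view ACROSS partitions (joins, transactions that
# span nodes), which is where shared-nothing struggles at scale.
```

Adding nodes scales linearly because each key touches exactly one node; anything that needs a consistent answer across all partitions is what gets "really, really challenging."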
Yeah, and I think the interesting dynamic is that VAST Data points to that, as does a trend we saw developing early on: the disaggregation of, say, memory pools for on-premises action. So this idea of disaggregation is happening, and if scale-up is coming back, I guess the question, Dave, is this: Wikibon Research, now called theCUBE Research, essentially pioneered the concept and research around hyperconvergence. Okay, are we going back on hyperconvergence? Are we undoing that? Because if you look at the disaggregation trend, they're decoupling systems from each other for more efficiency. So in the cloud, you might want to say, I want everything together, for latency reasons, tail latency; but for distributed computing, decoupling actually gets you more scale. Especially as you have this kind of chip model, what's around the chip, built into these high-intensity clusters, like, say, GPU clusters, it's easier to stack up a bunch of memory pooling, for instance, and then just have everything underneath it, like an object store or S3. So are we moving away from hyperconvergence, from HCI? So over time, we are moving toward, well, the problem is going to be at exabyte scale. Getting consistent writes at exabyte scale is really hard, right? And so, to your point, we're going to see a new architecture emerge that can accommodate exabyte scale. You can do exabyte scale with scale-out, but getting coherence, doing writes, and making sure everything's updated is really, really slow. So that's the problem. So today, you have all this metadata. So the interesting thing is, what's AWS going to do? I really pushed them at re:Invent. Like, how are you going to unify your metadata? How are you going to create a unified storage platform? And I don't mean S3. I mean access to structured, unstructured, semi-structured, complex data.
With all the metadata, the operational metadata, the technical metadata, the business metadata, all in one place, so that copilots, or Q, can actually operate with confidence that it's coherent, and then take action without having a human involved. Today, and don't quote me on this, but basically, as an example, the technical metadata might be in Glue. The business metadata might be in DataZone. So it's in different places. They have to unify that, and the challenge has been, and Werner Vogels talked about this, he goes, this is your fault. You guys wanted all these different services. You wanted primitive access. You wanted granularity. We gave it to you. But I think what's going to be really interesting, John, and I'd love your thoughts on this, is how AWS deals with that. Because they've always stepped up to the customer challenge; they're customer-obsessed. So you know they're thinking about that. I think DataZone is going to be the way they do this. But it's going to be really interesting to see how that comes together and how long it takes, because two-pizza teams are great at getting a function out fast, but they're not designed to actually make software composable, if that makes sense. Yeah, I mean, my take on it is a couple of things. One is that the idea of flipping the script is interesting, because now Amazon has all this complexity. In the old-school enterprise days, remember, Dave, enterprise software back in the Oracle days, when everyone was old enterprise, you solved complexity by adding more complexity. Get the lock-in. In this market of AI, and we've said this on theCUBE, and it's been validated by pretty much all the leaders that are innovating, in these inflection points, like the web, the success formula is pretty well known. Reduce the steps it takes to do a task. Make it simpler and easier to understand. Intuitive. Those are the successes. You can't just bolt on more complexity.
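To make the metadata-unification point concrete, here's a toy sketch of what "all in one place" means: technical, operational, and business metadata for the same dataset merged into one record, so an agent can check coherence before acting on it. All the catalog names, fields, and values below are invented for illustration; this is not the Glue or DataZone API.

```python
# Toy unified-metadata sketch: three separate catalogs (technical,
# operational, business) merged into a single view per dataset, so an
# agent can verify coherence before taking action. Everything here is
# hypothetical example data.
technical = {"orders": {"schema": ["id", "amount", "ts"], "format": "parquet"}}
operational = {"orders": {"last_updated": "2023-12-08", "row_count": 1_200_000}}
business = {"orders": {"owner": "finance", "pii": False,
                       "definition": "booked customer orders"}}

def unify(dataset, *catalogs):
    """Merge every catalog's entry for one dataset into a single record."""
    merged = {}
    for catalog in catalogs:
        merged.update(catalog.get(dataset, {}))
    return merged

def coherent(record, required=("schema", "last_updated", "owner")):
    """An agent acts only when all three metadata classes are present."""
    return all(field in record for field in required)

record = unify("orders", technical, operational, business)
print(coherent(record))   # the unified record passes the coherence check
```

The point of the sketch is the failure mode Dave describes: with only the technical catalog (metadata split across services), `coherent` fails, and a copilot shouldn't act autonomously on that dataset.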
So yes, on Amazon, my thinking is Amazon will change the user experience. They have to reduce the complexity around their cloud, from a developer standpoint, from a provisioning standpoint, from a hardware standpoint. I mean, I think they have to get almost into the magical phase: it's a magic cloud. They have to be magical. They've got to leverage their AI so that it's so damn easy that they become the hardened platform for performance. All the stuff works, the chips are fast, developers get action fast, AI is working, it's reliable, it's secure, it's fast, it supports multiple topologies, global infrastructure. Again, they talk about regions. I mean, just the issues around regions are a huge, huge issue around data. So again, that's just Amazon. Now, before you get off of Amazon, let me ask you a question. And this is my sort of mental model here. Amazon made developers really productive because it took away all the heavy lifting of provisioning and managing infrastructure. The market is shifting, to your point, the flip, to developer productivity in terms of writing code for intelligent apps. And that seems to be what's going to drive the next decade of productivity. And so Amazon is really good at infrastructure and making that simpler. Its challenge, to me, is that it's got to get really good at helping developers write code. I know it's got CodeWhisperer, et cetera. But Amazon's software is generally designed to make hardware run better. And the next wave is to be able to compose different software components so that businesses can run better. So that's going to be the interesting challenge, I think. Well, I agree with you, but it's all getting even more complicated. Back in the Cloud 1.0 days, the early days, when Amazon came out of nowhere and became the leader, we all know the history; we documented it on theCUBE with the CEO, et cetera, et cetera.
At that time, the developer market was the full-stack developer. Full-stack developer means from the bottom of the stack to the top of the stack; they had to code everything. Okay, when Amazon came out, they essentially just provided the hardware. The developer still had to do the rest of the stack, the middle layer and then the app layer. Now, with LLMs and data, it's changing significantly at the middleware layer, where those developers have to rely on cloud services for the middleware and can't control their own destiny. Okay, this is where the power dynamics are shifting with data, because data at scale gives the developer the advantage. So I think the cloud players will absolutely win on the data layer. They have to, because they've got the horsepower to back it up, unless there's a massive trend for the enterprise to stand up these, quote, clusters, which is where Nvidia is going in with their DGX Cloud, and CoreWeave just got a $7 billion valuation with its funding this week, I think the announcement was. And this points to a trend. Naveen Rao, the guy who sold his company, MosaicML, to Databricks, a CUBE alumni, wrote a tweet: once true continuous reinforcement learning is solved for large-scale neural nets, we'll need, quote, LLM psychologists to help diagnose why systems have gone depressive and aren't learning effectively. Again, he's making a joke, but he's talking about something real around the LLM dynamic. Prompt injections. So this whole LLM foundation model thing is the new middle layer. This is where the cloud players will either win or die on the hill, because the developers need that layer, right? So again, back to the developers. It's not full-stack anymore. Building a SaaS app on the old cloud was easy. Get some stuff on the server: EC2, S3, queuing, basic building blocks. Yeah, they add more services, I grow, no CapEx needed, win. Now the developers still have to rely on the clouds for the hardware and the middle layer. That's complicated.
So Amazon, and Azure and Google, they have to make it easier. They have to make it easier for the developer, so that they can get the job done, without making it complex and problematic. Azure and Google have been getting killed on, quote, support calls. I've been seeing it all over Twitter: I've been waiting four days for a resolution. It's going to be interesting, Dave. I mean, the cloud guys have got to ramp up and make this shit simpler for developers, because developers are the gold standard. They're the ones who set the standards. That's the next battle. But by the way, just as a quick aside, you mentioned CoreWeave. I saw a stat, it might have been in the Journal, around VAST's announcement; you said $9.1 billion, right? So there were only like four, maybe five, but I could think of four companies that had an uptick in valuation over the last, I don't know, maybe 12 months. It was CoreWeave, it was VAST, it was OpenAI, of course, and Anthropic. I think that was it. All the other data platforms had marked-down valuations. And Fidelity led both CoreWeave and VAST. So is Fidelity the bellwether? We'll see. But I mean, there's still cash going out in a market that's down on later-stage financing. I mean, we were reporting on theCUBE today things like MongoDB and Elastic vector search general availability coming soon. Extropic raised $14 million to build physics-based computing hardware for generative AI. AssemblyAI raised $50 million for cloud-based speech models. NexusFlow outperforms GPT software tools. Arthur Chat launches to leverage proprietary data. Liquid AI raised $37 million to build liquid neural networks. Meta debuts new generative AI features for consumers. Purple Llama for safe generative AI. The list goes on and on. Dave, funding is still flowing into this AI wave. You can see the dots connecting. There's high entrepreneurial activity going on. Meta is a big player. You're seeing the success of Meta.
I mean, talk about missing a trend: Metaverse. It's an AI company now. So you're going to see Meta get in the game big time. And I said this on theCUBE. Remember, we said this last year: Meta's open-source strategy was genius. Feed the developers. Open source is going to be the battleground, won by the companies that nurture these developers and get them innovating and competing with the big models. Okay. And I said it again, and I'll say it after re:Invent too, because they get it. As the price-performance of the hardware comes down and the models can be run on the hardware that's coming out, the chips, as we say, the opportunities for startups to innovate will be very big, just like the web. So I'm anticipating a surge of startups and funding. And the AMD thing helps developers too. Now, that could help developers, certainly, within the Microsoft area, for sure, out of the gate with the Ryzen 8040 CPUs and the MI300X chip. But AI software stacks are coming. And that's going to be, to me, an area to watch. We're going to watch that very closely. Well, Thomas Friedman wrote the book The World Is Flat. He made a lot of money and became very famous. Every time he talked, he talked about Moore's Law, Moore's Law, Moore's Law. And he would draw that comparison with how innovation occurred. You ain't seen nothing yet. I mean, the amount of data, the amount of processing power, the amount of GPU capability is going to blow you away. The curve is bending. And so the innovations that we're going to see come out of this are just going to be like nothing we've ever seen before, in my opinion. So what do you think about Naveen Rao's tweet, that once continuous reinforcement learning is solved for large-scale neural networks, there will be a need for LLM psychologists to help diagnose why systems have gone depressive and aren't learning effectively? Again, this is a new role. I didn't see that. That's good. He tweeted that on December 3rd.
But he brings up a good point — these new roles that are emerging, right? If you talk about data, how do you handle governance and all the classic data management stuff? If you're scaling data at large scale and you don't build it in from day one, then you're going to constantly be chasing inefficiencies. So the question is, what do you optimize for from a data standpoint? This is why I'm very bullish on this data-flipping-the-script idea, because it's happening. Everyone we talk to is saying the same thing. So the other thing, too: when flash storage came about — the spinning disk was always the bottleneck in system architecture, right? — when flash storage came about and you started to get things like NVMe and atomic writes, all of a sudden the network became the bottleneck. And what you're seeing now, whether it's InfiniBand or Ethernet — and Ethernet's exceedingly capable — is software written on top to really take advantage of higher network speeds. And so now you can do this sort of any-to-any. This is why I was saying before that shared nothing becomes shared everything. Any node on the network can access any storage; any compute can get to any data over this really, really fast network. That changes the game. And at exabyte scale, it's going to be interesting. This shared nothing that we've grown to love and really become used to with the cloud — I think it's going to be challenged. And it's going to be really interesting to see how all this infrastructure that we have out there is going to evolve. Well, I think the architecture has changed. Again, this is why we've been saying it going into re:Invent — the SuperCloud two-year anniversary kind of speaks to what we saw. And remember, at last year's re:Invent, prior to this one, we said this would be a next-gen cloud. Look, we've done five SuperCloud events in Palo Alto. 
All the top industry leaders coming in, all kind of agreeing with the notion — not multicloud, but that multiple environments have to run as one operating system. That's my word, not theirs, but in general the consensus in the industry is, yes, we need control plane and data plane layers that allow us to operate high-velocity data traversal, and we need more compute, more horsepower. And then in comes the big large language model push with inference and training. As we've been saying on theCUBE for months and months now, inference is the killer app. That is becoming the thing. And I think that's going to be where the developers will have the most opportunity. Being a data developer, as we say, is now the new thing. What data sets does your code have? So just staying on this architecture for a minute — I'm going to run this by you. The idea of separating compute from storage — Snowflake popularized that. The next wave is going to be separating compute from the data, meaning any compute can get to any data. And so that's why you see, for instance, Compute.AI, one of the superclouds — Vikram Joshi came on, and that's what they're doing as a startup. But basically they're saying — wait, who was it that said compute should be free? Who gave us that? That was Joel Inman. Joel Inman said that. No, Vikram said that. Did he say that? Vikram said that at SuperCloud 4. Right. So basically democratizing compute. But you think about it: with all this data, truly bringing the compute to the data without having to move all the data is going to require new thinking on architectures. We're flipping it. You described it, I think, very well. I mean, "compute is oxygen" was a great line because it highlights the necessity, right? Remember the old Maslow's hierarchy of needs slides? And then they added a new layer called Wi-Fi. When Wi-Fi was hot, it was like, we need more connectivity. Can't get enough Wi-Fi. 
This generation of users says, I can't get enough horsepower. I can't get enough data. So the engine room in these organizations, and for developers, will be: what data do I have? How much data do I have? Do I have the right data? Can I process it fast enough? And can I move it around, or co-locate it in the areas I need, so it's available at all times? Highly available systems will be the number one thing to optimize for. That's why the Broadcoms and Intels of the world, AMD, and the custom silicon players are talking about system architecture — because what's around the chips will determine how this is going to run. What you put at the edge of a telco tower for service is going to be a lot different than the device you put, say, in a rack with a bunch of GPUs. But they all have to work together, because the user is the consumer. They could be walking around with their phone. They could have their wearable on. Could be stuff coming down from space. The future is: how fast can you move these packets around, and where's the compute? And compute should be like oxygen in the sense that it should be a utility. So I get the idea, but it's never going to be free. It might be priced low. That freaked out a lot of people at SuperCloud. They were like, well, hold on. Yeah, ubiquitous is a better term — ubiquitous at a very low cost. If the value shifts, you can price in the compute, right? If you have good value shifting happening in the market, you can shift the compute to be freer. Now, what I think is more compelling is the CPU, GPU, NPU architecture, and then the interconnects around it. That's what got my attention at re:Invent. When Jensen was on stage and he showed the demo and discussed the idea of clustering all those Grace Hoppers together, that was huge. That was a huge moment, because what he's basically saying is that Amazon is going to have a supercomputer at will. Just stand up a supercomputer at will. Exaflops. 
So many, many exaflops. By the way, there was a lot of conversation prior to re:Invent about how — I'll paraphrase in my own words — NVIDIA was punishing AWS on allocation because it wouldn't lean into DGX Cloud. And Dave Brown actually, last June I think, publicly said, we'd prefer to buy in component parts. And so, you remember, we asked — I won't say who it was, but we asked a really qualified, knowledgeable source inside AWS — who said, that's BS. We're not getting punished. It's not like there's some kind of punishing allocation for us right now. Part of that could have been that they agreed, okay, we're going to actually do DGX Cloud, and the quid pro quo could have been, we're going to get access to more GPUs. I don't know. But I think given Amazon's size and volume, our source is probably telling us the truth — a very credible source, an engineering source, not some marketing source. There are many sources there, and I would believe that source to be accurate. But I have a different perspective on top of that. Clearly, from a competitive standpoint, CoreWeave and Lambda, another company, are successful by providing GPU clouds, right? So as we look at GPU cloud as a service, we saw that that's supercomputing. The HPC market is not your cloud market. They're completely different animals, but they operate under the same principles — large-scale computing. The AI wave has been a gift for the supercomputing and HPC community, because for all these years they've been grinding it out inch by inch to get that extra flop, and now they're in pole position, because now they can start purpose-building clouds, right? So that's going to be a huge renaissance for on-premises activity. Now, I don't think there's a repatriation issue here. I think it's more of a net new market. It's going to be net new revenue for... So, yeah, it's not repatriation. 
It's incremental to the existing market. I mean, HPE with its supercomputing chops — it bought Cray — and Dell: you were at Supercomputing, Dell had a presence. Yeah, but my point is, this is going to see more expansion — again, rising tide, as I mentioned before, more chips. It's going to be a renaissance in hardware. Hardware's back — we called it years ago. Okay, we were right on that, and that's why, you know, Jeff Clarke sat down on theCUBE first. Well, the whole "hardware matters" programming that we did was beautiful. That is clearly turning out to be... Hardware matters more now than ever. Yeah, and we were covering Annapurna back then; we were covering Graviton first gen. We saw that coming clearly. But here's the nuanced point. The on-premise growth — some people call it private cloud being back; it doesn't matter what you call it, it's still cloud operations. And here's the nuance: if you're going to run these GPU clouds, and/or purpose-built supercomputing or HPC-scale systems, they're going to have to be run with cloud operations in mind — meaning the Red Hats are going to win. That's why IBM scored with the Red Hat acquisition. You see Amazon building cloud-favoring technology so that their cloud wins in a distributed computing architecture, because cloud, on-premise, and edge — cloud, core, edge, as they call it — that's the new architecture. That is why the clouds are fighting like crazy right now to be the LLM layer, because they want all that computing on their cloud. Well, you saw Jassy on TV the other day. He was, I thought, pretty optimistic. And by the way, I agree with him. I like Amazon's hand. I like Microsoft's hand. You know, Charles Fitz is... Charles Fitz always makes it an either-or. I mean, Microsoft's got a great business model. Fitz is a softie. He's anti-Amazon. 
I know, but I guess my point is, if you want to compare any company with Microsoft — okay, Microsoft's just got a phenomenal business model. I still remember talking to Bill Tai in Palo Alto when Microsoft was $26 a share. They were in the doldrums. They weren't a world-class organization then. They've always been world-class, but the point is... I hear you, but I guess they've got a good business model. No doubt. But I like Amazon's runway. He gave a stat, which I do agree with, actually — and maybe I misunderstood before. He said only 10% of IT spend is in the cloud, meaning his type of cloud, not on-prem cloud. And I always sort of challenged that; I thought they had earlier stated 10% of infrastructure spend, but he's saying IT spend. So maybe I just misunderstood. That's legit. There's probably only 10% of IT spend in the cloud, but most of that spend is services and software. If the market's, whatever, $4 trillion — probably $2 trillion of that is services, local services and VARs and GSIs, and then a layer of software on top of that — the infrastructure runway is still big, but it's not as big. And so with Amazon, it's going to be interesting to see how they reinvent. I think they're going to be challenged with the day-zero stuff. One of the things Fitzy said is that they're on the wrong side of "there's no compression algorithm for experience." I don't know if that's true. It's funny — I go back and forth. Will OpenAI be able to have a competitive moat and sustainable advantage, first-mover advantage, or will others catch up? OpenAI's tools are good. I mean, we know from our own usage — OpenAI's tools are better than what we're getting out of Llama 2 and other platforms. And we'll see. I really want to see what Bedrock does for us. I was talking to our engineering team the other day. They said, yeah, we can do that in Bedrock. Here's what it's going to cost. I'm like, yeah, let's play around with it and see. Yeah. 
I mean, I think OpenAI will have an advantage. They're the first mover, Dave. It's theirs to lose, right? Like Netscape before them in the browser wars. Yeah, but they fumbled that. Look what happened to Netscape, right? That's why I was comparing the Sam Altman blunder two weeks ago to the Netscape moment. In fact, it's still not over for Sam Altman. Natasha Tiku has an article in The Washington Post saying senior leaders felt Sam Altman had been psychologically abusive, and that this was a major factor in the board's decision to fire him. So see that article. That's kind of getting into my rant — I won't go there now. But, you know, Sam Altman was the big winner in all this. Everybody said Microsoft was the big winner. Microsoft — I guess they won in a shootout in overtime. I mean, Microsoft was not a big winner. I don't know. There's no shootout. They lost. Yeah — Microsoft made a catch in the end zone to push it into overtime. They're in worse shape now than they were the Thursday after Ignite, when they had all this momentum. I agree. Sam Altman is the big winner. And Ilya was the loser. I mean, you know, they made a bad move. Ilya and the previous board were the big losers. But Sam Altman is now, you know, in the process of consolidating power. The problem is — okay, so Microsoft gets a board observer seat, so they won't be side-swiped again. I just think the whole structure of OpenAI has to change. You cannot have — and this is not my rant, but I'll rant anyway — you cannot have a nonprofit running a for-profit that's a tech company. That's stupid. There's just no way. They're just misaligned. You can have a for-profit running, controlling a nonprofit, but you can't have the reverse. And that's exactly what happened. You had — Well, let's get into the rants then. I'll get into my rant. You saw the Journal article on Helen Toner, who, you know, is an academic and was on the OpenAI board. And you know what? I have no doubt. 
She's a person trying to do the right thing. But the fundamental problem is the structure, because basically what she was saying is, hey, our mission is to do AI safely. And you know — I reported this already, two weeks before the Journal. I know you did. It was Helen Toner who got him ousted, because they clashed over the paper she wrote saying Anthropic was better on safety. Yes, and you did talk about that, and I think you wrote about it as well. But so — Of course. Okay, but what's your point? My point is that you cannot have a nonprofit running a for-profit. It's ridiculous. Maybe we should do that. I mean, there's so many things going on with these structures — I see these other companies and think, we should just copy what's working, I guess. We can make theCUBE into a nonprofit and funnel money this way and have all types of shell corporations and fake companies and fake momentum. Oh my God. But how do they unwind this, John? I mean, there are probably tax implications in trying to restructure this. And of course, you saw Vinod — Vinod came out basically ripping Helen Toner and the board, because Vinod has a lot in. He probably has tens of millions of dollars in it. Here's what's going to happen — one of two things, and of course I'll predict this and it'll be right. Either it's going to die from being ripped apart internally by all this infighting, because of the structure — because you're right, the structure's flawed, okay? Or there's so much money and leadership here that people are going to have half a brain. And I'll tell you, Satya Nadella being in there working that weekend does give it prospects. He had to save his investment. His optionality — his call on OpenAI — was a $10 billion call, okay? That was going to go to zero. He's such a great executive; he had to make it work, otherwise it was going to go to zero. 
So with guys like Nadella who are smart, guys like Jassy — I mean, the Jassy interview on CNBC, just to make the point, made it so obvious that he's in full command. He's a great CEO. He can get in the weeds. There's no question the press can throw at him that he doesn't have a good answer for. He's got a handle on the business, got lieutenants in charge. Nadella's the same way — he ran infrastructure, knows the tech. He can be high level and jump into the weeds. He can be like a helicopter. Those two guys need to be around the table because they're smart. And so with OpenAI, there are two scenarios. OpenAI will get ripped apart because of the greed, the infighting, and the egos. Or they're going to figure out how to get those billions of dollars of value out of that structure quickly. And as you and I talk about the venture capital world out here, the difference between the East Coast and the West Coast is: on the West Coast, it's easy to get in, and then they change the terms later when things are working. On the East Coast, they negotiate everything upfront. Here in the Valley, it's well documented that you can take something that's working and just cut deals. So I think what's going to happen is OpenAI will survive, given the observer seat for Microsoft. That's going to be the "we're going to make sure the kids don't blow this" seat. But behind the scenes, the money people will make things happen. They'll figure out a structure. They'll make it work, because they have to — there's like a $90 billion valuation on the table. Exactly. I mean, John, the idea that a nonprofit can somehow control a $90 billion company against guys like Satya Nadella, Microsoft, Vinod Khosla, etc. — with that much money, tens and tens of billions at stake, the idea that a nonprofit, think-tank board can actually control that is absurd. And what makes it even worse: what happened was child's play. 
You basically had Toner write an article from an academic standpoint that she thought was fine, and Sam got into a pissing contest with her over it. And they just went back and forth. It's like Hollywood — "I can't work with her." Right. It's like a creative culture clash. The fact that that even happened is childish. With a $90 billion valuation, the fact that it all went down that way is a red flag. And that's why the FTC is looking at it — because they'd be asking, is this even fair? Is there self-dealing going on? What about Microsoft's call option? What's that call about? Are they colluding with the competition? I mean, I don't think any of that is going to be found. But what Altman did, in that childish display that went down with all that at stake — you've got to have zero confidence in the operations. And then it was just reported today that some of the people who signed that letter were coerced into it. They didn't even want to go work for Microsoft. So it was completely a clown car, like children playing in kindergarten — but there should have been adults in the room. So, you know, that to me is really the red flag. And that was what everyone was talking about through that whole week: board governance, board structure. Here's how boards work. Boards are supposed to protect against exactly that — the kids playing with dynamite and blowing things up. I didn't see Fitzy commenting on this much. Maybe I just missed it, but... So that to me is a really bad look. He's too busy celebrating the SuperCloud two-year anniversary. I shared with you — or maybe I didn't, I can't remember — that after the Q thing, ETR did a flash survey. In real time, Eric Bradley just mobilized the troops and talked to a bunch of Microsoft and OpenAI customers. They surveyed like 10 or 15 of them. A very large number were shutting off Copilots. 
Now, maybe that was a knee-jerk reaction, but I would be shutting off Copilots too. I told you, I ran a straw poll on that — I actually ran a straw poll with Microsoft customers, and it was unanimous. No one kept it on. Almost everyone at security companies that have Microsoft turned off Copilot. And so there's all this hubbub about Q, the demo leaking, Copilots in production. So again, we're going to have problems with AI that people just have to get over. Those are going to be bumps in the road, but not catastrophic. And that's my whole point about the Andy Grove quote we always use: let chaos reign, then rein in the chaos. Andy Grove's favorite quote. We love that on theCUBE. Anyway, by the way, I thought Azure OpenAI was generally available, didn't you? It's not. I've been digging around. Just as an aside, I found something that says: "Access is currently limited as we navigate high demand, upcoming product improvements, and Microsoft's commitment to responsible AI. For now, we're working with customers with an existing partnership with Microsoft, lower-risk use cases, and those committed to incorporating mitigations. More specific information is included in the application form." So this is an article on "How do I get access to Azure OpenAI?" — and this is on Microsoft's website, actually. I thought it was generally available. It's not. Well, again, they have a lot of that, and they're absolutely first in on that. Well, my rant, Dave, is going to be — Let's hear it. Okay, so my rant is on the whole Ivy League schools in front of Congress around the calls for genocide of the Jews in Israel. And my rant is not so much that they were there — it's the encapsulation of the hypocrisy of the woke culture, right? 
Here you have them standing behind all their language around how they got there and how these institutions are run — Harvard and Penn in particular were embarrassing, just on total PR scripts — when the basic question was: if people are having rallies and claiming free speech on campuses while saying kill all the Jews, promoting genocide, is that allowed? And they're like, that doesn't violate our code of conduct. Like, what the fuck? That was like, oh my God. And then just watching them embarrass themselves with their talking points. My rant is this: we've got to get over talking points and get into real conversations — get into this no-fault zone of being honest, of having intellectual honesty about what's happening. And that's a unique situation. But, you know, technically it's free speech. But they've got the whole Me Too culture thing going on. They've got free speech. It's not free speech — it's hate speech. Hate speech is not free speech, John. I'm sure you agree. Hate speech is not free speech. And it's ridiculous. But watching them justify it... You know, they had answers. They were prepped, because they knew it was kind of a gotcha question, but they just wouldn't be honest. And then what happens? They realize, holy shit, everyone's pulling their donations. I just saw Sam Lessin saying, I'm going to go run for a board seat — he went to Harvard. People are saying, I'm stopping my donations. I'm so happy, because let everyone see how stupid it is. And this is why I think we're in this period of failed leadership in academia and government, frankly. So again, I've been saying this counterculture movement is coming, and you're starting to see a little bit of it. I think the AI wave is going to have it. 
I think the counterculture — John Markoff wrote a book about this, about how the counterculture really spurred the growth of the computer industry back in the 60s, okay? Access to computers for everyone — it's well documented. What the Dormouse Said is the name of the book. It's a must-read if you want to understand that generation. That counterculture really was the rebels. And so I think you're going to see our kids, Dave — the ones under 30, under 25, the Gen Z kids — saying, I didn't bargain for this. This makes no sense. Not that it's a Republican stance. It's just a common-sense stance. So that was an embarrassing moment that for me was a flash point of, okay, we've reached the culture point of this — whatever you want to call it, hypocrisy, woke culture, I don't even know the word for it — but it was just so obvious that they're so tangled in their narratives. Were you taught the modern history of Israel in high school? Yes. I went to Catholic school. We learned all the religions. Yeah, but specifically — not religion, but the formation of Israel as a country. Yes. Yeah. Well, I wasn't. So anything I learned is from my own reading; my kids were certainly taught it. And I guess the point I'm making is that we certainly have empathy, and it's painful to see innocent Palestinians get killed or displaced — and that's an understatement. But I go back to what you were saying before: that doesn't mean you can turn hate speech into free speech. And I think what the IDF is doing — they're obviously being influenced by international sentiment, but at the same time, I can't blame them for wanting to wipe out Hamas. I know how I felt after 9/11, and this is their 9/11. So again, I think people are conflating hate speech with free speech. You can't yell fire in a crowded theater — that's not free speech. Similarly, you can't hide under the cloak of free speech when you're out pushing hate speech, period. Exactly. 
And that takes away from the real conversation — you could be having the Israeli–Palestinian two-state conversation, right? So again, I'm not a big political person, so I sometimes step on myself when I try to wade into these arguments. But to me, it's just the culture — the word salad of all these narratives around how they justify it. Hate speech being justified is terrible, Dave. So that's my rant. What's yours? Well, my rant is what I shared with you: the OpenAI board. It's got to change. I mean, I don't know what the tax implications are, but you can't have a nonprofit running a $90 billion for-profit. It's absurd. The for-profit is always going to win, because there's too much money and too much power, and you cannot have academics and folks on a nonprofit board who want to do the right thing controlling it. I'm sure Helen Toner thought, hey, I'm doing the right thing — I'm doing what the charter of the nonprofit was designed to do. And by the way, she wasn't wrong in her paper. Anthropic was taking safety more seriously; they just didn't have the $90 billion valuation that Altman was pushing, because he'd just had a successful dev day. I mean, the dev day was a huge success. It was good. Did you see it? It was outstanding. I mean, it was really, really good. My respect for Sam Altman increased after that. Now, the last thing we need is another Musk, so I'm hoping that he can mature. He apologized — I don't know if he did publicly or not, but supposedly he did. I think he's such an important figure; he's got to hopefully take the high road now. He's the big winner. He's consolidated his power, and obviously the employees are loyal to him. Ilya lost his board seat, made a bad move. He completely miscalculated and apologized for it publicly. That was just a bad look. Just naive. Sam Altman, obviously, is a very, very smart player. 
But look, you're running what right now is arguably the most important company in the world, in some respects. So you've got to really take that responsibility seriously. I'm sure he does, but it's in a new light now. It's like when you become president — with the exception of a Trump — you really have to think about the impact of your behavior. And so I hope he does. Well, Dave, great to chat with you. Great to recover from re:Invent — took me four days; a little sick from the cold going around. But like Naveen Rao's tweet points out — Naveen was the founder of MosaicML, sold to Databricks in July for billions of dollars, a CUBE alumnus, worked at Intel — this idea of an LLM psychologist is an interesting comment, and it points to the new generation that's here. New roles are emerging, and we're going to see a lot more of them. We're going to have a lot more end-of-year predictions on theCUBE and SiliconANGLE.com. And our 2024 CUBE calendar is looking busier than ever. Our research team is building out — Dave, you're doing a great job with that. And SiliconANGLE continues to grow in traffic. The stories are strong. We just had a great re:Invent. SuperCloud 5 just happened; we'll probably do another — 6, 7, 8, 9, 10. A lot going on, a lot of growth, a lot of action in our business. And I want to thank everyone for listening. Go to SiliconANGLE.com — that's where all the traffic goes. Cube.net to find the videos you need and find out where we'll be next. Next week, we're doing the Super Studio. Yeah, we've got a big event next week — the Roads of Cyber Resilience, another live stage performance. You'll see a lot more of that this year. So I'm psyched for that. Just overall, the content flow and the quality, Dave, have been amazing. We'll continue that next year. I'm stoked about that. Great holiday season — some customers coming in. 
So that's going to be really good — another ecosystem event, and that's with Dell. Holiday season — hope everyone's having a good, safe holiday. We'll keep the pod rolling through the end of the year. We're around all month. Take some time off at the end of the year. Stretch, Dave — it's like the seventh-inning stretch. You taking time off? I'm going to take some time off. Am I going to go skiing? I'm not sure yet. I need new boots this year. Last year I went through my favorite boots — they finally broke, so I've got to get new boots. Once I get the new boots, I'll be out there. Not much snow out right now. Only got about a 30-foot base — I mean, 30 inches. 30 inches, 30 inches. That can change fast in Tahoe. Yeah. They're skiing at Wachusett. I can see it from my house. The lights are on at Wachusett — they're skiing. I might play golf this weekend, too. So we'll see. That's awesome. Have a great weekend. We'll see you next week. Thanks, guys.