Hello, welcome to this special CUBE presentation. This is a breaking analysis, breaking news power panel. Big news hitting the tech scene today as NVIDIA stock is up over 100 points off the big news yesterday, and Intel had their big event, big moves in semiconductors. There's a lot of action going on in AI and cloud. Of course, we've got the analyst angle here with esteemed guests and CUBE alumni. We've got David Linthicum, who's a principal analyst for theCUBE Research, just joined our team. David, great to see you. Thanks for coming on the power panel, first time as an analyst with theCUBE Research. Appreciate you. Happy to be here. Thank you very much for the invite. Tim Crawford, CIO strategic advisor at AVOA, out in the trenches at the Intel event, on the field, back on the event circuit, pounding the pavement, always has the great analysis. Thanks for coming on, Tim. Appreciate it. Thanks, John. Great to be here. And Dion Hinchcliffe, VP and principal analyst at Constellation Research, award-winning author, been around the block, written many things, been around many cycles of innovation. Dion, thanks for coming on. Great to be here, John. Thanks. Guys, the path forward in tech now is obviously set. It's AI, it's cloud scale, it's next-gen, a lot of stuff we've been talking about on X, on LinkedIn, on our back channel. But the path forward with NVIDIA's earnings yesterday is really a story, a blend of strategic brilliance, technological innovation, and market acumen, with Jensen going with software. And as everyone's trying to navigate this changing tech landscape, the impact of AI remains profound, setting the stage for this next generation and set of advancements. And then the industry leadership positions start to change; you see NVIDIA rising as a bellwether, lifting the stock market across the globe. 
So all these diverse sources of new content, new data, AI hitting the stage, it's really hitting hard. So I want to get into this panel and get your thoughts on what's happening. Why is this NVIDIA news so important? Pretty much all last year and this year too, you're seeing Jensen, the CEO of NVIDIA, on every stage, kissing all the frogs. Pretty much every event, yeah, exactly. Shaking all the hands, wearing that leather jacket. NVIDIA, what a move. Yeah. Yeah, it's pretty incredible to see, John. What was really remarkable, you know, is it takes 30 years for NVIDIA to become an overnight AI success. Just two years ago they were primarily a gaming and a crypto mining company. And now two years later, we saw the data center revenue come in. They are first and foremost now an AI company, and they're selling to businesses. No longer are GPUs running games, they're now running our businesses. That's the new reality. And that's why we've seen such a profound shift in their fortunes. The growth is due to the fact that AI is the future of business. They've plugged into that, and they, I believe, have a really powerful competitive moat around their ability to deliver on that, as we'll talk about a little bit later. Let's get to that moat. I mean, the joke is, every one of us with our kids, if we bought NVIDIA stock when they said, do you know anyone at NVIDIA who can get me a better graphics card for my game, we'd be millionaires, because it's always been a culture of great performance. They've really done their work. I mean, the die was cast on performance and having great systems. But when the software investment hit many, many moons ago, no one really saw that coming. And then with the crypto bubble bursting, you saw gaming, crypto, and now AI, the world just spun in that direction. Did they get lucky with ChatGPT? 
Did OpenAI get lucky with the timing of everything? It's almost the perfect storm. And this kind of sets the tone for this next cloud generation. I mean, David, you and I were talking off camera about this high-performance capability, specialized applications with data. What a perfect storm for this next wave. It's absolutely a perfect storm, John. I think they did get lucky, to the point. I'll go ahead and throw that out there, because they were basically a graphics coprocessor, and then accidentally found that they had this great application in the generative AI stuff, where you had to have this parallel processing occurring to make these things operate at the speed you need. So you get a huge performance bump for having them. So suddenly everything went to AI, and these guys had the keys to the castle, so to speak, and the ability to run it fast. No one else was really in that market as well as them. So they got a little lucky, but I think they deserved their luck. Now it's going to be about keeping the market, keeping the momentum going. It reminded me today of the Yahoo earnings back in 1998, kind of a similar vibe. But are we going to see the thing rise up into a bubble very much like it did 20, 30 years ago? That kind of remains to be seen. Tim, you see everyone kind of groping for processors, the wave of just being in the arena. I mean, you know the whole expression, that luck is really where opportunity and preparation meet. They're in the arena. They saw InfiniBand, they went and acquired that company, they stuck it in there. They got InfiniBand, they put Grace Hoppers together. Now they've got a supercomputer. There's a little bit of luck involved, but they were on point here with their actions. Yeah, I think David's right, though. I think there was definitely some luck that played in in terms of the timing of it. 
It just so happened that there was a coincidence in the timing of what was needed and what was available. And so of course you saw crypto kind of dropping off, and even gaming, it was still on the increase, but nothing like what we're seeing from AI today and how that parallel processing is being used. It's actually a story similar to research computing, because research computing has been working in that parallel processing world for a long time, but people really didn't understand how it worked. And I think some of the chipset changes, but then also the applications that are being used by developers to help them build applications on top of this new methodology, are really helping catapult it as well, because we were kind of tapping out on the traditional processing opportunities and we needed something different. Now, we've seen one extreme here with GPUs and NVIDIA, really high performance, great product, but also very costly. Does everybody need that, though? And are there ways to use the other end of the spectrum, general-purpose processors or CPUs, to do some of the same functions? And then you've got folks in the middle that are saying, wait a second, we can capitalize on this too by creating custom silicon that is purpose-built for this use. Because a GPU can be used for multiple things, as we've seen, from gaming to crypto to AI, but if you look at some of the purpose-built custom silicon, the price-performance actually comes way down, but it can only be used for a very specific use. So I think we'll see a combination of these coming into the mix, which I'm sure we'll talk about a little bit too. Yeah, let's get into the narrative around their history. I love the concept that they were a gaming company, but again, they got lucky, they had the timing of everything, but their multifaceted journey is really about how they redefined themselves. 
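The price-performance trade-off Tim describes, general-purpose CPUs versus GPUs versus purpose-built custom silicon, can be sketched as a toy cost model. All throughput, price, and workload figures below are invented for illustration, not benchmarks:

```python
# Toy cost model for the GPU / CPU / custom-silicon trade-off discussed above.
# Every number here is a made-up illustrative figure, not a real benchmark.

def price_performance(throughput_tokens_per_sec: float, cost_per_hour: float) -> float:
    """Tokens processed per dollar of runtime; higher is better."""
    return throughput_tokens_per_sec * 3600 / cost_per_hour

accelerators = {
    # name: (throughput tokens/sec, $/hour, workloads it can serve)
    "general_cpu": (1_000, 1.00, {"ai", "web", "db", "batch"}),
    "gpu":         (50_000, 12.00, {"ai", "graphics", "crypto"}),
    "custom_asic": (40_000, 3.00, {"ai"}),  # purpose-built: cheap per token, single-use
}

for name, (tput, cost, uses) in accelerators.items():
    print(f"{name}: {price_performance(tput, cost):,.0f} tokens/$, workloads={sorted(uses)}")
```

Under these hypothetical numbers the custom ASIC wins on tokens per dollar but can serve only one workload class, which is exactly the flexibility-versus-cost tension in the passage.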
Like I said, if you were a janitor at NVIDIA 10 years ago, you're a millionaire now, and they're a big company. So you look at monopolies in the past, like Microsoft: they were a DOS operating system, then they get Windows, then they get the Office suite, they get a monopoly. You look at Amazon: they were a bookstore online that becomes a cloud player. There are moments in time, guys, where you're just in the right place at the right time. I remember back in the web services days, web services were around for a while, and all of a sudden Amazon becomes the web services cloud. So being at the right place at the right time, are we at this moment where there's a whole AI operating system, clustered systems, AI factories, that code word for "I'm going to run the AI systems"? There's a path here that says they could be the next monopoly in AI, potentially. And I do think there's that potential, because let's not forget, NVIDIA made a really big bet, saying, all right, we have this hardware, but it's difficult to develop on, and the existing APIs really weren't that capable. And so they offered their own software solution called CUDA. It's a set of APIs and it's an architecture, and they've managed to convince the developer community that it's the best way to develop games and all the other ways to use GPUs. And we have to remember, Amazon was really focused on devs as the kingmakers. The developers choose the technologies, primarily, to build these solutions on. And if you appeal to developers and give them solutions that work better, you build critical mass, which NVIDIA really has built around the developer community that understands how to use their proprietary APIs to produce the world's most compelling gaming software and artificial intelligence models. So you could argue that this big strategic bet, which was really risky back in the day, was to say, use this proprietary API. 
Don't use all the open stuff, use the things that only work with NVIDIA GPUs. They convinced the developer community to do that. That made all the difference, so that when tremendous compute was needed, wherever it came from, it could be crypto and now it's AI, they were poised to tap into that, because they had corralled that developer community. They had built the best-in-class API, which has market dominance right now. And it's very difficult to pull developers away from that and have them rewrite their apps. So I think it will happen if the competitors can come up with really compelling price points or much better models than CUDA, but until they do that, it's NVIDIA's party. And I think, yes, there's a monopoly potential here. David, we've seen the big elephants war in the past. IBM, when they had their monopoly in the 70s and 80s, got toppled by big shifts in the market. So here we have shifts going on that could be an opportunity for new entrants to get in, and you've got the global supply chain, which NVIDIA relies on. They'll have to rely on the trajectory of AI staying the same, assuming a steady state, no supply constraints, and no geopolitical issues on the horizon. If the market shifts, if the devs decide on something else, that could change the game. For instance, I'm looking at Gemini 1.5 Pro, it's incredible. Its context window has a million tokens. That could maybe change the game. So the application layer, the developers are going to come in. Is that the pivot point here, or does the infrastructure still need to be built out? How do you guys see that tension? Because Dion pointed out a great point, the devs drive everything, and open source is booming on the Llama and Mistral side of AI. But where do you host those? You still have to... I'm not really as bullish on the idea that everybody's going to move toward Nvidia each and every time. 
I think ultimately there are going to be other proprietary chips coming into play, you know, different GPUs, alternatives on demand. And also people using CPUs, which, as was just brought up, are certainly capable of running GenAI systems, maybe not at the speed those systems can otherwise be run. So that means the market's approachable. So I wouldn't go out there and start counting my proprietary dollars, because other people can certainly swoop in there. There are other alternatives. And I understand there's the API that's there, but the proprietary nature of the API could actually be an Achilles' heel. People like the open source stuff. So it's going to depend on how the cloud providers and the end users adopt this technology. What are the alternatives out there we're going to be able to leverage? And I think it's not as untouchable as people think. We come out of these earnings and these guys are great, they're going to go on forever. It never works that way. Someone always, I mean, we talked about Yahoo, someone always comes up and builds a better mousetrap. And I think that's where these guys have vulnerability right now. They're going to continue to be a market leader, by the way, not taking that away from them. But right now there are so many other alternatives we can leverage, including CPUs, where I don't have to necessarily buy these particular chipsets to get into AI. Let's get into the strategic position and the future prospects. Let's have that debate. I mean, you could also argue, and this is what I did with Dave today on theCUBE Pod, that Microsoft in the late '90s was at its all-time stock price high, but then the monopoly case kicked in, and they had a monopoly. The question is, will Nvidia have a monopoly? So I think that's a good debate. Where do they go from here, and what are the cracks in this story? 
Because again, what is moving in the market that you see that's going to create an opportunity? Where do the CPUs and the TPUs and the other systems come into play? Where do you guys see the trends? What's the analysis, what's the alternative to Nvidia running the table? Certainly from a stock perspective, people are looking at it as an investment, but companies are also getting back in the game. Dell's selling more servers. HPE's selling more servers, GreenLake has a lot more action going on. Where is this tide shifting? Can you guys analyze the growth prospects? Where are the opportunities for others? Well, I think there are two dimensions we have to talk about. When you think about GPUs, you're talking about cloud. It has to be in cloud in some fashion, whether that's a really big private cloud or public cloud, just because the cost and performance of these systems is so large, you need the scale to be able to amortize those costs. But the two dimensions that are gonna become really interesting to watch: when you start to think about edge to cloud, and you start thinking about moving some of these models all the way to the edge. We're already starting to see use cases where that's the case. In manufacturing, you're pulling those models all the way to robots. And it's not the really big models that are running on these GPUs, it's much smaller models. Microsoft has been talking about model size and model selection for over a year now, looking at how you start to build smaller, more efficient models. And then there are other companies, like Latent AI, that are starting to move some of these models out to the edge. So I think the edge-to-cloud dynamic is definitely one piece that will make others shine and give Nvidia a little bit of pressure. 
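The edge-to-cloud placement idea Tim describes, small models pushed out to edge devices, large models kept on cloud GPUs, can be sketched as a simple placement rule. The memory rule of thumb and device sizes below are assumptions for illustration only:

```python
# Hypothetical placement rule for the edge-to-cloud dynamic described above:
# small models fit on an edge device's memory, large models stay on cloud GPUs.

def place_model(params_billions: float, edge_memory_gb: float = 8.0) -> str:
    # Rough assumption: ~2 GB of memory per billion parameters at fp16.
    needed_gb = params_billions * 2
    return "edge" if needed_gb <= edge_memory_gb else "cloud"

print(place_model(1.0))   # a 1B-parameter model fits on the edge device
print(place_model(70.0))  # a 70B-parameter model needs cloud GPUs
```

A real placement layer would also weigh latency, power, and update cadence, but the size threshold captures the core point: not every workload needs a data-center GPU.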
The second dimension that I think is really interesting is when you start to talk about the devs, and how they're using some of the pre-built configurations and reference architectures and pre-built applications. So now it's not a matter of "I'm trying to develop for this particular GPU or CPU," but rather "I'm building an application at a higher level." In a simple example, it's database as a service. Instead of having a database and putting the compute and storage underneath it, managing it and optimizing it, now I just want to run the application. I want to move my data to it and run the application. I really don't care what runs underneath it. And as you start to up-level that architecture, which we're seeing in the enterprise space left, right and center, I think you'll also see another pressure coming against Nvidia. I'm with Dave. I'm not as bullish on Nvidia. That's not to take away from their success and where they're going to be for the next several quarters, but I do think there are definite headwinds coming down the pike. Let's get into the alternatives. Where can people fill in the gap, whether an entrepreneur or another supplier? Is it the cloud? Is it the specialty clouds or micro clouds? David, is it more models becoming clouds and interacting with each other? I mean, the clouds do benefit from foundation models. But the cloud is complicated, right? We know how it can be. It's almost a dream scenario to have AI make sense of those ops. Yeah, I think the cloud is definitely going to be the most convenient place to put these AI applications, and GPUs are going to power a lot of that moving forward, but it's going to be a heterogeneous mess in how we're going to deploy these AI systems. You've got to remember, the clouds are very expensive. 
And so many organizations who are looking to run AI at scale aren't necessarily going to be able to afford the cloud. They're going to want to build the thing on premise, and they're also going to build it on premise with a mixed processor model. In other words, they understand that not all these applications are going to be worthy of using a GPU, because GPUs are more expensive and use more power, so they're going to use CPUs in some instances. Also, we have these micro clouds out there that are providing GPU systems as a service, and they become a good alternative as well. They're going to be able to, I think, provide discounted prices. They're venture-backed, they're willing to do it in a loss-leader scenario. And a lot of the entrepreneurs out there are going to be able to build their infrastructure on the public cloud providers that we all know and love, on the micro clouds, as well as continue to do it on premise. And even though that's kind of a bad word in the tech world, that's where a lot of this stuff is happening. Not if it works, not if it works. Can you define micro cloud? We want to get that on the table. I think it's important that we just set context, because we're almost seeing this whole movie play over again with web services and cloud, because if I've got a micro cloud talking to another micro cloud, talking to a big cloud, I've got big cloud, medium cloud, small cloud, all with software. What is a micro cloud? Take us through your definition of what that means. That's a great question. It's an independent cloud provider that is basically providing something specific as a service, in this case, providing GPU-based platforms as a service. And that's all they do. They may not have storage in the mix. They may not have a lot of management and monitoring in the mix. 
And what they're trying to do is basically become a cloud service within your portfolio that just focuses on running AI applications extremely fast. And we call it a micro cloud because they're going after a specific, narrow version of the technology. We also see the same in the database world, and even the graphics processing world and VR world. Well, now people are looking at the AI world. A huge amount of venture capital is being pushed into building these infrastructures. And I think many people are going to find them an attractive alternative to leveraging the big-name public cloud providers, because they're going to be cheaper and they're going to be just as reliable. Well, the HPC world just got booted up big time with AI. You're starting to see the CoreWeaves of the world out there, these specialty micro clouds that are kind of domain-specific, purpose-built. I mean, is this a feature or a bug? Dion, what's your take on this? I think it's going to be a feature. Sorry about that, Dion. I think it's going to be a feature, ultimately, because we're looking for cheaper and better and more reliable alternatives. And we already know how to do multi-cloud, so we're already able to manage this complexity. We have the notion of the supercloud, which is the ability to have abstract services able to run across these platforms, and the ability to plug in different systems between them. So why not just plug in another cloud that's going to be better, cheaper, and perhaps more reliable because they're purpose-built? They're not doing 2,000 other things, they're just doing GPU services. So I'm buying that right now. But with my CIO hat on, though, I would say I don't want to deal with all these different cloud providers, negotiating with them and managing them and dealing with all their integrations and the performance issues and getting them talking with each other. 
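The hyperscaler-versus-micro-cloud choice being debated here is essentially a price-versus-SLA selection problem. A minimal sketch, with invented prices and uptime figures:

```python
# Sketch of picking a GPU provider from a mixed portfolio (hyperscaler vs micro cloud).
# All prices and uptime numbers are invented for illustration.

providers = [
    {"name": "hyperscaler", "gpu_hourly": 4.00, "uptime": 0.9999, "services": 2000},
    {"name": "micro_cloud", "gpu_hourly": 2.20, "uptime": 0.999, "services": 1},
]

def cheapest_meeting_sla(providers, min_uptime):
    """Cheapest GPU provider that still satisfies the required uptime SLA."""
    ok = [p for p in providers if p["uptime"] >= min_uptime]
    return min(ok, key=lambda p: p["gpu_hourly"])["name"] if ok else None

print(cheapest_meeting_sla(providers, 0.999))   # relaxed SLA: micro cloud wins on price
print(cheapest_meeting_sla(providers, 0.9995))  # strict SLA: forced to the hyperscaler
```

This is the countervailing force Dion raises next: once you add negotiation, integration, and management overhead per provider, the "cheapest" answer gets more complicated than a one-line filter.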
So we do have these countervailing forces that we've always had, that businesses have always had in selecting enterprise IT, which is that a large number of fine-grained suppliers is really hard to manage and make all work together. Traditionally that has been difficult, so we have to remember that. I agree that the trend is towards more specialized services that are optimized for what they do. But if we look at the broader market, going to your question, John, about things like custom silicon, there's Intel's acquisition of Habana and its Gaudi chips, and the very specialized AI training capabilities they have. Nvidia's lock isn't going to last forever. I think they're probably close to their peak of market dominance, but they have the maturity and they have the developers. And I think that's what's going to keep them along for the next couple of years, while the Intels and the AMDs get the maturity in their AI stacks and their AI chipsets and start really taking on the market. So we're going to have a lot more compute in our future. It's going to be a lot more varied, and we're going to get it from many more places, just because we're at the beginning of the demand curve for these very powerful AIs. It's interesting, the AI story is about LLMs, or foundation models. And when we did the power law, at first they called them proprietary models. OpenAI was actually called proprietary. Now they call them pioneer models. I guess they didn't like the word. But as you went down into the long tail, from the neck and torso into the tail, the open source rose up fast to fill that void. So I've got to put on the table the question that David kind of brought up with his answer, which is, the next logical question is, okay, multi-vendor, multi-platform requires open operating standards, at least some level of operational standards, at least within the company, never mind cross-company. So is there a future where the standards emerge faster with AI? 
You started to see that with cloud as it went to distributed computing and cloud operations. You're seeing a lot of that. What needs to be in place from a standards standpoint to make all this run? Because if an AI system emerges, it may not look like what we have now. What changes? What do you guys see there? Let's unpack that a little bit. What's the open part of this? Yeah, I think you're absolutely right, John. And kind of to Dion's point, we are going to see the specialization, and we're starting to see some of that. Let's face it, the public cloud, just like our traditional stacks, whether you're looking at APEX or GreenLake, they're traditional. They're a very general-purpose architecture, and we need much more specialized architectures. But Dion brings up a really good point. We saw this with data integration, where you start to infuse this fragility into your architecture, and that becomes really problematic. Something breaks, you don't know where it's broken, and trying to fix it is a nightmare. And while this is all happening, your business is down. We've seen this movie play out. Who's gonna run it? Who's gonna run the operations? So I think this is the piece that needs to mature: providers have to start collaborating and working more closely together. I do think the AI space is one where we're gonna see that have to happen, and we are in terms of model selection, because not every model works the same way, right? OpenAI and their models work very differently than Gemini, which works very differently than Cohere and Anthropic. So I think it's important to look at the ways they come together. Now, is there a way to bring those together right now? 
You're seeing that from some of the public cloud providers, where they're trying to infuse model selection, but that's at a very coarse, gross level, and we need to see much more granularity in that. But then, going back to what I was saying earlier, we also need to see how that plays out from public cloud and research computing, which is even bigger than the capability you have within public cloud, but then going the other direction too, into on-prem data centers, and then all the way to the edge. We cannot forget models being run at the edge. And so that's gonna be something we're gonna have to navigate through. Nobody's in a really great position to do this today. I think from a chip standpoint, Intel has a really interesting portfolio play, but aside from OpenVINO as their CUDA equivalent, I guess you could say, there really isn't a lot of momentum behind it to build out that portfolio and bring the developers along to build essentially an edge-to-cloud management layer, or operating system if you will, and then layer in the AI that goes on top of it. We're not seeing that yet, but that's where we have to go. Dion and David, any comment on the open piece, what needs to happen? Obviously someone's got to run operations if it's not standard. You just brought up a great point, John. I think we have a vacuum right now in the open standards and open software that need to exist in between these various GenAI systems and AI systems. I don't think people are going to allow the cloud providers to run that. They're not going to have them run the infrastructure, so it is going to be a group stepping up and developing these things to allow them to interact with one another, and we don't have anything right now. It's customized software, very much like we did integration back in the 90s, and everything's going to be a big mess. So we have to abstract this into some sort of layer that everybody agrees upon. 
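The finer-grained model selection layer Tim calls for can be sketched as a simple router: pick the cheapest model whose context window fits the request. The model names and all cost/context numbers below are hypothetical placeholders, not real provider pricing:

```python
# Sketch of a finer-grained model selection layer. Names and numbers are
# invented placeholders, not real provider models or prices.

MODELS = [
    {"name": "small-edge-model",   "context": 8_000,     "cost_per_1k": 0.0001},
    {"name": "mid-cloud-model",    "context": 128_000,   "cost_per_1k": 0.001},
    {"name": "long-context-model", "context": 1_000_000, "cost_per_1k": 0.007},
]

def select_model(prompt_tokens: int) -> str:
    """Cheapest model whose context window can hold the prompt."""
    fits = [m for m in MODELS if m["context"] >= prompt_tokens]
    if not fits:
        raise ValueError("no model fits this prompt")
    return min(fits, key=lambda m: m["cost_per_1k"])["name"]

print(select_model(4_000))    # short prompt: cheapest small model suffices
print(select_model(500_000))  # huge prompt: only the long-context model fits
```

A production router would also score on task quality, latency, and data residency, which is exactly the granularity the panel says is missing today.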
Everybody can participate in the development of it, including the processor vendors out there, the NVIDIAs, all that stuff, and then give up the proprietary nature of the software, turn it over to an open source community, and they're going to take it to the next level and provide more value for the enterprises. And the IT operator, the traditional rack-and-stack person, grinding away, with a gun to their head to become a platform engineer, can't spell microservices. I mean, it's hard to fill that gap without some sort of assistance or augmentation. So co-pilot models work well if there are some standards, right? Well, this is where enterprises are demanding a little bit more from AI than the consumers are. They want to understand that the results they get are accurate, so they want grounding. They want to ensure that their AI policies are being enforced and that they're meeting all their compliance obligations. So they want to hook into all the different models they use and have a consistent way to ensure they're being compliant and following all the regulations. And they have a legal obligation to do that. So they need to be able to hook into these different AIs. And the open source AI model community is amazing. I've been watching, I track this on GitHub. There are hundreds of projects. Their pace of innovation is a little more intense even than the commercial side, but they don't have the safety layers and they don't have a lot of the governance pieces that enterprises are going to want. And so that is the piece that's missing. And there are many AI standards organizations that have already formed; none of them have really achieved critical mass. So right now it's the wild west. If you want to run private AI, and you want to do it well and consistently and cost-manage it, we're in the very beginning stages of that. 
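The governance layer Dion describes as missing, consistent policy enforcement and auditing in front of any model, open source or commercial, can be sketched as a wrapper. The policy rule here is a deliberately trivial invented placeholder:

```python
# Sketch of an enterprise governance wrapper: enforce a policy and log every
# call before any underlying model is invoked. The blocked-terms rule is an
# invented placeholder for a real data/compliance policy engine.

audit_log = []

BLOCKED_TERMS = {"ssn", "credit_card"}  # hypothetical data-policy rule

def governed_call(model_fn, prompt: str) -> str:
    if any(term in prompt.lower() for term in BLOCKED_TERMS):
        audit_log.append(("blocked", prompt))
        return "REFUSED: prompt violates data policy"
    audit_log.append(("allowed", prompt))
    return model_fn(prompt)

# Stand-in for any underlying model, open source or commercial.
echo_model = lambda p: f"answer to: {p}"

print(governed_call(echo_model, "summarize Q3 revenue"))
print(governed_call(echo_model, "list every customer SSN"))
print(len(audit_log))  # both calls were audited, allowed or not
```

The point of putting the wrapper outside the model is that the same policy and audit trail apply no matter which AI sits behind it, which is the consistency enterprises are asking for.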
We're going to see a cottage industry of companies that are going to provide a way for you to either use your service providers to build and run your models, or do it yourself, or, more likely, a combination of the two. But all these are opportunities for startups, because these things don't really exist. And someone needs to talk sense into a lot of these model developers to ensure that we have the hooks to provide the enterprise capabilities we're going to need. It's hard to put a control point on AI when it's constantly changing, right? Generative AI is changing; there's no control point, so you need some help there. Yeah, there's another piece to this, John, which is, it's not a typical direction, but to Dion's point, there's a lot less tolerance for mistakes within the enterprise. There's a lot more expectation that it works, and it works well, and it's solid. And that's why we see a lot more commercial than open source in the enterprise today, and historically. But I do kind of wonder if it isn't time to shift that a bit, such that enterprises start to get involved in driving those standards, as opposed to taking them as de facto from the vendor community, and start driving them in terms of what they expect and what they need. And so it would require a much closer partnership between enterprises and startups, and incumbents, let's not forget them too, across the myriad of... I'm not sure. I'm not sure. I hear what you're saying, but I'm not sure I agree, because I think the enterprises don't know what they want. Managed services are easy, like OpenAI, but they've still got to run stuff on their premises. So I buy that the on-premise action is happening, because if I want to own my stuff, I want to run it in my environment, but I might not know what to do. So to me, that's the open question. I think the enterprise will say to the community, I need standards. 
And I think the opportunity for the suppliers is to rise up and make those standards happen. That'll come from entrepreneurs, because if you look at today, there's RAG developing, retrieval-augmented generation, which is just a trivial version of playing with new kinds of vector databases, which have been around for a while, but it's not 100%. It's easy to get to 60% accuracy, but it's extremely difficult to get to 90, right? And that's just on the RAG side. That's a data problem, but there's still no generative-AI-native stack yet. So I think it's still early days, but the point is, for the enterprises that are playing around with RAG, it's easy. You're starting to see the adoption, but there are so many holes in it. Like, how does someone actually run this? So if I'm an enterprise, I'm going to say, what is even standard? Is it the default? Is it the integration? We may agree to disagree on this, John, but I do strongly believe that the enterprise has to take a much more engaged position than they ever have as we go forward, especially as you start to think about data, and the edge-to-cloud continuum with regards to data. And let's not ignore the fact that the regulatory and compliance requirements on the table today, and coming down the pike, working their way through legislative bodies across the globe, are going to be incredibly complicated. If they don't get involved, the challenge is going to be, how do they effectively manage the enterprise? And I'm sorry, but I don't think we, as enterprises, can rely on just startups and incumbents to drive that forward. I think we have to be a very active participant in that conversation. I think the enterprises are going to step up and build applications to fill the void initially. And so we're going to have a bunch of applications built in specific silos, either on premise or in the cloud, to do just that. 
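The RAG pattern John mentions, retrieve the most relevant documents and augment the prompt with them, can be sketched in a few lines. The toy three-dimensional vectors below stand in for real embeddings from a vector database:

```python
# Minimal sketch of retrieval-augmented generation (RAG): rank documents by
# cosine similarity to the query embedding and prepend the best match to the
# prompt. The 3-d "embeddings" are toy stand-ins for a real vector database.

import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

docs = {
    "gpu pricing policy": [0.9, 0.1, 0.0],
    "holiday calendar":   [0.0, 0.2, 0.9],
}

def retrieve(query_vec, k=1):
    """Return the k document names most similar to the query vector."""
    ranked = sorted(docs, key=lambda d: cosine(query_vec, docs[d]), reverse=True)
    return ranked[:k]

query = [0.8, 0.2, 0.1]  # pretend embedding of "how much do GPUs cost?"
context = retrieve(query)[0]
prompt = f"Context: {context}\n\nQuestion: how much do GPUs cost?"
print(prompt)
```

The 60%-versus-90% accuracy gap in the passage lives almost entirely in this retrieval step: chunking, embedding quality, and ranking are the hard data problems, not the prompt assembly.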
I just see it happening right now. Everybody's moving so fast and furious into generative AI. They're not going to wait for these standards to emerge. They're just going to build the applications and intermix in and between them. And it's going to be a big, complex mess that we're going to have to unravel in a few years, and then retrofit with the open solutions we're talking about right now, which is the biggest downside for early adopters of generative AI. We just don't have the plumbing and the standards baked enough to repeat these things over and over again in a commoditized way. So you're saying they're going to run fast and then say to the market, fix this, otherwise you don't have my shipment. We've seen this movie before. Yeah, exactly. Okay, I buy that. Well, that's a good point. I mean, it comes back to who's going to take a leadership role in this, right? Who's going to take the leadership role to drive this forward? And I think David's absolutely right. We've already started to see that, not just with gen AI, but with cloud, where folks within a specific industry would stand up their own private cloud for their industry, right? Because they understand their industry best. So I think we'll see more of that, especially as you start to think about gen AI and the different models getting built specifically for industries. I fully expect to see more of that come about. I think you're spot on. Okay, guys, since I've got you experts here and you're talking to customers, you know all the vendors and suppliers, this wave is coming. There's going to be a set of winners and losers, right? So as these shifts happen, what does the right side of history look like here? What's the right side of the street to be on if you're a company that wants to lean in aggressively and start building its future? I'll say it's cloud, on-premises, edge. There's multi-cloud, supercloud, and whatnot.
What's the winning hand? What do the winners look like, and what do the losers look like? The winning hand is data. I mean, when I look at these generative AI applications, they're just data applications at the end of the day. They consume a massive amount of data, in different formats, in different ways. So if you're able to provide a better data platform, you're going to have a very nice next few years. And by the way, most of your stuff is baked. You know how to use it. People are just going to consume more of it. So anybody who does data, who's able to do the data hygiene and really feed these beasts with what they need, which is lots of information with the biases ripped out, that's a database operation. They're going to see the largest uptake from all of this, other than the GPU chip makers. Do Databricks and Snowflake get bigger? Is there a new entrant? Is the data warehouse completely dead at this point? I mean, data needs to be managed from day one. Is governance now a bit that's flipped at the lower level? Guys, what's the endgame on data? I mean, it's a whole market on data. I do think you're going to see a role for specialized data services. So to answer your question, John, multi-cloud competency, the ability to manage your data cross-cloud, is what all enterprises have to get good at. If you can't get at your data in the cloud, bring it all together, and deliver it to your AI, you're not going to get very far. So you really have to have the ability to mobilize data. And this is where we see specialty databases like DataStax and others coming together and saying, you know, don't put it in a big hyperscaler; put it in a place where you control your data whenever possible, run workloads wherever you need them to run, but have a really strong data story and a really strong data foundation. So I agree with David, but it's about cross-cloud capability.
So the winning hand has got to span multiple environments across clouds, manage data freely, and make it available for developers. Yeah. And I would agree. I mean, data is where people are going to make the money. How it gets done, that's going to evolve over time. GPU today, CPU tomorrow, something else the next day. At the end of the day, it kind of doesn't matter to the end user. What matters is what they're doing with the data and how they can gain insights from the data. So it's not just bits on a disk or bits on a page, but rather, what are we doing with the data, and how is that changing our customer experience, our employee experience, our business operations and supply chain? That's where the rubber really hits the road. The underlying pieces are where the magic needs to happen to get there, but that top level has to be the orientation that folks focus on. And of course, data is at the root. All right, guys, really appreciate you coming on this flash-mob power panel on the news. Real quick, give a quick update on what you're working on. Dave, we'll start with you. What's your focus right now? It's theCUBE research. Welcome to the team. What are your eyes on right now? What are you unpacking? What are you analyzing? I'm looking at the effects of generative AI and how people are approaching the adoption of it, and also looking at abstraction layers that sit above different platforms, platform heterogeneity, supercloud in essence, and how that market is moving and evolving, all the pieces and components, and what's going to be most important to the enterprises. Awesome. Tim, what's your focus right now? What are you unpacking? What are you analyzing? Yeah, I'm looking across, as I mentioned, the big three: customer experience, employee experience, and business operations and supply chain. I think those are where the rubber hits the road, but it all stems today from generative AI and cloud.
And those are two big pieces, especially as we start to think about that edge-to-cloud continuum. Dion, what's your focus? What are you working on right now? You've got a lot going on. I'm working with our CIO audience and trying to figure out two things: what are they really going to do about generative AI in 2024 and 2025, and where are they going to put their workloads in the future, given that it doesn't seem like public cloud is the sole destination anymore, that we're really going to have more of a balance. I'm trying to understand what that balance is. Gentlemen, appreciate your time. Thanks for paying it forward today in theCUBE. We appreciate you coming on. Of course, big market move, the path forward, everyone's looking at it. What do they do with multi-cloud? How do they figure out the gen AI scenario? How do they navigate technological innovation so they can get the brilliance, the innovation, and that market-capture opportunity for their customers? Thanks for coming on theCUBE. Thanks, John. Thank you. theCUBE conversation here in Palo Alto. Thanks for watching.