Welcome back to San Jose. We're here at GTC24. People call it the Woodstock of AI. It's not really Woodstock; it's the business of AI, and it's all happening here. This is theCUBE. I'm Dave Vellante. Leo Leung is here. He's the Vice President of OCI and Technology at Oracle. Leo, it's great to see you again. It's been a while.

Yeah, it's great to see you, Dave.

Yeah, so the journey of OCI has really been misunderstood. A lot of people will go back and say, well, Oracle poo-pooed the cloud. Well, not really. Larry just said, look, the cloud is databases and servers and storage and software, which is all true. But of course, it takes a while to get that right. And judging from the results, you guys got it right. Give us your version of the OCI story.

Yeah, it really started in 2016. That's when we launched what is now called Oracle Cloud Infrastructure. I still remember the days when it was a few hundred engineers and we were starting from nothing. And now it's a six-billion-dollar-run-rate business.

Yeah, so I've been following Oracle for a long time and have great respect for its security, its mission criticality. As I look at it, you've got a highly differentiated strategy. You're not really doing the commodity cloud thing. I think of it as the mission-critical cloud, the database cloud, the Oracle applications cloud, and the ecosystem around that. Is that a fair description, that it's really those mission-critical workloads where you guys differentiate and focus?

Yeah, that's definitely where we started, which would be insane for a startup: to begin with the most critical workloads a customer has. But that's how we started. And what's really interesting is that the business has progressed from those mission-critical types of workloads, whether it's financials or supply chain or transactions, into the AI space.
And that's kind of why we're here. We've had a great few years running these big AI workloads for companies that are training new models, whether it's a Cohere or other customers training models, as well as customers using us for inferencing, including Microsoft itself, which uses us for inferencing for Bing. So from this database, critical-application cloud, as you say, to now an AI cloud, it's been an interesting evolution.

Well, database technology has become really interesting over the last 10 years, right? It was getting kind of boring for a while. You guys dominated that business. But the database has morphed to include a lot of different data types and formats. You've introduced MySQL HeatWave, which we've written about and talked about a lot. But talk to me about the database itself and why it's so important as the center of the universe, if you will, because it supports virtually everything, including the autonomous capabilities you're building, the applications you support, and now AI.

Yeah, well, obviously this is the AI conference, as you say, but if you go to just about any session, one of the big topics is data, right? Where that data is, who controls it, where it's located. And so much of that critical data is still within databases. So we see it as a really interesting opportunity to take that converged approach you talked about. Again, a lot of the world's companies store their mission-critical transactional data inside Oracle Database, and now they want to be able to do AI on that data: for example, to use vectors to anonymize that data and make it available to models, so that proprietary internal data can help those models serve up the best results. We see that as the next trend.
And again, our position as a database company and as a cloud company for critical workloads puts us, I think, in a really good position to address that.

Well, vectors are a good example of how you're extending. You're not the only one; other database suppliers are also bringing vector embeddings into their databases. We've always said it feels to us like a feature, not really a market. Would you agree with that?

I mean, we think of it as an important technology where, again, we want to make it simpler for customers to access that capability without deploying yet another database in their infrastructure. It's interesting: when I talk to customers, they themselves will admit, well, we have about a dozen database or data-management technologies in our enterprise, and honestly, we would love to simplify that scope a little bit. We think we have good solutions for that, and that's how we think about it as well: you want access to that data, you want control over that access, you want to be able to locate it, and that vector capability is something you want to add on to an existing data store, versus implementing something else.

So how should we think about the Oracle AI stack? You don't make your own chips or design your own silicon, but you pretty much have AI everywhere else. How would you describe that stack?

Yeah, it's very much a complete enterprise stack. We have it in our infrastructure, we have it in our data platform, we have AI services, for example, our new generative AI service, as well as things like computer vision and document understanding, and it's embedded within our SaaS applications. So you can think of it that way.
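[Editor's note: the vector pattern described above, embedding proprietary records and retrieving the closest matches to ground a model's answer, can be sketched in a few lines of Python. The document titles and embeddings below are toy values invented for illustration; a real deployment would use an embedding model and a database-native vector index rather than an in-memory list.]

```python
import math

# Toy document store: each record carries a pre-computed embedding.
# In practice the embedding would come from an embedding model and
# live alongside the row data in a vector column of the database.
docs = [
    ("Q4 supply-chain report", [0.9, 0.1, 0.0]),
    ("HR performance-review guide", [0.1, 0.8, 0.2]),
    ("Payment-transaction runbook", [0.7, 0.2, 0.6]),
]

def cosine(a, b):
    """Cosine similarity between two vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def retrieve(query_vec, k=2):
    """Return the k document titles closest to the query embedding."""
    ranked = sorted(docs, key=lambda d: cosine(query_vec, d[1]), reverse=True)
    return [title for title, _ in ranked[:k]]

# A query embedded near the "supply chain" region of the toy space:
print(retrieve([0.9, 0.0, 0.1]))
# → ['Q4 supply-chain report', 'Payment-transaction runbook']
```

The retrieved records would then be passed to the model as context, which is the sense in which the proprietary data "serves up the best results" without retraining the model.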
Another way to think about it is the different types of business users that are going to need AI and want to interact with it. We cover everyone from the builders who are building and training the foundation models, to the developers who are trying to add AI capabilities to their applications, all the way up to the end users who really don't care about any of that stuff. They just want better job descriptions or an easier performance-review process, where the AI is just under the covers, powered by the stack. They couldn't care less about GPUs and that type of stuff.

So there's a big gold rush for GPUs. We've seen it with all the big three hyperscalers, and, no offense, I don't put you in the hyperscaler camp.

Oh, we've got to change that.

Well, we should talk about that in a second, because I get a lot of grief for that from my friends at Oracle. They say, well, define a hyperscaler, and I'm like, ah, I'm gonna... but anyway, we'll come back to that. You're in that mix. We saw your capex jump last quarter, and your guidance was for more capex, so that's a clear indicator of demand for GPUs. As I started to say, you don't design your own chips, but you've got one of everything: you've got Intel, you've got AMD, you've got Arm with Ampere, and you're doing stuff with NVIDIA. It's horses for courses, right? So how should we think about your strategy with regard to chips and silicon and all this alternative processing that's been going on?

Yeah, it's very much a best-of-breed strategy. Certainly other providers are getting into the chip-design business, but in reality, customers tend to be looking for NVIDIA. So one of the announcements we had yesterday was expanding our partnership with NVIDIA to adopt their new Grace Blackwell technology. And I think they're still the cutting edge, right?
When it comes to both the GPU and their superchip type of architecture, some of the stuff I've seen about it is really amazing, because having one company optimize across those different types of chipsets and provide that solution is something that really only a company like NVIDIA could do. And we just get to take advantage of that technology with our services and our infrastructure on top of it.

Yeah, so Jensen, in his keynote yesterday, basically said we need bigger chips. Is that how you see it? Bigger AI chips, that's what your customers are demanding?

Some of them, yeah. Some of them are using what we call our supercluster, which is tens of thousands of GPUs in a single system. And certainly if you look at that, and at the architecture that Blackwell is, you're going to get way more power with way more power efficiency. As a CSP, we do care about that kind of stuff. There are other customers that don't have those requirements. I think the majority of customers don't really need that. They may need a slice of it. They may need it for real-time inferencing. And when you look at Grace Blackwell, or even the older generations of the technology, it's really great stuff to feed that inferencing, to feed the use cases that customers have. So in some cases, yeah, that ridiculously massive machine he had on stage is the right solution. In other cases, it's a slice of that technology.

So, because he talks about that, he said, we need to sell systems. He said, basically, we build systems, then we break them apart and sell them as different SKUs, as piece parts. My question is, how should we think about training and inference? Are they different spaces? I almost infer from listening to Jensen that the same infrastructure can do both. I've also heard him say on earnings calls that today's training GPUs will be tomorrow's inference GPUs.
And I'm like, hmm, does that mean it's a depreciated asset and you're just rolling it down? Or is it really the same infrastructure? And then, of course, I think of my iPhone: it's got inferencing in there, it's got GPUs. And I think about the edge and low power and low cost. So I'm trying to squint through that. Can you help me understand how you see the different markets and the different requirements for those workloads?

Yeah, I think there are going to be the very, very large-scale, consumer-facing types of inferencing products, like your ChatGPT, or our customer SoundHound, which takes voice commands from a Mercedes or a Hyundai and turns them into, okay, you're going to play that song, or you're going to give those directions. There's going to be a different scale for that versus an enterprise, which is going to have a few thousand end customers, maybe 10,000. They're not going to need a massive cluster behind their inferencing engine. So I do see it as a spectrum where, yes, some of the very biggest consumer-facing products are absolutely still going to require the massive Grace Blackwell type of machine behind the scenes. But there are many, many others that won't. They'll hit the commoditization curve, as you say, and be very happy with a virtualized A100 or H100 behind the scenes, and the end user won't know the difference. In some cases, the use case may be more real-time, where you literally want that super-fast response, and then you're going to go up the scale to the most advanced, highest-horsepower platform, and you're going to be willing to pay the price. In other cases, maybe not.
Okay, let's have that academic discussion about hyperscalers. If you asked me to define one, I'd be stumbling and bumbling, but let's try. You would agree AWS, Google, and Azure are hyperscalers. Why should we, and I'm open-minded on this, put Oracle in that camp? Why should we be counting Oracle as a hyperscaler? You've got a legitimate cloud, no question about it. You've done the engineering work, and it's a really good cloud. You guys were first to have Cloud at Customer, with the exact same homogeneous infrastructure as in the cloud, which brings huge benefits for customers: consistency, ease of migration, data movement, et cetera. But why are you a hyperscaler?

Well, a few different reasons. One, footprint. Our global footprint is roughly equivalent; you could argue we actually have more regions and more presence in more places than anybody else. That's number one. Number two, the types of customers we have now. Maybe a few years ago you could say, well, not really; now we've got customers like Uber running on us, and you would say Uber is a mega-scale company serving users all around the world. You can't serve a customer like that unless you're a hyperscaler. Three, we offer things like our supercluster, where we have customers today using computers that take up an entire data center, over 30,000 GPUs in a single machine, to serve a customer. So by those types of attributes, I would say we're in that mix.

So I'm always in the interesting position of defending Oracle, because everybody loves to pound on Oracle: ah, prices are too high, et cetera. But I recognize the quality of the infrastructure. I talk to a lot of Oracle customers, a lot of Exadata customers, and they're actually really happy. They buy the next one sight unseen because it just prints money for them, when you talk to the big banks and the like.
But my antagonist, Charles Fitzgerald, I don't know if you know him, Charles Fitzie, we call him, says there's no comparison because of the capex. So you may have more footprint than anybody, but it's the capex, man, that determines a hyperscaler. So I ask him, well, what's the level at which you cross over? And there's not really a good answer there. But is capex an indicator, or is that a false positive?

I think at some level it is, and I think we're there as well. When you look at the latest earnings, we've been hovering around the seven-to-eight-billion-dollar mark, and based on the latest guidance, it's going to grow from there. So you can't say we're not spending money.

No. Listen, when you have a chairman- and founder-led company that understands and appreciates technology and insists that you invest in product development, it shows.

And that's purely for our cloud, right? We don't have warehouses, we don't have a search business, we don't have a gaming business, we don't have an office-productivity business, none of that. This is pure cloud infrastructure and services for enterprises.

Well, Oracle's stock is close to an all-time high, you guys are firing on all cylinders, you've got the AI going, you've got the autonomous database, you're still the database king. Leo, thanks so much for coming back on theCUBE.

Yeah, thanks, great to see you. Thanks for having me.

You bet. All right, keep it right there. More action from GTC24. This is Dave Vellante. John Furrier's in the house. You're watching theCUBE.