For those of you who've been following the cloud database space, you know that MySQL HeatWave has been on a technology tear over the last 24 months, with Oracle claiming record-breaking benchmarks relative to other database platforms. So far, those benchmarks remain industry-leading, as competitors have chosen not to respond, perhaps because they don't feel the need to, or maybe because they don't feel that doing so would serve their interests. Regardless, the HeatWave team at Oracle has been very aggressive about its performance claims, making lots of noise, challenging the competition to respond, and publishing its scripts to GitHub. So far there are no takers, but customers seem to be picking up on these moves by Oracle, and it's likely the performance numbers resonate with them.

Now the other area we want to explore, which we haven't thus far, is the engine behind HeatWave, and that is AMD. AMD's EPYC processors have been the powerhouse on OCI running MySQL HeatWave since day one. Today we're going to explore how these two technology companies are working together to deliver these performance gains and some compelling TCO metrics. In fact, a recent Wikibon analysis from senior analyst Marc Staimer made some TCO comparisons for OLAP workloads relative to AWS, Snowflake, GCP, and Azure databases. You can find that research on wikibon.com.

And with that, let me introduce today's guests: Nipun Agarwal, senior vice president of MySQL HeatWave, and Kumaran Siva, corporate vice president for strategic business development at AMD. Welcome to theCUBE, gentlemen.

Thank you. Thank you, Dave.

Nipun, you and I have talked a lot about this. You've been on theCUBE a number of times talking about MySQL HeatWave. But for viewers who may not have seen those episodes, maybe you could give us an overview of HeatWave and how it's different from competitive cloud database offerings.

Sure. MySQL HeatWave is a fully managed MySQL database service offering from Oracle.
It's a single database which can be used to run transaction processing, analytics, and machine learning workloads. In the past, MySQL was designed and optimized for transaction processing, so customers of MySQL, when they had to run analytics or machine learning, would need to extract the data out of MySQL into some other database or service. MySQL HeatWave offers a single database for running all kinds of workloads, so customers don't need to extract data into some other database. In addition to being a single database, MySQL HeatWave is also very performant compared to single-purpose databases, and it is very price competitive. So the advantages are: a single database, very performant, and very good price performance.

Yes, and you've published some pretty impressive price performance numbers against competitors. Maybe you could describe those benchmarks and highlight some of the results, please.

Sure. One thing to note is that the performance of any database is going to vary; the mileage varies based on the size of the data and the specific workload. That's the first thing to know. So what we have done is publish multiple benchmarks. We have benchmarks on TPC-H and TPC-DS, and at different data sizes, because based on the customer's workload, the mileage is going to vary. We want to give customers a broad range of comparisons so that they can decide for themselves. In a specific case, running a 30 terabyte TPC-H workload, HeatWave is about 18 times better price performance compared to Redshift, about 33 times better price performance compared to Snowflake, and 42 times better price performance compared to Google BigQuery. So this is on 30 terabyte TPC-H.
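To make the "Nx better price performance" framing concrete, here is a minimal sketch of the arithmetic behind such a claim. All dollar and runtime values below are invented placeholders chosen to reproduce an 18x ratio; they are not Oracle's published figures.

```python
# Hypothetical illustration of how a price-performance ratio such as
# "18x vs. Redshift" is derived: cost to complete the same workload once.
# All inputs are made-up placeholder values, not published benchmark data.

def workload_cost(cluster_cost_per_hour: float, runtime_hours: float) -> float:
    """Total cost to run the benchmark workload once; lower is better."""
    return cluster_cost_per_hour * runtime_hours

# Placeholder numbers: a cheaper cluster that also finishes faster.
heatwave_cost = workload_cost(cluster_cost_per_hour=16.0, runtime_hours=1.0)
competitor_cost = workload_cost(cluster_cost_per_hour=96.0, runtime_hours=3.0)

advantage = competitor_cost / heatwave_cost  # the "Nx better" headline number
print(f"{advantage:.0f}x better price performance")
```

The key point the sketch captures is that price performance compounds two factors, cost rate and runtime, which is why a system that is both cheaper per hour and faster can show multiplicative advantages.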
Now, if the data size is different or the workload is different, the characteristics may vary slightly, but this is just to give a flavor of the kind of performance advantage MySQL HeatWave offers.

Right, and then my last question before we bring in Kumaran. We've talked about the secret sauce being the tight integration between hardware and software, but would you add anything to that? What is that secret sauce in HeatWave that enables you to achieve these performance results, and what does it mean for customers?

Right, so there are three parts to this. One is that HeatWave has been designed with a scale-out architecture in mind, so we have invented and implemented new algorithms for scale-out query processing for analytics. The second aspect is that HeatWave has been really optimized for commodity cloud, and that's where AMD comes in. For instance, many of the partitioning schemes we have for processing in HeatWave are optimized for the L3 cache of the AMD processor. The thing which is very important to our customers is not just the sheer performance but the price performance, and that's where we have had a very good partnership with AMD, because not only does AMD help us provide very good performance, but also very good price performance. A big part of all the numbers I was showing is because we are running on AMD, which provides very good price performance. So that's the second aspect. And the third aspect is MySQL Autopilot, which provides machine-learning-based automation. So it's really these three things: new algorithms designed for scale-out query processing, optimization for commodity cloud hardware, specifically AMD processors, and third, MySQL Autopilot. That combination gives us this performance advantage.

Great, thank you. So that's a good segue to AMD and Kumaran. Kumaran, what is AMD bringing to the table?
What are, for instance, the relevant specs of the chips that are used in Oracle Cloud Infrastructure, and what makes them unique?

Yeah, thanks, Dave, that's a good question. OCI is a great customer for us. They use what we call top-of-stack devices, meaning they have the highest core count and also very, very fast cores. These are currently Zen 3 cores; I think the HeatWave product is right now deployed on Zen 2, but it will shortly be on Zen 3 as well. In the case of OCI, we provide 64 cores, the largest devices that we build. Because of the large number of CPU cores in a single package, and therefore the increased density of the node, you end up with this fantastic TCO equation: the cost per deployed service like HeatWave ends up being extraordinarily competitive, and that's a big part of the contribution we're bringing here.

Yeah, so Zen is the AMD microarchitecture which you introduced, I think in 2017, and it's the basis for EPYC, the enterprise-grade line that you really attack the enterprise with. Maybe you could elaborate a little bit and double-click on how your chips contribute specifically to HeatWave's price performance results.

Yeah, absolutely. So in the case of HeatWave, as Nipun alluded to, we have very large L3 caches. In our very top-end parts, like the Milan-X devices, we can go all the way up to 768 megabytes of L3 cache, and that gives you enormous performance gains, and that's part of what we're seeing with HeatWave today. Note that they're currently on the second-generation, Rome-based product, the EPYC 7002 series line running with the 64 cores, but as time goes on, they'll be adopting the next-generation Milan as well.
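Nipun mentioned that HeatWave's partitioning schemes are sized to the processor's L3 cache. As a rough illustration of that idea, the sketch below picks a per-partition row count so each partition's working set stays cache-resident. The row width and headroom fraction are assumptions for illustration; only the 768 MB Milan-X cache figure comes from the conversation, and this is not HeatWave's actual algorithm.

```python
# Sketch of cache-aware partition sizing: choose partitions small enough
# that each one's data fits in the L3 cache while it is being processed.
# ROW_BYTES and CACHE_FRACTION are illustrative assumptions.

L3_BYTES = 768 * 1024 * 1024   # top-end Milan-X part: 768 MB of L3 cache
ROW_BYTES = 64                  # assumed width of an in-memory row segment
CACHE_FRACTION = 0.5            # leave headroom for other working data

def rows_per_partition(l3_bytes: int, row_bytes: int, fraction: float) -> int:
    """Largest partition, in rows, whose data stays L3-resident."""
    return int(l3_bytes * fraction) // row_bytes

def num_partitions(total_rows: int) -> int:
    """Number of cache-sized partitions needed for a table of total_rows."""
    size = rows_per_partition(L3_BYTES, ROW_BYTES, CACHE_FRACTION)
    return -(-total_rows // size)   # ceiling division

print(rows_per_partition(L3_BYTES, ROW_BYTES, CACHE_FRACTION))
```

The design intuition is the one both speakers describe: when a partition fits in L3, the query operators working over it avoid round trips to main memory, which is where much of the per-core speedup comes from.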
And the other part of it, too, is how our chiplet architecture has evolved. With the first generation, Naples, way back in 2017, we had multiple memory domains and sort of a NUMA architecture. Today, we've really optimized that architecture: we use a common I/O die that has all of the memory channels attached to it. What that means is that scale-out applications like HeatWave are able to scale very efficiently as they go from a small domain of CPUs to, for example, the entire chip, all 64 cores. That scaling has been a key focus for AMD, and being able to design and build architectures that take advantage of it, and then have applications like HeatWave that scale so well on them, has been a key aim of ours.

Right, and Zen 3, moving up the Italian countryside. Now, you've taken the somewhat unusual step of making the benchmark parameters public on GitHub. HeatWave is relatively new, and people felt that when Oracle gained ownership of MySQL it would let it wilt on the vine in favor of Oracle Database, so you lost some ground, and now you're getting very aggressive with HeatWave. What's the reason for publishing those benchmark parameters on GitHub?

Okay, so the main reason for us to publish price performance numbers for HeatWave is to communicate to our customers a sense of the benefits they're going to get when they use HeatWave. But we want to be very transparent, because as I said, the performance advantages for customers may vary based on the data size and the specific workloads. So one of the reasons to publish all the scripts on GitHub is transparency. We want customers to take a look at the scripts, know what we have done, and be confident that we stand by the numbers we are publishing. And they're very welcome to try these numbers themselves.
In fact, we have had customers who have downloaded the scripts from GitHub and run them on our service to validate. The second aspect is that in some cases there may be deviations between what we are publishing and what the customer would like to run in their production deployment. So this provides an easy way for customers to take the scripts, modify them in ways that suit their real-world scenario, and run them to see what the performance advantages are. So that's the main reason: first, transparency, so that customers can see what we are doing for the comparison, and second, if they want to modify the scripts to suit their needs and then see what HeatWave's performance is, they are very welcome to do so.

So have customers done that? Have they taken the benchmarks? I mean, if I were a competitor, honestly, I wouldn't get into that food fight because of the impressive performance, unless I had to. Have customers picked up on that, Nipun?

Absolutely. In fact, we have had many customers who have benchmarked the performance of MySQL HeatWave against other services, and the fact that the scripts are available gives them a very good starting point. They've also tweaked those queries in some cases to see what the delta would be. In some cases, customers got back to us saying, hey, the performance advantage of HeatWave is actually slightly higher than what was published; what is the reason? And the reason was that the customers were trying the latest version of the service, while our benchmark results had been posted, let's say, two months back. The service had improved in those two to three months, and customers actually saw better performance. So yes, absolutely, we have seen customers download the scripts, try them, modify them to some extent, and then do the comparison of HeatWave with other services.

Interesting. Maybe a question for both of you: how is the competition responding to this?
They haven't said, hey, we're going to come up with our own benchmarks, which is very common. You oftentimes see that, although, for instance, Snowflake hasn't responded to Databricks, so that's not their game. But if customers are actually putting a lot of faith in the benchmarks and using them for buying decisions, then a response is inevitable. How have you seen the competition respond to the MySQL HeatWave and AMD combo?

Maybe I can take the first crack at that from the database service standpoint. When customers have more choice, it is invariably advantageous for the customer, because the competition is going to react. The way we have seen the reaction is that we believe the other database services are going to take a closer look at their price performance, because if you're offering such good price performance, the vendors are already looking at it. And we have seen instances where they have offered, let's say, discounts to customers to at least close the gap to some extent. The second thing is in terms of capability. One of the things I should have mentioned earlier is that not only does MySQL HeatWave on AMD provide very good price performance on, say, a small cluster, it does so all the way up to a cluster size of 64 nodes, which has about 1,000 cores. The point is that HeatWave performs very well both on a small system and at high scale-out. This again is one of those differentiators compared to other services, and we expect that other database services will have to improve their offerings to provide the same good scale factor, which customers are now starting to expect and see in MySQL HeatWave.

Kumaran, anything you'd add to that? I mean, you guys are an arms dealer; you love all your OEMs, but at the same time you've got chip competitors, silicon competitors. How do you see the competitive landscape?
So I'd say the broader answer, the big picture for AMD, is that we're maniacally focused on our customers, right? OCI and Oracle are huge and important customers for us, and this particular use case is extremely interesting, both in that it takes advantage very well of our architecture and that it pulls out some of the value that AMD brings. From a big-picture standpoint, our aim is to execute: to bring out generations of CPUs, to say what we do and do what we say. From that point of view, we're hitting the schedules we commit to, bringing out the latest technology, and delivering it in a TCO value proposition that generationally keeps OCI and HeatWave ahead. That's the crux of our partnership here.

Yeah, the execution's been obvious the last several years. Kumaran, staying with you: how would you characterize the collaboration between AMD engineers and the HeatWave engineering team? How do you guys work together?

I'd say we're in very, very deep collaboration. There are a few aspects where we've been working together very closely on the code: to optimize for the large L3 cache that AMD has and take advantage of that, and also to take advantage of the scaling. Our architecture is chiplet-based, so we have the CPU cores on what we call CCDs, and with the inter-CCD communication there are opportunities to optimize at the application level. That's something we've been engaged with. In the broader engagement, we're going back multiple generations with OCI now, and there's a lot of input that now resonates in the product line itself. So we value this very close collaboration with HeatWave and OCI.

Yeah, and the cadence, Nipun, you and I have talked about this quite a bit. The cadence has been quite rapid.
It's like this constant cycle: every couple of months I turn around and there's something new in HeatWave. A question again for both of you: what new things do you think organizations and customers are going to be able to do with MySQL HeatWave if you look out the next 12 to 18 months? Is there anything you can share at this time about future collaborations?

Right. Look, 12 to 18 months is a long time; there's going to be a lot of innovation, a lot of new capabilities coming out in MySQL HeatWave. But even based on what we are currently offering, the trend we are seeing is that customers are bringing more classes of workloads. We started off with OLTP for MySQL, then it went to analytics, then we extended it to mixed workloads, and now we offer machine learning as well. So one trend is that more and more classes of workloads are coming to MySQL HeatWave. The second is scale: the data volumes people are using HeatWave to process, across these mixed workloads, analytics, machine learning, and OLTP, are increasing. Along the way, we are making it simpler to use and more cost-effective to use. For instance, last time we talked, we had introduced real-time elasticity, and that's a very popular feature, because customers want the ability to scale out or scale down very efficiently. That's something we provided. We also provided support for compression. All of these capabilities are making it more efficient for customers to run a larger part of their workloads on MySQL HeatWave, and we will continue to make it richer over the next 12 to 18 months.

Thank you. Kumaran, anything you'd add to that? We'll give you the last word as we wrap.

Yeah, absolutely. In the next 12 to 18 months we will have our Zen 4 CPUs out, so these could potentially go into the next generation of the OCI infrastructure.
This would be with the Genoa and then Bergamo CPUs, taking us to 96 and then 128 cores, with 12 channels of DDR5 memory. This capability, when applied to an application like HeatWave, could open up another order of magnitude of use cases, and we're excited to see what customers can do with it. It certainly will make this service, and the cloud in general, this cloud migration, even more attractive. So we're pretty excited to see how things evolve.

Yeah, the innovations are coming together. Guys, thanks so much; we've got to leave it there. I really appreciate your time.

Thank you. Thank you.

All right, and thank you for watching this special CUBE conversation. This is Dave Vellante, and we'll see you next time.