Hello, wonderful cloud community, and welcome to theCUBE's continuing coverage of AWS re:Invent. My name is Savannah Peterson and I am very excited to be joined by two brilliant gentlemen today. Please welcome Keith from Cockroach Labs and Jeff from AMD. Thank you both for coming in from the East Coast. How are you doing?

Not too bad, a little cold, but we're going.

Doing great.

Love that, and I love the enthusiasm. Keith, you were definitely bringing the heat in the green room before we got on, so I'm going to open this up with you. Cockroach Labs puts out a well-known and useful cloud report each year. Can you tell us a little bit about the approach and the data that you report on?

Yeah, so Cockroach Labs builds a distributed SQL database that is able to run across multiple cloud regions, multiple sites, multiple data centers, frequently in a hybrid kind of use case. And it's important for our customers to be able to compare the performance of configurations when they don't have exactly the same hardware available to them in every single location. Since we were already doing this internally for ourselves and for our customers, we decided to turn it into something we shared with the greater community, and it's been a great experience for us. A lot of people come and ask us every year, "Hey, when's the new cloud report coming out?" because they want to read it. It's been a great win for us.

How many different things are you looking at? When you're comparing configurations, I imagine there are a lot of complex variables. Just how much are you taking into consideration when you publish this report?

Yeah, so we look at microbenchmarks around CPU, network, and storage. And then for our flagship benchmark, we use the database itself, where we have the most expertise, to create a real-world benchmark across all of these instances. This year, I think we tested over 150 different discrete configurations.
And it's a bit of a labor of love for us, because not only do we consume it for best practices for our own as-a-service offering, but we share it with our customers. We use it internally to make all kinds of different decisions.

Yeah, 150 different comparisons is not a small number. And Jeff, I know that AMD's position in this cloud report is really important. Where do you fit into all of this, and what does it mean for you?

Right, so what it means for us and for our customers is that there's a good breadth and depth of testing that has gone on from the lab. Customers look at this cloud report and it helps them traverse this landscape of why to go with instance A, B, or C for certain workloads. And it really is very meaningful, because they now have real data across all those dimensional kinds of tests. So this definitely helps not only the customers but also ourselves: we can now look at ourselves more independently for feedback loops and say, hey, here's where we're doing well, here's where we're doing okay, here's where we need to improve. All those things are important for us. So I love seeing the lab present such a great, comprehensive report, and I very much appreciate it.

And I love that you're both fans of each other, obviously. Specifically, what does it mean that AMD had the best price-performance ratio tested on AWS instances?

Yeah, so when we're looking at instances, we're not just looking at how fast something is. We're also looking at how much it costs to get that level of performance, because CockroachDB, as a distributed system, has the opportunity to scale up and out. And so rather than necessarily wanting the fastest single-instance performance, which is an important metric for certain use cases for sure, the comparison of price to performance, when you can add nodes to get more performance, can be a much more economical thing for a lot of our customers.
And so AMD has had a great showing on the price-performance ratio for, I think, two years now. And it makes it hard to justify other instance types in a lot of circumstances, simply because for each transaction per second that you need, it's cheaper to use an AMD instance than it would be a competitive instance from another vendor.

I think everyone, no matter their sector, wants to do things faster and cheaper; when you're able to achieve both, it's easy to see why it's a choice many folks would like to make. So what do these results mean for CIOs and CTOs? I can imagine there's a lot of value here in the FinOps world.

Yep, I'll start with a few of them. From the C-suite, when they're really looking at the problem statement, think of it as less granular but higher level, right? They're really looking at CapEx, OpEx, sustainability, and security as an ecosystem. And then, as Keith pointed out, there's this TCO conversation that has to happen. In other words, as they're moving through this lift and shift from their on-prem into the cloud, what does that mean to them for spend? So if you're looking at the consistency of the performance and the total cost of getting from their data to their insights and conclusions: less time, more money in their pocket, and maybe a cost reduction they can pass along so they can provide better for their own customers. That's the challenge they're facing in the landscape they're driving towards, and they need guidance and help with it. And we find AMD lends itself well to that scale-out architecture that connects so well with how cloud microservices are run today.

That's not surprising to hear. Keith, what other tips and tricks do you have for CIOs and CTOs trying to reduce cloud spend and continue to excel as they're building out?

Yeah, so there were a couple of other insights that we learned this year.
One of those insights I'd like to mention is that it's not always obvious what size and shape of infrastructure you need to acquire to maximize your cost reductions. We found that smaller instance types by and large had a better TCO than larger instances, even across the exact same configurations. We kept everything else the same, and the smaller instances had a better price-performance ratio than the larger instances.

The other thing that we discovered this year that was really interesting: we did a bit of a cost analysis on networking, largely because, as a distributed system, we can span across availability zones and we can span across regions, right? And one of the things we discovered this year is that the cost for transferring data between availability zones and the cost for transferring data across regions was the same, at least within the United States. So you could potentially get more resiliency by spanning your infrastructure across regions than you would by just spanning across availability zones. You could be across multiple regions at the same cost as you were across availability zones, which for something like CockroachDB, which was designed to support those workloads, is a really important thing for us. Now, you have to be very particular about where you're purchasing your infrastructure and where those regions are, because those data transfer rates change depending on what the source and the target are. But at least within the United States, we found you could be much more survivable in a multi-region deployment while the cost stayed pretty flat.

That's interesting. It's interesting to see when you think there may be a relationship between variables and when there maybe isn't.
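Keith's price-performance framing, cost per transaction per second rather than raw speed, can be sketched roughly as below. Note that the instance names, hourly prices, and throughput figures are purely hypothetical placeholders for illustration, not numbers from the Cockroach Labs cloud report:

```python
# Rough sketch of a price-performance (cost-per-TPS) comparison.
# All prices and throughput figures are made-up placeholders,
# NOT results from the cloud report.

instances = {
    # name: (hourly_price_usd, sustained_tps)
    "small-4vcpu":  (0.20, 1_000),
    "large-16vcpu": (0.90, 3_500),
}

def cost_per_million_txns(hourly_price: float, tps: float) -> float:
    """USD to process one million transactions at a sustained TPS."""
    seconds_needed = 1_000_000 / tps
    return hourly_price * seconds_needed / 3600  # convert hours -> seconds

for name, (price, tps) in instances.items():
    print(f"{name}: ${cost_per_million_txns(price, tps):.4f} per 1M transactions")
```

With these (made-up) numbers the smaller instance comes out cheaper per transaction, which mirrors the report's observation that, for a scale-out system where you can add nodes, the cheapest cost per unit of work can beat the fastest single instance.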
So on that note, since it seems like you're both always learning, what are you excited to test or learn about looking forward? Jeff, let's start with you.

For future testing, one of the big things is certainly more of those scale-out workloads with respect to showing scale: as I'm increasing the working set, as I'm increasing the number of connections. Variability is another big thing, right? Showing that minimization from run to run, because performance is interesting, but consistency is better. And on the smaller instance sizes I was talking about earlier, this architecture lends itself so well to it because of the local caching on the CCDs: you can put a number of vCPUs on an instance that will benefit from that local caching and drive better performance at the smaller sizes, for that scale-out architecture that is so consistent with microservices. So I would be looking for more of that dimensional testing, and variability across a variety of workloads, from memory-intensive workloads to database persistence stores, as well as a blend of the two, Kafka, et cetera. There's a great breadth and depth of testing that I am looking for. And to connect more with the CTOs and CIOs at the higher level, the report could really show them that CapEx, OpEx, and sustainability picture and provide a bit more around that side, because those are the big things they're focused on, as well as security. Based on working sets, et cetera, AMD has the ability, with confidential compute around those kinds of offerings, to start to drive to those outcomes and help with what the CTOs and CIOs are looking for from compliance as well.

You're excited about a lot! No, that's great. That means you're very excited about the future.

It's a journey that continues, as Keith knows. There's always something new.

Yeah, absolutely. What about you, Keith?
What are you most excited about on the journey?

Yeah, so there are a couple of things I'd like to see us test next year. One of those is to test a multi-region CockroachDB configuration. We have a lot of customers running that configuration in production, but we haven't scaled that testing up to the same breadth that we do with our single-region testing, which is what we've based the cloud report on for the past four years. The other thing that I'd really love to see us do, and Kubernetes is kind of my technical background, is to get to a spot where we're comparing the performance of raw EC2 instances against that same infrastructure running CockroachDB via EKS, to see what the differences are there. The vast majority of CockroachDB customers are running at least a portion of their infrastructure in Kubernetes, so I feel like that would be a real great value-add to the report the next time we go about publishing it.

If you don't mind me adding to that, just to volley it back for a moment: as I was saying about how the scale-out model leverages the AMD architecture so well, with EKS specifically it's around the spin-up and spin-down. Think of the whole development life cycle, right? As they grow and shrink resources over time, those spin-ups and spin-downs are expensive, so that has to be reduced as much as possible. And I think they'll see a lot of benefit in AMD's architecture with EKS running on it as well.

The future is bright. There's a lot of hype about many of the technologies that you both just mentioned, so I'm very curious to see what the next cloud report looks like. Thank you, Keith, and the team for the labor of love that you put into that every year. And Jeff, I hope that you continue to be as well positioned as everyone's innovation journey continues. Keith and Jeff, thank you so much for being on the show with us today.
As you know, this is a continuation of our coverage of AWS re:Invent here on theCUBE. My name is Savannah Peterson and we'll see you for our next fascinating segment.