Welcome to theCUBE's continuing coverage of AMD's fourth-generation EPYC launch. I'm Dave Nicholson, joining you here in our Palo Alto studios. We have two very interesting guests to dive into some of the announcements that have been made, and maybe take a look at this from an AI and ML perspective. Our first guest is Milind Damle. He's a senior director for software and solutions at AMD. And we're also joined by Seamus Jones, who's the director of server engineering at Dell Technologies. Welcome, gentlemen, how are you?

I'm very good, thank you. Welcome to theCUBE.

So let's start out really quickly. Seamus, give us a thumbnail sketch of what you do at Dell.

Yeah, so I'm the director of technical marketing engineering here at Dell. Our team really takes a look at the technical server portfolio and solutions, and ensures that we can look at the performance metrics, benchmarks and performance characteristics, so that we can give customers a good idea of what they can expect from the server portfolio when they're looking to buy PowerEdge from Dell.

Milind, how about you? What's new at AMD? What do you do there?

Great to be here. Thank you for having me. At AMD, I'm the senior director of performance engineering and ISV ecosystem enablement, which is a long-winded way of saying we do a lot of benchmarks, improve performance, and demonstrate, with wonderful partners such as Seamus and Dell, the combined leverage that AMD fourth-generation processors and Dell systems can bring to bear on a multitude of applications across the industry spectrum.

Seamus, talk about that relationship a little bit more. The relationship between AMD and Dell, how far back does it go? What does it look like in practical terms?

Absolutely. Ever since AMD re-entered the server space, we've had a very close relationship. It's one of those things where we offer portfolio solutions to our customers no matter what generation of the portfolio they're demanding, whether from AMD or from their competitor. What we're finding is that with each generational improvement, they're just getting better and better, and there are really exciting things happening at AMD at the moment. As we engineer those CPU stacks into our server portfolio, we're really seeing unprecedented performance across the board. So I'm excited about the history. My team and Milind's team work very closely together, so much so that we're communicating almost on a daily basis around portfolio platforms and updates, around the benchmarks, testing and validation efforts.

So Milind, are you happy with these PowerEdge boxes that Seamus is building to house your baby?

We are delighted. It's hard to find stronger partners than Seamus and Dell. With AMD's second-generation EPYC server CPUs, we already had indisputable industry performance leadership. And then with the third and now the fourth generation CPUs, we've just increased our lead over the competition. We've got so many outstanding features at the platform and the CPU level. Everybody focuses on the high core counts, but there's also DDR5 memory, the IO and the storage subsystems. So we believe we have a fantastic performance, performance-per-dollar and performance-per-watt edge over the competition, and we look to partners such as Dell to help us showcase that leadership.

Well, so Seamus, yeah, go ahead.
What I'd add, Dave, is that through the partnership we've had, we've been able to develop subsystems and platform features that historically we couldn't have: things around thermals, power efficiency and efficiency within the platform that mean customers can get the most out of their compute infrastructure.

So this is going to be a big question moving forward as next-generation platforms are rolled out. There's the potential for people to have sticker shock. You talk about something that has eight or 12 cores in a physical enclosure versus 96 cores, and I guess the question is, do the ROI and TCO numbers look good for someone to make that upgrade? Seamus, you want to hit that first?

You guys are interviewing me. Absolutely, yeah, sorry, absolutely. I'll tell you what: at the moment, customers really can't afford not to upgrade. Right? We've taken a look at the cost basis of keeping older infrastructure in place, let's say five- or seven-year-old servers that are drawing more power, maybe are poorly utilized within the infrastructure, and take more and more effort and time to manage, maintain and keep in production. So as customers look to upgrade or refresh their platforms, what we're finding is that they can achieve a dramatic consolidation, sometimes five, seven, or eight to one, depending on which platform they have historically and which one they're looking to upgrade to.

Within AI specifically and machine learning frameworks, we're seeing really unprecedented performance. Milind's team partnered with us to deliver multiple benchmarks for the launch, some of which we're still continuing to see the goodness from, things like TPCx-AI as a framework. And I'm talking here specifically about CPU-based performance, even though in a lot of those AI frameworks you would also expect to have GPUs, and all four of the platforms we're offering on the AMD portfolio today offer multiple GPU options. So we're seeing a balance between a huge amount of CPU gain and performance, as well as more and more GPU offerings within the platform. That was a real challenge for us because of the thermal constraints. I mean, GPUs are going up to 300, 400 watts, and these CPUs at 96 cores are quite demanding thermally. But what we're able to do, through some unique smart cooling engineering within the PowerEdge portfolio, is take a look at those platforms and make the most efficient use of them by having things like telemetry within the platform, so that we can dynamically change fan speeds and give customers the best performance without throttling, based on their needs.

Milind, theCUBE was at the supercomputing conference in Dallas this year, Supercomputing 2022, and a lot of the discussion was around not only advances in microprocessor technology, but also advances in interconnect technology. How do you manage that sort of research partnership with Dell when you aren't strictly just focusing on the piece that you're bringing to the party? It's kind of a potluck. We mentioned PCIe Gen 5, or 5.0, whatever you want to call it. New DDR, storage cards, NICs, accelerators, all of those things. How do you keep that straight when those aren't things that you actually build?

Well, excellent question, Dave. As we are developing the next platform, obviously the ongoing relationship is there with Dell, but we start way before launch, right? Sometimes it's multiple years before launch.
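To make the consolidation and power math discussed above concrete, here is a minimal back-of-the-envelope sketch in Python. All of the inputs (server counts, per-server wattage, electricity price) are hypothetical placeholders chosen only to show the arithmetic; they are not figures from Dell or AMD.

```python
# Hypothetical consolidation estimate: replacing many older servers with
# fewer, denser ones. All inputs are illustrative placeholders.

def consolidation_savings(old_servers, old_watts, new_servers, new_watts,
                          price_per_kwh=0.15, hours_per_year=8760):
    """Estimate yearly energy cost before and after a server consolidation."""
    old_kwh = old_servers * old_watts * hours_per_year / 1000
    new_kwh = new_servers * new_watts * hours_per_year / 1000
    return {
        "consolidation_ratio": old_servers / new_servers,
        "old_energy_cost": old_kwh * price_per_kwh,
        "new_energy_cost": new_kwh * price_per_kwh,
        "yearly_savings": (old_kwh - new_kwh) * price_per_kwh,
    }

# Example: a 7:1 consolidation of 350 W legacy nodes onto 500 W new nodes.
print(consolidation_savings(old_servers=70, old_watts=350,
                            new_servers=10, new_watts=500))
```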
So we are not just focusing on the super-high core counts at the CPU level and the platform configurations, whether it's single socket or dual socket. We are looking at it from the memory subsystem, from the IO subsystem; PCIe lanes for storage are a big deal in this generation, for example. So it's really a holistic approach. And look, core counts are more important at the higher end for some customers, in the HPC space and some of the AI applications, but on the lower end you have database applications or other ISV applications that care a lot about those other subsystems. So different things matter to different folks across verticals. We partnered with Dell way early in the cycle, and it's really joint co-engineering.

Seamus talked about the focus on AI with TPCx-AI. We set five world records in that space, just on that one benchmark, with AMD and Dell. So it was a fantastic kickoff, across a multitude of scale factors. But TPCx-AI is not the only thing we're focusing on. We're also collaborating with Dell and Deci AI on some transformer-based natural language processing models, for example. So it's not just the CPU story; it's the CPU, the platform, the memory subsystem, the software, the whole thing delivering goodness across the board to solve end-user problems in AI and other verticals.

Yeah, the two of you are at the tip of the spear from a performance perspective. I know it's easy to get excited about world records, and they're fantastic. But Seamus, you know that end-user customers might immediately have the reaction, well, I don't need a Ferrari in my data center; what I need is to be able to do more with less.

Well, aren't we delivering that also?

And you mentioned natural language processing, Milind. Seamus, are you thinking in 2023 that a lot more enterprises are going to be able to afford to do things like that? What are you hearing from customers on this front?

I mean, while adoption of the top-bin CPU stack is definitely the exception, not the rule, today, we are seeing marked performance even when we look at the mid-bin CPU offerings from AMD, and those are the most commonly sold SKUs. When we look at customers' implementations, really what we're seeing is that they're trying to make the most, not just of the dollars spent, but also of the whole subsystem that Milind was talking about. The fact is that balanced memory configs can give you marked performance improvements, not just at the CPU level, but actually all the way through to the application performance. So it's trying to find the correct balance between the application needs, your budget, power draw, and infrastructure within the data center, right? Because you could be purchasing and looking to deploy the most powerful systems, but if you don't have an infrastructure with the right power (and that's a large challenge happening right now) and the right cooling to deal with the thermal profile of these systems, you want to ensure you can accommodate them, not just for today, but in the future. So it's finding that balance.

If I may just add on to that: when we launched, not just the fourth generation, but any generation in the past, there's a natural tendency to zero in on the top bin and say, wow, you've got so many cores. But as Seamus correctly said, it's not just that one top core-count OPN. It's the whole stack. And we believe with our fourth-generation CPU stack, we've simplified things so much.
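Since CPU-based transformer inference comes up in this exchange, below is a minimal sketch of what that looks like in practice. It assumes the Hugging Face Transformers library and PyTorch on a CPU-only host; the model name is just a common public checkpoint chosen for illustration, and is not tied to the AMD, Dell, or Deci AI work discussed above.

```python
# Minimal CPU-only transformer inference sketch (illustrative only).
# Assumes: pip install torch transformers
from transformers import pipeline

# A small, publicly available sentiment model; any text-classification
# checkpoint would follow the same pattern. device=-1 forces CPU execution.
classifier = pipeline(
    "text-classification",
    model="distilbert-base-uncased-finetuned-sst-2-english",
    device=-1,
)

print(classifier("Server consolidation cut our power bill dramatically."))
```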
We don't have dozens and dozens of offerings. We have a fairly simple SKU stack, but we also have a very efficient SKU stack. So even though at the top end we've got 96 cores, the thermal budget that we require is fairly reasonable. And look, with the energy crisis going on, especially in Europe, this is a big deal. Not only do customers want performance, but they are also super focused on performance per watt. And so we believe with this generation, we've really delivered not just on raw performance, but also on performance per dollar and performance per watt.

Yeah, and it's not just Europe. We're here in Palo Alto right now, which is in California, where we all know the cost of an individual kilowatt-hour of electricity, because it's quite high. So thermals, power, cooling, all of that goes together, and that drives cost. So it's a question of how much you can get done per dollar. Seamus, you made the point that you don't just have a one-size-fits-all solution, that it's fit for function. I'm curious to hear from the two of you what your thoughts are from a general AI and ML perspective. If you hang out on any kind of social media right now, we're starting to see the rise of these experimental AI programs that are being presented to the public. Some will write stories for you based on a prompt. Some will create images for you. One of the more popular ones will create sort of your superhero alter ego. I can't wait to do it; I just got the app on my phone. So those are all fun and they're trivial, but they get us used to this idea that, wow, these systems can think on their own in a certain way. What do you see the future of that looking like over the next year in terms of enterprises, and what they're going to do with it? Milind, do you have anything to say?

Yeah. So the couple of examples you mentioned, Dave, are, I guess, a blend of novelty and curiosity. People using AI to write stories or poems, or even carve out little jokes, check grammar and spelling: very useful, but still kind of in the realm of novelty. In the mainstream, in the enterprise, look, in my opinion AI is not just going to be a vertical, it's going to be a horizontal capability. We're seeing AI deployed across the board, once the models have been suitably trained, for disparate functions ranging from fraud detection or anomaly detection, both in the financial markets and in manufacturing, to things like the image classification or object detection that you talked about, in the sort of core AI space itself. So we don't think of AI necessarily as a vertical, although we are showcasing it with a specific benchmark for launch; we really look at AI emerging as a horizontal capability. And frankly, companies that don't adopt AI on a massive scale run the risk of being left behind.

Yeah, absolutely. AI as an outcome is really something that companies are adopting, and the frameworks you're now seeing as the novelty pieces Milind was talking about are really indicative of the under-the-covers activity that's been happening within infrastructures and within enterprises for the past, let's say, five, six, seven years. The fact is that you have object detection within manufacturing, to be able to do defect detection on manufacturing lines, and now that can be done on edge platforms, all the way out at the device.
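Performance per watt and performance per dollar come up repeatedly in this segment, so here is a small sketch of how those metrics are typically computed and compared. The throughput, power, and price figures are made-up placeholders purely to show the arithmetic; they are not measured results for any EPYC SKU or PowerEdge system.

```python
# Illustrative performance-per-watt / performance-per-dollar arithmetic.
# All numbers are hypothetical placeholders, not measured results.

def efficiency(throughput, avg_watts, system_price):
    """Return normalized efficiency metrics for one configuration."""
    return {
        "perf_per_watt": throughput / avg_watts,
        "perf_per_dollar": throughput / system_price,
    }

old_gen = efficiency(throughput=1000, avg_watts=500, system_price=20000)
new_gen = efficiency(throughput=2200, avg_watts=650, system_price=28000)

# "X% higher" corresponds to a ratio of (1 + X/100); e.g. a claimed 55%
# higher perf/watt means the new figure is 1.55x the old one.
ratio = new_gen["perf_per_watt"] / old_gen["perf_per_watt"]
print(f"perf/watt improvement: {(ratio - 1) * 100:.0f}% higher")
```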
So you're no longer only having to have things done in the data center; you can bring it right out to the edge and do high-performance inferencing there. Now, not necessarily training at the edge, but the inferencing models especially, so that you can have more and better use cases. Things like smart cities with video detection, and especially during COVID we saw a lot of hospitals and a lot of customers using image and spatial detection within their video feeds to determine which employees were at risk. So there are a lot of different use cases that have been coming around. I think the novelty aspect of it is really interesting, and I know my daughters love that portion of it, but what's been happening in the enterprise space has been exciting for quite a period of time. We're just now starting to see it come to light in more consumer-relevant use cases. So the technology that's been developed in the data center around all of these different use cases is now starting to feed in, because we do have more powerful compute at our fingertips, and we do have the ability to put that framework and infrastructure right out at the edge. I know, Dave, in the past you've said things like the data center of 20 years ago is now in my hand as my cell phone, and that's a fact. It's exciting to think where it's going to be in the next 10 or 20 years.

One terabyte, baby. One terabyte. It's mind-boggling.

It's mind-boggling, and it makes me feel old.

Yeah, me too. And Seamus, that all sounded great. All I want is a picture of me as a superhero, though. So you guys are already way ahead of the curve. On that note, Seamus, wrap us up with a summary of the highlights of what we just went through, in terms of the performance you're seeing out of this latest-generation architecture from AMD.

Absolutely. So within the TPCx-AI frameworks that Milind's team and my team worked on together, we're seeing unprecedented price-performance. The fact that you can get a 220% uplift gen-on-gen for some of these benchmarks, and that you can have a five-to-one consolidation, means that if you're looking to refresh platforms that are historically legacy, you can get a huge amount of benefit, both in reducing the number of units you need to deploy and in the amount of performance you get per unit. Milind mentioned earlier CPU performance and performance per watt; specifically, on the two-socket 2U platform using the fourth-generation AMD EPYC, we're seeing 55% higher CPU performance per watt. For people who aren't necessarily looking at these statistics every server generation, that is a huge leap forward. That, combined with 121% higher SPEC scores as a benchmark, is huge. Normally we see, let's say, a 40 to 60% performance improvement on the SPEC benchmarks; we're seeing 121%. And while that's really impressive at the top bin, we're actually seeing large percentage improvements across the mid bins as well, things in the range of 70 to 90% performance improvements in those standard bins. So it's a huge improvement in performance and power efficiency, which means customers are able to save energy, space and time, depending on their deployment size.

Thanks for that, Seamus. Sadly, gentlemen, our time has expired.
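Picking up the edge-inferencing thread from earlier in this segment, here is a minimal sketch of CPU-only inference on an edge node using ONNX Runtime. The model path and input shape are hypothetical placeholders; any exported detection or classification model would follow the same pattern.

```python
# Minimal CPU-only edge inference sketch with ONNX Runtime (illustrative).
# Assumes: pip install onnxruntime numpy; "model.onnx" is a placeholder path.
import numpy as np
import onnxruntime as ort

# Force the CPU execution provider, as on a GPU-less edge node.
session = ort.InferenceSession("model.onnx",
                               providers=["CPUExecutionProvider"])

input_name = session.get_inputs()[0].name
# Dummy image batch; a real deployment would feed preprocessed camera frames.
frame = np.random.rand(1, 3, 224, 224).astype(np.float32)

outputs = session.run(None, {input_name: frame})
print("output shapes:", [o.shape for o in outputs])
```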
With that, I want to thank both of you. It's been a very interesting conversation. Thanks for being with us, both of you. And thanks for joining us here on theCUBE for our coverage of AMD's fourth-generation EPYC launch. Additional information, including white papers and benchmarks, plus editorial coverage, can be found on doeshardwarematter.com.