Welcome to theCUBE's continuing coverage of AMD's fourth generation EPYC launch. I'm Dave Nicholson, and I'm here in our Palo Alto studios talking to Greg Gibby, Senior Product Manager, Data Center Products at AMD, and Mohan Rokkam, Technical Marketing Engineer at Dell. Welcome, gentlemen.

Hello. Thank you. Glad to be here.

Good to see each of you. Just really quickly, I want to start out: let us know a little bit about yourselves. Mohan, let's start with you. What do you do at Dell, exactly?

So I'm a Technical Marketing Engineer at Dell. I've been at Dell for around 15 years now. And my goal is really to look at the Dell PowerEdge servers and see how customers can take advantage of some of the features we have, especially with the AMD EPYC processors that have just come out.

Greg, what do you do at AMD?

Yeah, so I manage our software-defined infrastructure solutions team, and it's really cradle to grave, where we work with the ISVs in the market, so VMware, Nutanix, Microsoft, et cetera, to integrate the features that we're putting into our processors and make sure they're ready to go and enabled. Then we work with our valued partners like Dell on putting those into actual solutions that customers can buy, and we work with them to sell those solutions into the market.

Before we get into the details on the fourth generation EPYC launch and what that means and why people should care, Mohan, maybe you can tell us a little about the relationship between Dell and AMD, how that works. And then Greg, if you've got commentary on that afterwards, that'd be great. Yeah, Mohan.

Absolutely. I mean, Dell and AMD have a long-standing partnership, especially now with the EPYC series. We have had products, and we have been doing solutions, since the first generation of EPYC. Across the whole range of the Dell ecosystem, we have integrated AMD quite thoroughly and effectively, and we really love how performant these systems are. So, yeah.

Greg, what are your thoughts?
Yeah, the other thing we need to point out is that we both have really strong relationships across the entire ecosystem: memory vendors, software providers, et cetera. We have technical relationships, and we're working with them to optimize solutions so that ultimately, when the customer buys that, they get a great user experience right out of the box.

So Mohan, I know that you and your team do a lot of performance validation testing as time goes by. I suspect that you had early releases of the fourth gen EPYC processor technology. What have you been seeing so far? What can you tell us?

I mean, AMD has definitely knocked it out of the park. Time and again, over the past four generations, in the past five years alone, we have done some database work where we have seen 5x performance. And across the board, AMD is the leader in benchmarks. We have done virtualization work where we consolidated five systems into one. We have world records in AI, we have world records in databases, we have world records in virtualization. The AMD EPYC series has been absolutely performant. I'll leave you with one number here: when we went from top-of-stack Milan to top-of-stack Genoa, we saw a performance bump of 120%, and that number just blew my mind.

So that prompts a question for Greg. Often, we industry insiders think in terms of performance gains over the last generation or the current generation. A lot of customers in the real world, however, are N minus two, you know, they're a ways back. So I guess two points on that. First of all, the kinds of increases the average person is going to see when they move to this architecture, correct me if I'm wrong, but it's even more significant than a lot of the headline numbers, because they're moving two generations. That's number one, correct me if I'm wrong on that. But then the other thing is the question to you, Greg.
I like very long, complicated questions, as you can tell. The question is, is it okay for people to skip generations? Or make the case for upgrades, I guess, is the point.

Well, yeah, so a couple of thoughts on that. First, Mohan talked about that 5x improvement we've seen over the generations. The other key point there is that we've made significant process improvements along the way, moving to seven nanometer and now five nanometer, and that's really reducing the total amount of power required for the performance that customers can realize.

When we look at why a customer would want to upgrade, I want to rephrase that as: why aren't you? There is a real cost of not upgrading. When you look at infrastructure, the average age of a server in the data center is over five years old. And if you look at the most popular processors that were sold in that timeframe, it's, you know, eight, 10, 12 cores. So now you've got a bunch of servers that you need in order to deliver the applications and meet your SLAs to your end users. All those servers pull power, they require maintenance, they have the opportunity to go down, et cetera. You've got to pay licensing and service and support costs on all of those. And when you look at all the costs that roll up, even though the hardware is paid for, it's very expensive just to keep the lights on, and that's not even talking about the soft costs of unplanned downtime, not meeting your SLAs, et cetera.

Now, if you refresh, you have processors with 32, 64, 96 cores, and you can consolidate that infrastructure and reduce your total power bill. You reduce your CapEx, you reduce your ongoing OpEx, you improve your performance, and you improve your security profile. So it really is more cost effective to refresh than not to refresh.

So Mohan, what has your experience been?
You know, kind of double clicking on this topic of consolidation, I know that we're gonna talk about virtualization and some of the results that you've seen. What have you seen in that regard? Does this favor better consolidation in virtualized environments? And are you both assuring us that the ROI and TCO pencil out on these new big, bad machines?

I mean, Greg definitely hit the nail on the head. We are seeing tremendous savings, really, if you're consolidating from two generations old. We went, as I said, five to one: you're going from five full servers, probably paid off, down to one single server. That in itself is significant in terms of licensing costs, which, with things like VMware, do get pretty expensive. If you move to a single system, yes, we are at 32, 64, 96 cores, but if you compare that to the licensing costs of 10 cores across two sockets on five servers, the savings are still pretty significant. That's one huge thing. Another thing that really drives upgrades is security; in today's environment, security becomes a major driving factor. Dell has its own cyber-resilient architecture, as we call it, and that really is integrated from the processor all the way up into the OS. Those are some of the features customers really can take advantage of to help protect their ecosystems.

So what kinds of virtualized environments did you test?

We have done virtualization across the board: primarily, of course, with VMware, but we have also looked at Azure Stack, Nutanix, PowerFlex, which is another one within Dell, and vSAN Ready Nodes. All of these, plus OpenShift. We have a broad variety of solutions from Dell, and AMD really fits into almost every one of them very well.

So where does hyper-converged infrastructure fit into this puzzle?
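The back-of-envelope consolidation math discussed above can be sketched as follows. Every number here is purely illustrative, an assumption for the sketch rather than a figure quoted by either speaker: the per-core license price, power draws, and server counts simply mirror the "five dual-socket 10-core servers down to one 96-core server" shape of the conversation.

```python
# Illustrative consolidation arithmetic: five older dual-socket, 10-core
# servers replaced by one 96-core server. All prices/wattages are assumed.

OLD_SERVERS = 5
OLD_CORES_PER_SERVER = 2 * 10       # two sockets, ten cores each
OLD_WATTS_PER_SERVER = 500          # assumed average draw per old server
NEW_CORES = 96                      # one top-of-stack socket
NEW_WATTS = 700                     # assumed average draw of the new server
PER_CORE_LICENSE = 100              # assumed annual per-core license cost

# Licensed-core comparison: 5 x 20 = 100 cores versus 96 cores.
old_cores = OLD_SERVERS * OLD_CORES_PER_SERVER
license_savings = (old_cores - NEW_CORES) * PER_CORE_LICENSE

# Power comparison: 2500 W of old servers versus one 700 W server.
old_power = OLD_SERVERS * OLD_WATTS_PER_SERVER
power_savings_watts = old_power - NEW_WATTS

print(old_cores, license_savings, power_savings_watts)
```

Even in this toy model, the licensing footprint barely grows (100 licensed cores down to 96) while five power supplies, five maintenance contracts, and five failure points collapse into one.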
We can think of a server as something that contains not only AMD's latest architecture, but also the latest PCIe bus technology and, you know, faster memory, faster storage cards, faster NICs; all of that comes together. But how does that play out in Dell's hyper-converged infrastructure, or HCI, strategy?

I mean, Dell is a leader in hyper-converged infrastructure. We have the very popular VxRail line, we have PowerFlex, which is now going into the AWS ecosystem as well, Nutanix, and of course Azure Stack. With all of these, when you look at AMD, we have up to 96 cores coming in. We have PCIe Gen 5, which means you can now connect dual-port 100 and 200 gig NICs and get line rate on those, so you can connect to your ecosystem. And I don't know if you've seen the news: 200 and 400 gig routers and switches are selling out. That's not slowing down; the network infrastructure is booming. If you look at the AI/ML side of things, the VDI side of things, accelerator cards are becoming more and more powerful, more and more popular, and of course they need that higher-end data path that PCIe Gen 5 brings to the table. DDR5 is another huge improvement in terms of performance and latencies. So when we take all this together and talk about hyperconverged, all of these add up to making sure that, A, with hyperconverged you get ease of management, but B, just because you have ease of management doesn't mean you need to compromise on anything. The AMD servers are effectively the no-compromise offering that we at Dell are able to bring to our customers.

So Greg, I've got a question a little bit from left field for you. We covered the Supercomputing Conference 2022; we were in Dallas a couple of weeks ago. And there was a lot of discussion of the current processor manufacturer battles, and a lot of buzz around fourth gen EPYC being launched and what's coming over the next year.
Do you have any thoughts on what this architecture can deliver for us in terms of things like AI? We talk about virtualization, but if you look out over the next year, do you see this kind of architecture driving significant change in the world?

Yeah, it has the real potential to do that, right from the building blocks. We have what we call our chiplet architecture, so you have an IO die, and then you have your core complexes that go around that, and we integrate it all with our Infinity Fabric. That architecture allows us, if we wanted to, to replace some of those CCDs with specific accelerators. So when we look two, three, four years down the road, that capability is already built into what we're delivering, and accelerators can easily be moved in. We just need to make sure, when we look at doing that, that the power required, the software, et cetera, all line up, and that those accelerators actually deliver better performance as a dedicated engine versus just using standard CPUs.

The other thing I would say is, look at emerging workloads. Data center modernization is one of the buzzwords you're gonna hear: cloud native, right? And this architecture really just screams support for those types of container environments. When you get into these larger core counts and the consolidation that Mohan talked about, the blast radius comes up: a lot of customers have concerns around, hey, having a single point of failure, and having more than X number of cores concerns me. If I'm in containers, that becomes less of a concern. So when you look at cloud native, containerized applications, data center modernization, AMD is extremely well positioned to take advantage of those use cases as well.
Yeah, Mohan, and when we talk about virtualization, I think sometimes we have to remind everyone that we're talking about not only virtualization that has a full-blown operating system in the bucket, but also virtualization where the containers have microservices and things like that. I think you had something to add, Mohan.

I did, and I think it goes back to the accelerator side of the business. When we're looking at the current technology and looking at accelerators, AMD has done a fantastic job of adding in features like AVX-512, and we have the bfloat16 and INT8 features. What these do is effectively act as built-in accelerators for certain workloads, especially in the AI and media spaces. In some of the use cases we look at, for example AI inference, traditionally we have used external accelerator cards. But for some of the entry-level and mid-level use cases, the CPU is gonna work just fine, especially with the newer CPUs that we are seeing this fantastic performance from. These features just help get us to the point where, if I'm at the edge, or in certain use cases, I don't need to have an accelerator card in there; I can run most of my inference workloads right on the CPU.

Yeah, you know the game. It's an endless chase to find the bottleneck, and once we've solved the puzzle, we've created a bottleneck somewhere else. You know, back to the supercomputing conversations we had, specifically about some of the AMD EPYC processor technology and the way that Dell is packaging it up and leveraging things like connectivity: that was one of the things that was also highlighted, this idea that increasingly connectivity is critically important, not just for supercomputing, but for high performance computing that's finding its way out of the realm of, you know, Los Alamos, and down to the enterprise level. Gentlemen, any more thoughts about the partnership, or maybe a hint at what's coming in the future?
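To make the INT8 point above a bit more concrete: low-precision inference works by mapping floating-point weights onto 8-bit integers, which is exactly the kind of arithmetic that CPU instruction extensions in the AVX-512 family accelerate. A minimal, purely illustrative sketch of symmetric int8 quantization follows; the function names and sample values are our own for illustration, not anything from AMD or Dell.

```python
# Hypothetical sketch of symmetric int8 quantization, the numeric trick
# behind CPU-side low-precision inference. Values are illustrative only.

def quantize_int8(values):
    """Map floats onto int8 range [-127, 127] with a shared symmetric scale."""
    scale = max(abs(v) for v in values) / 127.0
    quantized = [round(v / scale) for v in values]
    return quantized, scale

def dequantize(quantized, scale):
    """Recover approximate float values from the int8 representation."""
    return [q * scale for q in quantized]

weights = [0.5, -1.27, 0.02, 1.0]
q, s = quantize_int8(weights)
approx = dequantize(q, s)
print(q, approx)
```

The round trip loses at most half a quantization step per weight, which is why entry- and mid-level inference can tolerate int8 math, and why hardware support for it on the CPU removes the need for a discrete accelerator in those cases.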
I know that the original AMD announcement was announcing and previewing some things that are rolling out over the next several months. So let me just toss it to Greg. What are we gonna see in 2023 in terms of rollouts that you can share with us?

That I can share with you, yeah. So look forward to seeing more advancements in the technology at the core level. We've already announced that our product code-named Bergamo will have up to 128 cores per socket. And then, as we look at how we continually address this demand for data, this demand for "I need actionable insights immediately," look for us to continue to drive performance leadership in our products that are coming out, and to address specific workloads with accelerators where appropriate and where we see a growing market.

Mohan, final thoughts?

On the Dell side, of course, we have very rich and configurable options with the AMD EPYC servers. But beyond that, you will see a lot more solutions around some of what Greg has been talking about: the next generation of processors, or the next update to processors. You'll start seeing some of those, and you'll definitely see more use cases from us on how customers can implement them and take advantage of the features. I mean, it's just exciting stuff.

Exciting stuff indeed, gentlemen. We have a great year ahead of us as we approach the holiday season. I wish both of you well. Thank you for joining us. From here in the Palo Alto studios, again, Dave Nicholson here. Stay tuned for our continuing coverage of AMD's fourth generation EPYC launch. Thanks for joining us.