We're back at the SC22 Supercomputing Conference in Dallas. My name's Paul Gillin, my co-host, John Furrier, SiliconANGLE founder, and a huge exhibit floor here. So much activity, so much going on in HPC, and much of it around the chips from AMD, which has been on a roll lately, and in partnership with Dell. Our guests are Brian Payne, Dell Technologies, VP of Product Management for ISG, Mid-Range Technical Solutions, and Raghu Nambiar, Corporate Vice President of Data Center Ecosystems and Application Engineering at AMD. That's quite a mouthful. And gentlemen, welcome, thank you. Thanks for having us. This has been an evolving relationship between your two companies, obviously a growing one, and Dell was part of the big rollout of AMD's new chips last week. Talk about how that relationship has evolved over the last five years. Yeah, sure. Well, it goes back to the advent of the EPYC architecture. We were there from the beginning, partnering well before the launch five years ago, thinking about, hey, how can we come up with a way to solve customer problems and address workloads in unique ways? That was the origin of the relationship. We came out with some really disruptive and capable platforms, and it's continued since then, all the way to the launch last week, where we introduced four of the most capable platforms we've ever had in the PowerEdge portfolio. Yeah, I'm really excited about the partnership with Dell. As Brian said, we have been partnering very closely for the last five years, since we introduced the first generation of EPYC. We collaborate on system design, validation, performance benchmarks, and more importantly, on software optimizations and solutions to offer an out-of-the-box experience to our customers, whether it is HPC or databases, big data analytics or AI. You know, you guys have been on theCUBE, you're veterans, 2012, 2014, back in the day. So much has changed over the years. 
Raghu, you were the founding chair of the TPC's AI benchmark effort. We've talked about the different iterations of PowerEdge servers, so much has changed. Why the focus on these workloads now? What's the inflection point we're seeing here at Supercomputing? It feels like we've been in this run-the-ball, gain-a-yard, move-the-chains mode, but I feel like there's a moment coming where there's going to be an unleashing of innovation around new use cases. Where are the workloads? Why the performance? What are some of those use cases right now that are front and center? Yeah, if you look at today, the enterprise ecosystem has become extremely complex. People are running traditional workloads like relational database management systems alongside a new generation of workloads in AI and HPC, and in fact HPC augmented with AI technologies. What customers are looking for is, as I said, an out-of-the-box experience; time to value is extremely critical. Unlike in the past, customers don't have the time and resources to run months-long POCs, so that's one area we are focusing on, working closely with Dell to give an out-of-the-box experience. Again, the enterprise application ecosystem is really becoming complex. And, as you mentioned, industry-standard benchmarks are designed to give a fair comparison of performance and price-performance for end customers. Brian's team and my team have been working closely to demonstrate our joint capabilities in the AI space with a set of TPCx-AI benchmark results; that was a major highlight of our launch last week. Brian, you're showing the demo in the booth at Dell here. Yeah. That demo, the product is available. What are you seeing, what use cases are customers rallying around now, and what are they doubling down on? Yeah, you know, Raghu teed it up well. 
Really, data is the currency of business and all organizations today, and that's what's pushing people to figure out both traditional workloads as well as new workloads. In the traditional workload space, you still have ERP systems like SAP, et cetera, and we've announced world records there: 100-plus percent improvements in our single-socket system, 70 percent in dual-socket. We actually posted a 40 percent advantage over the best general result just this week. So we're excited about that in the traditional space, but what's exciting? Why are we here? Why are people thinking about HPC and AI? It's about how we make use of that data, data being the currency, and how we push in that space. Raghu mentioned the TPCx-AI benchmark. You talk about how we work together: we announced, in collaboration, nine world records in that space. In one case, it's a 3x improvement over the prior generation. So the workloads people care about are: how can I process this data more effectively? How can I store it and secure it more effectively? And ultimately, how do I make decisions about where we're going, whether it's a scientific breakthrough or a commercial application? That's what's really driving the use cases and the demand from our customers today. I think one of the interesting trends we've seen over the last couple of years is a resurgence of interest in task-specific hardware for AI. In fact, venture capital firms invested $1.8 billion last year in AI hardware startups, and these companies are not necessarily doing CPUs or GPUs; they're doing accelerators, FPGAs, ASICs. You have to be looking at that activity and what these companies are doing. What are you taking away from that? How does it affect your own product development plans, both on the chip side and on the system side? I think the future of computing is going to be heterogeneous. 
CPUs solving certain types of problems like general-purpose computing, databases, big data analytics; GPUs solving problems in AI and visualization; and GPUs, FPGAs, and other accelerators offloading some tasks from the CPU and providing real-time performance. And of course, software optimizations are going to be critical to stitch everything together. Whether it is HPC or AI or other workloads, as I said, heterogeneous computing is going to be the future. And for us as a platform provider, heterogeneous solutions mean we have to design systems capable of supporting that. So as you think about the compute power, whether it's a GPU or a CPU continuing to push the envelope in computation and power consumption, how do we design a system that can be incredibly efficient and also support the scaling to solve those complex problems? That gets into challenges around liquid cooling, but also making the most out of air cooling. Not only are we driving up the capability of these systems, we're actually improving the energy efficiency: in the most recent systems we launched around the CPU, which is still at the heart of everything today, we're seeing 50 percent improvements gen-to-gen in performance-per-watt capability. So it's about how we package these systems in effective ways and make sure our customers can get the advertised benefits, so to speak, of the new chip technologies. Yeah, to add to that: performance, scalability, and total cost of ownership are the key considerations, but now energy efficiency has become more important, tied to our commitment to sustainability. One of the things we demonstrated last week was that with our new generation of EPYC Genoa-based systems, we can do a five-to-one consolidation, significantly reducing the energy requirements. 
The power is huge, costs are going up, it's a global issue. Yeah, it is. How do you squeeze more performance out at the same time? I mean, smaller, faster, cheaper. Paul, you wrote a story this weekend about hardware and AI making hardware so much more important. You've got more power requirements, you've got sustainability, but you need more horsepower, more compute. What's different in the architecture? If you guys could share, today versus years ago, what's different as these generations deliver step-function value increases? So one of the major drivers, from my perspective, is the latest generation of processors: the five-nanometer technology brings efficiency and density, so we are able to pack 96 processor cores in a socket; in a two-socket system, you're talking about 192 processor cores. And of course, other enhancements like the IPC uplift and bringing DDR5 and PCIe Gen 5 to the market offer an overall performance uplift of more than 2.5x for certain workloads, while significantly reducing the power footprint. I'm sorry, I was just going to say, architecturally speaking, how do we take the 96 cores, surround them, and deliver a balanced ecosystem to make sure we can get the I/O out of the system and have the right data storage? You'll see a 60 percent improvement in total storage in the system. I think in 2012 we were talking about 10-gig Ethernet; now we're on to 100, with 400 on the forefront. So it's about how we keep up with this increased compute power, both offload and core computing, and make sure we've got a system that can deliver the desired results. So the little things like the bus, the PCIe cards, the NICs, the connectors have to be rethought. Is that what you're getting at? The GPUs, which are huge power consumers. Yeah, absolutely. 
So cooling: we introduced what we call Smart Cooling as part of our latest generation of servers. The thermal design inside a server is a complex system, right? And doing it efficiently matters, because of course fans consume power. Those are the kinds of considerations we have to think through to make sure you're not throttling performance because you're not keeping the chips at the right temperature, which ultimately hurts the productivity of the investment. It's our responsibility to put that thought in and deliver systems that are going to work. You mentioned data too. One of the big discussions going into the big Amazon show coming up, re:Invent, is egress costs. Yeah. So now you've got compute, and how you design for data latency and processing. It's not just contained in a machine; you've got to think about that machine talking to other machines. Is there an intelligent network developing? What's the future? Well, this is an area that's fun, and Dell's in a unique position to work on this problem, right? We house 70 percent of the mission-critical data that exists in the world. How do we bring that closer to compute? How do we deliver system-level solutions? So recently we announced innovations around NVMe over Fabrics. Now you've got NVMe technology in the SAN; how do we connect that more efficiently across the servers, and then guide our customers to make use of it? Those are the kinds of challenges we're working on, trying to unlock the value of the data by making sure it performs. And there are a lot of lessons learned from classic HPC and from big data analytics, the Hadoops of the world, with distributed processing for crunching large amounts of data. 
With the growth of the cloud, you see some pundits saying that data centers will become obsolete in five years and everything's going to move to the cloud. Obviously, the data center market is still growing and is projected to continue to grow. But what's the argument for captive hardware, for owning a data center these days, when the cloud offers such convenience and, allegedly, cost benefits? I would say the reality, and I think the industry at large has acknowledged this, is that we're living in a multi-cloud world, and multi-cloud methods are going to be necessary to solve problems and compete. In some cases, whether it's security or latency, there's a push to have things in your own data center, and then of course there's growth at the edge, right? That's really turning things on their head, if you will, getting compute closer to where the data is being generated. So I would say we're going to live in this edge, cloud, and core data center environment, with different cloud providers delivering solutions and services where it makes sense, and it's incumbent on us to figure out how we stitch together that data platform, that data layer, and help customers synthesize this data to generate the results they need. You know, one of the things I want to get into on the cloud, you mentioned that, Paul, is that we see the rise of graph databases. Is that on the radar for AI? Because a lot more graph data is being brought in. The database market's incredibly robust; it's one of the key areas people want performance out of. And as cloud native becomes the modern application development model, a lot more infrastructure-as-code is happening, which means the internet and the networks and the processors should be programmable. Right. So graph databases have been one of those things. Have you guys done any work there? 
What's some data there you can share on that? Actually, we have worked closely with a company called TigerGraph; they are in the graph database space. We have done a couple of case studies, one on the healthcare side and the other on the financial side for fraud detection. This is an emerging area, and we've been able to demonstrate industry-leading performance for graph databases. We're very excited about it. It's interesting, it brings up vertical versus horizontal applications. Where is AI and HPC shining? Is it in horizontal or vertical solutions? What's your vision? Yeah, well, this is a case where I'm also a user. I own our analytics platform internally. We have a chatbot for our product development organization to figure out, hey, what trends are going on with the systems we sell, whether it's how they're being consumed or what we've sold. We actually use graph database technology to power that chatbot. So I'm in a position where I want to get these new systems into our environment so we can deliver. Graphs underlie most machine learning models. Yeah, yeah. There's so much to talk about in this space and so little time, and unfortunately we're out of it. So, a fascinating discussion. Brian Payne, Dell Technologies; Raghu Nambiar, AMD; congratulations on the successful launch of your new chips and the growth in your relationship over these past years. Thanks so much for being with us here on theCUBE. Super, thank you very much. It's great to be back. We'll be right back from Supercomputing 22 in Dallas.