We're here in theCUBE covering the innovations in AI and supercomputing, all things HPC, and how the high performance computing sector is evolving and changing the world. And in this segment, we're going to take a trip inside compute and HPC solutions and look at the innovations that Dell is driving with its partners, its line of PowerEdge servers, and its overall HPC solutions. And with me are Armando Acosta, who's the director of HPC product management, and Shreya Shah, who's the portfolio strategist for AI and HPC, both with Dell Technologies. Folks, welcome back to theCUBE. Good to see you again. Thank you for having us. All right, let's start with the big picture, Armando. The trends: how do you think about the current state of HPC in the context of the latest trends? Obviously AI is front and center, and data analytics is big. What are the latest advancements in technology, as it relates to AI and data analytics, that you're seeing? Yeah, that's a great question, Dave. So when you look at HPC today, I know we've been talking about the convergence for the last five years, but we truly do see that convergence happening. And what I mean by the convergence is that we see customers asking to run HPC, AI, and data analytics workloads on the same cluster to gain economies of scale and scope, return on investment, and total cost of ownership. And when you look at what customers want to do there, that really changes the way they want to build their environments, right? Now they want heterogeneous environments. They want the ability to have not only CPU and memory nodes, but also nodes with accelerators in them, right? When you look at AI, this is driving different architectures with CPUs and GPUs, and then essentially, how do you tie this all together and how do you make it interoperable?
So we're really excited about where customers are going and how they're solving these problems using HPC, AI, and data analytics, but it's definitely changing and driving what we're trying to do with our portfolio, and we're making sure that we meet those needs. Great, thank you, Armando. Shreya, based on what Armando just said, how does that inform your strategy? I mean, what specifically are you doing in the portfolio to map to these trends? Yeah, so as Armando said, the data sets are growing. The amount of compute power that's required is immense, right? We're talking about more and more cores, higher frequencies, more memory, more bandwidth. And when you put this together, if you look at the compute and accelerator space, be it CPUs, GPUs, FPGAs, or even DSAs, which we think of as domain-specific accelerators, really specialized ASICs for these workloads, you're seeing that they're getting bigger, right? And the amount of thermal design power is growing at a phenomenal rate. So to take these accelerators and really get the maximum value out of them, we have to size up our portfolio effectively to be able to bring the performance, really performance per dollar per watt, which is what we focus on. Right, I mean, I think that's the right way to look at it. And you're right, there are all these alternative processors, CPUs, GPUs, as you mentioned, but then all the supporting components around them to create a balanced system. I wonder if we could talk about training versus inference. A lot of the AI work we've seen has been focused on training. We've written that inference is going to explode. AI inferencing, you know, you have all this GPT hype, which is pretty impressive. And it seems like we're rethinking where you do the training and where you run these models. You've got latency considerations.
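Shreya's performance-per-dollar-per-watt framing can be sketched as a simple comparison. This is a minimal illustration only: the system names and all the figures below are made up for the sketch, not Dell benchmarks.

```python
# Toy comparison of systems on a performance-per-dollar-per-watt metric.
# All system names and figures are hypothetical, not Dell benchmarks.

def perf_per_dollar_per_watt(perf_tflops, price_usd, power_watts):
    """Normalize raw performance by both acquisition cost and power draw."""
    return perf_tflops / (price_usd * power_watts)

systems = {
    "cpu-only node": {"perf_tflops": 5.0, "price_usd": 20_000, "power_watts": 800},
    "4-gpu node": {"perf_tflops": 120.0, "price_usd": 150_000, "power_watts": 2_800},
    "8-gpu node": {"perf_tflops": 250.0, "price_usd": 350_000, "power_watts": 6_500},
}

# Rank best-first; the "winner" depends entirely on the made-up numbers above.
ranked = sorted(systems.items(),
                key=lambda kv: perf_per_dollar_per_watt(**kv[1]),
                reverse=True)

for name, spec in ranked:
    print(f"{name}: {perf_per_dollar_per_watt(**spec):.2e} TFLOPS per dollar-watt")
```

The point of normalizing by both dollars and watts is that raw peak performance alone can favor a system that is uneconomical to buy or to power.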
I've talked to customers who say, well, we're going to put this on-prem because we want to show off our supercomputing complex to potential donors; you hear that from universities. People are worried about IP leakage. Are you seeing customers rethink where they're training their models, and where do you see inference fitting? Absolutely. So you're absolutely right. It's easier, when you're just starting off, to say, I'm going to just start in the cloud. What happens is that very quickly we come back to this performance per dollar per watt metric. If you really want to capitalize on and maximize that, it makes sense to have a hybrid approach: as you get more and more efficient and as you understand your environment, bring it back on-prem. We have a really great breadth of portfolio. We talked about inferencing, we talked about training, but there's also, you know, we talked about thermal design power. How do you support this in your environment? Do you need liquid cooling? Can you stay with air cooling? And so we have this balanced portfolio where, for the heavy workloads like training that require massive amounts of processing power, we have a system, the XE9680, that we have focused on maximum GPU-to-GPU coherency from a memory aspect to be able to handle these training workloads. On the flip side of it, we have the R760xa, which is really fantastic for when you finish training and need to implement the inferencing, be it in the core data center or close to the near edge, when you think about the spectrum of cloud, core, and edge. Okay, so I was going to ask you, what does this all mean for Dell? And I think you just answered it. Essentially you're saying you're optimizing for these specific use cases and workloads. Is that correct? Absolutely. So it's the use cases, but also the implementation. You know, I talked about this balance, about these TDPs exploding.
Well, we have the air-cooled systems and then we have liquid-cooled systems as well. So for example, the XE8640 is a fantastic fit for our customers if they're doing the mixed workload that Armando talked about, both HPC and AI, and we have a complementary system, the XE9640, that is the liquid-cooled version of it. So depending on where you are in your journey, from a data center aspect and from a workload aspect, we have the right fit for you. Thank you for that. So Armando, what about solutions? You know, we always love to talk about product, but really it's an outcome: it's taking a solution and applying it to my business to get an outcome. So solutions are really important. What specific HPC solutions are you delivering? Yeah, so it really goes back to what Shreya said earlier, right, the use case and what vertical market that customer plays in. So what we do is we call these Dell Validated Designs. And with these Dell Validated Designs, we look at specific markets. So for example, we look at research and government, we look at financial services, we look at life sciences, we look at manufacturing. And with these Dell Validated Designs we actually give you a true end-to-end point of view of how you go and build these environments, right? We look at the different compute building blocks. As Shreya said, hey, maybe you need that NVLink-to-NVLink communication. So that's a good building block; the XE9680 is perfect for you. In some other cases, they say, hey, I don't need NVLink, but I want to use an R760xa because I do want accelerators. So we look at the different server compute building blocks. We look at the network building blocks. So whether you want to run InfiniBand, or now there's interest in Ethernet: what can we do with Ethernet, and can we run MPI on top of Ethernet? So there's interesting stuff we're looking at there.
And then last but not least is, what are your storage options, right? If you look at market research today, most of our customers have four to five tiers of storage in their HPC, AI, and data lake environments. So they need us to come in with a point of view and really help them understand: hey, if I want to run this specific financial services use case, and I need low latency and I need the answer as fast as possible, what's the architecture I choose, right? What's the server building block? What's the fabric I choose? What's the storage I choose in order to get that best result? And like you said, it's a business outcome, right? We love to talk about technology because that's what we do, but in the end it's, how do we enable our customers to go faster and get that faster time to value? And this is why we do these validated designs: I don't want them to use their important time and resources doing benchmarking, optimization, and tuning. Let us go do that hard work for you, so that you can use your valuable time to work on your problem and on your application, because that's where the value is. You know, you've got to build a cluster, you have to make all these things interoperable. We want to do that up-front hard work for you so that you don't have to waste your valuable time and you can get that faster time to value. So how does that work? You do these validated designs in your lab, you work with your customers within these specific industries, you mentioned research, financial services, life sciences, manufacturing, and then you validate these designs and publish them, is that right? Yes, sir. So, you know, I'm going to name-drop here because we've got some really smart people in our lab. We have engineers that essentially have deep expertise. One example: there's a gentleman in our lab, his name is Joshua Wege.
He has been working on manufacturing codes for the last 25 years. And so what we do is we work with experts like him, and he looks at all the different manufacturing codes, ANSYS Fluent, Siemens, Abaqus. And we look at those codes and we optimize based on what customers want: performance per watt, performance per dollar, or essentially a good, better, best configuration, right? And where we really try to look for the sweet spots is this: when you talk to manufacturing customers who run these ISV codes, they just can't run an unlimited number of cores, because that will double or triple their licensing costs, and then that total cost of ownership story goes out the window. So Joshua goes and looks at these codes and says, hey, you know, 96 cores isn't good for you; the sweet spot on this new architecture, whether it be Intel or AMD, is 32 cores. And, oh, by the way, you get to stay at the same licensing cost, but you get an uptick in performance of 15 or 20%, whatever it might be. And not only that, he looks at the fabrics, Ethernet, InfiniBand, and he looks at the different storage options. And then essentially we put it together in an architecture, and we optimize and tune. And then we produce blogs, we produce solution briefs. And not only that, we can help you do proofs of concept in our HPC and AI Innovation Lab to give even further value to our customers. Got it. What about, we touched on it earlier, Armando, but what about cloud generally, and hybrid cloud specifically, in terms of where these solutions and PowerEdge fit in? Yeah, so we believe it's both, right? Some people want to say, hey, it's one or the other, but we do believe that you have to take a hybrid approach. You know what Shreya said?
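Stepping back to the core-count licensing sweet spot Armando describes here, a minimal sketch of that trade-off follows. All of the runtimes and the per-core license price are invented for illustration; real ISV pricing and scaling curves will differ.

```python
# Toy sketch of the per-core ISV licensing sweet spot. The runtimes and the
# per-core license price are invented for illustration, not real ISV pricing.

LICENSE_COST_PER_CORE = 1_000  # hypothetical annual license cost per core

# Hypothetical wall-clock hours for one fixed job at each core count;
# scaling flattens out past 32 cores in this made-up example.
runtimes_hours = {16: 12.0, 32: 5.5, 64: 4.8, 96: 4.5}

def license_cost_per_throughput(cores, hours):
    """License dollars spent per unit of throughput (jobs per hour)."""
    throughput = 1.0 / hours
    return (cores * LICENSE_COST_PER_CORE) / throughput

sweet_spot = min(runtimes_hours,
                 key=lambda c: license_cost_per_throughput(c, runtimes_hours[c]))
print(f"sweet spot for these numbers: {sweet_spot} cores")
```

With these invented numbers, doubling from 32 to 64 cores shaves little runtime but doubles the license bill, which is exactly the effect described in the interview.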
A lot of times your data scientists will go to the cloud really quickly because they need to test something, and they need to say, hey, is there actually value in this model, or is there actually value in this data? And so they do that quickly in the cloud, but as Shreya said, you start to grow, you start to use more data, and as soon as you start to use more models, well, guess what? That data gravity and that data movement piece starts to cost you a lot of money. So now what you have to do is look at, hey, how do I solve that data gravity and how do I solve those ingress and egress fees? And guess what? It becomes more efficient to run it within your four walls. So what we want to do is, hey, if you ever want to go burst to the cloud again, we want to give you options to do that and enable you to do that. We're not just going to try to lock you into on-premises, because like I said, we believe it's both. So we're going to give our customers flexibility and choice, right? And that's what we do. We have our APEX for HPC offers out there today. We have these architectures that I talked to you about that you run within your four walls. And so we're going to give you both, and what we want to do is give you the same look and feel whether you do it within your four walls or in the cloud. Got it. Last question. Maybe Armando, you could start, and then Shreya, you can bring us home. Give me the pitch. Why Dell? What differentiates your solutions from the competitors in HPC, and how does the company plan to stay ahead of the curve going forward, when the pace of the market is just accelerating? Pitch me. Yeah. So for us, when you look at Dell and the way we approach solutions, we give you that end-to-end point of view so that you can make the most educated decision on what's best for you.
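The data-gravity and egress-fee point above can be made concrete with a toy cost model. Every price in it is hypothetical; real cloud egress rates and amortized on-prem costs vary widely.

```python
# Toy cost model for the cloud-vs-on-prem "data gravity" decision.
# The egress price and monthly costs are invented for illustration only.

EGRESS_PER_GB = 0.09                  # hypothetical cloud egress price, $/GB
CLOUD_COMPUTE_PER_MONTH = 8_000       # hypothetical cloud compute spend, $/month
ONPREM_AMORTIZED_PER_MONTH = 12_000   # hypothetical amortized on-prem cost, $/month

def monthly_cost_cloud(egress_gb_per_month):
    """Cloud bill grows linearly with data moved out of the cloud."""
    return CLOUD_COMPUTE_PER_MONTH + EGRESS_PER_GB * egress_gb_per_month

def breakeven_egress_gb():
    """Monthly egress volume at which on-prem becomes the cheaper option."""
    return (ONPREM_AMORTIZED_PER_MONTH - CLOUD_COMPUTE_PER_MONTH) / EGRESS_PER_GB

print(f"break-even at ~{breakeven_egress_gb():,.0f} GB/month of egress")
```

The sketch captures the dynamic in the interview: compute-only comparisons can favor cloud, but once data movement grows past a break-even volume, the egress fees tip the economics toward running within your four walls.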
And then not only that, we have invested millions and millions of dollars in our HPC and AI Innovation Lab, where we can go execute and help you do proofs of concept and show you the value before you go and invest all those dollars. And here's the beautiful thing: when we go and execute these POCs, we work with our customers, we design it, and guess what? We don't charge you to do this, because we believe that if we can democratize HPC and we can lower the barrier of entry, more customers will jump into high performance computing, and it'll be better for all of us. So when you look at why we're differentiated, it's our experts, it's our HPC and AI Innovation Lab, and it's the way we build our Dell Validated Designs to give you a more vertical-market approach and tailor it specifically to help you solve the problem for your use case. That's a nice freebie. All right, Shreya, we'll give you the last word here. I think Armando did a fantastic job. What I'd add to that is, there are two points, the way I think about it. It starts with the architectural design. As much as everybody believes that all infrastructure is created equal, it is not. And so there's all the work that we do in our labs to make sure you're getting that maximum performance. And then, as we think about the lifecycle, you have to deploy these massive systems. How do you go do that? And so we have our deployment and support services. But on top of that, if you're at a point in that journey where you're not capable, for example, of liquid cooling, we have our modular data center team that can actually take these deployments, rack and stack them, and just drop them on your site. And all you have to do is cable up your power, and we take care of all the other requirements. So that's one piece of it. Then there's the systems management: how do you keep it together? How do you keep upgrading and maintaining it? That's another really big piece that Dell focuses on. So that's one part of it.
But the second piece, which Armando touched on, is that it's the people at Dell that really make this special, that make it come together. The HPC and AI innovation team is one example. But as we think about how we keep the knowledge base together, we have a really big HPC community as well that we focus on and bring together. Armando, maybe you want to talk about that a little bit. Yeah, so the other big thing we do is our HPC community. I don't know if you had some time today, but we run it every Wednesday. Today we talked about MPI, where it's going and where it needs to go in the future. But with our HPC community, really what it is, it's about the community: it's about our partners, it's about us, it's about our customers coming together so that we share knowledge, we share what we're doing with our specific research. And then, like I said, we educate each other so we can raise all boats. So thank you, Shreya, for reminding me about that and tying in the HPC community event. She gave me another freebie. But yes, that's huge with us. We do want to make sure that we build a community, that we share our expertise, and that we make everybody better. And like I said: democratize, optimize, and advance. That's our strategy, and that's core to what we do with the community. This is cool. So it's dellhpc.org, I'm just on the website now. And you can request to join, and you guys run these sessions every Wednesday, right? Yeah, it's every Wednesday. And if you're at ISC next week, we'll be there. We'll have a live event, so please join us. We'd love to have you. That looks great. In addition to that, Dell Technologies World will also be going on at the same time as ISC, so look out for some very exciting announcements on the AI solutions side of the house as well. Excellent.
Okay, folks, thanks so much for your time. Really appreciate it. Great to have you again. Thank you very much. We appreciate it. All right, keep it right there for more innovations in HPC. You're watching theCUBE's coverage of ISC 23.