Good afternoon everyone and welcome back to Supercomputing. We're live here with theCUBE in Dallas. I'm joined by my co-host David. Wonderful to be sharing the afternoon with you. And we are going to be kicking things off with a very thrilling discussion from two important thought leaders at Dell. Bhavesh and Shreya, thank you so much for being on the show. Welcome, how are you doing? How does it feel to be at Supercomputing? Pretty good. We're really enjoying the show, and a lot of customer conversations are ongoing. Are most of your customers here? Yes, most of our customers are here on the floor, and a lot of discussions are ongoing. Yeah, must be nice to see everybody show up. Are you enjoying the show so far, Shreya? Yeah, I missed this for two years, so it's nice to be back and meeting people in person. Yeah, definitely, we all missed it. So it's been a very exciting week for Dell. Do you want to talk about what you're most excited about in the announcement portfolio that we saw yesterday? Absolutely. Go for it, Shreya. Yeah, so before we get into the portfolio side of the house, we really wanted to share our thoughts on what is moving HPC and Supercomputing. Talk trends. For a long time, HPC and Supercomputing have been driven by packing the racks and maximizing performance. And in the work that Bhavesh and I have been doing over the last couple of generations, we're seeing an emerging trend: the thermally dissipated power is actually exploding. And so the idea of packing the racks is now turning into how you maximize your performance while still being able to deliver the infrastructure within the limited kilowatts per rack that you have in your data center. So it's been interesting walking around the show, seeing how many businesses associated with cooling gear are here. And it's funny to see, they open up the cabinet and it's almost 19th-century-looking technology.
It's pipes and pumps. And very industrial looking. Yeah, and I think that's where the trends are, more in the power and cooling. That is what everybody is trying to solve from an industry perspective. And when we looked at our portfolio and what we wanted to bring out in this timeframe, targeting the HPC and AI space, there were a couple of vectors we had to look at. We had to look at cooling. We had to look at power, where the trends are happening. We had to look at what data center needs are showing up, be it in the colo space, be it in the HPC space, be it in the large installs happening out there. So looking at those trends, and then factoring in how you build a node out, we said, okay, we need to diversify and build out an infrastructure. And that's what Shreya and I looked into: not only the silicon diversity showing up, but also, okay, there is this power, there is this cooling, there is silicon diversity; now how do you start packing it up and bringing it to the marketplace? So those are some of the trends that we captured, and that's what you see on the exhibit floor today, even, right? And Dell Technologies supports both liquid cooling and air cooling. Do you have a preference? Is it more just customer based? It is going to be, and Shreya can speak to it, more workload and application focused. That is what we want to be thinking about, and it's not going to be siloed into, okay, we're just going to target air cooling. We wanted to target a breadth from air to liquid, and that's how we built out our portfolio when we looked at our GPUs. To add to that, if we look at our customer landscape, we see that there's a peak between 35 and 45 kilowatts per rack, we see another peak at 60, we see another peak at 80, and we've got select, very specialized customers above 100 kilowatts per rack. And so if we take that 35 to 45 kilowatts per rack, you can pack maybe three or four of these chassis, right?
And so, to what Bhavesh is saying, we're really trying to provide the flexibility for what our customers can deliver in their data centers, whether it be at the 35-kilowatt end, where air cooling may make complete sense, or above 45, where maybe that's the time to pivot to a liquid-cooled solution. So you said that there are situations where you could have 90 kilowatts being consumed by a rack of equipment. So I live in California, where we are very, very closely attuned to things like the price of a kilowatt hour of electricity. Seriously. And I'm kind of an electric car nerd. So for the folks who really aren't as attuned, 90 kilowatts, that's like over 100 horsepower. So think about 100 horsepower worth of energy being used for compute in one of these racks. It's insane. So a layperson can kind of imagine the variables that go into this equation of how we bring the power in and get the maximum bang per kilowatt hour. But are there any kind of interesting, odd twists in your equations that you find when you're trying to figure that out? Yeah, when we look at a lot of these trends, we think about it more from the power density that we want to try to solve for. We're mindful, from an energy perspective, of where energy prices are moving. So what we do is try to optimize right at the node level, in how we are going to do our liquid-cooled and air-cooled infrastructure. So it's about how you keep a balance. That's what we are thinking about: not delivering or consuming power that is maybe not needed for that particular node itself. The other way we optimize when we build this infrastructure out is we're thinking about, okay, how are we going to deliver it at the rack level, keeping in mind how this liquid cooling plumbing will happen. Where is it coming into the data center?
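The figures in this exchange can be sanity-checked with a quick sketch. This is illustrative only: the 45 kW air-to-liquid pivot is taken loosely from the conversation, and the function names are made up here, not official Dell guidance.

```python
# Illustrative sketch of the rack-power discussion above.
# Threshold loosely follows the interview (air cooling viable in the
# 35-45 kW band, pivot to liquid above that); not a Dell specification.

KW_PER_HP = 0.7457  # 1 mechanical horsepower is about 745.7 watts

def kw_to_horsepower(kw: float) -> float:
    """Convert kilowatts to mechanical horsepower."""
    return kw / KW_PER_HP

def suggested_cooling(rack_kw: float) -> str:
    """Rough cooling suggestion based on rack power density."""
    if rack_kw <= 45:
        return "air cooling is typically viable"
    return "consider liquid cooling"

print(f"90 kW is about {kw_to_horsepower(90):.0f} hp")  # ≈ 121 hp
print(suggested_cooling(40))
print(suggested_cooling(80))
```

This bears out the host's point: a 90 kW rack really does draw well over 100 horsepower worth of power, continuously.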
Is it coming in through the bottom of the floor? Are we going to do it on the left-hand side of your rack or the right-hand side? It's a big thing. You'd think it doesn't matter which side you put it on, but there is a piece of that going into our decision on how we are going to build that node out. So there are multiple factors coming in. And besides the power and cooling, which we all touched upon, what you and I also look at is where this whole GPU and accelerator space is moving. So we're not just looking at the current set of GPUs and where they're moving from a power perspective. We're looking at this whole silicon diversity that is happening out there. So we've been looking at multiple accelerators from multiple companies out there. And we can tell you there are 30 to 50 silicon companies out there that we are actually engaged with and looking into. So our decision in building this particular portfolio out was being mindful about what the maturity curve is from a software point of view, from a hardware point of view, and whether we can deliver what the customer really needs, yeah. It's a balancing act. It is a balancing act. Let's stay in that zone a little bit. What other trends, Shreya, let's go to you on this one. What other trends are you seeing in the acceleration landscape? Yeah, I think to your point, the balancing act is actually a very interesting paradigm. One of the things that Bhavesh and I constantly think about, and we call it the Goldilocks syndrome, is at that 90 and 100, right? Density matters. A lot. But what we've done is we have really figured out what that optimal point is, because we don't want to be the thinnest possible. You lose a lot of power redundancy. You lose a lot of IO capability. You lose a lot of storage capability. And so from our portfolio perspective, we've really tried to think about the Goldilocks syndrome and where that sweet spot is. I love that.
I love the thought of you all just standing around server racks, having a little bit of porridge and determining exactly the thickness that you want in terms of the density trade-off there. Yeah, I love that though. I mean, it's very digestible. Are you seeing anything else? No, I think Shreya pretty much summed up what we are thinking about: where the technology features are moving and what that means for our portfolio. So, yeah. So just a lesson for us, Shreya, a rudimentary lesson. You put power into a CPU or a GPU and you're getting something out. And a lot of what we get out is heat. Is there an objective measure of efficiency in these devices that we look at? Because you could think of a 100-watt light bulb. An incandescent light bulb is going to give out a certain amount of light and a certain amount of heat. A 100-watt-equivalent LED, in terms of the lumens it's putting out, gives a lot more light for the power going in and a lot less heat. We have LED lights around us, thankfully, instead of incandescent lights. But when you put power into a CPU or a GPU, how do you measure that efficiency? Because it's sort of funny, it's not moving. So it's not like putting power into a vehicle and measuring forward motion and heat. You're measuring this sort of esoteric thing, this processing thing that you can't see or touch. So how much do you get per watt of power, and how do you measure it, I guess? Help us out, from the base up. Because most people have never been in a data center before. Maybe they've put their hand behind the fan in a personal computer, or they've had a laptop feel warm on their lap. But we're talking about massive amounts of heat being generated. Can you kind of explain the fundamentals of that? So the way we think about it is, there's a performance per dollar metric.
There's a performance per dollar per watt metric, and that's where the power comes in. But on the flip side, we have something called PUE, power usage effectiveness, from a data center aspect. And so we try to marry up those concepts together and really try to find that sweet spot. Is there anything in the way of harvesting that heat to do other worthwhile work? I mean, you know, it's like, hey, everybody that works in the data center, you all have your own personal shower now. Water heated. Re-circulating too. Courtesy of Intel and AMD. Yeah, yeah, yeah, you're so cool. I like it. So that's the circulation, or recycling, of that thermal heat that you're talking about, absolutely. And we see that our customers in the Europe region are actually a lot more advanced in terms of taking that power and doing something valuable with it, right? Cooking croissants and making lattes, probably, right? Or heating your home. Or heating the pool. Croissants, that would be a good use. It's more on the PUE aspect of it. It's more thinking about how we are more energy efficient in our design, even. So we're thinking about what's the best efficiency we can get, and what's the amount of heat capture we can get. How do we avoid just wasting any heat out there? That's always the goal when designing these platforms. So that's something that we kept in mind with a lot of our power and cooling experts within Dell: thinking about how much heat we can capture, and if we are not capturing anything, then what are we re-circulating back in order to get much better efficiency, when we think about it at the rack level and for the other equipment out there, which is going to be purely air-cooled, and what can we do about it? So. Do you think both of these technologies are going to continue to work in tandem, air cooling and liquid cooling?
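The two metrics named in this answer can be written down directly. A minimal sketch, with hypothetical example numbers that are not from the interview:

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness: total facility power divided by the
    power that actually reaches the IT equipment. 1.0 is the theoretical
    ideal, where no power is spent on cooling or distribution losses."""
    return total_facility_kw / it_equipment_kw

def perf_per_dollar_per_watt(benchmark_score: float,
                             cost_dollars: float,
                             watts: float) -> float:
    """The performance-per-dollar-per-watt metric mentioned above:
    higher is better for a fixed budget and power envelope."""
    return benchmark_score / (cost_dollars * watts)

# Hypothetical facility: 1,200 kW drawn from the grid, 1,000 kW IT load.
print(f"PUE: {pue(1200, 1000):.2f}")  # prints "PUE: 1.20"
```

The "marrying up" the guests describe is that a node can win on performance per dollar per watt yet still lose at the facility level if the data center's PUE is poor, which is why heat capture and recirculation enter the design conversation.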
Yeah, when we think about our portfolio and where we see the trends moving in the future, I think air cooling is definitely going to be there. There will be a huge amount of usage from customers looking into air cooling. Air cooling is not going to go away. Liquid cooling is definitely something that a lot of customers are looking into adopting. PUE becomes a bigger factor for it. How much heat can I capture with it? That's a bigger equation that is coming into the picture, and that's where we said, okay, we have a transition happening, and that's what you see in our portfolio now. Yeah, Intel is, excuse me, Dell is agnostic when it comes to things like Intel, AMD, Broadcom, Nvidia, so you can look at this landscape and, I think, make a fair judgment. When we talk about GPU versus CPU in terms of efficiency, do you see that as something that will live on into the future for some applications, meaning, look, GPU is the answer? Or is it simply a question of leveraging what we think of as CPU cores differently? Is this going to ebb and flow back and forth, Shreya? Are things going to change? Because right now, a lot of what's been announced recently in the high performance compute area leverages GPUs, but we're right in the season of AMD and Intel coming out with next-gen processor architectures. Great point, yeah. Any thoughts? Yeah, so what I'll tell you is that it is all application dependent. If you rewind a couple of generations, you'll see that the journey for GPUs had just started, right? And so there is a minimum threshold ROI that customers have to realize in order to move their workloads from CPU-based to GPU-based. As the technology evolves and matures, you'll have more and more applications that will fit within that bucket. Does that mean that everything will fit in that bucket? I don't believe so. But the technology will continue to mature on the CPU side, and also on the GPU side.
And so depending on where the customer is in their journey, it's the same for air versus liquid. Liquid is not an if, but a when: when the data center environment is ready to support it, and when you have the ROI that goes with it, is when it makes sense to transition one way or the other. That's awesome. All right, last question for you both, in a succinct phrase, if possible. I won't character count. What do you hope we get to talk about next year when we have you back on theCUBE? Shreya, we'll start with you. That's a good one. I'm going to let Bhavesh go first. Go for it. What do you think, Bhavesh? Next year, I think, because I'm in the CTO group, you'll see us talking more about where cache coherency is moving. So I'll just leave it at that, and we'll talk about it. All right, very tantalizing. I was going to say, a little window in there, yep. And to add to that, I'm excited to see what the future holds with CPUs, GPUs, SmartNICs and the integration of these technologies, where that's all headed, and how that ultimately helps, you know, our customers solve these really, really large and complex problems. The problems our globe faces. Wow, well, it was absolutely fantastic to have you both on the show. Time just flew. David, wonderful questions. As always, thank you all for tuning in to theCUBE, here live from Dallas, where we are broadcasting all about supercomputing, high performance computing, and everything that a hardware nerd like me loves. My name is Savannah Peterson, we'll see you again soon.