Good afternoon, friends, and welcome back to Supercomputing. We're live here at theCUBE in Dallas. I'm joined by my co-host, David. My name is Savannah Peterson, and our fabulous guest, I feel like this is almost his show, to a degree, given his role at Dell. He is the vice president of HPC over at Dell. Rajesh Pohani, thank you so much for being on the show with us. How are you doing?

Thank you, guys. I'm doing okay. It's good to be back in person. It's a great show. It's really filled in nicely today, and there's a lot of great stuff happening.

It's great to be around all of our fellow hardware nerds. The Dell portfolio grew by three products, I believe. Can you give us a bit of an intro on that?

Sure. Yesterday afternoon and yesterday evening, we had a series of events announcing our new artificial intelligence portfolio, which will really help scale where I think the world is going in the future, with the creation of all this data and what we can do with it. So yeah, it was an exciting day for us. We had a session over in a ballroom where we did a product announcement, and then in the evening we had an unveiling at our booth here at the Supercomputing Conference, which was pretty eventful. Cupcakes, champagne, drinks, and most importantly...

Sounds like a good time. Should we get the invite?

Most importantly, some really cool new servers for our customers.

Well, tell us about them. What's new? What's in the news?

Well, as you think about artificial intelligence, what customers need to do, and the way artificial intelligence is going to change how, frankly, the world works, we have now designed and developed new purpose-built servers for a variety of AI needs. We launched our first eight-way NVIDIA H100 and A100 SXM product yesterday.
We launched a four-way H100 product yesterday, and a 2U, fully liquid-cooled Intel Data Center Max GPU server as well. So it's a full range of portfolio for a variety of customer needs. Depending on their use cases, what they're trying to do, and their infrastructure, we're now able to provide servers and hardware that meet those needs and those use cases.

So I want to double-click. You just said something interesting: liquid-cooled. At what point do you need to move in the direction of liquid cooling? I know you mentioned GPU-centric, but talk about the balance between density and what you can achieve with the power that's going into the system.

It all depends on what the customers are trying to accommodate, right? There's a dichotomy now between customers who already have, or are planning, liquid-cooled infrastructure and power distribution to the rack, and those who don't. Take those two together: if you have the power distribution to the rack, you want to take advantage of the density. To take advantage of the density, you need to be able to cool the servers, and that's where liquid cooling comes into play. Other customers either don't have the power to the rack or aren't ready for liquid cooling, and at that point they can't take advantage of the density. So there's this dichotomy in products, and that's why we've got our XE9640, which is a 2U dense liquid-cooled server, and our XE8640, which is 4U air-cooled, or liquid-assisted air-cooled, right? So depending on where you are on your journey, whether it's power infrastructure or liquid-cooling infrastructure, we've got the right solution that meets your needs. You don't have to take on the density, or the expense of liquid cooling, unless you're ready to do that.
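The rule Pohani describes, power to the rack plus liquid-cooling readiness determines whether the dense 2U chassis or the 4U air-cooled chassis is the right fit, can be sketched as a tiny decision function. This is a hypothetical illustration only; the model names come from the interview, but the encoding of the rule is an assumption, not Dell sizing guidance.

```python
def pick_chassis(has_rack_power: bool, has_liquid_cooling: bool) -> str:
    """Sketch of the chassis dichotomy described in the interview.

    If the datacenter can deliver the power to the rack AND supports
    liquid cooling, the dense liquid-cooled option (XE9640-class, 2U)
    applies; otherwise the air-cooled / liquid-assisted air-cooled
    option (XE8640-class, 4U) is the fit.
    """
    if has_rack_power and has_liquid_cooling:
        return "2U liquid-cooled (XE9640-class)"
    return "4U air-cooled (XE8640-class)"
```

Both inputs have to be true to justify the density; missing either one pushes the choice back to the air-cooled chassis, which matches the dichotomy as stated.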
Otherwise, we've got this other option for you. And that's really the dichotomy that's beginning to exist in our customers' infrastructures today.

I was curious about that. Is there a category or a vertical that's more in the liquid-cooling zone because density is a priority?

Yeah. You've got your large HPC installations, your large clusters, that not only have the power but have the liquid-cooling density built in. You've got federal government installations. You've got financial tech installations. You've got colos that are built for sustainability, density, and space that can also take advantage of it. Then you've got others, more enterprises, more in the mainstream of what they do, where they're not ready for that. So it just depends on the scale of the customer we're talking about, what they're trying to do, and where they're doing it.

So we hear "supercomputing," and HPC is sort of the trailing, mini version of supercomputing, in a way, where maybe someone doesn't need two million CPU cores, but maybe they need 100,000 CPU cores. So it's all a matter of scale. Can you identify kind of an HPC sweet spot right now, as Dell customers are adopting the kinds of things you just announced? How big are these clusters at this point?

Well, let me hit something else first. People talk about HPC as something really specific. And what we're seeing now, with the vast amount of data creation, the need for computational analytics, and the need for artificial intelligence, is that HPC is morphing into more and more general customer use cases. Where before you used to think about HPC as research and academics and computational dynamics.
Now there is a significant Venn-diagram overlap with just regular artificial intelligence, right? And that is beginning to change the nature of how we think about HPC. Think about the vast data that's being created. You've got data-driven HPC, where you're running computational analytics on this data to get insights or outcomes or information. It's not just, hey, I'm running physics calculations or astronomical calculations. It is now expanding in a variety of ways, democratizing into customers who wouldn't actually describe themselves as HPC customers. And when you meet with them, it's like, well, yeah, but your compute needs actually look like an HPC customer's. So let's talk to you about these products and these solutions, whether it's software solutions, hardware solutions, or even purpose-built hardware like we talked about. That now becomes the new norm.

Customer feedback and community engagement is big for you. This portfolio of products was developed based on customer feedback, correct?

Yep. Everything we do at Dell is customer-driven. We want to drive customer-driven innovation and customer-driven value to meet our customers' needs. So yeah, we spent a while researching these products and these needs, understanding: is this one product, is it two products, is it three products? Talking to our partners, driving our own innovation and IP, and looking at where they're going with their road maps, to be able to deliver a harmonized solution to customers. So yeah, it was a good amount of customer engagement. I was on the road quite a bit talking to customers. One of our products we almost named after one of our customers, right? I'm like, hey, we've talked about this; this is what you said you wanted.
Now, he was representative of a group of customers, and we validated that with other customers, and it's also a way of making sure he buys it. But it's heavily customer-driven, and understanding those use cases and where they fit drove the various products: in terms of capability, in terms of size, in terms of liquid versus air cooling, in terms of things like the number of PCIe lanes, and what the networking infrastructure was going to look like. All customer-driven, all designed to meet customers where they're going on their artificial intelligence journey.

It feels really collaborative. I mean, you've got both Intel and NVIDIA GPUs in your new products, and there's a lot of collaboration between academics and the private sector. What has you most excited today about supercomputing?

What it's going to enable. If you think about what artificial intelligence is going to enable: faster medical research, right? Genomics. The next pandemic, hopefully not anytime soon, we'll be able to diagnose it and track it so much faster through artificial intelligence. The data that was created in this last one is going to be an amazing source of research for addressing things like that in the future and getting to the heart of the problem faster. If you think about manufacturing and process improvement, you can now simulate your entire manufacturing process. You don't have to run physical pilots; you can simulate it all and get 90% of the way there, which means either a factory process gets reinvented faster or a new factory gets up and running faster. Think about retail, how retail products are laid out. You can use media analytics to track how customers move through the store and what they're buying, and lay things out differently. In the future, you're not going to have people going out to test cell phone reception: can you hear me now, can you hear me now?
You can simulate where customers are, and their patterns, to ensure that the 5G infrastructure is set up to maximum advantage. All of that through digital simulation, digital twins, media analytics, and natural language processing. Customer experience is going to be better, communication is going to be better. All of this, using this data, training on it, and then applying it, is probably what excites me the most about supercomputing and, really, the future of computing.

So on the hardware front, digging down below the covers a little more: Dell has been well known for democratizing things in IT, making them available at a variety of levels, never a one-size-fits-all company. These latest announcements, would it be fair to say they represent sort of the tip of the spear in terms of high performance? What about RPC, regular performance computing? Where's the overlap? Because we're in this season where we've got AMD and Intel leapfrogging one another, new bus architectures, and the connectivity plugged into these things getting faster and faster. So from a Dell perspective, where does my term RPC, regular performance computing, end and HPC begin? Are you seeing people build stuff on kind of general-purpose clusters also?

Well, sure. You can run a good amount of artificial intelligence on high-core-count CPUs without acceleration. You can do it with PCIe accelerators, and then you can do it with some of the very specific high-performance accelerators like the Intel Data Center Max GPUs or NVIDIA's A100 or H100. So there are these scale-up opportunities. If you think about our mission to democratize compute, not just HPC but general compute, it's about making it easier for customers to implement and to get the value out of what they're trying to do. So we focus on that with reference designs, or validated designs, that take out a good amount of the time customers would have to spend doing it on their own.
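The scale-up ladder Pohani lists, high-core-count CPUs with no accelerators, then PCIe accelerator cards, then top-end SXM-class GPUs, can be sketched as a simple tier selector. This is a hypothetical illustration only: the tier names come from the interview, but the workload measure and the thresholds are invented for the sketch, not Dell sizing guidance.

```python
def acceleration_tier(gpu_hours_per_day: float) -> str:
    """Map a rough daily accelerated-compute workload to a hardware tier.

    The three tiers mirror the scale-up options named in the interview;
    the gpu_hours_per_day cutoffs are illustrative assumptions.
    """
    if gpu_hours_per_day < 1:
        return "high-core-count CPUs, no accelerators"
    if gpu_hours_per_day < 24:
        return "PCIe accelerator cards"
    return "SXM-class GPUs (e.g. A100/H100, Data Center Max)"
```

The point of the ladder is that customers don't have to start at the top; light or occasional workloads run fine on CPUs, and the specialized SXM platforms only pay off at sustained scale.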
We can cut six to twelve months off that process for customers. I'll use an HPC example and then come back to your regular performance compute. By us doing the work, determining the configuration, determining the software packages, testing it, and tuning it, by the time it gets to the customer they get to take advantage of the expertise of Dell engineers and Dell's scale, and they're ready to go much faster. The challenge with AI is, as we talk to customers, they all know what it can lead to and the benefits of it; sometimes they just don't know how to start. We are trying to make it easier for customers to start, whether that's with regular, non-optimized, non-specialized compute, or as you move up the value stack in compute capability. Our goal is to make it easier for customers to get on their journey and to get to what they're trying to do faster. So where do I see regular performance compute? They go hand in hand, right? A lot of customers, like we talked about, don't actually think of what they're trying to do as high performance computing. They don't think of themselves as one of those specialized HPC institutions, but they're on this glide path to greater and greater compute needs and compute attributes that merge regular performance computing and high performance computing, to where it's hard to really draw the line, especially when you get to data-driven HPC. Data's everywhere.

And it sounds like a lot of people are very early in this journey, from our conversation with Travis. I mean, five AI programs or fewer per very large company at this point, for 75% of customers. That's pretty wild. You're educating, you're coaching, you're innovating on the hardware front; you're doing everything at Dell. Last question for you: you've been at Dell 24 years.

25 this coming March.
What has a company like that done to retain talent like you for more than two and a half decades?

You know, for me, I'd like to say I've had an atypical journey, but I don't think I have. There has always been opportunity for me. I started off as a quality engineer. A couple of years later, I'm living in Singapore, running services for enterprise and APJ. I come back, spend a couple of years in Austin, then I'm in our Bangalore development center helping set that up. Then I come back, then I'm in our Taiwan development center helping with some of the work out there, and then I come back. There has always been the next opportunity, before I could even think about whether I was ready for the next opportunity. So for me, why would I leave? Why would I do anything different, given that there's always been the next opportunity? The other thing is, jobs are what you make of them, and Dell embraces that. If there's something that needs to be done, or there's an opportunity, or even in the case of our AI/ML portfolio, we saw an opportunity, we reviewed it, we talked about it, and then we went all in. So that innovation, that opportunity, and most of all the people at Dell. I couldn't ask to work with a better set of folks, from the top on down. Simple as that.

So it's culture.

It is culture, really, at the end of the day.

That's fantastic. Rajesh, thank you so much for being here with us.

Thank you, guys. Thanks. Really appreciate it.

Yeah, this was such a pleasure, and thank you for tuning in to theCUBE, live from Dallas here at Supercomputing. My name is Savannah Peterson, and we'll see y'all in just a little bit.