Welcome back to theCUBE's coverage of Supercomputing Conference 2022. We're here at day three, covering the amazing events that are occurring here. I'm Dave Nicholson, I'm with my co-host, Paul Gillin. How's it going, Paul? Fine, Dave. It's winding down here, but still plenty of action. Interesting stuff. We had a full day of coverage, and we have a really interesting conversation as we wrap things up at Supercomputing 22 here in Dallas. I've got two very special guests with me, Scott from Intel and David from Dell, to talk about, yes, supercomputing, but guess what, we've got some really cool stuff coming up after this whole thing wraps. So not all of the holiday gifts have been unwrapped yet, kids. Welcome, gentlemen. Thanks so much for having us. Thanks for having us. So let's start with you, David. First of all, explain the relationship in general between Dell and Intel. Sure. Intel has been an outstanding partner. We've built some great solutions over the years, and I think the market reflects that. Our customers tell us that; the feedback is strong. The products you see out here this week at Supercomputing put that on display for everybody to see. And as we think about AI and machine learning, there are so many different directions we need to go to help our customers deliver AI outcomes. We recognize that AI has spread outside the confines of everything we've seen here this week. Now we've got really accessible AI use cases that we can explain to friends and family: we can talk about going into retail environments and how AI is being used to track inventory, to monitor traffic, and so on. But what that means to us as a bunch of hardware folks is that we have to deliver the right platforms and the right designs for a variety of environments, both inside and outside the data center. So if you look at our portfolio, we have some great products here this week, but we also have other platforms, like the XR4000, our shortest rack server ever, which is designed to go into edge environments but is also built for those edge AI use cases. It supports GPUs, and it supports AI on the CPU as well. There are a lot of really compelling platforms that we've been talking about, and they're going to enable our customers to deliver AI in a variety of ways. Now, you mentioned AI on the CPU. Maybe this is a question for Scott: what does that mean, AI on the CPU? Well, as David was saying, we're seeing an explosion of different use cases, some on the edge, some in the cloud, some on-prem. Within those individual deployments, there are often different ways you can do AI, whether that's training or inference. And what we're seeing is that a lot of the time, memory locality matters quite a bit. You don't necessarily want to pay the cost of going across the PCI Express bus, especially with some of our newer products like the Xeon CPU Max Series, where you can have a huge amount of high-bandwidth memory sitting right on the CPU. Things that traditionally would have been accelerator-only can now live on a CPU. That includes the inference side, where we're seeing some really great things with images: you might have a giant medical image that you need to run extremely high-resolution inference on, or text, where you might have a huge corpus of extremely sparse text that you need to sample randomly and efficiently.
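To make the AI-on-the-CPU idea concrete, here is a minimal, hedged sketch in Python, assuming PyTorch and torchvision are installed; the model and the image size are illustrative choices, not a reference to any specific Intel or Dell workload. The point is that a very large input and the model weights both stay in system memory, so no tensor ever crosses the PCIe bus to a discrete accelerator:

```python
# A minimal sketch of "AI on the CPU": model and input stay in host RAM,
# so there is no device transfer. Assumes PyTorch + torchvision.
import torch
import torchvision.models as models

model = models.resnet50(weights=None).eval()  # untrained weights, for brevity

# A large, high-resolution image batch held entirely in system memory.
big_image = torch.randn(1, 3, 4096, 4096)

with torch.inference_mode():
    out = model(big_image)  # runs on CPU; no .to("cuda") copy needed

print(out.shape)  # torch.Size([1, 1000])
```

On a CPU with on-package high-bandwidth memory, a working set like this can sit close to the cores, which is the locality argument being made above.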
So how are these needs influencing the evolution of Intel CPU architectures? We're talking to our customers and our partners, and this presents both an opportunity and a challenge, with all of these different places you can put these great products, as well as applications. So we're very thoughtfully trying to go to the market, see where their needs are, and then meet those needs. This industry obviously has a lot of great players in it, and it's no longer the case that if you build it, they will come. So what we're doing is finding where the choke points are and how we can make the biggest difference. Sometimes the generational leaps, and I know David can speak to this, can be huge from one system to the next, just because everything is accelerated: the software side, the hardware side, and the platforms themselves. That's right, and we're really excited about that leap. Take what Scott just described: our team, together with Scott's team, has been writing white papers and talking about those types of use cases, doing large image analysis while leveraging system memory and the CPU. We've been talking about that for several generations now, going back to Cascade Lake, back to what we would call 14th-generation PowerEdge. And now, as we continue to unveil and launch more products, we're in launch season, right? You and I were talking about how we're in launch season. The performance improvements are just going to be outstanding, and we'll continue that evolution Scott described. I'd like to applaud Dell for a moment for its restraint, because I know you could have come in and taken all of the space in the convention center to show everything that you do in the HPC space. Now, vying for first place among the worst-kept secrets on earth is the fact that there is a new Mission Impossible movie coming. And there's also new stuff coming from Intel. I think, allegedly, we're getting close. What can you share with us on that front? I appreciate that you can't share a ton of specifics, but where are we going? Well, David just alluded to it. As David said, we've been working on some of these things for many years, and the momentum is continuing to build, both with respect to some of our hardware investments, we've unveiled some things here on the CPU side and the accelerator side, but also on the software side. oneAPI is gathering more and more traction, and the ecosystem is continuing to blossom. Some of our AI and HPC workloads, and the combination thereof, are becoming more and more viable, as well as displacing traditional approaches to some of these problems. And it's the type of thing that isn't linear; it all builds on itself. We've seen some of the investments that we've made over the better part of a decade starting to bear fruit, but it's not a one-time thing. It's going to continue to roll out, and we're going to be seeing more and more of this. So I want to follow up on something that you mentioned. I don't know if you've ever heard that Charlie Brown line, that sometimes the most discouraging thing can be to have immense potential. Yeah. Because between Dell and Intel, you offer so many different versions of things from a fit-for-function perspective.
As a practical matter, how do you work with customers, and maybe this is a question for you, David, to figure out what the right fit is? I'll give you a great example. Just this week, in customer conversations, we can put it in terms of kilowatts per rack: how many kilowatts are you delivering at the rack level inside your data center? I've had answers anywhere from five all the way up to 90. There are some that have been a bit higher, cases those customers probably don't want to talk about, customers we're meeting with very privately, but the range is really, really large, and there's a variety of environments. Customers might be ready for liquid cooling today, or they may not be; they may want to maximize air cooling. Those are the conversations. And then, of course, it all maps back to the workloads they wish to enable. AI is an extremely overloaded term; we don't have enough time to talk about all the different things that tuck under that umbrella. But for the workloads and the outcomes they wish to enable, we have the right solutions. And then we take it a step further by considering where they are today and where they need to go. I love that five-to-90 example, because not every customer has an identical cookie-cutter environment, so we've got to have the right platforms and the right solutions for the right workloads, for the right environments. I'd like to dive in on this power issue to give people who are watching an idea, because we say five kilowatts, 90 kilowatts, and people wonder what that actually means. To translate it into EV terms, 90 kilowatts is more than 100 horsepower. It's a massive amount of power. A hair dryer is around a kilowatt, 1,000 watts. But the point is, 90 kilowatts in a rack, that's insane. It's absolutely insane. The heat that generates has got to be insane. And so it's important. It's several houses' worth of power in the size of a closet. Exactly, exactly. Yeah, a rack, I explain to people, is about the size of a refrigerator. So in the arena of thermals, is that something that's taken into consideration during the development of next-gen architectures? Or is it just a race to die size? Well, you definitely have to take thermals into account, as well as the power consumption itself. People are looking at their total cost of ownership. They're looking at sustainability. And at the end of the day, they need to solve a problem, but there are many paths up that mountain, and it's about choosing the right path. We've talked about this before, about having extremely thoughtful partners. We're not just going to combinatorially try every single solution; we're going to find the ones that fit the right mold for that customer. And we're seeing more and more people care about this, more and more people asking: how do I do this in the most sustainable way? How do I do this in the most reliable way, given fluctuations in my power consumption or my power pricing? We're developing more software tools, and obviously partnering with great partners, to make sure we do this in the most thoughtful way possible.
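The rack-power figures quoted above are easy to sanity-check with a little arithmetic. A quick sketch in Python, where the conversion constants are standard and the average-home draw is a rough assumption used purely for illustration:

```python
# Back-of-the-envelope numbers behind the 5-to-90 kW-per-rack range.
WATTS_PER_HP = 745.7    # one mechanical horsepower
HAIR_DRYER_W = 1_000    # ~1 kW, as mentioned above
AVG_HOME_W = 1_200      # assumed average continuous household draw

rack_w = 90_000                          # a 90 kW rack
print(rack_w / WATTS_PER_HP)             # ~120.7 hp: "more than 100 horsepower"
print(rack_w / HAIR_DRYER_W)             # 90 hair dryers running at once
print(rack_w / AVG_HOME_W)               # ~75 homes' worth of continuous power
```

Which is why a 90 kW rack, a box roughly the size of a refrigerator, is genuinely "several houses in the size of a closet" from a power and cooling standpoint.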
Intel made a big investment by buying Habana Labs for its acceleration technology. They're based in Israel; you're based on the West Coast. How are you coordinating with them? How will the Habana technology work its way into more mainstream Intel products? And how would Dell integrate those into your servers? Great question, I guess I can kick this off. So Habana is part of the Intel family now; they've been integrated in. It's been a great journey with them, as some of their products have launched on AWS and they've had some very good wins on MLPerf and things like that. I think it's about finding the right tool for the job. Not every problem is a nail, so you need more than just a hammer. We have the Xeon series, which is incredibly flexible and can do so many different things; that's what we've come to know and love. On the other end of the spectrum, we have some of these more deep-learning-focused accelerators, and if that's your problem, you can solve it in incredibly efficient ways. Accelerators like our GPUs are somewhere in the middle, so you get that Goldilocks zone of flexibility and power. Depending on your use case, depending on what you know your workloads are going to be day in and day out, one of these solutions might work better for you, or a combination might. Hybrid compute starts to become really interesting: maybe you have something you need 24/7, but you only need to burst for certain things. There are a lot of different options. It's a portfolio approach. Exactly. And here's what I love about the work Scott's team is doing: customers have told us this week in our meetings that they do not want to spend their developers' time porting code from one step to the next. They want that flexibility of choice. Everyone does; we want it in our everyday lives. They need that flexibility of choice, but there's also an opportunity cost when their developers have to choose between porting code from one stack to another and spending time improving algorithms, doing things that actually generate meaningful outcomes for their business or their research. So they are desperately searching, I would say, for that solution and for help in that area, and that's what we're working to enable. And this is what I love about oneAPI, our software stack. It's open first, heterogeneous first. You can take SYCL code, and it can run on competitors' hardware or on Intel hardware. It's one of these things where you have to believe that, long term, the future is open. Walled gardens, the walls eventually crumble. And we're continuing to invest in that ecosystem to make sure that the developer, at the end of the day, gets what they need, which is solving their business problem, not tinkering with our drivers. I actually saw an interesting announcement that I hadn't been tracking: chiplets, and the idea of an open standard where competitors of Intel, from a silicon perspective, can have their chips integrated via a universal standard. And basically you had the top three silicon vendors saying, yeah, absolutely, let's work together. Cats and dogs. Exactly, but at the end of the day, it's whatever menagerie solves the problem. Right, right, exactly. And of course, Dell can solve it from any angle. Yeah, we need strong partners to build the platforms to actually do it. At the end of the day, silicon without software is just sand, and software without silicon is poorly written prose, but without an actual platform to put it on, it's nothing. It's a box that sits in the corner.
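The "flexibility of choice without porting" point is concrete enough to sketch. oneAPI's portability story is SYCL at the C++ level, but the same write-once idea shows up at the framework level; here is a hedged Python sketch using PyTorch device selection, where the `xpu` backend (Intel GPU support) is an assumption about the installed build rather than something guaranteed everywhere:

```python
import torch

# Pick whatever accelerator is present, falling back to the CPU.
# "xpu" assumes a PyTorch build with Intel GPU support; the point is
# that the model code below never changes across backends.
if torch.cuda.is_available():
    device = "cuda"
elif getattr(torch, "xpu", None) is not None and torch.xpu.is_available():
    device = "xpu"
else:
    device = "cpu"

model = torch.nn.Linear(1024, 1024).to(device)
x = torch.randn(8, 1024, device=device)
y = model(x)                      # identical code on every backend
print(device, y.shape)
```

That is the developer-time argument in miniature: the hardware choice becomes a configuration detail rather than a porting project.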
David, you mentioned that 90% of PowerEdge servers now support GPUs. So how is the growing demand for high-performance computing influencing the evolution of your server architecture? Great question, and in a couple of ways. I would say 90% of our platforms support GPUs, and 100% of our platforms support AI use cases. And it goes back to the AI-on-the-CPU point. As we look at how we deliver different form factors for customers, we go back to that power range I mentioned this week: how do we enable the right air-cooling solutions, how do we deliver the right liquid-cooling solutions, so that wherever the customer is in their environment, and whatever footprint they have, we're ready to meet it. That's something you'll see as we go into the second half of launch season and continue rolling out products. You're going to see some very compelling solutions, not just in air cooling but in liquid cooling as well. Care to be more specific? We can't unveil everything just yet, but we have a lot of great stuff coming up in the next few months. It's kind of like being at a great restaurant when they offer you dessert, and you're like, yeah, dessert would be great, but I just can't take any more. It's a multi-course meal at this point. Well, as we wrap, I've got one more question for each of you, the same question for each of you. When you think about high-performance computing, supercomputing, all of the things you're doing in your partnership, driving artificial intelligence at the tip of the spear, what kinds of insights are you looking forward to us being able to gain from this technology? In other words, what do you think is cool out there from an AI perspective? What problems do you think we can solve in the near future? What problems would you like to solve? What gets you out of bed in the morning? Because it's not the bits and the bobs and the speeds and the feeds; it's what we're going to do with them. So what do you think, David? I'll give you an example, and I saw some of my colleagues talk about this earlier in the week. For me, it's what we were able to do over the past two years to enable our customers in a quarantined, pandemic environment: we were delivering platforms and solutions to help them do their jobs and carry on with their lives. And that's just one example. If I were to map that forward, it's about enabling human progress. If you had asked a version of me from 20 years ago to imagine some of these things, I don't know what kind of answer you would have gotten. So mapping forward a decade or two, I go back to that example: we did great things in the past couple of years to enable our customers; just imagine what we're going to be able to do going forward to enable that human progress. There are great use cases, great image analysis. Some of the images Scott was referring to had to do with taking CAT scan images and being able to scan them for tumors and other things in the healthcare industry. That's the kind of thing that feels good when you get out of bed in the morning, knowing that you're enabling that type of progress. Scott, quick thoughts? Yeah, and I'll echo that.
It's not one specific use case; it's really this wavefront of all of these use cases, from the very micro, developing the next drug or finding the next battery technology, all the way up to the macro, trying to have an impact on climate change or even understanding the origins of the universe itself. All of these fields are seeing massive gains from the software, the hardware, and the platforms that we're bringing to bear on these problems. And at the end of the day, humanity is going to be fundamentally transformed by the computation that we're launching and working on today. Fantastic, fantastic. Thank you, gentlemen. You heard it here first: Intel and Dell just committed to solving the secrets of the universe by New Year's Eve 2023. Well, let's give this a little time. At the next Supercomputing, SC23, we'll come back and see what problems have been solved. You heard it here first on theCUBE, folks: by SC23, Dell and Intel are going to reveal the secrets of the universe. From here at SC22, I'd like to thank you for joining our conversation. I'm Dave Nicholson with my co-host Paul Gillin. Stay tuned to theCUBE's coverage of Supercomputing Conference 2022. We'll be back after a short break.