We are back in the final stretch at Supercomputing 22 here in Dallas. I'm your host, Paul Gillin, with my co-host, Dave Nicholson. We've been talking to so many smart people this week; it just boggles my mind. Our next guest, Jay Boisseau, is the HPC and AI technology strategist at Dell. Jay also has a PhD in astronomy from the University of Texas. And I'm guessing you were up watching the Artemis launch the other night.

I really should have been, but I wasn't. I was in full supercomputing conference mode, so that means discussions at various venues with people into the wee hours.

How did you make the transition from a PhD in astronomy to an HPC expert?

It was actually really straightforward. I did theoretical astrophysics, and I was modeling what white dwarfs look like when they accrete matter and then explode as Type Ia supernovae, which is a class of stars that blow up. It's a very important class because they blow up almost exactly the same way. So if you know how bright they are physically, not just how bright they appear in the sky, if you can determine from first principles how bright they are, then you have a standard candle for the universe. When one goes off in a galaxy, you know how far away that galaxy is by how faint it appears. To model these, though, you had to understand the equations of physics, including electron degeneracy pressure as well as normal fluid dynamics, and you're solving for an explosive burning front ripping through something. That required a supercomputer to have anywhere close to the fidelity to get a reasonable answer and hopefully some understanding.

So I've always said electrons are degenerates. I've always said it. And I mentioned to Paul earlier, I said, finally we're going to get a guest who can sort through this whole dark energy, dark matter thing for us.

We'll do that after the segment. That's a whole different thought.

Well, supercomputing being a natural tool that you would use, what do you do in your role as a strategist?

So I'm on the product management team. I spend a lot of time talking to customers about what they want to do next. HPC customers are always trying to be maximally productive with what they've got, but they always want to know what's coming next, because if you think about it, we can't simulate the entire human body cell for cell on any supercomputer today. We can simulate parts of it cell for cell, or the whole body with macroscopic physics, but not the entire organism at the atomic level. So we're always trying to build more powerful computers to solve larger problems with more fidelity and fewer approximations. I help people understand which technologies for their next system might give them the best advance in capabilities for their simulation work, their data analytics work, their AI work, et cetera. Another part of it is talking to our great technology partner ecosystem and learning about which technologies they have, because that feeds the first thing, right? So understanding what's coming, and we're very proud of our large partner ecosystem; we embrace many different partners with different capabilities. Understanding those helps you understand what your future systems might be. Those are the two major roles: strategic customers and strategic technologies.
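To make the standard-candle point above concrete: if you know a Type Ia supernova's intrinsic (absolute) brightness, its apparent faintness gives you its distance through the inverse-square law. Here's a minimal sketch of that arithmetic, assuming the commonly quoted peak absolute magnitude of roughly -19.3 for Type Ia supernovae (an illustrative calibration, not a figure from the interview):

```python
import math

# Assumed peak absolute magnitude for a Type Ia supernova (illustrative value,
# not from the interview; real analyses calibrate this carefully).
M_ABSOLUTE = -19.3

def distance_from_apparent_magnitude(m_apparent: float,
                                      M_absolute: float = M_ABSOLUTE) -> float:
    """Return distance in parsecs from the distance-modulus relation
    m - M = 5 * log10(d / 10 pc)."""
    return 10 ** ((m_apparent - M_absolute + 5) / 5)

# Example: a supernova observed at apparent magnitude 19.3
d_pc = distance_from_apparent_magnitude(19.3)
print(f"{d_pc:.3e} pc  (~{d_pc / 1e6:.0f} Mpc)")  # roughly 5.2e8 pc, i.e. ~525 Mpc
```

The fainter the supernova appears (the larger m), the farther away its host galaxy, which is exactly the "how far away by how faint it appears" relation described above.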
So you've had four days to wander this massive floor here, with lots of startups and lots of established companies doing interesting things. What have you seen this week that really excites you?

So I'm going to tell you a dirty little secret here. If you work for someone who makes supercomputers, you don't get as much time to wander the floor as you would think, because you get lots of meetings with people who really want to understand things in an NDA way, not just in the public way that's on the floor: what are you not telling us on the floor? What's coming next? So I've been in a large number of customer meetings as well as on the floor, and obviously I can't share everything that's a non-disclosure topic in those. Some things we're hearing a lot about: people are really concerned with power, because they see the TDPs on the roadmaps for all the silicon providers going way up. And with power comes heat as waste, and that means cooling. So power and cooling has been a big topic here. Obviously, accelerators are increasing in importance in HPC, not just for AI calculations but now also for simulation calculations, and we are very proud of the three new accelerator platforms we launched here at the show that are coming out in a quarter or so. Those are two of the big topics. As you walk the floor here, you see lots of interesting storage vendors. The HPC community has been doing storage the same way for roughly 20 years, but now we see a lot of interesting players in that space. We have some great things in storage now and some great things coming in a year or two as well, so it's interesting to see the diversity of that space. And then there are always the fun, exciting topics like quantum computing. We unveiled our first hybrid classical-quantum computing system here with IonQ. I can't say what the future holds in this format, but I can say we believe strongly in the future of quantum computing, and that future will be integrated with the kind of classical computing infrastructure that we make, and that will help make quantum computing more powerful downstream.

Well, let's go down that rabbit hole, because quantum computing has been talked about for a long time. There was a lot of excitement about it four or five years ago; some of the major vendors were announcing quantum computers in the cloud. The excitement has kind of died down, and we don't see a lot of talk around commercial quantum computers yet. You're deep into this. How close are we to having a true quantum computer? Or is a hybrid more like it?

So there are probably more than 20, and I think close to 40, companies trying different approaches to making quantum computers. Microsoft is pursuing a topological approach, Xanadu a photonics-based approach, IonQ an ion trap approach. These are all different ways of trying to leverage the quantum properties of nature. We know the properties exist, we use them in other technologies, and we know the physics, but the engineering is very difficult. Just like it was difficult at one point to split the atom, it's very difficult to build technologies that leverage quantum properties of nature in a consistent, reliable, and durable way, right? So I wouldn't want to make a prediction, but I will tell you, I'm an optimist. I believe that when a tremendous capability with tremendous monetary gain potential lines up with another incentive like national security, engineering seems to evolve faster. When there's plenty of investment and plenty of incentive, things happen.
So, I think my friends in the Office of the CTO at Dell Technologies, who are really leading this effort for us, would say a few to several years, probably. I'm an optimist, so I believe we will sell some of the solution we announced here in the next year to people who are trying to get their feet wet with quantum, and I believe we'll be selling multiple hybrid classical-quantum Dell systems a year within a year or two. And then, of course, you hope it goes to tens and hundreds by the end of the decade.

When people talk about, and I'm talking about people at large, leaders in supercomputing, I would say Dell's name doesn't come up in conversations I have. What would you like them to know that they don't know?

You know, I hope that's not true, but I guess I understand it. We are so good at making the products from which people make clusters, we're number one in servers, number one in enterprise storage, number one in so many areas of enterprise technology, that I think in some ways being number one in those things detracts a little bit from a subset of the market that is a solution subset as opposed to a product subset. But depending on which analysts you talk to and how they count, we're number one or number two in the world in supercomputing revenue. We don't always do the biggest, splashiest systems, but we do the Frontera system at TACC and the HPC5 system at Eni in Europe. Those are the largest academic supercomputer in the world and the largest industrial supercomputer in the world.

That's based on Dell hardware?

On Dell hardware, yeah. But I think our vision is really that we want to help more people use HPC to solve more problems than any vendor in the world, and those problems come at various scales. So we're really concerned about the "more." We're democratizing HPC to make it easier for more people to get in at any scale their budget and workloads require. We're optimizing it to make sure it's not just some parts they're getting, but parts that are validated to work together with maximum scalability and performance. We have a great HPC and AI Innovation Lab that does this engineering work, because one of the myths is, oh, I can just go buy a bunch of servers from company X and a network from company Y and a storage system from company Z, and it'll all work as an equivalent cluster, right? Not true. It'll probably work, but it won't be the highest performance, highest scalability, highest reliability. So we spend a lot of time optimizing. And then we are doing things to try to advance the state of HPC as well. What our systems look like in the second half of this decade might be very different from what they look like right now.

You mentioned a great example of a limitation that we're running up against right now: an entire human body as an organism, or any large system that you try to model at the atomic level when it's a huge macro system. So will we be able to reach milestones where we can get our arms entirely around something like an entire human organism with simply quantitative advances, as opposed to qualitative advances? Right now, as an example, let's go down to the basics from a Dell perspective. You're in a season where microprocessor vendors are coming out with next-gen stuff, and those next-gen microprocessors, GPUs and CPUs, are going to be plugged into next-gen motherboards, with PCIe Gen5 and Gen6 coming.
Faster memory, bigger memory, faster networking, whether it's NICs or InfiniBand, storage controllers, all bigger, better, faster, stronger. And I suspect, I don't know, but I suspect that a lot of systems like Frontera that are out there are not necessarily on what we would think of as current-generation technology; maybe they're N minus one as a practical matter.

Yeah, the systems have a lifetime, and that lifetime is longer than the evolution cycle of the technologies.

So what some people miss is the reality that when we move forward with the latest things being talked about here, it's often a two-generation move for an individual organization.

Yep. Now, some organizations will have multiple systems, and the systems leapfrog in technology generations: even if one is their really large system, the next one might be newer technology but smaller, and the next one might be a larger one with newer technology, and so on. So the biggest supercomputing sites are often running more than one HPC system, each specifically designed with the latest technologies and configured for maybe a different subset of their workloads.

Yeah, so to go back to the core question: in your opinion, do we need that qualitative leap to something like quantum computing, or is it simply a question of scale and power at the individual node level, to get us to the point where we can, in fact, gain insight from a digital model of an entire human body, not just one organ? And to your point, it's not just about the human body, but any system that we would characterize as chaotic today, a weather system, whatever. Are there any milestones you're thinking of where you're like, wow, I understand everything that's going on, and I think we're a year away, a compute generation away, from being able to gain insight out of systems that right now we can't, simply because of scale? It's a very long question that I just asked you, but hopefully you're tracking it. What are your thoughts? What are these inflection points in your mind?

So I'll start simple. Remember when we used to buy laptops and we worried about what the clock speed was, and everybody knew the gigahertz of it, right? For some tasks we're so good at making the hardware that now the primary issues are how great the screen is, how light it is, what the battery life is like, et cetera, because for the set of applications on there, we have enough compute power. Most people don't need their laptop to have twice as powerful a processor; they'd actually rather have twice the battery life. We make great laptops, we design for all of those parameters now, and we see some customers want more of X while some want more of Y, but the general point is that the amazing progress in microprocessors is sufficient for most of the workloads at that level. Let's go to the HPC level, the scientific and technical computing level, and when it needs HPC. If you're trying to model the orbit of the moon around the earth, you don't really need a supercomputer for that. You can get a highly accurate model on a workstation, on a server, no problem. It won't even really break a sweat.

I had to do it with a slide rule.

Look, that might make you break a sweat, but to do it with a single body orbiting another body, and I say orbiting around, but we both know they're really both orbiting the center of mass. It's just that if one is much larger, it seems like one is going entirely around the other. So that's not a supercomputing problem.
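By way of contrast with the galaxy-scale problem that comes next: the two-body case really is workstation-scale work. A minimal sketch (illustrative constants and step size, not anything from the interview) integrates a Moon-like orbit around a fixed Earth in a fraction of a second; applying the same direct pairwise approach to the roughly 100 billion stars of a galaxy would mean on the order of 10^22 force evaluations per time step, which is why that problem lives on supercomputers and still needs approximations.

```python
import math

# Illustrative two-body integrator (semi-implicit Euler), roughly Earth-Moon scale.
# Values and step size are for illustration only.
G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
M_EARTH = 5.972e24       # kg
R0 = 3.844e8             # mean Earth-Moon distance, m
V0 = 1022.0              # Moon's mean orbital speed, m/s
DT = 60.0                # time step: one minute

def simulate(days: float):
    """Integrate the Moon's motion around a fixed Earth (a reasonable
    approximation, since Earth is ~81x more massive, so the barycenter
    sits close to Earth's center)."""
    x, y = R0, 0.0
    vx, vy = 0.0, V0
    for _ in range(int(days * 86400 / DT)):
        r = math.hypot(x, y)
        ax, ay = -G * M_EARTH * x / r**3, -G * M_EARTH * y / r**3
        vx += ax * DT
        vy += ay * DT
        x += vx * DT
        y += vy * DT
    return x, y

# After roughly one sidereal month the Moon should be back near its start.
x, y = simulate(27.3)
print(f"position after ~27.3 days: ({x:.3e}, {y:.3e}) m")
```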
What about the stars in a galaxy, trying to understand how galaxies form spiral arms and how those spur star formation? Now you're talking 100 billion stars, plus a massive amount of interstellar medium. Can you solve that on that server? Absolutely not, not even close. Can you solve it on the largest supercomputer in the world today? Yes and no. You can solve it with approximations on the largest supercomputer in the world today, but a lot of approximations go into even that. The good news is the simulations produce things that we see through our great telescopes, so we know the approximations are sufficient to get good fidelity. But you're never really doing a direct numerical simulation of every particle; that's impossible, you'd need a computer as big as the universe. The approximations in the science, as well as the known parts of the science, are good enough to give fidelity. So to answer your question: there's a tremendous range of problem scales, and there are problems in every field of science and study that exceed the direct numerical simulation capabilities of systems today. So we always want more computing power. It's not macho flops, it's real. We need exaflops, and we will need zettaflops beyond that and yottaflops beyond that, and an increasing number of problems will be solved as we keep working on the problems that are farther out there. So in terms of qualitative steps, I do think technologies like quantum computing, to be clear, as part of a hybrid classical-quantum system, because they're really just accelerators for certain kinds of algorithms, not for general-purpose algorithms, I do think advances like that are going to be necessary to solve some of the very hardest problems. It's easy to formulate an optimization problem that is absolutely intractable by the largest systems in the world today, but quantum systems happen to be, in theory, when they're big and stable enough, great at that kind of problem. That should be understood: quantum is not a cure-all for the shortage of computing power; it's very good for certain problems. And as you said, at this Supercomputing we see some quantum, but it's a little quieter than I probably expected. I think we're in a period now of everybody saying, okay, there's been a lot of buzz, we know it's going to be real, but let's calm down a little bit and figure out what the right solutions are. And I'm very proud that we offered one of those.

We have barely scratched the surface of what we could talk about as we get into intergalactic space, but unfortunately, we only have so many minutes, and we're out of them. Jay Boisseau, HPC and AI technology strategist at Dell, thanks for a fascinating conversation.

Thanks for having me. Happy to do it any time.

We'll be back with our last interview of Supercomputing 22 in Dallas. This is Paul Gillin with Dave Nicholson. Stay with us.