theCUBE presents HPE Discover 2022, brought to you by HPE.

Hey everyone, welcome back to theCUBE's coverage of HPE Discover 22, live from the Sands Expo Center in Las Vegas. Lisa Martin here with Dave Vellante. We have an alum back joining us to talk about high-performance computing and AI: Justin Hotard, EVP and general manager of HPC and AI at HPE. That's a mouthful. Welcome back.

It is. It's great to be back, and wow, it's great to be back in person as well.

It's life-changing to be back in person. The keynote this morning was great. Dave was saying the energy he's seen is probably the most out of any Discover that he's been at, and we've been feeling that, and it's only day one.

Yeah, I agree, and I think it's a testament to the places in the market where we're leading and the innovation we're driving: obviously the leadership in HPE GreenLake, enabling as a service for every customer, not just those in the public cloud, and then obviously what we're doing in HPC and AI, breaking records and advancing the industry.

I just saw the Q2 numbers; nice revenue growth there for HPC and AI. Talk to us about the lay of the land. What's going on? What are customers excited about?

Yeah, it's a really fascinating time in this business, because we just delivered the world's first exascale system, right? And that's a huge milestone for our industry. Thirteen years ago we did the first petascale system; now we're doing the first exascale system. A huge advance forward. But what's exciting too is that these systems are enabling new applications and new workloads: breakthroughs in AI, the beginning of being able to do proper quantum simulations, which will lead us to a much brighter future with quantum, and better, more granular models, which have the ability to really change the world.

I was telling Lisa that during the pandemic we did Exascale Day; it was like this co-produced event. We weren't quite at exascale yet, but we could see it coming, and so it was great to see that in the keynote: with Frontier, you guys broke through it. Is that a natural evolution of HPC, or are we entering a new era?

Yeah, I think it's a new era, and for a few reasons. That breakthrough starts to enable a different class of use cases, and it's combined with the fact that, if you look at where the rest of the enterprise data set has gone, we've got a lot more data and a lot more visibility into data, but we don't know how to use it. Now, with this computing power, we can start to create new insights and new applications. So I think this is going to be a path to making HPC more broadly available, and of course it introduces AI models at scale. That's really critical, because AI is a buzzword; lots of people say they're doing AI. But to build true intelligence, not effectively a machine that learns a data set and then can only handle that data, but something that can continue to learn and understand and grow and evolve, you need this class of system. So I think we're at the forefront of a lot of exciting innovation.

In terms of innovation, how important is it that you're able to combine as a service and HPC? What does that mean for customers, for experimentation and innovation?
You know, a couple of things. I've actually been talking to customers about that over the last day and a half. One is, you think about these systems: they're very large, and they're pretty big bets if you're a customer, so getting early access to them is really key, making sure that they can migrate their software and their applications. In our space, most applications are custom-built; whether you're a government or a private-sector company using these systems, they're pretty specialized. So that early access is important. And then, with the growth and explosion of insight that we can enable, and the diversity of new accelerator partners and new processors on the market, what we're actually seeing is the attraction of diversity: making things available where customers can use multimodal systems. We've seen that in this era. Our customer LUMI in Finland, the number-three fastest system in the world, actually has two sides to their system: a dense compute side and a dense accelerator side.

So Oak Ridge National Laboratory was on stage with Antonio this morning, talking about the Frontier system. What a great name, very apropos. But it was also just named number one on the supercomputing Top500. That's a pretty big accomplishment. Talk about the impact of what that really means.

Yeah, a couple of things. First of all, any time you have a number-one breakthrough like this, you see a massive acceleration of applications. Look at the applications that were built: when the US Department of Energy funded these exascale platforms, they also funded a set of applications. So it's the ability to get more accurate Earth models for long-term climate science; it's the ability to model the electrical grid and better understand how to build resiliency into that grid. Dr. Zacharia talked about progressing cancer research and cancer breakthroughs. There are so many benefits to the world that we can bring with these systems. That's one element.

The other big part of this breakthrough is a lesser-known companion list to the Top500, called the Green500, where we measure performance over power consumption. The huge breakthrough in this system is that not only did Frontier debut at number one on the Top500, it actually took the top two spots on the Green500, because it has a small test system that's also up there. That's a real breakthrough, because now we're doing a ton more computation at far lower power. And that's really important, because with these systems you ultimately can't keep consuming power linearly as you scale up performance. There's the impact on our environment, but there's also the impact on the power grid and on heat dissipation; there's a lot of complexity. So this breakthrough with Frontier also enables us, no pun intended, to really accelerate the capacity and scale of these systems and what we can deliver.
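To make the Green500 metric concrete, here is a quick back-of-the-envelope check in Python. The figures are approximate values from the public June 2022 Top500/Green500 lists (roughly 1.102 exaflops sustained at about 21 MW), not numbers quoted in the interview:

```python
# Rough sketch: the Green500 ranks systems by sustained performance per watt.
# Figures below are approximate public values for Frontier's June 2022 debut.

rmax_flops = 1.102e18    # ~1.102 exaflops sustained on the HPL benchmark
power_watts = 21.1e6     # ~21 MW drawn during that run

gigaflops_per_watt = rmax_flops / power_watts / 1e9
print(f"~{gigaflops_per_watt:.0f} gigaflops per watt")  # -> ~52 GF/W
```

The point of the metric is the one Hotard makes: scaling raw performance without scaling efficiency would make power draw, grid impact, and heat dissipation untenable.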
It feels like we're entering a new renaissance of HPC. I'm old enough to remember; maybe five, six years ago my wife threw out my green Thinking Machines T-shirt that Danny Hillis gave me. You're probably both too young to remember, but you had Thinking Machines and Kendall Square Research; Convex tried to build a minicomputer HPC. There was a lot of innovation going on around that time, and then it just became too expensive, among other things, and x86 happened. But it feels like now we're entering a new era of HPC. Is that valid? Is it true? What does that mean for HPC as an industry, and for industry?

Yeah, I think it's a market that's opening up and getting much broader in the number of applications you can run. We've traditionally had scientific applications; there's obviously a ton in energy and physics and some of the traditional areas that the Department of Energy sponsors. But we saw this even with the COVID pandemic: our supercomputers were used to identify the spike protein, and to help validate and test these vaccines and bring them to market in record time. We saw some of the benefits of these breakthroughs. And I think it's this combination: we actually have the data, it's digital, it's captured, we're capturing it at the edge and storing it more broadly, so we have access to the data, and now we have the compute power to run against it. The other big thing is the techniques around artificial intelligence. What we're able to do with neural networks, computer vision, large language models, natural language processing: these are breakthroughs that, one, require these large systems, but two, as you give them the large systems, you can really accelerate how sophisticated these applications get.

Let's talk about the impact of the convergence of HPC and AI. What are some of the things that you're seeing now, and what are some of the things that we're going to see?

Yeah, so one thing I like to point out is that it's not really a convergence; I think that sometimes gets a little oversimplified. It's traditional modeling and simulation leveraging machine learning to refine the simulation. This is one of the things we talk about a lot in AI: using machine learning to create code in real time rather than humans doing it, that ability to refine the model as you're running. So we have an example: we actually launched an open-source solution called SmartSim, and the first application of that was climate science. What it's doing is learning from the data as the simulation is running, to provide more accurate climate prediction. But think about that: it could be run for anything that has a complex model. You could run that for financial modeling. So we're seeing things like that, and I think we'll continue to see more.
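As a concrete illustration of that pattern, here is a minimal, self-contained Python sketch of a simulation whose model is refined online as data arrives mid-run. This is a toy illustration of the idea only, not SmartSim's actual API; the functions and constants are hypothetical stand-ins:

```python
def crude_simulator(x, dt=0.1):
    """Cheap approximate physics: exponential decay with a biased rate."""
    return x * (1.0 - 0.5 * dt)

def observe(x, dt=0.1):
    """Stand-in for real measurements: the true system decays faster."""
    return x * (1.0 - 0.7 * dt)

state = 100.0
correction = 0.0          # multiplicative bias learned online
learning_rate = 0.1

for step in range(40):
    predicted = crude_simulator(state) * (1.0 + correction)
    observed = observe(state)             # data arriving mid-run
    error = (observed - predicted) / max(abs(observed), 1e-9)
    correction += learning_rate * error   # refine the model as it runs
    state = observed                      # assimilate the observation
    if step % 10 == 0:
        print(f"step {step:2d}: predicted={predicted:8.3f} "
              f"observed={observed:8.3f} correction={correction:+.4f}")
```

In SmartSim's published climate work, the running simulation exchanges tensors with machine-learning models through an in-memory database (https://github.com/CrayLabs/SmartSim); the toy loop above only captures the high-level predict-compare-refine pattern Hotard describes.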
The other side of that is using modeling and simulation to represent what you see in AI. We were talking about the grid; this is one of the exascale computing projects. Once you actually get the data, you can start modeling the behavior of every electrical endpoint in a city: the meter in your house, the substation, the transformers. You can start measuring the effects of that and build equations. And once you've built those equations, because you've learned what actually happens in the real world, you can provide that model to someone who doesn't need an exascale supercomputer to run it. Your local energy company can better understand what's happening, and they'll know: there's a problem here, we need to shift the grid or respond more dynamically. Hopefully that avoids brownouts or some of the catastrophic outages we've seen.

So they can deploy that model, which inherently has that intelligence, on more cost-effective systems, and then apply it to a much broader range.

Do any of those smart simulations on climate suggest that it's all a hoax? You don't have to answer that question.

What? The temperature outside, Dave, might give you a little bit of an argument to that.

Tell us about quantum. What's your point of view there? Is it becoming more stable? What's HPE doing there?

Yeah, so look, I think there are two things to understand with quantum. There's quantum hardware, which fundamentally runs very differently from how we run traditional computers. And then there are the applications, and ultimately a quantum application on quantum hardware will be far more efficient in the future than anything else. Much like we see with HPC and AI, we see the opportunity for quantum to be complementary. It runs really well with certain applications that present themselves as quantum problems, and some great examples are the life sciences, obviously quantum chemistry, and actually some opportunities in AI and other areas where quantum just lends itself more naturally to the behavior of the problem. And what we believe is that in the short term we can model quantum effectively on these supercomputers, because there's not a perfect quantum hardware replacement yet. Over time, we anticipate that will evolve, and we'll see quantum accelerators, much like we see AI accelerators in this space today. So we think it's going to be a natural evolution and progression, but there are certain applications that are just going to be solved better by quantum, and that's the future we'll run into.

And you're suggesting, if I understood correctly, that you can start building those applications, and at least modeling what they look like, today with today's technology. That's interesting, because a rudimentary comparison to quantum is flash storage, right? When you got rid of the spinning disk, it changed the way people thought about writing applications. So if I understand it, new applications that can take advantage of quantum are going to change the way developers write: it's not a one or a zero, it's one and zero and virtually infinite combinations.

Yeah, and actually I think that's what's compelling about the opportunity. If you think about a lot of the traditional computing industry, you always had to wait for the hardware to be there to really write and test the application.
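To ground the "model quantum on these supercomputers" point, here is a minimal NumPy statevector sketch simulating a two-qubit circuit entirely on classical hardware. It's a generic textbook illustration of how classical quantum simulation works, not any HPE tooling:

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
I = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],                 # controlled-NOT on qubit 1,
                 [0, 1, 0, 0],                 # controlled by qubit 0
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

state = np.zeros(4)
state[0] = 1.0                                 # start in |00>
state = np.kron(H, I) @ state                  # Hadamard on qubit 0
state = CNOT @ state                           # entangle: (|00> + |11>)/sqrt(2)

for basis, amplitude in zip(["00", "01", "10", "11"], state):
    print(f"|{basis}>: probability {abs(amplitude)**2:.2f}")
# -> 0.50 for |00> and |11>, 0.00 otherwise
```

The memory for an n-qubit state grows as 2^n complex amplitudes, which is why serious quantum simulation needs supercomputer-class machines, and why application work can progress this way before mature quantum hardware exists.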
And we even see that with our customers in HPC and AI, right? They build a model, and then they have to optimize it across the hardware once they deploy it at scale. With quantum, what's interesting is that you can actually model and make progress on the software, and then optimize it as the hardware becomes available. And that's why this concept we talk about of quantum accelerators is really interesting.

Where are the customer conversations these days? With so much evolution in HPC and AI, and so much change in the world in the last couple of years, is it elevating up the C-stack, in terms of your conversations with customers wanting to become familiar with exascale computing, for example?

Yeah, I think two things. One is we see a real rise in digital sovereignty, and exascale and HPC are a core foundation of that. You see what Europe is doing with the EuroHPC initiative as one example, and we see the same kind of leadership coming out of the UK with the system we deployed with them in ARCHER2. We've got many customers across the globe deploying next-generation weather-forecasting systems. Everybody understands the foundation of having a strong supercomputing and HPC capability and competence, and not just the hardware: the software development, the scientific research, the computational scientists, to enable them to remain competitive economically. It's important for defense purposes; it's important for helping their citizens, providing services and betterment. So I'd say that's one big theme.

The other is something Dave touched on before, around as a service, and why we think HPE GreenLake will be a beautiful marriage with our HPC and AI systems over time. Customers are going to scale up and build really complex models, and then they'll simplify them and deploy them in other places. There are a number of examples; we see them in places like oil and gas, and we see them in manufacturing: I've got to build a really complex model, figure out what it looks like, then I can reduce it to a certain equation or application that I can deploy so I understand what's happening and running. Because, as much as I would love it, you're not going to have every enterprise around the world, or every endpoint, running an exascale system, right? So that ability to provide an as-a-service element with HPE GreenLake, we think, is really compelling.

HPE's move into HPC and the acquisitions you've made have really become a differentiator for the company, haven't they?

Yeah, and I think what's unique about us today, if you look at the landscape, is that we're really the only global system provider. There are local players that we compete with, but we are the one true global system provider. We're also, I would say, the only holistic innovator at the system level, to credit my team on the work they're doing. But we're also very committed to open standards. We're investing in a number of places where we contribute software assets to open source, and we're doing work with standards bodies to progress and accelerate the industry and enable the ecosystem. And the last thing I'd say is that we are so connected through the legacy, the legend, of Hewlett Packard Labs, which now also reports into me, that we have these really tight ties into advanced research.
And some of that advanced research, which isn't just around core processor silicon, is really critical to enabling better applications and better use cases, and accelerating the outcomes we see in these systems going forward.

Can you double-click on that? I wasn't aware that it reported into your group. The roots of HP are in invent, right? HP Labs is renowned. The company kind of lost that formula for a while, and now it sounds like it's coming back. What are some of the cool things you guys are working on?

Well, let me start with a little bit of recent history. We just talked about the exascale program; that's a great example, where we had a public-private partnership with the Department of Energy. It wasn't just that we built a system and delivered it: if you go back five or ten years, there were innovations built to accelerate that system. Our Slingshot fabric is one example, a core enabler of this accelerated computing environment, but there were others in software, applications, and services that allowed us to deliver a complete solution into the market.

Today, we're looking at things around trustworthy and ethical AI. Trustworthy AI in the sense that the models are accurate, and that's a challenge on two dimensions: one, the model is only as good as the data it's studying, so you need to validate that the data is accurate; and two, even if the data is accurate, you need to make sure you've got a model that's going to predict the right things and not call a dog a cat, or a cat a mouse, or whatever it is. So that's one area.

The other is future system architectures, because, as we've talked about before, Dave, you have this constant tension between the fabric, the interconnect, the compute, and the storage, and you're constantly balancing it. So we're really looking at that: how do we do more shared memory access? How do we do more direct writes? We're looking at some future system architectures and thinking about that. We think that's really critical in this part of the business, because these heterogeneous systems aren't about one monolithic application; you'll have applications that need to take advantage of different code and different technologies at different times, and to move seamlessly across the architecture. We think that's going to be part of the hallmark of the exascale era.

Including the edge, which is a completely different animal. I think that's where some disruption is going to bubble up in the next decade.

Yeah, and that's the last thing I'd say: as we look at AI at scale, which is another core part of the business that can run on these large clusters, that means getting all the way down to the edge and doing inference at scale. About a month ago I was at the World Economic Forum, where we were talking about the space economy, and to me it's the perfect example of inference: if you've got a set of data out at Mars, it doesn't matter whether you want to push all that data back to Earth for processing or not; you don't really have a choice, because it's just going to take too long. You don't have that time.

Justin, thank you so much for spending some of your time with Dave and me, talking about what's going on with HPC and AI. The frontier just seems endless and very exciting. We appreciate your time and your insights.

Great, thanks so much.
And don't call a dog a cat.

I thought I learned that from you. A dog, I know. Nope, nope.

For Justin and Dave Vellante, I'm Lisa Martin. You're watching theCUBE's coverage of day one of HPE Discover 22. theCUBE is, guess what, the leader in live tech coverage. We'll be right back with our next guest.