Welcome back everyone to theCUBE's live coverage of HPE Discover in Barcelona. I'm Rebecca Knight, your host, along with my co-host and analyst, Rob Strechay. We are welcoming back to theCUBE a CUBE alum many times over, Justin Hotard. He is the executive vice president and GM of the HPC and AI business group and Hewlett Packard Labs. Welcome, Justin.

Thanks, Rebecca. Great to be with you and Rob, and great to be back at Discover in Europe.

Yeah, it's a pretty cool conference. I mean, Barcelona, you can't go wrong.

It's hard to go wrong. It's a great city.

So I want to start by asking you about this new announcement at HPE earlier this month, a supercomputing solution for generative AI. Tell our viewers a little bit more about this turnkey solution.

Yeah, I think fundamentally there are a couple of things you're seeing with GenAI. First of all, lots of energy around foundation and large language models, but there are actually different parts to generative AI. People are training models for video, for voice, for image generation; Stable Diffusion is the common one we talk about. And what we realized is that even in our scientific and technological communities, one size doesn't fit all, but people need an easy way to get started. And so we're working closely with NVIDIA. We saw this opportunity with Grace Hopper coming out to build basically a scalable AI supercomputer in a box. And you can buy one, you can buy four, you can buy eight. In AI and HPC we like multiples of four, so we'll stick to that, but it allows you to really scale easily. And it's all turnkey: it has all the software you need, it comes completely packaged with our services, and of course it is liquid cooled, which we think is really important because that means you've got a much lower PUE out of the gate.

Yeah, and I think, again, we last talked back in June, and it feels like a decade ago at this point, from an AI perspective.

It does.

And I think a lot of that has to do with the fact that people are not sold on going to the cloud for AI in particular, because a lot of the data is still on-prem and they're looking to leverage that data. Is that what you're seeing, and why does this solution fit the bill at this time?

Yeah, I think that's right, Rob. The reality is that AI is going to drive a different architecture. If you think about the cloud native architecture, it was all about virtualization. Much like what we do with our PCs or our mobile phones: how many apps can we run on one device, one server, one building block? That optimization, which makes a ton of sense whether you're in the public cloud or in private clouds like we've traditionally deployed with HPE GreenLake, is not the same architecture that you need for AI. In AI, what you need is a system that can scale out and run one workload. We talk a lot about the data center being the computer. And that's true in model training, but it's also true in tuning, when you're trying to tweak a model with your own proprietary data, which lots of people are starting to look at doing with large language models. And it's even going to be true in inferencing, where you're trying to replicate and scale these large models for very low latency, very fast, and very cost-effective inferencing.
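As a toy illustration of the "scale out and run one workload" idea, here is a minimal sketch that shards a single job across worker processes and combines the partial results; real systems would use MPI or a distributed training framework, and everything named here is a hypothetical stand-in, not HPE's stack.

```python
# "The data center is the computer": one workload spread across many workers,
# rather than many small workloads packed onto one machine. Each worker takes
# a shard of the data; a reduction step combines partial results, the same
# shape as a gradient all-reduce in distributed model training.

from multiprocessing import Pool

def worker(shard: list) -> float:
    # Stand-in for one node's share of a step (e.g., computing local gradients).
    return sum(x * x for x in shard)

if __name__ == "__main__":
    data = [float(i) for i in range(1_000_000)]
    nodes = 8  # the building blocks described above come in multiples of four
    shards = [data[i::nodes] for i in range(nodes)]
    with Pool(nodes) as pool:
        partials = pool.map(worker, shards)  # scatter: each worker computes locally
    total = sum(partials)                    # reduce: combine, like an all-reduce
    print(f"{nodes} workers, combined result {total:.3e}")
```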
That's what I was going to ask: do you see inferencing going to where the sensors are, or to a co-location of a bunch of them closer to where the data is? Is that what you're seeing, and is that one of the things this is solving for?

Yeah, absolutely. I think inferencing is going to be everywhere. I think inferencing is going to be where it makes the most sense, balancing for latency. We talk a lot about autonomous vehicles: you're not going to put the inference engine for an autonomous vehicle in the cloud. But there are a lot of places, if you think about some of the embedded apps and services that we're seeing in GenAI, where it makes sense to have an app that's hosted. I mean, of course, most of our enterprise apps are now cloud apps; putting the inferencing in the cloud for those apps makes sense. So it's going to be about where the data is and, to the point you made earlier, also about making it seamless to move data between all of these locations, right? Because ultimately, when I'm training or tuning or retraining a model, I want to have the convenience of access to my data. And of course, we're in Europe, so it's important to remember that many countries have very thoughtful regulations around data privacy and data mobility. And so we can't just assume we can centralize all this compute in one place. We have to move the compute to where the data is.

Yeah, maybe one day the U.S. will get that, but we'll see. We can kind of hope.

When it comes to AI, and as you said, a year has felt like a decade, where are the growing opportunities?

Yeah, Rebecca, they're in a couple of areas. I think one is in innovation around how to use it in enterprise use cases. So we'll talk more about this this week, but RAG, which is a new term, is basically retrieval augmented generation. That's where I ask my AI chatbot or my chat interface something through my prompt, and it doesn't pull the answer out of the AI model, but it goes and finds a document. This makes a ton of sense if I've got a bunch of technical support publications and I'm a customer service organization: I ask a question, and it just goes and finds the paragraph and provides the answer to the customer. It's a much better customer experience, it's more accurate, and it eliminates the chance for human error in something like that. So that's one example.
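As a concrete illustration of the retrieval-augmented generation flow just described, here is a minimal sketch. The passages, the keyword-overlap scoring, and the prompt are hypothetical stand-ins (a real system would use embedding search and a call to an LLM), not any particular HPE or NVIDIA API.

```python
# Minimal retrieval-augmented generation (RAG) sketch: instead of asking the
# model to answer from its weights alone, retrieve the most relevant passage
# from a document store and hand it to the model as grounding context.

from dataclasses import dataclass

@dataclass
class Passage:
    source: str
    text: str

# Hypothetical support-knowledge-base passages.
KNOWLEDGE_BASE = [
    Passage("cooling-guide.pdf", "Check coolant flow rate before restarting the pump."),
    Passage("install-manual.pdf", "Rack rails require the M5 screws shipped in kit B."),
]

def score(query: str, passage: Passage) -> int:
    # Stand-in for embedding similarity search: crude keyword overlap.
    return len(set(query.lower().split()) & set(passage.text.lower().split()))

def retrieve(query: str) -> Passage:
    return max(KNOWLEDGE_BASE, key=lambda p: score(query, p))

def answer(query: str) -> str:
    passage = retrieve(query)
    # In a real system this prompt would go to an LLM; here we only show the
    # grounding step that lets the answer come from a document, not the model.
    prompt = f"Answer using only this context:\n{passage.text}\n\nQuestion: {query}"
    return f"[{passage.source}] {passage.text} (prompt: {len(prompt)} chars)"

print(answer("What screws do the rack rails need?"))
```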
I think on the training side, we're just seeing more and more demand for specialized use cases, right? Take our announcement with the University of Bristol: here's the UK government, which in a very short period of time is going to announce and deploy its largest supercomputer, completely focused on AI, with a stack similar to the complete turnkey cluster with Grace Hopper nodes that we launched a couple of weeks ago. So this is an example where the UK realizes: we need to get ahead, we need to enable AI for our scientists and accelerate. And I think we're seeing more and more of that, both in private enterprises and in the public sector.

What does it mean to be an AI native architecture, and what are the key attributes of that?

Yeah, you really start with the data. We've been talking about data-first modernization. Antonio talked about that a few years ago at Discover, and I think maybe people didn't realize at the time how prescient it was in terms of where we were headed. But it starts with the data. It's not enough just to have the data: you've got to organize and structure the data, and it's got to be formatted in a way you can manage it. You also need to have the right controls on that data. We talked about privacy: you may find that as you're training models, there are data sets that you have to exclude for privacy. How do you make sure you exclude those without impacting the integrity of your model? That's something we look at in terms of data reproducibility; it's a solution we provide. So it starts with the data. Then it's about having a full stack of solutions, from software all the way down into the infrastructure to services, purpose built for AI, whether you're training or tuning or deploying inferencing. And then finally it's about recognizing that this entire environment needs to be hybrid, because, as we talked about, where one customer trains versus where they deploy is going to be distributed from edge to cloud, and we need to make sure that they can deploy everywhere and use the right solution, whether it's public or private cloud, with the right architecture for them to deploy and use the environment.
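A minimal sketch of the reproducible exclusion described above: filter records deterministically and fingerprint what survives, so the exact training set can be audited and rebuilt later. The record fields and the filter are hypothetical illustrations, not HPE's data reproducibility solution.

```python
# Deterministic dataset filtering with an audit fingerprint: exclude records
# flagged for privacy, then hash the surviving record IDs so the exact
# training set can be reproduced and verified later.

import hashlib

records = [
    {"id": "r1", "text": "public product manual", "contains_pii": False},
    {"id": "r2", "text": "customer email thread", "contains_pii": True},
    {"id": "r3", "text": "published benchmark report", "contains_pii": False},
]

training_set = [r for r in records if not r["contains_pii"]]

# Rerunning the same filter on the same data must yield the same digest,
# which is what makes the exclusion reproducible and auditable.
digest = hashlib.sha256(
    ",".join(sorted(r["id"] for r in training_set)).encode()
).hexdigest()
print(f"{len(training_set)} records kept, training-set fingerprint {digest[:12]}")
```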
Another place that Antonio was very early on was the hybrid nature. We talk about supercloud and a cloud operating model everywhere, and I think he definitely leaned in on that.

Yeah, and what I'm excited about, and we'll have some announcements this week, is that we're also now saying GreenLake was the right place for your private cloud, and some of the capabilities we're building into it, that we've been building into it, have actually set us up perfectly to make it the ideal destination when you go build that AI native architecture. And the last point I'd make there is sustainability. We announced the sustainability dashboard earlier this year. That's core to everything we do in HPC and supercomputing. We're a world leader in liquid cooling; we've got tons of IP and patents in that space. And that's really important, because we can't add all of this computing and ignore the impact on our carbon footprint. It has to be done in a way that's carbon neutral, or ideally carbon negative if we can do it.

That is so interesting to me. We've been having a lot of European guests, and I know you live basically on an airplane, but we've been having a lot of European guests who are talking about sustainability and making sure that it really is fundamentally built into the system. A lot of that is because there are these more thoughtful regulations in Europe. How do you approach this with customers, and what are you hearing, if there are differences, between how the US and how European countries are thinking about this?

Yeah, I think in the US there's still a bit of a divide. There are a lot of words and probably less action when we talk to customers. Some customers are saying: I just want to get the solution as quickly as possible; yes, it's important, but I'll address it later. I think everybody says it's important, but you don't always see the action. And then we've got some great customers. We announced a couple of weeks ago a partnership with a company called DarkBite AI that's really leading in this space. They're building a secure cloud and an incredibly green cloud. And that's the kind of solution where we think there's a lot of really good thought leadership, and they're well positioned because they're starting with the premise that their entire ecosystem needs to be carbon negative, and they see it not only as important for sustainability but also for security. So we see customers a little bit bifurcated in the U.S. In Europe, I think the message is very clear. Partners like Taiga Cloud, who's here on the show floor with us, are starting with the premise that everything they do has to be sustainable, because it's not going to be viable in Europe, certainly for their customers, many of the model developers and deployers, unless it's sustainable.

Yeah, I think you just hit on a really good point, and I want to double-click into it: it's not that people won't build models in the cloud, but there are also a lot of these smaller, AI-specific clouds popping up. Do you see that as a strong space for you, where you've been building Frontier and others for years now, taking that DNA and injecting it into those clouds?

Yeah, for a couple of reasons, Rob. First is the one you touched on: we have the expertise in scaling large systems, and it's more than just the engineering and how we design these systems; it's the onsite services, how we deliver. It's very, very different from a traditional private or public cloud environment, where a node fails and you just don't worry about it because you redistribute your workloads. On these systems, every node is critical, right? And getting high availability and reliability matters. So in the AI native architecture, we see that. The other thing some of these startups are doing, because they're very, very focused on large scale, and we're working with them closely, is differentiating because they recognize that a customer needs a full commitment, right? It's not: I want to have this capacity for a short period of time, I'm willing to be flexible on my use, or I'm going to use it for a couple of hours. Customers are looking for commitments of scale and capacity over weeks, months, even years in some cases. And that's the other reason we see some of these cloud startups as being really relevant, in addition to being thoughtful around sustainability or the security models that matter, as I touched on earlier.
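To illustrate why "every node is critical" on these tightly coupled systems, here is a minimal checkpoint-and-resume sketch, the standard mitigation when one failed node can stall an entire job; the file name and the training loop are hypothetical stand-ins, not HPE's software.

```python
# Checkpoint/resume sketch: in a tightly coupled job, one failed node stalls
# the whole system, so long-running training periodically saves state and
# restarts from the last checkpoint rather than from scratch.

import json
import os

CHECKPOINT = "checkpoint.json"  # hypothetical path

def load_state() -> dict:
    if os.path.exists(CHECKPOINT):
        with open(CHECKPOINT) as f:
            return json.load(f)
    return {"step": 0, "loss": float("inf")}

def save_state(state: dict) -> None:
    with open(CHECKPOINT, "w") as f:
        json.dump(state, f)

state = load_state()  # after a node failure, resume from the last save
for step in range(state["step"], 1000):
    state = {"step": step + 1, "loss": 1.0 / (step + 1)}  # stand-in training step
    if state["step"] % 100 == 0:
        save_state(state)  # a failure now costs at most 100 steps of rework
```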
So November 30th marks the anniversary of when ChatGPT was unleashed into the world. One thing that's really coming through in this conversation with you is the real need to do AI responsibly and ethically, to reduce risk but also because it is the right thing to do. How do you talk to customers about this, and what are you hearing from them in terms of how they're approaching it?

Yeah, first of all, Rebecca, we've been a leader in this for some time. We actually worked with the World Economic Forum and some other companies to create a standard around responsible AI and a view on AI ethics. So that's something that's very core to us. When we talk to customers about it, I think one of the benefits of ChatGPT is that people have realized this is important. And so we're building it into our tools; it's embedded in some of the tools I talked about that provide reproducibility. That's one element: expanding our machine learning development environment into areas where it allows you to do so responsibly. We've got a demo on the show floor around our Gen AI studio, which allows you to figure out how best to use it, but it provides the visibility, so a developer, a traditional software developer, not even a data scientist, can see what's happening and be able to interrogate it. I think it's a place where customers are very, very focused. They see the value, but they really want to make sure they're managing it responsibly. And the last thing I'll say is: think beyond just LLMs, right? There's a lot of excitement about LLMs, and ChatGPT has been a phenomenal accelerator for the broader AI market. But when you get into some of our partners and customers doing work around scientific models, or looking at long-term or mid-term climate forecasting, or thinking about how to implement AI to accelerate traditional high performance computing simulations, they realize that the integrity of the model is foundational. And that's non-negotiable for most of them, because if they don't have that, then their entire research project or business model is built on a house of cards.

It would make sense. Again, AI, like you said, has been around for a long time; it's not brand new, even though ChatGPT is a year old. ML, which is the basis for a lot of these models, has been around for a while, and that's really the core of where this technology has been used. Like you said, weather forecasting has been done on this stuff for quite some time; I love seeing the European model versus the North American model and all that. Are you seeing people, now that they've been exposed to things like GenAI, revisiting how they might use ML as well? Is that really spurring that along too?

Yeah, I think traditional classification learning is actually getting a boost as well. I was at a forum in New York about a month and a half ago; it was all about health and life sciences, and we were talking about how one of the phenomenal applications is computer vision for radiology, helping radiologists, but the penetration has been fairly low. Now, because of the boom in AI, you're seeing some of the manufacturers embed more and more of this into their core product, so it's no longer an adjunct. And this was kind of an open discussion in the forum among many of the companies presenting and some of the health professionals. That's probably one example, but I think what's been great about GenAI is that it feels a bit like the Netscape or Mosaic moment of the internet, right? Everyone's all of a sudden realized it. I remember that moment when I was in college, when we realized we could check basketball scores and the weather (I wasn't checking stocks as much as I probably should have been in college) in relatively real time. I think that aha moment is what's happened, and it's causing people to go back to all the various ways they can use this, which is exciting, because I think that's where more of the tremendous impact comes: in the social value, and the real value in terms of how we advance the way people live and work, which is core to our purpose at HPE. That's where that real value actually comes to bear.
Are you seeing that, like you said, you brought up healthcare, and retail, being able to do recommendation engines and things of that nature, where you have to be careful about PII and other things? Are you seeing different applications of ML and AI across the different verticals?

Yeah, I think it's probably the federation of that to the broader industry, right? If you look at the large internet players, or the tier one hyperscalers as we think of them, they've got tons of ML and DL in their models, right? Recommenders and various other elements of analytics, and obviously computer vision in other areas. I think what this does is lower the hurdle, so that everybody can raise the bar: a retailer that now wants to provide that recommender engine doesn't have to be one of the two big online retailers in the world, right?

Excellent. Well, Justin, these are exciting times. Thank you so much for coming back on theCUBE.

Yeah, it was great to see you both, and thanks again for having me.

I'm Rebecca Knight, for Rob Strechay. You are watching theCUBE, and there is much more of our coverage of HPE Discover in Barcelona to come. You're watching theCUBE, the leader in technology coverage.