Welcome back to HPE Discover 2023. This is theCUBE's continuous coverage, day three, here in Las Vegas. We're in the Venetian conference center, myself, Dave Vellante, with Rob Strechay. We haven't been outside all week, have we? No, I don't know if it's sunny out at this point. It's all good though, because it's pretty hot in here with all the talk of AI and new compute architectures. And we're going to talk to Krista Satterthwaite, who's the senior vice president and general manager of the HPE mainstream compute business. Great to see you, thanks for coming on.

It's great to be here, thanks for having me.

You're welcome. So, obviously great business. I mean, HPE's renowned, it's got a long history of building compute architectures. So, it's been a big show. You must be excited. What's hot at the show besides AI?

Well, you know, I have to talk about AI a little bit, because I'm really excited about the announcements that we had here today. So, first of all, of course you saw, we announced HPE GreenLake for Large Language Models. And the reason I'm excited is because there are so many enterprise customers that really need that access to supercomputing. And now, with an on-demand supercomputing service, they can get that access without having to buy a supercomputer themselves. And then when it comes to HPE ProLiant, we introduced some inferencing solutions, several of them actually. And with our ProLiant Gen 11, which is our new server line, we created two new servers for accelerator optimization. And we didn't have these servers for Gen 10 and Gen 10 Plus. One of them's the DL320, and it's great for edge inferencing, fits four GPUs in 1U, so a compact space. And so, at the edge, you know, customers are doing video analytics, customers are doing computer vision. And this is a perfect platform for that. The second platform is the DL380A. So we have had the DL380 for years and years and years. Now we have the DL380A.
And the A is because it's accelerator optimized. It accepts four double-wide GPUs, and this is great for generative visual AI and AI natural language processing.

That's great. And like you said, I think the DL380 is still one of the most popular servers out with customers today. And having been with you guys a couple of years back, I definitely built product on the DL380s back in the day. What seems really interesting, and I think you hit on it, is the whole edge-to-cloud strategy that has been laid out, and that Antonio was definitely early on. Bringing inference to the edge, that sounds really exciting. What types of use cases are you seeing around that?

Yeah, so, smart spaces, that's one. Another one is a lot of retail stores trying to do video analytics, analyzing things. There are the airlines that are trying to use cameras to time how long it takes them to do things when the planes come in. It's fascinating to watch, because they start the clock when the plane pulls up, when doors open, when doors close, and they can see exactly how long everything takes. So it's amazing, there are so many use cases at the edge, and customers are trying to do more at the edge than ever before.

So there are a couple of architectural evolutions going on here. One is just AI, this new, sort of different workload, obviously very data intensive. And the other is inferencing at the edge, which is kind of real-time or near real-time. Can you talk about how that affects how HPE thinks about architecting systems?

Yeah, so obviously customers need more compute than ever before when it comes to AI. So for the use cases I mentioned, the inferencing use cases, we partner closely with NVIDIA to support the L4 card, and that's really for the edge with the DL320, and then the L40 and the H100 cards, and that's more at the data center. So they need levels of GPUs that we've never seen before, and that's why we have these two new platforms.
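The airline turnaround-timing use case mentioned above reduces to computing elapsed time between camera-detected events. Here's a minimal sketch of that downstream step, assuming a vision model has already emitted timestamped events; the event names and timestamps are hypothetical, not from any HPE or airline system:

```python
from datetime import datetime

# Hypothetical camera-detected events for one aircraft turnaround.
# In practice these would come from a computer-vision model watching the gate.
events = {
    "plane_at_gate": "2023-06-21T09:02:10",
    "doors_open":    "2023-06-21T09:05:40",
    "doors_closed":  "2023-06-21T09:38:05",
    "pushback":      "2023-06-21T09:41:30",
}

def minutes_between(start_event: str, end_event: str) -> float:
    """Elapsed minutes between two detected events."""
    t0 = datetime.fromisoformat(events[start_event])
    t1 = datetime.fromisoformat(events[end_event])
    return (t1 - t0).total_seconds() / 60

print(f"Time to open doors: {minutes_between('plane_at_gate', 'doors_open'):.1f} min")
print(f"Doors open to closed: {minutes_between('doors_open', 'doors_closed'):.1f} min")
print(f"Total turnaround: {minutes_between('plane_at_gate', 'pushback'):.1f} min")
```

The point of running inference at the edge here is that only these small timestamped events, not raw video, need to leave the site.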
And at the same time, a lot of times at the edge, they're very concerned about power, power consumption, and obviously cost. So that requires different thinking as well. How are you solving for that?

Yeah, so when it comes to power, luckily the new generation is much more power efficient than previous generations. And it's funny, because when I talk to customers, a lot of times they love to show me their old server that's been running for 15 years. And when I say they love to show it to me, they drag me in to see it, and they're like, look at it. And it's always at the bottom of the rack. Look at it, it's been running for 15 years. They're proud and excited, I'm proud and excited, but the truth is that server is no longer serving them the way that it should, because it's taking way too much power for the performance that it's delivering. So a lot of customers don't realize that, you know, if it's been a long time, you're probably not getting the most out of the server. Never mind the features and the security innovations that you're missing out on.

So everybody talks about, well, you know, generative AI is obviously the big buzz, and a lot of people say, well, we've been thinking about this, or actually doing this, for many, many years. It wasn't invented last November by OpenAI. Okay, that's cool, it's funny. What were the customer conversations like, however, before, sort of, the AI heard around the world? And have they changed, or has it always been sort of, you know, steady as she goes?

Yeah, so we've been doing AI for a really, really long time. We've been doing supercomputing for a really, really long time. I think the big difference is who's talking about AI now. It used to be a lot of, you know, universities talking about AI, and really big laboratories talking about AI. Now, enterprise customers are talking about AI. Small businesses are talking about AI.
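The performance-per-watt argument above, that a 15-year-old server costs more in power than its output justifies, can be made concrete with a back-of-the-envelope comparison. The figures below are illustrative assumptions for the sketch, not HPE benchmarks:

```python
# Illustrative only: compare an aging server with a current-generation one
# on performance per watt. Both numbers are made-up assumptions.
old_server = {"relative_perf": 1.0, "avg_watts": 500}   # 15-year-old box
new_server = {"relative_perf": 8.0, "avg_watts": 400}   # modern replacement

def perf_per_watt(server: dict) -> float:
    """Work delivered per watt consumed (higher is better)."""
    return server["relative_perf"] / server["avg_watts"]

improvement = perf_per_watt(new_server) / perf_per_watt(old_server)
print(f"Old: {perf_per_watt(old_server):.4f} perf/W")
print(f"New: {perf_per_watt(new_server):.4f} perf/W")
print(f"Improvement: {improvement:.1f}x performance per watt")
```

Even when the new box draws less absolute power, the ratio is what matters: under these assumed numbers the replacement delivers 10x the work per watt, which is the refresh argument in a single division.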
Everybody's trying to figure out how they can take advantage of it to be more efficient.

The other architectural question I have, if you could help us understand it, is the distributed nature of compute. You're building out this massive distributed network of systems. How is that different? What does that mean for the way in which you architect systems?

Yeah, it's a really good point. Everyone's trying to do more at the edge than ever before, as I mentioned, which means they need more compute power at the edge, but it also means they need to be able to manage it. So our Gen 11 servers come with an intuitive cloud operating experience. It's called HPE GreenLake for Compute Ops Management, and we call it COM for short. Basically, you can easily onboard thousands of devices, wherever they are, into one console and see everything. It's a game changer, and customers, especially customers with a lot of locations, are seeing the improvements when it comes to firmware updates. One customer said it used to take them four hours, and now it takes 45 minutes. So, amazing innovation, and we have to look at things differently, because we're no longer just managing the data center anymore.

How does that affect the security model, Krista?

Yeah, so security is a huge concern for customers. It's a huge concern for us. Actually, six years ago, right here at Discover, we introduced Silicon Root of Trust. We were the first ones to have this innovation, and we've been innovating ever since, and with ProLiant Gen 11 we expanded this to include protection for the cards that plug into the servers. So storage controllers and networking controllers that have SPDM are protected now as well. We also have our secure, trusted supply chain, and it's available in more use cases and it's available worldwide now. So we keep raising the bar when it comes to server security.

What's SPDM?
Oh, so it's a protocol that some of our partners are using so that their cards can tie in with our iLO 6, Silicon Root of Trust technology.

Excellent. I remember that announcement from a couple of years ago. You were the first.

Yes, we were.

So, coming back to energy consumption, we had a couple of sessions this week, and really the business case is starting to become more front and center. What is that conversation like with customers?

Yeah, with power consumption, we have customers that are trying to save on power because they've run out of power, or their power bill is too high, or they're trying to meet sustainability goals. So it's really all over the spectrum, and there are a few things, besides more power-efficient servers, that we're doing. One is the HPE GreenLake Sustainability Dashboard. So now you can see where you are when it comes to what's taking up power. It can help you manage and reduce your carbon footprint, and that's not just on HPE infrastructure, it's on infrastructure from others as well.

Yeah, so running out of power, that's not a good thing.

No, no, no, definitely not. A server runs out of power, that's a very bad day for everybody.

But I think that the sustainability message has been great this week, and I think it is key. Are you seeing that there are areas, like obviously in Europe where the price of electricity went through the roof, where they have to be more concerned about this type of thing? And is that accelerating refresh cycles and that type of thinking?

It is, and what we see is we have people on different ends of the spectrum. We have people that are very keenly aware of every single watt, all the way down to how many watts the fan uses.
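The math a sustainability dashboard like the one described above surfaces is essentially energy consumed times price and times a grid emissions factor. A minimal sketch with illustrative numbers; the power draw, tariff, and emissions factor are all assumptions here and vary widely by server and region:

```python
# Estimate annual energy cost and carbon footprint for one server.
# All inputs are illustrative assumptions, not HPE dashboard values.
avg_power_watts = 450          # assumed average draw
hours_per_year = 24 * 365
kwh_per_year = avg_power_watts / 1000 * hours_per_year

price_per_kwh = 0.30           # EUR/kWh, e.g. a high European tariff
grid_kg_co2_per_kwh = 0.4      # grid emissions factor; region-dependent

annual_cost = kwh_per_year * price_per_kwh
annual_co2_kg = kwh_per_year * grid_kg_co2_per_kwh

print(f"Energy: {kwh_per_year:.0f} kWh/year")
print(f"Cost:   {annual_cost:.0f} EUR/year")
print(f"Carbon: {annual_co2_kg / 1000:.2f} tonnes CO2e/year")
```

At these assumed numbers a single always-on server is roughly 3,900 kWh a year, which is why both the electricity-price spike in Europe and net-zero reporting push customers toward the refresh conversation.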
And then we have customers who know it's coming, but they're not getting as much pressure yet, and they want to be prepared. Because everybody understands that, whether it's your board of directors asking how you're going to reduce your carbon footprint, or, actually, we have a lot of customers whose own customers need them to prove that they're making progress toward net zero.

So the world's obviously been pretty crazy for the last few years. We've had this unbelievable ebb and flow and unpredictability. What does the current supply situation look like? Is it starting to moderate?

Oh, the supply situation is good. Luckily, we're in a whole different place than we were when the industry shortage was occurring last year. So we are back to pre-pandemic levels when it comes to supply.

Good. And I know there are all kinds of macro headwinds and everything else, but eventually you've got to, like you said, refresh. And I think, you know, it's almost like recession talk is self-fulfilling. Oh, it's going to be a recession, it's going to be the headlines, people watch CNBC, oh, it must be. So everybody taps the brakes. But it feels like things are calming down a little bit. You know, we heard Chair Powell yesterday. The market seems to be okay, and people get used to it. Consumer strength is there. And I think, you know, at some point, people say, all right, the world's going to be okay, we're going to move on. And it's funny, because I don't even think of this as a downturn. I feel like, okay, there have been a couple of speed bumps and economic headwinds, but it's not like, you know, the dot-com bust, not like that. We're okay, I feel like.

And I think, again, when you're talking about inference at the edge, and you're talking about how companies are really embracing Gen AI and other AI components, they're trying to figure out where the right thing fits.
Like you said, GreenLake for large language models up there, great. So, a supercomputer if I need that, if I need 100,000 cores, or, you know, 100,000 GPUs for a big model. But I think there's going to be that middle ground where people are, you know, running it and bringing the model back. Is that really a design focus of where the DL line, the ProLiant line, I guess you could say, is really aimed?

Yes, we're really aimed at inferencing. So you're exactly right. It's great, though, to have the full end-to-end solution all the way up and down. But when it comes to ProLiant, we're focused on inferencing. And I personally think that's where a lot of the action is going to be in the next 10 years.

And then, you know, we can't forget digital transformation. That was all the talk during the pandemic and coming out of the pandemic. I think it was Equinix that had its investor day yesterday, and they talked a lot about digital transformation, and it hasn't gone away just because everybody's talking about AI. In fact, AI is part of digital transformation, right? That automation, that AI, you know, doing new things that you haven't been able to do before. That is digital, right? So that's front and center still.

It is. And that's where AI can really fuel the digital transformation. So I think they kind of go hand in hand.

Yeah, good, please.

No, and I think that digital transformation is not just about LLMs and things like that. People still need to actually change how they do their business as well. And I think that's where, again, like you said, the edge means a lot of different things to a lot of different people, but being able to be as close as possible seems to make sense. Do you see people bringing, you know, inference to sensor data? Like you were talking about retail stores with the video, that makes a lot of sense as well.
Yeah, and we do see people trying to tie in the sensor data to get much better analytics in real time, and get a lot more eyes out there in all their different locations. So it's actually really exciting, all the different use cases for AI. And I think everybody's just trying to shop around for the right solutions for their use cases.

Yeah, everybody's trying to get it right. So what's next? What's ahead?

What's ahead? We're going to continue to focus on making sure our edge is optimized. We're going to make sure that we're continuing to take security up a notch. We need to make sure we're focused on that. And the great thing about HPE GreenLake for Compute Ops Management is that it's a cloud-based management tool, and we're updating it all the time with new functionality. For example, at the show, we tied OneView into COM. We've also included integrations with VMware vSphere and ServiceNow. So it's great, it just keeps getting better and better. It's a cloud-based management tool that we can continue to iterate on to provide new features.

Well, Krista, thanks for coming on theCUBE and sharing with us what's happening in compute. It's HPE's flagship business. I mean, it is. It's a good business, and it continues to get interesting. So really appreciate your time.

Oh, thanks for having me.

You're very welcome. All right, Rob Strechay and I will be back. After this, we're going to talk more open source, Rob. Again, we are bringing open source to this conversation here. We didn't hear a ton in the keynotes, but we've been talking all week about open source. That's where the innovation is. This is theCUBE's coverage of HPE Discover 2023 from Las Vegas. We'll be right back.