Good morning, everyone, and welcome to theCUBE's day two coverage of HPE Discover 2023 from the Venetian Expo Center. I'm Lisa Martin. I've got a great lineup of analysts here. Yesterday we brought you day one of theCUBE's 12th year of covering HPE Discover. John, Dave, Rob, we were here breaking down the news from yesterday, talking about HPE's strategies, its growth, its profitability, expanded partnerships. Today I've got three great folks talking about what's happening today and takeaways from the show. I'm joined by my co-host, co-founder and co-CEO of theCUBE, Dave Vellante. We also have CUBE analyst Rob Strechay joining us, and Bob O'Donnell, president and chief analyst at TECHnalysis Research. Welcome back. Thank you, happy to be here. Great to have you. Dave, I want to start with you. Some of the takeaways from this morning's keynote, the audience just rushed out, another standing-room-only morning. Yeah, well, so Fidelma Russo, who unfortunately is not coming on theCUBE, she came on last year, but we weren't able to get her this year, basically gave sort of an overview of what Antonio talked about yesterday and went under the covers a little bit. I thought the messaging was really good. My big takeaway is she said at the end, we are the only company that does edge to cloud in a single unified platform. And I'm like, okay, well, keep going. And then she said that delivers AI essentially as a service. And that is different from any other company that we've seen so far. Bob, you and I should talk about this, because we've been at Dell, we were at Cisco, and we're now at HPE. I think large language models as a service is a definite differentiator for these guys. I think they also had a very strong sustainability message, which all these big companies do. We saw it at Dell, we saw it at Cisco, we're seeing it here. It's part of their responsibility. You see it with all the big cloud guys.
They have that responsibility because they're consuming so much energy. Those are my two big takeaways, and I think we could get into it a lot more. We can. Bob, I was reading your newsletter yesterday, where you said that to the surprise of absolutely no one, one of the biggest announcements from Discover had to do with generative AI. But you also said what may catch folks off guard is the manner in which they're entering the gen AI market. What do you mean by that? So, two things. Number one, as Dave mentioned, this is a public cloud service offering generative AI. That's not necessarily what you would expect HPE to do. But more importantly, it's the manner in which they're offering it; it's the supercomputing stuff, right? They have that heritage. They bought Cray in 2019, I think it was. They've got this history of working with supercomputing, and the story they laid out now, and it remains to be seen, to be clear, how it works in the real world, but it was a very compelling discussion, because they said, look, the nature of the physical architecture of supercomputers is different. The speed of the interconnect into these GPUs, between the GPUs, they were claiming could be up to 16 times faster, which means the GPUs run all the time, which means they get things done more quickly. That theoretically translates to better performance, better price performance. So that's really interesting. They're also leveraging their AI software, or, excuse me, their supercomputer software. So to me, it's the combination of those things that's really interesting. The other thing they talked about from a supercomputer perspective was the reliability of getting those jobs done. Now, again, they made claims that with traditional architectures, some of these jobs only complete 15% of the time. And you've got to start over. And then you've got to start over.
Exactly, and they're saying, hey, on the supercomputer, you're going to get it done almost 100% of the time. I don't know if they actually claimed 100%, but I'm sure it's somewhere near there. And that's, again, a very different model. So real-world tests will be the ultimate benchmark for how that difference plays out. But it's an intriguing story, and it certainly differentiates them from everybody else out there. And that interconnect strategy is kind of interesting, because they have this thing called Slingshot. Of course, NVIDIA bought Mellanox, and they use InfiniBand. This is Ethernet-based, right? And I think, well, I think they're using both. I think they use both InfiniBand and Ethernet across the supercomputer infrastructure. And I think that they have a lot of the pieces. I think what was really interesting, when we were in the analyst fireside chat with Antonio, he talked about how this was actually going to bring new TAM to HPE. So they were looking at new personas, and the fact that they're out there running data scientist-based marketing groups and going and having these discussions. So I think they're definitely leaning into the LLM space. I think it's interesting: will they be only in their cloud, or will they take their software and go to other clouds and do a, not as performant, but hey, we have this intellectual property from Cray and now we can do a software-based cloud somewhere else? That's interesting. I don't think they're going to do that, at least initially. I think Antonio is going to say, I'm keeping my IP in my house, and see if they can make that work. It's a little IBM-ish in that regard. Hopefully they'll have more success than IBM does. And yet the other thing to remember, and this was kind of downplayed, but really what they developed was supercomputing as a service. But they didn't launch that. What they launched was an application sitting on top of supercomputing as a service. Not the IaaS itself, you see. Exactly.
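The restart cost the panel describes can be put in rough numbers. The 15% and near-100% completion figures are the panel's claims; the assumption that each attempt independently completes with probability p (so the number of attempts is geometric, with mean 1/p) is an illustrative simplification, not anything HPE stated:

```python
# Toy model of the restart cost discussed above: if a long-running job
# only completes a fraction p of the time before a failure forces a full
# restart, the expected number of attempts is 1/p (geometric model).

def expected_attempts(completion_rate: float) -> float:
    """Expected number of full runs until one attempt finishes."""
    if not 0.0 < completion_rate <= 1.0:
        raise ValueError("completion_rate must be in (0, 1]")
    return 1.0 / completion_rate

def expected_wall_time(job_hours: float, completion_rate: float) -> float:
    """Pessimistic wall time: every failed attempt burns the full job."""
    return job_hours * expected_attempts(completion_rate)

# A 100-hour training job that only completes 15% of the time costs
# about 667 hours in expectation; at 99% completion, about 101 hours.
print(round(expected_wall_time(100, 0.15)))  # -> 667
print(round(expected_wall_time(100, 0.99)))  # -> 101
```

Under this toy model, the claimed reliability gap alone is worth roughly a 6-7x difference in expected wall time, before any interconnect speedup.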
And it's the gen AI initially, but then they want to do life sciences, transportation, other things like that, which makes perfect sense. I mean, those are the traditional places we've seen supercomputing being used. So it's an interesting kind of strategy and approach to take with that. Yeah, I think that's it. It's a platform-as-a-service versus an IaaS play. And I think they even talked about the three use cases for the LLMs right now, and that Fiserv is coming on next. And they mentioned that as kind of one of those things. I just don't know how you can keep building out specific LLMs and get to the hundred million in incremental ARR that they're projecting they potentially could have out of that. Well, a lot of it's going to depend on people's ability to really use these things. I mean, one of the challenges all these companies are facing, and we were talking about what Dell did with NVIDIA, what Cisco's doing, and what these guys are doing as well, is that everybody is trying to figure out sort of the easy button for gen AI, right? In all the demos today in the day-two keynote, they're talking about this easy-button notion, but the real trick is, if I am an enterprise, I have this base of data: how do I actually import that data and actually train from that? How hard is that to do? What sort of skill sets do I need in-house to be able to do this? That's a huge question mark. The other question mark for me is this partnership with Aleph Alpha, I think I'm saying it right. I'd never heard of these people until today, until yesterday, and I'm sure a lot of people are like that. And I think there are going to be a lot of questions like, hey, why didn't you do ChatGPT, or why didn't you work with Google, or sort of a bigger name potentially? So it's going to be interesting to see how that works. Now, one nice thing that Aleph Alpha has, well, two things they have.
Number one, being German-based and European-focused, they immediately work with five languages, so that helps. The other thing is, their CEO talked about this briefly yesterday, that they are approaching explainability in a new way. Again, the details are a little vague to me of exactly how that's going to work, but I happen to be doing some research right now on gen AI use in the enterprise, and one of the big concerns companies have, of course, is the lack of explainability. So if in fact these guys have a solution that can really explain what the model's doing, how it's creating what it's creating, that could be pretty cool. So just over 20 hours ago, this gentleman named Matt Bornstein from Andreessen Horowitz published their version of the AI stack, the LLM stack, and they got a bunch of people like Ali Ghodsi and a number of other experts to sort of chime in. The reason I'm bringing this up is it's very developer-centric, and it's also very much ChatGPT-like, as opposed to what HPE is doing, which is very different, right? They're targeting very specific use cases and, like you say, the supercomputer piece. The other thing I wanted to bring up is kind of comparing the Dell, the Cisco, and now the HPE shows, because they definitely are cohorts. Hybrid by default versus hybrid by design here; we heard from Dell multi-cloud by default versus by design. So obviously HPE knew that Dell was messaging that. So there, my inference is, they feel that hybrid is a more powerful message than multi. So I don't know. Come on, Dave, it's a hybrid multi-cloud world. It's both, right? Yeah, but the fact that they would directly use that says, hey, screw that, we are going to go forward. Yeah, and I'm with you. I think the problem is everybody's got to figure out a different way to say the same thing, is what it boils down to, because the reality is it is a hybrid multi-cloud world, and people are all excited about gen AI.
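The "how do I actually import that data and train from that" question raised a moment ago usually starts with something mundane: splitting internal documents into overlapping chunks for retrieval or fine-tuning, the first step in the developer-centric LLM stacks being discussed. A minimal sketch, where the function name, chunk size, and overlap are illustrative assumptions, not any HPE or Aleph Alpha API:

```python
# Hypothetical first step of a "bring your own data" gen AI pipeline:
# split an enterprise document into overlapping word chunks so context
# isn't lost at chunk boundaries. All parameters are illustrative.

def chunk_document(text: str, max_words: int = 200, overlap: int = 20) -> list[str]:
    """Split text into chunks of up to max_words words, each sharing
    `overlap` words with the previous chunk."""
    words = text.split()
    step = max_words - overlap  # advance by this many words per chunk
    chunks = []
    for start in range(0, len(words), step):
        chunks.append(" ".join(words[start:start + max_words]))
        if start + max_words >= len(words):
            break  # last chunk already covers the tail of the document
    return chunks

doc = " ".join(f"w{i}" for i in range(500))
print(len(chunk_document(doc)))  # -> 3 chunks: words 0-199, 180-379, 360-499
```

Each chunk would then be embedded for retrieval or formatted into training examples; the hard parts the panel flags (skill sets, data quality, governance) start after this step.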
So it's like, how do we get hybrid multi-cloud and gen AI all together in one place and yet do a unique twist? But compare that to Cisco. You heard, like, the networking cloud, the observability cloud, or the collaboration cloud. So, a different business altogether. And it's because of the nature of those companies' histories, the sorts of products they've had in the past, how they're evolving, those sorts of things. All of those have a big impact on, obviously, the manner in which they're going to bring this stuff to market and the offerings that they have. Bringing it to market, enabling customers to solve challenges. I also caught that yesterday, Dave, that it was hybrid by accident. Right, right. Versus hybrid by design. Which I actually thought was very clever. Very clever, yes. And by the way, it's true. Yeah, right. It's kind of been a multi-vendor type of approach. Yes, but Fidelma was talking this morning about how they're going to actually make that a reality. How are they going to help customers go from the accidental chaos that they're in to a hybrid by design? Did you hear anything this morning that indicated there's really wood behind the arrow there? Bob, Rob? I mean, I'd go with the OpsRamp thing. I mean, that's been the drumbeat for the two days: OpsRamp is going to be your dashboard for hybrid, multi-cloud, supercloud types of installations. And I think that, again, we'll see where it goes, because then they say, okay, but our sustainability dashboard is over here, and our other dashboard's over here, and this is where you go to GreenLake. I think some rationalization of the dashboards has to happen at some point in time, but I think that, definitely, the OpsRamp thing has got to be the center of gravity. There's a nuance here, which is, if you think about Dell, they're going at it from a common storage layer, because they come at it from a position of strength in storage.
You look at Cisco, obviously, from a network standpoint. It's interesting to see HPE, which has a very strong server business, coming at it from, you know, Aruba Central as the glue that makes their GreenLake hybrid. But part of that is because, honestly, Aruba Central became the benchmark of the sort of UI and experience they wanted to offer. They recognized, hey, people like this, why don't we take some of our other tools, put them into this UI and platform that seems to work for people, and just make it a little bit easier. It has momentum, and it's interesting to see they put Tom Black, who's an Aruba guy, in charge of storage, and the new storage stuff, the Alletra MP, has a lot of Aruba IP in it. So you're seeing Antonio drive that commonality across the business, which is critical, because you've got to get storage margins up. They should be double where they are. And their AI/HPC business basically breaks even. So they've got to make that profitable. If they can, because their server business is good, even though it was down, but server cycles, right? Next year is probably going to be a good server year, right? And they don't have the PC thing that Dell has going on. That was great during COVID; it's not great now, right? So that's sort of a two-edged sword. But Aruba, obviously, an amazing business for these guys. Yeah, well, and again, I think the challenge, back to your original question, Lisa, is, look, this stuff is still hard, right? It's hard for people to do. And a lot of HPE's traditional customers are a little bit more conservative in their approach to IT. And so perhaps some have been a little bit slower moving to the cloud, right? So there are still questions about, how do I make the process of doing this easier? How do I leverage skill sets? I mean, one of the things I think is going to be interesting about gen AI, and this Luminous model from Aleph Alpha has this, is code generation, right?
So imagine the idea of being able to leverage a generative AI code generation tool to help modernize legacy apps. Now, there are a lot of questions, but that's one of the big problems companies are facing: they're having a hard time modernizing these apps. So if they can leverage that, that I think becomes pretty powerful. I think it's interesting because it's kind of, you know, the message around where Copilot is and things of that nature as well. All right. Waiting in the wings, Antonio Neri is here. You guys had a chance to sit down with him yesterday in the analyst session. Any last 30 seconds, Bob, Rob, any questions you would ask him if you were sitting here with Dave and me on the stage? Putting you on the spot. Quick one, go. I was going to say, I would say that, you know, it's really interesting, the 98% retention rate. What's in that retention rate, right? Is that NRR, or is that really retention rate? That was the one question I really came away from that with. And I wanted to ask him why they chose Aleph Alpha. I'm curious to get a little bit more into that. But they're coming on today, Bob. We will ask that; we will get back to you. All right, sounds good. Thank you, guys. All right, guys, thank you so much. Up next, you heard it: yesterday we teased Antonio Neri, the CEO and president. We're going to be talking all things GreenLake, the announcements from yesterday, the wood behind the arrow that we're seeing today. And we're going to ask some great questions and get some insight for you. Stick around, Antonio Neri coming up next.