From theCUBE Studios in Palo Alto and Boston, bringing you data-driven insights from theCUBE and ETR. This is Breaking Analysis with Dave Vellante. HPE's announcement of an AI cloud for large language models highlights a differentiated strategy that the company hopes will lead to sustained momentum in its high performance computing business. While we think HPE has some distinct advantages with respect to its supercomputing intellectual property, the public cloud players have a substantial lead in AI, with a point of view that generative AI is fully dependent on the cloud and its massive compute capabilities. The question is, can HPE bring unique capabilities and a focus to the table that will yield competitive advantage and, ultimately, profits in the space? Hello and welcome to this week's Wikibon CUBE Insights powered by ETR. In this Breaking Analysis, we unpack HPE's LLM-as-a-service announcement from the company's recent Discover conference. And we'll try to answer the question: is HPE's strategy a viable alternative to today's public and private cloud gen AI deployment models, or is it ultimately destined to be a niche player in the market? And to do so, we welcome to the program CUBE analyst Rob Strechay and Vice President and Principal Analyst at Constellation Research and friend of theCUBE, Andy Thurai. Gentlemen, hello. Andy, good to see you again. We saw you this week. Great to have you back. Good to be here. All right, let's start with what HPE announced. They entered the AI cloud market via an expansion of GreenLake, its as-a-service platform, offering large language models on demand in a multi-tenant service powered by HPE supercomputers. HPE is partnering with a German-based startup that none of us had ever heard of, at least I hadn't, called Aleph Alpha, a company specializing in large language models with explainability as one of the features. 
HPE believes this is critically important for its strategy of offering domain-specific AI applications. HPE's first offering is going to provide access to something called Luminous, a pre-trained LLM from Aleph Alpha, which will allow companies to leverage their own data to train and tune custom models using proprietary information while avoiding IP leakage. So let's start with you, Rob. What else can you add to what HPE is offering here? Yeah, I think what's interesting is that they're taking Cray to the next level and making it as a service, versus having to buy a supercomputer and bring it on premises. It makes the supercomputing infrastructure more accessible, and it takes advantage of the underlying software, which is interesting. But when you start to peel it back, they're still six months away from having it GA in North America, which they announced will come toward the end of the year, and in Europe you'll have to wait until next year. So I think they're playing catch-up in this space in a pretty big way right at the moment. Andy, in your view, how viable is this strategy? So, first of all, like Rob said, it's only an announcement now; it's not GA for at least another six months or so. So take it with a grain of salt. But one of the things they are suggesting could be compelling for large workloads. Right now the problem is, when you go to the public cloud, you have to set up the machine learning models, the LLMs, the training, the whole nine yards. There's a lot of data science and productionizing work involved before you can get the models into production. What they are suggesting is: give us your biggest possible workload, throw it at us, and we'll figure out how to run it efficiently. We'll have our GreenLake services, we'll have our supercomputer, powerful network, powerful hardware, powerful storage, powerful memory, powerful whatever; we'll figure it out. 
You don't have to fine-tune anything; you throw the workload at us, we'll make it train and run. And that's actually very good for high-volume, HPC-like workloads. So it's a good message, but it's not ready yet. And I need to see a whole lot of details. For example, they just sat in the panel and talked about machine learning operations as key, but I don't see any announcement saying how they're going to execute on it. So there are a lot of holes in a big announcement. Is it going to move the needle? I don't know. Yeah, the point being that the viability of the premise is simplification. So that's good, but now it's a matter of execution. Now, you kind of alluded to this, but this is not IaaS, right? Be specific: what are they actually delivering? They mentioned several workloads to come, like climate modeling. Right, well, they had three models that they're really approaching, which were climate, bio-life sciences, and healthcare. Those were the big three, and they alluded to the fact that they were going to have financial models as well. Which they should, of course? Yeah, they have to. I mean, without that, you don't get a lot of the non-governmental types. I think what they're focusing on is a lot of the workloads that Cray really does well with today. And to Andy's point, they're trying to simplify it by saying, hey, you can use our models out of the gate, or you can bring your own model. I did ask, as one of the follow-ups: great, so I can use Luminous, but what if I wanna go and use something from one of the others, like Anthropic, or what have you? And they said you can bring it. To your question, it's more platform as a service that they're providing versus infrastructure as a service. So to Andy's point again, you don't have to go in and plumb all the data together and stuff like that. Well, Andy, what about the partnership with Aleph Alpha? 
I mean, a lot of the analysts are like, why are you dealing with this little tiny company that's raised, I don't know, $20, $25 million, as opposed to working with one of the other firms that maybe is more mainstream or well-known? Of course, the cloud guys, many of the cloud guys anyway, are working with them. Of course, Microsoft with OpenAI, and you see guys like Hugging Face. But you know this space really well. What are your thoughts on that move by HPE to work with an upstart like Aleph Alpha? So it's not about who they are partnering with, right? I mean, of course, Aleph Alpha, nobody knew about them. So the goal is not about showing them off. The goal at the end of the day for HPE was to show, look, right now LLMs are the craze. I mean, so far, when you think of big AI workloads, it used to be HPC. Rob was mentioning that they were talking about climate and weather prediction, genome modeling, seismic analysis, Monte Carlo analysis, even computational fluid dynamics. All of the things HPE is known for and has been doing for a while. Now, LLM model training needs, the size of the models, the time it takes to train, have come up to the same level as some of the big HPC workloads. And everybody and their uncle, as everyone knows, is training an LLM model now. So their goal is to show that you could train your own private LLM fairly easily using their service without fine-tuning the knobs and everything. You throw it at us, like I said earlier, and we'll make it work. So in that sense, they have demonstrated the capability of training one large LLM. Hey, this is how we do it; this is how we did it. It's not any different than, remember, Databricks came out about three, four, five months ago and did the same thing. They showed you how to train an LLM. With Dolly, yeah. Yeah, and yeah, okay, this is how you do it. And their differentiation was that, you know, we don't need all kinds of parameters. 
They used their employees' data, a very small set, to fine-tune a big LLM using their own data. So everybody's doing a variation of it. So it doesn't matter what company; basically HPE wanted to demonstrate, we can train a big LLM using our system, which I think they have proven. But again, in my view, that still left a lot more questions for me to ask. Things like, you know, the HPC workloads are not the pure AI, ML, or even deep learning workloads. You know, there are neural network workloads, RNNs, CNNs, all of those things. They didn't exactly demonstrate, you know, how they'll work with that. In order to do that, you need to have a big ecosystem of all of those components. And what HPE has decided is, we don't want to get into the competitive market of creating the software and doing all of those things; with that, it becomes very complicated. So we're going to let you use some of the open source systems that are available. So they pretty much went all open source. Yeah. Whether it's, you know, Ray or others. And I would say that I did have Jonas Andrulis, the CEO of Aleph Alpha, on theCUBE; pretty impressive guy. He was really doubling down on explainability as the differentiation. You had something to add? Yeah, no, I think that's the thing. I think the one thing from the PaaS that they're offering, and I think even, I think Andy Selipsky or Adam, sorry, oh my God. Adam Selipsky. Yes. We're both wiped out here, right? Red-eye brain right now. But when you start to look at it, he was talking about the Trainiums and the new chips and how sustainable they are when they did their announcements a couple weeks back. I think the sustainability aspect of what they're doing with the cloud that they're building is pretty unique, in the fact that they're gonna help people hit their scope one, scope two, and scope three sustainability targets. 
I think for large companies in particular, at the top of the market, which is I think where they're aiming, that's gonna be pretty important, because I don't think Amazon's carbon footprint tool goes far enough. It doesn't talk about supply chain, which leaves out some of those sustainability metrics and things like that. But again, is it enough for them to win this market? I think it's enough to keep them in and get them enough revenue to build a business. Well, so Andy, to your earlier point, HPE's fundamental belief is that the worlds of high performance computing and AI are colliding in a way that will confer a competitive advantage to HPE. And indeed, HPE has a leadership position in high performance computing. As we're showing here, HPE has the number one and number three of the world's top five supercomputers, with Frontier and LUMI, both leveraging HPE's Slingshot interconnect, which it believes is a critical differentiator. We're gonna talk about that. It also believes that generative AI's unique workload characteristics favor HPE's supercomputing expertise. Here's how HPE's chief technology officer for AI, Dr. Eng Lim Goh, describes the difference between traditional cloud workloads and gen AI. Let's play the clip and come back and talk about it. The traditional cloud service model is where you have many, many workloads running on many compute servers, but with a large language model, you have one workload running on many compute servers. And therefore the scalability part is very different. This is where we bring in our supercomputing knowledge that we have had for decades, to be able to deal with this one big workload on many compute servers. Rob, obviously what Dr. Goh said makes sense, but the public cloud players have supercomputing services. So why, in your view, does HPE feel it has an advantage over the public cloud players? And do you think it does? I think they have a lot of heritage. 
Now, people who worked on these projects move around, but the Open Grid Forum, and before that the Global Grid Forum, where I actually ran a research group, was all of these guys. It was Cray, it was SGI, it was IBM, it was HPE. So when you start to look at it, they do have a heritage in doing these, I guess you could say, large applications that run across many servers, versus time-slicing servers for many applications. So they have this in their heritage, in their software. I think there is an advantage there for them. Is it an advantage over some of those other vendors that contributed back? I'm not 100% sure that it's there. But you also have these people who moved around; it was open source, and there were a lot of fundamental pieces that the cloud guys can go pick up and use as well. It doesn't mean that they can do it in the way that Cray has hardened it over the years for people like NASA, the DOE, and others that they've been serving for decades now. So I think there is some substantial intellectual property in the software, in that aspect of it, from a grid perspective. But Andy, to your point, you feel like it's really not mainstream; it's more niche. Now, whether or not it becomes more mainstream, or maybe that niche grows, remains to be seen, but I'm inferring from your comments that you feel as though it's a little far off from where you'd like to see the company's momentum. Is that a fair characterization, yes or no? Look, at the end of the day, HPE, as with many of the enterprise companies, they're all good storytellers, right? And if you listen to the story, they'll make you believe that they are the only one who has an HPC service, which is actually not true. There are, at least I can think of, about a dozen vendors, and about five of them are really good. For example, Amazon; they kept repeating that AWS doesn't have it, but if you look at it, Amazon's HPC service is not bad. 
They run on an Elastic Fabric Adapter powered by the Nitro system, right? They run low-latency, purpose-built HPC applications on it. So it's not exactly an apples-to-apples comparison they're making, but at the end of the day, they've figured out that they're lagging way behind. And look, at the end of the day, in order to do AI, to train models and any of those things, data is king. And the data is not with HPE right now. Which means all of those workloads, the innovation workloads, as I call them, as we talked about earlier, the AI workloads, are always, always, always going to go to the hyperscale clouds. You don't have the data to begin with, you don't have the ecosystem to begin with, you don't have the ease of use to begin with. What HPE does have is a humongous supercomputer with their Cray, solid, dense cores, and they have storage where you can put all the data. Combine that with the GreenLake data network, and that combination is what they're trying to market: you know what, we've got all of this stuff; bring your largest possible workload, we can do it better. Is that going to move the needle for them? I'm still not convinced yet. Well, so, and by the way, your point about data is interesting, because we heard Adam Selipsky on Bloomberg say 90%, and he's the same as Jassy here, 90% of the data is still on-prem. I don't believe that's true. I think the data is more like 45% in the cloud, not 10% in the cloud. Now, if you include the edge, i.e. telco, maybe you can get there, but even HPE all week was saying that 70% of the workloads are on-prem. Again, I don't believe it's that high. I think it's much more balanced than they're suggesting. Well, I think it's industry dependent as well. I think especially when you get toward the smaller, newer businesses born in the cloud in the last 10 years, of course it's going to be 90% in the cloud and 10% on-prem, or something of that nature. 
I think when you start to look at it, I was talking to a very large bank, one of the top five too-big-to-fail banks, and they still don't have their strategy nailed down. Is it going to be Snowflake or Databricks yet? They have no cloud databases. So when you start to look at how these large organizations are approaching this... I would also say that, because grid's been around forever, I was doing it at Manulife Financial, where you're using it to do actuarial tables. So this concept of doing big data on-prem and doing big data in the cloud is not that complicated, except for what we're all talking about, which is you've got to get the data there. And I think their data story, their data fabric with the Ezmeral stuff and some of the other things they're doing, kind of to Andy's point, kind of helps bridge that. I want to see it actually work, though. They didn't have a lot of information about how that works with the supercomputing workloads and bringing the data to the network and to that fabric in their cloud. But the simplification message resonates, and they did talk about how a lot of the jobs in the cloud fail and have to be rerun. Now, that's not necessarily anything fundamental to the cloud; it's just that it's your responsibility to make them work. So simplification is a good idea, and they do have high performance computing DNA, right? That is sort of their domain. But all right, let's move on. Let's take a look at HPE's lines of business. Can I make a quick comment on that? Yeah, please, go ahead. About the enterprise data. I actually challenged them in one of the panels they had, and asked the same question. Here's my view. The enterprise transactional data, which is predominantly structured data, is still in-house. But if anybody claims that the newer innovation data, the unstructured data, all the vision, audio, and all of this unstructured data, is predominantly on-prem, they're lying. 
Most of that is in the cloud, because... Either they're lying, Andy, or they just have flawed assumptions. But just look at the numbers. If the big four cloud players, if you include Alibaba, are going to do close to $200 billion this year, and you throw in the SaaS guys, where's the rest of it coming from? I mean, services? I'm just not buying it. Well, I look at that and I say, especially where they said, hey, we're starting with customer support and being able to AI-enable our customer support, I bet you their customer support docs are in the cloud. Nobody keeps those on-prem. But here's the irony. Both the public cloud player, AWS, and the private cloud guys, HPE and Dell, are saying the same thing and touting it as an advantage. I believe there's more of an equilibrium that's closer to 50-50 than anybody thinks. Anyway, let's move on. We're going to take a look at HPE's lines of business and how its AI and HPC business fits and how it performs. Remember, HPE purchased Cray in 2019, and Silicon Graphics in 2016, I think, a few years before that, to get into the HPC space. And looking at HPE's most recent quarter, you can see here how it reports its business segments. HPC and AI is a multi-billion-dollar business and it's growing, but essentially it's break-even. So not a great business from that standpoint. It brings bragging rights but not profits. Intelligent Edge, by the way, a.k.a. Aruba, is the shining star right now. It's got a five-plus-billion-dollar run rate and 27% operating profit. So margin-wise, it's their best business. It throws off nearly as much operating profit as HPE's really strong server business. So guys, I want you to listen to this clip from Antonio Neri talking about HPE's unique IP in this space relative to the public clouds, and get your reaction. Please play the clip. 
I mean, if you think about how public clouds are being architected, right, it's a traditional network architecture at massive scale, right, with leaf and spine, where generic or general-purpose workloads of sorts use that architecture to run workloads and connect to the data. When you go to this architecture, which is an AI-native architecture, the network is completely different. You mentioned Slingshot, right? That network runs and operates totally differently. Obviously, you need the network interface cards that connect with each GPU or CPU, and also a bunch of accelerators that come with it. And it's all about the silicon programmability with the congestion software management. And that's what Slingshot is all about. It takes many, many years to develop. But if you look at the public clouds today, generally speaking, they have not developed a network. They have been using, you know, companies like Arista, Cisco, or Juniper, and the like. We have that proprietary network, and so does NVIDIA, by the way, right? But ours actually opens up to multiple ecosystems, and we can support any of them. So it will take a lot of time and effort. And then also remember, you're now dealing with a whole different compute stack, which is direct liquid cooling, and that requires a whole different set of understanding. And the data center is very different as well. Okay, so lots to unpack there, guys. The network, the Slingshot interconnect, the data services, ecosystem, liquid cooling. Andy, what do you think? Is HPE naive about the capabilities of the public cloud players? Or is this HPE flipping the adage that Andy Jassy invokes, i.e. there's no compression algorithm for experience? Is he flipping that on the public cloud guys? Kind of both, right? So the number you showed there for HPC and AI, even though it looks very, very high, the actual breakdown, I can guarantee you, I asked them the question, they refused to answer it. 
I would say about 98%, probably 95 to 98%, is coming from the classic HPC workloads that they have now. Right? Yeah, you can see that, yeah. The rest of it, the pure AI workloads, including LLMs, I mean, they're just demonstrating how to use that. So will they be able to convince people to come and run an LLM workload on these servers? I very highly doubt that. One, you don't have an ecosystem and you don't have an MLOps practice in place, but more importantly, in order for you to get the models and train them, you've got to have some kind of repository partnership. Hugging Face is an example that they're not even considering. And AWS, that's why they're brilliant: AWS is taking a similar approach to HPE, but they are doing it in a little bit different way. You know what? The back end could be whatever, whoever; you bring the model from wherever, we'll fine-tune the models, we'll make them run. They are partnering with OctoML, they are partnering with Hugging Face. HPE is not doing any of that. Maybe eventually they will, but right now they're going again after the pure classic HPC workloads, trying to rebrand them as AI workloads, and claiming they're going to get all of that. Are people going to do that? I don't know. Again, like we talked about, it is not there. Well, but the HPC guys might do it, so maybe that's not such a bad strategy. The question is whether or not it's going to actually drive profitability. I mean, that's my big question. I love the bragging rights, but in HPE's business, first, storage has to be more profitable, and HPC slash AI has to be more profitable. Yeah, and I question the whole network-as-an-advantage thing. I think maybe they do have a little bit of an advantage right now, but everybody's buying parts from everybody else. 
I mean, they're buying Mellanox InfiniBand from NVIDIA, they're getting other stuff from other people beyond Slingshot, which is, I'd say, the Franken-Ethernet that they've built out; it's not really standard Ethernet. So yeah, Franken-Ethernet, right? Number one and number three; now, I know these leapfrog each other. Yeah, but I mean, again, it's an interconnect. I don't know that that's why they won those deals, right? I mean, you start to look at the number of AMD cores in both of those, and you start to look at all the other pieces that go into Oak Ridge National Labs, and you start to go, well, where's the DOE going to buy from? They're going to buy from an American company. They're not going to buy from Fujitsu. They're not buying from Fujitsu. So I think, again, if I'm one of the other American server manufacturers, I look at that and probably go, why aren't we up there? But at the same time, they do have the software layer, they do have the water cooling. I mean, how many times did I feel like I needed to go back to trade school and get a plumbing license so that I could be the plumber who runs a data center now? I start to look at this and go, okay, if water cooling is where everybody's going and this is what we're going to do, we're going to need a lot more plumbers in platform engineering. I think it makes sense from the sustainability perspective, but I'm also looking at it going: as the models become more efficient, is the water cooling going to be that big a thing? I don't know. And I think, again, it goes back to our earlier premise that it's still not GA, and there are still a lot of questions about, you know, what is sustainability really going to do? I think in Europe, sustainability will significantly help them. I don't think it's as big an advantage in North America as it is there. When you say water cooling, you're not disputing that water cooling will be necessary. 
You're saying, is it a differentiator? Is that what you're saying? I'm wondering if it's 100% necessary. It probably is to be as efficient as they want to be, but to be in this market and to generate revenue from AI and LLMs, is it necessary? And at mass scale, do I need that? Yeah, you're not going to have water cooling in your phone. Who knows, maybe you will someday. All right, I want to explore with you guys how to think about this announcement. In other words, does it have the potential to go mainstream, or is it destined for niche status? That's kind of one of the themes we're poking at today. Here's some ETR data asking organizations that are pursuing gen AI... these are folks that said, yes, we're pursuing gen AI. Actually, sorry, it's all respondents across the survey. Are you pursuing gen AI and LLMs, and what use cases are you evaluating or pursuing actively in production? And I misspoke at first; this is not just people pursuing it. 34% of the organizations say they're not evaluating, which is surprising to me. I bet you they actually are; they just don't know it. But the top use cases are what you'd expect: chatbots, generating code, writing marketing copy, summarizing text, et cetera. But HPE has a different point of view. They're focusing on very specific domains where companies have their own proprietary data. They want to train on that data, but don't want to incur the expense of acquiring and managing their own supercomputing infrastructure, or any GPU infrastructure for that matter. That's HPE's premise. At the same time, HPE believes that, because it has unique IP, it can be more reliable and cost-effective than the public cloud players while still offering the advantages of a public cloud. So Rob, is HPE onto something here, in that these mainstream use cases are not where the money is for HPE? 
In other words, they can leverage their supercomputing prowess. And is there gold in those hills with HPE's strategy, in your view? Yeah, I think exactly; they're gonna go to the edges, the edge cases that are more in their wheelhouse from an HPC perspective. And I think Andy was saying the same thing. Is there enough revenue there? Probably. They don't have to be as big as Amazon from a global coverage perspective to win and make this a profitable endeavor. I think what they can do is bring supercomputers to people who don't have the means, you know, people who are not Oak Ridge National Labs and gonna go buy 96,000 cores or something like that. So I think there is a middle ground where they could help people who can't get there, or who have tried to do it and failed on-prem with standard hardware and GPUs. Yeah. Andy, anything to add to this? Yeah, first of all, on your chart, the 34% not evaluating, they're just hallucinating. They don't know that their people are evaluating. Probably 100% of the people are evaluating. So that's... No doubt, right? I mean, you gotta be. How can you not be? So coming back to this, again, remember, I seem to be the only one who's making this differentiation among AI workloads. Nobody else is talking about it. There is a differentiation between innovation workloads and mature workloads. For innovation workloads, almost every single CXO I spoke to, and I spoke to many of them over the last year, none of them seem to care, because they are experimenting right now; sustainability, carbon footprint, cost, efficiency, none of them seem to matter. Can I experiment? Can I get the model working? Can I get it to go? That's their important criterion. I need to get going, like, now. That's why ChatGPT and other pre-trained LLMs enable you to retrain and go to market faster. 
If I'm in the innovation mode, training those models, I couldn't care less about sustainability, carbon footprint, and all of this crap. However, when the model matures, when I want to fine-tune it and start running it at full speed, the maturity comes in, and it's a different set of problems, including security, governance, ethics, explainability, sustainability, even liability. So that's the core market, if I'm not getting it wrong, that HPE wants to go after. I want you to train first, get the model right; when the workload matures, bring it to me, and I'll take care of all of these things for you without you having to go through multiple ML engineers. If they get that message right, if that works out, it could work out really well for them. And sustainability could come into play at that point. Until then, who cares? Okay. All right, I want to talk about a tale of two points of view. So, this is just kind of tongue in cheek, but I tweeted this out during the session when Matt Wood of AWS was on the main stage with Antonio Neri, and much to my surprise, Matt Wood said, well, in the fullness of time... he didn't say the fullness of time, but he basically said over time, and Amazon always talks about the fullness of time... over time, most of the workloads are going to go to the public cloud. He actually said that in front of HPE's audience. And then Antonio basically countered that with, yeah, the world's hybrid, dude, and it's going to be hybrid indefinitely. So I put this tweet out, and it reminded me of that scene in Bridesmaids where the two bridesmaids are dueling for the attention of the bride. But there's another line underneath this: the supercomputing workloads are different, and HPE has the expertise. We heard Adam Selipsky on Bloomberg basically say, and I'm practically quoting here, but I'm paraphrasing, LLMs are fully dependent on the public cloud and its massive compute capability. 
So, you know, in the end, in the movie Bridesmaids, I guess they were kind of both right. And there's probably a market for both. I think there's no question that there's a bigger market, as you guys have pointed out, in the public cloud, but HPE's got to go from its position of strength, which is supercomputing. Anything you guys would add? Yeah, I mean, I tend to agree that they have to play to their strengths. And exactly to what Andy was saying, you know, I don't think everything's gonna be decided in the next six months, till they get to GA. I think this is gonna have a long tail on it. There's time, to Andy's point, you know, the point he was making about, hey, train the models, then get to production. And when I get to production, I've got to worry about my scope one, scope two, scope three, and my science-based, you know, sustainability data; till then, let me play around. I need a place to play, and maybe the cloud is a good place for that. So let's take a look at some of the ETR data again. And Andy, I think this is your wheelhouse here. This data shows ML/AI spending and which companies are getting all the action. It shows net score, or spending momentum, on the vertical axis, and the horizontal axis is pervasiveness, or presence in the data set, again specifically for the ML/AI players. Right away, you can focus on the big three public cloud players: Microsoft, AWS, and Google. They're pervasive and they're all above that magic 40% red dotted line, which is an indicator of highly elevated momentum. Databricks also stands out. You guys are gonna both be at their conference next week. I'll be at Snowflake's. And as an aside, Andreessen just published a version of the LLM stack as they see it this week. Databricks' IP was all over it. Not a lot of Snowflake in there. You had a little bit of Streamlit, but I expect we're gonna see some announcements this week in that regard. 
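As an aside for readers unfamiliar with the net score metric mentioned above: it is, roughly, the percentage of survey respondents increasing spend on a platform minus the percentage decreasing or replacing it. Here's a minimal sketch of that calculation, with the caveat that the bucket names below are illustrative simplifications, not ETR's exact survey taxonomy:

```python
from collections import Counter

# Simplified response buckets (illustrative; ETR's actual taxonomy
# differs in detail): adoption and spend increases count as positive
# intent, spend decreases and replacements as negative.
POSITIVE = {"adopting", "increasing"}
NEGATIVE = {"decreasing", "replacing"}

def net_score(responses):
    """Percent of respondents spending more minus percent spending less."""
    counts = Counter(responses)
    pos = sum(counts[b] for b in POSITIVE)
    neg = sum(counts[b] for b in NEGATIVE)
    return 100.0 * (pos - neg) / len(responses)

# Example: 50 increasing, 10 adopting, 30 flat, 10 decreasing
responses = (["increasing"] * 50 + ["adopting"] * 10 +
             ["flat"] * 30 + ["decreasing"] * 10)
print(net_score(responses))  # 50.0
```

A vendor with that hypothetical profile would plot above the 40% dotted line, the elevated-momentum threshold referenced in the discussion, since 60% of respondents show positive spending intent against only 10% negative.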
So Snowflake has its own stack, let's face it. So anyway, Databricks is clearly a player in that mix. And I got a peek at the July ETR survey data, and it's not going to surprise you that OpenAI is setting new records, beyond even where we saw Snowflake at its peak Net Score, which was during the pandemic, up in the 80% range. You see OpenAI has rocketed up to the lead, and you're going to see in the ETR data soon that OpenAI has really gone mainstream in core IT shops, among IT decision makers. So it's no surprise that you don't see HPE in this mix, but I would say over time, Andy, if the company's aspirations are to come true, like Oracle and IBM, you would want to see them on this chart, don't you think? You would want to, but would they make it to the list? I don't know, because like I said, all of those companies, if you look at it, there is a commonality in there. They all talk about not only training large LLMs, they're also talking about retraining existing models, fine-tuning the models, and the whole nine yards. And HPE is taking a different approach. They are saying, bring the whole enchilada, the hugest models, we'll tackle it, right? So if that messaging works out well, they could become the center of gravity to train all of those things. But all of these other guys want to go after that market too. They will give you an option. You can take a model, whether from Aleph Alpha or from AI21 Labs, or even from existing Hugging Face models. You retrain, fine-tune, and then you work on it, or even bring your own data and do it. So again, at the end of the day, their core motivation is not, I want to make it work. It's, I want to sell my strengths. I have compute, I have networking, I have storage. I want to sell all of this to you. So bring the biggest possible model, I'll make it work with all of this stuff. So would they succeed? We'll talk in the next year or so, and then we'll see. 
You know, Rob, I did give HPE props for including LLMs in GreenLake. I didn't see that at Dell Tech World with APEX. Although, listening to you guys, I wonder, is it sort of a bespoke GreenLake? Is it like a separate GreenLake? Or is it actually, you know, GreenLake integrated into the console, part of that model? Well, I think we don't know yet, right? I think that's the big thing, is we don't know. Is it integrated into the console? Is it a separate console? Is it really on top of the Aruba Central stuff? Or is it a separate installation? I have a funny feeling it's gonna be separate to begin with, and then be brought in more over time. I don't think it's like AWS' or Azure's consoles or, you know, Google, where you can go in and pick from all the different services and just start them from one place. There'll be links and, you know, different areas to go to. I'm not surprised by the ETR data either. The only thing that would surprise me, and I don't know if this is just because people don't really trust Google as much from a data analysis standpoint and what they do with people's data, is that they're, you know, so distant and so low, down toward the 40% line. I expected them to be a little bit higher. So it'll be interesting to see in July where they end up. Same, yeah, but part of that is the bias that not as many people are using Google Cloud, but they are using BigQuery, and that's what the data showed. Microsoft is ubiquitous because of its software estate, and so is Amazon. But if you think about it, even though there are five countries in Europe that have banned Google Analytics right now, Google Analytics is the largest platform sitting on top of BigQuery for, you know, web data. 
So if you're going and doing intent and spend analysis and return on, you know, return on advertising, a lot of times you're going into Google Analytics and building models on top of that, and you're using the Google stack to go and do that. Yeah, so okay, so you don't trust Google? No. I trust Amazon. You trust Amazon? I trust Amazon. Yeah, I do. And then, I mean, Microsoft I trust, but they go down a lot, so it worries me. So I trust them for certain workloads. All right, let's wrap here. We'll bring up this last chart and some of the issues that we want to talk about. We think that the real competitive advantage, to the extent that HPE has one, and we think it does, is in the infrastructure software. I think the big takeaway from listening to you, Andy and Rob, is it's not the LLMs per se, you know, because you can bring those in. You know, like Amazon's got its own, and it'll bring in others, you know, beside Bedrock. But really, it's the infrastructure software within what they've built with Cray that could be the competitive advantage, right? Right, yeah, I think so. I think it's that bring-your-own-model concept, and the fact that the Cray grid technology has been there and tested over, you know, 20, 30 years now. Yeah, and so the next two points, Andy. Again, how many models will HPE bring to bear versus the cloud players? And you talked about HPE's AI ecosystem. Right now it's focused on HPC. You know, can they expand that? Your thoughts, Andy? Their ecosystem is very, very weak. Sorry to say that, but, you know, it's almost nonexistent, right? None of the model repositories, model sharing, or even the software stack. So how many models can they bring? I don't know. They've got to partner with somebody, like model producers, and crank up the models and put them out there so people can retrain. Otherwise, they have to force people to do it. 
However, like I said, the advantage that I see with HPE, if they get the messaging right, is that with cloud, the problem has always been, and that's why cloud is still very messy for deployment for a lot of people, fine-tuning it. You could, you know, get hit with the bill without you knowing it. So you've got to fine-tune it, watch it. Governance and FinOps is a pretty big thing. What HPE is trying to say, at least from my understanding, is, you know what? Don't worry about all of those things, man. Just bring the model. We'll help you train it. We'll get it to the best it can possibly be. Don't worry about it. We'll take care of it. We've got everything from soup to nuts to take care of that. That could be their one advantage. And the second advantage is, they don't talk about this a lot, but when it comes to AI models, training is only one part of it. Deploying and inferencing is the major, huge issue, particularly when it comes to smaller models. LLMs are all the craze now, but for the regular AI models, their edge and networking could be a huge play for HPE in this. Train in my core, and I'll help you push it out to the edge and do things with that. That could be huge, which they're not talking about. And sustainability, if that play comes to fruition sometime in the future, because nobody's talking about it now, that could be a good play. But the hurdles they have to go through, they don't have the data, and data is king. They've got to figure out how to convince people to move the data. That's going to be major. I agree. I think edge is going to be huge, but I think a lot of this stuff at the edge is going to be Arm-based, low power, very low cost. There's going to be tons of data doing that inferencing at the edge. All right. Last couple of points here that we want to bring up. Can the business be profitable? I mean, that's ultimately, to me, what this is all about. And then, what about quantum, Rob? You brought that up as well. 
And you might have some thoughts on that. Yeah. I mean, the fact that quantum really wasn't mentioned at all, in any interview, in any analyst session this week, that was kind of shocking to me. Given that supercomputers could really go away when quantum gets there. And quantum as a service, IBM is pushing big in that space, and you have others in that space already. It seems like they're going to have to play catch-up yet again in the quantum space. And maybe they're already doing it behind the scenes with Cray, and they're already down that path, I just haven't seen it. And I think that would worry me, because I think that changes the game longer term for this. Well, it's interesting. Andy, I think you were at IBM Think. I wasn't there, but I'm sure they were talking about quantum. Cisco talked quantum. AWS talks quantum. Yeah, nothing at HPE Discover. Andy, you get the last word. Quantum is not ready for the real world yet. They're all talking, they're wasting their time. It's as simple as that. So you think it's smart that they didn't talk about it? I think so, because they have to worry about catching up with all these other guys. What's the point of talking about quantum, which is possibly five years away, when we're talking about something that's six months away? Well, in fact, John Furrier said he wished that Cisco didn't talk about quantum, for that very reason. All right, guys, we've got to wrap. I want to thank Rob Strechay and Andy Thurai. Thank you guys for coming on today. Great discussion, and to be continued, no doubt. All right, I also want to thank Alex Myerson, who's on production and manages the podcast, and Ken Schiffman as well in our East Coast office. Kristen Martin and Cheryl Knight help get the word out on social media and in our newsletters, and Rob Hof is our editor-in-chief over at siliconangle.com. He does some great work. Thanks to everybody. 
Remember, all these episodes are available as podcasts. Wherever you listen, just search Breaking Analysis podcast. We publish each week on wikibon.com and siliconangle.com. You can email me at david.vellante@siliconangle.com, DM me @dvellante, or comment on our LinkedIn posts, where we post every week. In fact, Rob, ARInsights just reclassified Breaking Analysis not as a blog, but as real research. We cracked the ARInsights top 100. I didn't even know it existed a month ago. You and Andy were listed pretty high up there in the top 100, so congratulations to both of you. Congratulations, Andy. Like I said, I didn't even know about this list a month ago. Also check out etr.ai for the best survey data in the enterprise tech business. This is Dave Vellante for theCUBE Insights, powered by ETR. Thanks for watching, and we'll see you next time on Breaking Analysis.