Good afternoon, guys and gals. Welcome back to theCUBE, the leader in live tech coverage. We are covering day two of HPE Discover from Las Vegas. Toasty Las Vegas. Lisa Martin with Dave Vellante. Dave, some of the big news yesterday was all about GreenLake. GreenLake for LLMs is a big topic, and HPE is pioneering that with a partner that we get to talk to next.

Yeah, well, of course the AI heard around the world, as we like to say sometimes, has changed everybody's thinking on this. So, you know, HPE rightly is leaning in.

They are absolutely leaning in. We've got one of our alumni back: Dr. Eng Lim Goh, SVP and Chief Technology Officer for AI at HPE. Great to see you. And Jonas Andrulis, CEO of Aleph Alpha, joins us. Guys, welcome to theCUBE.

Hello.

You must be so popular since the announcement came out yesterday. Jonas, talk to us a little bit about Aleph Alpha. Founded in 2019, and I understand this is your third AI startup. Talk to us about that and how you're working together to really pioneer what HPE is going to deliver.

Yeah, so we started this in 2019. This was even before GPT-3 was out, and we were building language models back then. Of course, the sizes have changed; scale has massively driven a big part of this innovation, but we drove a significant part of the innovation in that field. We invented multimodality and we open-sourced that, so it was coming out of our lab first. And we have now built something for explainability that, for every claim these models are creating, can give you a positive and a negative trace. So you don't just get some link, some URL; you can actually drill into the documents and sources of evidence that confirm or contradict certain claims. And we found that this is really what is required for humans to take responsibility.

Yeah. So when people talk about guardrails.
This is what they're talking about, right? It is specifically designed for things like explainability, or maybe to ask more questions if the AI doesn't know the answer, is that it? Or if it mixes things up sometimes.

Yeah. So you have to be assured that it is not doing that. So that's the intent. You know, we talk about hallucinations; it's to minimize that.

Well, yes, hallucinations, but also, when you look at the most valuable use cases, those are never as simple as right or wrong. When you think about lawyers or medical professionals, it's not like they're asking a question and you could say the answer is clearly wrong or right. That is maybe fine if you're writing your homework with some chatbot, but if you're trying to figure out how to orchestrate different perspectives, how to align this with strategy, how to take responsibility, you need something more than a chatbot that gives you a reply that's either wrong or right. So that's what we're focusing on: highly regulated industries, really complex workloads that rely on proprietary data, and we want to make this as sovereign as possible.

Dr. Goh, talk a little bit about why Aleph Alpha was chosen as the pioneer for this program making news from HPE. What was it about the technology, the minds behind it, that made HPE decide this is the right way to launch this?

No, it's a great question, and it goes way back. Even in September of last year, way before some of the other chatbots came to the market, I was already engaging with them, really working on their models. And even back then, their models were already accepting images, so very advanced. And the fact that there is the ability to deal with highly regulated industries, with explainability, was important. Finance, healthcare, all of these have strong regulatory requirements.
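The positive and negative evidence traces Jonas describes can be pictured with a toy sketch. Everything here, the function names, the word-overlap scoring, and the negation heuristic, is invented for illustration; Aleph Alpha's actual explainability method is far more sophisticated than keyword matching.

```python
# Toy sketch of claim-level explainability: for each generated claim,
# surface evidence that supports it (a positive trace) and evidence
# that contradicts it (a negative trace). The scoring heuristic is
# illustrative only, not Aleph Alpha's actual method.

def word_overlap(claim: str, passage: str) -> float:
    """Fraction of the claim's words that appear in the passage."""
    claim_words = set(claim.lower().split())
    passage_words = set(passage.lower().split())
    return len(claim_words & passage_words) / max(len(claim_words), 1)

def trace(claim: str, sources: dict[str, str], threshold: float = 0.5):
    """Split sources into supporting and contradicting evidence.

    A passage counts as contradicting when it overlaps the claim but
    contains an explicit negation; a real system would use a trained
    entailment model instead of this keyword heuristic.
    """
    positive, negative = [], []
    negations = {"not", "no", "never"}
    for doc_id, passage in sources.items():
        score = word_overlap(claim, passage)
        if score < threshold:
            continue  # passage is not about this claim
        if negations & set(passage.lower().split()):
            negative.append((doc_id, round(score, 2)))
        else:
            positive.append((doc_id, round(score, 2)))
    return {"positive": positive, "negative": negative}

claim = "the contract renews automatically each year"
sources = {
    "contract.pdf#p3": "the contract renews automatically each year unless cancelled",
    "amendment.pdf#p1": "the contract does not renew automatically each year",
    "memo.txt": "lunch is at noon",
}
evidence = trace(claim, sources)
```

The point of the interface, per the interview, is that each claim comes back with drill-down evidence in both directions, so a professional can confirm or reject it rather than trust a bare answer.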
And then what we do in addition to that is build tools in front of it that Aleph Alpha is using, especially in the regulated industries. For example, when a regulator comes in and asks for an audit, Aleph Alpha has explainability. But sometimes they also want to ask: what data did you feed that model with? Because that can influence the model. So we've actually acquired tools like Pachyderm, which we've renamed as an HPE product, that keep a database of what you've trained the model with, so that you can show it during an audit, right? And in fact, you are using Pachyderm too.

Yeah. I mean, this is the other part of the coin: why is the partnership great for us? Because it's an ecosystem that we're fitting into. It's not just the metal. It is great hardware that we're running on, but it's also the whole ecosystem, with data governance, with MLDE that we're using to run our experiments and our research. So I think for us it's really a great fit, because it aligns very well with our values and what we want to bring to the customer. Both companies are very much focused on bringing engineering excellence, in a sovereign way, to the customer. So that's, from our side, why I'm really excited about the partnership.

And I must say, Pachyderm has now been renamed as an HPE product called Machine Learning Data Management, right? To work complementarily with their model. And it is very different when we build a cloud service for a large language model. The traditional cloud service model is where you have many, many workloads running on many compute servers. But with a large language model, you have one workload running on many compute servers, and therefore the scalability part is very different. This is where we bring in the supercomputing knowledge that we have built over decades, to be able to deal with this one big workload on many compute servers.
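The audit idea behind a data-management layer like the one Dr. Goh describes can be reduced to a minimal sketch: record exactly which data versions fed each training run, so a regulator's "what did you train this model on?" has a checkable answer. The class and method names below are invented for illustration; this is not HPE Machine Learning Data Management's actual API.

```python
# Minimal provenance-ledger sketch: each training run records the
# datasets it consumed, keyed by a content hash, so an auditor can
# later reconstruct what the model was trained on. Illustrative only.
import hashlib

class ProvenanceLedger:
    def __init__(self):
        # run_id -> list of (dataset_name, content_hash)
        self.runs = {}

    def record(self, run_id: str, dataset_name: str, content: bytes) -> str:
        """Log a dataset version used by a training run; return its hash."""
        digest = hashlib.sha256(content).hexdigest()
        self.runs.setdefault(run_id, []).append((dataset_name, digest))
        return digest

    def audit(self, run_id: str):
        """Return the recorded list of datasets behind a training run."""
        return list(self.runs.get(run_id, []))

ledger = ProvenanceLedger()
ledger.record("run-042", "contracts-v1", b"clause text ...")
ledger.record("run-042", "filings-v7", b"10-K excerpts ...")
datasets = [name for name, _ in ledger.audit("run-042")]
```

The content hash is what makes the record useful in an audit: if the stored data still hashes to the same digest, the auditor knows the training inputs were not silently changed after the fact.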
So as a developer, you had to think about that and program to what Dr. Goh just described, I presume, right? Was that part of the breakthroughs that you made in the last several years, taking advantage of that architecture?

Well, yeah, absolutely. We've built something that, on the research side and on the engineering side, is quite unique, fits well into these ecosystems, and allows customers, for example, to choose different execution environments. So let's say you're in a highly regulated industry with very compliance-sensitive data, and you have certain workloads that you want to never leave your premises, but you've got others, like customer support, that you want to run on GreenLake. So you can do this. You can combine all of this with the different sizes of our models and different customizations, and you can choose execution environments, and it all fits well together. So yeah, we're definitely thinking about this. The way I think about it is not just, hey, those guys trained one of the best LLMs and it's also multimodal; I think about how we can bring the power of this technology to the world's best enterprises in the best way to support them. And that's where the partnership comes in.

It's funny, the analysts were curious as to why HPE would choose a small startup. I'm not surprised at all; I'm actually happy, because that's where all the innovation happens. I'd say it's about time. It's good.

Yeah, they are a leader in innovation, right? And they are focused on regulated industries and on explainability. All of these are key factors for many of our customers, our enterprise customers.
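The per-workload choice of execution environment that Jonas describes amounts to a placement policy: classify each workload by data sensitivity, then route it on-premises or to a managed cloud service. The policy table and labels below are invented for illustration, not an actual Aleph Alpha or GreenLake configuration.

```python
# Sketch of routing workloads to execution environments by data
# sensitivity. Labels and the policy mapping are hypothetical.

POLICY = {
    "restricted": "on-prem",   # data that must never leave the premises
    "internal": "on-prem",
    "general": "greenlake",    # e.g. customer-support workloads
}

def place(workload: str, sensitivity: str) -> str:
    """Decide where a workload runs based on its sensitivity label."""
    target = POLICY.get(sensitivity)
    if target is None:
        raise ValueError(f"unknown sensitivity level: {sensitivity}")
    return f"{workload} -> {target}"

placements = [
    place("contract-analysis", "restricted"),
    place("customer-support", "general"),
]
```

The same model family can then back both placements, with size and customization varying per environment, which is the combination Jonas points to.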
I know there's a lot of questions we want to get to, like the AI ethics and governance websites, but before we do: when we have an AI expert, or two AI experts, on, we like to ask them to take us back. Even mid last decade, the breakthroughs weren't there. And then it seems like scale became better understood, and you mentioned vision as well, and images. How important were those? Can you help us understand the breakthroughs that you were able to make as researchers and as an industry?

Go ahead.

I don't have to start? So, what drove this innovation, from my perspective, is the realization that by building on language, we're not just learning speech patterns and grammar; we're actually learning the result of human intelligence. In language there are different levels of abstraction and complexity in how you want to model it. The simplest level is things like "good morning" or "once upon a time." That's easy. But if you make these models bigger and bigger, they get deeper and deeper, and so they're able to understand the conceptual structure in that data. And this seems to carry some of the results of our intelligence. So I think this is one of the keys.

So I'm interested in how important scale was in that regard. And also vision. Was vision more important, and what impact did that have?

Well, I came from Silicon Graphics, right? Vision, right? And HP acquired Silicon Graphics and I came with it. Back then, in '96, we actually started working on neural networks. It's just that we didn't have the scale of compute power to deal with it, or the ability to get all this data into it. So then I kept quiet about AI until recently. So 26 years ago we were working on this, right? So it is the scale that made the difference.
I like to always say, we humans probably will have come across the equivalent of 1,000 books of text in our lifetime. A neural network model like this, a large language model, has probably seen the equivalent of 10 million books. So these things have seen so many word connections, in terms of word vectors and embeddings, that when you are conversing with one, it may come up with a string of words that you have never come across before, because it has seen 10 million books while you've only seen 1,000. So that's the key; scale is the key. The only problem is that even though it has come across 10 million books, it has not gone through the meaning of each word like a dictionary. It has only seen words in relationship with other words, yeah? While we humans go through the dictionary. That's part of the reason why, once in a while, it makes things up. And that's the reason why we need Jonas's model.

And how important was the ability to ingest images and analyze images? For instance, was that more important than speech?

It started out with images. Before we had all these large language models, there were more narrow types of neural networks called CNNs, convolutional neural networks, and they were built to deal with images. In fact, the worldwide competition then was to see who could recognize images best. There was a huge competition ongoing, and they kept reducing the error rates until the models were better than human recognition of images.

And was that a major breakthrough in terms of the accuracy and the validity of the models?

So, my last startup was doing a lot in vision. What I like about today's vision and multimodality is that we're not limiting our understanding of visual or multimodal data to predefined classes. A few years ago, we were looking for pedestrians, or...
Cats. Cats in pictures, exactly, right? So we had a very predefined set of things we were looking for, and then we were training the AI systems to perform well on these objects. And that's fine and all if you just want a self-driving car that's not going to run into pedestrians. But even with self-driving cars, we found that this covers 99% of all the observations; the remaining 1% are things that we cannot easily put into rules. Now we have found a way to combine the continuous space of images with the symbolic space that has reasoning capabilities, or at least the reasoning capabilities of a stochastic parrot. We're able to combine these two worlds in one embedding. There's recent work from our team showing that our model is multilingual and multimodal and has a shared embedding. So if you express the same thing in different languages, or express it in words or in images, you actually get a very similar understanding from the AI system.

So the fusion of the vision and the textual, right? Integrating the two together is actually the big breakthrough. Do you consider it a learning system? The critics of today's AI, and generative AI, say, well, they're not learning systems; they're basically a database with search and some natural language processing. Not my words, but this is what some of the critics say. But it feels like it's much more than that. Do you consider it a learning system?

Yes, it's a lot more. In fact, what it does is this, right? You have a network with lots of connections, and then you start feeding it with tons and tons of text in a string. And all it does is watch, for each word, how often the other words occur in that string. So it builds a network of connections. Say, for example, between the words deer and bear: it makes a strong connection, because every time the word deer occurs, bear also occurs nearby.
But deer with turnip probably occurs less, so the connection is weaker. That's how they build it. With 50,000 words, it builds connections between each word and all the other 50,000 words. And this is how it gains the ability to predict that next word, right?

Remarkable. So the scale is massive. One of the things we talked with Antonio Neri about this morning, and we heard about it yesterday, and we talk about this all the time, is sustainability. Sustainable AI. Can you talk about what HPE is doing from a GreenLake perspective with Aleph Alpha that is enabling organizations to harness the power of these LLMs in a sustainable way, so that they can really meet their objectives?

Yeah, this is a huge question, right? Given the ESG requirements and the science-based targets that many of these companies have today. And these models are huge. Some models are 100 billion connections; some models are a trillion connections. Every time it reads a word or comes up with a word, a trillion connections get fired up. That's consuming a lot of energy. So that's part of the reason why we on the HPE side do two things. One, we build systems that are the most energy efficient we can make them, and we have that heritage from the supercomputing world. Imagine a big system drawing 20 megawatts: if 10% of that is wasted through inefficiency, that's two megawatts of power wasted, right? So for all these years, for decades, we have been working on energy-efficient hardware systems. That's number one. And number two, we also build, in the HPE GreenLake layer, a sustainability dashboard, so that as you are running the model you also know how much you are consuming. So perhaps as you are tuning your model, you also watch how best to tune it for efficiency, too. And also all the software layers below that. So efficiency is a big-ticket item.
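The co-occurrence picture Dr. Goh sketches, a strong deer-bear connection and a weak deer-turnip one driving next-word prediction, can be toy-modeled in a few lines. This is only an illustration of the idea: real LLMs learn dense vector embeddings and attention weights, not raw neighbour counts.

```python
# Toy co-occurrence model: count how often words appear near each other
# in a text stream, then "predict" a next word from the strongest
# connection. Illustrates Dr. Goh's description, nothing more.
from collections import Counter, defaultdict

def build_connections(tokens: list[str], window: int = 2):
    """For each word, count every neighbour within +/- window positions."""
    connections = defaultdict(Counter)
    for i, word in enumerate(tokens):
        lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
        for j in range(lo, hi):
            if j != i:
                connections[word][tokens[j]] += 1
    return connections

def predict_next(connections, word: str) -> str:
    """The strongest connection wins, like picking the likeliest next word."""
    neighbours = connections.get(word)
    return neighbours.most_common(1)[0][0] if neighbours else ""

# In this tiny corpus, deer and bear always occur together, so the
# deer->bear connection dominates.
corpus = "deer bear deer bear deer bear".split()
connections = build_connections(corpus)
guess = predict_next(connections, "deer")
```

Scaling the same principle from a six-word corpus to the equivalent of 10 million books, and from counts to learned embeddings, is what gives a large language model its fluency, along with the failure mode Dr. Goh notes: it has only ever seen words in relation to other words, never their dictionary meanings.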
That's why, when we announced HPE GreenLake for large language models, energy efficiency was a big factor: productivity, and also efficiently splitting that big workload across many servers.

Last question, guys, as we're running out of time here. I would love to know some of the feedback you've gotten in the last, what, 24, 26 hours since the announcement came out. What are some of those highly regulated industry customers coming to you with? Comments, questions, "how can I get started"?

Yeah, there are in general two different groups of customers, right? In highly regulated industries like finance and healthcare, there is a group that is advanced. They have already investigated this. They may have a small group of data scientists of their own who have done some investigation, and they are ready to go. But they realize that when they deploy this at scale, they don't want to deal with all the complexities: energy efficiency, getting the systems together, putting all those tools together. And even if they did that, how do they run it operationally at scale? So this is where we have that discussion about HPE GreenLake for large language models, for example, or HPE GreenLake for AI. But then there is another group of customers who are very early. They have a desire, and they believe that this is the right thing to do; they have to be competitive and they need to do this, to build an AI model for their specific purposes and be accepted by the regulators in their industries. And this is where we come in to go along with them on their journey. Different from the other group, who are ready to deploy, here we start early with them, work with them, and go along with them on their journey towards production.

Sounds like, oh, go ahead.

I was just going to say, so that means that you've got an AI system that is a trusted partner, if you will, right?
That's really what you're delivering within these highly regulated industries, and that's what people really want, right? Because it's fun when it's making stuff up, but you can't trust it.

Yes, well, the consumer is one thing, but for enterprises, it's a different thing.

No, exactly. And it's already integrated, right? So it's more than just creating some text. You need auditability. You need access control and identity management. You need to make sure that only the information that you want a certain user to have access to is being used. You need to make sure that the results are reproducible, that you can always go back and look at why this was the result. Maybe it was based on some outdated data that was still in the knowledge base, right? You need to make this transparent and accessible. And I think this is really where we have a strong advantage, where we can combine everything into a scalable, turnkey-ready solution. And to your point, the second group of customers understand that strategically this matters. It's about their sovereignty; it's about their value capture a few years down the line. But they're struggling with limited resources and what to focus on now. Because this technology can do almost everything. We have customers building business processes and accessing databases with it. You can do so much. And then for a single enterprise, the question is: what is the thing we need to do right now, and what is the thing we need to get on its way so we can do it next year, and the year after? And there's the journey.

Yeah, there's the journey. And it sounds like customers in every industry have great partners in HPE, with what you're doing with GreenLake and what you've announced with Aleph Alpha, guys. This is so exciting. We could spend way more time; I think we're just scratching the surface. This is fascinating.
Thank you very much for sharing your insights, and congratulations on the partnership.

Thanks a lot.

On theCUBE, we appreciate it. All right, you heard it here. You're going to be checking out Aleph Alpha, I know it. We want to thank our guests so much. For Dave Vellante, I'm Lisa Martin. Stick around. Great Cube content coming up next.