Hey everyone, welcome back. It's theCUBE Live at the Venetian Expo, day two of our coverage of HPE Discover 2023. Lisa Martin with Dave Vellante. Dave, we're going to be talking with Intel and HPE next about modernization of workloads. Interesting topic.

Yeah, everybody wants to modernize. Of course, why would you want to be old fashioned, right?

Exactly, you don't want to be old fashioned. We've got two guests, both alumni, back with us. Please welcome back Janet George, corporate vice president and general manager, Cloud, Enterprise, Security, and Solutions Group, Data Center and AI, at Intel. Great to have you back.

Thank you.

Your title is longer than when I last saw you.

Thank you.

Phil Cutrone is back as well, Telco, OEM, and service providers at HPE. Guys, great to have you.

Great to be here.

So here we are, day two. Lots of news yesterday. It was like t-shirt guns at a basketball game, content and news coming out. Janet, talk with us about some of the things that you're seeing in the market from new workloads, from modernization. What are some of the major needs out there that customers are having?

So for the past year, we've seen workload growth in AI. We've seen workloads growing at the edge. We've seen a great deal of media workloads, and we've seen a great deal of workloads around compute, if you will: compute infrastructure, faster compute, faster performance, and so on.

How does that relate to, we talked a lot about this yesterday, and Antonio did today as well on theCUBE, with HPE saying customers are in a hybrid cloud by accident and we want to get them to hybrid cloud by design? The workload migration and the movement that you're seeing to the edge, how do they integrate?

Yeah, people are very intentional about where they want to be. It's not like they accidentally want to be on-prem or in the public cloud. They're very intentional about the use cases that they want to be in the continuum. And I call it a continuum because the cloud, the edge, multiple clouds, all of these things come together when you think about what infrastructure you want to build upon.

And so within that continuum, we've seen growth happen a lot at the edge. And the use cases at the edge are quite different from the use cases we see at the cloud, because there are things that matter at the edge that are different from what matters at the cloud. So for example, at the edge, you see a lot of criticality use cases, like MRI. There's not enough time to go analyze the MRI back in the cloud and come back. The other use case we see is around latency, and this is around closed-circuit TVs, where you're analyzing the data within the TV, and you're looking at it and trying to figure out, can you predict the crime right at the spot where it's happening. And then there's the case of large data. Maybe it's so much data that you cannot send the data over to the cloud. Sensors are creating so much data, edge computing is creating so much data, and you don't have the ability and the bandwidth to send that data to the cloud. You might even think about fog computing, which is an intermediate layer between the edge and the cloud. You do your computing in the fog layer, and then you migrate these edge use cases, or sort of address the edge use cases.
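To make the trade-off Janet describes concrete, here is a minimal sketch of placement logic on the edge-fog-cloud continuum, routing a job by criticality, latency budget, and data volume. All names and thresholds are illustrative assumptions, not an Intel or HPE product.

```python
from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    latency_budget_ms: float  # how long the caller can wait for a result
    data_size_gb: float       # payload that would have to move per job
    safety_critical: bool     # e.g., an MRI read that cannot wait on a round trip

# Illustrative link characteristics; real values depend on the site and the SLA.
UPLINK_GBPS = 1.0
CLOUD_RTT_MS = 80.0

def place(w: Workload) -> str:
    """Pick a tier on the edge-fog-cloud continuum for one workload."""
    if w.safety_critical or w.latency_budget_ms < CLOUD_RTT_MS:
        return "edge"   # no time for a round trip to the cloud
    transfer_ms = w.data_size_gb * 8 / UPLINK_GBPS * 1000
    if transfer_ms > w.latency_budget_ms:
        return "fog"    # too much data to backhaul; aggregate at an intermediate layer
    return "cloud"      # latency and bandwidth both allow centralizing

for w in [Workload("mri-triage", 50, 0.5, True),
          Workload("cctv-analytics", 200, 4.0, False),
          Workload("nightly-training", 86_400_000, 500.0, False)]:
    print(w.name, "->", place(w))
```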
So let me add to that, though. Yesterday I was talking with a customer, and they're public now: Freddie Mac. So when we think edge, I want to also think about, we used to say it's anything that's not the data center. In Freddie Mac's case, they lifted and shifted everything to a public cloud, because that was the contemporary thing to do. And then they realized they had some performance degradation and they had cost overruns. So they decided to reanalyze every application and set of data that was sitting in the public cloud, and they came out and said, well, we're going to go back to the hybrid model. So they repatriated some of their data and apps into a colocation. So now you can start to push that, if you will, at the edge to optimize their costs: a 30% reduction, and about a 25 to 30% improvement in performance. So when we start to think about edge, I also want to think about, there's the public cloud, because you mentioned hybrid cloud, there's the colo or on-prem, and then there's the far edge. And I know we're going to talk more about the far edge, but I want to make sure we put that in context, and there are cost savings and optimizations along the way.

So you consider that colo edge?

Okay, I'll consider, when we're moving a workload, first, A, we're moving it out of a public cloud environment for optimization purposes. By the way, I'm not suggesting public cloud is bad; hybrid is the way. And then in certain circumstances, if we're running a retail application at a colo because it's not practical to run it inside of a store, yes, we'll consider that an edge.

Okay, so you've got three levels of edge then. You've got what you just described.

That's right, multiple edges.

And then the next layer might be a retail operation where you actually have infrastructure there. And then the far edge.

Yes, and I'm going to give you a follow-on. Working with Intel, we also have another customer, RaceTrac. RaceTrac is at the far edge. They have a set of 800 convenience stores throughout the southern United States, and they're pumping something like seven million gallons of gas a week. It's a lot of gas. And if you stop by and they're out of gas, that's a problem. And guess what? They did $20 billion of revenue last year. So it's a big operation. What they've decided to do is put a double installation of DL360s inside each convenience store. That raises their resiliency. Now this is the far edge, in each of these stores; think about gas stations. Now all of their video is starting to be captured and monitored, it's managing all of their transaction systems, it's managing the pumping of the gas, and now they can change gas prices in seconds, not somebody going out there flipping numbers on a board. So it's been an amazing example at the far edge.
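Changing prices across 800 stores in seconds implies a central push that fans out to every store's on-site servers. Here is a minimal sketch of that pattern using MQTT retained messages, so a store that is briefly offline still picks up the latest price on reconnect. The broker, topic layout, and payload are hypothetical, not RaceTrac's or HPE's actual system.

```python
import json
import paho.mqtt.client as mqtt  # paho-mqtt 1.x style constructor below

client = mqtt.Client()
client.connect("broker.example.com", 1883)  # hypothetical central broker

update = {"grade": "regular", "price_per_gallon": 3.499}

# QoS 1 with retain=True: each store's subscriber gets the update at least
# once, and a store that reconnects later still receives the latest price.
for store_id in range(1, 801):  # roughly 800 convenience stores
    client.publish(f"stores/{store_id}/fuel-price",
                   json.dumps(update), qos=1, retain=True)

client.disconnect()
```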
And there's another far edge. It might be an oil rig, a heavy oil rig.

Okay, or a cell tower or something like that.

Oh, don't get me started on cell.

Okay, so it was really fragmented, right? So as technology companies that have delivered horizontal tech, you have to really think hard about the architecture for this that can scale, because you can't just do one-offs every time. So how do you think about that from a systems perspective, and then obviously silicon?

Yeah, so let's talk about that. It leads me to a topic around workload optimization, because in my title, you can see it's pretty broad. I work with customers across every vertical market, whether it's healthcare, telecommunications, manufacturing, automotive, across every market, and I listen to them. And what they're thinking, when it gets to this edge notion, Dave, is that edge usually starts in this disaggregated edge, a lot of different locations. So you have environmental concerns. You might have power constraints, so you have to be very power efficient. You have management challenges as well: if you have a disaggregated infrastructure, how are you going to manage that? How are you going to upgrade the firmware? Well, HPE GreenLake for Compute Ops Management can now manage a massively distributed set of infrastructure. So these are the types of things all of these companies are thinking. So there is a harmonizing set of principles that each of these vertical markets is thinking about, even though they're different.

Now, if you're at a cell site, think about this. If you're doing radio access network, and we partner very closely with Intel on this, Intel just launched their Sapphire Rapids EE, brand name vRAN Boost, and it has the accelerator for radio access network built into the processor. So now we will put a short-depth server at the cell site that can process the data coming down off of the antenna. It's all done in an HPE and Intel product called the DL110. So that's one example of a workload-optimized product, specifically for that use case, different from healthcare, different from retail. So you're going to start to see lots of optimizations and permutations as we go forward.

So it's almost like a fungible technology that you can apply, but with different requirements, obviously, than running SAP in a data center.

Yeah, we tend to cluster and classify these workloads. And when we look at clustering and classification of workloads, they fall in certain topologies. And we can look at the topology of that workload and that optimization, and we can look at the computational profile that topology requires. And so we map it to the topology, and that way we are able to get the efficiencies in the workload optimizations.
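A toy version of the clustering Janet describes: profile each workload by its computational signature, cluster the profiles, and map each cluster to a hardware topology. The feature values and profile names are invented for illustration; this is not Intel's actual methodology.

```python
import numpy as np
from sklearn.cluster import KMeans

# Each row: [cpu_intensity, memory_bandwidth, accel_offload, io_rate], 0-1 scale.
workloads = {
    "vran-l1":        [0.9, 0.6, 0.9, 0.2],
    "cctv-inference": [0.5, 0.4, 0.8, 0.3],
    "pos-database":   [0.4, 0.7, 0.1, 0.9],
    "erp-batch":      [0.8, 0.8, 0.1, 0.7],
    "llm-training":   [0.9, 0.9, 1.0, 0.4],
}
X = np.array(list(workloads.values()))

# Cluster workloads into topologies by computational profile.
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)

# Map each cluster to a hardware profile by its dominant trait.
for name, label in zip(workloads, km.labels_):
    center = km.cluster_centers_[label]
    profile = ("accelerator-heavy edge node" if center[2] > 0.6
               else "io-optimized node" if center[3] > 0.6
               else "general-purpose compute node")
    print(f"{name}: cluster {label} -> {profile}")
```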
How are customers thinking about the edge? What are some of the misconceptions, and maybe missteps, that you're seeing out there?

Let me go.

Okay, I'll add.

Well, I'll say, the first thing is, they're usually trying to solve an immediate problem when they're thinking about edge. And some of the mistakes are, they don't take into consideration the growth, and they're not thinking big enough, because AI is starting to come into almost everything, and let's face it, AI is rapidly becoming the must-have application for enterprises of all sizes. So not thinking big enough, I'm going to say, is number one. Number two, not thinking about the full service experience, because if you're disaggregated in a lot of different locations, you have to worry about support. Look, we make the best servers on the planet, but if something happens, someone has to react to it. And then management: someone's got to make sure those servers are updated with the latest firmware to keep them secure. And then security is the final one. Don't ever underestimate security. If it's sitting in a retail outlet, how are you going to protect that? How do you make sure it's not compromised? So, security.

Ideally, by the way, without a truck roll every time, if you can avoid it, right?

I'm sorry?

I say, ideally without a truck roll every time, managing those.

Absolutely, exactly right.

And I will add to what Phil just said. I think the number one thing we see is the architectural choices that are made, right? You can't have an architecture at the edge that is completely divorced from the architecture at the cloud, because the edge to the cloud is a continuum. So you have to make architectural choices that encompass the whole movement of that workload, and the whole movement of data and everything else associated with that workload. The second one, I would say, is the human interface, right? The interfaces have to scale between what interface you see at the edge and what interface you see at the cloud, and what outcomes it's solving for you. So you're putting the human-centric portion of that in as you look at the outcomes.

But on the former, you have to make trade-offs then.

That's right.

What are those trade-offs that you make?

The way you make interfaces at the edge versus in the cloud is completely different because of resource constraints, right? You don't have the capacity at the edge that you have at the cloud, and also the scale. So you've got to look at all those scale choices. That's why I said architectural choices. The architectural choices you make may completely vary as you look at this continuum.

When we talk about human-centric, and Phil, when Dave asked about some of the mistakes customers are making, you talked about them not thinking big enough. How and why should they come to HPE and Intel to help them, and where are your customer conversations, so that they can ensure that the edge-to-cloud is a continuum that delivers the experience that the consumers and the other folks on the other end, whether it's a telco or not, expect: hey, we're connected, we've got access? How do you help them think bigger?

Well, let me say, Antonio, years ago, said it's going to be an edge-to-cloud world; it's not just going to be about cloud. Even with some of the announcements that we've made this week, you can tell. There's certainly an importance of cloud, and I think it's about putting the right applications in the right location with the maximum efficiencies that we're all shooting for. So I think HPE has an incredibly large portfolio that has been optimized for various workloads. But it's not just that; we lead the market. When it comes to telco, for example, we were the first to come out with a sled-based, modular kind of system called the EL8000t, and we have very large deployments out in the industry. And then we even modernized it and came up with the DL110. So when it comes to customers looking at HPE and Intel, I think it's really about the portfolio, the services, and our forward-looking thinking. It's about the announcements we've made about AI as a service. And I will tell you, every customer I've talked to at this show is trying to figure out how. It's a race in AI, and they're trying to catch up. And they want to know, so they're coming to us as a thought leader to help them. So that's really what it's about. It's about two great companies coming together and really thinking about the future, and knowing that we've got the portfolio and the services to support them.

Yeah, and I will also add, as Phil said, sometimes you have to look at the business model itself. There's a business model transformation that has to occur in order to think big. Otherwise, you're just solving for the problem or for the particular use case. But when you think about the whole business model, you think about what are some of the choices I need to make as I transform my business to address some of the challenges at the edge and with AI.

So in the tech industry, as you guys well know, it's all about fashion, right? Whatever's in fashion.
During COVID, of course, it was remote work, and then we were trying to figure out hybrid, and of course there was a forced march to digital transformation. That was the hot topic, and then, of course, now it's all about AI. And so that changes the way customers think about what their priorities are. My question is, what does the infrastructure look like to support these new workloads? Andreessen just came out, about 24 hours ago, with their version of the AI stack. It was actually quite good, and they brought in a bunch of AI brainiacs to collaborate on it. What was missing in that was what the infrastructure looks like to support it. So...

Well, we made some announcements yesterday about trying to make it a little bit easier for customers. I mean, besides the AI as a service, thinking really big for customers that are trying to do some language model training. But I want to go back to the edge. If you think of inferencing at the edge, we now have, of course, a very close partnership with NVIDIA, and we have announced a couple of products. The DL320 now can take four L4 GPUs. And then we've also introduced the DL380a, which can take four very large accelerators, running significant inferencing at the edge. So what we've done is we've looked at our portfolio, and we actually had to customize the DL380, that's why we call it the "a," so that we can put the GPUs in, cool them, and power them. So we're making the packaging a little easier, so customers have a starting point, if you will, a SKU that they can start with. Of course, we can tune and optimize it, but that's how we're trying to make it easier for customers. We've already thought about some of the workloads, and we're putting configurations and blueprints in place to help them.

Is there a sustainability... sorry, Janet, you were going to say?

No, I was going to say that the world we came from, where AI started out, was with data really exploding, right? So we had this big data issue, and then we had AI coming to the table to address the big data, but compute lagging behind. So if you looked at the data-to-compute ratio, we had a big lag. And what we're seeing now is that compute is keeping up with the data. So when you look at the large language models, these models are being trained on a 175-billion-parameter space. When you think about the parameter space that we're training these large language models on, we're looking at insatiable compute behind such training. And now we're not talking about cluster sizes in the hundreds or thousands; we're talking about cluster sizes in the tens of thousands when you look at that insatiable demand, right? And you see the compute-to-data ratio getting closer, because it was so far apart that we couldn't actually address it.

And from an AI standpoint, AI was very rule-based. We were doing a lot of prediction-type algorithms, maybe even recommendation engines, and so on and so forth. But AI was not context-aware. Now, AI is very context-aware. And with generative AI, AI is creating data, labeled data, that it couldn't create before. So now, with generative AI, we can go into organizations and we are no longer encumbered by the fact that we don't have labeled data to train AI. AI can create the labeled data we need to train AI. So that's the tipping point. That's what's helped us leapfrog, if you will.
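A minimal sketch of the tipping point Janet describes: a pretrained foundation model generates the labels, and a small task-specific model is trained on them. The model choice, label set, and confidence threshold are illustrative assumptions, not a specific Intel or HPE workflow.

```python
from transformers import pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

unlabeled = [
    "The pump display froze during checkout.",
    "Loved the fresh coffee at this location!",
    "Receipt printer is out of paper again.",
    "Great prices and a clean store.",
]

# Step 1: a pretrained model produces the labels we never had.
labeler = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")
candidate_labels = ["equipment issue", "positive feedback"]

texts, labels = [], []
for text in unlabeled:
    result = labeler(text, candidate_labels)
    if result["scores"][0] >= 0.7:  # keep only confident pseudo-labels
        texts.append(text)
        labels.append(result["labels"][0])

# Step 2: train a small, cheap task model on the machine-labeled set.
vectorizer = TfidfVectorizer()
clf = LogisticRegression().fit(vectorizer.fit_transform(texts), labels)
print(clf.predict(vectorizer.transform(["The card reader stopped working."])))
```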
Last question. From a cultural perspective, are most customers getting there, ready to do that?

We can see it from the HPE announcements, right? Everybody wants large language models. It was announced as part of GreenLake. So, eventually, most companies, most enterprises, will have some sort of large language model capability. These neural network architectures have advanced; they will penetrate into every enterprise. And I believe enterprises want super productivity in every aspect of their business.

You know, I think customers are ready, but I also think there are always risks and concerns when we're thinking about AI, and I won't get into all of those. Even a clinician or a radiologist might be concerned that they can be replaced by AI. I don't think that's on the horizon. When I'm talking to healthcare customers, and even telco customers, right now AI is additive. It's to get better, to compete, to beat your competition. So they're ready for that sort of thing. I don't see it yet displacing functions. But let's face it, in the future, I think we're going to get there, and I think that's where there will be some concerns, economically and even from a jobs perspective. But for right now, I see companies asking for it. They're looking for speed. They're looking for a competitive edge, and that's why it's a sprint.

It is a sprint, that is for sure. Guys, thank you so much for joining Dave and me, talking about what HPE and Intel are doing together to help organizations modernize their workloads so they can be competitive. We really appreciate your insights.

Thank you.

Thank you. Our pleasure.

It's a pleasure.

For our guests and for Dave Vellante, I'm Lisa Martin. Up next, you've heard about Aleph Alpha. We're going to be talking about it with Dr. Eng Lim Goh and the CEO and co-founder of Aleph Alpha. Stay tuned, and we'll be right back.