Hey everyone, good afternoon, good evening. Welcome back to theCUBE's live coverage of day one of HPE Discover 23 from the Venetian Expo Center, Lisa Martin with Dave Vellante. We have had a fabulous day one, as we always do, but we've got Intel back, going to be talking about what's going on at Intel these days and what it means for customers embracing AI. AI's all the buzz. It is, it was, as you like to say, invented last year. It's AI, it's hot, it's like, hot, don't touch the AI. It is, white hot. We've got one of our alumni back with us, Lisa Spelman, corporate vice president and general manager of Intel Xeon products and solutions. Lisa, it's great to have you. Thank you for having me. It's great to have you. You get two, you get two Lisas. I love it. So Lisa, so much going on at Intel these days. Talk to us about the latest 4th Gen Xeon processor and the accelerator engines. We were actually talking about those earlier today, but help us understand what's going on there and why it's exciting. Yes, I will. So the nice thing about Xeon is that it really, truly can do everything from the network to the cloud to the data center out to the edge and back again. And we built it this way to satisfy kind of all of our customers' requirements. And so this gives people so much flexibility and that chance to use it for virtually any workload. In the fourth generation, we made some big choices on where to put acceleration into the actual chip, to really X factor the performance and the performance per watt of these workloads. So some of the examples are, again, networks, making them move just faster, get that data going. You know how much people love to spend time on their phones and on their Netflix. They need the network going fast. We've also done that for artificial intelligence, an explosive growth workload that demands attention from all areas of compute. And we did that for several other areas around database and analytics.
So if you think of all the things that really require tons of data and a lot of data movement, that's where we focused our acceleration effort. So do you feel like the value is roughly equal across all those different workloads? Or are there some workloads that can take greater advantage of and exploit the new generation? Like I said, we definitely made choices. So when you look in the space of AI, you have workloads that, generation over generation, so literally in just one product generation, got a 10X improvement in their performance and their performance per watt. So that's a huge gain. You have other ones where the gain was more in the 30% range because they didn't have that embedded acceleration. So we offer performance and performance-per-watt gains for everyone, but in those accelerated workloads, you get, again, what I call that X factor of value. And what are those X factor workloads? Everybody wants to know, right? Are those the AI oriented ones, the data oriented ones? You know, there's a lot of focus we do put on AI, and it's not just that crunching of all of the training and the inference, but it's also how the data moves through the application. There are so many different kinds of AI, but I'll just use one example. There's a big difference when you're a healthcare provider and you're sitting there thinking, what's the best cancer treatment for a patient that has these variables, age, height, weight, race, gender, and you don't need the response to be instantaneous. You're okay if it takes one second to get that treatment plan. Whereas if you're in autonomous vehicles and you're trying to decide, is that a pine cone or a kid? That needs to be absolutely instantaneous. So we look at it from both sides of which type of AI you're accelerating, and we either make chip level decisions or what we call system level decisions. So we're really trying to solve for both. And you're obviously communicating with customers in different industries.
You gave a great example there of where real time literally could be life or death, or when a second is okay for a result. How are customers embracing this? AI is the hottest thing. How are customers coming to you saying, help us accelerate this? Because as I've heard so many leaders say, and Antonio said something like this this morning, I'm paraphrasing, if you haven't started working with AI yet as a business, you're behind. Yeah, and I'd say that even as a person, too. You know, we have finally hit this breakthrough in AI, which has been a 50 year journey, to where literally grandma knows about AI and grandma's talking about AI. My mom knows about AI. And this ChatGPT thing really pushed it into the mainstream, but the future of commercial AI and use cases is more like that cancer example that I gave of diagnostics, or think of anything that's trained specifically on your industry's data. Nobody, well, not nobody, but most people don't need their AI to be able to answer every single question in the world. And if you built your system to do that, you're paying too much. So we're trying to focus on offering not only those huge model options, but also the smaller models that have more commercial use cases. I like to joke that ChatGPT is mostly what I spend my time trying to keep my kids from using for their homework, and the smaller language models and generative AI are more of what we're using to really help advance the human condition. So you narrow that scope and focus on that last piece that you mentioned. And then, everybody talks about guardrails, does that make creating guardrails easier for your customers? You know, what we try to do is describe it in terms of outcomes and achievements, and then understand whether the customer feels that they're performance bound, in which case cost might not be as big of a factor, or if they're total cost of ownership bound.
And that takes into account not just the acquisition cost, but the power and the running and continued servicing of that whole solution. So we try to be pretty clear with customers about getting their requirements in and then suggesting choices based on those requirements. Again, for a smaller, commercial large language model, you would use a Xeon because it's the best total cost of ownership and it delivers the performance that you need. When you start to move into those really big language models, you need acceleration. And that's part of what Antonio was talking about today. So when you talk about TCO, I want to understand more about how Intel thinks about it. Obviously, if I can get twice the performance, I can do twice the amount of work with the same infrastructure. So it's the hardware that you have to purchase and deploy, and it's also the power consumption. And are there also people costs that go into the TCO, or is that out of your scope? There is a little bit of people cost, but it gets outweighed by the cost of power and, honestly, the cost of memory. Those are the two biggest drivers of TCO. So within our Xeon family, if we can keep the power the same and grow the performance, and again, like we did with our 4th Gen Xeon, we're delivering an average 55% TCO benefit gen over gen, while also increasing the ability to utilize that memory so you don't necessarily have to keep growing how much memory you offer. You might, based on the solution, but if you don't, then that's a benefit because you can, again, keep those costs down. So we really look at it in terms of the power, a lot, the performance, and then that memory utilization. And what are the migration considerations and the salient points, particularly as you go toward machine intelligence?
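As an aside, the TCO framing discussed above, acquisition cost plus power and memory as the dominant operating costs, with a per-server performance gain shrinking the fleet needed for the same workload, can be sketched roughly as follows. All dollar figures, wattages, and the 1.5x performance assumption are hypothetical illustrations, not Intel-published numbers.

```python
# Rough TCO sketch: power and memory dominate operating cost, and a
# per-server performance gain reduces how many servers the same
# workload needs. All numbers are hypothetical, for illustration only.

def server_tco(acquisition, watts, memory_cost, years=4,
               dollars_per_kwh=0.12, people_cost=500):
    """Lifetime cost of one server: hardware + energy + memory + people."""
    hours = years * 365 * 24
    energy_cost = (watts / 1000) * hours * dollars_per_kwh
    return acquisition + energy_cost + memory_cost + people_cost

# Same power envelope, with the newer generation assumed to do 1.5x
# the work, so the fleet shrinks from 100 servers to 67.
old_fleet = 100 * server_tco(acquisition=8000, watts=350, memory_cost=3000)
new_fleet = round(100 / 1.5) * server_tco(acquisition=9000, watts=350,
                                          memory_cost=3000)

tco_benefit = 1 - new_fleet / old_fleet
print(f"Fleet TCO benefit: {tco_benefit:.0%}")
```

Under these made-up inputs the smaller fleet more than pays for the higher per-server price, which is the shape of the argument being made, even though the real gen-over-gen figures would come from measured workloads.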
Yeah, migration has been a challenge for customers in literally everything, whether you're running networks as a major service provider, whether you're living your life in a cloud, or providing healthcare, because you want the latest and greatest technology, but you sit there with the "ugh" moment of the work it's going to take to validate that. So we have an effort we put around our feature consistency and what we call live migration. And this is the ability to move your workload, your capability, into the new generation, and we try to drive down the amount of effort that it takes to do that. It's not perfect for anyone, but when I look at it, we have 100 million Xeons installed around the world. I mean, that's a huge opportunity to upgrade everyone to the most power efficient and performant processors on the planet. And we're working on making that as smooth and easy for them as we can. Is sustainability a factor in customer conversations? I imagine so. I mean, we talk to so many vendors whose customers are saying, even in an RFP, they have to work with someone who has an ESG program that will help them dial down their carbon footprint and things like that. Where is sustainability? And how is the 4th Gen Xeon processor a facilitator of customers' sustainable IT needs? Yep, I mean, it's part of every conversation, Lisa. You can't step away from it. And the best part about it is that it is not just people doing it to check a box on the RFP. They're doing it because they absolutely need to, and they're actually finally starting to understand how much it matters. So at Intel, we look at it from the manufacturing aspect all the way through to the product's use. We manufacture with the most renewable energy of any silicon manufacturer in the world. So that's just building the product. And then in deployment, we seek again to increase performance and hold that power the same.
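That "hold power flat, grow performance" point is, at bottom, a performance-per-watt argument: if the power envelope stays the same while throughput goes up, the energy spent per unit of work drops. A quick illustrative calculation, using a hypothetical 1.5x generational throughput gain rather than any published benchmark:

```python
# Illustrative perf-per-watt math: same 350 W power envelope, with the
# newer generation assumed to deliver 1.5x the throughput. Numbers are
# hypothetical, not published benchmarks.

watts = 350.0
old_perf, new_perf = 1.0, 1.5          # relative throughput, old vs. new

old_ppw = old_perf / watts             # work per watt, old generation
new_ppw = new_perf / watts             # work per watt, new generation

gain = new_ppw / old_ppw               # perf/watt improves 1.5x
energy_saving = 1 - old_ppw / new_ppw  # ~33% less energy per unit of work
print(f"{gain:.2f}x perf/watt, {energy_saving:.0%} less energy per unit of work")
```

In other words, at constant power the perf/watt gain equals the performance gain, and the energy per unit of work falls by the reciprocal, which is why flat-power generations still register as a sustainability win.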
And in this generation, we actually introduced something called optimized power mode, which gives customers the opportunity to take a very slight performance hit for a huge power savings. And the reason you offer that is because, again, not every single application needs to operate at max performance all the time. So we're offering a lot of flexibility. So you can say, I want to save 20% of my power and take a 3% performance hit. There are a lot of people that think that's the right trade-off. So it's the data center version of when I'm on the plane and I've got 13% left on my laptop. All right, I've got to make it. You've got to think smart about what you're going to do. The other S-word out there, we talked about sustainability, is security. It continues to be an issue. Where is that in customer conversations? And what is Intel doing to help customers protect their data as every company has to be a data company? Yep. So we are setting up and establishing Xeon as the foundation for confidential computing. And confidential computing just means that your data is secure, your application is secure, your virtual machine is secure. And the way we like to think about it is, when you start thinking of securing your data, your infrastructure, your application, your customers' data, the very first choice you make is that hardware. And we want to make sure that the hardware coming from Intel has all the right features and all the right software enabled so that you start with a trusted foundation. Because if your foundation is rocky, then you're just layering on top of a rocky foundation. We've introduced several features generation over generation, and the goal of those security features is to make it easier every single time to get security with the right performance levels. Absolutely critical. Do you have a favorite customer story that really shines a light on what Gen 4 is delivering? You gave some great examples of real time versus one second.
But is there a customer story that you think really articulates the value prop? Well, you know, there's one that I always think of, because, again, it resonates with my kids. It's when you look at all of those cool animated movies, and there are several of them out now, and how they've looked more and more lifelike. You see the trees moving and the hair moving and all of that, and you know that that's all rendered on Xeon. Generation over generation, the ability to make those movies come to life more and more is driven by the performance improvements and the features and the acceleration that we're putting into Xeon. It's just kind of a fun one because, again, sometimes in tech, your kids don't really know what you do. But when it's that, I can be like, no, you guys, this is what I do. So I like those. Absolutely. I'm getting a Little Mermaid vibe here. Yeah. So in closing, can you talk to us a little bit about some of the recent announcements and investments from Intel in manufacturing, and what that means to HPE and your joint customers? Yep. You know, we've had such a longstanding partnership with HPE across their entire portfolio, and we're really proud of the work that we have done and continue to do together on behalf of our customers. Intel is investing in building a sustainable and available manufacturing capability around the globe. We have a fundamental belief that too much of our manufacturing capacity is concentrated, that it's not as globally dispersed as it should be. We're investing to change that, and we're opening up our factories. So that gives us a chance to talk to customers like HPE and others about what products they might want to develop for themselves for very specialized use cases, and it also gives us new opportunities to partner together on custom solutions. So we're really looking forward to having one more tool in our toolkit to solve customer problems a bit better.
Very symbiotic with HPE and your whole ecosystem. Lisa, thank you so much for joining Dave and me, talking about Gen 4, what's next, and the value in it for organizations as they adopt AI. We really appreciate you joining the show. Thank you. It's great to be here. I appreciate it. All right. For Lisa Spelman and Dave Vellante, I'm Lisa Martin. Two Lisas here, as you saw. Join us tomorrow for theCUBE's second day of coverage. We're not only going to be breaking down Antonio Neri's keynote from today, but Antonio himself will be on the show. We've also got a great lineup of guests talking about AI and edge-to-cloud expanded partnerships. Don't miss Antonio Neri and guests. See you tomorrow.