Welcome back, everyone, to theCUBE's live coverage here in the broadcast booth. They're closing the area down around us, and we're going to get this last interview in. Day two of three days of coverage, and our thirteenth year covering VMware's conference: VMworld, now VMware Explore, in its second year. I'm John Furrier, with Dave Vellante, Lisa Martin, Rob Strechay, the whole team here. It's team coverage. SiliconANGLE writers are here, our research team, writers, media. And back at the ranch, we have theCUBE cloud pumping out more clips than you could watch. So check out SiliconANGLE.com and theCUBE.net, and check out Twitter, tons of stuff flowing there. For this next closing interview of the day, we're going to have an AI conversation with an expert from Lenovo: Robert Daigle, director of the global AI business at Lenovo. Welcome to theCUBE.

Thank you so much for having me. I'm really excited to be here at VMware Explore and to be here on theCUBE with you, talking about my favorite subject, which is artificial intelligence.

Well, we're like an AI fan club here, so we love it. Everyone knows we're bullish on AI; we've been doing a lot of it. But everyone is asking the question: how do I get to the gen AI app construct? In the keynote on day one, they showed the landscape of enterprise apps. There's a clear line of sight into new opportunities to innovate. Then you've got the top-down boardroom saying, go take that hill, inject AI into everything, and the bottom-up developer community going crazy on open source. A lot of open-source greatness happening. But then it gets stuck in this compliance and legal thing. What dynamics do you see going on with AI right now?

It's a very complex world right now. One of the things that we've done at Lenovo, because we're eating our own dog food, drinking our own Kool-Aid, whatever you want to call it, is consume this internally. I've got over 17 generative AI projects ongoing at Lenovo that we're overseeing. And one of the things we did is launch a responsible AI committee to review everything we're doing that touches artificial intelligence. It's a really, really important topic. You want legal compliance. You want to do things the right way. And this whole idea of privacy with generative AI is really, really important. That's why we're seeing a lot of it being deployed on-prem, and a lot of interest in running generative AI models on-prem.

How has your thinking around infrastructure changed as a result of the AI shot heard around the world? I know you were working on AI before, but now there's so much buzz and so much vision. Has it changed the way Lenovo is thinking about infrastructure?

Absolutely, absolutely. We've been working on artificial intelligence systems for a number of years. I've been with Lenovo for six years now, and from the point we launched our AI strategy, we've been building purpose-built AI systems. I've been working on artificial intelligence systems for about a decade, and it's becoming so, so critical to focus on the performance of the infrastructure. It really matters now, especially when you're getting into really large model sizes that require a lot of compute performance. The compute matters more now than ever with artificial intelligence.

So what about this move everybody talks about, training versus inference? How are you guys thinking about that?
And what does it mean for your customers and your business?

Absolutely. Inference is going to be about four times the market size of training, long tail.

Yes, totally agree with that.

So it's a huge market. The really great thing is that the inference models we're seeing run really well on PCIe accelerators. You can take GPUs in common general-purpose systems and get started with inference today. You don't need bespoke custom infrastructure to start deploying inference models. And that's really great news for industries and companies looking at adopting generative AI: they can get started deploying inference models today and start seeing value without having to train up their own bespoke foundation model.

We just wrote a piece using some data from our partners at ETR, and I was just showing it to you: literally a 50-50 split between customers saying we're going to run on private infrastructure and customers saying we're going to run on public infrastructure. Now, the cloud vendors have the advantage of speed, and they've been at it for a while with things like AWS SageMaker. But the legal and compliance issues, what they call FUD, are real fear, uncertainty, and doubt, and they're really slowing things down. So it's legit, right? So that's the advantage. What's your strategy, with your partners and ecosystem, to keep enough pace, to stay close enough in range with the cloud acceleration, in order to take advantage of that inherent value you bring with regard to legal compliance, et cetera?

Absolutely. The legal compliance is a real concern. I mean, we've seen the headlines, right, with developers leaking proprietary code.

Yeah, and there's a case right now on generative AI content that's going to have huge implications if it goes up to the Supreme Court.

So the legal concerns are real. Running it on-prem at least gives you a lot of control over what goes into those foundation models and what comes out of them, and you own it. At least it's within your control. Really, I think the innovation here is happening in the open-source community. Look at some of the new models, like what Facebook has done with Llama 2. It's a very powerful model, and it's licensed in a way that is open source and commercially viable. There are over 20 open-source, commercially licensed models that you can use in your data center today.

Well, it's funny. In the hardware business, it's always cycles, right? We all know this well. It's like, who's got the latest chip, who can get to market faster? And it's usually maybe six-month leapfrog cycles. Now, with these foundation models, it's like every week there's a leapfrog.

There's a new one, yeah. The speed from research, from a research paper, into something we can use is happening in a matter of weeks and months, not years, like what we used to experience. I read a recent paper on fine-tuning an Alpaca model and some of the optimization that's been done to bring those models down into a smaller and smaller compute footprint. I think that's another paradigm we can anticipate: these models are going to get more and more efficient, which means we're going to be able to pack some really powerful, really large language models into smaller and smaller compute.
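As a rough sketch of the point about getting started on commodity PCIe GPUs: assuming an open, commercially licensed model like Llama 2 and the Hugging Face transformers stack (the model ID below is illustrative, and Llama 2 weights are gated behind Meta's license; the accelerate package is assumed for device placement), inference on a single general-purpose GPU system can look like this.

```python
# Minimal sketch: generative AI inference on a single PCIe GPU using
# Hugging Face transformers. Illustrative only, not a specific
# vendor reference architecture. Assumes: pip install transformers accelerate
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-2-7b-chat-hf"  # gated; requires accepting Meta's license

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # half precision halves memory vs. fp32
    device_map="auto",          # place layers on the available GPU(s)
)

prompt = "List three reasons an enterprise might run inference on-prem:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```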
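On the shrinking compute footprint, quantization is one common form of the optimization being described here (the source doesn't name the specific technique from that paper; this is an assumed example). A minimal sketch, assuming the bitsandbytes library: loading a 7B-parameter model with 4-bit weights brings memory from roughly 14 GB in fp16 down to around 4 to 5 GB, small enough for a single mid-range PCIe card.

```python
# Sketch: 4-bit quantized loading, one way large language models get
# packed into smaller compute. Assumes: pip install transformers accelerate bitsandbytes
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "meta-llama/Llama-2-7b-chat-hf"  # illustrative placeholder

quant_config = BitsAndBytesConfig(
    load_in_4bit=True,                     # store weights in 4-bit
    bnb_4bit_compute_dtype=torch.float16,  # compute in fp16
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=quant_config,
    device_map="auto",
)
```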
Right. I was texting Flynn Maloy, and he couldn't make it; we're trying to get a beat with him. We know him from his HPE days. He said ISG's growing really fast, you guys are doing really well. And NVIDIA's earnings were obviously spectacular today; everyone knows what they're doing. You know, I remember interviewing NVIDIA years ago, Dave, back in 2015, and they were like, we're not a hardware company, we're a software company. Their software stack drives their performance today. Yeah, they've got gear, they've got software; that's the model, everyone sees that. Hardware is still relevant, everyone wants gear, but it's the software. Can you share with our audience what you guys are doing right now? Where's that growth coming from? How are you viewing software? Because the AI lift is probably going to power more growth for Lenovo. What's your story now? What's the current positioning? What's going on?

Absolutely. Software is critical when you're thinking about artificial intelligence. One of the things we've seen is that you really have two options out there. You have open-source software tools that are widely available; we talked about some of the open-source options for large language models. The other thing we're seeing is a lot of investment in the startup community and really great IP coming from startups. So our strategy is to partner where there's great IP, where there's great software technology. We're going to be partner-led, and we want to harness the startup community and all the VC dollars pouring into it. So we launched something called Lenovo AI Innovators. We have over 45 AI software companies already onboarded, doing everything from in-store computer vision solutions for retail to manufacturing. These are vertically specific solutions that we can deploy full stack: we validate the software on our Lenovo infrastructure and deploy a full-stack solution for those verticals. So it's really powerful. Software is critical, but the infrastructure is still really important too. You have to have purpose-built AI infrastructure, and Lenovo is now the third-largest provider of AI infrastructure today. So that part of the equation is really, really critical as well.

We're getting the hook here, so with the last minute we've got, and thanks for coming on, by the way, I appreciate you getting this last one in, just take that minute to explain why you guys are winning customers. Where's the growth coming from? What's the pitch?

Yeah. What we're trying to do is democratize artificial intelligence for industries and companies of all sizes. We're doing that by leaning into the startup community through our Lenovo AI Innovators program and bringing full-stack solutions for industries. We have our AI Discover labs, so if customers need more help than they can get from just a software solution, if they need hands-on support, we use our AI labs for that. What we're really excited about is that we've been able to respond very quickly to these market trends. Take generative AI: you heard about it on stage in the keynote, and we already have a reference architecture in market that actually shows you how to do what they talked about in the keynote yesterday. A sketch of the on-prem serving idea follows below.
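To make the on-prem, full-stack idea concrete: a minimal sketch, and emphatically not Lenovo's actual reference architecture, of wrapping a locally hosted model in an internal HTTP endpoint so prompts and completions never leave the data center. FastAPI, the route name, and the model ID are all illustrative choices.

```python
# Sketch: serving an on-prem generative model behind an internal endpoint.
# Assumes: pip install fastapi uvicorn transformers accelerate
from fastapi import FastAPI
from pydantic import BaseModel
from transformers import pipeline

app = FastAPI()

# Load once at startup; weights stay inside the data center.
generator = pipeline(
    "text-generation",
    model="meta-llama/Llama-2-7b-chat-hf",  # illustrative placeholder
    device_map="auto",
)

class Prompt(BaseModel):
    text: str
    max_new_tokens: int = 128

@app.post("/generate")
def generate(req: Prompt):
    # Prompt in, completion out; no data leaves the internal network.
    out = generator(req.text, max_new_tokens=req.max_new_tokens)
    return {"completion": out[0]["generated_text"]}

# Run with: uvicorn app:app --host 0.0.0.0 --port 8000
```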
We have that reference architecture in market, and we're showing a full-stack demo in our booth of how you can actually use a generative AI model based on it. It's about being able to respond quickly to these new paradigms in the industry.

And that'll become a solution, potentially a SKU.

Absolutely.

All right, hurry up. Yeah, somebody's over your shoulder. Exactly. Well, great to have you on. Obviously we're doing some great research in AI, we're plugging it in there, and we recognize what you guys are doing. There's a lot more of the Lenovo story we're going to have to unpack, so we'll follow up and get the briefings. Love the tailwind, how AI can really change the landscape. Thanks for coming on.

Absolutely. Thank you so much for having me. Appreciate it.

Okay, that's a wrap on day two. We're getting every ounce of CUBE data we can to you. This is theCUBE, it's what we do: we get the content, we get it out there, and we'll let AI take care of the rest. I'm John Furrier with Dave Vellante. Squeeze every piece of data out of that juice. Thanks for watching. See you tomorrow for day three. We'll be right back.