Welcome, everyone, to theCUBE's live coverage of HPE Discover Barcelona 2023. I'm Rebecca Knight, your host, along with my co-host and analyst, Rob Streche. We are joined by Stephen Gillick. He is the Director of Artificial Intelligence go-to-market at Intel. Thank you so much for coming; it's your first time on theCUBE. Welcome to theCUBE, Stephen.

Yeah, my pleasure to be with you today.

So, we're going to talk about AI. Of course we're going to talk about AI. It is a dominant force in our industry and a dominant topic of conversation here. Intel's marketing tagline is "empowering developers to bring AI everywhere." That's very catchy. What does it actually mean?

Yeah, AI everywhere. If you go back and look at the role AI is playing, more and more, in the industry and in our personal lives, it's really going everywhere. It started as an expert thing, right? But nowadays, if people ask me where AI is, I say, wrong question, ask me where it's not, and those will be very few places. So it's going everywhere, and we at Intel want to bring the technology and the solutions to those places where AI is used successfully. AI is basically a journey, right? And we want to help the people who are creating AI solutions to manage that journey and plot a path for them, with the hardware and software stacks we are putting together across the data center and the client, covered by a common software stack.

Yeah, it would seem that Intel sits in a very unique spot because of that, being both data-center and client related and being able to see both sides. People don't build things only in the data center. A lot of times it starts with modeling on a laptop, trying to understand the problem, and then bringing it up to the data center to train models, and not just gen AI models. Are you seeing a similar type of adoption here in Europe, where that's the mode they're moving in: client, server, build it up, train it, deploy it, do inference at the edge? Is that kind of what you're seeing?

Well, I think there's not so much regional difference when you look just at the technology being used, whether it's in the US, in Europe, or in Asia. It usually starts with, okay, you have a use case in mind and there are certain ways this use case can be supported by AI, and then experimentation starts. But I think the key thing is we see three phases in AI development. The first is the data phase. It's often underestimated, because if your data is not structured right, you don't have enough of it, or it's in the wrong places from a storage perspective, you can run into serious problems later on. The middle phase is the modeling phase, where you actually select the AI models, the neural networks, and train them. That's usually the most interesting phase, and it's obviously the one liked by computer scientists like me, right? But don't forget about the data phase. And then there's the deployment phase, which is obviously the most interesting phase for the people who want to get the value out of it; we call it inferencing if you use deep learning. So it matches the lineup you were proposing a little bit. Obviously you can use the data center for all of these phases: you usually have the data in the data center anyway, you do the model training there, and you can run inference there. But inferencing on your laptop, for instance, which we are supporting with coming products even more than we do today, actually has some advantages in terms of latency and in terms of privacy, right? So that's why we say, okay, let's put AI everywhere, so we can serve the use cases and users can actually pick and choose where they run their inferencing and so on.
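As a rough illustration of those three phases, here is a minimal, hypothetical sketch; it assumes Python with scikit-learn and a toy dataset purely for the sake of example, since no particular framework or tooling is named in the conversation.

```python
# Hypothetical end-to-end sketch of the three phases described above:
# data, modeling, and deployment/inference.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier

# Phase 1: data -- collect, structure, and split the data before anything else.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# Phase 2: modeling -- select a model and train it.
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Phase 3: deployment / inference -- run the trained model on new data to get value out.
predictions = model.predict(X_test)
print(f"Test accuracy: {model.score(X_test, y_test):.2f}")
```

In practice the data phase alone can dominate the effort, which is the point made above about not underestimating it.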
Well, here in Catalonia is the perfect place to talk about Intel's accelerator, the Gaudi2 accelerator; it's obviously home to the Sagrada Familia and so many works of Gaudi, the famous architect. Intel is challenging the established vendors here. How do you size that up? Is it a crowded landscape? How is this going for you?

Yeah, obviously there are a number of vendors targeting this market because it's a growing market; models need to get trained. We see that in the data supplied by analysts every day, and I guess you know it. So yes, there are more players in the market, but if you look at the benchmarks, the places where the serious contenders basically post their data, you will see fewer people doing it. For instance, in MLPerf 3.1, which just came out in November, we on the Intel side submitted a number of benchmarks on Gaudi2, but also on our data center CPU, the fourth generation Xeon. There are not that many others who do that; I think it was three vendors doing it on the accelerator side. That's where you see the seriousness of the approach: you have the hardware and software stack in place to really run those benchmarks and be competitive with them. So I think you're right, there are a number of people and companies getting into this, but you need to look at how the products are doing and which places they are actually going into.

And do you see that as also helping with the go-everywhere concept? Is Gaudi2 really aimed at that, from data center to edge, edge to client? Is that where Gaudi2 really shines?

Gaudi is actually a very specialized chip. It is specialized to run deep learning workloads, primarily training, and to be very good at that, so we're very competitive with that chip. It is also good for heavy inference workloads: if you have a very large gen AI model with lots of parameters, that is where the strength of Gaudi2 is. But that's not the only place where AI is relevant, or where we need to do things; there are other places, and we differentiate the use cases as well as the usage models. Most people today don't have 80% of their workloads being AI or deep learning; they have maybe 10%, and they have lots of other things to do. So they might be better off using a CPU which has also been optimized for AI, like the fourth generation Xeon Scalable in the data center, to run those workloads, which are broader than just very specific AI workloads. If there are specific AI workloads, then we have Gaudi2 to accelerate them. So there are different swim lanes, if you will, in terms of the usage profile as well as the user profile. Maybe an example is video conferencing: it's graphics, it's the normal things you do on any kind of CPU, and then there's an AI component, maybe doing the transcript of what you speak about and the pictures, putting that into words. That can be done very well with AI, but it's still one application, and the AI percentage is maybe, I don't know, 20 or 30 percent, I'm just guessing. We see that in many applications. So we need to provide computing platforms which can run all of these tasks, and at the right place. For instance, if you want to run them on the client, maybe you don't want to go back to a data center every time, for different reasons: latency, privacy. So that's why we have these different swim lanes of products, and if you ask me again about Gaudi2, it's for training and heavy inference workloads in the data center. That's where it plays; it's not everywhere.
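As a hedged sketch of that "right workload in the right swim lane" idea: the snippet below imagines the small AI share of a larger application (say, transcription or vision in a conferencing tool) being served directly on a CPU. It assumes PyTorch, torchvision, and the Intel Extension for PyTorch as one possible toolchain; none of these are named in the conversation, and the heavy-training swim lane would sit on an accelerator like Gaudi2 instead.

```python
# Hypothetical "CPU swim lane" inference path for the AI slice of a mixed workload.
import torch
import torchvision.models as models
import intel_extension_for_pytorch as ipex  # assumed optimization layer for Xeon CPUs

# A modest vision model standing in for the ~20-30% AI share of a larger application.
model = models.resnet50(weights=None).eval()

# Apply CPU-side optimizations (e.g. bfloat16 paths on recent Xeon cores).
model = ipex.optimize(model, dtype=torch.bfloat16)

dummy_input = torch.randn(1, 3, 224, 224)
with torch.no_grad(), torch.cpu.amp.autocast(dtype=torch.bfloat16):
    output = model(dummy_input)

print(output.shape)  # torch.Size([1, 1000]) -- inference served without leaving the CPU
```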
Sustainability is on the minds of executives all over the world, and rightly so; tackling energy consumption is a really important goal. How do you see this goal playing out as organizations are also trying to get this large-scale penetration of AI filtering throughout their businesses? Do you see it as a hindrance, or what's your perspective as a go-to-market executive?

Yeah, that's a great topic, because if you extrapolate, if more and more people are using things like the gen AI offerings, of which there are plenty, then we will see power consumption going up a lot. So our approach is multi-fold. First, coming back to AI everywhere: have the right hardware, with a common software stack, which optimally runs the AI task you want to run, inferencing versus training, and use it optimally. That already provides some energy efficiency, because you use the right kind of building blocks for the right task, and you don't have to have a really big solution for a smaller task. So, for instance, running things on your laptop, running things on general-purpose CPUs in the data center, and, if you need to, going to specialized chips like Gaudi. That's the first aspect. The next aspect involves the software more, because the software which is popular in gen AI today uses very large models. They're getting more and more complicated, but you don't need them all the time; you can actually do very good, very useful things with smaller models which are really specialized for the task. An example: in a company's database there's lots of wisdom, right? And you want to equip your sales force with that wisdom, looking it up and maybe even automatically generating answers to customers, which is quite a common application. That doesn't need the really huge foundational models all the time; you can do it with smaller LLMs and therefore be more efficient as well. So it starts there; it's a hardware and software problem which needs to be tackled. The last thing I wanted to mention is the sustainability of the data centers themselves. How do you do the cooling? How good is your ratio, the PUE as we call it in the data center, what you actually get for the energy you put in, or are you using a lot of the energy for cooling? We have a number of collaborations with data centers where our products are used that are really leading in terms of their cooling infrastructure, re-use of energy, and other things. For instance, the LRZ in Munich is doing very sophisticated things, and there's a recent case in Cambridge in the UK where they installed a supercomputer built on Xeon and one of our GPU technologies as well. So there is a lot to do around energy-efficient solutions, and again, we are trying to put our technology everywhere there as well.
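The PUE ratio mentioned above is simply the total energy a facility draws divided by the energy that actually reaches the IT equipment; the closer it is to 1.0, the less is lost to cooling and overhead. The figures below are invented purely to show the arithmetic.

```python
# Illustrative PUE (power usage effectiveness) calculation with made-up numbers.
def power_usage_effectiveness(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """PUE = total facility energy / IT equipment energy; 1.0 is the ideal."""
    return total_facility_kwh / it_equipment_kwh

# Hypothetical data center: 1.2 GWh drawn overall, 1.0 GWh delivered to the servers.
pue = power_usage_effectiveness(total_facility_kwh=1_200_000, it_equipment_kwh=1_000_000)
print(f"PUE = {pue:.2f}")  # 1.20 -- the remaining 0.20 goes to cooling and other overhead
```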
I think keeping evolving and keeping innovating seems to be one of the messages coming out of Intel these days: really pushing forward, pushing the limits of what can be done in those different places, as you said, everywhere, the right workload on the right chip and the right chipset in the right place.

In a sustainable fashion.

In a sustainable fashion, which is huge, exactly. And I think, I mean, we're a little behind in the US on this, to put it mildly. A lot of the regulations in place here in Europe are starting to come around the world; I think the US will eventually catch up, but we'll see.

Well, I wanted to ask you about that. In your role as a go-to-market executive, how do you see the differences between US and European companies in terms of how they view sustainability, particularly with the different regulations? Is there a different mindset, a different approach you have observed in terms of how you devise your strategy?

Yeah, I think this topic comes up earlier here. When people think about solutions, it tends to come up earlier on the European side. That's one thing. And then there's sustainability in terms of the energy consumption and how you build your products, and we're doing a lot on that side as well, so that resonates. What we also see in Europe is the question of responsibility for AI solutions. That question is asked a lot: okay, I have this technology now, so who assures me that nothing goes wrong with it? We see that across the board, but Europeans start to ask those questions earlier, is my feeling. In practical terms, I think sustainability and responsibility for AI solutions are topics we will see everywhere, sometimes a bit earlier, sometimes a bit later. It also depends on the industry you're talking about. For instance, with autonomous vehicles you ask, can I really be sure of that solution, a little earlier than for a chatbot, where it doesn't make such a big difference if something goes wrong and the solution hallucinates sometimes. So it's also different per industry and a little different per geo, as you suggested.

I think, again, part of that seems to be in Intel's DNA, innovation and pushing forward, and there's this concept of five nodes in four years. Tell us a little more about that. I know your CEO is out there talking about it quite a bit.

Yeah, that's about our manufacturing technology. Obviously, when you build solutions and build chips, you need the foundational technology to really build the transistors and the chip itself, and then intelligent packaging technology to bring that together. That is obviously what is in our genes at Intel, and that's why our CEO likes to talk about it, because we are making really fast progress with our technology to build chips, manufacture them, and package them together. This is very necessary, and it's unique, because at Intel we design chips and we build them as well. Not so many companies do that, and we have experience in both. So it's an essential element of us going forward, and it also enables some of those AI capabilities I was speaking about before.

Excellent.
Well, Stephen, thank you so much for coming on theCUBE. A really interesting conversation.

Okay, thanks a lot for being here.

It was my pleasure, yes.

I'm Rebecca Knight, for Rob Streche. Thank you for watching HPE Discover. We've got lots more coming up, including guests who will be talking about customer success. You're watching theCUBE, the leader in enterprise technology coverage.