Hello everyone, and welcome back to theCUBE's live coverage of HPE Discover Barcelona 2023. I'm your host, Rebecca Knight, along with my co-host and analyst Rob Strechay. Rob, sustainability is the topic of the day here.

I think it's very appropriate, as we've talked about before, that COP is starting in Dubai today, so it's very much sustainability week. COP actually runs for two weeks, through at least the 9th, I think to the 12th of December. But it really starts at the foundations of these systems, at the processor, especially when you're looking at it from an AI perspective and the different workloads. So I think we have the right people on tap here to talk about this with us.

Indeed we do. I'd like to welcome Jean-Laurent Philippe. He is the Chief Technology Officer, EMEA, for Intel. Thank you so much for coming on, JLP.

Thanks for having me.

As we were just talking about, sustainability is becoming so much more front of mind for chief technology officers and for companies across the globe, across different industries. What is Intel's approach to sustainability, and how do you work with customers in terms of developing their own approach?

Yeah, absolutely. First, we didn't wait until sustainability became a buzzword to do something about it. Our first CSR report, our sustainability report if I can call it that, was published, I think, in 1994. So we started a long time ago, and I'll show you as we talk that we have made big progress. Now, it's extremely difficult; it takes a lot of energy and will to make it happen. So we look at it from every perspective you can think of. Let's say we talk about processors. Of course, the first thing we have in mind when we, as users, consider processors is that we run applications on them. How can the applications run more sustainably? But before that, we have to manufacture them. We have to package them.
We have sustainability in mind when we do this phase. But first, we have to design them, so we design them with sustainability in mind as well. We look at it from all angles, from the design to the manufacturing to the use that customers will make of them. And I would like to tell you about all three of those steps, because it really takes a lot of, when I say energy, that's kind of a pun if I can say so.

Yeah, that's the right one. I think it's a good place to start to dive in, because especially for servers, when you have to design processors that are going to be deployed for various different workloads, what goes into how you think about improving those sustainability features?

So as you rightly said, processors run workloads. So let's look at what customers run in terms of workloads. Let's work with them, identify the workloads that use a lot of energy and therefore have a significant carbon footprint, and try to minimize it. To do that, we design processors, take the fourth-generation Xeon Scalable processor family, Sapphire Rapids. I'm sorry, I love code names. Intel loves code names, so don't worry about it. We worked with our end customers. We looked at what they do with it, and they do a lot: software-defined storage, software-defined networking, HPC, SAP, plenty of things like that. So we looked at what we can do to make those applications run more sustainably. In previous generations of processors, we had already introduced AVX-512, which can do a lot of compute in one cycle, like multiplying and adding 512-bit data sets. We went one step further, because AI is important. Now, AI is a buzzword today, but the fourth-generation Intel Xeon processor was designed several years ago.
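As a rough illustration of the "a lot of compute in one cycle" point about AVX-512, the back-of-the-envelope arithmetic looks like this. These are textbook numbers for single-precision floats in a 512-bit register, not figures quoted in the conversation:

```python
# Rough arithmetic behind the AVX-512 "wide compute" idea:
# one 512-bit register holds 16 single-precision (32-bit) floats,
# and a fused multiply-add counts as two floating-point operations
# per lane. Illustrative textbook numbers only.

REGISTER_BITS = 512
FP32_BITS = 32

lanes = REGISTER_BITS // FP32_BITS   # fp32 values processed per instruction
flops_per_fma = 2 * lanes            # multiply + add on every lane

print(f"{lanes} fp32 lanes per register")
print(f"{flops_per_fma} FLOPs per FMA instruction")
```

So a single fused multiply-add instruction on a 512-bit register performs 32 floating-point operations, which is the sense in which one cycle does "a lot of compute."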
It takes several years to design it and bring it through manufacturing: design, manufacturing, test, et cetera. We had already decided that AI was important, so we added an integrated AI accelerator unit, and we have measured on applications that it can increase performance by up to 14 times. That means you can run an application in 14 times less time, so it's decreasing the runtime, and therefore the energy for the time to solution. That's increasing sustainability. We have added other accelerators, like the Data Streaming Accelerator, which is for accessing data faster. We have a load balancer, which load-balances workloads, especially for networking applications. We have analytics accelerators in it. And guess what? They are all in there. If you don't use them, they don't use power. If you do use them, you are more sustainable. So that's how we did it: we talk with customers, we listen to customers, and we try to implement what they ask us to do.

To me, that works. And what's interesting, and Antonio kind of talked about it earlier, is that Antonio has always leaned into the hybrid nature of workloads and where things are going to be deployed. One of the things he also talked about, and this piggybacks off what you were saying, is that things at the edge are going to run on servers. That's just what's going to happen, because if you have inference, you don't necessarily need a full GPU to run it. If you're doing security analysis of CCTV or something like that, you're probably doing that on a server at the edge; the models have already been trained and pushed down. Is that what you're seeing in a lot of the feedback you're getting as you look into the future?

Absolutely, and that's one interesting thing. Everybody looks at AI and says AI equals ChatGPT, more or less.
And they say, wow, I need a lot of GPUs to do the training. Yeah, maybe. But once you're done with the training, you want to do a lot of inference, and inference is done at the edge. It's done where the data is, especially in manufacturing, or in your car, where you don't want to wait for a data center to respond to you. On your phone, you want it immediately. So yes, absolutely: inference is done where the data is, or where the answer is needed. That's why we know there will be Xeon-based servers at the edge, and since they are already there, they can do the workloads they are meant for plus AI inference, and they can also do some refinement, let's say fine-tuning, of some LLM training, because that's also something they can do very, very well.

So that's brilliant. Now, once those processors are designed and we have discussed with customers what they want to use them for, we need to manufacture them. As you probably know, our CEO has said we are doing five nodes in four years, so five manufacturing nodes: Intel 7, Intel 4, Intel 3, Intel 20A, and Intel 18A. The fourth-generation Intel Xeon Scalable processor is on Intel 7, which is in high-volume manufacturing. We have launched, if I can say so, the Fab 34 manufacturing plant in Leixlip, Ireland, which is on Intel 4. We will be announcing at AI Everywhere on December 14th the Meteor Lake processor as well as the Emerald Rapids processor, and Meteor Lake is the first one on Intel 4. When you manufacture with Intel 4, you have something which is a little bit smaller, but you also have a state-of-the-art manufacturing fab, which is more and more sustainable. We use, as much as possible, 100% renewable electricity. We recycle all of the water. We have minimized waste to landfill down to about 5%, our goal being 0% at the end of the decade.
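The earlier claim that a faster run means less energy to solution can be sanity-checked with simple arithmetic. The numbers here are assumptions for illustration: a 350-watt part (the TDP figure mentioned later in the conversation) running at full power, and the up-to-14x speedup quoted for the integrated AI accelerator:

```python
# Sketch of the "faster run = less energy to solution" argument.
# Assumed numbers for illustration: a 350 W processor running at full
# power, and a 14x speedup on an AI workload that originally took 14 hours.

def energy_to_solution_wh(power_watts: float, runtime_hours: float) -> float:
    """Energy consumed for one run of the workload, in watt-hours."""
    return power_watts * runtime_hours

baseline = energy_to_solution_wh(350, runtime_hours=14.0)
accelerated = energy_to_solution_wh(350, runtime_hours=14.0 / 14)

print(f"baseline: {baseline} Wh, accelerated: {accelerated} Wh")
print(f"energy reduced {baseline / accelerated:.0f}x")
```

At constant power, a 14x reduction in runtime is a 14x reduction in energy to solution; in practice the accelerated run may draw somewhat more power, so the real saving sits below that ceiling.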
We launched in 2020 the RISE 2030 program: R for responsible, I for inclusive, S for sustainable, and E for enabling. Sustainable is part of it, and we are doing it right now. I think 93% of the electricity that Intel consumes in our manufacturing plants is renewable electricity, and in some countries, like the US and a couple of others, we are at 100%. For water, same thing. So we are getting there, but it takes a lot of will to make it happen, because you have to build it into the manufacturing plant that you're designing.

This is really fascinating, how you're reducing your carbon footprint at every stage of the process. But as you just pointed out, it takes a lot of will, and that is something a lot of companies are in short supply of to get anything done, let alone something that feels maybe a little bit nebulous. So when you think about your role as chief technology officer, it's a lot of change management too. It's a lot of bringing people along and getting people excited and interested and committed to this.

Yeah, absolutely. We have to tell them: look, what do you want to leave behind for your kids and your grandkids? Can you help? Remember, the mission of Intel is to make the life of every person on earth better. That's today, but that's also tomorrow. So we need to minimize the carbon footprint that we are leaving behind. We can do it by designing new processors that are better for that, and new fabs, and also through usage, because Intel does a lot of software enabling; the software needs to take advantage of the features that we bring in our processors. So from one end to the other, we are working with the whole ecosystem and making sure that our end users get the benefits of the better sustainability that we bring.

Yeah, and I think it's key, right?
Because when you talk to customers, and customers being HPE and the people actually using the products at the end, they must be saying, hey, listen, we're concerned about our Scope 3 emissions. And as part of that, they're buying from HPE, but HPE is buying from you, and you're buying components from others, and they need that full lifecycle as well. And HPE is very advanced in that. I was giving a keynote on sustainability at HPE TSS earlier this year, and I was discussing with execs at HPE, and they said, we want our suppliers to meet a minimum level of sustainability, and we will help them. And I said, hey, how do you rate Intel on this? And they said, you're very good on that, you're already advanced. That's good. We can always do better, and let's do better together, but we are on the same page with HPE on this, and that's good because it will benefit the end users.

What's been the big thing with the Gen11 ProLiant servers with the fourth-generation Xeons in them? Because one thing is to be sustainable, but there's also the total cost of ownership aspect of it. How is this really coming together in that Gen11 platform?

Because we have more performance, better performance per watt, than the previous generation, you get a better TCO. So that's already a good thing. Now, we know that processors don't always run at 100%, and if you can find ways of making them run at a lower power dissipation, which means lower power consumption, then you are also contributing to reducing the impact on the environment. In the fourth-generation Xeon processor, we have built in a mechanism by which we can reduce the power dissipation, and therefore power consumption, by about 20%. So for a processor which is 350 watts, that's 70 watts, and in a dual-socket system, it means 140 watts. And we have enabled that on applications that don't use the full features of the processors all the time, so that they don't suffer in terms of performance.
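The power-saving figures quoted here check out arithmetically. A quick verification, using only the numbers stated in the conversation:

```python
# Checking the power-reduction arithmetic from the conversation:
# roughly 20% off a 350 W processor, doubled for a dual-socket server.

TDP_WATTS = 350      # per-socket processor power quoted above
REDUCTION = 0.20     # ~20% reduction from the built-in power mechanism
SOCKETS = 2          # dual-socket server

saving_per_socket = TDP_WATTS * REDUCTION
saving_per_server = saving_per_socket * SOCKETS

print(f"saving per socket: {saving_per_socket} W")
print(f"saving per dual-socket server: {saving_per_server} W")
```

Scaled across a rack or a fleet, that per-server saving is where the TCO argument comes from: the same 20% reduction multiplies across every deployed server.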
So we can still reduce the power consumption a little bit and still get a good TCO. Now, the other thing about TCO, when you do inference, and I'm coming back to what we were talking about earlier: you know that there are millions of Xeon processors in the data centers. If you have them, and you know that you can run AI inference on them without having to add any hardware and without a higher electricity bill, guess what? You're going to take advantage of that. If you have a seven-seater car, you don't need to buy a five-seater plus a two-seater; you can get seven people in the car without increasing the gas you're using by that much. Same thing: we are giving people the ability to use something they are familiar with. We do the ecosystem enabling for the software that makes it easy for them to use, at a low carbon footprint and a good TCO.

Jean-Laurent, thank you so much for coming on theCUBE. A really fascinating conversation. I'm Rebecca Knight, for Rob Strechay. Stay tuned for more of theCUBE's live coverage of HPE Discover. You are watching theCUBE, the leader in high-tech enterprise coverage.