The CUBE's live coverage is made possible by funding from Dell Technologies, creating technologies that drive human progress. Welcome back to Barcelona. We're here live at the Fira. It's just amazing. Day two of MWC23. It's packed today. It was packed yesterday. It's even more packed today. All the news is flowing. Check out siliconangle.com. John Furrier's in the studio in Palo Alto, breaking all the news. And we're here live, really excited to have Udayan Mukherjee, who's a Senior Fellow and chief architect of wireless product at Network and Edge for Intel. And Manish Singh is back. He's the CTO of the Telecom Systems Business at Dell Technologies. Welcome. Thank you. We're going to talk about greening the network. I wonder, Udayan, if you could just set up why that's so important. I mean, it's obvious that it's an important thing, great for the environment, but why is it extra important in telco? Yeah, thank you. Actually, I'll tell you, this morning I had a discussion with an operator. The first thing he said was that electricity consumption is now more expensive than the total real estate that he's spending money on. So that is the number one thing: if you can change anything, bring that power consumption down. And if you talk about sustainability, look at what is happening in Europe, what's happening in all the electricity markets. That's the critical element that we need to address. Whether we are defining chips, platforms, or storage systems, that's the number one mantra right now: reduce the power consumption, because it's a sustainable planet that we're living in. So you've got CapEx and OpEx, and the big piece of OpEx is now power consumption, that's the point. Okay, so in my experience, servers are the big culprit for power consumption, and they're powered by core semiconductors and microprocessors. What's the strategy to reduce the power consumption?
You're probably not going to reduce the bill overall, maybe you can just keep pace, but from a technical standpoint, how do you attack that? Yeah, there are multiple defined ways. Obviously the process technology, the micro-architecture itself, is evolving to make these more low-power systems. But even within the silicon, in the servers that we develop, if you look into a CPU, there are a lot of power states. So if you have a 32-core platform, as an example, on every core you can vary the frequency and the C-states, the power states. And if you look into any traffic, whether it's radio access network or packet core, at any given time the load is not at peak. So your actual power consumption, whatever you are drawing from the wall, also needs to vary with that. If you look into this, there are huge savings. If you go to the Intel booth or the Ericsson booth, or any of them, you will see right now that for every possible workload, the packet core, the radio access network, any network, they are talking about energy consumption and how they are lowering it. These states, as we call them power states, C-state, P-state, have been built into Intel chips for a long time. The cloud providers are taking advantage of them, but telcos, even two generations ago, used to actually switch them off in the BIOS. They said, no, we need peak. Now that thing is changing. Now it's all about, how do I take advantage of the built-in technologies? I mean, I remember enterprise virtualization, Manish, was a big play. I remember PG&E used to give rebates to customers that would, you know, install virtualization software, VMware and others. And SSDs, yeah. And SSDs, you know, yes, because the spinning disk was a culprit, but now we're here with server power consumption. So how virtualized is the telco network? And then, beyond what Udayan is saying, are there other knobs you can turn? So what's your perspective on this as a server player? Absolutely.
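The per-core power management Udayan describes can be sketched with a toy model. The wattage figures below are illustrative assumptions, not measured Xeon numbers; the point is simply how platform draw falls off-peak when idle cores are parked in a deep C-state instead of running at full frequency.

```python
# Toy sketch of per-core power management: illustrative numbers only,
# not measured silicon figures.
ACTIVE_W = 5.0   # assumed per-core draw at full frequency (watts)
PARKED_W = 0.3   # assumed per-core draw parked in a deep C-state
CORES = 32

def platform_power(busy_cores: int) -> float:
    """Platform CPU draw when `busy_cores` run at full frequency
    and the remaining cores sit in a deep C-state."""
    idle = CORES - busy_cores
    return busy_cores * ACTIVE_W + idle * PARKED_W

peak = platform_power(32)   # daytime: all cores busy
night = platform_power(6)   # off-peak: most cores parked

print(f"peak draw:  {peak:.1f} W")    # 160.0 W
print(f"night draw: {night:.1f} W")   # 37.8 W
print(f"saving:     {100 * (1 - night / peak):.0f}%")
```

Under these assumed numbers, letting the draw track the load instead of running flat-out at peak saves roughly three quarters of the off-peak power, which is the effect the C-state and P-state discussion is driving at.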
Let me just back up a little bit and start with the big picture, to echo what Udayan said. Here on day two, in every conversation I've had yesterday and this morning, with every operator, every CTO, the first topic they're talking about is energy. And the reason is, A, it's the right thing to do, sustainability, but it's also becoming a P&L issue. And the reason it's becoming a P&L issue is because we are in this energy-inflationary environment where energy costs are constantly going up. So it's becoming really important for the service providers to drive more efficiency into their networks and their infrastructure, number one. Two, to your question on what knobs need to be turned. So Udayan talked about, within the Intel silicon, the C-states, the P-states, and all these capabilities that are being brought up, absolutely important. But again, if we take a macro view of it, first of all, there are opportunities to do an infrastructure audit. What's on, why is it on, does it need to be on right now? Number two, there are opportunities to do an infrastructure upgrade. And what I mean by that is, as you go from previous-generation servers to next-generation servers, with better cooling and better performance, you start to gain power usage efficiency inside a data center, and as you take that out into the networks, you start to achieve the same outcomes on the network side. Think about it from a cooling perspective: air cooling, but for that matter even liquid cooling, especially inside the data centers. These are all opportunities around PUE, power usage effectiveness, because improvement on PUE is an opportunity. But I'll take it even further, to the workloads that are coming onto it, core, RAN, these workloads, based on the dynamic traffic. Look, if you look at the traffic inside a network, it's not constant, it's varying. As the traffic patterns change, can you reduce the amount of infrastructure you're using, i.e.
reduce the amount of power that you're using, and then scale back up when the traffic loads are going up? So the workloads themselves need to become smarter about that. And last but not least, from an orchestration layer, if you think about where you are placing these workloads, depending on what's available, you can again start to drive better energy outcomes. And not to forget acceleration. Where you need acceleration, can you have the right hardware infrastructure delivering the right kind of acceleration to again improve those energy-efficiency outcomes? So it's a complex problem, but there are a lot of levers, a lot of tools in place, so that the service providers, the technology builders like us who are building the infrastructure, and the workload providers can all come together to really solve this problem. Yeah, Udayan, Manish mentioned this idea of moving from one generation to a new generation and gaining benefits. Out there on the street, if you will, most of the time it's an N-plus-two migration. It's not just moving from the last generation to the next generation, it's really from a generation ago. So those significant changes in the dynamics around power, density, and cooling are meaningful. You talk about where performance should be, and we start talking about the edge. It's hard to have a full-blown raised-floor data center at the edge everywhere. Do these advances fundamentally change the kinds of things that you can do at the base of a tower? Yeah, absolutely. Manish talked about the dynamic nature of the workload. So you're using a lot of this AI/ML to actually predict. For example, you have multiple cores in the system. If there are 32 cores in the system, why are they all running? Your traffic profile varies: in the office areas, at night everyone has gone home, and now everybody's working remote anyway. So why is the system running at full TDP, the total power, the extreme power?
You bring it down with different power states, the C-states we talked about. With deeper C-states, or P-states, you bring the frequency down. So there's a lot of that automation, even at the base of the tower. In a lot of our deployments right now, we are doing a whole bunch of massive MIMO deployments, virtual RAN in the Verizon network, actual cell-site deployments. Those edge centers are very close to the cell sites, and they're doing aggressive power management. So I think that's the point: you don't have to go to huge data centers. Even with a small rack of systems, four, five, ten systems, you can do aggressive power management as you build it out. Okay, no, I agree. If I may just build on what Udayan said: if you look at the radio access network, let's start at the bottom of the tower itself. With the infrastructure that's going in there, especially with Open RAN, there are opportunities now to do a centralized RAN, where you could do more BBU pooling. With that, not only on a given tower but across a given coverage area, depending on what the traffic patterns are, you can again get the infrastructure to become more efficient in terms of what the traffic and the needs are, and really start to benefit. The pooling gains are obviously going to give you a benefit on the CAPEX side, but from an energy standpoint they're going to give you benefits on the OPEX side of things. So that's important. The second thing I would say is that we cannot forget, especially on the radio access side of things, that it's not just what's happening at the bottom of the tower; what's happening at the top of the tower, especially with the radio, is super important. And that goes into how do you drive better PA efficiency, how do you drive better DPD in there?
This is where, again, applying AI and machine learning, there's a significant amount of opportunity to improve the PA performance itself. But not only that: looking at traffic patterns, can you do sleep modes, micro-sleep modes, deep sleep modes, turning down the cells themselves depending on the traffic pattern? So these are all areas that are now becoming more and more important, and clearly, with our ecosystem of partners, we are continuing to work on these. So we're hearing from the operators that it's an OPEX issue, it's hitting the P&L, they're in search of a PUE of one, and they've historically been wasteful, they go full throttle. And now you're saying that with intelligence you can optimize that consumption. So where does the intelligence live? Is it in the RIC? Is it all throughout the network? Is it in the silicon? Maybe you could paint a picture as to where those smarts exist. I can start. It's across the stack. It starts with the C-states and P-states we talked about; if you want to take advantage of those, that intelligence is in the workload, which has to understand when it can really start to clock things down or turn off the cores. Then, if you really look at it from a traffic-pattern perspective, you start to look at the RIC level, where you can do power optimization, and we are working with the ecosystem partners who are looking at applying machine learning on that, to see what we can really start to turn on, turn off, or throttle down, depending on what the traffic is. So yes, it's across the stack. And lastly, again, I'll go back to this: you cannot forget orchestration, where you have the ability to move some of these workloads and look at where your workload placements are happening, depending on what the infrastructure is and what the traffic needs are at that point in time. So again, there's no silver bullet; it has to be looked at across the stack.
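As a rough illustration of the RIC-level intelligence described above, here is a toy sketch of an rApp-style decision: given per-cell utilization in a coverage cluster, put the least-loaded cells to sleep as long as the remaining cells can absorb the traffic with some capacity headroom. The cell names, loads, and the 30% headroom threshold are all hypothetical, not any real RIC API.

```python
# Toy rApp-style cell-sleep decision. All thresholds and inputs are
# illustrative assumptions.
def cells_to_sleep(loads: dict[str, float], headroom: float = 0.3) -> list[str]:
    """loads maps cell -> current utilization (0..1). Returns the cells
    that can sleep while the cells left awake absorb the total traffic
    with the given capacity headroom."""
    total = sum(loads.values())
    # Try sleeping the least-loaded cells first.
    candidates = sorted(loads, key=loads.get)
    sleeping: list[str] = []
    for cell in candidates:
        awake = len(loads) - len(sleeping) - 1
        if awake == 0:
            break  # always keep at least one cell awake for coverage
        # Remaining capacity, derated by the headroom margin.
        if total <= awake * (1.0 - headroom):
            sleeping.append(cell)
        else:
            break
    return sleeping

# Night-time traffic: everything lightly loaded, so most cells can sleep.
night = {"cell-a": 0.05, "cell-b": 0.10, "cell-c": 0.08, "cell-d": 0.12}
print(cells_to_sleep(night))
```

A real xApp or rApp would of course fold in coverage maps, handover costs, and predicted rather than instantaneous load, but the shape of the decision, traffic in, sleep set out, is the same.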
And this is where, actually, if I may, in the last two years the sea change has happened. People used to say, okay, there are C-states and P-states in the silicon, every core, and the OS, the operating system, has a governor built in; we rely on that. That used to be the way. Now the applications are getting smarter. If you look at a radio access network, or the packet core, or the control-plane signaling application, they are more aware of what underlying silicon power states and sleep states are available. So whenever they find a scenario where there's not enough traffic, they immediately transition. The workload has become more intelligent. And the radio, the RIC applications we talked about: of every possible RIC application right now, the rApps and xApps, most of them are about energy efficiency and how to use it. So I think there's a lot more awareness in the last two years. And beyond what the RIC does, right, we cannot forget the infrastructure as well. I mean, that's the most important thing; that's where the energy is really getting drawn. And there's constant improvement on the infrastructure. I'll give you some data points. If you really look at the PowerEdge servers, from 2013 to 2023, a decade, there's an 85% energy-intensity improvement. These gains are coming from performance, with better cooling and better technology applied. So that's super critical, that's important. And also, to give you another data point, apart from the infrastructure: what CaaS layers we are running, and how much CPU and compute they require, that's also important. So looking at it from a CaaS perspective, are we optimizing the required infrastructure blocks for radio access versus core? And again, really tying that back to energy-efficiency outcomes.
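The shift described here, from a generic OS governor to workloads that watch their own traffic and request power states, can be caricatured in a few lines. The thresholds, the sliding window, and the state labels (P0, P2, C6) below are illustrative assumptions, not any vendor's API.

```python
# Sketch of workload-aware power-state selection: the application samples
# its own load and emits a power-state hint. Purely illustrative.
from collections import deque

class PowerStateHint:
    """Choose a power-state hint from a sliding-window average of recent
    load samples, so we don't bounce between states on every sample."""
    def __init__(self, window: int = 5):
        self.samples = deque(maxlen=window)

    def update(self, load: float) -> str:
        self.samples.append(load)
        avg = sum(self.samples) / len(self.samples)
        if avg > 0.6:
            return "P0"   # sustained high load: full frequency
        if avg > 0.2:
            return "P2"   # moderate load: reduced frequency
        return "C6"       # sustained low load: park idle cores deeply

# Evening traffic tailing off: the hint eventually drops to a deep C-state.
hint = PowerStateHint()
for load in (0.9, 0.8, 0.1, 0.05, 0.05, 0.02, 0.01):
    state = hint.update(load)
print(state)
```

The OS governor does something similar generically; the point Udayan makes is that a RAN or packet-core workload knows its own traffic pattern, so it can make this call earlier and more aggressively than a general-purpose heuristic.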
So some of the work we've been doing with Wind River and Red Hat and some of our ecosystem partners around that, for the radio access network versus the core, is really, again, optimizing for those different use cases, and the outcomes of that start to come in on energy utilization. So 85% improvement in energy intensity, while of course you're doing, I don't know, 200, 300% more work. All right, so let's say, and I'm just sort of spitballing numbers, but let's say that historically power on the P&L has been, I don't know, single digits, maybe 10%. Now it's popping up to much higher, right? I don't know what the number is. Is it over 20% in some cases? Do you have a sense of that? Let's say it is. The objective, I presume, is that you're probably not going to lower the power bill overall, but you're going to be able to lower the percentage of cost on the OPEX side as you grow, right? I mean, we're talking about 5G networks, so much more data. So it's possibly increasing. Yeah. And so, am I right about that? Is the best the carriers can hope for to sort of stay even on that percentage, or maybe somewhat lower that percentage? Or do you think they can actually cut the bill? What's the goal? What are they trying to do? The goal is to cut the bill. It is. And the way you get started on cutting the bill is, as I said, first of all, on the radio side, start to look and see where the improvements are. And look, there's not a whole lot there to be done. I mean, the PAs are about as efficient as they can be, but as I said, there are things in DPD and all that can still be improved. But then, you know, sleep modes and all. Yes, there are efficiencies in there. But I'll give you another interesting data point. We did work with ACG Research on our 16G platform, the PowerEdge servers that we've recently launched based on Intel Sapphire Rapids.
And if you look at the study there, it's a 30% TCO reduction: 10% in CAPEX gains, 30% in OPEX gains, from moving away from these legacy monolithic architectures to cloud-native architectures. And a large part of that OPEX gain really starts to come from energy, to the point of 800 metric tons of carbon reduction, and if you really translate that, it's around 160 homes' electricity use per year, right? So yes, the opportunity there is to reduce the bill. Wow, that's a big, big goal. Guys, we've got to run, but thank you for informing the audience on the importance and how you get there. I appreciate your time. One thing that bears mentioning really quickly before we wrap: a lot of these things we're talking about are happening in remote locations. Oh, back to that point of the distributed nature. So we talked about a BBU being at the base of a tower; that could be up on a mountain somewhere. You made the point: you can't just say, hey, we're going to go find ambient air, going to go locate next to a waterfall. Yeah, they don't necessarily have the greatest hydroelectric power. All right, we've got to go. Thanks, you guys. All right, keep it right there. This is wall-to-wall coverage of day two of theCUBE's coverage of MWC23. Stay right there, we'll be right back.