Good morning, mobile community, and welcome back to beautiful Barcelona. We're here at Mobile World Congress, kicking off our day-two coverage of power-packed action, live on theCUBE for the entire conference. My name is Savannah Peterson, joined by my co-hosts, Dave Vellante and John Furrier. We've also got a CUBE veteran, someone we had on the desk just four months ago at Supercomputing. Ihab, so nice to have you with us again.

Thank you. I really enjoyed that, and I had to come back.

It's a good sign, right? If you're willing to be subjected to me asking you questions again, I feel strong about it. Dell has a big, beautiful booth. Truth be told, I actually hung out and had a beer at your booth last night. Tell us about your presence here and why this show is so important for Dell.

Yeah, this is, I think, our third MWC since we started coming back. We have a lot of engagement with the service providers here, with the telcos. We have a lot of new products, and yesterday, if you saw the keynote, we announced a partnership with AT&T as a key customer. So we're making great progress solving problems and helping the telcos. And this year we added AI to the capabilities we showed at our booth.

You know, one of the things I noticed in your last interview at Supercomputing with Broadcom's Jas Tremblay, and we have Charlie Kawwas coming on tomorrow from Broadcom, is a theme you've been talking about a lot lately that I want to unpack: with AI, we now need new systems to make all of this work. And these systems are clusters. They are now supercomputers, a combination of chips, NICs, and switches. So a new systems revolution is emerging. Our thesis is that this revolution is young, and, you know, practitioners today are re-architecting systems, almost like data centers all over again, but in a new way, a kind of cloud way.
Do you agree with that, and what's your vision as these AI systems need to run at scale: high-performance, new kinds of cluster systems? What's your view?

Yeah, John, that's a very good observation. AI is not a server and a network; it's a full system. And we have found that every one of our key AI customers needs tuning for the system. There are two changes I see happening. One is that the performance of the GPU is very highly dependent on the network. You know how hard it is to find GPUs these days and how expensive they are; leaving GPU capacity stranded is the last thing you want. So the network has become a very big focus of attention to get the maximum performance and throughput out of the system. And the network has two components: the network inside the server, from the NIC and PCIe switching, and then the network that connects all of the servers together in the data center. But you know what's little talked about and just as big? Software optimization. You have to tune the AI software for every deployment based on what the application needs, and that's also part of the system design, as you observed, John.

So it was interesting. We had Michael Dell on last night, and we asked him to take us back to how Dell got into this business and what he saw. And he said, well, the telcos came to us and said, we want low-cost and efficient equipment: servers, storage, et cetera. So we said, okay, here. And they said, oh, thank you, but we need specialization. We need things that are hardened. We have very unique use cases. And Michael said, well, you didn't tell us that, but okay, so we went back and we started building. Now you've got a mainstream server and storage line, and then you customize for this industry, right? Can you describe that a little bit? Where are we in that maturity? How's it going? What does the uptake look like? You mentioned AT&T, and I know you're working with DISH Network and some other telcos. Where are you on that roadmap?
I think the big milestone we've already passed is how you handle multi-cloud. The big challenge customers had for the last few years, telcos included, is that the big cloud providers each offer capabilities they wanted to utilize. They wanted to mix and match the best of all of them. And we've solved multi-cloud for them by having common storage and letting them pick their choice of software from the different clouds on different compute, connected together, with data mobility inside the cloud and back to on-prem.

The next wave now is driven by AI. Now that I have my multi-cloud, I'd like to get productivity and innovation out of my systems by applying AI. So we're at the phase of: how do AI and multi-cloud work together? As John said, AI is a system by itself, and you still need to optimize each piece, but the industry is moving toward combining the two in a single horizontal cloud, as people are calling it.

So what were the telcos doing prior to this capability? They were just purpose-building for each cloud and ended up with silos, right?

Yeah, I think before this they had an IT system based on IT technology, and then they had deployments at cell towers and data centers that were independent. And then they started to put some services in the cloud. But you can't enable service delivery and innovation in that model. You can't be responsive and fast when a customer wants to launch a new service. Being able to take the IT, the network, what's deployed at the cell tower, and what's deployed in the data center and cloud and make it one not only gives them significant efficiency but also speed when it's time to deploy services. That's where they are now.

Speed, definitely a theme both of this show and of Supercomputing. Since we just talked about silos, I want to dig in there a little bit. The telco industry is historically known for being much more siloed.
Your partnership with AT&T is all about elevating open source technologies. Can you tell us a little bit more about that?

Yeah, telcos are no different from all our IT customers or enterprises. They want to take advantage of the speed of innovation. However, at the same time, telcos really care about high performance, and they care very much about security and quality. So they need the combination of a secure, dependable relationship with a single partner who can deliver all that capability, plus the ability to take advantage of the velocity of innovation that's happening in software. And we're giving them that choice. We are still big partners of Nokia, Ericsson, and others, and at the same time, we're enabling a lot of the new software. This is all done via a horizontal cloud architecture where you can deploy what you want.

I've got to ask the open question around Ethernet versus InfiniBand. We had a big debate yesterday on theCUBE about this. Let's keep it going. Okay, so I've said InfiniBand is dead, but that's me, and only for certain use cases.

Let's look at Nvidia's earnings.

Okay, hold on. Nvidia, Mellanox: you've got to give them props for that. However, this is the potential scenario: InfiniBand is used for purpose-built, high-performance use cases, but Ethernet is much more of a longer-tail ecosystem play. So I can see them both interoperating, with Ethernet being the lower-cost option. The question is, can it perform in the NIC architecture in the new system? Can NICs deliver the performance at scale when you start putting these NIC-and-switch systems together for AI? What's your take on that?

Yeah, I'm going to go back to, I'm going to use your words: it's a system design. And therefore, we have passed the days where you make a network decision independently of compute. We have now also gotten to a place where the network design is dependent on the application. Every model is different for AI.
So some models need very high-velocity data exchange between GPUs, and for those, InfiniBand is still the best you can find. And a lot of customers love the ease of use of InfiniBand. Others are looking for a more distributed model and are starting to push inferencing, where Ethernet becomes very attractive. In general, I think the industry believes Ethernet is the future; the question is when, how, and for what applications. So what we're seeing is that most customers continue to use InfiniBand, but every one of them is starting to experiment with Ethernet, and some of them are taking the step to build Ethernet networks.

You think the performance will be there?

Ethernet performance will be there based on what we see coming in silicon and in software on the NICs. The NIC is what makes it possible.

I think there's no question that Ethernet will continue to gain share; I think we all agree on that. What was fascinating to me about the Mellanox acquisition that Nvidia made is they had a bottleneck, and they said, oh, we're going to solve that, and we're going to build a system. And that system was purpose-built for what happened to be perfect for AI; at the time it was high-end gaming and graphics. So I personally think that Nvidia is going to continue to lead in the very highest-performance training applications. And over time, I've always said inference is going to be the dominant workload, and the two are certainly going to coexist. But I don't think InfiniBand is dead. It has a very long life ahead of it.

You've got InfiniBand and Ethernet; let's just put that out there. You've got Nvidia and Broadcom. Both make switches, both make NICs. They're the only two companies that do both. So, okay, Nvidia is InfiniBand, Broadcom is Ethernet. Nvidia will do Ethernet too; they'd be crazy not to. Again, my point is, if you go back to the concept of NICs and the systems, what other elements are in the system?
You've got NICs, you've got switches. What else is in the AI system?

Yeah, but let me answer one question first. Yes, we have Ethernet with Nvidia. We have been jointly innovating on the most advanced Ethernet for AI already. We built a joint cluster called Israel-1 that has Dell's XE9680 with Nvidia's Ethernet. I put out a blog on that. It's the most advanced Ethernet design anywhere, and that's joint Dell and Nvidia. At the same time, with Broadcom, we have incredible innovation on Ethernet. So we give customers the choice today.

Got it.

You choose your GPU silicon, and then you choose your network, and we'll optimize Ethernet or InfiniBand based on the application.

And you're saying that because AI has to be tuned in both software and hardware, systems have to adapt to the workloads.

That's correct. The big aha for people, I would say, is that GPU-to-GPU communication is very fast, like NVLink inside the server, and you have to incorporate that into the overall topology. And therefore you need to tune the software.

So it's interesting. Wall Street right now, of course, is crazy about Nvidia, but Wall Street sees Broadcom as the number-two AI company from a picks-and-shovels standpoint. Why? Because of all the interconnections in that network. This is Charlie Kawwas's theory: it's no longer just a CPU-centric world, it's a connectivity-centric world, and the network is the bottleneck now.

That's, again, a very excellent observation. The performance of AI, and by the way that's both training and inferencing, depends on three things. One is the GPU itself. Number two is the high-bandwidth memory you have; how much memory you have is very critical because the model has a lot of data. And number three is the memory and network bandwidth to connect it all. Therefore, if the network is constraining the ability to move data into memory, it impacts the performance significantly. That's why networking has to be tuned with AI.
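The point that GPU performance is gated by memory and network bandwidth can be made concrete with a quick back-of-envelope calculation. This is our own illustration, not a Dell tool; the model size, link speeds, and FLOP counts are all assumed numbers:

```python
# Sketch: compare per-step compute time against the time a ring
# all-reduce needs to synchronize gradients across GPUs. If the
# all-reduce takes longer than the compute, the GPUs sit idle
# waiting on the network (stranded capacity).

def allreduce_seconds(param_count, gpus, link_gbps, bytes_per_param=2):
    """Ring all-reduce moves ~2*(N-1)/N of the gradient bytes per GPU."""
    grad_bytes = param_count * bytes_per_param        # fp16 gradients
    wire_bytes = 2 * (gpus - 1) / gpus * grad_bytes
    return wire_bytes * 8 / (link_gbps * 1e9)         # bits / (bits per s)

def compute_seconds(flops_per_step, gpu_tflops):
    return flops_per_step / (gpu_tflops * 1e12)

# Hypothetical workload: 7B parameters, 8 GPUs, 1e14 FLOPs per step.
comm_100g = allreduce_seconds(7e9, 8, link_gbps=100)
comm_800g = allreduce_seconds(7e9, 8, link_gbps=800)
step = compute_seconds(1e14, gpu_tflops=300)

print(f"100G comm {comm_100g:.2f}s, 800G comm {comm_800g:.3f}s, "
      f"compute {step:.3f}s")
```

With these made-up numbers, a 100G link makes the step network-bound, while an 800G link flips it to compute-bound, which is the tuning point being made here in miniature. Real deployments overlap communication with computation, so this is only a first-order intuition.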
What's going to be interesting is that on the earnings call, one of the analysts asked Nvidia a good question: will today's training engines become tomorrow's inference systems? And of course, Nvidia said yes. But the question is going to be, will there be new entrants that can be more cost-effective at inference than legacy Nvidia GPUs? That's going to be an interesting dynamic. I don't know the answer to that. Time will tell.

The answer is yes; we already know that. Not only do we work with the big silicon providers, we're talking to a number of other companies, and we've deployed multiple alternatives. The secret is in the quantization and optimization for the models.

I understand that, but the point is that a customer will have depreciated that asset, so it's free for them to then use it and pass it on. And that's an interesting dynamic. The same assets can be used.

We know that dynamic well. It's just new to AI.

Well, the thing that came out of Supercomputing, our observation, was these new micro-clouds. One of our new analysts at theCUBE Research, David Linthicum, who writes for InfoWorld as well as SiliconANGLE, coined the term micro-cloud. We call it super-cloud. But you're starting to see these specialty clouds emerge that are building these systems. They're now the new customer, because it may not be on Amazon; it might be in a data center they're hosting. So you now have these new service providers emerging. How do you think that's going to play out, given that the telcos potentially could be doing that? Is that a real market? That's the question I'm watching right now: these specialty clouds, and by the way, we'll have specialty models too, Dave. What's your view on this evolution of these micro-clouds?

The big discussion item at this event is: can telcos help solve the massive shortage of data center capacity?
There are enough calculations out there showing an enormous shortage of data center capacity. One telco told me, we were going to shut down all these edge data centers, but I'm glad we didn't; those could become helpful. I don't know if I know the answer, but I think that's the discussion.

You've got power and cooling; you're in a good spot. And did you see the headline today? The heavyweight EU operators demand a new deal. They're saying they're $200 billion short in CAPEX. So maybe that's a little negotiation so they can get the handcuffs taken off and do more consolidation in Europe like we have in the US. But to your point, there's a capacity constraint that somebody needs to meet.

Actually, I want to build on that, since we've had conversations both in North America and in Europe. Are you noticing any different trends in your customers' needs or the solutions they're seeking based on geography?

I think there were differences before because of the geography, for the most part. The European telcos found it easy to introduce innovation in smaller countries, while in the US, everything has to be planned at large scale. But we see that over time they're becoming more equivalent in design and architecture.

On that point: silicon diversity is one topic we talked about, and sovereign cloud is obviously one here in Europe. As these become more policy-based questions in the new system, is there a GenAI stack you see emerging, from the low-level silicon, whether merchant or custom, all the way up to the apps? What does that stack need to look like to support all the diversity, from running policy on moving data in sovereign clouds to making sure Ethernet is working with this and InfiniBand is working with that? You have this kind of new architecture. What does the stack look like in your mind for GenAI for telco?
GenAI for telco has to go all the way to the device. And we're finally starting to see some models that work on mobile devices. Gemma, which just came out from Google, works on mobile devices, and that's the new capability. I think it's going to help justify some of the 5G deployments, because inferencing is an app for 5G that we were not talking about last year or the year before. However, at the same time, they need open source models to be able to deploy them everywhere. It's going to be a combination of multiple types of models and software stacks: what's on the device, what's in the data center, and what's in the cloud.

You know, you mentioned 5G. There's news out there about seamless migration between private and public networks, mobility across multiple MNOs. I saw that Kyndryl deal with HPE and others, and you guys are doing the same with private 5G. Is 5G going to have its own LLM? Do you see 5G becoming AI-enabled? There's a lot of operational data, a lot of user data. Does it become its own foundation model? What's your vision for 5G?

The data. Telcos have a tremendous amount of data and a tremendous amount of insight. Think of how you plan a 5G network today; a lot of it is still done the old way. All of that can become GenAI models. So I think for GenAI it's really simple: data fuels the model. It's all about how much data you have, and the telcos have a lot of data. At the same time, you need to protect the data, and you have governance and security. Those are the things that have to be navigated to figure out the right answer. It's still early stages, but they're in a good position.

A good position: having data and being able to act on it. And AI could provide significant efficiencies. Wow, well, Ihab, this was as exciting a conversation as the one we had in Denver. Thank you so much for joining us again.
We look forward to having you back again in another four months, hopefully in another exotic location, talking about more cool things. John and Dave, always a pleasure to share the stage with you. And thank all of you for tuning in to our four days of coverage, live from Barcelona at Mobile World Congress. My name's Savannah Peterson, and you're watching theCUBE, the leading source for enterprise tech coverage.