Welcome back everyone to theCUBE's live coverage here in Barcelona, Spain for MWC, Mobile World Congress. I'm John Furrier, your host, with Dave Vellante, extracting the signal from the noise. We have a CUBE alumni in the house, the CEO of HPE, Antonio Neri, here at the event, giving tons of talks. Great to see you. Thanks for coming on theCUBE.

Yeah, thank you for having me, and great to be back.

Looking forward to seeing you at HPE Discover, coming up. Your move with Juniper recently was amazing. Big press around edge to cloud, cloud to edge, AI networking. It's been the big talk in the industry since the news. Here at Mobile World Congress, this is the conversation: AI networking, telcos and enterprise coming together. What's the current situation for you guys right now? What are you talking about here?

Well, the announcement of the acquisition of Juniper is another stepping stone in our strategy. As you recall, since 2018 we have been saying that the enterprise of the future will be edge-centric, cloud-enabled, and data-driven. And our first step was all about deploying the connectivity to help enterprises digitize and automate. The second stepping stone was to offer a cloud experience everywhere, and that's where we innovated: HPE GreenLake is what we call the fourth cloud. It's hybrid by design, but ultimately it delivers all the services customers need with a cloud-native experience. And the last step, obviously, is the data-driven approach. We didn't know at the time, and I guess we were lucky in many ways, that AI would explode through these generative AI technologies. When I think about the Juniper acquisition, it's about accelerating the deployment of AI through the network, because ultimately you need an AI-native architecture that brings the data to the right computational resources to accelerate business productivity and data insights. And so that's what we're doing.

You call it a stepping stone.
I think it's more of a step function. I mean, this is a major move for you, and I wonder if you could talk about what that means for HPE. Are you becoming a networking company? I think you said that. What does that mean? What does that mean for your customers and the industry?

Well, from a customer point of view, obviously we're going to drive a modern network architecture, which they have been waiting for, I mean, for the last two decades. They have a lot of choices, so we're going to give them a very strong, modern, AI-driven choice. And this is the first time, Dave, that the company, on a combined basis with Juniper, will have the entire technology stack to compete in the $120 billion TAM. Think about it: silicon, infrastructure, operating system, software, security, and services, to offer everything from the cloud all the way to the edge, or the edge to the cloud. And so we intend to drive a modern architecture where we bridge both the cloud-native world and the AI-native world. From a shareholder point of view, this is a no-brainer. I mean, we're going to double the business in networking. That's going to basically increase our relevancy, and obviously with a different, higher-end margin profile that will generate more profitable growth for our shareholders. And the acquisition, even though it's large in terms of numbers, $13.6 billion, actually only requires $450 million of synergies, more than half of which is in the G&A base. So super excited from a relevancy standpoint and a shareholder standpoint.

So it's accretive, throws off cash flow, doubles the business. I mean, like you said, it's kind of a no-brainer from an investor standpoint. Assuming you execute flawlessly, which I know you will.

That's our goal. And GreenLake has good traction on the execution there.

I've got to ask you on the AI side, because we've been talking about this on theCUBE.
I go back a decade now with you on theCUBE, and it was us talking about edge, intelligent edge. We think we had edge boxes; I remember when your devices came out. Now with AI, the acceleration factor. What's your view on how fast this market moves? We're in the telco space, which is not the most nimble industry, but now with AI there's an opportunity to move faster. What are the accelerants that you see AI bringing to all industries, whether it's telco or any other vertical, or enterprises themselves? Is it truly an opportunity to go faster? What's your view on AI's impact in accelerating transformation?

First of all, I believe AI is going to be the most disruptive technology, at least in a lifetime, at least in my lifetime. It's going to be bigger than mobile and cloud were, that's for sure. Second is that, to me, AI is all about business productivity first and data insights second. And from that perspective, when you look at AI, you have to look at the life cycle from training to fine-tuning to inferencing. All the outcomes come from the inferencing side, but right now most of the action has been on the training side. Think about these large foundational models. Everybody talks obviously about OpenAI, but then you have the Google side and many others, which are backed by venture capitalists to create this momentum. But the reality is that enterprises as we know them will take a model, because they're not going to build their own models, and fine-tune it, giving context to the model with their data. And so in the cloud-native world, the simple architecture is that you have thousands of workloads running on thousands of servers, so you share everything. But in the AI-native world, you may have one model or one workload running on thousands of accelerated compute nodes, and the network becomes the core component of that, in addition to the accelerator.
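The sharing contrast Neri draws here, thousands of independent workloads versus one job spanning thousands of accelerators, can be sketched with some toy arithmetic. This is our own illustration, not HPE's numbers:

```python
# Illustrative sketch (not HPE code): why AI-native jobs stress infrastructure
# differently than cloud-native ones. In a cloud-native pool, each server runs
# an independent workload, so one slow machine only delays its own tenant.
# In a synchronous data-parallel training job, every step waits for the
# slowest worker, so one laggard drags down all accelerators at once.

def cloud_native_throughput(per_server_rates):
    """Independent workloads: total throughput is just the sum."""
    return sum(per_server_rates)

def ai_native_step_time(per_worker_step_times):
    """Synchronous training: each step lasts as long as the slowest worker."""
    return max(per_worker_step_times)

# 1,000 servers, one of them running at a quarter speed.
rates = [1.0] * 999 + [0.25]
print(cloud_native_throughput(rates))   # 999.25 -- barely noticeable

# The same fleet running one training job: the straggler sets the pace.
steps = [1.0] * 999 + [4.0]
print(ai_native_step_time(steps))       # 4.0 -- the whole job runs 4x slower
```

That straggler effect is one reason a uniform, high-bandwidth network matters more for AI-native workloads than for general cloud hosting.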
That's why this Juniper acquisition is so important, because HPE has unique IP in the silicon and software in the network for AI, and then Juniper brings obviously a large portfolio that also brings the AI-driven approach. So I think for the telecommunications sector, there is an opportunity to materialize the original vision of edge compute by bringing AI inference into the edge of the network.

I've got to get your perspective. First of all, I want to talk more about the network, because we've always said that's where the action is, on the networking side. But if you look at right now, open-source foundation models are catching up to the proprietary ones, or what they call the pioneer models, but there are still the OpenAIs with the managed services. You've got Llama and Mistral growing in capabilities and adoption. The problem is, where do you host that? So everyone's like, hmm, I'll buy a server and host it on-premises. So there's a huge uptake in on-premises hosting, and/or micro-clouds, or what we call super-clouds, emerging. Kind of private cloud. It's not really private cloud, but GPU clouds, specialty clouds, are emerging on-premises. So, server sales. You've got a smile going on there, I can see that.

I mean, more servers are going to move, no question about it. Whether it's companies building the models, or maybe hyperscalers offering some of the services that you describe, they will take any compute power anywhere they can get it, okay? But when you talk about enterprises, they are very worried, but at the same time they will consider other aspects like security, data privacy, sustainability, responsibility, right? And this is where it's important that they have flexibility and choice. There will be enterprises that are going to deploy this fine-tuning or inferencing in their own data centers, which you talked about, on-premises. And obviously a lot of the inference will go to the edge.
But they also may use a public venue, but in a virtual private cloud, because they don't have the data center expertise, and the space, and the cooling, and the power. Many of these systems are liquid-cooled supercomputers, in many ways. And this is where HPE is unique and differentiated. On one hand, we have had decades of supercomputing expertise. We now have to optimize the infrastructure for the models. In the past we did it for weather forecasting, for simulation and modeling, and today we're going to do it for generative AI, with an analytics framework that reduces the cost and the barrier to entry. But at the same time, we now have to run these systems at scale, with cooling and data center services. That's what we announced last year; you were part of it at HPE Discover. These instances are public, but actually what customers are doing is building virtual private clouds within those instances and then subscribing to them.

Supercomputers as a service, basically, is what you're getting at.

Yeah, in many ways you are democratizing supercomputers for the enterprise.

An observation and a question. I mean, the networking business is exciting to me because the gross margin of that business is so much better than it is in servers, because let's face it, for years Intel has sucked up all the gross margin. NVIDIA's gross margin was 77% on the last earnings call. But I wanted to ask you, I want to go back to inference. You talked a lot about inference, inference to the edge, where the action is going to be. Most of it today is training in the cloud. It was interesting to hear NVIDIA say on their earnings call that 40% of their GPUs are going to inference, and Jensen said that's probably understated. Can you give us your perspective? Because you don't have a dog in this fight; you'll take whatever GPU or CPU makes sense for your customers. How do you see that inference playing out? When you talk about inference, what are you specifically talking about?
What's under the covers, and what does it mean for HPE and its customers?

First of all, NVIDIA and HPE have a phenomenal partnership. Jensen and I talk very regularly about what we can do together to accelerate adoption. And he understands HPE brings a unique set of capabilities, including the go-to-market to reach these customers, but also everything I talked about early on: the ability to deploy and run these models at scale. But from the inferencing perspective, I think customers now are understanding what the use cases are for the investment required that would give them the ROI. And obviously chatbots are a great example of it. Automation of sorts in specific functions, whether it's legal, marketing, or others, are examples of it. We're still early. That's why I'm excited about what comes next, because it's a big opportunity ahead of us. But the reality is that most of the action we have seen so far has been on the training side. Now we start seeing momentum in the enterprise, whether it's large companies or medium companies, that want access to this technology. And that's why in December, here in Barcelona when we did HPE Discover, we announced the pre-configured solution with NVIDIA that does both fine-tuning and inferencing, and we already see the pipeline growing very rapidly.

So when you think about inferencing in your customer base. I mean, when I think inferencing, I always think about CPUs, but now I hear these are GPUs actually, NVIDIA GPUs, and obviously you're talking about your relationship with NVIDIA. How should we think about that? It seems like it's going to be very specialized, with a very long tail of different architectures to meet different needs.

Well, it depends on the model. So your phone soon will be able to do 20, 30, 40 billion parameters. And if a model can be shrunk to that level for a specific use case, that would be a very powerful inference device. No different than a PC, for that matter.
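As a back-of-the-envelope check on that 20 to 40 billion parameter claim: a model's memory footprint is roughly parameters times bytes per parameter, so quantization is what brings these sizes into device range. The numbers below are our own illustration, not Neri's:

```python
# Rough sketch (our arithmetic, not HPE's): memory needed just to hold the
# weights of a 20-40B parameter model at different quantization levels.
# Footprint in bytes = params * bits_per_param / 8; activations, KV cache,
# and runtime overhead would add more on top.

def model_memory_gb(params_billion, bits_per_param):
    """Weight storage for a model, in decimal gigabytes."""
    bytes_total = params_billion * 1e9 * bits_per_param / 8
    return bytes_total / 1e9

for params in (20, 30, 40):
    for bits in (16, 8, 4):
        print(f"{params}B @ {bits}-bit: {model_memory_gb(params, bits):.0f} GB")
# e.g. a 30B model needs ~60 GB at 16-bit but only ~15 GB at 4-bit,
# which starts to fit in high-end phone or PC unified memory.
```

This is why the same model can be a data-center workload at full precision and an edge-device workload once quantized for a specific use case.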
But then you have the more verticalized use cases, where it is very specific to the vertical: manufacturing, robotics. If you come to our booth, you will see a deployment of private 5G with AI inferencing for robotics on a factory floor, right? That's a great example of the connectivity, the networking, with inferencing at the edge.

And what's inside of that? Is that an NVIDIA GPU? Is it an Intel version of that?

All of the above, because the fact of the matter is that we have deployed AI systems using all three, including on the supercomputing side. But no question, right now people that are looking for large training models have obviously been heavily skewed to the NVIDIA side. For inferencing, though, you need a different set of characteristics, and all the vendors will try to optimize for different things. That's why the silicon aspect of this will be way more purpose-built than a generic approach.

Antonio, I'm curious to get your take on your investments in AI and where your customers are investing. You mentioned training, now inference. We're seeing customer investment a little lower than the hype suggests, but it's going faster; we're seeing more. Where are you guys investing HPE's resources in the AI-native stack? Obviously networking, but how would you explain your investment strategy?

Actually, it's at all levels of the stack. I think people don't understand, or don't realize, that we do a lot of silicon. Not the GPU, as we just talked about, but the silicon to bring the systems together. And one key component of that silicon is the network, because you need to connect the GPU to the network with enough bandwidth to get the right performance, right? When you put thousands of GPUs together, you don't want the network to be the bottleneck. So every GPU is connected directly to the network in order to do parallel computing, and you need a coherent approach at every level, including the memory components of this.
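There is simple arithmetic behind that bottleneck point. In the standard ring all-reduce used for data-parallel training, each step moves roughly 2*(N-1)/N times the gradient size over every link, so link speed directly bounds step time. A hedged sketch with made-up numbers, not HPE Slingshot specs:

```python
# Illustrative sketch (standard ring all-reduce math, not HPE-specific):
# per-step gradient synchronization time as a function of link bandwidth.
# Each worker sends/receives ~2*(N-1)/N times the gradient volume per step.

def allreduce_time_s(gradients_gb, n_workers, link_gbps):
    """Lower bound on one gradient sync, ignoring latency and overlap."""
    traffic_gb = 2 * (n_workers - 1) / n_workers * gradients_gb
    return traffic_gb * 8 / link_gbps  # GB -> gigabits, divided by link speed

# 70 GB of fp16 gradients (roughly a 35B-parameter model) across 1,024 GPUs:
print(allreduce_time_s(70, 1024, 100))   # ~11.19 s per step on 100 Gb/s links
print(allreduce_time_s(70, 1024, 400))   # ~2.80 s per step on 400 Gb/s links
```

Since this sync happens every training step, a 4x faster fabric can translate almost directly into faster time-to-completion, which is the economic case for purpose-built AI networking.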
So that's one example, and that's what we call HPE Slingshot, which we first deployed in the supercomputing space. And this is where we broke the exaflop barrier, and here in Europe we are deploying many of those systems, including LUMI, which is already operational. We are soon deploying the HLRS system in Germany. We just announced the NE system for high-performance computing, but it has generative AI embedded into it.

You see custom silicon as key to your strategy.

The second piece of this is, obviously, the IP to run the system efficiently. HPE has more than 500 patents on liquid cooling. And when you bring in a system that needs liquid cooling, you need a lot of IP to run it very efficiently. You have to learn some chemistry along the way, by the way, but also the mechanical part, the pumps, everything that's needed. The third piece of this is obviously the software. When you deploy an AI system at scale, you have to manage the contention of resources, because the most important metric in AI model training is completion rate. You want to start and you want to complete the model, and you don't want to have to restart. Otherwise it's a huge cost burden from a compute-resource and energy standpoint. We also have a lot of IP on how we do cooling in the data center. And then the last piece is the machine learning development environment. This is where you bring the resources to the customer so they can access the right services to develop or train the models, including data automation and pipeline automation, which is very hard, because you have to bring the data together, you have to automate the data pipeline, and then you have to deploy the model and optimize it with the data. And this is what the HPE MLDE framework brings on top of that infrastructure. Plus HPE GreenLake, obviously, which gives us the ability to meter and charge on a per-GPU basis.

I'm glad you brought up energy, because I've seen some really interesting things.
In fact, I think it started last year at HPE Discover with your liquid-cooling ecosystem. They take the cold liquid, it gets hot, and then they reuse it for other sustainable purposes. And so that energy is a huge, huge thing.

Our two public instances that we are in the process of standing up run basically on 99% renewable energy, and for the 1% that is not, we are actually creating a negative carbon footprint, because we are using hydropower. And then the waste is used for other means. In fact, in one of the locations we are going to actually provide that waste, which is hot water, to heat greenhouses in the winter. So you can actually create a positive.

We use that warmth. Talk about the implementation approaches that you see your customers deploying with AI, and systems that now have new constraints. I mean, power is a constraint. Space is a constraint. Space, power. First question: what are the constraints that you see right now in the market with AI for customers? As you design these new systems, from custom silicon with IP that you're building in, you're basically building a new kind of, I won't say server, but a system.

I think many of these constraints are, first of all, deep learning expertise, which is not that simple, because historically it was only in specific industries, like designing new airplanes, or automotive, or weather forecasting. So you need a different set of skill sets. Second is definitely data center operating expertise with these new foreign objects, like water and other things. And then scale. Scale really matters, right? So that's why, to me, it's beyond just selling AI and compute power; it's actually selling the managed services that go with it, and the security associated with that. And that's why HPE is uniquely positioned, because we have all those capabilities.
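The completion-rate concern Neri raised earlier, that you don't want to restart a long training run, has simple expected-cost math behind it. Here is a hedged sketch with invented numbers, not HPE data:

```python
# Illustrative sketch (our model, not HPE's): expected overhead of a long
# training run that checkpoints periodically and occasionally fails.
# Each failure loses, on average, half a checkpoint interval of work, so
# there's a trade-off between checkpoint cost and rework cost.

def expected_overhead_hours(run_hours, mtbf_hours,
                            ckpt_interval_hours, ckpt_cost_hours):
    n_ckpts = run_hours / ckpt_interval_hours        # checkpoints written
    n_failures = run_hours / mtbf_hours              # expected failures
    rework = n_failures * ckpt_interval_hours / 2    # avg work lost per failure
    return n_ckpts * ckpt_cost_hours + rework

# A 1,000-hour run, a failure every 100 hours, 0.1 h to write a checkpoint:
print(expected_overhead_hours(1000, 100, 24, 0.1))  # ~124.2 h lost (24 h interval)
print(expected_overhead_hours(1000, 100, 4, 0.1))   # ~45.0 h lost (4 h interval)
```

On a cluster of thousands of GPUs, every one of those lost hours is multiplied by the whole fleet's power and compute cost, which is why resource contention, reliability, and checkpointing software matter as much as raw FLOPS.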
I mean, you guys have been one of the best systems thinkers in terms of how you build products, software and hardware, hardware management, and services over the top. Exactly. So I've got to ask you about the IT conversation, because Dave and I are doing some research with the CUBE research team around how platform engineering is changing the role of IT. And the specific comment and question for you is: as IT transforms rapidly with AI, which is our premise, and we believe that to be true, I'm sure you do as well, the traditional IT operator has to become a platform engineering person. And that's a hard skill gap, a chasm, to cross. With AI now, new managed services and capabilities are emerging on the scene where you've got automation and new capabilities. And that's why I like the Mist angle and where that leads us. It's AI bringing operational benefits. So now a, I'll call it, level-one IT person could have level-three skills with AI augmentation. So co-piloting-like concepts are interesting. What's your reaction to that? Do you believe that to be true?

I think there are multiple things. First of all, I spend almost 50% of my time talking to customers, and what they tell you is that they don't want to be in the run part of IT. They want to be more on the innovation side. And that innovation comes by deploying a platform-based set of IT capabilities so they can innovate faster, whether it's deploying cloud on top of it or deploying AI on top of it. And the more you simplify and automate, the better it is for them, because then they have flexibility and choice. They don't want lock-in, right? But the challenge they have, like any of us, is to continue to reskill the workforce. And so there's a combination of bringing new talent in and reskilling existing talent. And that's the reality of what we're dealing with today.

IT's dead, long live IT. I mean, IT is changing in a good way. Culturally, every industry is an IT industry, so that's for sure.
Hey, tease your keynote a little bit. What are we going to hear from you?

Keynote?

Your keynote, yeah.

Actually, it's not a keynote. It's actually a conversation.

Okay.

It's going to be moderated by PwC, and we're going to have a conversation about AI, and about the Juniper acquisition and how that's going to transform networking in the industry.

Nice.

Tomorrow, 12 o'clock.

Any little teasers you can share with us now?

No, just attend.

I think this could be a keynote here. Final question for me as we wrap up: as you look forward, you've got the long view of the industry, with HPE knowing what's coming. Love the vision. What are customers going to imagine in the future? What do you see as benefits for the customer as this rolls out in all verticals? Certainly we're in telecom; that's becoming cloud, data, edge, distributed computing. But every vertical is being disrupted in a positive way by AI. What do you expect to see from a value-proposition standpoint for the customers? What are some of the benefits they'll realize?

Speed. Speed is number one, right? We always said that the future belongs to the fast, and those who are moving fast are going to be the winners. But the way you go about it is by giving them a platform, as we talked about, where they can consume all the services that they need, from the network, which to me is the core foundation, to hybrid cloud and AI services, in a unified experience that's easy to consume. And if you get that choice, with the experience at the core of it, you will lower cost. They can move faster and deliver better outcomes for the business that they are in. That's the goal. Cloud-native, AI-native, a secure stack built in, that's easy to use.

Last question: the Juniper acquisition is going as planned? Everything looking good?

As I said on January 11 when we made the announcement, this acquisition is going to take nine to 12 months.
We don't anticipate, we don't foresee, any major issues, because the fact of the matter is that when you bring these two companies together, it is still smaller than the current incumbent, and it is also a good thing for the market to have a stronger alternative. But as always, Dave, we are working with the regulators to get this transaction approved as quickly as possible. And we are very pleased with the momentum we have and the process that we are engaging in at this point.

In that process with the regulators, is it a combination of giving them data that answers their questions, educating them? Standard process?

All of the above. Remember, I have done 35 acquisitions, so I have done this a few times.

Antonio, great to see you. Thank you. Congratulations on all your success. We'll see you at HPE Discover.

Thank you. Thanks for coming on theCUBE. Thanks for having me. Thank you, Antonio. Great to see you.

Okay, we'll be back with more live coverage from theCUBE here in Barcelona at MWC. I'm John Furrier with Dave Vellante. We'll be right back.