Okay, we've heard from the folks at Pluribus Networks and NVIDIA about their effort to transform cloud networking and unify bespoke infrastructure. Now let's get the perspective from an independent analyst, and to do so, we welcome in ESG senior analyst Bob LaLiberte. Bob, good to see you. Thanks for coming into our East Coast studios.

Oh, thanks for having me. It's great to be here.

Yeah, so this idea of a unified cloud networking approach, how serious is it? What's driving it?

Yeah, there are certainly a lot of drivers behind it, but probably first and foremost is the fact that application environments are becoming a lot more distributed. The IT pendulum tends to swing back and forth, and we're definitely on a swing from consolidated to distributed. Applications are being deployed in multiple private data centers, multiple public cloud locations, and edge locations. And as a result, what you're seeing is a lot of complexity. Organizations are having to deal with this highly disparate environment. They have to secure it, they have to ensure connectivity to it, and all of that drives up complexity. In fact, when we asked about network complexity in one of our surveys last year, more than half, 54%, came out and said, hey, our network environment is now either more or significantly more complex than it used to be. And as a result, it's really impacting agility. Everyone's moving to these modern application environments, distributing them across locations so they can improve agility, yet it's creating more complexity. So it's a bit counterproductive, and really counter to their overarching digital transformation initiatives. From what we've seen, about nine out of ten organizations today have either begun, are in the middle of, or have a mature digital transformation initiative, and their top goals, when you look at them, probably shouldn't be a surprise.
The number one goal is driving operational efficiency. So it makes sense: I've distributed my environment to create agility, but I've created a lot of complexity. So now I need tools that are going to help me drive operational efficiency and drive better experiences.

I mean, I love how you bring in the data. ESG does a great job with that. The question is, is it about just unifying existing networks, or is there a need to rethink, kind of do over, how networks are built?

Yeah, that's a really good point, because certainly unifying networks helps, right? Driving any kind of operational efficiency helps. But in this particular case, because we've made the transition to new application architectures, and because of the impact that's having, it's really about bringing in new frameworks and new network architectures to accommodate those new application architectures. What I'm talking about is the fact that these modern application architectures, microservices and containers, are driving a lot more east-west traffic. In the old days it used to be mostly north-south, coming out of the server, one application per server, things like that. Now you've got hundreds, if not thousands, of microservices communicating with each other, and users communicating with them, so there's a lot more traffic, and a lot of it is taking place within the servers themselves. The other issue you're starting to see, from a security perspective, is that when we were all consolidated, we had those perimeter-based legacy castle-and-moat security architectures, but that doesn't work anymore when the applications aren't in the castle. When everything's spread out, that model no longer holds. So we're absolutely seeing organizations trying to make a shift. And much like the shift we're seeing with all the remote workers, where the SASE framework enables secure access, it's almost the same thing.
We're seeing this distributed services framework come up to support the applications better within the data centers, within the cloud data centers, so that you can drive that security closer to those applications and make sure they're fully protected. And that's really driving a lot of the zero trust you hear about, right? Never trust, always verify, making sure that everything is really secure. Micro-segmentation is another big area, ensuring that these applications, when they're connected to each other, are fully segmented out. And that's because if someone does get a breach, if they are in your data center, you want to limit the blast radius. You want to limit the amount of damage that's done, and by doing that, you make it a lot harder for them to see everything that's in there.

You know, you mentioned zero trust. It used to be a buzzword, and now it's become a mandate. And I love the moat analogy. You build a moat to protect the queen and the castle, but the queen's left the castle. It's all distributed. So how should we think about this Pluribus and NVIDIA solution? There's a spectrum. Help us understand that. You've got appliances, you've got pure software solutions, you've got what Pluribus is doing with NVIDIA. Help us understand that.

Yeah, absolutely. I think as organizations recognize the need to move their services closer to the applications, they're trying different models. From a legacy approach, from a security perspective, they've got these centralized firewalls that they're deploying within their data centers. The hard part there is that if you want all this traffic to be secured, you're actually sending it out of the server, up through the rack, usually to a different location in the data center, and back.
So with the need for agility, with the need for performance, that adds a lot of latency, and when you start needing to scale, that means adding more and more network connections, more and more appliances, so it can get very costly as well as impacting performance. The other way organizations are seeking to solve this problem is by taking the software itself and deploying it on the servers. It's a great approach, right? It brings it really close to the applications, but you start running into a couple of things there. One is that the DevOps teams start taking on that networking and security responsibility.

Which they don't want to do.

They don't want to do, right? And the operations teams lose a little bit of visibility into that. Plus, when you load the software onto the server, you're taking up precious CPU cycles. So if you really want your applications to perform at an optimized state, having additional software on there isn't going to do it. Certainly one side effect is the impact on performance, but there's also a cost. If you have to buy more servers because your CPUs are being consumed by that software, and you have hundreds or thousands of servers, those costs are going to add up. So what NVIDIA and Pluribus have done by working together is take some of those services and deploy them onto a SmartNIC, deploying the DPU-based SmartNIC into the servers themselves. And then Pluribus has come in and said, we're going to create that unified fabric across the networking space, extending those networking services all the way down to the server. The benefits of that are pretty clear: you're offloading that capability from the server, so your CPUs are optimized and you're saving a lot of money.
You're not having to go outside of the server to a different rack somewhere else in the data center, so your performance is going to be optimized as well. You're not going to incur a latency hit for every round trip to the firewall and back. All those things are really important, plus there's the organizational aspect. We talked about the DevOps and NetOps teams. The network operations teams can now work with the security teams to establish the security policies and the networking policies so that the DevOps teams don't have to worry about that. Essentially, they just create the guardrails and let the DevOps teams run, because that's what they want: agility and speed.

You know, the point about CPU cycles is key. I mean, it's estimated that 25 to 30% of CPU cycles in the data center are wasted; the cores are wasted doing storage offload or networking or security offload. And you know, I've said many times, everybody needs a Nitro, like Amazon Nitro. But you can only get Amazon Nitro if you go into AWS, right? Everybody needs a Nitro. So is that how we should think about this?

Yeah, that's a great analogy, and I would take it a step further, because it's almost the opposite end of the spectrum in that Pluribus and NVIDIA are doing this in a very open way. Pluribus has always been a proponent of open networking, and what they're trying to do is extend that now to these distributed services. Working with NVIDIA, which is open as well, they're able to bring that to bear so that organizations can take advantage not only of these distributed services but also of that unified networking fabric, the unified cloud fabric, across that environment, from the server across the switches.
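As an aside, the CPU-cycle figure cited above lends itself to a quick back-of-the-envelope estimate. Here is a minimal sketch; the fleet size and core count are illustrative assumptions, and only the 25-30% overhead range comes from the discussion:

```python
def cores_reclaimed(servers: int, cores_per_server: int, overhead_fraction: float) -> float:
    """Estimate CPU cores freed fleet-wide by offloading
    networking/security/storage work to a DPU-based SmartNIC."""
    return servers * cores_per_server * overhead_fraction

# Illustrative assumptions: 1,000 servers, 64 cores each,
# 25-30% of cycles consumed by infrastructure tasks.
low = cores_reclaimed(1000, 64, 0.25)
high = cores_reclaimed(1000, 64, 0.30)
print(f"Cores reclaimed: {low:.0f}-{high:.0f}")  # prints: Cores reclaimed: 16000-19200
```

Even at these rough numbers, reclaiming the equivalent of hundreds of servers' worth of cores illustrates why offload is pitched as a cost argument, not just a performance one.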
The other key piece of what Pluribus is doing, because they've been doing this for a while now, with the older application environments and the older server environments, is that they're able to provide that unified networking experience across a host of different types of servers and platforms. So you can have not only the modern applications supported but also the legacy environments: bare metal, any type of virtualization, containers, et cetera. A wide gamut of different technologies hosting those applications, supported by a unified cloud fabric from Pluribus.

So what does that mean for the customer? I don't have to rip and replace my whole infrastructure, right?

Yeah, think about what it does, again, for operational efficiency. When you're going from a legacy environment to that modern environment, it helps with the migration; it helps you accelerate that migration, because you're not switching between different management systems to accomplish it. You've got the same unified networking fabric you've been working with to enable you to run your legacy applications as well as transfer over to those modern applications.

Got it. So your people are comfortable with the skill sets, et cetera. All right, I'll give you the last word. Give us the bottom line here.

So yeah, I think obviously, with all the modern applications that are coming out, the distributed application environments, it's really posing a lot of risk for these organizations in terms of getting not only security but also visibility into those environments. Organizations have to find solutions. As I said at the beginning, they're looking to drive operational efficiency. So getting operational efficiency from a unified cloud networking solution that goes from the server across multiple different environments, including different cloud environments, is certainly going to help organizations drive that operational efficiency.
It's going to help them save money and give them visibility, security, and even open networking. So it's a great opportunity for organizations, especially large enterprises and cloud providers who are trying to build that hyperscaler-like environment. You mentioned the Nitro card; this is a great way to do it with an open solution.

Love it, Bob. Thanks so much for coming in and sharing your insights. Appreciate it.

You're welcome. Thanks.

Thanks for watching the program today. Remember, all these videos are available on demand at theCUBE.net. You can check out all the news from today at SiliconAngle.com and, of course, pluribusnetworks.com. Many thanks to Pluribus for making this program possible and sponsoring theCUBE. This is Dave Vellante. Thanks for watching. Be well, and we'll see you next time.