Okay, we're back. I'm John Furrier with theCUBE and we're going to go deeper into the unified cloud networking solution from Pluribus and NVIDIA. And we'll examine some of the use cases with Alessandro Barbieri, VP of Product Management at Pluribus Networks, and Pete Lumbis, Director of Technical Marketing at NVIDIA, remotely. Guys, thanks for coming on. Appreciate it. Yeah, thanks a lot. So, deep dive. Let's get into the what and how. Alessandro, we heard earlier about the Pluribus and NVIDIA partnership and the solution you're working together on. What is it? Yeah, first, let's talk about the what. What are we really integrating with the NVIDIA BlueField DPU technology? Pluribus has been shipping Netvisor ONE, our network operating system, in volume in multiple mission-critical networks. It runs today on merchant silicon switches, and effectively it's a standards-based open network operating system for the data center. And the novelty about this operating system is that it integrates a distributed control plane to automate an effective SDN overlay. This automation is completely open, interoperable, and extensible to other types of clouds. It's not closed. And this is actually what we're now porting to the NVIDIA DPU. Awesome. So, how does it integrate into NVIDIA hardware? Specifically, how is Pluribus integrating its software with the NVIDIA hardware? Yeah, I think we leverage some of the interesting properties of the BlueField DPU hardware, which allows us to integrate our software, our network operating system, in a manner which is completely isolated and independent from the guest operating system. So, the first byproduct of this approach is that whatever we do at the network level on the DPU card is completely agnostic to the hypervisor layer or OS layer running on the host. Even more, we can also manage this network node, this switch-on-a-NIC, completely independently from the host.
You don't have to go through the network operating system running on x86 to control this network node. So, you truly have the experience, effectively, of a top of rack switch for virtual machines or a top of rack switch for Kubernetes pods, where, if you allow me the analogy, instead of connecting a server NIC directly to a switch port, now you're connecting a VM virtual interface to a virtual interface on the switch-on-a-NIC. And also, as part of this integration, we put a lot of effort, a lot of emphasis, in accelerating the entire data plane for networking and security. So, we are taking advantage of the NVIDIA DOCA API to program the accelerators, and you accomplish two things with that. Number one, you have much better performance than running the same network services on an x86 CPU. And second, this gives you the ability to free up, I would say, around 20 to 25% of the server capacity, to be devoted either to additional workloads to run your cloud applications, or perhaps you can actually shrink the power footprint and compute footprint of your data center by 20% if you wanna run the same number of compute workloads. So, great efficiencies in the overall approach. And this is completely independent of the server CPU, right? Absolutely, there is zero code from Pluribus running on the x86. And this is why we think this enables a very clean demarcation between compute and network. So, Pete, I gotta get you in here. We heard that the DPUs enable a cleaner separation of dev ops and net ops. Can you explain why that's important? Because everyone's talking dev sec ops, right? Now you got net ops, net sec ops. This separation, why is this clean separation important? Yeah, I think it's a pragmatic solution, in my opinion. We wish the world was all kind of rainbows and unicorns, but it's a little messier than that. And I think a lot of the dev ops stuff and that mentality and philosophy, there's a natural fit there, right?
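To put those capacity numbers in perspective, here's a back-of-the-envelope sketch of the savings described above. The 20% offload fraction is the figure quoted in the conversation; the core counts and fleet size are purely hypothetical examples.

```python
# Back-of-the-envelope math for the DPU offload savings discussed above.
# The 20% offload fraction is quoted in the conversation; the core and
# fleet counts below are hypothetical examples, not measured data.

def cores_freed(total_cores: int, offload_fraction: float) -> float:
    """CPU cores freed per server when networking/security move to the DPU."""
    return total_cores * offload_fraction

def servers_saved(fleet_size: int, offload_fraction: float) -> int:
    """Servers you could decommission while running the same workloads."""
    return int(round(fleet_size * offload_fraction))

print(cores_freed(64, 0.20))      # 12.8 cores per 64-core server
print(servers_saved(1000, 0.20))  # 200 servers across a 1,000-server fleet
```

Either way you spend the dividend, fewer cores burned on packet processing means either more application capacity or a smaller compute and power footprint for the same workloads.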
You have applications running on servers. So you're talking about developers with those applications integrating with the operators of those servers. Well, the network has always been this other thing, and the network operators have always had a very different approach to things than compute operators. And I think that we in the networking industry have gotten closer together, but there's still a gap. There's still some distance. And I think that distance isn't gonna be closed. And so, again, it comes down to pragmatism. And one of my favorite phrases is, look, good fences make good neighbors. And that's what this is. Yeah, that's a great point, because dev ops has become kind of the calling card for cloud, right? But dev ops is simply infrastructure as code, and infrastructure is networking, right? So if infrastructure is code, you're talking about that part of the stack under the covers, under the hood, if you will. This is a super important distinction. And this is where the innovation is. Can you elaborate on how you see that? Because this is really where the action is right now. Yeah, exactly. And I think that's where, one, the policy, the security, the zero trust aspect of this comes in, right? If you get it wrong on that network side, all of a sudden you can totally open up those capabilities. And so security is part of that. But the other part is thinking about this at scale, right? So we're going from one top of rack switch to up to 48 servers per rack. And so that ability to automate, orchestrate, and manage at scale becomes absolutely critical. Alessandro, this is really the why we're talking about here. And this is scale. And again, getting it right. If you don't get it right, you're going to be really kind of up, you know what. So this is a huge deal. Networking matters, security matters, automation matters, dev ops, net ops, all coming together, clean separation.
Help us understand how this joint solution with NVIDIA fits into the Pluribus unified cloud networking vision, because this is what people are talking about and working on right now. Yeah, absolutely. So I think here with this solution, we're attacking two major problems in cloud networking. One is the operation of cloud networking. And the second is distributing security services in the cloud infrastructure. First, let me talk about the first: what are we really unifying? If we're unifying something, something must be at least fragmented or disjointed. And what is disjointed is actually the network in the cloud. If you look holistically at how networking is deployed in the cloud, you have your physical fabric infrastructure, right? Your switches and routers. You build your IP Clos fabric, leaf-and-spine topologies. This is actually a well understood problem. I would say there are multiple vendors with similar technologies, very well standardized, very well understood, and almost a commodity, I would say, building an IP fabric these days. But this is not the place where you deploy most of your services in the cloud, particularly from a security standpoint. Those services have actually now moved into the compute layer, where cloud builders have to instrument a separate network virtualization layer where they deploy segmentation and security closer to the workloads. And this is where the complications arise. This high value part of the cloud network is where you have a plethora of options that don't talk to each other, and they're very dependent on the kind of hypervisor or compute solution you choose. For example, the networking APIs between an ESXi environment, Hyper-V, or Xen are completely disjointed. You have multiple orchestration layers. And then when you throw in Kubernetes in this type of architecture, you're introducing yet another level of networking.
And when Kubernetes runs on top of VMs, which is a prevalent approach, you actually are stacking multiple networks on the compute layer that eventually run on the physical fabric infrastructure. Those are all ships in the night, effectively, right? They operate as completely disjointed, and we're trying to tackle this problem first with the notion of a unified fabric which is independent from any workload, whether this fabric spans a switch, which can be connected to a bare metal workload, or spans all the way inside the DPU, where you have your multi-hypervisor compute environment. It's one API, one common network control plane, and one common set of segmentation services for the network. That's problem number one. You know, it's interesting. I hear you talking, I hear one network, one different operating models. Reminds me of the old serverless days, you know? There's still servers, but they call it serverless. Is there going to be a term networkless? Because at the end of the day, it should be one network, not multiple operating models. This is a problem that you guys are working on. Is that right? I mean, I'm just joking, serverless and networkless. But the idea is it should be one thing. Yeah, effectively what we're trying to do is recompose this fragmentation in terms of network operation across physical networking and server networking. Server networking is where the majority of the problems are because, as much as you have standardized the ways of building physical networks and cloud fabrics with IP protocols and the internet, you don't have that sort of operational efficiency at the server layer. And this is what we're trying to attack first with this technology. The second aspect we're trying to attack is how we distribute security services throughout the infrastructure more efficiently, whether it's micro-segmentation, stateful firewall services, or even encryption.
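The "one API, one common network control plane" idea can be sketched as a toy data model: a single segment definition that attaches equally to bare metal ports, VM interfaces, and Kubernetes pods. To be clear, none of these class or field names come from the actual Pluribus API; this is purely an illustration of the concept under discussion.

```python
# Illustrative sketch only: one segment definition spanning bare metal,
# VM, and Kubernetes endpoints alike. All names here are hypothetical,
# not taken from any vendor's real API.
from dataclasses import dataclass, field

@dataclass
class Segment:
    name: str
    vni: int                              # VXLAN network identifier
    endpoints: list = field(default_factory=list)

    def attach(self, kind: str, ref: str):
        # kind: "bare-metal", "vm", or "k8s-pod" -- the same call either
        # way, which is the point: one control plane, one set of services.
        self.endpoints.append((kind, ref))

web = Segment(name="web-tier", vni=10100)
web.attach("bare-metal", "leaf1:port12")
web.attach("vm", "esxi-host3:vnic2")
web.attach("k8s-pod", "cluster-a/frontend-7d9f")
print(len(web.endpoints))  # one segment, three workload types
```

The contrast with the fragmented status quo described above is that each of those three attachments would otherwise go through a different orchestration layer with a different API.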
Those are all capabilities enabled by the BlueField DPU technology, and we can actually integrate those capabilities directly into the network fabric, limiting dramatically, at least for East-West traffic, the sprawl of security appliances, whether virtual or physical, that is typically the way people today segment and secure the traffic in the cloud. Awesome. All kidding aside about networkless and serverless, kind of a fun play on words there. The network is one thing, it's basically distributed computing, right? So I'd love to get your thoughts about this distributed security with zero trust as the driver for this architecture you guys are doing. Can you share in more detail the depth of why a DPU-based approach is better than the alternatives? Yeah, I think what's beautiful, and kind of what the DPU brings that's new to this model, is a completely isolated compute environment inside. So it's the yo dog, I heard you like servers, so I put a server inside your server. And so we provide Arm CPUs, memory, and network accelerators inside, and that is completely isolated from the host. The actual x86 host just thinks it has a regular NIC in there, but you actually have this full control plane thing. It's just like taking your top of rack switch and shoving it inside of your compute node. And so you have not only the separation within the data plane, but you have this complete control plane separation. So you have this element that the network team can now control and manage. But we're taking all of the functions we used to do at the top of rack switch and we're distributing them now. And as time has gone on, we've struggled to put more and more and more into that network edge. And the reality is the network edge is the compute layer, not the top of rack switch layer. And so that provides this phenomenal enforcement point for security and policy. And I think outside of today's solutions around virtual firewalls, the other option is centralized appliances.
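The East-West micro-segmentation described above amounts to a default-deny rule table evaluated at each enforcement point, in this case the DPU rather than a central appliance. Here's a toy sketch of the matching logic; the rule shapes and segment names are illustrative, not any vendor's actual policy schema.

```python
# Toy sketch of East-West micro-segmentation enforced at the network edge:
# a default-deny rule table matched against a flow's segments and 5-tuple.
# Rule shapes and names are illustrative, not a real vendor schema.

RULES = [
    # (src_segment, dst_segment, protocol, dst_port, action)
    ("web-tier", "app-tier", "tcp", 8443, "allow"),
    ("app-tier", "db-tier",  "tcp", 5432, "allow"),
]

def evaluate(src_seg, dst_seg, proto, dport):
    """Return 'allow' if a rule matches, else default-deny."""
    for s, d, p, port, action in RULES:
        if (s, d, p, port) == (src_seg, dst_seg, proto, dport):
            return action
    return "deny"

print(evaluate("web-tier", "app-tier", "tcp", 8443))  # allow
print(evaluate("web-tier", "db-tier", "tcp", 5432))   # deny: no direct path
```

Because every DPU only has to evaluate its own host's flows, the table stays small even as the fabric scales, which is the scaling argument made against centralized appliances below.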
And even if you can get one that can scale large enough, the question is, can you afford it? And so what we end up doing is we kind of hope that the VLAN is good enough, or we hope that the VXLAN tunnel is good enough, and we can't actually apply more advanced techniques there, because we can't physically or financially afford that appliance to see all of the traffic. And now that we have a distributed model with this accelerator, we can do it. So what's in it for the customer? Real quick, I think this is an interesting point. You mentioned policy. Everyone in networking knows policy is just a great thing, and you hear it being talked about up the stack as well, when you start getting into orchestrating microservices and whatnot, all that good stuff going on there, containers and whatnot and modern applications. What's the benefit to the customers with this approach? Because what I heard was more scale, more edge deployment, flexibility relative to security policies and application enablement. I mean, what's the customer get out of this architecture? What's the enablement? It comes down to taking, again, the capabilities that were in that top of rack switch and distributing them down. So that brings simplicity: smaller blast radiuses for failure, smaller failure domains, maintenance on the networks and the systems becomes easier. Your ability to integrate across workloads becomes infinitely easier. And again, we always wanna kind of separate each one of those layers. So just as in, say, a VXLAN network, my leaf and spine don't have to be tightly coupled together, I can now do this at a different layer. And so you can run a DPU with any networking in the core there, and you get this extreme flexibility. You can start small, you can scale large. To me, the possibilities are endless. It's a great security control plane, flexibility is really key, and also being situationally aware of any kind of threats or new vectors or whatever's happening in the network.
Alessandro, this is huge upside, right? You've already identified some successes with some customers in your early field trials. What are they doing, and why are they attracted to the solution? Yeah, I think the response from customers has been the most encouraging and exciting thing for us, to sort of continue to work on and develop this product. And we have actually learned a lot in the process. We talked to tier two and tier three cloud providers, we talked to SP, sort of telco, type of networks, as well as large enterprise customers. Let me call out a couple of examples here just to give you a flavor. There is a service provider, a cloud provider in Asia, who is actually managing a cloud where they are offering services based on multiple hypervisors. They have native services based on Xen, but they also on-ramp into the cloud workloads based on ESXi and KVM, depending on what the customer picks from the menu. And they have the problem of now orchestrating, through their orchestrator, integrating with XenCenter, with vSphere, with OpenStack, to coordinate these multiple environments. And in the process, to provide security, they actually deploy virtual appliances everywhere, which has a lot of cost and complication, and it eats into the server CPU. The promise that they saw in this technology, and they actually call it game changing, is to remove all this complexity, have a single network, and distribute the micro-segmentation service directly into the fabric. And overall, they are hoping to get out of it a tremendous OPEX benefit and overall operational simplification for the cloud infrastructure. That's one potent use case. Another large enterprise customer, a global enterprise customer, is running both ESXi and Hyper-V in their environment, and they don't have a solution to do micro-segmentation consistently across hypervisors. So again, micro-segmentation is a huge security driver.
It looks like it's a recurring theme talking to most of these customers. And in the telco space, we're working with a few telco customers on this early field trial program, where the main goal is actually to harmonize network operation. They typically handle all the VNFs with their own homegrown DPDK stack. This is overly complex, and it is frankly also slow and inefficient. And then they have a physical network to manage. The idea of having, again, one network to coordinate the provisioning of cloud services between the telco VNFs and the rest of the infrastructure is extremely powerful, on top of the offloading capability offered by the BlueField DPUs. Those are just some examples. Those are great use cases, and a lot more potential, I see, with unified cloud networking. Great stuff. Pete, shout out to you guys at NVIDIA, been following your success for a long time, and continuing to innovate as cloud scales, and Pluribus here with the unified networking kind of bringing it to the next level. Great stuff. Great to have you guys on, and again, software keeps driving the innovation. Again, networking is just a part of it, and it's the key solution. So I got to ask both of you to wrap this up. How can cloud operators who are interested in this new architecture and solution learn more? Because this is an architectural shift. People are working on this problem. They're trying to think about multiple clouds and trying to think about unification around the network, and giving more security, more flexibility to their teams. How can people learn more? Yeah. So Alessandro and I have a talk at the upcoming NVIDIA GTC conference. It's the week of March 21st through 24th. You can go and register for free at nvidia.com slash GTC. You can also watch the recorded sessions if you end up watching this on YouTube a little bit after the fact. And we're going to dive a little bit more into the specifics and the details of what we're providing in the solution. Alessandro, how can people learn more?
Yeah, absolutely. People can go to the Pluribus website, www.pluribusnetworks.com slash EFT, and they can fill out the form, and Pluribus will contact them to either learn more or to actually sign up for the early field trial program, which starts at the end of April. Okay, well, we'll leave it there. Thank you both for joining, appreciate it. Up next, you're going to hear an independent analyst perspective and review some of the research from the Enterprise Strategy Group, ESG. I'm John Furrier with theCUBE. Thanks for watching.