Okay, let's kick things off. We're here with Mike Capuano, CMO of Pluribus Networks, and Ami Badani, VP of Networking Marketing and Developer Ecosystem at NVIDIA. Great to have you, welcome folks. Thank you. Thanks. So let's get into the problem statement with unified cloud networking. What problems are out there? What challenges do cloud operators have, Mike? Let's get into it. Yeah, really, the challenges that we're looking at are for non-hyperscalers. That's enterprises, governments, tier two service providers, cloud service providers. And the first mandate for them is to become as agile as a hyperscaler. So they need to be able to deploy services and security policies in seconds. They need to be able to abstract the complexity of the network and define things in software while it's accelerated in hardware. Really, ultimately, they need a single operating model everywhere. And then the second thing is, they need to distribute networking and security services out to the edge of the host. We're seeing a growth in cyber attacks. It's not slowing down, it's only getting worse. And solving for this security problem across clouds is absolutely critical. And the way to do it is to move security out to the host. Okay, with that goal in mind, what's the Pluribus vision? How does this tie together? Yeah, so basically what we see is that this demands a new architecture. And that new architecture has four tenets. The first tenet is unified and simplified cloud networks. If you look at cloud networks today, there are discrete, bespoke cloud networks per hypervisor, per private cloud, edge cloud, public cloud, and each of the public clouds has a different network. That needs to be unified. If we want these folks to be able to be agile, they need to be able to issue a single command or instantiate a security policy across all of those locations with one command and not have to go to each one. The second is, like I mentioned, distributed security. 
Distributed security without compromise, extended out to the host, is absolutely critical. So micro-segmentation and distributed firewalls. But it doesn't stop there. They also need pervasive visibility. You know, with security, you can't protect what you can't see. So you need visibility everywhere. The problem is visibility to date has been very expensive. Folks have had to basically build a separate overlay network of taps, packet brokers, tap aggregation infrastructure. That really needs to be built into this unified network I'm talking about. And the last thing is automation. All of this needs to be SDN enabled. So this is related to my comment about abstraction. Abstract the complexity of all these discrete networks. Whatever is down there in the physical layer, I don't want to see it. I want to abstract it. I want to define things in software, but I do want to leverage the power of hardware to accelerate that. So that's the fourth tenet, SDN automation. Mike, we've been talking on theCUBE a lot about this architectural shift and customers are looking at this. This is a big part of everyone who's looking at cloud operations next gen. How do we get there? How do customers get this vision realized? That's a great question and I appreciate the tee up. I mean, we're here today for that reason. We're introducing two things today. The first is a unified cloud networking vision. And that is a vision of where Pluribus is headed with our partners like NVIDIA long-term. And that is about deploying a common operating model, SDN enabled, SDN automated, hardware accelerated, across all clouds. Whether that's underlay or overlay, switch or server, any hypervisor infrastructure, containers, any workload, it doesn't matter. So that's ultimately where we wanna get. And that's what we talked about earlier. The first step in that vision is what we call the unified cloud fabric. 
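The "single command across all locations" idea above can be sketched as a declarative policy fan-out: the operator defines one policy object, and a fabric controller pushes it to every cloud location at once. This is a minimal illustrative sketch; the class and method names are hypothetical, not a real Pluribus API.

```python
# Hypothetical sketch: a unified fabric controller that fans one
# declarative security policy out to every cloud location, instead of
# configuring each site's bespoke network by hand.
from dataclasses import dataclass, field

@dataclass
class SecurityPolicy:
    name: str
    allow: list  # (source_segment, dest_segment, dest_port) tuples

@dataclass
class FabricController:
    sites: list                         # e.g. private DC, edge, public clouds
    deployed: dict = field(default_factory=dict)

    def apply(self, policy: SecurityPolicy) -> list:
        """One command: push the same policy to every site in the fabric."""
        for site in self.sites:
            self.deployed.setdefault(site, []).append(policy.name)
        return [f"{site}: {policy.name} deployed" for site in self.sites]

fabric = FabricController(sites=["private-dc", "edge-1", "aws-vpc"])
results = fabric.apply(SecurityPolicy("web-to-db", allow=[("web", "db", 5432)]))
```

The point of the design is that the operator never touches the per-site networks directly; the controller owns the fan-out, which is what makes "deploy in seconds" possible.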
And this is the next generation of our adaptive cloud fabric. And what's nice about this is we're not starting from scratch. We have an award-winning adaptive cloud fabric product that is deployed globally. And in particular, we're very proud of the fact that it's deployed in over a hundred tier one mobile operators as the network fabric for their 4G and 5G virtualized cores. We know how to build carrier grade networking infrastructure. What we're doing now to realize this next generation unified cloud fabric is we're extending from the switch to this NVIDIA BlueField-2 DPU. We know there's- Let's hold that up real quick. That's a good prop. That's the BlueField NVIDIA card? It's the NVIDIA BlueField-2 DPU, data processing unit. And what we're doing fundamentally is extending our SDN automated fabric, the unified cloud fabric, out to the host. But it does take processing power. So we knew we didn't wanna implement that running on the CPUs, which is what some other companies do, because it consumes revenue-generating CPUs from the application. So a DPU is a perfect way to implement this. And we knew that NVIDIA was the leader with this BlueField-2. And so that's the first step in realizing this vision. I mean, NVIDIA has always been powering some great workloads with GPUs. Now you've got GPUs and networking. And then NVIDIA is here. What's the relationship? How did that come together? Tell us the story. Yeah, so we've been working with Pluribus for quite some time. I think the last several months was really when it came to fruition. And what Pluribus is trying to build and what NVIDIA has, so we have this concept of a BlueField data processing unit, which if you think about it, conceptually does really three things. Offload, accelerate, and isolate. So offload your workloads from your CPU to your data processing unit, infrastructure workloads that is. 
Accelerate, so there's a bunch of acceleration engines so you can run infrastructure workloads much faster than you would otherwise. And then isolation. So you have this nice security isolation between the data processing unit and your other CPU environment. And so you can run completely isolated workloads directly on the data processing unit. So we introduced this a couple years ago. And with Pluribus, we've been talking to the Pluribus team for quite some months now. And I think really the combination of what Pluribus is trying to build and what they've developed around this unified cloud fabric fits really nicely with the DPU, running that on the DPU and extending it from your physical switch all the way to your host environment, specifically on the data processing unit. So if you think about what's happening as you add data processing units to your environment, every server, we believe, over time is gonna have data processing units. So now you'll have to manage that complexity from the physical network layer to the host layer. And so what Pluribus is really trying to do is extend the network fabric from the switch to the host and really have that single pane of glass for network operators to be able to configure, provision, and manage all of the complexity of the network environment. So that's really how the partnership truly started. And so it started really with extending the network fabric and now we're also working with them on security. So if you take that concept of isolation and security isolation, what Pluribus has within their fabric is the concept of micro-segmentation. And so now you can take that, extend it to the data processing unit, and really have isolated micro-segmentation workloads, whether it's bare metal, cloud native environments, whether it's virtualized environments, whether it's public cloud, private cloud, hybrid cloud. 
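The micro-segmentation concept described above boils down to a default-deny model: every workload carries a segment tag, and traffic passes only if an explicit segment-to-segment rule allows it. Here is a minimal sketch of that idea; the IPs, segment names, and rule format are all illustrative, not the actual Pluribus implementation.

```python
# Hypothetical sketch of DPU-enforced micro-segmentation: workloads are
# tagged with a segment, and traffic is denied unless an explicit
# segment-to-segment rule allows it (default deny).
SEGMENTS = {
    "10.0.1.5": "web",
    "10.0.2.7": "db",
    "10.0.3.9": "batch",
}

# Explicit allow rules: (source_segment, dest_segment, dest_port)
ALLOW_RULES = {
    ("web", "db", 5432),   # only the web tier may reach the database
}

def permit(src_ip: str, dst_ip: str, dst_port: int) -> bool:
    """Default-deny: only explicitly allowed segment pairs pass."""
    src_seg = SEGMENTS.get(src_ip)
    dst_seg = SEGMENTS.get(dst_ip)
    if src_seg is None or dst_seg is None:
        return False  # unknown workloads stay contained
    return (src_seg, dst_seg, dst_port) in ALLOW_RULES
```

This is why a bad actor who lands on, say, the batch host cannot wander laterally into the database: there is no batch-to-db rule, so the check fails at the host's own enforcement point rather than only at the perimeter.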
So it really is a magical partnership between the two companies with their unified cloud fabric running on the DPU. You know what I love about this conversation is it reminds me of when you have these changing markets, the market pulls new products out of you, and you guys step up and create these new solutions. And I think this is a great example. So I have to ask you, how do you guys differentiate? What sets this apart for customers? What's in it for the customer? Yeah, so I mentioned three things in terms of the value of what the BlueField brings, right? There's offloading, accelerating, isolating. Those are the key core tenets of BlueField. So if you think about what we've done in terms of the differentiation, we're really a robust platform for innovation. So we introduced BlueField-2 last year. We're introducing BlueField-3, which is our next generation of BlueField. You know, it'll have 5x the Arm compute capacity. It will have 400 gig line rate acceleration, 4x better crypto acceleration. So it will be remarkably better than the previous generation. And we'll continue to innovate and add chips to our portfolio every 18 months to two years. So that's one of the key areas of differentiation. The other is that if you look at NVIDIA, what we're really known for is our AI, our artificial intelligence and artificial intelligence software, as well as our GPU. So you look at artificial intelligence and the combination of artificial intelligence plus data processing, this really creates faster, more efficient, more secure AI systems from the core of your data center all the way out to the edge. And so with NVIDIA, we really have these converged accelerators where we've combined the GPU, which does all your AI processing, with the DPU, which does your data processing. So we have this really nice convergence in that area. And I would say the third area is really around our developer environment. 
So, you know, one of the key motivations at NVIDIA is really to have our partner ecosystem embrace our technology and build solutions around our technology. So if you look at what we've done with the DPU, we've created an SDK, which is an open SDK called DOCA. And it's an open SDK for our partners to really build and develop solutions using BlueField and using all these accelerated libraries that we expose through DOCA. And so part of our differentiation is really building this open ecosystem for our partners to take advantage of and build solutions around our technology. You know, what's exciting is when I hear you talk, it's like you realize that there's no one general purpose network anymore. Everyone has their own super environment, supercloud, or these new capabilities that can really craft their own, I'd say, custom environment at scale with easy tools. And it's all kind of, again, this is the new architecture, Mike, you were talking about. How do customers run this effectively, cost effectively? And how do people migrate? Yeah, I think that is the key question, right? So we've got this beautiful architecture. Amazon Nitro is a good example of a SmartNIC architecture that has been successfully deployed, but enterprises and tier two service providers and tier one service providers and governments are not Amazon, right? So they need to migrate there and they need this architecture to be cost effective. And that's super key. I mean, the reality is, DPUs are moving fast, but they're not gonna be deployed everywhere on day one. Some servers will have DPUs right away. Some servers will have DPUs in a year or two. And then there are devices that may never have DPUs, right? IoT gateways or legacy servers, even mainframes. So that's the beauty of a solution that creates a fabric across both the switch and the DPU, right? And by leveraging the NVIDIA BlueField DPU, what we really like about it is it's open and that drives cost efficiencies. 
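The mixed-deployment point above, where some servers have DPUs on day one and others never will, is what makes a fabric spanning both switch and DPU valuable: every workload gets an enforcement point either way. A minimal sketch of that placement decision, with entirely hypothetical names and data shapes:

```python
# Hypothetical sketch: the fabric spans switches and DPUs, so a server
# without a DPU is still covered by its top-of-rack (ToR) switch, while a
# DPU-equipped server enforces policy right at the host.
def enforcement_point(server: dict) -> str:
    """Pick where the fabric enforces policy for a given server."""
    if server.get("has_dpu"):
        return f"dpu:{server['name']}"        # enforce on the host's DPU
    return f"switch:{server['tor_switch']}"   # fall back to the ToR switch

servers = [
    {"name": "app-01",    "has_dpu": True,  "tor_switch": "leaf-1"},
    {"name": "legacy-db", "has_dpu": False, "tor_switch": "leaf-2"},
    {"name": "iot-gw",    "has_dpu": False, "tor_switch": "leaf-3"},
]

# Every workload, DPU-equipped or not, ends up with an enforcement point.
points = [enforcement_point(s) for s in servers]
```

The design choice here is that the policy model stays identical everywhere; only the enforcement location differs, which is what lets operators migrate server by server instead of all at once.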
And then with this architectural approach, effectively you get a unified solution across switch and DPU, workload independent, doesn't matter what hypervisor it is, integrated visibility, integrated security, and that can create tremendous cost efficiencies and really extract a lot of the expense from a capital perspective out of the network, as well as from an operational perspective, because now I have an SDN automated solution where I'm literally issuing a command to deploy a network service or to create or deploy a security policy and it's deployed everywhere automatically, saving the network operations team and the security operations team time. All right, so let me rewind that because that's super important. It's unified cloud architecture. Say I'm the customer and it's implemented. What's the value again? Take me through the value to me. I have a unified environment. What's the value? Yeah, so, I mean, there are a few pieces of value. The first piece of value is I'm creating this clean demarc. I'm taking networking to the host and, like I mentioned, we're not running it on the CPU. So in implementations that run networking on the CPU, there's some conflict between the DevOps team who own the server and the NetOps team who own the network, because they're installing software on the CPU, stealing cycles from what should be revenue-generating CPUs. So now, by terminating the networking on the DPU, we create this real clean demarc. So the DevOps folks are happy because they don't necessarily have the skills to manage networking and they don't necessarily wanna spend the time managing it. And their network counterparts, the NetOps team, are also happy, because they want to control the networking. And now we've got this clean demarc where the DevOps folks get the services they need and the NetOps folks get the control and agility they need. So that's a huge value. The next piece of value is distributed security. 
This is essential, I mentioned it earlier. Pushing out micro-segmentation and distributed firewalls basically at the application level, where I create these small segments on an application by application basis. So if a bad actor does penetrate the perimeter firewall, they're contained once they get inside, because the worst thing is a bad actor penetrates the perimeter firewall and can go wherever they want and wreak havoc. And so that's why this is so essential. And the next benefit obviously is this unified networking operating model. Having an operating model across switch and server, underlay and overlay, workload agnostic, making the life of the NetOps teams much easier so they can focus their time on strategy instead of spending an afternoon deploying a single VLAN, for example. Awesome. And I think also from my standpoint, I mean, perimeter security, the firewall still exists, but the perimeter is being breached all the time. So you have to have this new security model. And I think the other thing that you mentioned, the separation between DevOps and NetOps, is cool, because infrastructure as code is about making the developers agile and building security in from day one. So this policy aspect is huge, new control points. I think you guys have a new architecture that enables security to be handled more flexibly. Right, right. That seems to be the killer feature here. Right. Yeah, if you look at the data processing unit, I think one of the great things about this new architecture is it's really the foundation for Zero Trust. So like you talked about, the perimeter is getting breached. And so now each and every compute node has to be protected. 
And I think that's what you see with the partnership between Pluribus and NVIDIA: the DPU is really the foundation of Zero Trust, and Pluribus is building on that vision by enabling micro-segmentation and being able to protect each and every compute node as well as the underlying network. This is super exciting. This is an illustration of how the market's evolving. Architectures are being reshaped and refactored for cloud scale and all this new goodness with data. So I've got to ask how you guys go to market together. Michael, start with you. What does the relationship look like in the go-to-market with NVIDIA? Sure. I mean, we're super excited about the partnership. Obviously we're here together. We think we've got a really good solution for the market, so we're jointly marketing it. You know, obviously we appreciate that NVIDIA is open. That's sort of in our DNA. We're about open networking. They've got other ISVs who are gonna run on BlueField, too. We're probably gonna run on other DPUs in the future. But right now, we feel like we're partnered with the number one provider of DPUs in the world and super excited about making a splash with it. I mean, NVIDIA's got the hot product. Yeah, so BlueField-2, as I mentioned, was GA last year. We're introducing, well, we now also have the converged accelerator. So I talked about artificial intelligence software with the BlueField DPU, all of that put together on a converged accelerator. The nice thing there is you can either run those workloads separately, so if you have an artificial intelligence workload and an infrastructure workload, you can run them separately on the same platform, or you can actually run artificial intelligence applications on the BlueField itself. So that's what the converged accelerator really brings to the table. So that's available now. Then we have BlueField-3, which will be available late this year. 
And I talked about how much better that next generation of BlueField is in comparison to BlueField-2. So we'll see BlueField-3 shipping later on this year. And then our software stack, which I talked about, which is called DOCA. We're on our second version, DOCA 1.2, and we're releasing DOCA 1.3 about two months from now. And so that's really our open ecosystem framework that allows you to program the BlueField. So we have all of our acceleration libraries and security libraries, all packed into this SDK called DOCA. And it really gives that simplicity to our partners to be able to develop on top of BlueField. So as we add new generations of BlueField, next year we'll have another version and so on and so forth. DOCA is really that unified layer that allows BlueField to be both forward compatible and backward compatible. So partners only really have to think about writing to that SDK once, and then it automatically works with future generations of BlueField. So that's the nice thing around DOCA. And then in terms of our go-to-market model, we're working with every major OEM. So later on this year, you'll see major server manufacturers releasing BlueField-enabled servers. So more to come. Awesome. Save money, make it easier, more capabilities, more workload power. This is the future of cloud operations. Yeah, and one thing I'll add is, we have a number of customers, as you'll hear in the next segment, that are already signed up and will be working with us for our early field trial, starting late April, early May. We are accepting registrations. You can go to www.pluribusnetworks.com/eft if you're interested in signing up to be part of our field trial and providing feedback on the product. Awesome, innovation in networking. Thanks so much for sharing the news. Really appreciate it, thanks so much. Okay, in a moment, we'll be back to look deeper at the product, the integration, security, and zero trust use cases. 
You're watching theCUBE, the leader in enterprise tech coverage.