From theCUBE studios in Palo Alto and Boston, connecting with thought leaders all around the world, this is a CUBE Conversation.

Hi, I'm Stu Miniman, and welcome to this CUBE Conversation. We're digging in with Pensando, talking about the technologies they're using, and happy to welcome to the program two of Pensando's technical leaders. We have Krishna Doddapaneni, the vice president of software, and we have Pirabhu Raman, a principal engineer, both with Pensando. Thank you so much for joining us.

Thank you, Stu.

All right, so Krishna, you run the software team, so let's start there. Talk about the mission, and bring us through a little bit of what Pensando is doing architecturally.

Yeah, so to get started: at Pensando, we are building a platform that can automate and manage network, storage, and security services. When we talk about software here, the breadth of the software starts all the way from the bootloader and goes all the way up to a microservices-based controller. Fundamentally, the company is building a domain-specific processor, the DSP, which goes on a card called the DSC, and that card goes into a server in a PCIe slot. Since we go into a server and act as a NIC, we have to write drivers for all the OSes: Windows, Linux, ESXi, and FreeBSD. And on the card itself, on the chip, there are two fundamental pieces. One is the P4 pipelines, where we run all our applications: think firewalls, virtualization, or security applications. And then there's an ARM SoC, where we bring up the platform and run the control, data, and management planes. That's one piece of the software. The other big piece of software is called PSM. If you think about a data center, you don't want to manage one DSC at a time or one server at a time; you want to manage thousands of servers using a single management and control point.
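The single-control-point idea behind PSM can be sketched very roughly in Python. This is a minimal illustration only; the `PolicyManager` and `DSC` names and the push model here are assumptions for the sketch, not Pensando's actual API or architecture:

```python
from dataclasses import dataclass

# Illustrative sketch of "declare policy once, fan out to every card".
# Class and field names are hypothetical, not Pensando's real interfaces.

@dataclass
class FirewallRule:
    src: str
    dst: str
    action: str  # "allow" or "deny"

class DSC:
    """Stand-in for one distributed services card."""
    def __init__(self, card_id: str):
        self.card_id = card_id
        self.rules = []

    def apply(self, rules):
        self.rules = list(rules)

class PolicyManager:
    """Single management and control point for the whole fleet."""
    def __init__(self):
        self.cards = []
        self.rules = []

    def register(self, card):
        self.cards.append(card)
        card.apply(self.rules)       # new cards inherit the current intent

    def declare(self, rule):
        self.rules.append(rule)
        for card in self.cards:      # fan out to every registered card
            card.apply(self.rules)

mgr = PolicyManager()
cards = [DSC(f"dsc-{i}") for i in range(1000)]
for c in cards:
    mgr.register(c)

# One declaration reaches all thousand cards.
mgr.declare(FirewallRule("10.0.0.0/8", "0.0.0.0/0", "deny"))
assert all(len(c.rules) == 1 for c in cards)
```

The point of the sketch is the shape of the workflow: the operator expresses intent once, and the control plane, not the operator, handles distribution to every card.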
And that's where the PSM comes in.

Yeah, excellent. You talked about a pretty complex solution there. One of the big discussion points in the networking world, and IT in general, has been the role of software. I think we all know it got a little overblown; software does not mean that hardware goes away. I wrote a piece many years ago: if you look at how hyperscalers do things, they hyper-optimize. They don't just buy the cheapest, most generic thing. They configure things and roll them out at massive scale. Your team is well known from a chip standpoint; I think about the three Cisco spin-ins. If you dug underneath the covers, yes, there was software, but there was an ASIC there. So when I look at what you're doing at Pensando, you've got software and there is a chip. At the end of the day, the first form factor of this looks like a network card, a NIC that fits in there. So give us some of the challenges of software. There's so much diversity in hardware these days, everything getting ready for AI and GPUs, and you listed off a bunch of pieces when you were talking about the architecture. Give us that software-hardware dynamic, if you would.

So, if you look at where the industry has been going: Moore's law has been ending, and Dennard scaling is done. If you wanted to run all the network, storage, and security services on x86, you'd be wasting a bunch of x86 cycles. Why does the customer buy x86? To run his applications, not to run IO, or do security for IO, or policies for IO.
So where we come in is, basically, we build this domain-specific processor, which takes away all the IO part of it, so that just the compute of the application is left for x86. The rest is all offloaded to what we call the Pensando card. The NIC is one part of what we do; the NIC is how we connect to the server. But what we do inside the card is firewalls, all the networking functions, SDN, load balancing, and all the storage functions: NVMe virtualization and encryption of the packets, of data at rest and data in motion. All those services are what we do in this card. And yes, it's an ASIC, but if you look at what we do inside, it's not a fixed ASIC. We did work, as you said, with ASICs in our previous experience, but there's a fundamental difference between those ASICs and this one. In those ASICs, for example, there's a hard-coded routing table or a hard-coded ACL table. This ASIC is completely programmable. We have a domain-specific language called P4, and we use P4 to program the ASIC. So the way I look at it, it's an ASIC, but it's completely software-driven, all the way from the controllers down to what programs you run on the chip.

Excellent. So, Pirabhu, of course, the big announcement here is HPE; you've now got the product becoming generally available this month. We watched from the launch of Pensando, obviously having HPE as not only an investor but an OEM of the product. They've got a huge customer base. Maybe help explain, from the enterprise standpoint: if I'm buying ProLiant, where am I going to be thinking about Pensando? What are the specific use cases? How does this translate to the general enterprise IT buyer?

So, we cover a whole breadth of use cases.
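The match-action model behind the P4 programmability Krishna describes can be illustrated with a toy Python sketch. Real P4 compiles declarative tables into the chip's pipeline stages; this is only a software analogy, and all names in it are hypothetical:

```python
# Toy illustration of a P4-style match-action table: entries and actions
# are reprogrammed at runtime, with no new silicon required.

class MatchActionTable:
    def __init__(self, default_action):
        self.entries = {}                 # match key -> action callable
        self.default_action = default_action

    def add_entry(self, key, action):
        self.entries[key] = action        # "reprogram the ASIC" in software

    def process(self, packet):
        key = (packet["dst_ip"],)
        action = self.entries.get(key, self.default_action)
        return action(packet)

def drop(pkt):
    return None                           # default: drop unmatched traffic

def forward_to(port):
    def action(pkt):
        return {**pkt, "egress_port": port}
    return action

acl = MatchActionTable(default_action=drop)
acl.add_entry(("10.0.0.5",), forward_to(3))

assert acl.process({"dst_ip": "10.0.0.5"})["egress_port"] == 3
assert acl.process({"dst_ip": "10.0.0.9"}) is None
```

The design point the interview makes is exactly this separation: the table structure is fixed in hardware terms, but what the table matches on and what each action does is software, so new protocols mean new table programs rather than a new chip.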
At the very basic level, if you aren't ready for all the different features, you could buy it as a basic NIC and start provisioning it, and you will get all the standard network functions. But in addition to those, you will get always-on telemetry: a rich set of metrics, and packet-capture capabilities that will help you very much in troubleshooting issues when they happen, or that you can leave always on as well. So you can do some of the TAP-style functionality that financial services use. And you get all of this without any impact on workload performance; the customers' applications don't see any performance impact when any of these capabilities are turned on. So that's the standard network-function level. Beyond that, when you are ready to enforce policies at the edge, you get stateful firewalling, distributed firewalling capabilities, connection tracking, and some of the other things Krishna touched upon, like NVMe virtualization. There are a lot of other features you can add on top.

Okay, so it sounds like we're really democratizing some of those cloud services, or cloud-like services, down to the end device, if I have this right. Maybe if you could: in networking, we tend to get very acronym-driven, with overlays and underlays and the various layers of the stack. When we talk about innovation, I'd love to hear from both of you what some of the key innovations are, if you were to highlight just one or two. Pirabhu, maybe you can go first, and then, Krishna, I'd love your follow-up.

Sure. There are many innovations, but just to highlight a few of them:
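The always-on flow telemetry Pirabhu describes is in the spirit of NetFlow-style flow accounting. A minimal sketch of that idea in Python follows; the `FlowTable` class and its methods are assumptions for illustration, not the card's actual telemetry pipeline, which runs in hardware without touching workload performance:

```python
from collections import defaultdict

# Hypothetical sketch of NetFlow-style flow accounting: per-5-tuple
# packet and byte counters, exported out of band.

class FlowTable:
    def __init__(self):
        # 5-tuple -> [packet count, byte count]
        self.flows = defaultdict(lambda: [0, 0])

    def record(self, src, dst, proto, sport, dport, size):
        stats = self.flows[(src, dst, proto, sport, dport)]
        stats[0] += 1
        stats[1] += size

    def export(self):
        """Export flow records for a collector, without an agent on the host."""
        return [
            {"flow": k, "packets": v[0], "bytes": v[1]}
            for k, v in self.flows.items()
        ]

table = FlowTable()
for _ in range(3):
    table.record("10.0.0.1", "10.0.0.2", "tcp", 12345, 443, 1500)

records = table.export()
assert records[0]["packets"] == 3
assert records[0]["bytes"] == 4500
```

The agentless angle mentioned later in the conversation falls out of this placement: because the counters live on the card in the data path, the host sees nothing installed and nothing slowed down.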
Krishna touched upon P4, but P4 is very much focused on manipulating packets: packets in, packets out. We extended it so that we can also handle memory in, packet out and packet in, memory out, so that we can interface with the host memory. We are taking those innovations to the standards bodies, and they are in the process of getting standardized as well. In addition to this, in our software stack, we touched upon the always-on telemetry capabilities: you can do flow-based packet captures and NetFlow, and you can get a lot of visibility and troubleshooting information. And the management plane itself has some state-of-the-art capabilities: it's distributed, it's highly available, and it makes it very easy for you to manage thousands of these servers. Krishna, do you want to add something more?

Yes. The biggest thing about the platform is that when we did underlays and overlays before, as you said, everything was fixed. Tomorrow, you may wake up and come up with a new protocol, or a new way to do storage. Normally in the hardware world, what happens is: oh, I have to sell you this new chip. That is not what we are doing. Here, with whatever we ship on this ASIC, you can continue to evolve and continue to innovate irrespective of changing standards. If NVMe goes from 1.2 to 1.3, or you come up with a new encapsulation beyond VXLAN, whatever encapsulation or TLVs you want, you don't need to change the hardware. It's more about downloading new firmware, upgrading to it, and you get the new feature. That's one of the key innovations, and it's why most of the cloud providers like us: we are not tied to the hardware. It's a software-programmable processor, so we can keep adding features in the future. So one way to look at it is that you get the best of both worlds.
You get the power and performance of an ASIC, but at the same time you get flexibility closer to that of a general-purpose processor.

Yeah, so Krishna, since you own the software piece, help us understand architecturally how you can deploy something today but be ready for whatever comes in the future. Because that's always been the challenge: gee, maybe if I wait another six months, there'll be another generation of something, and I want to make sure I don't miss some window of opportunity.

Yeah, that's a very good question. Basically, you can keep enhancing your features with the same performance, power, latency, and throughput. But the other important thing is how you upgrade the software. Today, whenever you have an ASIC and you have to change the ASIC, obviously you have to pull the card out and put the new card in. Here, when we're talking about upgrading software, we can upgrade the software while traffic is going through, with very minimal disruption, on the order of sub-seconds. So you can change your protocol. For example, tomorrow you change from VXLAN to your own innovative protocol; you can upgrade to that without disrupting any existing network or storage IO. That's where the power of the platform is very useful. And look at where cloud providers are going: there are customers who are using that server and deploying their applications, and they don't want their applications disturbed just because you decided to do some new innovative feature. The platform's capability is that you can upgrade it, and you can change your mind sometime in the future, but whatever existing traffic is there will continue to flow and will not disturb the app.

All right, great. Well, you're talking about clouds. One of the things we look at is multi-cloud and multi-vendor, Pirabhu.
We've got the announcement with HPE now, ProLiant and some of their other platforms. Tell us how much work it will be for you to support things like Dell servers, or, I think your team's quite familiar with it, the Cisco UCS platform. Two pieces on that. Number one, how easy or hard is it to do that integration? And from an architectural-design standpoint, does a customer need to be homogeneous in their environment, or is it independent of whatever cloud or server platform they're on, so you should be able to work across those?

Yeah, first off, I should start by thanking HPE. They have been a great partner; they were quick to recognize the potential of the synergy, and they have been very helpful throughout this integration journey. The way we see it, a lot of the work has already been done in terms of finding the integration issues with HPE, and we will build upon that integration work so that we can quickly integrate with other manufacturers like Dell and Cisco. We definitely want to integrate with other server manufacturers as well, because that is in the interest of our customers, who want to consume Pensando in a heterogeneous fashion, not just from one server manufacturer.

Yeah, I just want to add one thing to what Pirabhu was saying. Basically, the way we think about it is that there's x86, and then there's all the IO, the infrastructure services. So for us, as long as we get power from the server and we can get packets and IO across the PCIe bus, we want to make it a uniform layer. Pensando, if you think about it, is the layer that can work across servers and inside the public cloud; one of our customers is using this in a hybrid cloud. We want to be the base where we can do all the storage, network, and security services, irrespective of the server and where the server is placed, whether that's a colo, the enterprise data center, or the public cloud.
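Krishna's earlier point about upgrading firmware while traffic keeps flowing can be modeled, very loosely, as an atomic swap of the packet-processing function. This is a conceptual sketch only; the names and the lock-based mechanism are assumptions for illustration, not how the DSC firmware actually performs hitless upgrades:

```python
import threading

# Loose model of a hitless upgrade: swap the handler atomically while
# callers keep processing packets. Names are hypothetical.

class DataPath:
    def __init__(self, handler):
        self._lock = threading.Lock()
        self._handler = handler

    def process(self, packet):
        with self._lock:
            handler = self._handler    # read the current version
        return handler(packet)         # traffic continues throughout

    def upgrade(self, new_handler):
        with self._lock:               # atomic swap, sub-second disruption
            self._handler = new_handler

def vxlan_encap(pkt):
    return {**pkt, "encap": "vxlan"}

def custom_encap(pkt):
    return {**pkt, "encap": "custom-tlv"}   # hypothetical new protocol

dp = DataPath(vxlan_encap)
assert dp.process({"id": 1})["encap"] == "vxlan"

dp.upgrade(custom_encap)               # no card pull, no restart in this model
assert dp.process({"id": 2})["encap"] == "custom-tlv"
```

The contrast the interview draws is with fixed-function ASICs, where the equivalent of `upgrade()` is pulling the card and inserting a new one.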
All right, so I guess, Krishna, you said x86 first; down the road, is there opportunity to go beyond Intel processors?

Yes. We already support AMD, which is another form of x86, but our architecture doesn't prevent us from supporting any server. As long as you follow the PCIe standard, it's more of a testing-matrix issue than a question of OS support; we should be able to support it. And early on we also tested on PowerPC, so we should be able to handle any kind of CPU architecture.

Okay, so walk me up the application stack a little bit: things like virtualization and containerization. There's the question of "does it work," but does it optimize? Many of us lived through those waves of "okay, well, it kind of worked," but then it took a lot of time to make things like storage and networking work well in virtualization, and then in containerization. So how about your solution?

A good example is AWS and what AWS does with Nitro. On Nitro, you do EBS, you do security, and you do VPC. Effectively, if you think about it, all of those services can be encapsulated in one DSC card. And obviously, when it comes to this kind of implementation on one card, the first question you would ask is: what happens with the noisy neighbor? So we have the right QoS mechanisms to make sure that, with all the services going through the same card at the same time, we give guarantees to the customer, especially in a multi-tenant environment, that whatever you're doing in one VPC will not affect the other VPC. And the advantage of our platform is that it is highly scalable and highly performant; scale will not be the issue. If you look at existing platforms, even in the cloud, when we were building this product we obviously did benchmarking, right?
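One classic mechanism for the noisy-neighbor isolation Krishna mentions is per-tenant rate limiting with a token bucket. The sketch below is a generic illustration of that technique, not Pensando's QoS implementation, which runs in the card's hardware pipelines:

```python
# Generic per-tenant token bucket: each VPC spends from its own budget,
# so one tenant's burst cannot consume another tenant's share.

class TokenBucket:
    def __init__(self, rate_bytes, burst_bytes):
        self.rate = rate_bytes        # tokens added per tick
        self.tokens = burst_bytes     # start with a full burst budget
        self.burst = burst_bytes

    def tick(self):
        self.tokens = min(self.burst, self.tokens + self.rate)

    def admit(self, size):
        if self.tokens >= size:
            self.tokens -= size
            return True               # within this tenant's guarantee
        return False                  # noisy tenant gets throttled

tenants = {
    "vpc-a": TokenBucket(rate_bytes=1000, burst_bytes=2000),
    "vpc-b": TokenBucket(rate_bytes=1000, burst_bytes=2000),
}

# vpc-a tries to burst far beyond its share; vpc-b is unaffected.
sent_a = sum(tenants["vpc-a"].admit(500) for _ in range(10))
assert sent_a == 4                    # 2000-byte burst = four 500-byte packets
assert tenants["vpc-b"].admit(500)    # vpc-b still has its full budget
```

The key property is per-tenant accounting: admission decisions for one VPC never read or drain another VPC's bucket.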
Against the clouds and the enterprise offerings, with respect to scale, performance, and latency, we did the measurements, and we are an order of magnitude better than even the existing clouds and what enterprise customers currently have.

Excellent. So, Pirabhu, I'm curious, from the enterprise standpoint: are there certain applications, say from an analytics standpoint, Splunk being so heavily involved in data, that might be a natural fit? Or other things that might not be fully tested out yet, anything in that ISV world we need to think about?

So, talking in terms of partner ecosystems: enterprise customers use many other products as well, and we are integrating with those products so that customers get the maximum value. If you look at it, you get rich metrics and visualization capabilities from our product, which can be very helpful for partner products, because they don't have to install an agent and they get the same capability across bare metal, virtual stacks, and containers. So we are integrating with various partners, including some CMDB (configuration management database) products, as well as data analytics and network traffic analytics products. Krishna, do you want to add anything?

Yeah. Beyond just the analytics products, we're also integrating with VMware, because right now VMware is the compute orchestrator and we want to be the network policy orchestrator. In the future, we want to integrate with Kubernetes and OpenShift. We want to add integrations so that the platform's capabilities can be easily consumed irrespective of what kind of workload you run, what kind of traffic analytics tool you use, or what kind of data lake you use in your enterprise data center.

Excellent. Well, I think that's a good view forward as to where some of the work is going on future integrations.
Krishna and Pirabhu, thank you so much for joining us. Great to catch up.

Thank you.

Thank you very much.

All right, I'm Stu Miniman. Thank you for watching theCUBE.