From theCUBE Studios in Palo Alto and Boston, connecting with thought leaders all around the world, this is a CUBE Conversation. Hi, and welcome to this CUBE Conversation. I'm Stu Miniman, coming to you from our Boston area studio. We've been digging in with the Pensando team to understand how they fit into the cloud, multi-cloud, and edge discussion. Really thrilled to welcome to the program a first-time guest, Silvano Gai, a fellow with Pensando. Silvano, really nice to see you again. Thanks so much for joining us on theCUBE. Stuart, it's so nice to see you. We worked together many years ago, and that was really good, and it's really nice to come to you from Bend, Oregon, a beautiful town in the high desert of Oregon. Yeah, I do love the Pacific Northwest. I should say I don't miss the planes and the hotels, but going to see some of the beautiful places is something I do miss, and getting to see the people in the industry I do like. As you mentioned, you and I crossed paths back through some of those spin-ins, back when I was working for a very large storage company and you were working for Cisco. You were known for writing the book. You were a professor in Italy, and many of the people who worked on some of those technologies were your students. But Silvano, my understanding is you'd retired. So maybe share with our audience: what brought you out of that retirement and into working once again with some of your former colleagues, and on the Pensando opportunity? I did retire for a while. I retired from Cisco in 2011, if I remember correctly. But at the end of 2016, beginning of 2017, some old friends that you may remember and know called me to discuss some interesting ideas, which were basically the seed ideas behind the Pensando product. And the ideas were interesting. What we built, of course, is not exactly the original idea, because, you know, products evolve over time.
But I think we have something interesting that is adequate, and probably superb, for the new way to design the data center network, both for the enterprise and the cloud. All right. And Silvano, I mentioned that you've written a number of books, really the authoritative look when some new products had been released before. So you've got a new book, Building a Future-Proof Cloud Infrastructure. And look at you, you've got the physical copy; I've only gotten the soft version. The title is really interesting. Help us understand how Pensando's platform is delivering that future-proof cloud infrastructure that you discuss. Well, networks have evolved dramatically in the data center and in the cloud. You know, now the speed of a classical server in the enterprise is probably 25 gigabits. In the cloud, we are talking of 100 gigabits of speed for a server, going to 200 gigabits. Now the backbones are ridiculously fast. We no longer use spanning tree and all that stuff. We no longer use access-aggregation-core designs. We switched to Clos networks. And these Clos networks have a huge, enormous amount of bandwidth. And that is good, but it also implies that it's not easy to do services in a centralized fashion. If you try to do a service in a centralized fashion, what you end up doing is creating a giant bottleneck. There is this word that is being used, trombone or tromboning: you try to funnel all this traffic through that bottleneck, and it's not really going to work. The only place that you can really do services is at the edge. And this is not an invention. I mean, the whole principle of the cloud is: move everything to the edge and keep the network as simple as possible. So we approach services with the same general philosophy. We try to move services to the edge, as close as possible to the server, basically at the border between the server and the network. And when I say services, I mean three main categories of services.
First, the networking services. There is the basic Layer 2 and Layer 3 stuff, plus the bonding, you know, MLAG and whatever is needed to connect the server to a network. But then there are the overlays, like VXLAN or Geneve, which are very, very important, basically, to build a cloud infrastructure. Those are basically the network services. There can be others, you know. Some people want to run BGP there, some people don't want to run BGP, there may be EVPN or things like that, but that is the core of the network services. Then of course, and we go back to the time we worked together, there are the storage services. At that time we were discussing mostly fibre channel. Now the buzzword is clearly NVMe, but it's not just a buzzword, it's really a new way of doing storage, and it's very, very interesting. So NVMe kinds of services are very important. And NVMe has a version that is called NVMe-oF, NVMe over Fabrics, which is basically a remote version of NVMe. And then the third, last but not least and probably most important, category is security. The fact that security is very important is clear to everybody nowadays. And I think security has two main branches in services. There is the classical firewall and microsegmentation, with which you basically try to enforce that only those who are allowed to access something can access it. But at that point you don't care too much about the privacy of the data. Then there is the other branch, which is encryption, where you are not trying to decide who can or cannot access the resource, but you are caring about the privacy of the data: encrypting the data so that if it is hijacked, snooped, or whatever, it cannot be decoded. Excellent. So, Silvano, absolutely.
The edge is a huge opportunity. When someone looks at the overall solution and sees you're putting something at the edge, they could just say this really looks like a NIC. You talked about some of the previous engagements we worked on, you know, host bus adapters, smart NICs and the like. There were some things we could build in, but there were limits to what we had. So, you know, what differentiates the Pensando solution from what we would traditionally think of as an adapter card in the past? Well, the Pensando solution has multiple pieces, but in terms of hardware it has two main pieces. There is an ASIC that we call Capri internally. That ASIC is not strictly tied to being used only in an adapter form; you can deploy it in other form factors, in other parts of a network, in other embodiments, et cetera. And then there is a card. The card has a PCIe interface and sits in a PCIe slot. So yes, in that sense, somebody can call it a NIC. And since it's a pretty good NIC, somebody can call it a smart NIC. You know, we don't really like those two terms. We prefer to call it a DSC, a domain-specific card. But the real term that I like to use is domain-specific hardware. And I like to use domain-specific hardware because it's the same term that Hennessy and Patterson use in a beautiful piece of literature, their Turing Award lecture. It's on the internet, it's public. I really ask everybody to go find it and listen to that beautiful piece of modern literature on computer architecture, the Turing Award lecture of Hennessy and Patterson. There they introduce the concept of domain-specific hardware, and they explain the justification for why now is the time to look at domain-specific hardware. And the justification, in a nutshell (we can go deeper if you are interested), is that SPECint, the single-thread performance measurement of the CPU, is not growing fast at all.
It's only growing nowadays by a few percent per year, maybe four percent per year. And with this slow growth of the SPECint performance of the core, you know, the cores need to be really used for user applications, customer applications. And everything that is non-essential can be moved to some domain-specific hardware that can do it in a much better fashion. And by no means do I imply that the DSC is the best example of domain-specific hardware. The best example of domain-specific hardware is in front of the eyes of all of us: the GPU. And not GPUs used for graphics processing, which are also important, but GPUs used for artificial intelligence, machine learning, inference. That is a piece of hardware that has shown that something can be done with performance outside the CPU, but there are others. Yeah, it's interesting, right? If you turn back the clock 10 or 15 years ago, I used to be in arguments where you'd say, you know, do you build an offload or do you let it happen in software? And it was always like, oh, well, Moore's law will mean that the software solution will always win, because if you bake it in hardware, it's too slow. It's a very different world today; you talked about how fast things speed up. From your customer standpoint, though, often some of those architectural things are something that, you know, I look for my suppliers to take care of. Speak to the use case. What does this all mean from a customer standpoint? What are some of those early use cases that you're looking at? Well, as always, you know, you get a bit surprised by the use cases, in the sense that you start to design a product thinking that some of the coolest things will be the dominant use cases, and then you discover that something you had never really thought of is the most interesting use case. One that we have thought about since day one, but that is really becoming super interesting, is telemetry.
Basically measuring things in the network and understanding what is happening in the network. I was speaking with a friend the other day, and the friend asked me, oh, but we have had SNMP for many, many years; what is the difference between SNMP and telemetry? And the difference, to me, the real difference, is that in SNMP and many of these management protocols, you involve a management plane, you involve a control plane, and then you go to read something that is in the data plane. But the process is so inefficient that you cannot really get a huge volume of data, and you cannot get it frequently enough, with enough performance. Doing telemetry means designing a data path, building a data path that is capable of not only measuring everything in real time, but also sending out those measurements without involving anything else, without involving the control path or the management path, so that the measurement becomes really very efficient, and the data that you stream out becomes really usable, actionable data in real time. So telemetry is clearly the first important one. One that, honestly, we had built but weren't thinking was going to have so much success is what we call bidirectional ERSPAN. Basically it's just the capability of copying data that the card sees and sending it to a station. And that is very, very useful for replacing what are called TAP networks, which are networks that many customers put in parallel to the real network, just to observe the real network and to be able to troubleshoot and diagnose problems in the real network. So these two features, telemetry and ERSPAN, which are basically troubleshooting features, are the two features that at the beginning are getting the most traction. You're talking about real-time things like telemetry; the applications and the integrations that you need to deal with are so important.
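A toy sketch may help make the polling-versus-streaming distinction concrete. The code below is purely illustrative (the counter and collector objects are hypothetical, not Pensando APIs): a data-plane counter pushes every measurement to a collector as it happens, while an SNMP-style poller only samples the counter before and after a burst of traffic and misses everything in between.

```python
from collections import deque

class DataPlaneCounter:
    """Toy data-plane counter that pushes every update to a collector,
    the way streaming telemetry exports directly from the data path."""
    def __init__(self, collector):
        self.packets = 0
        self.collector = collector

    def on_packet(self, size):
        self.packets += 1
        # Export straight from the data path: no control-plane or
        # management-plane round trip per measurement.
        self.collector((self.packets, size))

events = deque()
counter = DataPlaneCounter(events.append)

polled = [counter.packets]          # SNMP-style: sample before...
for size in (64, 128, 1500):
    counter.on_packet(size)
polled.append(counter.packets)      # ...and after a burst of traffic

print(len(events))  # 3: streaming captured every packet
print(polled)       # [0, 3]: polling saw only the endpoints
```

The point is the shape of the data flow, not the Python: the streaming side exports per-event, actionable records, while the polling side can only report aggregate counter values at whatever interval the management plane asks.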
Back in some of the previous startups that you've done, it was getting ready for, say, how do we optimize for virtualization? Today you talk cloud-native architectures: streaming is very popular, solutions are very modular and often container-based, and things change constantly. You look at some of these architectures, and it's not a single thing that goes on for a long period of time, but lots of things that happen over shorter periods of time. So what integrations do you need to do, and architecturally, how do you build things to make them, as you say, future-proof for these kinds of cloud architectures? Yeah, I mean, what I mentioned were just the first two low-hanging fruit of this architecture. But the two that come immediately after, and where there is a huge amount of value, are, first, the distributed stateful firewall with microsegmentation support. That is a huge topic in itself, so important nowadays that it is absolutely fundamental to being able to build a cloud. And the second one is wire-rate encryption. There is so much demand for privacy, and so much demand to encrypt the data, not only between data centers but now also inside the data center, you know. And when you look at a large bank, for example, a large bank is no longer a single organization. A large bank is multiple organizations that are compartmentalized by law, that need to keep things separate by law, by regulation, by SEC regulation, you know. And if you don't have encryption and you don't have a distributed firewall, it's really very difficult to achieve that. And then, you know, there are other applications. We mentioned storage and NVMe, and that's a very nice application. And there are even more if you go look at load balancing between servers, doing compression for storage, and other possible applications. But I sort of lost your real question.
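The bank example can be made concrete with a minimal default-deny microsegmentation check. Everything here is made up for illustration (the segment names, addresses, and rule format are hypothetical); a real distributed firewall would enforce a policy like this per-flow, in hardware, at every server's edge rather than at one central choke point.

```python
# Hypothetical mapping of workload IPs to compartments (segments).
SEGMENTS = {
    "10.0.1.5": "trading",
    "10.0.1.7": "trading",
    "10.0.2.9": "retail",
}

def allowed(src_ip, dst_ip, rules):
    """Permit a flow only if an explicit rule allows the
    (source segment, destination segment) pair: default deny."""
    pair = (SEGMENTS.get(src_ip), SEGMENTS.get(dst_ip))
    return pair in rules

# Compartmentalized by policy: trading may only talk to trading.
rules = {("trading", "trading")}

print(allowed("10.0.1.5", "10.0.1.7", rules))  # True: same compartment
print(allowed("10.0.1.5", "10.0.2.9", rules))  # False: crosses compartments
```

The key property is the default-deny stance: a flow between compartments is dropped unless a rule explicitly permits it, which is what keeps legally separated organizations inside one bank from seeing each other's traffic.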
So just part of the piece is, when you look at, you know, the integrations that Pensando needs to do, maybe some of the applications that you would tie into, are there any of those that come to mind? Yeah, well, for sure. It depends, you know; I see two main branches again. One is the cloud providers and one is the enterprise. The cloud providers basically have a huge management infrastructure that is already built, and they just want the card to adapt to it, to be controllable by this huge management infrastructure. They already know which rules they want to send the card. They already know which features they want to enable on the card. They already have all that; they just want the card to provide the data plane performance for those particular features. So we are going to build something particular that is specific for that particular cloud provider, that adapts to that cloud provider's architecture. We want the flexibility of having an API on the card, like a REST API or gRPC, with which they can easily program, monitor, and control that card. When you look at the enterprise, the situation is different. Enterprises are looking for two or three things. The first thing is a complete solution. They don't have the management infrastructure that a cloud provider has built, so they want a complete solution that has the card and the management station, everything that is required to make a working solution from day one, which is absolutely correct in the enterprise environment. They also want integration, and the integration is with tools that they already have. If you look at many enterprises, one of the dominant presences is clearly VMware virtualization, in terms of ESXi, vSphere, and NSX. And so most of the customers are asking us to integrate with VMware, which is a very reasonable demand.
And then, of course, there are other players, not so much in the virtualization space, but, for example, in the data collection space and the data analysis space. And for sure, Pensando doesn't want to reinvent the wheel there, doesn't want to build a data collector or a data analysis engine or whatever. That is a lot of work, and there are a lot of them out there. So integrations with things like Splunk, for example, are kind of natural for Pensando. Excellent, so right, you talked about some of the places where Pensando doesn't need to reinvent the wheel, and talked through a lot of the different technology pieces. If I had you pull out one, what would you say is the biggest innovation that Pensando has built into the platform? Well, the biggest innovation is this P4 architecture. And the P4 architecture was a sort of gift that was given to us, in the sense that it was not invented for what we use it for. P4 was basically invented to have programmable switches. The first big P4 company was clearly Barefoot, which was then acquired by Intel, and Barefoot built programmable switches. But if you look at the reality of today, most of the people want the network to be super easy. They don't want to program anything into the network. They want to program everything at the edge. They want to put all the intelligence and the programmability at the edge. So we borrowed the P4 architecture, which is a fantastic programmable architecture, and we implemented it at the edge. You know, it's also easier, because the bandwidth is clearly more limited compared to being in the core of the network. And that P4 architecture gives us a huge advantage. If tomorrow you come up with the Stuart super-duper encapsulation technology, I can implement the Stuart super-duper encapsulation technology in Capri easily, even though, when we designed the ASIC, I didn't know that encapsulation existed.
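The point about supporting an encapsulation that did not exist at design time can be sketched with a toy match-action table. This is only an analogy in Python (the packet format and the "superduper" protocol are hypothetical): real P4 programs the parser and match-action pipeline of the hardware itself, but the principle is the same, new protocols are handled by installing new table entries and programs rather than respinning silicon.

```python
class MatchActionTable:
    """Toy match-action table in the spirit of P4: the pipeline stays
    fixed, while new behavior arrives as new (match, action) entries."""
    def __init__(self):
        self.entries = []  # list of (match_fn, action_fn), searched in order

    def add(self, match_fn, action_fn):
        self.entries.append((match_fn, action_fn))

    def apply(self, pkt):
        for match, action in self.entries:
            if match(pkt):
                return action(pkt)
        return pkt  # default action: pass through unchanged

table = MatchActionTable()

# Hypothetical "superduper" encapsulation, added after the fact:
# wrap matching packets in an outer header without touching the pipeline.
table.add(lambda p: p.get("proto") == "superduper",
          lambda p: {"outer": {"encap": "superduper"}, "inner": p})

plain = table.apply({"proto": "tcp", "payload": "hi"})
encap = table.apply({"proto": "superduper", "payload": "hi"})
print(plain["proto"])  # tcp: untouched by the new entry
print(encap["outer"])  # {'encap': 'superduper'}
```

The design choice this illustrates is separating the fixed, wire-speed machinery (the table walk) from the protocol-specific logic (the entries), which is what lets a P4 device absorb protocols invented after the chip taped out.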
It's the data plane programmability, the capability to program the data plane while maintaining wire-speed performance, which I think is the biggest benefit of Pensando. All right, well, Silvano, thank you so much for sharing your journey with Pensando so far. Really interesting to dig into it, and we absolutely look forward to following the progress as it goes. Stuart, it's been really a pleasure to talk with you. I hope to talk with you again in the near future. Thank you so much. All right, thank you for watching theCUBE. I'm Stu Miniman. Thanks for watching.