From theCUBE studios in Palo Alto and Boston, connecting with thought leaders all around the world, this is a CUBE Conversation. Hi, I'm Stu Miniman, and welcome to a CUBE Conversation. I'm coming to you from our Boston-area studio. Happy to welcome to the program a first-time guest, Francis Matus. He's the vice president of engineering at Pensando. Francis, thanks so much for joining us. Thank you. Glad to be here. Alright, so Francis, you and I actually overlapped in some of the companies we've worked with. If anybody's familiar with Pensando, you've worked with some of the MPLS team over the years through some of those spin-ins. But for our audience, give us a little bit about your background and what brought you to help start Pensando and be part of that team. Sure, yeah. So I started my career with Advanced Micro Devices in the mid-90s. I got out of school and really wanted to build microprocessors, and AMD being in Austin, Texas, and me going to LSU for undergrad was a perfect sort of alignment. So I got to AMD in Austin, built the K5, worked on that team, worked on the K7 team, and then I came out to California to help with the K8. Then we got into the dot-com era, and being at AMD fighting Intel, so to speak, seemed like a hard battle. With the dot-com era coming, I saw this perfect opportunity to jump into the internet, and that's how I got into building internet and data communications equipment. I went to Nishan Systems; we talked a little bit about that earlier. That got me into storage, and from there I got into a company called Andiamo, which was building Fibre Channel SAN equipment. So I built chips there and got to know the MPLS team. I always say they hired me off the street. And from that point on, well, we've been together since 2001. So 19 years, yeah. We've been building silicon and systems together for almost 20 years now.
So it's been quite a journey. Yeah, it's been fun. Great stuff, yeah. Going back to Nishan and talking about iSCSI: networking in general is a bit of a dark art for most people. The networking protocols, all the various pieces, and the three- and four-letter acronyms aren't something most people are familiar with. So I'm curious: when you describe Pensando, how do you explain it to people who maybe aren't deep into east-west versus north-south traffic, or overlay and underlay protocols? Yeah, absolutely. For me, Pensando was the culmination of all the things I've done in my career. Processing, being able to build compute engines that are programmable, starting with microprocessors; storage and storage networking with Andiamo; then we built the computer with Nuova, and the virtualization layers around the Ethernet interfaces in the adapter with what was really our first smart NIC in the 2006, 2007 timeframe. And then with SDN and Insieme, all of these elements came together, these multiple different layers in the infrastructure stack, if you will. What was interesting to me with Pensando was the explosion of scale in both space and time: the advent of, let's say, 25 gig, 50 gig, 100 gig to the server, the notion of very dense computing in each rack, and the need for very high scale. After doing all of these technologies and seeing where silicon started to fall into place with 16 nanometer, it seemed that bringing this kind of technology to the edge at very low power, with an end-to-end security architecture, an end-to-end policy engine architecture, and distributed services, as we're doing, all naturally fit into place. And the cloud was already proving this model. When I say the cloud, I mean the hyperscalers like Amazon and Microsoft were already building these platforms.
And so it dawned on me that I didn't think this was possible unless you built the entire platform, the entire system. If you built any one piece, the market transition would take a lot longer. I think this is true throughout technology history; it tends to repeat itself, starting with mainframes, when IBM built the entire computer, DEC built the entire computer, and HP built theirs. These kinds of things are important if you want to really push a market transition. And so Pensando became this opportunity to take all of the things I've done in my past life and bring them together in a way that would give a complete stack for the purposes of what I call the new computer, which is basically the data center. So when my mom asks me what it is that I'm doing, I say: imagine the computer you have right now, multiply it by thousands and thousands, stack them in racks, and anyone can use any of it at any one time. We provide the infrastructure and the mechanisms to orchestrate and control that at the very high-speed layers. So, I don't know, that was a long answer. No, no, it's fascinating stuff. When I look at the industry, cloud, of course, is that mega wave that changed the way a lot of people look at this, the way we architect things. There was this belief for a number of years that I'm going to go from this complicated mess I had in my own data centers, and cloud is going to be inexpensive and easy. I don't think anybody thinks "inexpensive and easy" when they look at cloud computing these days, and then you add edge into these environments. So I guess what I'm asking is: today's environment, we know, is always additive, so I have various pieces that I need to put together. You talked about building platforms, so how can it be a complete stack?
Companies like Oracle for many years said, we can do everything from the silicon all the way up through your application. Amazon, in many ways, does the same thing: you can build everything on Amazon, but they built out their ecosystem. So how does Pensando fit into this multi-cloud, multi-dimensional, multi-vendor world? Yeah, that's a good question. One of the things we wanted to do was bring a systematic management layer to heterogeneous computing. What I mean by that is that in any modern enterprise data center, you're going to have multiple types of computing. You're going to have virtual machines, you're going to have bare metal, and, at least in the last three or four years, chances are you'll have some containers or be moving there. What we wanted to do was provide an infrastructure, a management mechanism, where all of these heterogeneous types of computing could be managed the same way with respect to policy. And what I mean by policy is this declarative or intent-based model: I declare what I'd like to see, whether that's network policy or end-to-end security with data in motion, and it gets applied in a distributed manner across these different heterogeneous elements. The cloud has the advantage that it's homogeneous for the most part; they own the entire infrastructure and can control everything on it. Now, our systems will obviously manage homogeneous systems as well, and in many ways that's easier. But bringing together this notion of heterogeneous types of computing with one management plane, one type of interface for the operator, specifically the networking services operator, was fundamental. And the second thing is being able to bring the scale and speed to the edge.
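That declarative, intent-based model can be sketched roughly as follows. This is a minimal illustration in Python; the class and field names are hypothetical, not Pensando's actual API. The key idea is that the same declared intent is pushed to every workload, whatever its type, and the enforcement point translates it locally.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SecurityPolicy:
    """A declared intent: what traffic is allowed, not how to enforce it."""
    name: str
    from_segment: str
    to_segment: str
    protocol: str
    port: int
    action: str  # "allow" or "deny"

def apply_policy(policy, workloads):
    """Push one declared policy to every workload, regardless of its type."""
    results = {}
    for w in workloads:
        # VM, bare metal, and container all receive identical intent;
        # the enforcement point at the server edge does the translation.
        results[w["name"]] = f"{policy.action} {policy.protocol}/{policy.port} ({w['type']})"
    return results

policy = SecurityPolicy("web-to-db", "web", "db", "tcp", 5432, "allow")
workloads = [
    {"name": "vm-01", "type": "virtual-machine"},
    {"name": "bm-07", "type": "bare-metal"},
    {"name": "pod-42", "type": "container"},
]
print(apply_policy(policy, workloads))
```

The point of the sketch is the shape of the interface: the operator declares once, and heterogeneity is hidden below the management plane.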
A top-of-rack switch, or something in the middle of the network, is obviously very dense in terms of its IO capability. The silicon area you spend in building a high-speed switch is mostly spent on the IO; typically 30% to 40% of the area will be IO. The rest will be very much hardwired control protocols. We know that as we go to SDN services, and we want, let's say, software-defined mechanisms in terms of what the policy looks like, what the protocols look like, the ability to change over the lifespan of the computer, which is three to five years, you want that to be programmable. That's very difficult to apply at very dense scale in the core of the network. So it was an obvious move to bring it to the edge, where we could plug it into the server effectively, just like we did in the UCS system with Nuova. Yeah, some really tough engineering challenges. For the longest time, the networking world was very predictable: you go from one gig to 10 gig. There was a little discussion about how we'd take the next step, whether 25, 40, 50, and now 100 gig. But you talk about containerized architectures, you talk about distributed systems with edge; things change at a much more granular level and change much more frequently. So what are some of the design principles and challenges that make sure you're ready for what's happening today, but also for the technology changes that are always coming? That's right. Yeah, so I think the biggest challenges we have are around power, with respect to the design, and then the usefulness of each transistor. When you look at the scale of flexibility, CPUs are the most flexible, obviously, but have probably the least performance. FPGAs are pretty useful in terms of their flexibility, but not very dense in terms of their logic capability.
And then you have hardwired ASICs, which are extremely dense, very much purpose-built logic, but completely inflexible. So the design challenge put in front of us was: how do we find that sweet spot of extremely programmable, extremely flexible, but still with a cost profile that doesn't look like an FPGA, while getting the benefits of the CPU? That's where this notion of domain-specific processing came in, which is: okay, we're going to solve a few problems, and we're going to solve them well. Those few problems are going to be PCIe services, networking services, storage services, and security services around the edge of the computer, so that we can offload, or let's say partition correctly, the computing problem in a data center. To do that, we knew a core of CPUs wasn't going to do the job; that's basically borrowing from this guy to pay that other guy, right? So we wanted to bring in this notion of domain-specific processing, and that's where our design challenges came in: okay, so now we build around this language called P4; what is the most optimal way to pack the most threads or processing elements into the silicon while managing the memory bandwidth? Packet processing has been said to be embarrassingly parallel, which is true; however, the memory bandwidth is insane. So how do we build a system that ensures memory is not the bottleneck? Obviously, we're producing a lot of data and computing on a lot of data. These were some of our design challenges, all within a power envelope where this device could sit at the edge, inside a computer, within the typical power profile of a PCIe-attached card in a modern server. So that was a huge design challenge for us. Yeah, I'd love to hear about that; it was a multi-year journey to launch the solution.
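To get a feel for why the memory bandwidth is "insane" at these speeds, here is a quick back-of-the-envelope calculation, assuming the worst case of minimum-size 64-byte Ethernet frames, which occupy 84 bytes on the wire once you add the 8-byte preamble and 12-byte inter-frame gap:

```python
LINK_BPS = 100e9                    # 100 Gb/s link to the server
FRAME_BYTES = 64                    # minimum Ethernet frame
WIRE_BYTES = FRAME_BYTES + 8 + 12   # plus preamble and inter-frame gap

# Packets per second at line rate with minimum-size frames
pps = LINK_BPS / (WIRE_BYTES * 8)

# Suppose each packet is written to memory once and read back once
mem_bw = 2 * FRAME_BYTES * pps      # bytes/second of memory traffic

print(f"{pps / 1e6:.1f} Mpps")      # ~148.8 Mpps
print(f"{mem_bw / 1e9:.1f} GB/s")   # ~19.0 GB/s for the payload alone,
                                    # before any table lookups or metadata
```

Nearly 150 million packets per second, and roughly 19 GB/s of memory traffic just to move the payload in and out once, is why the design has to treat memory, not compute, as the scarce resource.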
I think of the old world, which was very much hardware-centric: 18 to 24 months for design, all the tape-outs you need to do. It sounds like there's obviously still hardware, but it's more software-driven than it would have been 10 years ago. So give us some of the ups and downs of that journey; I'd love to hear any stories you can share. Well, yeah, good question. There are always ups and downs in anything you do, especially in a startup. I think one of the biggest challenges we faced was the exact hardware-software boundary. What is it that you want in hardware? What is it that you want in software? One of the greatest assets in our company, in Pensando, is the people. We have amazing software and hardware architects who work extremely well together, because most of us have been together for so long. That always helps when you start to partition the problem. We spent the first year of Pensando, which was basically 2017, when the company was founded, really thinking through this problem. For all the problems we wanted to solve, the goals that were given to us: end-to-end security. Okay, so I want to be able to terminate TCP and initiate TLS connections; what's the right architecture for that? I want to be able to do storage offload and provide encryption of data at rest and data in motion. I want to be able to do compression, these kinds of things. What's the right hardware-software boundary for that? What do we hardwire in silicon versus what do we make programmable, still in silicon, obviously, but through a computing engine? And so we spent the first year of the company really thinking through those partitioning problems. That was definitely a challenge, and we spent a lot of time in conference rooms at whiteboards figuring it out.
Then in 2018, the challenge was taking this architecture, this technology substrate, if you will, that we had built, and executing on it, making sure it was actually going to yield what we hoped it would, that we would be able to provide the services. When we talk about an L4 firewall at line rate that's completely programmable, did we achieve that? Could we do load balancing? Would this P4 processing engine, and the innovations we brought to P4, satisfy all of the requirements put before us? So 2018 was really about execution. And in execution you always have challenges: things are going to go wrong. It's not if, it's when, and then how do you deal with it? I would say the biggest challenge in execution is containing the changes. It's so easy for things to change, especially when you're trying to build a software platform, right? Because it's always easy to kick the can and say, we'll deal with that later in software. But given what we were trying to do, which is build a system that is highly performant, you can't kick that can; you have to deal with it when it comes. So we spent a lot of time doing performance analysis, making sure that all the applications we were building were going to yield the right performance. That was quite a challenge. And then 2019 was kind of the year of shaping the product. Lots of product design. Okay, now that we have this technology and it does the things we want it to do, these services, what are all the different ways we can shape this product? That came after talking to customers for months and months; as you know, Pensando is very much customer-driven, customer-centric. We were fortunate enough to spend a lot of time with customers. And that brings its own set of challenges, right? Because every customer has unique problems.
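The match-action model at the heart of P4, and of a line-rate L4 firewall like the one described above, can be illustrated with a toy sketch. In real P4 the table lives in the hardware pipeline and is populated by the control plane; this Python version only shows the lookup logic, with made-up example rules:

```python
# A toy match-action table: keys are (protocol, dst_port), values are actions.
# In P4, the control plane populates a table like this and the data plane
# matches packets against it at line rate.
rules = {
    ("tcp", 443): "allow",   # HTTPS
    ("tcp", 22):  "allow",   # SSH
    ("udp", 53):  "allow",   # DNS
}

def l4_firewall(protocol, dst_port, default_action="deny"):
    """Match the packet's L4 header against the table; a miss falls through
    to the default action, as with a P4 table's default_action."""
    return rules.get((protocol, dst_port), default_action)

print(l4_firewall("tcp", 443))   # allow
print(l4_firewall("tcp", 8080))  # deny (table miss)
```

The hard part Matus describes is not this lookup logic, which is trivial, but doing it for every packet at nearly 150 million packets per second while the table stays reprogrammable in the field.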
So how do we form this product around a solution that solves quite a few problems and really brings value? Those were the challenges in 2019, which we've overcome now. Obviously we have several releases out already; we've got the ASICs and the chips. It's all there now. So now it's 2020. Unfortunately COVID is here, but this is our year of growth. This is the year we really bring it out into the world with our partners and customers, and show how this technology we've developed will benefit customers over the next year, two years. Francis, I really appreciate the insight there. That discussion of hardware versus software brings back memories for me, lots of heated debates about where that boundary is. That's right, yeah, absolutely. Well, one of the lines we've used on theCUBE many times is that software will eventually work and hardware will eventually break. Yeah. Those trade-offs we always have to look at. Well, Mario taught me something a long time ago. He said that hardware is hard to change, and software is hard to stop changing. Yeah, that's a great one too. All right, so you've taken us through the last three years of the journey; give us a little look at the next three years and where you expect Pensando to be going. Sure. Where I see Pensando in the next three years, as we go through this market transition, is as both a market leader and a thought leader in the next wave of data center edge computing, whether in the service provider space, the enterprise space, or the cloud space, the hyperscaler space. As I mentioned at the beginning, when we were talking about the journey, market transitions of this nature really require understanding the entire stack. If you provide a piece and someone else provides a piece, you will eventually get there, but it's a matter of when. And by the time you get there, there's probably something new.
So time, in and of itself, is an innovation in this area, especially when you're dealing with a market transition like this. We've been fortunate enough to be building the entire system. I mean, we go from the transistors to the RESTful APIs; we have the entire stack. So where I see us in three years is not only as a market leader in this space, but also as the thought leader in what domain-specific processing looks like at the edge. What are the tools? What are the techniques for, as we say, democratizing the cloud, bringing this technology to everyone? Excellent. Francis, it's been a pleasure to talk with you. Thank you so much. Congratulations on the journey so far, and I can't wait to see how things progress going forward. Yeah, we're excited, and I appreciate it. Thank you for your time too. All right, check out thecube.net; we've got a lot of back catalog with Pensando as well. I'm Stu Miniman, and thank you for watching theCUBE.