Hi, this is your host Swapnil Bhartiya and welcome to another episode of TFiR Newsroom. Today we have with us Paul Pindul, outreach working group chair of the OPI project. Paul, it's great to have you on the show.

Good to be here. Glad to be able to talk about the Open Programmable Infrastructure project and what we're doing there.

I would love to know a bit about the history and origin of the project, and also what this project is all about.

Well, I'll give you a little bit of history and then we can move into what it is, who's part of the project, and that sort of thing. The project started about two and a half years ago. I work for F5; I'm a principal architect there. F5, Red Hat and IBM got together because we saw this trend with DPUs and IPUs, that is, data processing units and infrastructure processing units. These were emerging as a new deployment area that we thought we needed to play in. We'd had problems in the past dealing with each vendor's SmartNIC solutions. SmartNICs may have been a predecessor to DPUs, but they were very difficult to deploy and very difficult to transition from one vendor to another. Our goal was to create a standardized framework that we could use to deploy, secure and run infrastructure and applications on these data processing units, and that was multi-vendor across all of the vendors in the space. So that started about two and a half years ago. A little over a year ago we joined the Linux Foundation as a project; we joined in June of last year, I believe. Several members have joined alongside us, many of them providers of these DPUs and IPUs. Our premier members are Arm, Dell, F5 (I'm with F5), Intel, Keysight Technologies, Marvell, NVIDIA, Red Hat, Tencent and ZTE.
And we have several general members: DreamBig Semiconductor, Fujitsu, Hewlett Packard Enterprise (HPE), SolidRun and UniFabriX. So those are the members that are part of the project at this point.

As we're talking about this project, can you talk a bit about the role of DPUs in the modern world, the modern economy, the modern tech stack? What role are they playing, and why are they critical?

A data processing unit is, in many cases, a PCIe card that has a full compute complex on it: compute, memory and storage on the card itself. It has its own endpoint, its own network identity. It's often used to offload and/or isolate workloads from the host that's processing the actual workload. So it's a way to move infrastructure workloads off the compute and isolate them, so that the host can focus on processing the server workloads it needs to process. Those are the two main uses people have for data processing units. Hyperscalers have deployed them, but they've all built their own non-standard frameworks, and we're trying to create standard APIs. We're trying to abstract the hardware so that solution providers can focus on deploying services. We want to create ease of development and ease of deployment for these devices, and we believe doing so will drive efficiency in large computing environments and create TCO savings for the users themselves. That in turn means these devices become more popular, and the vendors benefit from them becoming more popular and easier to use. So we feel like we're creating a flywheel effect: by creating standard APIs, the devices become easier to use; because they're easier to use, people use more of them; the vendors create more of them, and so on. That's one of the background goals of the OPI project.
So it's going to be kind of its own umbrella project within the Linux Foundation, if I'm not wrong?

It is its own umbrella project within the Linux Foundation. Our stated mission and objective is for the Open Programmable Infrastructure project to foster a community-driven, standards-based open ecosystem for next-generation architectures and frameworks based on DPU- and IPU-like technologies. You'll find that same statement if you go read our website; that's our objective statement, our mission statement.

When we look at Linux Foundation projects, and of course in this modern world we don't live in silos anymore, when it comes to open source we always leverage each other's projects and cross-pollinate. If you look at OPI, which projects do you feel you'll be working closely with, or where do you see that the community, the problems and the ecosystem are more or less the same?

There are several that we're working closely with. We're working with the SmartNICs Summit folks; we recently held a half-day tutorial event co-located with the SmartNICs Summit in San Jose, in June of this year. Another group we've been in communication with is the SONiC DASH group. SNIA is another organization we're coordinating with: we have two OPI-sponsored sessions at the next SNIA SDC event, which is coming up in September, I believe, also in San Jose. Several of the vendors that are part of the OPI project will also be presenting at that conference and bringing their OPI experience to their presentations. So those are the major ones we've worked with.
We've also submitted sessions to the Open Compute Project, and we'll be looking at what we can do with them down the road. There have also been very early discussions with LF Edge, another Linux Foundation project.

When you look at this project, can you talk about what you've achieved since its inception? What milestones have you hit? And then we'll talk about what's in the pipeline for the future.

We've focused in a couple of places. We have several working groups, and the ones writing code are in the areas of provisioning and lifecycle management. We've done quite a bit of work there: we've standardized on SZTP (secure zero touch provisioning) as the method for delivering and provisioning these DPUs and IPUs using OPI APIs. Our API and behavioral model group has worked on defining the taxonomy and the schema we're going to use for APIs. They've also been working on several different APIs, starting with a storage API; DPUs are a great way to offload storage management tasks from the compute onto these devices. IPsec, using strongSwan, is another solution we've taken on board. We're working right now in the networking and Kubernetes space on different CNIs and on how we can use a DPU in a standardized fashion to manage CNIs and Kubernetes networking. We've also got a use case working group, which is not so much about code. That group works with what we call our deployment partners, the end users of these cards, who may be tier one or tier two cloud providers at this point. That seems to be who is able to purchase DPUs and IPUs right now; they're all being funneled to tier one and tier two cloud providers.
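The SZTP provisioning flow mentioned above follows a well-defined pattern (RFC 8572): a device boots, fetches signed onboarding data from a bootstrap server, verifies it against a factory-provisioned trust anchor, and only then applies it. Here is a minimal, self-contained simulation of that pattern. The function names and the HMAC stand-in for the real signature check are illustrative assumptions, not OPI code; real SZTP uses HTTPS, X.509 certificates and CMS signatures.

```python
import hmac
import hashlib
import json

# Hypothetical trust anchor baked in at manufacturing time (illustrative only;
# real SZTP verification is certificate-based, per RFC 8572).
TRUST_ANCHOR_KEY = b"factory-provisioned-secret"


def fetch_onboarding_info():
    """Stand-in for contacting an SZTP bootstrap server over HTTPS."""
    payload = json.dumps({
        "boot-image": {"os-name": "dpu-os", "os-version": "1.0"},
        "configuration": {"mgmt-vlan": 100},
    }).encode()
    signature = hmac.new(TRUST_ANCHOR_KEY, payload, hashlib.sha256).hexdigest()
    return payload, signature


def verify(payload, signature):
    """Check the onboarding data against the device's trust anchor."""
    expected = hmac.new(TRUST_ANCHOR_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)


def onboard_dpu():
    """Fetch, verify, then apply onboarding data; refuse anything unsigned."""
    payload, signature = fetch_onboarding_info()
    if not verify(payload, signature):
        raise RuntimeError("bootstrap data failed verification; refusing to onboard")
    info = json.loads(payload)
    # A real agent would now install the boot image and apply the configuration.
    return info["configuration"]
```

The important property for a multi-vendor framework is that the device never applies configuration it cannot verify, so any vendor's DPU can be drop-shipped and brought up unattended against the same bootstrap infrastructure.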
So we're working with them to determine what use cases they're looking for. Those are the three major working groups. We've also got a developer platform and lab group that is building a way to test and work with these solutions.

You talked about the founding members. Can you also talk about the kind of community you want to build, and the kind of growth you want to see, or are already seeing, in the project?

We are actively looking for new members to join the project, come on board with us, help define these frameworks, and actually write the code that makes them up. We're reaching out to folks in the industry that we know, and for folks who may be hearing about this for the first time from this webcast, I would love to have them reach out to us and figure out how they can contribute.

What kind of folks would be ideal for this project?

I see three separate groups that are interested in this project. The first are the vendors themselves, the vendors making these cards. Those vendors come together and work day by day, side by side, on how to create a standardized framework. There are pieces and parts of using a data processing unit that are common across all of these vendors: they all have to be deployed, and there's a whole list of things they all have to do. It makes sense to pool their efforts, define that once, and then have each of them use those methods within their own stack and provide their own secret sauce on top. So, back to your question.
Yes, vendors are one of the constituencies that make up this community, and we're always willing to get more of those. The second group are what I'll call the integrators: folks that take a DPU and integrate it with something. Maybe it's Fujitsu, maybe it's HPE or Dell or one of the other server vendors that put these cards in their servers. They have a vested interest in simplifying how they deploy different DPUs and IPUs in their system hardware, in making sure those interoperate well together, and in not having to build bespoke deployment programs for each and every vendor's card. Operating system vendors, Ubuntu, Red Hat and others, are here too: how do we provide the drivers and the operating system calls in the kernel that use these DPU cards in a standardized way, and how do we get them on board? And ISVs like F5: how do we take a firewall, a web application firewall or an API gateway protection and put that on a DPU, so we can offload it from the host? All of those make up that middle group, the integrators. The third group are the end users, the deployment partners as we call them. They're cloud providers at this point, tier one and tier two, and enterprises. How do they take these devices and easily put them into their infrastructure? And say they run into a situation where they can't purchase any more of one vendor's cards, for whatever reason, and want to swap to another: how do they make that transition seamlessly? So those three groups, the vendors, the integrators, and the deployment partners, are the three main groups that make up the community and that we wish to welcome to join us in this endeavor.
Can you talk about the software stack you folks are building? What kind of pipeline do you have, and what is already out there?

One of our goals, being an open source project, is to reuse the open source software that's already out there; we don't want to reinvent the wheel if we don't have to. We're mostly using gRPC for the APIs and the way they interact. And as I spoke of earlier, we've got several different integrations in the works. We've got provisioning work that we've done with SZTP, secure zero touch provisioning. We've got work around IPsec and the strongSwan implementation. We've got work around storage, with demos showing how to offload storage management tasks from the host to the DPU. Those are the various integrations we have in the works right now. We've got a full simulated build environment that anybody can download to test out the code we've written so far, and we've got fairly good code coverage on most of the open repositories we have on GitHub. Next on the roadmap we're working on networking, so some networking APIs, and we're also working on deployment: how to deploy a workload onto a DPU. That's one of the next things we're working on right now.

Paul, thank you so much for taking the time out today to talk about this project, its scope, and how folks can get involved. Thanks for all those insights, and I would love to chat with you again when there are new updates to the project.

Thank you. Thank you so much for the opportunity to come and talk about OPI.
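To make the gRPC-style API pattern concrete, here is a minimal sketch of the shape a vendor-neutral storage call might take. The service, method and message names are illustrative assumptions modeled on the general idea described above, not the project's actual protobuf definitions; a real client would use stubs generated from the project's .proto files and talk over a gRPC channel to an agent running on the DPU.

```python
from dataclasses import dataclass, field

# Illustrative message type; in a real system this would be a generated
# protobuf message, not a hand-written dataclass.
@dataclass
class NvmeSubsystem:
    nqn: str                  # NVMe Qualified Name identifying the subsystem
    model: str = "OPI-demo"


@dataclass
class StorageService:
    """In-memory stand-in for a gRPC servicer running on the DPU."""
    subsystems: dict = field(default_factory=dict)

    def create_subsystem(self, req: NvmeSubsystem) -> NvmeSubsystem:
        """Provision an NVMe subsystem on the card (idempotency check included)."""
        if req.nqn in self.subsystems:
            raise ValueError(f"subsystem {req.nqn} already exists")
        self.subsystems[req.nqn] = req
        return req

    def list_subsystems(self) -> list:
        """Return the NQNs of all provisioned subsystems."""
        return sorted(self.subsystems)


# Hypothetical usage: the same client code works against any vendor's agent.
svc = StorageService()
svc.create_subsystem(NvmeSubsystem(nqn="nqn.2023-01.org.example:subsys0"))
```

The point of the standardized API is in that last comment: because every vendor's DPU agent implements the same service definition, swapping cards means pointing the client at a different endpoint, not rewriting the provisioning code.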