Hey everybody, welcome to our panel session. Today we are here to talk about OPI, the Open Programmable Infrastructure Project. On Tuesday of this week we launched this as part of the Linux Foundation, so it's the newest project within the Linux Foundation, but a bunch of us have been working on this for a while trying to get it to this point. So what is OPI? Just at a very brief high level, in case anybody isn't aware: OPI is about creating a common set of frameworks and interfaces and software around IPUs and DPUs. In order to be able to grow this ecosystem, we really believe that some pieces of this need to be common and done in a more standard way. We have a bunch of different member companies who have joined. A lot of them are represented here, but there are even more, and then a whole bunch of technical contributors as well. There are different ways to join the project; we'll talk about that later, but let's just start with who all is here. Yes, thanks. So I'm Jan Fischer, I'm with Red Hat, and I've been with Red Hat for about 10 years. This is my first open source project that I'm participating in, kind of working on the formational phases. I lead a couple of work groups within the project: governance, and vision and goals. So if you have any questions later on, feel free to reach out. Hi, I'm Venkat Pulera, Chief of Technology at Keysight. From Keysight, we are really excited about OPI because we want to contribute testing, validation, and measurement solutions and frameworks for OPI, and we can talk in more detail as we go. And Chris Murphy, I work for Red Hat. I'm in emerging tech in the CTO office. I lead the orientation group at this point and participate in a lot of the high-level organization stuff across the project.
Hi, my name is Garth Frige, I'm with NVIDIA and I'm the DPU segment lead for the Americas. And man, we're excited about OPI, basically because we're excited about DPUs. When we think about DPUs, we think about this opportunity to revolutionize the data center architecture with these domain-specific processing elements. It started with the GPU; now it's the DPU, the processor optimized for data movement. If OPI can give us an opportunity to accelerate the adoption and deployment of DPUs in the data center, we're all for it. Hi, I'm Anto, a distinguished engineer at Marvell, specifically in the processor business unit, focusing on DPUs. We want to help work in the ecosystem and build the provisioning life cycle of the infrastructure, the firmware, as well as the offloaded functionality that will run on the DPU. Okay, so this is not a formal session. We're not going to show any slides, so this is here just for the backdrop: this is our website, newly stood up. I think we just want to open this up for questions. We're here to help you understand more, either on the technical side or on the governance side, of why we joined the Linux Foundation. Think about a question or two for us. And while you do so, I may put Chris in the hot seat, because she was very modest about her role in the project. She's the brain trust behind the project; she is the one who started it. And I want her to maybe comment a little bit about what prompted you and how that all began. Okay, fine, yeah. I'm not the only brain trust, by the way. It wasn't just me, but it started with a small number of companies and some thought leaders, where we were looking at the amazing hardware that the vendors were creating in this space. And we thought, wow, there's a lot of potential here. But it's very hard for a platform, for an OS, for a system OEM to think about having to do something for this space in five or six or seven different ways. That's really not sustainable.
Some of the interfaces need to be common in order for the software ecosystem to really be able to handle all of these different solutions. But we still understand that each adapter vendor is going to have their own special sauce, their own different accelerators and hardware capabilities. So we really just wanted to create a couple of different layers. One of them is provisioning and life cycle management. A lot of that doesn't necessarily add value if we do it in different ways. If we can be common, then it makes it easier for users. It makes it easier for the OS vendors. It makes it easier for the system OEMs to know how to integrate these adapters into their systems and into their existing processes. So that's one area where we really feel commonality will help us and make things less confusing and difficult for users. Another layer is an API layer between the applications and the hardware. One reason we want that is because we don't want customers or application vendors to feel like if they invest here, they're only investing in one hardware solution and they're locked into that. We know things are going to change over time. We know a lot of customers aren't going to invest in something if they're locked in; they want options. So we want to create a common set of APIs for the pieces that can be common (some pieces won't be, we understand that, but some will be). And if we can make those common, then the investment that is unique to each vendor's solution on the application side is smaller, which allows more portability and also allows the vendors to focus on their special sauce. So that was really the vision at the beginning. We've obviously convinced a lot of others that this is a good vision, and now we're at the point where we want to go and make it more real. But making it real doesn't mean reinventing everything. We truly believe that there's a lot of existing open source and other projects out there that we want to reuse.
Because while a DPU is a separate subsystem, with its own CPU and its own resources, it has a lot of the same needs as the whole-system CPU stack that we've been working on for years and years. So we do want to engage with other communities and other projects, and then provide the glue for how they might work together for a DPU in a best-practice scenario, and be able to test them together and provide a platform where you can say: on this piece of hardware and that piece of hardware, we've tested these pieces together, we've added this layer that's the glue to hold them together, and now you have a full solution that we can say will work if you stand it up. As part of that, we're evaluating different projects that we're working with. And I just wanted to ask: what other communities are our panelists thinking of that we might interact with, that we might look at developing a relationship with and/or including in the project? If I can jump in on that. Oh, sorry, do you have a question? Mike? Okay. All right. So I think you answered my question, but just to make sure we clarify what's going to happen here: is this an open source reference, or is it a standards-building exercise? We are not a standards body. We do not want to be IEEE, and we also realize that if we tried to do that, maybe in six years we might have something. So when we say standard, we don't mean standard in the formal sense of creating a standard, but more in the sense of something common, from a reference-architecture point of view. So absolutely, that's a great question and a great clarification. We're not trying to create a signed-off standard, but more a common platform and way of doing things. And one of the beauties of doing this in open source is that if we provide a reference platform, you will be able to take our reference platform.
And if there are pieces you then want to change and tweak, you can do that, right? It's just a common base to start with that'll allow the ecosystem to grow, and then people can focus just on the piece that they want to differentiate on or, you know, switch out and make different. So if I can jump in on that. I am also the representative from Marvell for O-RAN, the Open Radio Access Network Alliance, as well as the SPIFFE/SPIRE CNCF projects for workload and node attestation. And I've been running between these organizations' meetings for the last two months or so. I'm trying to push, for example, that OPI should look at SPIFFE and SPIRE for node and workload attestation. O-RAN is already interested in OPI and the DPU in their radio unit, distributed unit, and the whole security layer. So, you know, we are trying to serve as the foundation but not necessarily a full SDO (standards development organization), if that helps. Yeah, so Anto actually brought up a good point here. If I may, I want to do a quick show of hands to understand what brought you into this room today; we're trying to understand what you do and what you may be most interested in. How many folks here work for vendors? Hardware? Software? Okay. How many folks here would consider themselves potential user customers of this project? Okay, a couple of folks. And how many people are associated with other open source projects, in the Linux Foundation or elsewhere? Okay, quite a few folks. So that gives you a little bit of background on who's here and why, possibly. Here's a question in the audience. Thanks, John. I wanted to ask: how do you all envision end users participating in this community? I assume that that's an important goal. But if I'm someone who wants to use DPUs, or a set of DPUs, what's in it for me to get involved with this? Great question. Jumping off from the SDK: we have an SDK that you can leverage.
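For context on the SPIFFE/SPIRE mention a moment ago: a SPIFFE identity is just a URI of the form spiffe://<trust-domain>/<workload-path>, which SPIRE then attests and issues to workloads. A stdlib-only sketch of pulling one apart (illustrative only; real code would use the SPIFFE libraries, and the example ID is made up):

```python
from urllib.parse import urlparse

def parse_spiffe_id(spiffe_id: str):
    """Split a SPIFFE ID into (trust domain, workload path).

    Raises ValueError if the string is not a spiffe:// URI.
    """
    parts = urlparse(spiffe_id)
    if parts.scheme != "spiffe" or not parts.netloc:
        raise ValueError(f"not a valid SPIFFE ID: {spiffe_id}")
    return parts.netloc, parts.path

# A hypothetical identity for a firewall workload running on a DPU:
domain, path = parse_spiffe_id("spiffe://example.org/dpu/firewall")
```

The point of an identity shaped like this is that the same attestation flow works whether the workload runs on the host CPU or down on the DPU's own cores.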
And I'm talking to my boss later today about possibly building layers on top of that to support some of the applications, because the SDK is more about system and firmware. If you're going up to layer 4 through layer 7 networking, for example, or even all the way to layer 7, where specifically SPIFFE and SPIRE come into play, we can help with extensions of some form. And I'll pass it to my colleague, who can talk about DOCA. If you're a customer and you're developing applications for a heterogeneous environment, then it's important, because you want to invest in that application development and you want that application to be portable across DPU vendors. Some of our customers want that; some of them don't. We have other tools for that. But for me, from an end-user perspective, that's the real value. And what we're doing is pushing in the lower-level drivers, where we're opening up the APIs associated with our SDK, called DOCA, and we're pushing these into OPI. You can create a common API shim layer so that you can develop those applications and interface in a common way across DPU vendors. I think that's the real value from our perspective. A couple more things. When you want to get involved, just looking at the title itself, you can see it's fairly broad. We're not expecting everyone to be a C programmer, or C++, Python, whatever else, to come in and contribute. You could have use cases. You could be an end customer. So we do want you to come in and just give us the problem and say, hey, solve this problem. We want to pick up the most valuable problems, the ones that are useful for you. It could be at that high level: I just throw you a problem. Or you could come down and say, I can help you look at multiple different ways of implementing a piece of functionality (each vendor does it differently), and I am an expert in APIs or interfaces and things like that, and here is how we bridge to the common interface that we want to use.
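To make the shim-layer idea above concrete, here is a minimal sketch of an application coding against one vendor-neutral interface with swappable vendor backends. Every name here is hypothetical; these are not actual OPI or DOCA APIs, just the shape of the portability argument the panelists are making:

```python
from abc import ABC, abstractmethod

class DpuBackend(ABC):
    """Hypothetical vendor-neutral interface applications code against."""

    @abstractmethod
    def offload_flow(self, src: str, dst: str) -> str:
        """Program a flow rule into the DPU's accelerated data path."""

class VendorABackend(DpuBackend):
    """Stand-in for one vendor's SDK binding (e.g. something DOCA-like)."""
    def offload_flow(self, src, dst):
        return f"vendor-a: offloaded {src}->{dst}"

class VendorBBackend(DpuBackend):
    """Stand-in for a different vendor's SDK binding."""
    def offload_flow(self, src, dst):
        return f"vendor-b: offloaded {src}->{dst}"

def deploy(backend: DpuBackend) -> str:
    # The application logic is written once against the common API;
    # only the backend object changes between vendors.
    return backend.offload_flow("10.0.0.1", "10.0.0.2")
```

Swapping VendorABackend for VendorBBackend without touching deploy() is exactly the "invest once, stay portable across DPU vendors" point being made here.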
Or you may be an expert in storage, and you are really the guru on how to create that interface. So you could come and help us with implementations. We have a broad set of requirements and a bunch of groups; it's not a monolithic thing. We have groups that meet and tackle different problems. So please come look at the website. To add to the DOCA and common shim layer discussion: in O-RAN, we are looking at something called the hardware abstraction manager with the accelerator abstraction layer, which is essentially the O-RAN shim, so we can work on that. End users are already engaged there; O-RAN is going full speed ahead. The other application area is automotive. I have done some abstract submissions for security on DPUs inside the zonal gateway and the central gateway out to the far edge. So there are many, many ways to get engaged. It depends on what your project is going to look like. We have an excellent track record of working with customers to make sure that your projects are successful. And we do specifically have a use cases subgroup already, and we would love more customer input to that. We already have some customers engaged who are saying these are the use cases where we want to start. We have a lot of work to focus on and it's a huge ocean to boil, so having customers show up and say this is what we're most interested in as a starting point is a really, really important input for us. So we definitely encourage customers to join us for that. This is probably a good segue to talk about how you can engage with us and what you can do. We do have a GitHub set up, and it has representation for all of our working groups. You can explore it and see where you could contribute. The names of the leaders of the groups are listed, along with ways to contact them and a short description of each work group that exists. It is absolutely not too late to join the project.
We just went through the formation phase, as the Linux Foundation calls it, and became a part of the Linux Foundation. There are ways to get included and there are ways to influence this project, whether you want to do that as part of the governing board with a premium membership or otherwise. Today we are definitely looking to see who can join our groups and represent other categories of interested users, like you heard today, from various different verticals: automotive, edge networking, IoT, etc. We are definitely looking to have engagement across the board, starting from the technical working groups that you see, or on the more outreach and orientation side of the project, making sure that word of this project gets spread around. I think that's probably a good time to see if there are any other questions. Any questions online, by chance? Okay. Well, speaking of the use case subgroup, there's a POC subgroup, and maybe to give you guys some ideas (this is all very nascent), there's one right now on distributed next-generation firewalls. So if you're interested in cybersecurity and how DPUs can help enhance the security of data center infrastructure, this is a great POC group to join. The goal, I believe, is to capture some of the best practices in the world. One other way you could help, since some of you are already part of other Linux Foundation projects and other open source projects out there, is to bridge us with some of the existing work, if you see something that overlaps that we could use. And each of us, representing our companies, is also part of other groups, right? We are actively trying to engage with them, bring them on board, and interface with them, and I can talk about a couple of them, right?
So, if you are in the SONiC/SAI ecosystem (and as you've seen, Google, Facebook, and a lot of others have joined it), there's a lot more work going on there. SONiC has just moved to the Linux Foundation; that's going to make it even more appealing to a lot of us, right? One of the DPU projects happening under SONiC is called DASH, Disaggregated APIs for SONiC Hosts (too many acronyms, right?): how do we take DPUs and use them in a specific use case, like an appliance form factor, or what might become a smart ToR? There's some good work going on there. Keysight is part of that; we are providing a testing framework there, and some of that learning we want to bring here. Likewise, you may be part of other groups; we want you to bring that in, and we would like to use it to get ahead and go fast. Otherwise, like Chris said, it will take seven years to come up with something, and we want it to be seven months, right? And there are other groups, like PNA, because one of the things we're talking about is programmability. On the data plane side, on the switching side, there is P4, there's eBPF, there's a lot of programmability work going on. So we're also engaging with P4 as well as PNA, the Portable NIC Architecture; there's a NIC subgroup under P4 we want to engage with. We're already engaging; that's the kind of work that we want to bring in. One more piece of work that is happening, under OCP, is more like functional acceleration: there is an OCP NIC group that I think just got launched. I see good overlap, which is good. At the same time, OPI has a little more programmability to it, because DPUs are bringing this new functionality: statefulness, lots of flows, and things like that. So there's more we can really work with them on; they are looking at functional offloads, and anything that happens there we would like to engage with.
If you're part of that group, please talk to us; otherwise, we are actively engaging. There's one more I have seen (sorry, I'm going on, so stop me when you have a question), because there are lots of groups. OPI is covering a broad set of infrastructure, and we would like it to be that way, while we focus on specific use cases, the more flexible ones. The last one, and I'll stop after that, is something called OpenSnappy, I think. They are looking at (I may be mischaracterizing, so correct me if somebody knows more about it) more of how you do workload placement. Because once you have this broad set of programmable functionality, you'll have these hybrid, distributed capabilities, so how do you place workloads? There is some very good work happening there that will be very useful for us, and hopefully we'll engage with them; this is about workload placement that suits the infrastructure. And it becomes more complicated, because when something is programmable it becomes whatever you want it to be, which takes the problem to the next level. So we have some serious challenges and problems we need to solve; at the same time, we can definitely learn from and use some of these other groups. Just one more project I want to pitch for: OpenTelemetry. So far we have decided to settle on OpenTelemetry as our way to collect metrics and allow a consumer of those metrics to get at the DPU data. I want to pitch that because the OpenTelemetry people were very interested on Monday when I mentioned that OPI was looking to adopt OTel. So maybe this is a good segue; I don't know how closely you were watching or following the announcement, so maybe we just do a quick recap of who actually joined, and who the interested parties are that, for a variety of reasons that have nothing to do with the DPU, will be joining later. So the founding members are Marvell, NVIDIA, Red Hat, Keysight, Intel, Dell, and Tencent.
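Returning to the OpenTelemetry point for a moment: the shape of what's being described is a periodic reader that collects counters from the DPU and hands them to an exporter for consumers. A stdlib-only toy of that pipeline shape is below; a real deployment would use the OpenTelemetry SDK and its exporters, and the counter names here are entirely made up:

```python
class DpuMetricsReader:
    """Toy periodic reader: pulls counters from a collect callback and
    hands them to an export callback, mimicking the collect/export split
    of an OpenTelemetry periodic metric reader."""

    def __init__(self, collect, export, interval_s=10.0):
        self.collect = collect        # () -> dict of metric name -> value
        self.export = export          # dict -> None (would be OTLP in practice)
        self.interval_s = interval_s  # how often a real reader would fire

    def run_once(self):
        # One collection cycle; a real reader would loop on a timer.
        self.export(self.collect())

def collect_counters():
    # Hypothetical DPU counters; real values would come from the device.
    return {"dpu.rx_packets": 1024, "dpu.offloaded_flows": 37}

exported = []
reader = DpuMetricsReader(collect_counters, exported.append)
reader.run_once()
```

The design point is that the consumer side only sees OpenTelemetry-shaped metrics, so it does not need to know which vendor's DPU produced them.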
Tencent didn't make it through their PR and legal approval, so they were not in the press release, but they have since committed to the Linux Foundation. We have also definitively heard from AMD; that's in the making, so I can't comment on when and how they will be joining. But if you go and look at the list of committees on the GitHub, and at the proceedings of our inaugural event (basically a virtual event where we all got together and listened to why people would be interested in joining the project, or why they would need a project like this), you can get there by going to About on the website, where there is a video recording. You can see that we actually interacted and tried to pull the community in. We do have interest from Google in participating; we have Verizon and Comcast actually joining various groups that we have; we have representatives from Canonical and SUSE, and a couple of folks from VMware. So if not immediately, this is going to be a pretty good representative body of who's who in the industry from the vendor standpoint, and we also certainly hope that customers will see this through and will jump on board and help us make this a truly open community. And to add to that: at the launch on Tuesday (if you're wondering about the conflicting schedule), Dell and F5 were there, so they are founding members as well and heavily involved. I forgot about F5, but they were one of the first to actually join the project, and they were working very closely with us. There were a couple of companies at the beginning that we were talking to, and F5 was one of them, around distributed firewalls on the DPU.
Okay, so the question is whether this is FPGA-focused, so I'll answer and you can chime in. A while back we had servers; we had hardware, which is CPU plus OS, and then applications on top. Then there were some things people wanted to accelerate, so they started putting them into the NICs, called functional NICs, and those became SmartNICs when they started putting in FPGAs, so that you can program them and change the functionality. But what we have evolved to is something called DPU-based NICs, and that's what we are talking about. Maybe I should let the vendors talk more about it, but as a user this is exciting, because there's new functionality coming into the picture: a lot of state, you can track lots of flows, and you can do things that switches and routers used to do, like flow-based firewalling or load balancing, WCCP if anyone is old enough to know it and have worked on it. These kinds of things used to be part of the routers and switches, and they went away because of the statefulness and the speeds and feeds. That level of functionality is coming back in the IPUs and DPUs. (I spent maybe 20 years at Cisco, sorry for that.) What is an FPGA? This kind of gets into the semantics of what a DPU is: how do you differentiate a DPU from a SmartNIC, and a SmartNIC from a standard NIC? We may even have differing opinions on that. From our perspective, SmartNICs introduced this idea of an accelerated data path: we're accelerating those packet processing tasks that are super heavy on the CPU, but the control path still remains on the x86 host, so there's still a dependency on the host. Now, with a DPU, whether you're enabling it with ARM cores, or with an FPGA, or whatever, the idea is that you're able to accelerate the data path, you're able to offload the control path, and you create some independence from the tenant domain versus the
infrastructure domain. And by doing so, you end up actually being able to air gap: you're air-gapping your tenant domain from your infrastructure domain, and so you're bringing enhanced security. Somebody's DPU could be based on an FPGA; somebody's DPU can be based on ARM cores; somebody's DPU can be based on whatever processing element they may use. The idea is offload, accelerate, isolate, and you can have this sort of independence from the host. That's our perspective. Early in the project, one of the first groups that we formed was the minimum requirements group, and the sole purpose of that group was to define what we are going to consider a DPU and IPU. When we got to the point where enough folks were participating, one thing emerged for sure, and Garth highlighted it: it's the independent processor that is separate, air-gapped, with a trust line between the host machine and the actual DPU or IPU. How that core is implemented is, to me, just a technical detail. It could be a soft core in an FPGA; it could be any particular vendor implementation, any ISA that you want, as long as it's programmable using the parts of the OPI project, and we can address that through the APIs. Actually, I was going to say something to this. First, the separation of concerns and the offload of certain control plane and data plane functions play very nicely into some of Red Hat OpenShift, Wind River Cloud Platform, and VMware Telco Cloud Automation, because all of those services and customers have the concept of tenants versus core versus infrastructure. I'll let Chris come on to that in a second, but back to the original question: I think maybe what they were getting at is, is this going to be code for programming FPGAs? And the answer is no. When you look at the title, Open Programmable Infrastructure, I can see how people might think this is a project for programming FPGAs, and that is not what this is. There are absolutely other projects
for that. If there are people who want that, Brock from Red Hat could engage and tell you about that stuff as well. But yes, on the tenant versus infrastructure layer, absolutely. The answer there: Red Hat is very much looking at cluster designs for DPUs where we have one cluster that is your tenant cluster, and a separate cluster that your DPUs belong to, which is your infrastructure cluster. Then you can have a setup where the admins for your infrastructure only have access to the DPU layer; they don't have root-level access to that tenant layer. So for customers: yes, even our system admins can't access stuff in there and potentially have root access to your data. And vice versa: if you have a tenant workload that goes rogue, or somebody is able to exploit a vulnerability somehow and get into your CPU layer, they can't then broaden out to your entire cloud, because they don't have access to that infrastructure layer. That security air gap is so important, right? We've seen so many things across the last five years where, I think, most of us in this business understand that being able to isolate, so that even if there is a vulnerability and somebody gets in they can't get into everything, is huge. To amplify on that: one of our internal focuses is security, especially in the area of SBOM, GitBOM, and the Open Source Security Foundation, because we understand that in the past you would log in and download the SDK and firmware, but now it's not so much a single admin doing that, or even multiple admins; it's automated, and it has to be verified, tested, and all that. So that's another area I want to bring in: security. And the other thing we forgot is that we have to include the big semiconductor company, Intel. I mentioned Intel, of course. Can I just add one thing, and then we'll go back. It's telling that we have so many names that we're forgetting some; I think you can see the adoption part of it.
So, going back to this FPGA question: I think things are going to get worse before they improve. We'll soon have chiplets, where there's a bit of FPGA, a bit of this and that, and something I hear called a sea of cores, where you put down a bunch of cores. So when you say DPU or IPU, more terminology will come out and there could be more confusion. I think we are timely in launching OPI, because no matter what you have, we would like to have a common interface: it is programmable, and you expose your programmability. And it's going to be tough. I think as an industry we'll really have a lot of difficulties, especially when you want to develop applications on top. So it's going to get a little muddy before it gets clear. So, a quick question. I used to work for VMware and now I work for Dell, and I'm looking into Kubernetes and containers. I see that every time there's a new abstraction layer, right: Kubernetes has abstractions, CSI on the storage side, of course other things, and also on the networking side you have Istio and service mesh. So how does OPI address that? You're getting more and more abstraction; you're pulling the storage and network functions into each and every abstraction. And as I understand it, OPI now separates the whole compute infrastructure away from the whole network; you're abstracting that away. So how does that impact the overall Kubernetes ecosystem, like service mesh, CSI, the whole set of storage services, things like that? I can give half of the answer and pass it to others who are the experts, because I was going around the exhibition hall and looking at everything being discussed.
So how does that impact the level of abstraction? At the node level, somebody wants to do a cluster, somebody wants to do a gateway, somebody wants to do a service mesh; they all came into existence because they're solving some unique problem out there. So I doubt they will go away; the layers, and the sequence of layering of these solutions, may not go away. Programmable infrastructure is supposed to be the solution, but the layering to me adds value; there's a unique reason why you need a service mesh as well as a gateway. So to that extent, the layers stay; as for how we bring it all together, maybe others can speak to that. To jump in: for one of our sandboxes we have things like K3s instead of K8s, and Multus and the Calico CNI. So there are things like that that we're looking to build for a customer, and we're trying to figure out how we can extract some of that to contribute. I came from VMware, and before that Cisco, so this stuff is always in the back of my mind.
I would say that, from a Kubernetes standpoint, when we're talking about doing it as different clusters: it's a whole server, it's a whole subsystem to Kubernetes, so a lot of those same things are going to exist exactly as they do up on your host CPU layer. We're not going to try to reinvent how you abstract that part. We will reuse CNI; we will reuse a lot of those things that exist already. The abstraction part is more about how you talk between those things, between a CNI or a networking or storage layer, and how to program the specific hardware accelerators and features within this new hardware category. So it's not recreating those wheels and reinventing those abstraction layers; it's more hiding the unique hardware features where we can, so that those things can interface in a consistent way across multiple vendors. No, I agree; it's more about where the abstraction resides and how those services are presented to the tenants. It's not about creating something new. We have two minutes, which is probably one minute, so maybe one last question. Or, I know Jan did pitch how to interact with us as a community; I just want to make sure that people understand: yes, your company can join as a paying founding member, but for technical contributions there's no monetary contribution needed. If you're a user, if you're another company who is interested, if you're involved with an open source community and you want to get engaged with us: come. We want you; we want to talk to you. We have Slack; we have options. If all else fails and you can't figure out how to get in contact with us, hit me up on LinkedIn. I lead the orientation group, so I will get you connected. We want you there. There's no commitment in coming to talk to us and starting to think about how we can engage, so reach out. There is still a lot of time to have a huge influence on where we go, so come and interact with us. Very well said. I want to thank everyone here and online
for joining us. Thanks a lot, and thanks to the panel as well.