Hello everyone, and welcome to another episode of OpenInfra Live. Today we have a packed episode with a lot of great speakers from different OpenInfra members, including Canonical, Intel, Fungible, and the Open Compute Project, and we are going to be speaking about composable infrastructure and the future of data centers. This is, for me, a super interesting topic, and one that is very, very relevant for our infrastructure world. I am Jonathan Bryce, executive director of the OpenInfra Foundation, and I'm going to be hosting today.

Before we get directly into the conversation, I want to remind you: we have a summit back this year, in Berlin, a city where we held the Summit in 2018. Very excited to go back. This year we will be in Berlin, June 7th to 9th. We just recently published the summit schedule, so you can go online to openinfra.dev, check out the schedule, and see the 100-plus sessions across 11 tracks that we have scheduled, along with all of the great topics and speakers who are going to be there. We have some great users who are going to be speaking, like Bloomberg, BMW, Kakao, China Mobile, and many others. We also have just a few sponsorships left; those have almost sold out. And we have tickets available if you would like to register to attend. Ticket prices are going to increase in just a couple of weeks; May 16th will be the next price increase. So if you're interested in attending, this is a great time to go to openinfra.dev and register.

I also want to thank all of the members of the OpenInfra Foundation. These are the companies who support our mission to build the next decade of open infrastructure, and our OpenInfra Live episodes are brought to you by our OpenInfra Platinum, Gold, and Silver members. So a huge thank you to all of our members for their support. And if you're interested in joining the Foundation as a corporate member or an individual, you can go to openinfra.dev/join to learn more.

So with that, I want to go ahead and dive into our packed lineup for today. We will have first a set of presentations from our speakers, and then we'll go into a discussion. So feel free to drop comments or questions in the chat wherever you're watching the stream; we'll be able to see those, and when we get to the Q&A portion, I will pull out any of those questions and throw them to our panelists. To start off, we have a presentation from the Open Compute Project, and we will be hearing from the CEO of OCP, George Tchaparian. So let's go ahead and hand it over to George.

Thank you very much, Jonathan, and hello everyone. So let me start by explaining a little bit about what OCP is all about. Next slide, please. Okay, OCP's value proposition: OCP was formed about 10 years ago, and by taking hyperscale innovation, led by the community and for the community, OCP bridges this as a foundation, as a platform of collaboration, to make sure that all these innovations from the hyperscalers are moved into deployment through shared innovation. It's community-driven standardization of those technologies, shared best practices, and most importantly, rapid adoption, a first-to-market kind of push, and the transformation of the data center, the businesses, and the solutions accordingly.
So OCP was formed to deliver the scale the data centers wanted at that time from a hyperscaler point of view: scaling out instead of up, and driving costs down as effectively as possible. Next slide, please. OCP has been at this, like I said, for 10 years, and we started driving from server disaggregation to storage disaggregation, networking disaggregation, then into optics, and now into chiplets. This is where this whole thing is coming together, and OCP is really pushing lately to open up die-to-die interfaces and then stitch those dies together into a solution for chips, going even from the physical layer and data layer down to the link layer. What we're trying to do is put together a solution as certified hardware, so members of the community can take it and build their solutions on top of it.

And the question comes: how is all this open source software really being integrated? If we go to the next slide, please. Yes. If you look at this slide, it shows how open source software is integrated on top of all the hardware infrastructure that we're putting together. If you map it, you take the three foundations and how they are working together to build the solution for automation. This is a snapshot, a very rudimentary way of putting it together, but it shows how the OSI layers are applicable, where every foundation plays its role, and how they interface to build the solution. OCP is in the physical and data link layers. The Linux Foundation comes in where the network operating systems are. And then, of course, the OpenInfra Foundation takes it all the way to the application level and above. So the question always comes: how is this coming together? OCP is providing the foundation and the collaboration for open, certified hardware, and we're moving it from the hyperscalers all the way to the enterprises and everything in between. Okay. Next, please.

And what's our mission? If you look at our mission and our motivation, it's basically promoting modularity and disaggregation. In this context, it's all about chiplets and interfaces: like I said, opening up even the dies and the interfaces between the dies, stitching them together, modularizing for scalable architecture, and then disaggregating to make sure it's customized and tailored to the specific use case we are driving, all the way down to the chip level. And then we're fostering collaboration across all the open communities and building the bridges that are necessary. This is where we ask you to participate with us, with OpenInfra, the Linux Foundation, or the Open Compute Project: come in and help us with the talent of the industry. We're inviting you to come in and be part of this disaggregation journey that's really taken off, to come in as a volunteer to help us, and then to benefit from it. My previous company, the one I was leading, was a tremendous beneficiary of this, and I see how this disaggregation, this open transformation of where the industry is going, now all the way down to the chip level, is really helping everybody. It's not only cost reduction; it gives the software and the hardware the power to build your own tailored use case, and to do it with modularity so that you can scale and also reduce cost.
Now, the last portion of what OCP does is to integrate, validate, and certify these solutions, and we're building an OCP community lab where we put all of this together and certify the hardware for your use. And finally, the next slide, please. The question was asked: how do you get involved, how do you really come into OCP? It's very simple. You can join and really start contributing, and you can attend all our summits, webinars, trainings, and engineering workshops; they're all free, and you don't have to be a member to do that. But when you come in, you can start driving the projects if you want. You can volunteer and become part of the industry's talent. We have an incubation committee, we call it the IC, where we vet all these projects and put them together. You can contribute to the specifications and standardization, and also drive products into OCP's marketplace when you come in with your solutions, taking it as a channel to the industry. That's also free: you can come into the marketplace, we put your solutions in the marketplace and push them out to the industry, so you can use it as a channel to take advantage of this whole disaggregation journey with the community. Again, for the community, with the community, taking hyperscale innovation from Google, Facebook, Amazon, Microsoft, all the key players, together with all the other solution providers: Intel is on our board, AMD is now coming in, Fungible is coming in. All the industry leaders are coming in to pull this together, at the chiplet level and all the way to the final solution. So thank you very much, and if you have any questions, please let me know. You can also reach us through OpenCompute, and I'll be happy to help you. Thank you very much, and thanks to you all at OpenInfra for inviting me. Appreciate it.

Yeah, George, thank you for giving us the overview there. Now, I know that you are pulling double duty and have a keynote that you're going to have to jump to before we get to our discussion time, but I didn't want to let you off the hook without any questions before you leave our show. In the OpenInfra community, we have people who are often responsible for many different layers of infrastructure and are maybe not always familiar with the deep details of hardware, or the deep details of software, or of user applications. So I wondered if you could explain a little bit more about the concept of chiplets. I think this is a really interesting design element that's emerging on the hardware front, but I'm not sure that everybody has heard about it. So I wanted to get you to give us a little more detail on the concept of chiplets and where they play into the hardware landscape.

Sure. As we have disaggregated the hardware: think about a motherboard, and on that motherboard you have these big honking chips. As the technology evolved, the chips got bigger and bigger and bigger, and it came to a point where it became really hard to modularize the hardware, right? Hardware was being suffocated.
So the chiplet concept is taking these big honking chips and, like we said, disaggregating them even at the die level, opening up their interfaces, and bringing those specific use cases, even within the chip, back together, stitching them back together, even pushing from the physical layer up to the link level. For example, OCP is now driving what I would call a standard; it's called the Open Domain-Specific Architecture. Please go to our website, and you can learn a lot about what we're doing with that and how it is evolving with other consortia like UCIe and so on. It's all coming together to do what we said: getting to the chip level, die to die, and creating those interfaces so that it is modularized and you can really put it back together the way you want to. I hope I was able to explain it.

I think that's great. It's super interesting, and I love seeing how these kinds of things evolve in our industry. So thank you very much, George, and thanks for joining us today. Next up, we will be hearing from the co-founder and chief development officer at Fungible. Fungible is also going to be speaking in the keynotes at our OpenInfra Summit in Berlin, talking about next-gen data centers and DPUs. And today we have the co-founder of Fungible joining us, Pradeep, who is going to walk us through some slides here. And I think, Pradeep, if we can, there we go, great. Good to see you. And I will hand it over to you now.

Thank you, Jonathan, and thank you for the invitation to speak at the OpenInfra Summit. Hello to everyone in Berlin and, as I understand, to people all around the world. It's a pleasure to talk about disaggregated, composable hardware and data centers. Next slide, please. So this is what I'll be speaking about. Next slide, please. A little bit about Fungible: Fungible was founded about five years ago, and it's focused on improving, in fact cloudifying, data centers at all scales, including medium-scale and small edge data centers. This slide gives you a little bit of perspective on the evolution of computing over many decades. Way back, people used to glue vacuum tubes together to make a computer. Long since then, microprocessors were invented, and things consolidated to one type of CPU, the x86, and that's given us a great ride for the last 40 years or so. Then we went through this phase where you had enterprise-scale hyperconverged systems. And, I would say about 15 years ago, the hyperscalers invented this idea of scale-out, which is basically the idea that I have a small number of server types and many, many instances of these server types connected to the same network, typically an IP Ethernet network. Now, where we see the world going is from this compute-centric era to a data-centric era. In a data-centric era, you will have heterogeneous compute elements. You won't only have x86 servers; you'll have x86 servers, ARM servers, GPU servers, FPGA servers, and all of these things need to be disaggregated and then composed so that the infrastructure is far more efficient. Now, the bell curve shown here shows you where the industry is: clearly, the hyperscalers are leading this effort, and the enterprises are typically lagging behind in this evolution.
Now, we see this developing extremely rapidly, because without full disaggregation the efficiency of data centers suffers: enterprise data centers, for example, are typically operating at 5% to 8% utilization, which is very, very low. Even my incandescent light bulb is more efficient than that. If you go to the hyperscalers, that utilization rises to 30% to 40%. Both of these can be improved quite substantially with further disaggregation and pooling of the resources in a data center, and the resources I'm talking about are things like storage, GPUs, and FPGAs. Next slide, please.

So there are many, many challenges that still remain in data centers, and these challenges are especially acute in enterprise data centers, though they exist in hyperscale data centers too. Number one is cost and power constraints; actually, power even more than cost, because it's a driver for many, many things. It's almost impossible to build a single data center and feed more than 40 to 60 megawatts into it, and power, given the focus on green, is becoming more and more dear. Now, add to those constraints the fact that there are demanding new applications; what I'm speaking about here is machine learning and analytics. These applications are demanding because they generally need a very large amount of data, and the network needs to be extremely fast in order to feed the GPUs and CPUs which are doing the processing. Of course, we also have technology limitations: this wonderful gift that the industry gave us in the form of Moore's Law has started to slow down. Actually, the slowdown was observable way back in 2010, and these days, in terms of per-transistor performance improvements, you're not getting a lot, maybe a few percent. At the chip level you are still getting improvements because the number of transistors increases, but as George pointed out, that also has a limit, so people are going to chiplets in order to improve yields and provide more IO per transistor. And of course, users always have the need for faster response times. These pressures are what create the challenges in data centers.

So what Fungible has invented is a new kind of microprocessor that we call the DPU. Now, I'm sure most of you have heard about this element called the SmartNIC. The SmartNIC was conceived as an adjunct to a general-purpose CPU; in other words, it's always plugged into a general-purpose CPU, and it assumes that all servers are built using x86 or ARM CPUs. Well, the fact is that there are workloads which we call data-centric, and I'm showing you here the characteristics of these workloads: there's a lot of multiplexing, the workloads are stateful, the ratio of IO to compute is relatively high, and of course these are packetized workloads. They represent almost a third of the workload in data centers. This might be surprising, but there was a seminal paper written by Google in 2014 that described what they called a data center tax. What Google calls a data center tax, we call data-centric workloads. Given the prevalence of these workloads, it was very important to invent a new kind of microprocessor, fully programmable, to execute these workloads efficiently. The fact is that neither CPUs nor GPUs can execute these workloads efficiently, and the DPU, as measured, can run these workloads probably 20 times more efficiently than general-purpose CPUs.
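To make those two figures concrete, here is a back-of-the-envelope sketch in Python using the numbers quoted in the talk (a third of cycles as "data center tax", a 20x DPU efficiency factor). The arithmetic and variable names are our illustration, not Fungible's model:

```python
# Illustrative arithmetic only, based on the figures quoted above.
data_centric_share = 1 / 3   # Google's "data center tax": ~a third of cycles
dpu_efficiency = 20          # claimed DPU efficiency vs. a general-purpose CPU

cpu_cycles_freed = data_centric_share                      # work moved off the CPU
dpu_cost_equivalent = data_centric_share / dpu_efficiency  # same work done on a DPU

print(f"CPU cycles freed per server: {cpu_cycles_freed:.0%}")            # ~33%
print(f"Equivalent DPU cost, in CPU-cycle terms: {dpu_cost_equivalent:.1%}")  # ~1.7%
```

In other words, if the quoted figures hold, offloading frees roughly a third of every server's CPU while the DPU absorbs that work at a small fraction of the cost.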
That efficiency is the advantage you get by having a new architecture for this very important building block. So the DPU, in summary, is a purpose-built device which does two things: it executes these data-centric workloads extremely efficiently, and it stitches together all of the servers in the data center by creating a fabric out of standard IP Ethernet networks. Both of those are extremely important for efficient disaggregation and then composition. So next slide, please.

So here's an example of a fully disaggregated data center. Now, I have to underline what full disaggregation means. First of all, the reason for disaggregating is that it allows you to pool the resources. Let me take a very simple example. If you take local SSDs and plug them into a server, those SSDs are very nicely usable by the local CPU, but other CPUs can't use them very efficiently, so these local SSDs become a stranded resource. Typically they will be utilized only about 15% or 20%, and 15% utilization is pathetically low. So it's advantageous to disaggregate storage, put it on the other side of the network, so that you can consolidate it and pool it. The pooling principle is actually a very tried and true principle; it's the reason money exists in societies at all. Now, the thing about disaggregation is that it is feasible only for those devices whose latencies are a small factor times larger than the round-trip latency of the network. So, for example, it is relatively infeasible to disaggregate DRAM, because DRAM latencies are on the order of 80 nanoseconds, while network round-trip latencies, even if the network is extremely fast, are on the order of maybe 10 microseconds. Nonetheless, disaggregation can be applied to all manner of storage, SSDs and hard drives; it can be applied to GPUs; and it can also be applied to FPGAs, very, very profitably. And because GPUs and FPGAs are expensive resources, it's very useful to do that disaggregation.

Now, on the slide, I have to mention that disaggregation and composition are two sides of the same coin. People have talked about composable data centers and disaggregation for the longest time, but there was no efficient way to disaggregate resources until now, at least not over a standard IP Ethernet network; the DPU actually enables that. Having disaggregated resources, you now need software that composes them back together. So what we have delivered at Fungible is composition software, running on standard Linux, which is able to create a virtual data center, we call it a bare-metal virtualized data center, in less than five minutes. So imagine that you're a service provider and you want to offer infrastructure as a service: you can create data centers with the dynamism of literally five minutes, and you can spin resources up and down very, very quickly. This is where the efficiency of data centers comes in: you can now drive utilization well north of the 8% to 10% that enterprises have, and actually improve even the utilization that the hyperscalers have. So that's the main reason: better economics, better reliability, and improved performance. We are able to provide bare-metal performance, meaning performance equivalent to bare metal, even though we're virtualizing the data center, and that is thanks to this new element that we call the DPU. Next slide, please. That's all I had; I'd be happy to answer any questions.
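Two rules of thumb from that talk lend themselves to a small sketch: disaggregation is feasible when a device's own latency is within a small factor of the network round trip, and pooling shrinks the number of devices needed to serve the same demand. The threshold factor and the pooled-utilization target below are illustrative assumptions, not Fungible's numbers:

```python
# Minimal sketch of the feasibility rule and the pooling arithmetic above.
def disaggregation_feasible(device_latency_us: float,
                            network_rtt_us: float,
                            max_overhead_factor: float = 10.0) -> bool:
    """Disaggregating is reasonable if the network round trip does not
    dominate the device's native latency (threshold factor is assumed)."""
    return network_rtt_us <= device_latency_us * max_overhead_factor

NETWORK_RTT_US = 10.0  # a fast fabric round trip, per the talk

for name, latency_us in [("DRAM", 0.08),       # ~80 ns, per the talk
                         ("NVMe SSD", 100.0),  # typical order of magnitude
                         ("hard drive", 5000.0)]:
    verdict = "feasible" if disaggregation_feasible(latency_us, NETWORK_RTT_US) else "infeasible"
    print(f"{name:10s}: {verdict} to disaggregate")

# Pooling: demand served from a shared pool instead of stranded local devices.
per_server_utilization = 0.15   # stranded local SSDs, per the talk
pooled_target = 0.70            # an assumed achievable pooled utilization
print(f"Devices needed shrink by {1 - per_server_utilization / pooled_target:.0%}")
```

Run as written, DRAM comes out infeasible while SSDs and hard drives clear the bar comfortably, which matches the argument in the talk.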
Thank you. Thanks, Pradeep. That is a super interesting concept; it's kind of what George was talking about with the chiplet, except applied everywhere inside of the server and the data center. I've got some questions for you, but we'll hold those until we get to the roundtable portion. So we'll go ahead and move on to our next speaker, a senior principal engineer at Intel, and Intel is an OpenInfra Foundation Gold member, so thank you, Intel. Dan Daly is going to be talking about IPDK. So I'll hand it over to Dan.

Great, thank you. So, related to the earlier presentations, when we talk about composable infrastructure, at some point we need to go and program all of the elements that are inside that infrastructure. And the problem that our customers were running into was that they wanted to program all of these different devices and software elements running in the system, but they all had different programming models, they all had different interfaces, and often they were being put together by different vendors. That made it difficult to take advantage of these new devices that were coming onto the market, because you lose some of the advantage you had previously when it was a homogeneous, all-in-software environment. And so we wanted to come up with some sort of platform that could find some commonality, in the open source, for programming all of these devices. Within Intel itself there are multiple types of devices that can run these infrastructure applications, so we used those as example use cases and tried to come up with a set of interfaces for doing this programming, so that you can put together this composable infrastructure in a system using open source software. So if you go to the next slide.

So today on IPDK.io, and IPDK stands for Infrastructure Programmer Development Kit, we've started a small group of developers who are looking at different frameworks that are completely target agnostic. Within our GitHub, there are no device-specific code bases: there's no one vendor's SDK, there's no code to specifically support a brand-new IPU or a brand-new DPU, what have you. What it is is a set of frameworks for programming at an abstraction level that can take advantage of multiple different types of hardware, to handle the types of use cases where one can take advantage of composable infrastructure. And we want to make sure that all of the use cases we support can run in a software-first environment, where, without having to start with a brand-new piece of hardware or a swanky new kit, developers can, just on their laptops, set up the control planes, start running tests, and start trying out some of these frameworks without anything special to purchase or adopt. So how does this fit within all these other projects? We've just been talking about OpenInfra and about Open Compute; there is also a new organization called the Open Programmable Infrastructure (OPI) project. All of these are pieces of, I think, a larger puzzle, and IPDK wants to be a component in that space, to make it a little easier to take advantage of all these new pieces of hardware that are coming out, but also to try to bridge the gap between lots and lots of different types of software APIs and interfaces.
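As a way to picture the target-agnostic idea Dan describes, here is a minimal Python sketch. It is not IPDK's actual API; the class and method names are hypothetical, and the point is only the shape of the abstraction: one programming interface, with software and hardware backends swappable underneath.

```python
# Illustrative sketch of a target-agnostic programming model (not IPDK's API).
from abc import ABC, abstractmethod

class InfraTarget(ABC):
    """One programming model, many targets."""
    @abstractmethod
    def add_forwarding_rule(self, match: dict, action: str) -> None: ...

class SoftwareTarget(InfraTarget):
    """Software-first environment: runs on a laptop, nothing to buy."""
    def __init__(self):
        self.rules = []
    def add_forwarding_rule(self, match, action):
        self.rules.append((match, action))   # emulate the data plane in user space

class DpuTarget(InfraTarget):
    """Hypothetical hardware backend; a vendor plugin would live here."""
    def add_forwarding_rule(self, match, action):
        raise NotImplementedError("program the device via its driver/SDK")

def deploy(target: InfraTarget):
    # The application is identical regardless of the target underneath.
    target.add_forwarding_rule({"dst_ip": "10.0.0.5"}, "forward:port1")

deploy(SoftwareTarget())   # develop and test in software, then swap in hardware
```

The design choice this illustrates is the one Dan names: applications are written once against the abstraction, and the decision of software versus hardware is deferred to the lowest point in the system.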
The idea is to have a place where, if I have a new application that I want to bring into this space, I can come to the IPDK platform, port that application, and get access to all the different targets, the different types of hardware and software that support IPDK. And conversely, if I have a new piece of software or hardware that implements the data plane for these different types of infrastructure, I can bring that to IPDK and have access to all the different applications that have already been integrated with IPDK. That integration could be integration with OpenStack, it could be integration with Kubernetes, and it could be integration with some of the different crypto control planes that we're putting together inside of IPDK. So today I just wanted to bring to everyone's attention these two different projects that we're working on actively, which are IPDK and OPI.

Great. Well, thank you for that overview, Dan. We'll come back to you with some questions in the roundtable as well. And we'll move on now to the final presentation, which is going to be brought to us by a senior engineer at Canonical, Dmitrii Shcherbakov.

Thank you. Hello, everybody. I'd like to talk to you about some aspects of integrating DPUs into open infrastructure clouds. If we go to the next slide: you can look at a DPU as, in one set of its features, a programmable switch that sits in the server. If you remember, some SR-IOV capable NICs include an embedded switch, and some more advanced ones are capable of flow offloading. With DPUs, this is taken a step further: there's actually an embedded system with a general-purpose CPU on the NIC card itself, with privileged access to programming flows and plugging in ports of the NIC switch. Architecturally, if you think about it, we no longer have networking agents running on the compute host; they're instead present on the DPU. You can see on the picture here that OVN agents run in the DPU user space, and in this case there's a kernel data path capable of offloading flows into the NIC switch.

So that changes a few things. If we go to the next slide, there are some aspects of integrating DPUs with open infrastructure. You can think about the concept of remote-managed ports: because the agents are now present on the DPU, the compute host can no longer interface with them directly. So we need to figure out how the compute host can transfer the necessary information to the cloud management system, which can be OpenStack or Kubernetes or something else, and then that information needs to be propagated to a networking backend. OVN is one of those backends, and it needs to essentially route port plugging requests to the right OVN chassis. How to discover that a particular compute host is associated with a particular DPU is something that we looked at, and we came up with a certain design. We're relying on standard interfaces for now: we take the information available in the card itself per the PCI specs, so we take a serial number, the MAC address of the PF, and the virtual function number, and we pass it all through the cloud management system down to OVN, to find the right DPU and do the necessary port plugging and flow programming.

If you go to the next slide, you'll see that we've upstreamed some features that are now present in the respective upstream projects. There is now a port plugging framework in OVN, which historically hasn't been a part of OVN's job.
Port plugging was typically done by CMS-specific agents, and now it can be done in a plugin-based approach with a new project called ovn-vif. This project is focused on plugging ports of the embedded switch into the right Open vSwitch bridges. There is currently a relatively vendor-agnostic, devlink-based implementation, and it is pluggable, so it can be extended to support things like DPDK or IPDK, to work with different kinds of hardware that don't have, let's say, a kernel-based data path. We've also upstreamed support for remote-managed ports, for this whole workflow, into OpenStack, into the Nova and Neutron projects: Neutron now supports passing the right kinds of information to OVN, and Nova supports collecting that information from the host. There were also some related changes to libvirt that we've made in order to support that. As you can see on the picture, this architecture is not necessarily specific to OpenStack: it's built on open interfaces, so if you are working on Kubernetes-based infrastructure or OpenStack-based infrastructure, or if you have some other project that can act as a cloud management system integrated with OVN, you can rely on the same design. Essentially, it gets to the point where you have a virtual machine or a container spun up that consumes, let's say, a virtual function on the compute host, and that virtual function is then properly plugged at the DPU side, with flows properly programmed, so that the traffic for a particular workload can be accelerated. So that's one of the aspects of DPUs. There are others, like providing storage, or things like hardware-based virtio; we're looking into those as well, and into testing with more hardware, because this is an extensible design. It's already available in the respective releases of OVN and in OpenStack Yoga, and those are available in Ubuntu 22.04. Thanks for your time. If you have any questions, feel free to reach out.

Thank you, Dmitrii. So I think we're going to go ahead and bring back all of our speakers, and while everyone is getting their cameras back on and joining us, Dmitrii, I'll throw you a question to start the discussion. When you introduce some of these new architectures, what kinds of challenges are also introduced? What kinds of control plane challenges, or other kinds... You talked about the higher levels; what are the things that an operator needs to be aware of?

Yeah, so there are some other DPU aspects as well: for example, the storage capabilities, or maybe the built-in GPU capabilities of DPUs. This needs to be orchestrated as well, and currently, at the OpenStack level at least, there is no good way to do this orchestration, so that's one of the challenges. Another one is the provisioning of those DPUs. Essentially, you need the control plane to... if you look at a greenfield cloud, you need to bring up both the hosts and the DPUs and link them together in a single data model, so that the operator doesn't have to make sure those steps are done in the right order, or come up with ad-hoc automation. So some challenges relate to the cloud management system itself, and some are related to provisioning, and potentially to different kinds of data paths that Open vSwitch might need to support, though that would be covered by DPDK or IPDK in some cases.
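To make the discovery design Dmitrii described concrete, here is a minimal Python sketch of the identifying information that travels from the compute host, through the cloud management system, down to OVN. The function and exact field names are illustrative assumptions based on the talk, not the precise upstream schema:

```python
# Sketch of the identity that lets a port on the host be matched to the
# right DPU chassis (field names are illustrative, following the talk).
def build_port_binding_profile(card_serial: str,
                               pf_mac: str,
                               vf_num: int) -> dict:
    """Collected on the compute host, per the PCI specs, and propagated
    via the cloud management system to the networking backend."""
    return {
        "card_serial_number": card_serial,  # identifies the physical DPU card
        "pf_mac_address": pf_mac,           # physical function on the NIC
        "vf_num": vf_num,                   # which virtual function to plug
    }

profile = build_port_binding_profile("MT2113X00000", "0c:42:a1:00:00:01", 3)
# OVN can then route the port plugging request to the chassis whose DPU
# reports the same serial number, and program flows for that VF.
print(profile)
```

The serial number and PF MAC values here are placeholders; in practice they come from the card itself, which is what makes the design vendor-agnostic.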
Okay. I always say that this is why the mission of open source infrastructure software never ends: as hardware changes, as the data center architecture changes, our software has to keep enabling these features and exposing them up to the higher layers so that everyone can take advantage of them. Pradeep, you talked about disaggregation and composability as being two sides of the same coin. What is it that allows that process of disaggregating and then recomposing to recapture some of that inefficiency, to improve the overall usage in these data centers? I think you might still be on mute.

Apologies. That's a very important question. As I said, those are two sides of the same coin: it's hardware that enables disaggregation, and it's software that enables recomposition. Now, if you can compose hardware on the fly with software, you can appreciate that all the resources become kind of liquid; they become poolable. Once you can convince yourself that there are no more stranded resources anywhere in the data center, it automatically follows that the utilization of the resources is very high. I'll give you a very simple example. Imagine that I have storage distributed across many, many racks, and compute servers distributed across many, many racks. Now, if I had a constraint that I could only use the storage in one particular rack, a good bit of the storage might be unusable because it doesn't satisfy that constraint. Proper disaggregation and composition removes this constraint altogether: the resource becomes location-independent, and anyone can use it by composing it in software. Composition here just means that I can put together CPUs, GPUs, FPGAs, and storage, and assemble them into a kind of mini data center, which is virtualized; but up until now it was not possible to do that with bare-metal virtualized performance, meaning full hardware-level performance. That's the magic: disaggregated hardware gets recomposed by all of this software into liquid pools that you can utilize more fully.

Okay, that's great. I love how simple you make it sound. So I want to go to Archna. Archna, you are the successor to George here from OCP on our show today. It's great to see the OCP structure, with the community involvement in designs that George was talking about. My question is about all of that community work: there were some questions in chat about how to get involved in OCP, and we dropped some links in there. How does that community work ultimately result in those designs finding their way into data centers and production systems?

That's a great question. It's not an easy process, because you have all sorts of different elements, right? You've got storage elements, you've got cooling elements, you've got racks, and you've got chiplets now. The community operates in a whole plethora of different modes. The chiplet folks are at the pioneering end of hardware, right? They're trying to understand what die-to-die interfaces are, what kinds of integrated solutions they can put on a die and then stitch together all in one package. So they're defining de facto standards. They're working together, taking off their corporate hats, and working to solve a problem; they're working on developing standards.
If you move to the opposite side, the data center folks, they have facilities, whether they're colo providers or whether they own their own data centers like the hyperscalers do. They're dealing with cooling that data center, with sustainable practices, with heat reuse from that data center, and with different types of racks coming in to optimize their workloads. And so that community is coming together from a data center facilities perspective, discussing best practices and providing design guidance, as well as guidance for those looking to engage or maybe build new data centers, whether at the edge, or in the enterprise, or something like that. So there are a lot of different ways the community operates, depending on what it's trying to solve in the industry. We have companies providing their expertise, research institutions providing their technology and research capabilities, and wonderful individuals providing their leadership. It's completely voluntary; it's completely volunteer-led. It depends on how deep you want to go, but you can engage: you can listen, you can add, you can contribute. So it's a very, very different model, but it works. It's been working for us for 10 years, and in the last five years it's gone global, which has been wonderful to see.

That's great, yeah. So we had a question from YouTube; maybe we can get that pulled up here. It's going to the other side of the coin that we've been talking about, which is software communities, and how software communities can best support these evolving hardware architectures. Dan, it sounds like with IPDK and the projects you're working on, that is the explicit goal of the community. How do you break it down from the big goal of "let's support these new architectures" to what it seems like you're focusing on, specifically interfaces and development kits? Could you talk a little bit about how you think the software community should be approaching this?

Yeah. I think the ideal situation would be that we could arrive at being able to consume composable hardware without any changes to the software. So, as step one, we have been using programmability in the hardware primarily to change the hardware so that it mimics the behavior of existing software. We did this where we put virtio into hardware in some of the products that are out today, and we have done this by taking some of the same constructs that you have in Open vSwitch, inside SPDK, or inside a storage pipeline, and putting them directly into hardware. Then we try to find the lowest point in the system, the one that requires the smallest amount of software change, where you can then have a choice: do I want this implemented in software, as it is today, or do I want to move it to hardware and get some of those composable systems that you just can't get when everything is in software? So this is sort of our way to introduce these new hardware platforms with the lowest amount of inertia. And then, once we can establish that process, we can take that same pattern and start to move up the stack; for example, we're taking things like Kubernetes and starting to offload parts of the Kubernetes data plane.
We're trying to take that same approach, where we minimize the amount of software changes that are necessary, the amount of direct awareness the software needs to have that there is this new hardware technology underneath it. That's the approach we're trying to take to make it easier to adopt.

Hey Jonathan, if I may add to that: in this new world where hardware is disaggregated, it would be great to take a page out of the book of how software is designed today, which is as a set of microservices. If you now think of new hardware features as microservices that can be invoked from anywhere, then the question becomes: what's the interface? A great example is in storage, where the storage industry has come up with NVMe over Fabrics. That's a standard by which any CPU, or any other type of computing element, can invoke reads and writes to media over the network. And I think the NVMe community is trying to build on that, for example by developing key-value store primitives. That's a great example, and using NVMe-style interactions is a great way to enable the use of disaggregated hardware resources. I think the goal of not having to change a single line of code is probably not achievable, even though it's laudable: if you have new functionality, somebody's got to write some code to speak to that new functionality. On the whole I agree that you want to minimize software changes, but I don't think it's always possible.

I guess I could add to that, bringing some experience of how we did the implementation of our changes upstream: it was only possible because other communities had done a lot of work before us and gave us certain interfaces, to the hardware or in the kernel, at the control plane level, that we could rely on. We tried to minimize the changes as much as possible, and we managed to essentially introduce just one new project into an already existing, larger open source ecosystem. I think that's a great achievement in terms of minimizing new source code and agents: we've essentially packed everything within the confines of the existing pluggable infrastructure that was there before, in OpenStack, in OVN, and in OVS. As to how the community could support us: design reviews and code reviews are very helpful, and I'd give a shout-out to the Nova, Neutron, OVN, and OVS communities, who provided all the design and code reviews. We did a lot of work with the community, and that was very helpful in shaping things the right way. Another aspect is the CI infrastructure. Essentially, with all of those new devices you have to build labs to test all of the integrations, and you don't always have access to this additional hardware. So if a community member who is part of an open source project can provide a certain type of hardware, that's very useful: more testing, and helping get this hardware out across the open source projects, is very helpful.

Thanks, y'all. I was just so excited that my question was getting answered that I decided to hop onto the episode, with some technical difficulties. So I'm going to be here to help wrap things up, but I know we're right about at time. Does anyone have any last thoughts about the topic, or any calls to the community on how they can participate and continue advancing software to address some of these hardware architectures?
Yeah, I guess... Yeah, go ahead. I was just going to say: open software communities coming together with our hardware community to really solve problems. I appreciate the invitation from OpenInfra, and I love seeing OpenStack there, and I love seeing the Linux Foundation there. I think that's the ultimate community. We can't do it alone, and we're not going to be the best at what we do unless we partner with our members as well as our community and sister communities. So that, to me, is a very positive thing.

So, Allison, I'd like to tell you one thing. Oftentimes in the industry we've seen hardware and software treated as though they were antagonists. There is no such thing: software doesn't run on air, and hardware is useless without software. I think it's time that people stop going one way or the other. What's clear is that, with Moore's Law slowing down, we're going to have innovation in silicon, and there are going to be new primitives, and these new primitives will need software. This is where software and hardware folks need to learn to work together. It's not easy, because the time scales of development for software are a lot faster than for hardware; that's almost a law of nature. Building chips takes time; with software it's okay to iterate and fix things, with silicon not so much. So I think one realization for the open source communities is that there needs to be much better collaboration between the people who are trying to develop new architectures and new hardware and the people who are trying to develop new software paradigms.

Well, I'm glad you brought that up. I completely agree with both: collaboration across communities and foundations and companies and countries is what makes all of these things advance. And sorry, Archna, I completely cut you off; I'll pass it to you, and then I'll wrap the episode. All I'm going to say is: I couldn't have said it better.

Yeah, well, that actually brings me to my last point, which is really exciting: we have an upcoming opportunity for all of us to collaborate in person. So thank you all for joining this OpenInfra Live episode. I learned a lot, and I really hope we continue the conversation in a few months: the OpenInfra Foundation is hosting our first in-person summit in the past two and a half years. So if you haven't already registered: June 7th through 9th, we're going to be in Berlin, and we have a great lineup of speakers. You can register at openinfra.dev/summit, and we've dropped a link into the chat on whatever platform you're streaming on as well. Thank you to our panel participants today; it was a very interesting and engaging conversation, and let's continue the conversation in person in Berlin in June. All right, thanks, everybody. Have a great day.

Thank you, Allison, for the invitation. Thank you so much. Likewise, thank you.