So hello everyone. I'm Ariel, and this is Maxime. We're going to talk about vDPA. I do have a tendency to talk too quickly sometimes, so you can just tell me and then I'll talk even quicker. So let's go over the agenda.

We're going to start from the problem statement. One of the issues when we talk about vDPA is that it is so technical, and it touches so many things, that it's easy to get lost, so we want to start from the problem statement: what are we actually trying to do here? Then we're going to talk about SR-IOV based acceleration; for those of you who are not familiar, a few words on SR-IOV and how we actually use it to accelerate containers and VMs. Then we'll move on to vDPA and explain what it actually does and how it, again, is used for accelerating containers and VMs. Then we'll talk briefly about the vDPA kernel framework, the vDPA DPDK framework, and the vDPA Kubernetes integration which we are currently developing, followed by a live demo, looking forward, and Q&A.

So the problem we're trying to solve here is accelerating networking for VMs and containers, at a high level. In today's use cases, both telco and enterprise, we're seeing more and more situations where you want to take a VM or a container and give it a fast networking interface. Fast meaning wire speed, the full capacity of the NIC, very low latency. The way to do that today is usually SR-IOV. One point here: when we talk about accelerating networking, that can mean many things, so let's split the discussion into packet processing and packet steering. Packet processing is the application actually building and handling the packets. Packet steering is getting the packet from the application as fast as you can into the NIC. Our focus with vDPA is packet steering; packet processing is handled by DPDK and other tools. And this is something we're going to talk about a lot: the issue of decoupling your VM or container from the physical NIC. We'll come back to that. SR-IOV, which we'll get to in a second, is a very good solution, and it is deployed today by telcos and others, but it creates a coupling between the actual image of your container or VM and the physical NIC, and that has real consequences.

So talking about SR-IOV, the formal definition is single root I/O virtualization: a PCI standard where a single physical device is shared by multiple virtual machines. What it actually means is that you can take a physical device and slice it; you're basically slicing it across multiple containers and VMs. They're all sharing the same physical link, although each of them feels as if it has its own physical NIC, which is really just a slice of the device. When we talk about SR-IOV, we talk about PFs and VFs: physical functions and virtual functions. The PF is the PCI device itself; it's how you provision the physical NIC, it essentially is the NIC. You can only have one PF on a physical NIC, but then you have VFs. The VFs are those PCI slices; you can have multiple VFs on a single physical NIC.

So talking about SR-IOV, how do we actually accelerate with it? First of all we're going to talk about containers; you also have acceleration for VMs, but we just want to focus the discussion. The diagram convention we're going to use all through this talk is: we always have our hardware layer, which is the hardware blocks, we have the kernel space, we have the user space, and then we map the different building blocks onto these layers.
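To make the PF/VF slicing concrete before moving to the Kubernetes picture, here is a minimal sketch of how VFs are provisioned from a PF on Linux. The PCI address and VF count are hypothetical; the `sriov_numvfs` sysfs attribute is the standard kernel interface for this.

```c
/* Minimal sketch: provisioning SR-IOV VFs from the PF, assuming a
 * hypothetical PF at PCI address 0000:3b:00.0. The sriov_numvfs sysfs
 * attribute is the standard Linux interface for this; the address and
 * the VF count are illustrative only. */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    const char *path =
        "/sys/bus/pci/devices/0000:3b:00.0/sriov_numvfs";
    FILE *f = fopen(path, "w");

    if (!f) {
        perror("open sriov_numvfs");
        return EXIT_FAILURE;
    }
    /* Ask the PF driver to create 4 virtual functions ("slices"). */
    fprintf(f, "%d\n", 4);
    fclose(f);
    return EXIT_SUCCESS;
}
```

Each VF then shows up as its own PCI device, and that slice is what gets handed to a container or a VM.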
In Kubernetes we talk about a node, so this is one node, with its user space, kernel space, and hardware, and we're going to describe it top down. Again, we're focusing on packet steering; packet processing is done by DPDK. So typically what you're going to see today with SR-IOV, in the container case, is a container with the DPDK library linked into the application. That gives very efficient packet processing, and the packet steering is done by SR-IOV.

How is it done? It's built on Multus. Multus is a CNI plugin that enables you to associate multiple CNIs with a given pod, that is, to inject multiple interfaces into the same pod. Why is that useful? Because it's a non-intrusive solution: you can have your regular Kubernetes deployment with yet another container, and the primary interface is just the usual Kubernetes interface, doing everything as usual. Then you can push in a secondary interface, or a third or a fifth interface, that is accelerated, that is connected to SR-IOV, and you can do whatever you want there.

So the main building blocks, top down, are the SR-IOV device plugin, which does the bookkeeping, maintaining the pool of VFs that the device plugin hands out; Multus, calling into the CNIs; and the SR-IOV CNI, which Multus invokes. The SR-IOV CNI talks both with the actual pod, injecting a net0 SR-IOV interface, and with the physical NIC, for example provisioning a VF or configuring something on that SR-IOV link. Inside the pod itself we have the vendor VF PMD, which is a poll mode driver; that's the efficient way to do packet processing. From that point on, the vendor VF PMD can talk directly with the physical NIC.

Then we differentiate between the red line, which is the data plane, and the blue line, which is the control plane. The data plane is really just memory mapping, pushing packets back and forth as quickly as possible; the control plane is configuration and signaling, everything on top of that. The key point of this slide is that this part here, the vendor VF PMD, that driver inside the container, is vendor-specific. It is tightly coupled with this specific NIC and this specific vendor firmware. If you take this container and try running it on a different Kubernetes node with a different NIC, or even the same NIC with different firmware, it's not going to run. So this is how it's done today.

Then we jump to vDPA. So what is vDPA? The basic concept is to take something that's been around for 12 years or more: virtio networking, paravirtualized I/O, the ability to give your VM or guest an efficient standard device backed by the host. Virtio-net is the virtio interface for networking. What we're basically doing is taking the virtio ring and, instead of just running it between the VM or guest and the host, pulling it all the way down to the physical NIC.

So vDPA, or virtio data path acceleration, is composed of two parts, the data plane and the control plane. The data plane part is the green arrow: it's how the buffers are actually arranged, and that part is standard. vDPA assumes that the NIC vendors implement the virtio data plane in the NIC, which can be either split ring or packed ring as defined in the virtio specification; everything there is open and standard. The other part, aside from the data plane, is the control plane. vDPA does not assume that the NIC vendors implement the virtio control plane inside the NIC. Instead, it provides a translation layer.
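To make the coupling problem tangible, here is a minimal sketch, in the spirit of the slide, of the DPDK application running inside such a pod. Port and queue parameters are illustrative; the important part is that the poll mode driver linked into this binary is the vendor's VF PMD, so the image only works on that vendor's NIC and firmware.

```c
/* Minimal sketch of the DPDK app inside the pod (error handling trimmed).
 * The vendor VF PMD is linked into this binary and drives the VF that
 * the SR-IOV CNI injected; queue sizes and counts are illustrative. */
#include <stdlib.h>
#include <rte_eal.h>
#include <rte_debug.h>
#include <rte_lcore.h>
#include <rte_ethdev.h>
#include <rte_mbuf.h>

int main(int argc, char **argv)
{
    struct rte_eth_conf port_conf = { 0 };
    struct rte_mempool *pool;
    struct rte_mbuf *bufs[32];
    uint16_t port = 0, nb, i;

    if (rte_eal_init(argc, argv) < 0)       /* probes the VF via vfio-pci */
        rte_exit(EXIT_FAILURE, "EAL init failed\n");

    pool = rte_pktmbuf_pool_create("mbufs", 8192, 256, 0,
                                   RTE_MBUF_DEFAULT_BUF_SIZE,
                                   rte_socket_id());

    rte_eth_dev_configure(port, 1, 1, &port_conf);   /* 1 RX + 1 TX queue */
    rte_eth_rx_queue_setup(port, 0, 1024,
                           rte_eth_dev_socket_id(port), NULL, pool);
    rte_eth_tx_queue_setup(port, 0, 1024,
                           rte_eth_dev_socket_id(port), NULL);
    rte_eth_dev_start(port);

    for (;;) {
        /* Packet steering: descriptors come straight from the NIC via
         * memory-mapped rings (the "red line" on the slide). */
        nb = rte_eth_rx_burst(port, 0, bufs, 32);
        for (i = 0; i < nb; i++)
            rte_pktmbuf_free(bufs[i]);  /* packet processing would go here */
    }
    return 0;
}
```

The same receive loop will reappear unchanged in the vDPA case below; only the driver underneath changes.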
It's a framework, either in the kernel, which I will describe, or in DPDK, which Maxime will describe, into which the vendor can basically plug his driver. From that point, the vendor is using his proprietary interface to talk with his physical NIC, and the framework translates that into something standard, which can be either vhost or the vhost-user protocol, depending on whether we're talking about containers or VMs. The point is that if we do this, we're able to have an open, standard virtio driver inside your container or VM image, and we're going to see that in a second. That means it's decoupled from the actual physical NIC.

I'll just point out that there is another way to do this, which is called virtio full hardware offload. This is actually in production at Alibaba, for example, on the Alibaba bare-metal servers, where you implement both the control plane and the data plane on the physical NIC. Maxime actually tried this out for the KubeCon demo we did in October. It's much more challenging, because the control plane changes much more rapidly than the data plane.

Why is vDPA complicated? First of all, many stakeholders, many opinions. And second, because it touches so many things. If you think about it, it seems like such a small thing: it's just the virtio interface moving from your pod or container down into the NIC. But in practice it's really complicated, because you need to touch so many components: you need to work on the kernel, on QEMU, on libvirt and DPDK, on Kubernetes and OpenStack, and so on. So it touches many, many communities.

Are we on time? Okay, so let's see how we can actually use vDPA to accelerate containers. It's very similar to the picture we used for SR-IOV, and the main building blocks are the same. We've got a vDPA device plugin instead of an SR-IOV device plugin, which does the bookkeeping for the VFs in the vDPA case. We still use Multus, and Multus then invokes the vDPA CNI instead of the SR-IOV CNI, and we inject a net0 virtio interface into the pod.

But the magic starts here. The first thing is that inside the pod we have a virtio-net PMD driver. This is a virtio driver; it's not vendor-specific, it doesn't care which NIC is underneath, it's generic. And the second thing is that we have a framework. In this case I'm showing the framework inside the kernel; it can also be in user space using DPDK. This is the vDPA framework plus the vDPA vendor drivers. The NIC vendor basically plugs his own control plane driver in here, and that does the translation. So again, we have a data plane going directly from the driver all the way down to the NIC, a standard virtio data plane, and we have our control plane going through the translation layer.

If we now look at VMs, in the world of VMs we've got libvirt, we've got KVM, we've got our QEMU process. Inside the guest we have kernel space and user space. Again, if we're talking about acceleration, we're usually going to talk about DPDK applications, and in this case the DPDK application has its virtio-net PMD in the guest user space here. Again, it talks directly with the physical NIC: the data plane is memory mapping going all the way down, and it's the same vDPA framework we saw in the case of containers. So the point is that the same framework can support containers and VMs.
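A small sketch of what changes inside the pod in the vDPA case: the application stays the same, but the EAL now instantiates the generic virtio-user PMD on top of the vhost-user socket that the vDPA CNI injected, instead of probing a vendor VF. The socket path here is hypothetical.

```c
/* Same DPDK application as before, but the in-pod driver is now the
 * generic virtio-user PMD attached to an injected vhost-user socket.
 * Everything after rte_eal_init() (queue setup, rx_burst loop) is
 * unchanged from the vendor-PMD sketch above. */
#include <stdlib.h>
#include <rte_eal.h>
#include <rte_debug.h>

int main(void)
{
    char *eal_args[] = {
        "app",
        "--no-pci",   /* no vendor PCI driver needed inside the pod */
        /* generic virtio driver over the socket the vDPA CNI injected
         * (path is hypothetical) */
        "--vdev=virtio_user0,path=/var/run/vdpa/net0.sock",
    };

    if (rte_eal_init(3, eal_args) < 0)
        rte_exit(EXIT_FAILURE, "EAL init failed\n");

    /* ... rte_eth_dev_configure() / rte_eth_rx_burst() loop as before ... */
    return 0;
}
```

This is exactly the decoupling point: the image carries only the standard virtio driver, and whatever NIC (or software backend) sits underneath is decided outside the pod.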
So we talked about this block here, the vDPA framework in the kernel, and I'm going to talk briefly about that framework; Maxime is going to talk about the DPDK one. For the native kernel speakers here, this is going to be a really high-level, short discussion, because Jason Wang, who's leading this effort, has basically just finished a document of about 30 pages on this, on the issues of IOMMU and DMA and the many variants and how we can support all of it in Linux. So it's a much broader discussion, and we prefer to take that one offline.

If we zoom in, this is the vDPA framework in a nutshell. Again: hardware, kernel, user space. On the hardware side, when we talk about vDPA, we have, and will have in the future, multiple vDPA devices from different NIC vendors. Similar to SR-IOV, you've got your physical functions and your virtual functions. But looking forward you're going to have additional things. You're going to have sub-functions, something nobody supports in production yet today. And we're also going to have ADIs, Assignable Device Interfaces; ADI covers new technologies such as Scalable IOV, for example, which Intel is coming out with. It's the ability, instead of using VF slices, to use much smaller slices of the device's capacity, going all the way down. And that's something we also want vDPA to support going forward. So vDPA is not only a replacement for SR-IOV; it's meant to sit on top of any of these interfaces going all the way down to the physical NIC.

The key point of the vDPA framework is the ability to take all these different vDPA devices and expose them to the existing vhost and virtio drivers as yet another vhost device or virtio device. That's one of the key points, so I'm just going to say it again. This is what we have today, and it's been working for a very long time: the vhost driver and the virtio driver are always there. Usually, in the typical case, vhost is what runs in the host kernel, and the virtio driver is what usually runs in the guest kernel. In the case of vDPA we pull them both into the host, and we want to give them the ability to keep doing exactly what they've always done, that is, talking with vhost devices or virtio devices, while in practice they're talking with vDPA devices. That's what the vDPA framework does.

So now I'm going to zoom into the vDPA framework, and then I'll talk about the vhost path and, briefly, about the virtio path. The vhost path is what we're currently using: all the use cases we've talked about until now, container acceleration, VM acceleration, everything goes through the vhost path. The virtio path is for more advanced use cases we're planning to get to. I won't talk about AF_XDP, which is a socket-based interface, but we're also looking into socket-based interfaces that may in the future become another way to consume vDPA.

So first of all, opening this box: the key part of the vDPA framework is the vDPA bus. For each of the different vDPA devices the NIC vendors provide, they also develop a driver, and that driver is vendor-specific, which is this block here. And again, we're talking about the control plane; I'll remind everyone that the data plane is just memory mapping, so this whole framework is only for the control plane. The interface between that driver and the physical device is proprietary, it's vendor-specific. We accept that, and you're going to have many of those.
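As a small illustration of the "many vendor devices, one bus" idea: on a kernel that has the vDPA framework, the bus and the devices registered on it are visible in sysfs, so a sketch like the following can list each vDPA device and the bus driver it ended up bound to. It assumes the upstream framework is present on the machine; the actual device names depend on the vendor driver.

```c
/* Sketch: enumerate devices on the vDPA bus and show which bus driver
 * (e.g. vhost_vdpa or virtio_vdpa) each one is bound to. Assumes a
 * kernel with the vDPA framework; device names are vendor-dependent. */
#include <dirent.h>
#include <limits.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    const char *bus = "/sys/bus/vdpa/devices";
    DIR *d = opendir(bus);
    struct dirent *e;

    if (!d) {
        fprintf(stderr, "no vDPA bus found (is the vdpa module loaded?)\n");
        return 1;
    }
    while ((e = readdir(d)) != NULL) {
        char link[PATH_MAX], target[PATH_MAX];
        ssize_t n;

        if (e->d_name[0] == '.')
            continue;
        /* The "driver" symlink points at the bus driver consuming the device. */
        snprintf(link, sizeof(link), "%s/%s/driver", bus, e->d_name);
        n = readlink(link, target, sizeof(target) - 1);
        if (n < 0)
            strcpy(target, "(unbound)");
        else
            target[n] = '\0';
        printf("%s -> %s\n", e->d_name,
               strrchr(target, '/') ? strrchr(target, '/') + 1 : target);
    }
    closedir(d);
    return 0;
}
```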
But what this framework does is force you to implement an interface inside your driver, which we call the vDPA device abstraction, and that connects to the vDPA bus. The way buses work, we have a driver and we have a device, and the bus basically connects them together. So on the one hand these blocks are drivers, if you look at them from the side of the physical devices here, but from the perspective of the bus they are also devices. That's a little tricky, but again, there's the perspective of devices and the perspective of drivers.

So after presenting the basic structure of the block, now I'm going to focus on how we actually connect vhost to this bus. We do it by adding another block here, which is called the vhost vDPA bus driver. Now, I know the names are somewhat challenging, so I'm just going to try to explain the essence. The essence is that vhost has been here for ten years, and vhost knows how to work with a vhost device; that's what it does. We want to continue using that, we don't want to change anything; the whole idea of vDPA is building on something that works, not changing the world. So on the one hand vhost needs to talk with a vhost device, but on the other hand these are actually vDPA devices. The essence is that by putting a translation layer here, which is the vhost vDPA bus driver, we're able to give the vhost driver the impression it's actually talking with a vhost device, while on the other hand, from the perspective of the vDPA bus, this is just another driver on the bus. I hope I was able to convey the message; I know the naming isn't perfect. But again, the takeaway is: vhost is something that's been around a long time, you want to keep using it, it just sees vhost devices, but in practice, with the magic here, it's working with vDPA devices.

The other path is the virtio driver path. Again, this is traditionally running inside the guest; here we're pushing it into the host kernel. And again we're building on existing parts of virtio, which are the virtio driver and the virtio bus, but here too we push in this translation block, which we call the virtio vDPA bus driver. What it does is, again, give the virtio driver the impression it's just talking with a virtio device. If you remember, in the case of a VM, virtio runs in the guest kernel and talks with the device, which is emulated by QEMU; we're doing something really similar, just inside the host kernel. And again, the virtio driver talks with a virtio device, although in practice it's talking with a vDPA device, whichever of the many options that may be. So that's kind of the magic of the framework here.

To repeat what I said: the virtio driver path is for future use cases. vhost is what we're using for everything we're talking about today: containers, VMs, everything goes through vhost. But looking forward we have concepts, again socket-based interfaces, and those are going to use the virtio driver path. Thank you.

So Ariel talked about the kernel vDPA framework, and on my side I will talk about vDPA in DPDK. First, in case not everyone is aware of what DPDK is: it stands for the Data Plane Development Kit. It is basically a framework, a set of user-space libraries and poll mode drivers, that is used to achieve fast packet processing. Regarding vDPA, it has been developed as a joint effort between Intel and Red Hat and was introduced in the DPDK 18.05 release. It builds on top of the vhost-user library of DPDK.
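A quick aside on how the vhost path just described looks from user space: in upstream kernels that have the vhost vDPA bus driver, it surfaces as a /dev/vhost-vdpa-N character device that vhost consumers such as QEMU open. A minimal probe, assuming such a device exists on the machine, might look like this sketch.

```c
/* Sketch: probe a vhost-vdpa character device with the standard vhost
 * ioctls. Assumes a kernel exposing /dev/vhost-vdpa-0 (the name depends
 * on the kernel and on which vDPA device got bound to vhost_vdpa). */
#include <fcntl.h>
#include <stdio.h>
#include <stdint.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/vhost.h>

int main(void)
{
    uint64_t features = 0;
    int fd = open("/dev/vhost-vdpa-0", O_RDWR);

    if (fd < 0) {
        perror("open /dev/vhost-vdpa-0");
        return 1;
    }
    /* Same ioctl a vhost consumer would use on vhost-net: the device
     * behind it just happens to be a vDPA NIC. */
    if (ioctl(fd, VHOST_GET_FEATURES, &features) == 0)
        printf("virtio feature bits offered: 0x%llx\n",
               (unsigned long long)features);
#ifdef VHOST_VDPA_GET_DEVICE_ID
    {
        uint32_t devid = 0;
        if (ioctl(fd, VHOST_VDPA_GET_DEVICE_ID, &devid) == 0)
            printf("virtio device id: %u (1 = network device)\n", devid);
    }
#endif
    close(fd);
    return 0;
}
```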
And it relies on the vhost-user library to provide a unified interface to the front end. What is also needed, on top of this vDPA framework, are vDPA drivers, which are also part of DPDK, to handle the vendor-specific control path.

Before introducing vDPA in DPDK, let's have a look at a classic virtualization architecture based on DPDK, something that's in production today. On the guest side we have the VNF, for example, that is using the virtio-net PMD. This is the standard virtio driver of DPDK; it uses VFIO to control the paravirtualized virtio device. On the host side, OVS-DPDK uses the vhost-user library to act as the back-end for the virtio device, meaning that the accesses done by the virtio-net PMD are translated, one way or another, into vhost-user protocol requests: things like setting the vring addresses, configuring the number of queues, and so on. So, as I said, OVS-DPDK relies on the vhost-user library to act as the vhost-user back-end, and the data path is processed in software by the vhost-user library, which means it costs a lot of CPU and has an impact on performance. OVS then switches the packets, so we have an extra copy here: the packets are copied from the vhost-user rings into OVS and then switched to a vendor PMD, which is the vendor-specific driver that drives the physical NIC.

The advantage of such a solution is the flexibility of switching the packets in software, and also that no vendor-specific driver is running in the guest, unlike the SR-IOV solution Ariel presented. Another advantage is that you can support live migration, because the vhost-user library is able to track dirty pages, to know which pages have been modified by the OVS-DPDK process. But, as I said, it's a lower-performance solution, and it has an impact on cache pressure and CPU usage that we have to pay for. Compared to the SR-IOV solution, for example, you will fit fewer VNFs or CNFs per machine.

So now let's compare with the vDPA solution. Just to connect the dots: before, we described the kernel vDPA framework, and now we're showing the DPDK framework; they both provide the same capability. As in the previous slide, we have the same building blocks for the VNF: the same VNF using the same driver, same configuration. But instead of OVS-DPDK we have a different process, the vDPA daemon, which receives the vhost-user protocol requests from QEMU, and the daemon translates these requests into vendor-specific hardware register accesses. Here the data path is no longer processed in software, so we get hardware performance as with SR-IOV, but unlike SR-IOV we still support live migration. Live migration can be supported either directly in hardware, though that has a cost in PCI bandwidth, or it can be assisted by the daemon, though that has some cost in CPU usage.

So compared to the software solution, we have better performance and much less CPU usage, because there are no more extra copies and no more processing of the descriptors in software. But compared to what I presented earlier, we do require an application running on the host to translate the vhost-user protocol into the vendor-specific accesses. The key point here is that we are still standard: we use the vhost-user protocol that is specified in QEMU, meaning we can plug directly into QEMU, and we can use it directly with a container through the virtio-user PMD.
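A rough sketch of that host-side daemon, in the spirit of the DPDK vdpa example application: it registers a vhost-user socket for QEMU (or a virtio-user container) to connect to and attaches it to the vDPA device the vendor driver probed. Function names follow the DPDK vhost/vDPA API of that era as best as I can reconstruct them and should be treated as illustrative; the socket path and device id are hypothetical, and the API has been reworked in later DPDK releases.

```c
/* Sketch of the host-side vDPA daemon setup in DPDK, roughly what the
 * examples/vdpa sample application does. API names are illustrative and
 * era-specific; socket path and device id are hypothetical. */
#include <rte_vhost.h>

int start_vdpa_port(const char *socket_path, int vdpa_dev_id)
{
    /* Create the vhost-user socket that QEMU or virtio-user connects to.
     * Client mode lets the front end own the server side and reconnect. */
    if (rte_vhost_driver_register(socket_path, RTE_VHOST_USER_CLIENT) != 0)
        return -1;

    /* Bind this socket to the vDPA device probed by the vendor driver:
     * from now on, vhost-user requests (vring addresses, features,
     * memory tables) are translated into vendor-specific HW accesses. */
    if (rte_vhost_driver_attach_vdpa_device(socket_path, vdpa_dev_id) != 0)
        return -1;

    return rte_vhost_driver_start(socket_path);
}
```

From that point on the daemon only sees control-plane traffic; descriptors never pass through it.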
We don't have to make any changes to the application when we want to move between a solution where the data path is processed in software and one where it is offloaded to hardware. To achieve that, the framework defines a set of callbacks that the vendor drivers have to implement to do the translation. Basically, we have callbacks to set and get the virtio features, to report the number of queues supported by the NIC, to configure the IOMMU tables using VFIO, and to set up the vrings in the hardware. For now we only have one application, an example shipped with DPDK, that can act as this daemon, so we are looking at integrating it into other solutions or providing an application that would be suitable for production.

Now let's have a look at how this fits into a Kubernetes architecture. Here we have a workload that will consume the vDPA VF using the virtio-user PMD, and we have three main components. The first is the vDPA CNI; the CNI is a plugin that is invoked by Multus, as before, and it is in charge of injecting the vhost-user socket into the pod. Then we have the vDPA daemonset, which runs the vDPA daemon to do the translation between the vhost-user requests and the vendor-specific accesses. This vDPA daemonset also runs a gRPC server, so that the CNI can query, for a given device, which vhost-user socket it corresponds to. And finally we have the vDPA device plugin. The device plugin acts as the bookkeeper: at start time it scans the system to find the available vDPA devices, and when network attachments are created it maintains the mapping between the PCIe addresses and the custom resources that the workloads will request.

So when a workload starts, it references a network attachment and asks Multus to resolve it; that maps to a resource name, for example httpd, which here is an HTTP server. The device plugin hands out the PCIe address that corresponds to such a resource. Multus forwards that information to the vDPA CNI, which queries the gRPC server to get the vhost-user socket corresponding to this PCIe address. Once it has that information, the socket is injected into the pod and the workload can initialize its virtio device.

So now we will do the live demo. The demo is based on what I just presented. The workload is an httpd application based on Seastar; Seastar is a framework developed by ScyllaDB, a database company, for building high-performance applications, and since it uses DPDK underneath it can consume the vDPA interface through the virtio-user PMD. For the NIC we use Intel Cascade Glacier SmartNICs. These NICs support the virtio 1.0 specification, so they don't yet support the packed ring layout from the newer revision of the virtio spec; it's split ring for now. And currently this NIC is not generally available, it's only for select customers. Basically we have the same building blocks as before, so the workload will be the Seastar httpd, and we connect a second server where we also use Seastar, but as a traffic generator.

The Wi-Fi is still working, so that's good news. We have a Grafana dashboard here, showing some Prometheus metrics; I have a backup if needed. So the vDPA components are coming up and the containers are being created. Here we have the Grafana dashboard: the Seastar httpd application exports some metrics, so here is the number of HTTP requests per second, here we have some statistics on throughput and packets per second, and here we have a list of the containers that are running on the node.
So, as I said, on this machine we have the Intel Cascade Glacier NIC. We can see with lspci that it exposes SR-IOV: here we have the PF, function 0, and three VFs have been created. These VFs are bound to vfio-pci, so we can consume them from user space with DPDK, for example. Here we start the two building blocks, the daemonset and the network attachment definition, and then we start the httpd pod. The daemonset creates the network attachments; in our case we only need one, this one, which corresponds to the httpd. The other ones could, for example, be network attachments for other types of applications. Now if we move back to the Grafana dashboard we see that our containers are up and running. The Seastar one is not initialized yet, because we don't see its metrics; now it is initialized, and we can see that we have our pods here.

I'll just point out that we've got two physical servers connected back to back. They both support vDPA, but one is just injecting traffic, and the other one is what we're actually using: it runs as a Kubernetes node and schedules all these pods. So on the traffic generator side we just start generating some traffic using the Seastar-based client application, and here we can see the traffic starting to run.

Just to add: currently it's about 160k, it's not a lot of traffic that you're seeing here, but the point is that this is RHEL and a real vDPA NIC, the data plane all the way through, including the control plane. Again, the framework in this case is the DPDK one, not the kernel framework, but still. It was important for us to show, at KubeCon and here on RHEL, that this is a real technology. The second point is that we're gradually improving the performance, and the target is to match SR-IOV performance. What's the packet size in this run? It's small packets, but the cost here is mainly in the TCP stack: we have 20 simultaneous connections, but we only use one CPU to drive this traffic, on both sides. We actually wanted to show something more stable here; at KubeCon, for example, we were pushing it much harder.

So, looking forward. First of all, the key takeaway: what we've been showing here, and we've been showing this a lot over the past few months, is that vDPA is real. Again, it touches a lot of components, but it's consolidating. We've been working on this, in give and take with Intel and others, for two and a half years. There are many opinions on how exactly to do it, but we're gradually getting the communities to agree. The second point is that if we go back to the problem we started with, the issue is how we can decouple the VM or container from the physical NIC, and that's exactly what vDPA does. It solves the decoupling issue: you can have an image with a standard driver and run it on any physical NIC that supports vDPA. For VMs it means you can now actually migrate VMs between different machines; for containers it means you can spin up containers on any machine you want, assuming the NIC supports vDPA. And it solves a real problem, which is the issue of certification.
Certification of containers and VMs is becoming more and more of a challenge, especially for telcos as they go into mass production, because trying to match the image of every container or VM with the permutations of a specific driver and firmware for a specific NIC is almost impossible to manage and test: you have so many combinations, the same application with different images, and that means different testing. If you're able to have a single image for the container or VM, with a single standard virtio driver, you're solving that problem. It also opens the door to a bunch of additional hardware offload capabilities, because once you have a standard interface into the NIC, you can start offloading additional things into the NIC where it makes sense; there's no point in offloading everything to the NIC, and deciding what to offload is actually the bigger question.

We expect all the major NIC vendors to be supporting vDPA in the next 12 to 18 months. They're all involved: we're working with them, supporting them, solving issues, enhancing the spec. Some of the vendors actually have cards now and are going to be coming out with public announcements. We're also working on feature parity with SR-IOV, so that we can provide users a simple migration path. The idea is not to come to them tomorrow and say you have to change everything to vDPA; you can gradually take containers or VMs and migrate them to vDPA while you still have the majority of your VMs and containers running on SR-IOV, so it's a gradual process. That's why we also want feature parity with SR-IOV.

We're also working to convince cloud providers; Amnon is here as well, and we've had discussions with them. It's challenging, but it's something we definitely want to aim for. If we're able to convince cloud providers to also use vDPA, then you can really start talking about a hybrid cloud solution: getting the same image of a container or VM to run on-prem or in the public cloud, which is impossible today. Think of AWS ENA. ENA is SR-IOV, so if you take a VM or a container today and attach an ENA interface, it's not going to run anywhere outside AWS, because you've pushed their driver into your image. If you have something like vDPA, you can take that image from AWS and just run it on-prem, or on Google Cloud, or anywhere else, assuming they support it, and that's a game changer for these specific use cases of VMs and containers we want to accelerate.

Another thing to show you is the series of blogs we've been working on. (Oh, I forgot it was a PDF.) As I said at the start, this is complicated because there's so much history behind it all, and different architectures: vhost-net, virtio-net, vhost-user. Maxime talked about vhost-user with the virtio PMD, and now vDPA; we also mentioned full hardware offload briefly and compared it all to SR-IOV, so we really only touched these topics briefly. What this series of blogs does is take you through it step by step. It's composed of three types of posts: one is the solution overview, if you just want to see the big picture, so for each of the different architectures you get a solution overview; then you have a deep dive, which is really going down the rabbit hole and understanding the details; and you also have hands-on posts with Ansible scripts, so you can just go and try it out yourself. The last point is the community, virtio-networking, with the mailing list, which you see here.
You're welcome to register to the mailing list; it's an open, public mailing list, we just host it at Red Hat because it's simple for us to maintain. Discussions happen there, for example Jason's discussion on the vDPA kernel framework, the documents are there, and discussions on AF_XDP and the socket-based solutions are there as well; you are welcome to join it. Anything else you want to add on this? Thank you very much. Now questions, go ahead.

Question: what is the expectation regarding features? Are vendors bound to implement all the vDPA features that exist, or, to rephrase, do vendors actually have to implement all the virtio features in a given version of the spec?

Answer: it's up to the vendor what to implement, as long as it advertises what it supports. With feature negotiation you don't have to implement all the features. Part of the work we're doing, for example with Marvell, is adding additional features into this negotiation, because they need specific things to optimize their queues; if it makes sense, we add it, and then the driver can simply go through the negotiation and decide at runtime what is actually supported and what isn't. So it's still the same driver in your image, and it can do the handshake to understand the real capabilities. Everything here is about the data plane.

Second question: how does the performance compare, for example to SR-IOV? Again, it's vendor-dependent. From the samples we have, we're not switching at line rate with small packets yet, but it's already better than what we see with vhost-user in software. We're still not all the way there on performance, but for specific vendors, talking about 100 gig or 40 gig NICs, with small packets it can reach tens of millions of packets per second with the NICs that we know are coming; and later on, for vendors doing this in ASIC, there's no real limitation there beyond the performance of the ASIC itself. On implementations, just to point out: some of the partners we're working with are planning to implement it on FPGAs, so that will cost some performance, but others are going to ASICs, and there I think it will match SR-IOV; the latency is going to be the same, which in many of these cases is actually more important.

Question: since you've moved some of what used to run in the guest into the host, does that mean you could run without having to pin the memory? Answer: let me come back to the question. We haven't really moved a component from the guest to the host. It used to be the case that, because it's one user-space process talking to another, you had to lock the memory, pin the pages. In the case of DPDK that's still true, because we are using the same interface. So Amnon said yes, and are we really sure about that one? The answer is that the socket-based interface we're looking at will not require you to lock the page memory. You won't need to pin pages, but it's more complicated; it's based on different ideas about what we're able to pass, and that's something I'd prefer Jason to lead a proper discussion on, because there are a lot of details in it. So long term we expect to have that, but we want to challenge it first, just to point out. One thing that we really try to focus on is doing something that we feel we can productize, something that this community can support over time.
So yes, we definitely want to try and test the approach of a socket-based interface instead of DPDK, because we think it can give value and simplify things, but it's yet to be proven. Questions from anyone else? Go ahead.

Just another point to mention: the bigger vision here is to have a generic NIC driver for bare metal. This virtio driver that you saw in the kernel framework will work in the guest, of course, as a virtio driver that can also do DPDK directly to the hardware, but it can also work on a bare-metal server as a generic NIC driver. So think of the NVMe driver, and something similar for networking; that's kind of the bigger vision, and that's the part I love.

Any other questions? Question: does this require vendors to release new types or models of their NICs? So it depends on the vendor. Just this week, on Monday, Mellanox upstreamed a vDPA driver to DPDK, and it is for the ConnectX-6 Dx, which is a standard NIC; it's just a firmware update that allows it to do vDPA. So it depends on the vendor: in some cases it's going to be just a firmware update, and if it's an ASIC implementation you will need a new NIC.

Question: is there provision in the existing spec to add vendor-specific extensions or vendor-specific features? You mean extending the specification? More like having a way to pass extra flags or something like that while staying within this model. The goal is to be standard. If you want to migrate your workload from, say, Mellanox to Intel, and they have vendor-specific flags, that just breaks the goal. The goal, again, is to decouple the VM or container from the physical NIC, so it can't rely on anything special; if you expose that special thing to the container or VM, it's no longer decoupled, because it now carries additional information, and we want to keep it decoupled. So if a vendor wants to introduce a new feature, does it have to go into the specification for the other vendors too? I'd say the answer is partly, because it goes case by case; I know there was a discussion about providing additional negotiation bits for vendors to expose something that's more vendor-specific. I'm asking because, for instance, in the OpenGL space, extensions slowly make it into the standard: basically, if you run on an Intel card and it tells you, okay, I have this and this and this extension, you can use that. But that would break things like live migration, for example. Well, it could break things, but it could also be something where you migrate and, you know, you have an extension and your software has to handle it without breaking. So if the spec is extended, the vendor would need to push the update upstream, right? The vendor needs to go upstream with it, and then everyone picks it up. The point is, if you want additional capabilities, you have to go through the community, put your proposal there, and if it makes sense, it goes in. That's the general case. And it also depends whether it's control plane or data plane; if it's control plane, maybe you could just handle it in the mediation layer. Any other questions? Okay, so thank you very much. Thank you.