Is it OK now? Oh, thank you. So in this session, we are talking about how to achieve low-latency NFV with OpenStack. I'm Yu Hongjie, from the Intel Open Source Technology Center. I've been working on the hypervisor, KVM, and also on Nova. Currently I'm working on the KVM for NFV project, which is a project on the OPNFV side, and we will talk about it later.

Let me give a little background on how we came to this topic. As I will talk about later, the KVM for NFV project is to provide a hypervisor, a real-time hypervisor, for the OPNFV usage scenarios, and one major purpose is to achieve very low-latency NFV. We have a whole bunch of test cases running in our lab. When we began our work last year, Nova and Neutron were not yet ready for low-latency NFV, so we had to use bash scripts to achieve our purpose. We have about 20 script files to do our testing, and they are not easy to maintain or change, especially as we begin to do more and more testing, like launching 10 or 20 virtual machines at the same time and still measuring the latency result. That becomes quite difficult. That's the reason we turned to OpenStack: to make all the test cases more automatic, and also to align with the OPNFV test CI/CD process. So that investigation is how we came to this topic.

So, the agenda. First, I will talk about NFV and network latency. Then I will dive deeper into why network latency is an issue on the NFV side. Then I will talk about how to achieve low-latency NFV. In the end, I will give some examples of how to set up the whole stack, including the hypervisor, the virtual switch, and OpenStack, to achieve low-latency NFV.

First, NFV and network latency. As everyone knows, network latency is very important. For different network services, the importance will be different, the impact will be different, and the latency requirements of different network functions will also be different. For example, for a VoIP service, if the network latency is too long, the call quality will be bad; normally it is said that the latency should be less than 150 milliseconds. For an interactive game, like an online game, if the latency is too long, the game has to wait a long time for the response from the game server, and that impacts the user experience; normally it should be less than 100 milliseconds. For something like a base station, depending on which layer of the protocol we are talking about, sometimes the latency should be less than one millisecond. And in some extreme scenarios, like algorithmic trading, which uses automated programs to trade on the stock exchange, the latency requirement may be tens or hundreds of microseconds; if there is some very long latency, the whole trade may fail and the profit can be impacted by a large amount of money. So you can see the latency is really important.

I also want to emphasize that when I talk about latency, it's about end-to-end latency. Take the VoIP service as an example: when someone calls from computer A to computer B, several network function nodes may be involved.
That may include the two endpoint computers, some edge routers, and some core routers. The end-to-end latency means all the latency that happens across these, say, five machines; in total it should be less than 150 milliseconds. So for each node, the budget may be only less than 30 milliseconds. It's important, right? And the per-node number is much lower than the 150. Another point is that latency is related to throughput, but normally not strictly related; most of the time they should be treated as two dimensions of network performance.

Now let's come to NFV and network latency. I think NFV is a very hot topic at this summit and everyone is talking about it. In my understanding, NFV is basically a change to the traditional way network functions are deployed. Traditionally, a network function is deployed on a hardware platform, like a hardware firewall, a hardware router, or a switch. In the NFV scenario, the network function is not running on dedicated hardware; instead, it runs in a virtual machine, and the virtual machine is hosted by a hypervisor. Because of the virtualization, it brings a whole bunch of benefits, for example flexibility and manageability. With virtual machines, it is very easy to create more virtual machines if the requirement for network traffic increases, and then we can destroy or just pause the virtual machines if there is no such big requirement anymore. Also, because there is no need to host so many hardware platforms like hardware firewalls and routers, the cost of network administration, air conditioning, and power will all be reduced. Most importantly, with virtual machines it is easy to create a new network function: there is no need for the design, testing, and verification of a hardware platform, which takes a very long time, maybe several years. With virtual machines it is much easier to do the testing and the deployment, so it decreases the time to market a lot.

But although NFV has so many benefits, it still has a bunch of issues, and one of the major issues is latency. Especially if we talk about a scenario where the latency requirement is 100 microseconds, latency is really something we have to consider seriously on the NFV side. So in this session, I will give a detailed analysis of where the network latency comes from.

This is a very simple example of an OpenStack environment. In this environment, you can see there is a VNF vCPU, which means the vCPU of a virtual network function; the function is in fact hosted in a virtual machine. On this virtual network function, some network function runs, like a router or a firewall. It receives the packet from the virtual NIC and then, after processing, it sends the packet back out through the virtual NIC as well. The time from the virtual NIC to the network function and back is mostly similar to what happens on a physical network function. But there is some extra time from the moment the packet is received by the hardware NIC on the physical platform until the packet is received by the virtual NIC.

So we can see, this is a very generic OpenStack deployment, right? There is one component, the OVS kernel module, which does most of the network virtualization work. There is KVM and vhost, which is the hypervisor for the virtual machine.
And also, you have to notice that the VNF vCPU is in fact running on the same CPU core together with some other vCPUs, which may be non-VNF vCPUs, or some other applications or services. In this situation, the latency comes from the whole flow of the packet, from the packet being received by the NIC until the packet is received by the virtual NIC. I will give a more detailed analysis from the packet flow point of view.

First, when the packet is received by the NIC, the NIC has to tell the OVS kernel module that a packet has arrived. It does that through an interrupt. But at that time, maybe the OVS kernel module is not running, or maybe the CPU core is running some other service or a higher-priority interrupt. So some latency happens while the hardware NIC notifies the OVS kernel module through the interrupt.

Even after the interrupt is received by the kernel, the packet is normally handled in Linux kernel space in softirq context. But the softirq may add another delay, because a softirq can be preempted by higher-priority interrupts or by other softirqs. So that is a second latency.

The third one is that the OVS kernel module uses the generic kernel network stack. As you may know, the kernel stack is not optimized for the NFV packet-processing scenario: its handling of buffers and packet queues is not so efficient from a packet-processing point of view. So the third latency is the kernel stack.

The fourth one is that, in the current OpenStack environment, multiple bridges are created to handle the packet forwarding, mostly for security purposes. For example, in a generic Neutron environment there is the integration bridge and also the physical bridge. The packet has to be copied between these different bridges, and that brings another layer of latency.

In the end, after all of this latency, the packet is received and handled by the OVS kernel module and then sent to KVM and vhost. But here comes another whole bunch of latency. First, for the packet to be received by vhost, the OVS kernel module has to notify vhost, and that goes through a virtual interrupt. In current KVM virtualization, the interrupt controller is emulated, so that brings more latency; normally the emulation of the interrupt controller may add around 20 microseconds of delay in some situations.

Then, after this virtualization overhead, the packet is handled by vhost, which tries to put the packet into the guest. But at that point another latency may happen, because in kernel space there are a lot of threads, interrupts, and ISRs. Most importantly, maybe the vCPU is not running at that time: it is being preempted by some kernel thread, like an RCU kernel thread, or by some ISR, like the timer ISR, or maybe some other ISR thread. So this is another latency. And that is if we are lucky and the vCPU is not preempted by any kernel thread or any ISR for long. Another chance for delay is that, as we said, the VNF vCPU is sharing the same CPU core with other non-VNF vCPUs, other generic applications, or, in some situations, maybe even the OpenStack Nova compute service running on the same CPU core.
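(As a side note, not from the talk: the first two sources above, interrupt delivery and softirq handling, are easy to observe on a host. A minimal sketch, where "eth1" and the IRQ number 42 are just placeholders for whatever NIC you actually have:

    grep eth1 /proc/interrupts     # which cores are servicing the NIC's hardware interrupts
    grep NET_RX /proc/softirqs     # per-core counts of the receive softirq the OVS kernel path relies on
    cat /proc/irq/42/smp_affinity  # current affinity mask of one of those interrupts

If the NIC interrupt counts and NET_RX counters are climbing on the same core that runs the VNF vCPU, that core is paying both of those latencies.)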
So at that time the vCPU may have been scheduled out already and another application is running, which causes some vCPU scheduling cost. And another one: in the end, the vCPU is running — it's not preempted, it's not scheduled out — and it receives the packet, right? It begins to handle the packet just like in the physical network function scenario. But there is still another issue, the virtualization overhead. For example, when the vCPU is running the network function it may cause some transitions into the KVM hypervisor, for example to emulate some device model. Such a transition has a cost; even the fastest KVM transition takes around 10 microseconds on some hardware. So that is still a latency cost.

And in the end, there is one more: hardware resource contention. As you know, in current CPU architectures some CPU cache is shared by multiple CPU cores. So although the vCPU is running on its own CPU core, it still has to contend with applications running on other CPU cores for the L3 cache, for the memory bandwidth, for the TLB, everything. So the VNF vCPU has to contend for hardware resources with other applications.

So you can see, we analyzed the packet flow from the hardware NIC until the packet is received by the virtual NIC, and in this process there are so many places where latency may happen, and each one impacts the final result.

On this slide we analyzed from the packet-processing point of view, and on the next one I try to categorize the latency types. Basically, I think there are two types of latency. One type is resource contention. The resources may include the physical CPU, the physical cache like the L3 cache, the TLB, memory, or even IO, for example if the NIC is shared by multiple CPUs. The contention comes from other vCPUs, other applications, other services, or even kernel services like the RCU callback threads or the ISRs in the kernel. So a whole bunch of things are trying to contend for hardware resources with the VNF vCPU, and that impacts latency.

The other type of latency comes from virtualization overhead. Basically, the virtualization is done by different pieces of software: the CPU, memory, and IO are virtualized by the KVM hypervisor, and the network by the virtual switch — in some situations the Linux bridge, or maybe some other open source or closed source virtual switch. The overhead may come from CPU and IO virtualization: for example, as I said, the interrupt controller is emulated, so every interrupt controller access causes overhead. And on the network side, every packet copy between the multiple bridges causes some latency. So those are the two sources of latency: resource contention and virtualization overhead.

So how can we achieve low-latency NFV? Let's go back to the reasons the latency arises. For resource contention, we can do several things. For example, we can reduce the contention cost: if, when the packet arrives for the VNF vCPU, another application is running, and we can reduce the time it takes to switch from that application to the VNF vCPU, that will surely reduce the latency. Another one is priority: we can raise the vCPU thread priority.
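(To make the priority point concrete — this is my own illustration, not something from the talk: on a KVM host every guest vCPU is just a host thread, so it can be pinned and given a real-time scheduling class by hand. A minimal sketch, where the QEMU process ID 4321 and the vCPU thread ID 4330 are made-up examples:

    ps -T -p 4321 -o tid,comm   # list the QEMU threads; recent QEMU names vCPU threads "CPU 0/KVM" and so on
    taskset -pc 4 4330          # pin the vCPU thread to an isolated core
    chrt -f -p 95 4330          # give it SCHED_FIFO priority 95 so it preempts normal tasks

In a production setup this is what libvirt and Nova should do automatically, which is exactly the orchestration gap discussed later.)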
So even if other applications are waiting to run, the VNF vCPU gets a higher priority and can service the packet faster. And in some extreme situations we want to exclusively assign some resources to the vCPU. I think in previous sessions there were several talks about CPU pinning, SR-IOV, and all of these have the purpose of assigning resources exclusively to the vCPU.

For the virtualization overhead, on the CPU and IO side we can, for example, use the advanced virtualization features provided in newer hardware; the Intel platform keeps adding hardware features to enhance virtualization. We should also enhance the KVM hypervisor, or whichever hypervisor is used, to reduce the virtualization overhead. And for the network side there are several options. For example, we can use SR-IOV to reduce the virtualization overhead. Although it seems SR-IOV can avoid all of the network virtualization overhead, if we understand the whole process, even with SR-IOV there will still be 10 or 20 microseconds of delay because of interrupt virtualization. Another choice is to use some software mechanism like OVS-DPDK to enhance the network virtualization; we'll talk about it later.

So, as you can see, the latency comes from the virtual switch, from the hypervisor, from the contention for hardware resources, right? To really achieve low-latency NFV we have to provide a systematic solution for the whole stack, from the hardware, to the virtual switch, to the hypervisor, to the application — everything has to be done, including, of course, OpenStack. I will talk about this one by one.

On the right side is a picture of what will be done in the end to achieve an extremely low-latency environment, and this is in fact what we are doing for our KVM for NFV testing. You can see that in this scenario every virtual network function vCPU runs exclusively on one CPU core. All the other applications and other vCPU threads are moved to other CPU cores, and even the other kernel threads and kernel ISRs are moved to other CPU cores. Of course, on a generic Linux kernel it's not so easy to achieve that; we use a specially configured kernel to support it and move all those threads to other CPU cores. Also, the OVS kernel module is not used anymore; instead, a user-space OVS-DPDK is used, polling the NIC, to achieve lower latency. And on the hardware side there is also a whole bunch of enhancements.

Yes, so as we said, we have to do a full-stack systematic solution to achieve low latency — basically extremely low-latency NFV. In this scenario, for example, we can just assign a NIC directly to the virtual network function, so we reduce the cost a lot, because as we said the CPU has no contention with any other kernel thread or any other application, and the NIC device is dedicated as well. So the latency will be very low, maybe just 20 or 30 microseconds. In the other solution, as we said, with OVS-DPDK the packet goes from the hardware NIC card to the OVS-DPDK PMD thread and then to the vCPU directly, so we avoid the interrupt latency, the softirq latency, the kernel stack, and also the packet forwarding between the multiple bridges.

So first, let's have a look at the advanced hardware platform. VT-d and SR-IOV, as you can see, are not so new anymore.
They have been here for a long time. With VT-d and SR-IOV, we can reserve the IO device for the virtual machine, to avoid contention on the IO side. Posted interrupts are a newer feature on the hardware platform; the purpose is to inject interrupts directly into the guest, so we reduce the interrupt virtualization overhead. This is very important for SR-IOV: it will, in the end, remove almost all of the remaining virtualization overhead for SR-IOV. And CAT is a new hardware feature in the Broadwell platform that was released recently. The idea of CAT is to reserve part of the cache for the VNF vCPU. So even if some other application running on another CPU core tries to contend for the CPU cache with the VNF vCPU, because of CAT, that part of the cache is reserved for the VNF vCPU and cannot be used by the other applications anymore. So it achieves the cache reservation.

On top of the hardware platform, the real-time hypervisor is also important. Here we are talking generically about the Linux side, Linux and KVM. Of course, I know there is some effort on the Xen side for a real-time hypervisor; I have no idea about VMware or Hyper-V. To achieve low-latency NFV, a whole bunch of enhancements have to be done in the real-time hypervisor. For example, the real-time scheduler: with the real-time scheduler, we can give a higher priority to the vCPU thread. Also full preemption support, which means that even when another application is running and a packet arrives for the vCPU thread, the vCPU thread will immediately preempt that running application and begin handling the packet, so it reduces the contention cost. Also, with the real-time hypervisor, all the kernel activity, like ISRs and softirqs, is done in thread context, and those threads can be moved to other CPU cores so they will not contend with the VNF vCPU. Basically we achieve real CPU reservation for the vCPU thread. And of course CPU isolation, to isolate the CPU for the guest, so the scheduler will not try to schedule any other application or any other vCPU onto the same CPU core. There is a whole bunch of features, and in fact they achieve different levels of latency.

Let me also give a short introduction to the KVM for NFV project that we are working on. It is a project based on the upstream real-time Linux kernel, and we have tuned the kernel configuration very carefully to achieve low latency. There are also some patches in this tree, which will be pushed upstream as well, to reduce the latency even further; for example, there are several enhancements to the KVM hypervisor and to VFIO to reduce the latency. KVM for NFV has been included in the B release of OPNFV, and in the C release we hope that the whole testing, with the OpenStack support in line, will be integrated into the OPNFV test framework and CI/CD. The project in fact has a whole bunch of active contributors from the OPNFV community: Intel is one major player, Nokia contributed a lot, and Wind River also contributed to it.

So that finishes the part about the hardware platform and the hypervisor, which are the most important parts for the CPU and IO side. For the network side, we can use SR-IOV to achieve very low-latency NFV, but that has been discussed in previous sessions.
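(Going back to the CAT part for a moment with my own illustration, which is not from the talk: on processors that support it, the last-level cache can be partitioned from user space with the pqos tool from the intel-cmt-cat package. A rough sketch, assuming cores 3 to 6 run the VNF vCPUs; the bitmask is a placeholder, since its width depends on how many cache ways the CPU exposes:

    pqos -s                      # show current allocation capabilities and classes of service
    pqos -e "llc:1=0x00f0"       # give class of service 1 a dedicated slice of the last-level cache
    pqos -a "llc:1=3,4,5,6"      # associate the VNF cores with that class

After this, workloads on other cores cannot evict the VNF's reserved cache ways, which is the cache reservation described above.)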
SR-IOV has some issues, like live migration and security groups. Another choice is to use a software accelerator, like OVS-DPDK, for low-latency NFV. For those who have no idea what DPDK is, let me give a very short introduction. DPDK is a library that was first introduced by Intel and is now supported by the open source community. The idea is to improve packet processing on Intel architecture; initially it was only for Intel, and now of course it also supports ARM and other architectures. DPDK bypasses the whole Linux network stack — and we just said the kernel stack brings some latency to the OVS kernel module path. Also, with DPDK, user space can access the hardware registers directly. DPDK has a poll mode thread running in user space that continuously polls whether any packet has arrived; with this poll mode, we avoid the interrupt latency. It also has its own implementation of queue and buffer management, basically to avoid the kernel network stack and to improve the latency. So DPDK is a generic library, and it has been used to enhance Open vSwitch.

Here is an example of how OVS-DPDK works. You can see there is a user-space PMD thread, which continuously polls, instead of waiting for interrupts, whether any packet has arrived on the hardware network card. If there is, it passes the packet directly from user space to the guest, so there is no context switch through the kernel KVM vhost to the virtual NIC anymore. With the PMD thread, with its own buffer management, and with the user-space vhost, it really achieves a very low latency for NFV.

The hardware platform, the hypervisor, and OVS-DPDK are the low-level functionality to support low-latency NFV. Another layer is the OpenStack layer, for orchestration. For Nova, on the compute side, there are maybe two efforts. The first one is that Nova has to track the compute node resources very carefully. For example, Nova should know which physical CPUs have been reserved exclusively for virtual machines; then, when it tries to assign an isolated or reserved CPU to a virtual machine, it has to make sure the CPU will not be assigned to multiple virtual machines or multiple virtual CPUs. Also, when Nova tries to create the virtual machine, it has to make sure the VNF virtual machine gets the reserved resource allocation, and that the vCPU thread gets a very high scheduling priority, so it will be scheduled ahead of normal threads. And of course it has to make sure of NUMA support and huge page support. On the Neutron side, it has to support the VF NIC port corresponding to the SR-IOV NIC, and it has to support the OVS-DPDK and vhost-user scenario.

So in this session I have talked about how we should do a full-stack implementation to achieve very low-latency NFV. It includes the hardware platform, the real-time hypervisor, the DPDK-enhanced OVS, and it also requires support from OpenStack, from Nova and Neutron, to achieve the orchestration. Now we can have a look at the setup and the configuration. In fact, the configuration of the whole stack is not so easy; as I said, we have about 26 scripts to do the overall testing.
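(Before the setup details, here is a rough idea of what wiring up OVS-DPDK looks like on the host. This is a hedged sketch based on the upstream OVS documentation rather than on this talk; the bridge and port names are made up and the option names vary between OVS releases:

    ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-init=true
    ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-socket-mem="1024,0"
    ovs-vsctl --no-wait set Open_vSwitch . other_config:pmd-cpu-mask=0x6          # which cores the poll mode threads may use

    ovs-vsctl add-br br-dpdk -- set bridge br-dpdk datapath_type=netdev           # userspace datapath
    ovs-vsctl add-port br-dpdk dpdk0 -- set Interface dpdk0 type=dpdk              # DPDK-bound physical port
    ovs-vsctl add-port br-dpdk vhostuser0 -- set Interface vhostuser0 type=dpdkvhostuser   # socket the guest's virtio NIC attaches to

The pmd-cpu-mask above is the PMD thread affinity that, as described next, has to be arranged carefully.)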
For example, for the real-time host setup, we have to set static kernel boot parameters to reserve the CPUs for the VNF and to move a whole bunch of kernel threads, like the RCU callback threads or the timer threads, off the reserved CPUs. Then at run time, when the kernel is up, we have to change the interrupt affinity, change the watchdog, change the RCU callback thread affinity — a whole bunch of things.

For OVS-DPDK, it's another whole story. For example, most distributions have no OVS-DPDK at all, so we have to upgrade to OVS-DPDK instead of the normal OVS so that we can use the DPDK datapath. We also have to arrange the PMD thread affinity very carefully: for example, we don't want two poll mode threads on the same CPU core contending with each other, because that would surely bring a whole bunch of latency-related problems. Unfortunately, this whole configuration cannot be done by OpenStack yet. We are still trying to figure out whether Ironic can help us automatically set up the whole configuration, but that is still down the road.

The second part is configuring OpenStack, so we have to configure the compute node. For example, as we said, the kernel parameter specifies which CPUs are reserved by the kernel for the VNF only, but we have to pass that information from the kernel to OpenStack, to Nova, so Nova knows that, say, CPU 3 to CPU 10 are reserved for VNF vCPUs only. Then, when Nova tries to launch a VNF vCPU, it knows which CPUs can be used. Nova also has to track this: for example, if we have 10 CPUs reserved for the VNF scenario and three physical CPUs have already been assigned to virtual CPUs, then only seven are left for future VNF vCPUs, and the whole thing has to be tracked very carefully by the Nova compute node. Nova also has to make sure there is no CPU over-subscription: we don't want two VNF vCPUs running on the same physical CPU, so normally we have to set the CPU allocation ratio to one, so no two virtual CPUs end up on the same CPU core. We also don't want over-subscription of the RAM. And for PCI, we have to tell OpenStack which PCI devices have been reserved by the kernel, especially for the VNF, so it has to track the PCI device assignment carefully. That is what has to be done on the compute node.

Then there is the VM flavor. When Nova tries to create a VNF vCPU, as we said, it has to specify a whole bunch of properties for the VM: for example, which CPUs this virtual machine's vCPUs will run on, and what priority the vCPU threads should get — a high priority, like real-time priority, or a normal one. And because, as we said, QEMU is used for the vCPU threads, the QEMU emulator threads also have to be arranged carefully relative to the vCPUs. So there is a whole bunch of flavor properties that have to be specified for VM creation. And we also have to configure the Neutron side, for OVS-DPDK and for SR-IOV.
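(To make the compute node and flavor configuration just described more tangible, here is a hedged sketch of what the pieces could look like on an OpenStack release of roughly that era. The CPU range 3-10 follows the speaker's example; every other value is a placeholder, and option names and sections differ between releases:

    # kernel boot parameters: isolate CPUs 3-10 and keep ticks, RCU callbacks and default IRQ routing off them
    isolcpus=3-10 nohz_full=3-10 rcu_nocbs=3-10 irqaffinity=0-2 default_hugepagesz=1G hugepagesz=1G hugepages=16

    # nova.conf on the compute node
    [DEFAULT]
    vcpu_pin_set = 3-10
    cpu_allocation_ratio = 1.0
    ram_allocation_ratio = 1.0
    pci_passthrough_whitelist = {"devname": "eth1", "physical_network": "physnet1"}

    # flavor extra specs for the VNF guest (hw:cpu_realtime needs a recent Nova)
    openstack flavor create --vcpus 2 --ram 4096 --disk 10 nfv.small
    openstack flavor set nfv.small \
        --property hw:cpu_policy=dedicated \
        --property hw:mem_page_size=large \
        --property hw:cpu_realtime=yes \
        --property hw:cpu_realtime_mask=^0

    # Neutron ML2 side for SR-IOV (ml2_conf.ini and the SR-IOV agent config)
    mechanism_drivers = openvswitch,sriovnicswitch
    physical_device_mappings = physnet1:eth1

These values are illustrative only; the project wiki referenced at the end walks through the real step-by-step setup.)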
In fact, from our understanding and investigation, I think the OVS-DPDK support in Neutron is still on the way. It should technically be ready, but several enhancements are still pending. For example, currently the security group is not managed for OVS-DPDK yet, so although we can create the virtual machine with OVS-DPDK, there will be no security group support for that virtual machine. And also, as I said, even with OVS-DPDK there are multiple bridges — the integration bridge, the physical bridge, the tunnel bridge — created by the Neutron agent, so the packet still has to be forwarded between these multiple bridges. I'm not sure if anyone is working on using only one bridge in the Neutron agent to achieve the really extreme low-latency network scenario; that is still something we haven't looked at. For SR-IOV, I think Neutron is quite ready; it has been working for a long time.

So, this is a summary of the whole talk. We analyzed the reasons for network latency in the NFV scenario, and we analyzed the whole workflow from the packet being received by the NIC card until the packet is delivered to the virtual NIC card — the latency from the OVS kernel module, to the hypervisor, to the kernel threads, to the other virtual machines. And we gave our proposal of a full-stack systematic solution to reduce the network latency.

Here come some calls to action. The first one is documentation. We really hope the OpenStack projects can have clearer documentation on how to set up the whole thing. In fact, when we did the investigation, I had to check the Nova source code to know what we should do and what the different Nova configuration options or VM flavor properties mean. Luckily I know the Nova source code, but not everyone does, so documentation is really important, and I think it's something we have to do. Also, as we said, currently doing the whole configuration is not easy: the kernel parameters, the kernel run-time configuration, the OVS-DPDK PMD thread affinity — a whole bunch of things have to be done manually before we can use OpenStack to launch anything. This is also something we have to work on more, especially checking whether Ironic could support it. Another one is automated latency testing. Currently neither OpenStack nor OPNFV has a test plan or test cases for latency testing, and that may hide issues we don't know about. That's something I think we should do in the future.

And here are some references for the configuration. The first one is the wiki for the KVM for NFV project; it gives very detailed step-by-step instructions, from how to configure the BIOS, to the kernel parameters, to the kernel run-time environment. The second one is some source code — in fact, this is the major source I had to check to learn how to set up the OVS-DPDK environment; it's on GitHub, and I hope in the future there will be more and more OVS-DPDK documentation. The third one is about SR-IOV, and the last one is about how to tune OVS-DPDK to achieve low latency; it's from the official OVS documentation. Okay, that's all.

Good presentation, thank you. I would like to know, since you mentioned Ironic — Ironic only does bare metal, bare metal provisioning, okay. So are you looking at only bare metal provisioning in KVM for NFV, or are you also looking at the virtualized implementation? Well, actually with bare metal, of course, we can achieve very low latency, right?
Compared to the virtualization, the virtual machine environment. But I think the point is that if we use bare metal, we lose a whole bunch of the flexibility and manageability. That's the reason we focus on the KVM virtualization scenario, and also because our work is the testing for the KVM for NFV project. So we are focusing more on the virtualization side, the KVM side.

Another relevant question for you. Obviously, we are looking at low latency from different angles. One of them is maybe live migration. Another could be Cloudlet, which is a VM at the edge of the cloud. A VM? A VM at the edge cloud. Okay, okay. So what I am thinking is this: we are planning to have a project called Cloudlet under OpenStack for the tier-two cloud — that is, you have the main cloud, you have the edge cloud — and I'm looking at whether we can tie these two together, Cloudlet for NFV, let's say, just as an example project name. Because you are dealing mainly with the CPU-level optimization, I'm looking at network latency optimization, or at least coordination between the two. You may not be able to do much — the speed of light is the limit, we know, that's what we can achieve — but even then, can we do some coordination there? Maybe yes, but I would need to learn more about your project. Yeah, I think I'll come to you on that and I'd like to have your support. As you can see, we also talked about the Neutron enhancements with OVS-DPDK, so there is also some network enhancement work for the low latency. But yes, of course, we can talk more here. Thank you.

Did you have any measurements of before and after for latency? In fact, what we compare is this: as we said, we have a whole bunch of scripts to do the whole environment setup manually, and we also tried to use OpenStack to do the setup. With the SR-IOV scenario, we basically get a similar environment with the scripts, meaning the manual setup, and with the OpenStack setup. For OVS-DPDK, frankly speaking, we didn't finish all the setup yet because, as we saw, there are multiple bridges the packet has to cross — the integration bridge, the physical bridge, the tunnel bridge — so that kind of OVS-DPDK setup cannot meet our test requirement yet; we will try it in the future. But for the SR-IOV scenario, yes: basically, the OpenStack environment achieves a similar result to our script environment. For the packet forwarding, we did a very simple test, running an L2 forwarding application in the virtual machine, and the latency was less than 15 microseconds. 15 microseconds? Yeah. As opposed to — what would it have been without the...? If we don't do any optimization, no kernel thread affinity optimization, no reservation, the number is meaningless: more than one millisecond.

Maybe you already alluded to this, or maybe I just didn't understand, as I'm not familiar with the real-time Linux project. But is the goal for these enhancements to be rolled into the existing KVM binary, or would this be a parallel KVM binary specific to these NFV-type functions? Our purpose is to enhance KVM for the generic scenario. Maybe there will be some special configuration, because some KVM parameters are needed, but the purpose is that it will be in the generic one, so our changes will be pushed upstream. Okay, good. Okay, thank you.