Hello everyone, I'm Qin Haidong from Inspur. I'm a cloud computing network architect, mainly engaged in the design and development of the OpenStack control plane and high-performance network solutions. Together with Xu Chenjie from Intel, I will share DPDK performance optimization for edge scenarios from the aspects of virtualization and containers. First, I will cover the virtualization direction: the application scenarios of edge computing, the key technical features of edge computing, and our OVS-DPDK optimizations and the resulting improvements.

Edge computing has been a hot topic in the past two years. Various white papers and research reports have defined edge computing, and the core point is the same: it sits close to where the data is generated. Now let's compare cloud computing and edge computing across application scenarios. Cloud computing is suitable for non-real-time, long-period data and business decision-making scenarios, whereas edge computing plays an irreplaceable role in real-time, short-period data and local decision-making scenarios. The scenarios listed here all have edge computing characteristics. We can predict that in the future, edge computing and cloud computing will be the two important pillars supporting the digital transformation of industry. The synergy between the two in terms of network, business, application, and intelligence will help support a wide range of scenarios and drive value creation.

Here we summarize three key technical features unique to edge computing. First, real-time behavior. Real-time behavior is a core performance problem faced by many new applications such as augmented reality and driverless vehicles, and it is also the core problem edge computing must solve. Second, high bandwidth and low latency. A wide variety of massive terminal devices generate a large amount of data. Edge clouds are built at the edge, and devices connect nearby, which avoids the bandwidth bottleneck toward the central cloud, while latency is effectively reduced by the physically shorter path. These two key technical characteristics of the data plane place high requirements on the construction and technology selection of the edge cloud. Third, cloud-edge collaboration. From a global perspective, the central cloud and the many edge clouds form a classic distributed system. Edge cloud deployment, data synchronization, remote push, and management all need to be fully considered and addressed.

Next, let's talk about the current common solutions that meet the key technical characteristics of the data plane: real-time behavior, high bandwidth, and low latency. First, real-time behavior. At present, the real-time solution mainly relies on the Linux real-time patches, which optimize scheduling, context switching, process locks, and interrupt handling mechanisms. So what are the optimization options on the network side? We have concluded that the main ones are DPDK, SR-IOV, and smart NICs. One common feature of these solutions is that they bypass the kernel. From the current technology landscape, bypassing the kernel in the network I/O path at the hypervisor layer is the most basic way to improve network I/O performance for the virtual machine; the sketch below shows the typical first step of that handover.
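To make the kernel-bypass step concrete, here is a minimal Python sketch (not from the talk) of handing a NIC over to DPDK using the dpdk-devbind.py tool that ships in DPDK's usertools; the PCI address and the choice of the vfio-pci driver are illustrative assumptions.

```python
#!/usr/bin/env python3
"""Illustrative helper: move a NIC from its kernel driver to vfio-pci
so a DPDK application can take over packet I/O (kernel bypass)."""
import subprocess

# Assumption: DPDK's usertools are on PATH and this PCI address is the
# NIC we want to hand over to DPDK; adjust both for your host.
NIC_PCI_ADDR = "0000:03:00.0"

def bind_to_dpdk(pci_addr: str) -> None:
    # Load the vfio-pci module, then rebind the device with dpdk-devbind.py.
    subprocess.run(["modprobe", "vfio-pci"], check=True)
    subprocess.run(["dpdk-devbind.py", "--bind=vfio-pci", pci_addr], check=True)
    # Print the resulting driver bindings as a quick sanity check.
    subprocess.run(["dpdk-devbind.py", "--status-dev", "net"], check=True)

if __name__ == "__main__":
    bind_to_dpdk(NIC_PCI_ADDR)
```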
For NFV network elements that need carrier-grade network characteristics, the kernel's packet receive and transmit path is also a bottleneck, and the main way to improve network I/O inside the guest virtual machine today is the DPDK solution.

Next is the focus of our sharing today: the practice and optimization we have done with OpenStack and DPDK. There are many OpenStack DPDK solutions, such as OVS-DPDK, VPP, and OVN. OVS-DPDK is compatible with the Neutron Open vSwitch agent plugin; the solution is mature and the adoption cost is low, so we chose OVS-DPDK as the main DPDK solution. Since the Kolla community has not released an OVS-DPDK container based on CentOS, our engineers researched the problem and found no technical obstacle to building the OVS-DPDK container on CentOS, and we have successfully containerized OVS-DPDK on CentOS.

After continuous tuning and testing of OVS-DPDK in actual use, we have summarized the key optimization points for the host and for OVS-DPDK. Let's first look at the optimization points of the host. First, the BIOS must be adjusted; for example, there is a performance mode in the BIOS of Inspur servers which supports one-key adjustment, optimizing the power and CPU frequency behavior and setting the machine to the best mode. Second, apply the real-time patch to the computing nodes. Third, set the kernel parameters, mainly based on the number of NUMA nodes, the number of CPU cores, and the amount of memory on the computing node; among them are the CPU isolation settings for the OVS-DPDK cores, huge-page memory, kernel threads, thread pinning, and other optimization items. Fourth, Nova configuration optimization: here it is mainly enabling vhost-user multi-queue for the guest NIC, tuning the virtio ring buffer size, and setting the CPU pinning list. Fifth, flavor optimization: this is mainly parameter tuning for creating virtual machines, such as CPU pinning, NUMA topology, huge pages, and NIC multi-queue.

Next, let's look at what we need to optimize and adjust in OVS-DPDK itself. After continuous observation, testing, and tuning, we optimize OVS-DPDK in three aspects. First, PMD core binding and huge-page memory; these are common, conventional optimization methods that I believe everyone is familiar with, so I will not elaborate here. Second, DPDK physical NIC parameter optimization: you need to enable physical NIC multi-queue and tune the receive/transmit queue and offload parameters. Third, multi-queue core binding optimization: you need to optimize how the queues of the virtual machines' vhost-user ports and of the physical NIC are distributed across the PMD CPUs. For these optimization points we have built an automatic optimization strategy; of course, we also support configuration adjustments that can be adapted to user needs. The above are the core tuning points, and a minimal sketch of the corresponding settings follows.
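As a small sketch of the tuning knobs just listed, assuming an OVS-DPDK installation with a physical port named dpdk0 and cores 2 and 3 reserved for PMD threads (both placeholders, not our production values), the settings could be applied like this:

```python
#!/usr/bin/env python3
"""Sketch of core OVS-DPDK tuning: PMD core pinning, huge-page memory,
and physical NIC multi-queue. Core list, sizes, and port name are
placeholder assumptions."""
import subprocess

def ovs_set(*args: str) -> None:
    # Thin wrapper over "ovs-vsctl set ...".
    subprocess.run(["ovs-vsctl", "set", *args], check=True)

# 1. Pin PMD threads: build a CPU mask from the cores isolated for OVS-DPDK.
pmd_cores = [2, 3]
pmd_mask = hex(sum(1 << c for c in pmd_cores))  # -> "0xc"
ovs_set("Open_vSwitch", ".", f"other_config:pmd-cpu-mask={pmd_mask}")

# 2. Huge-page memory (MB) pre-allocated per NUMA node for the DPDK datapath.
ovs_set("Open_vSwitch", ".", 'other_config:dpdk-socket-mem="1024,1024"')

# 3. Physical NIC multi-queue: spread the receive queues across the PMD cores.
ovs_set("Interface", "dpdk0", "options:n_rxq=2")
```

On the Nova side, the flavor extra specs hw:cpu_policy=dedicated and hw:mem_page_size=large express the CPU pinning and huge-page settings mentioned above when creating virtual machines.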
So how effective is OVS-DPDK? We have done performance tests in many scenarios, including common virtual machine services with various queue configurations and NFV network element virtual machine services, and we have conducted comprehensive tests for these scenarios. Next, we share two of the more classic test scenarios and their results.

Let's take a look at common virtual machine business in the first test scenario: the application in the virtual machine uses the kernel protocol stack to send and receive packets. The test scenario is like this: two computing nodes with five virtual machines on each node, and the virtual machines on the two nodes are paired up to run traffic between them. It can be seen from the results that the throughput of both small and large packets has improved significantly, increasing more than two times.

This is the other scenario, NFV layer-2 forwarding: DPDK is used to run layer-2 forwarding inside the virtual machine and the test traffic is injected by an external tester. This is a classic PVP (physical-virtual-physical) scenario. Let's take a look at the test results. Since every scenario has a specified forwarding delay in the operator's test specification, we set the required forwarding delay to be less than 30 microseconds. We can see that 512-byte packets come close to wire-speed forwarding and meet the delay requirement, and the other packet lengths can also meet the requirements in terms of packet loss and delay. My sharing is over, and the next part will be shared by Xu Chenjie from Intel.

This is Chenjie Xu, I'm from Intel. In this slide, I'm going to introduce the networking requirements of the network edge. The network edge is deployed in the telecom server room. The original telecom application is the VNF, which is used to process networking traffic. As cloud native becomes more and more popular, the CNF, the cloud-native network function, is emerging. The network edge needs to run VNFs and CNFs as well as normal applications like nginx. However, VNFs and CNFs are based on DPDK, which means the Linux networking stack is bypassed, while normal applications use the Linux networking stack.

The Userspace CNI plugin is an open-source CNI plugin driven by Intel. It uses OVS-DPDK and VPP to accelerate container networking and can be used by CNFs, which run in containers and are based on DPDK. The basic idea of the Userspace CNI plugin is to use the vhost-user protocol to accelerate container networking. There is a control path and a data path in the vhost-user protocol: the control path goes through a Unix socket and the data path goes through shared memory.

This is the data path of the Userspace CNI plugin. There are two kinds of traffic. One kind of traffic is between the physical NIC and a pod, and this traffic is marked in yellow. The traffic from the physical NIC is polled and placed into mbufs by the PMD; by doing this, the traffic bypasses the Linux kernel and is delivered to userspace directly. OVS-DPDK matches the OpenFlow rules and executes the actions. Suppose the traffic should be sent to pod 0, which runs a CNF inside: OVS-DPDK puts the mbufs into the shared memory between pod 0 and OVS-DPDK, and pod 0 pulls the mbufs from the shared memory through the virtio-user PMD. The other kind of traffic is between different pods, and this traffic is marked in blue. Suppose pod 0 sends traffic to pod 1, which also runs a CNF inside. Pod 0 puts the mbufs into the shared memory between pod 0 and OVS-DPDK; OVS-DPDK copies the mbufs from the shared memory (if zero copy is not enabled), matches the OpenFlow rules, and executes the actions. In this case the traffic should be sent to pod 1, so OVS-DPDK copies the mbufs into the shared memory between pod 1 and OVS-DPDK, and pod 1 pulls the mbufs from the shared memory through the virtio-user PMD. The sketch below shows the kind of OVS port underlying these pod connections.
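As an illustration of the kind of port the Userspace CNI plugin sets up per pod, here is a hedged sketch of adding a vhost-user port to an OVS-DPDK bridge; the bridge name, port name, socket path, and the choice of client mode are assumptions for illustration, not necessarily the plugin's exact behavior.

```python
#!/usr/bin/env python3
"""Sketch: attach a vhost-user port to an OVS-DPDK bridge, the kind of
wiring the Userspace CNI plugin automates. All names are placeholders."""
import subprocess

def add_vhost_user_port(bridge: str, port: str, sock: str) -> None:
    # In dpdkvhostuserclient mode, OVS connects to a vhost-user socket
    # served by the virtio-user PMD inside the container; the shared
    # memory behind it carries the data path described above.
    subprocess.run(
        ["ovs-vsctl", "add-port", bridge, port, "--",
         "set", "Interface", port, "type=dpdkvhostuserclient",
         f"options:vhost-server-path={sock}"],
        check=True,
    )

add_vhost_user_port("br-dpdk", "vhost-pod0",
                    "/var/run/openvswitch/vhost-pod0.sock")
```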
However, the Userspace CNI plugin can't support normal applications, because it bypasses the Linux networking stack. To support both CNFs and normal applications, Multus should be used. Multus is an open-source CNI plugin also developed by Intel. Normally, pods in Kubernetes have only one interface. Multus can create multiple interfaces for a pod, but Multus doesn't create the interfaces itself: Multus reads the configurations of other CNI plugins and then calls those CNI plugins to create the interfaces for the pod. The diagram shows a pod with three interfaces: eth0, net0, and net1. eth0 connects to the Kubernetes cluster network, which is used to reach Kubernetes services, for example the Kubernetes API server and so on. net0 and net1 are additional network attachments that connect to other networks using other CNI plugins. For the network edge, the default cluster network plugin should be used for eth0, and this interface can be used by normal applications like nginx; the Userspace CNI plugin should be used for additional network attachments such as net0, and that interface can be used by the CNF. The CNF is used to process networking traffic.

There is a requirement in telecom to process the traffic generated by normal applications through a CNF, which means traffic generated by a normal application must be able to reach the CNF. But OVS doesn't support sending traffic from a kernel bridge to a userspace bridge, so the method in this diagram is used to meet the requirement. The Userspace CNI is still used by the pods which run CNFs; for the pods which run normal applications, veth with AF_XDP should be used.

Let's consider the traffic marked in the diagram: it's generated by pod 3, which runs a normal application, and its destination is pod 0, which runs a CNF. The traffic goes through the Linux networking stack in pod 3, because pod 3 uses a veth interface. The traffic is sent to veth4, the peer of veth3 in the veth pair. OVS implements an XDP program and an eBPF map which interacts with the XDP program to forward packets to an AF_XDP socket, so packets can be sent to userspace. OVS also implements a netdev type called afxdp, and this afxdp netdev can be used to receive and transmit packets through the AF_XDP socket. After receiving packets through the afxdp netdev, OVS-DPDK matches the OpenFlow rules and executes the actions; in this case the traffic is sent to pod 0. OVS-DPDK copies the mbufs into the shared memory between pod 0 and OVS-DPDK, and then pod 0 pulls the mbufs from the shared memory through the virtio-user PMD.
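To illustrate the afxdp netdev just described, here is a small sketch, assuming the bridge br-dpdk and the host-side veth4 from the example above; both names are placeholders.

```python
#!/usr/bin/env python3
"""Sketch: plug a veth peer into OVS through the afxdp netdev type,
so kernel-stack traffic from a normal application's pod reaches the
userspace datapath. Bridge and interface names are placeholders."""
import subprocess

def add_afxdp_port(bridge: str, iface: str) -> None:
    # With type=afxdp, OVS attaches its XDP program to the interface and
    # receives/transmits packets through an AF_XDP socket in userspace.
    subprocess.run(
        ["ovs-vsctl", "add-port", bridge, iface, "--",
         "set", "Interface", iface, "type=afxdp"],
        check=True,
    )

# veth4 is the host-side peer of the pod's veth interface in the example.
add_afxdp_port("br-dpdk", "veth4")
```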