Hello, everyone. My colleagues and I will share a topic on the enhancement and optimization of computing, storage, and network for bare metal with hardware offloading. At the moment, our China Mobile public cloud is in a large-scale expansion to meet market demand. Based on OpenStack, we have done a lot of practice and extension, but it was not enough for our customers. This topic is about a bare-metal product on our public cloud that is crucial for our customers and enriches what the cloud can offer. In this topic, we will deliver the following three parts. First, I will give an overall introduction to the basic technical architecture and features of our bare-metal product, the reason why we want to do it, and what the goal is. Then, my two colleagues will give a deeper explanation and share our practice on the hardware offloading solution.

OK, first of all, I will introduce our bare-metal product, which we call elastic bare metal, because it can provide flexible, on-demand features like a virtual machine, so that our product can adapt to customer business quickly. Like many OpenStack-based vendors, our bare-metal service is built around compute, storage, and network. On top of the open-source OpenStack architecture, we have made a lot of extensions to provide more competitive features beyond the general service, such as dynamic disk mounting, service monitoring, and accelerating the deployment process. This involved developing some custom components, combined with hardware like the BMC for the out-of-band management interface and software for the in-band service implementation. With that, we reached the first stage: a standard open-source-based bare-metal product went online.

As mentioned above, our first-stage architecture is built on an open-source community solution. The bare-metal compute, storage, and network services all depend on the hardware; the software is only for automatic management. So there are a few points we need to share. First of all, the network type used for the bare-metal tenant network is VLAN. With this, the SDN controller can control the switch to configure the network directly. It does a good job, but the functionality provided by the hardware is still too weak. For storage, we need to provide a cloud disk service for bare metal, but there is no virtualization solution available for bare metal, so there is no disk service that matches the features of the cloud disk. For example, we mounted the network disk to the server with an iSCSI client. It worked, but exposing the storage network to the user is not so safe. Finally, we still need a management plane for in-band control. We built a set of in-band software to handle automatic management inside the guest OS. However, because it uses so many guest OS interfaces, it will not work when the user disables them.

Overall, this solution left us with several big problems and puzzles. First, the features and interfaces of bare-metal instances do not match those of virtual machines. Also, many cloud resources are not compatible with it, like the image drivers and the storage connection protocol. As a result, bare metal can only be used as a system deployment tool, not as a complete solution in OpenStack, due to the lack of virtual-machine-like capabilities, so some features, such as dynamic volume mounting, are not available. Moreover, this kind of technology relies too much on resources inside the guest, like the guest operating system, and we don't think that is a good idea. Last, the network.
The difference in experience between the physical network and the virtual network is obvious. For example, ports can be created freely on virtual networks. However, with the standard network cards of a bare-metal server, one network card can only represent one port, which is not convenient for flexible business support. For example, some business requires four network cards or it will not work, while an Oracle database only needs two; we don't know in advance what customers will buy, so we cannot prepare for it. That is the background.

The solution we proposed, which we call elastic bare metal, was developed based on the above requirements and disadvantages. The whole idea is that we need a compute platform with hardware-level virtualization, similar to the software one. So we have two pieces of hardware. One we call the rack server; it is used for computing, like a KVM host, but what it provides is a physical machine. The other one is a SmartNIC, the hypercard, which combines with software to provide I/O and control services. Combined with elastic bare metal, OpenStack only needs to implement several drivers in Ironic to provide a bare-metal instance just like a Nova virtual machine.

In detail, the main point is the hypercard. The NIC hardware is composed of an SoC and an FPGA. The FPGA is used for offloading, and the SoC is for control, so that we can manage the platform outside the guest OS. The offloading modules for bare metal in the FPGA are virtio and OVS. Virtio implements the host-facing device functions; in our solution the net and block types are included. OVS is used for network acceleration, making the bare-metal network service closer to virtualization.

Next, I will show the virtio offloading in our elastic bare metal. It has hot-plug capability, which makes it more convenient for guests to add ports and devices. Since the virtio devices follow the standard spec, there is no need to make any changes to the software. In this way, virtual machine images can share drivers with our bare metal, which was a big problem in our traditional bare-metal solution. The virtio net and block devices are controlled in the SoC, and all of the traffic is converged onto the network. Finally, the OVS acceleration module transmits it to the backend, which is the storage or network service, through the physical port. OK, this is the introduction of the host side of our elastic bare metal.
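Because the offloaded devices follow the standard virtio spec, a guest on the bare-metal server simply sees ordinary virtio PCI functions, the same as a virtual machine would. As a purely illustrative check (this is not part of the product; the only assumption is the standard virtio PCI vendor ID 0x1af4), a small program like the following would list them from sysfs:

```c
#include <dirent.h>
#include <stdio.h>
#include <string.h>

/* Illustrative only: list PCI functions whose vendor ID is 0x1af4, the
 * virtio vendor ID. On an elastic bare-metal host this would show the
 * virtio-net and virtio-blk devices presented by the SmartNIC, exactly
 * as a virtual machine would see them. */
int main(void)
{
    const char *base = "/sys/bus/pci/devices";
    DIR *dir = opendir(base);
    if (!dir) {
        perror(base);
        return 1;
    }

    struct dirent *de;
    while ((de = readdir(dir)) != NULL) {
        if (de->d_name[0] == '.')
            continue;

        char path[512];
        snprintf(path, sizeof(path), "%s/%s/vendor", base, de->d_name);

        FILE *f = fopen(path, "r");
        if (!f)
            continue;

        char vendor[16] = "";
        if (fgets(vendor, sizeof(vendor), f) && strncmp(vendor, "0x1af4", 6) == 0)
            printf("virtio device: %s\n", de->d_name);
        fclose(f);
    }
    closedir(dir);
    return 0;
}
```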
Next, my colleague Rune will introduce the OVS offload of this product.

Hello, everyone. The following topic is about some enhancements of network functions and performance for bare metal with the OVS full offload solution. The left picture shows the traditional bare-metal scenario. The realization of the network functions needs to rely on the hardware switch, for example for VXLAN encap and decap. An ordinary hardware switch does not support stateful ACLs and cannot dynamically add or remove ports, so it lacks flexibility and scalability. The elastic bare-metal scenario on the right solves these problems well by replacing the hardware switch with OpenFlow-based OVS and sinking the OVS into the SmartNIC, which is the aforementioned hypercard. We can achieve flexible network functions and more powerful performance. The OVS receives the OpenFlow rules issued by the northbound SDN controller to implement various complex network functions. The picture shows our hypercard, including dual 25G ports and the SoC architecture.

The networking functions include basic Ethernet packet processing and matching; basic layer 2 and layer 3 forwarding; packet modification such as tunnel push and pop and VLAN push and pop; stateful security groups and firewall based on the hardware connection tracking table; QoS based on the hardware meter table; and a forwarding acceleration module. The performance indicators include a forwarding rate greater than 20 million PPS, a flow table capacity greater than 1 million entries, and so on.

This picture shows the four acceleration modes. In order to further accelerate the network forwarding capability, we use the OVS full offload solution to offload the whole packet processing into the hypercard. Out of consideration for the direction of future technology development, we chose the DPDK-based rte_flow offload mechanism, converting OVS datapath flows into rte_flow rules with the DPDK rte_flow API. The TC flower API with the OVS kernel datapath seems to be more feature-rich, for example with support for connection tracking. However, the userspace datapath is in general faster than the kernel datapath due to more packet processing optimizations. And so far, the community's support for DPDK-based OVS full offload is not complete enough and only supports partial offload, so we implemented DPDK-based OVS full offloading according to our needs. The offload features include tunnel encap and decap offload, meter offload, LACP bond offload, connection tracking offload, layer 2, layer 3, and layer 4 NAT offload, and so on.

The following is the hypercard packet forwarding process. The DPDK-based OVS runs on the SoC. The software processes the first packet of every new flow: the first packet is forwarded to the slow path on the SoC, and after the lookup is processed, the corresponding rules are sent to the hardware. The subsequent packets are forwarded using the hardware pipeline; they are forwarded directly in hardware. The combination of software and hardware greatly improves the network forwarding performance, and we will continue to enrich the OVS full datapath offload features.
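To give a concrete flavor of the "rules sent to the hardware" step, here is a minimal sketch using the DPDK rte_flow API. It is not our production code; the match fields, queue index, and port number are invented for illustration, and it assumes the DPDK port has already been initialized.

```c
#include <stdio.h>
#include <rte_flow.h>
#include <rte_byteorder.h>

/* Illustrative only: install a rule on an already-initialized DPDK port
 * that steers IPv4 packets for one destination address to Rx queue 1.
 * In the full-offload model, a rule like this is created for each new
 * flow after its first packet has been handled on the slow path. */
static struct rte_flow *
install_example_rule(uint16_t port_id)
{
    struct rte_flow_attr attr = { .ingress = 1 };

    /* Match: any Ethernet frame carrying IPv4 with dst 192.0.2.10. */
    struct rte_flow_item_ipv4 ip_spec = {
        .hdr.dst_addr = rte_cpu_to_be_32(0xC000020A), /* 192.0.2.10 */
    };
    struct rte_flow_item_ipv4 ip_mask = {
        .hdr.dst_addr = rte_cpu_to_be_32(0xFFFFFFFF),
    };
    struct rte_flow_item pattern[] = {
        { .type = RTE_FLOW_ITEM_TYPE_ETH },
        { .type = RTE_FLOW_ITEM_TYPE_IPV4, .spec = &ip_spec, .mask = &ip_mask },
        { .type = RTE_FLOW_ITEM_TYPE_END },
    };

    /* Action: deliver matching packets to Rx queue 1. */
    struct rte_flow_action_queue queue = { .index = 1 };
    struct rte_flow_action actions[] = {
        { .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
        { .type = RTE_FLOW_ACTION_TYPE_END },
    };

    struct rte_flow_error err;
    if (rte_flow_validate(port_id, &attr, pattern, actions, &err) != 0) {
        fprintf(stderr, "rule not supported: %s\n",
                err.message ? err.message : "unknown");
        return NULL;
    }
    return rte_flow_create(port_id, &attr, pattern, actions, &err);
}
```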
And this is all for the OVS full offload solution. Thank you for your attention.

I'm Yuxiang, and I'm pleased to share this topic here. Yajun and Rune have talked about elastic bare metal and the OVS offload. I will introduce the QoS practice in hardware offloading. QoS and rate limiting are commonly used functions. OVS itself does not support them directly; it realizes QoS by configuring the Linux kernel TC or DPDK. That is, QoS in OVS is implemented through the datapath. OVS implements QoS on ingress and egress through the Interface table and the Port table respectively, and the egress side also needs to combine the QoS and Queue tables to implement traffic shaping. This is realized by calling TC or DPDK through the netdev set_policing and set_queue interfaces. Our hardware offloading solution finally chose DPDK to do this.

Next, I will show the logic of the meter feature implemented in the FPGA. We designed two token buckets for it, called Bank A and Bank B. This way, we can support both PPS and BPS limits. Let's look at three situations respectively. Situation one: no meter is configured, that is, there is no speed limit at all. The process is like this: Bank A meter enabled? No. Bank B meter enabled? No. The response is keep, and the packets are kept. Situation two: a meter is configured, but there are not enough tokens in the token bucket. It means there is too much traffic to handle, so the excess traffic will be dropped and the drop stats increase: Bank A meter enabled? Yes. Sufficient Bank A meter tokens? No. Then the check: Bank A meter enabled? Yes. Increase the Bank A meter drop stats, and the response is drop. Situation three: the meter and the tokens are both available. The packets are kept, the tokens in the bucket are deducted, and the passing traffic increases the keep stats: Bank A meter enabled? Yes. Sufficient Bank A meter tokens? Yes. Then the check: Bank A meter enabled? Yes. Deduct the Bank A meter tokens, increase the Bank A meter keep stats, and the response is keep.

Here are the advantages of our architecture. One, hardware-level speed limits with a granularity of one Mbps and an accuracy of 5‰. Two, support for DSCP, to meet differentiated customization requirements for different business traffic. Three, combination with the flow table to provide fine-grained matching. Four, flexibility and expandability: the meter configuration is isolated from the life cycle of the flow table. Five, support for BPS and PPS speed limits at the same time. That's the end of my topic.
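To recap the meter logic above in software terms, here is a minimal single-bucket sketch. It only illustrates the keep/drop decision and the stats described in the three situations; it is not the FPGA implementation, and the refill rate and bucket depth are arbitrary.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Simplified model of one meter bank: a token bucket with keep/drop stats.
 * The real design has two banks (one for BPS, one for PPS) refilled in hardware. */
struct meter_bank {
    bool     enabled;
    uint64_t tokens;      /* current tokens (bytes for a BPS bank) */
    uint64_t depth;       /* bucket capacity                       */
    uint64_t keep_stats;  /* packets passed                        */
    uint64_t drop_stats;  /* packets dropped                       */
};

/* Periodic refill, e.g. driven by a timer: add one tick's worth of tokens,
 * capped at the bucket depth. */
static void meter_refill(struct meter_bank *m, uint64_t tokens_per_tick)
{
    m->tokens += tokens_per_tick;
    if (m->tokens > m->depth)
        m->tokens = m->depth;
}

/* Decision flow for one packet, mirroring the three situations:
 *  - meter not enabled          -> keep
 *  - enabled, not enough tokens -> drop, bump drop stats
 *  - enabled, enough tokens     -> deduct tokens, bump keep stats, keep */
static bool meter_check(struct meter_bank *m, uint64_t pkt_len)
{
    if (!m->enabled)
        return true;                      /* situation one: keep  */
    if (m->tokens < pkt_len) {
        m->drop_stats++;                  /* situation two: drop  */
        return false;
    }
    m->tokens -= pkt_len;                 /* situation three: keep */
    m->keep_stats++;
    return true;
}

int main(void)
{
    struct meter_bank bank_a = { .enabled = true, .tokens = 3000, .depth = 3000 };

    /* Three 1500-byte packets: the first two fit in the bucket, the third
     * arrives before any refill and is dropped. */
    for (int i = 0; i < 3; i++)
        printf("packet %d: %s\n", i, meter_check(&bank_a, 1500) ? "keep" : "drop");
    meter_refill(&bank_a, 1500);
    printf("after refill: %s\n", meter_check(&bank_a, 1500) ? "keep" : "drop");

    printf("keep=%llu drop=%llu\n",
           (unsigned long long)bank_a.keep_stats,
           (unsigned long long)bank_a.drop_stats);
    return 0;
}
```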