Today we will introduce and demo our work on an open-source SDN controller named DC-Fabric, and the efficient OpenStack networks based on it. First, let me introduce BNC, a public research institute in Shanghai, China. BNC started SDN research in 2012. We are one of the earliest SDN players in China and took part in most of the SDN research projects supported by the government. One of those projects gave birth to DC-Fabric and to our work today.

Since there are already a lot of SDN controllers, why did we start a brand-new one? The answer is that we wanted to develop an efficient SDN controller for cloud computing data centers. Three design principles make DC-Fabric unique compared to other SDN controllers: it is C-language based, it has a concise four-layer architecture, and it is cloud-computing oriented. Over the past two years, we have released three versions of DC-Fabric, named after Chinese dynasties. Since the third version, the Zhou version, we have implemented all the Neutron APIs, and DC-Fabric has been deployed in some real OpenStack environments.

DC-Fabric uses the SFabric algorithm for packet routing instead of host-based flows. SFabric adopts tagged, aggregated flows based on switch IDs, which results in a very small number of OpenFlow flow entries. Most commercial physical SDN switches can only support a limited flow space of several thousand entries, so a small number of flow entries is very important for SDN networks that contain physical SDN switches. Another feature of the SFabric algorithm is two-step flow installation: only two OpenFlow entries need to be installed when constructing the communication path between two hosts. These two features make SFabric very suitable for SDN networks in OpenStack (a rough flow-count sketch appears after this introduction).

Benefiting from the efficiency in network management and the unique packet routing algorithm, we can use DC-Fabric and physical SDN switches to provide a tunnel-free SDN network for OpenStack. In this network, the OVS is connected directly to the physical switch with network wires, so no tunnels are needed. All packets are routed using OpenFlow entries, so no encapsulation or decapsulation is needed, and we can achieve very high network throughput. In our experiments, under the default OVS and KVM settings, the maximum network throughput of a VM can reach up to 15 Gbps, and the total network throughput of a physical server can reach nearly 35 Gbps. This is very high compared to overlay networks.

When used in OpenStack, an SDN controller has to control the OVS on every compute node. Therefore, supporting a large number of switches is very important when deploying in a large OpenStack cluster. Over the past years, we have done a lot of refactoring and optimization to support large networks. Today, we are very glad to announce the Qin version of DC-Fabric, which can support 3,000 switches with one instance. We used data serialization to improve the efficiency of thread synchronization, we reduced the database overhead for the master-slave clustering of our SDN controller, and we added some new functions such as reverse NAT. In summary, the Qin version of DC-Fabric is much more powerful and stable, and is ready for commercial applications in OpenStack.

Next, my colleague Jianqing Jiang will give two demos to show the efficiency of the SDN controller and the networks based on it.
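To make the flow-table argument concrete, here is a minimal back-of-the-envelope sketch in Python. It is not DC-Fabric's actual code; the switch count, hosts per switch, average path length, and the exact aggregation model (one entry per destination switch ID plus one per locally attached host) are illustrative assumptions based on the description above.

```python
# Rough comparison (not DC-Fabric code) of flow-table usage:
# per-host-pair reactive routing vs SFabric-style aggregation,
# where forwarding entries match a tag derived from the destination switch ID.

def host_pair_entries(num_switches, hosts_per_switch, avg_path_len):
    """Per-host-pair routing: one entry per directed host pair on every switch along its path."""
    hosts = num_switches * hosts_per_switch
    host_pairs = hosts * (hosts - 1)          # directed host pairs
    return host_pairs * avg_path_len          # total entries across the fabric

def sfabric_style_entries(num_switches, hosts_per_switch):
    """Switch-ID-tag aggregation (assumed model):
    each switch holds one aggregated entry per destination switch ID,
    plus one entry per locally attached host for final delivery."""
    per_switch = (num_switches - 1) + hosts_per_switch
    return num_switches * per_switch

if __name__ == "__main__":
    n_sw, n_host, path_len = 100, 20, 4       # hypothetical fabric size
    print("per-host-pair entries:", host_pair_entries(n_sw, n_host, path_len))
    print("switch-ID aggregated  :", sfabric_style_entries(n_sw, n_host))
```

With these hypothetical numbers the aggregated scheme needs on the order of ten thousand entries across the whole fabric instead of millions, which is why it fits within the few-thousand-entry flow tables of commodity physical SDN switches.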
Hi everyone. First, I will demo that DC-Fabric can control 3,000 switches. In this demo, we deployed DC-Fabric on a physical server with two Intel Xeon CPUs and 16 GB of memory. We used Mininet to create 3,000 switches across 10 physical machines, each of which runs 300 switches. And here I have a video for you. Let's begin. First is the initialization of Mininet, which starts 300 switches and some hosts. And here is the DC-Fabric GUI: the 3,000 switches are discovered by DC-Fabric. And here is the topology; you can see the 3,000 switches on the DC-Fabric GUI, and you can see some details of the Open vSwitch instances. We use pingall to test the reachability of all the hosts; we have to interrupt it because it takes too long. At first, the 10 Mininet instances are isolated, so a host cannot reach hosts in the other Mininet instances. But if we add a VXLAN tunnel between two switches, the hosts can reach each other.

The second demo shows the high network throughput between VMs. We created an OpenStack environment containing two nodes, each of which has a 40G network card, and we created five VMs on each node. Then we used iperf to test the maximum network throughput between five pairs of VMs located on different nodes. And here I have another video. This is the OpenStack dashboard: you can see five VMs running on the controller node and five VMs running on the compute node. And here is the DC-Fabric GUI: two OVS instances are discovered by DC-Fabric, and each OVS is connected to five hosts. Then we use iperf on the five pairs of VMs that cross the physical machines to test the throughput between the two OVS instances. This is the iperf tool testing the throughput of each pair of VMs; we have five pairs of VMs. I use top to check the CPU usage, and it is pretty low. And here is the flow table of the OVS. Then we look at the throughput of the VMs: you can see the throughput of each pair of VMs, and the total throughput between the two OVS instances reaches more than 30 Gbps.

Okay, this is our demo. Do you have any questions? Yes, the bottleneck. Maybe the bottleneck is the OVS, the efficiency of the OVS, because we use the default OVS; we do not use DPDK. So the network throughput of the OVS can only be about 30 Gbps. But if you use an overlay network, for example VXLAN or GRE tunnels, the network throughput cannot be higher than 10 Gbps. So our solution is much faster than overlay networks. Okay, thank you. Okay, thank you very much.
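For readers who want to reproduce something like the scale demo, here is a minimal Mininet sketch. It is not the actual demo script: the controller IP address, the OpenFlow port, the linear topology, and the 300-switch count per machine are illustrative assumptions. In the demo described above, 10 machines each ran a Mininet instance like this one and were later joined with VXLAN tunnels.

```python
#!/usr/bin/env python
# Minimal Mininet sketch (assumed setup, not the demo script): start a batch of
# Open vSwitch instances on one machine and point them at a remote DC-Fabric controller.

from mininet.net import Mininet
from mininet.node import RemoteController, OVSSwitch
from mininet.topo import LinearTopo
from mininet.log import setLogLevel

def run(controller_ip='192.168.1.10', num_switches=300):
    # LinearTopo(k, n): k switches in a chain, n hosts attached to each switch.
    topo = LinearTopo(k=num_switches, n=1)
    net = Mininet(topo=topo, switch=OVSSwitch, controller=None, build=False)
    # Port 6633 is assumed here; adjust to your DC-Fabric configuration.
    net.addController('c0', controller=RemoteController,
                      ip=controller_ip, port=6633)
    net.build()
    net.start()
    net.pingAll()   # the "pingall" reachability test shown in the video
    net.stop()

if __name__ == '__main__':
    setLogLevel('info')
    run()
```

Running a copy of this on each of the 10 machines would leave the instances isolated, as in the video; adding VXLAN ports between edge switches (for example with ovs-vsctl) would then connect them, matching the reachability behavior shown in the demo.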