Hello, everybody. Very glad to be here to introduce our engineering practice back in China, in our cloud services, private cloud and public cloud. I'm Zheng Jun, CTO of ChinaC Group. So let's begin.

First, a quick look at ChinaC. Our company was created in 2010, and that is the same year we began offering public cloud in China. Today, at this moment, we have 21 data centers hosting 15,000 physical servers, with optical fiber interconnects between the main data centers, including Beijing, Shanghai, Guangzhou, Shenzhen, and Hong Kong. In 2015, we closed a C round of $100 million. So that is my company's background in private cloud and public cloud.

At this moment, we offer four types of cloud for users in China: public cloud, private cloud, hybrid cloud, and IDC-to-cloud. The public cloud is more complicated; only a small part of OpenStack is used in our large-scale public data centers and large-scale cloud services. Our private cloud product is fully based on OpenStack. On top of public cloud and private cloud, we offer the hybrid cloud. You know, in China, the more complicated scenarios are achieved with a mixed model, so hybrid cloud is natural, I think. We just released a data-driven hybrid cloud strategy for end users in China. IDC-to-cloud is about how to transform traditional IDC providers into cloud service providers. There are two things here: the first is how to turn traditional IDCs into cloud data centers, and the second is how we offer operational capability to these traditional providers. So I think IDC-to-cloud is a very interesting product for my end users.

This picture gives you an overview of our products for the China market. The first is elastic computing. Elastic computing is wholly based on Nova; it mainly offers auto-scaling capability and high-performance computing for end users. The second, I think, is SDN. SDN, I think, is the methodology for providing more complicated products. For example, we offer VPN as a service, load balancing as a service, and firewall as a service, and all of these have to build on SDN. Compute and networking, of course, are the most important things for both public and private cloud. Next, we offer a CDN, a content delivery network; at this moment we have built 200 CDN nodes in China.

Then there are the more complicated services we offer on our cloud infrastructure. The first is COS, whose full name is ChinaC Object Storage. It is mainly based on our own product; Swift is not used in our solution. We developed our own distributed file system and distributed database, and on top of them we offer large-scale object storage for massive data management. Based on COS, based on the object storage, we offer Spark as a service and big data as a service for the public cloud and private cloud. I think Spark as a service is more complicated: how to provide automatic Spark cluster management in the cloud, on a large-scale network, is the challenge. Resource management is very challenging here, I think. But we mostly offer the service delivery and the service automation.
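To make the Spark-as-a-service cluster automation mentioned above a bit more concrete, here is a minimal sketch of the kind of provisioning step such a service has to automate. It is not ChinaC's implementation; it just boots a master and a few workers with the openstacksdk Python client, and the cloud, image, flavor, and network names are assumptions.

```python
# Minimal sketch of Spark-style cluster provisioning on an OpenStack cloud.
# Not ChinaC's implementation; cloud/image/flavor/network names are hypothetical.
import openstack

conn = openstack.connect(cloud="mycloud")            # entry in clouds.yaml (assumed)

image = conn.image.find_image("spark-node-image")    # hypothetical image
flavor = conn.compute.find_flavor("m1.large")        # hypothetical flavor
network = conn.network.find_network("cluster-net")   # hypothetical network

def boot(name):
    """Boot one instance and wait until it is ACTIVE."""
    server = conn.compute.create_server(
        name=name,
        image_id=image.id,
        flavor_id=flavor.id,
        networks=[{"uuid": network.id}],
    )
    return conn.compute.wait_for_server(server)

# One master plus three workers.
master = boot("spark-master")
workers = [boot(f"spark-worker-{i}") for i in range(3)]
print("master:", master.name, "workers:", [w.name for w in workers])
```

A real Spark-as-a-service layer would also push configuration to the nodes, wire workers to the master, monitor cluster health, and scale on demand, which is where the resource-management challenge mentioned above comes in.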
So, you know, in China the situation of the cloud environment is very different from the American one. Private cloud, I think, is more popular. Chinese companies, especially state-owned companies, have a lot of money to set up their own large-scale data centers. So the bare metal service is very important. How to offer high-performance computing, an HPC component, in a hybrid cloud environment, including the bare metal service, is very important. Based on OpenStack Ironic, we developed our own service, named CBMS, the ChinaC Bare Metal Service, for high-performance computing users.

This slide gives you a simple picture of my private cloud, just the dashboard. The cloud architecture is fully based on OpenStack, but I think we have built a lot of innovative features beyond OpenStack. What is important, I think, is to offer advanced features to enterprise-level users of the private cloud, especially how to automate the environment, how to achieve automatic management, and how to improve the usability of the private cloud.

In the following, I will give you some thoughts about OpenStack engineering, especially in the China market. Since 2015, since last year, you know that Chinese customers accept OpenStack more readily. I think that is good news for the whole OpenStack community. But I think the challenges are in the direction of stability, automation of the stack, and monitoring of the stack. There are a lot of challenges here. So I think this year, these two years, are a significant time window in the OpenStack life cycle. In the last five years, we achieved a lot of results in the OpenStack community, but how to bring those results into real production environments is a challenge, I think.

The next thing, I think, is how to develop a mature, disruptive product beyond OpenStack. You know there is a saying that free software doesn't mean a free lunch, right? So I think we must develop advanced features beyond OpenStack, beyond Nova. For ChinaC, we developed high-performance computing, and we developed automation features for easy deployment and easy use.

The important things here, I think, are that HPC, automation, and full stack are the keystones for the future of the cloud, for the future of OpenStack. HPC, I think, is a strong requirement in China. A lot of large-scale private cloud users especially like to set up their own data centers with a lot of physical servers. How to provide data-intensive, I/O-intensive services is challenging, I think. We should not only focus on GPUs, but also focus on x86. So how to offer general HPC solutions based on OpenStack is challenging, I think. Automation, I think, is the soul of the cloud, especially for a large-scale cloud. Full stack means, I think, that the boundary between IaaS and PaaS will mostly disappear. I think full stack, just like AWS in a sense, is the future of the cloud. The end user only cares about application performance, so as the cloud service provider we should deliver automation of the whole solution. Networking resources, storage and compute resources, and the messaging, database, and other software, all of these components can be orchestrated in our cloud service. So a full-stack, end-to-end performance guarantee is most important for the end user, for the future of the cloud.
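Coming back to the Ironic-based bare metal service (CBMS) mentioned earlier: here is a minimal sketch, again with openstacksdk, of how a layer like that can inspect the bare metal capacity that Ironic exposes. This shows only the upstream API such a product builds on, not CBMS itself, and the cloud name is an assumption.

```python
# Minimal sketch: inspect OpenStack Ironic bare metal capacity.
# Not CBMS itself, just the upstream Ironic API a service like it builds on.
import openstack

conn = openstack.connect(cloud="mycloud")  # entry in clouds.yaml (assumed)

available = []
for node in conn.baremetal.nodes(details=True):
    # provision_state "available" means the node can be scheduled;
    # properties typically carry cpus / memory_mb / local_gb from inspection.
    print(node.name, node.provision_state, node.properties)
    if node.provision_state == "available" and not node.is_maintenance:
        available.append(node)

print(f"{len(available)} bare metal node(s) ready for HPC workloads")
```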
In the following, we will give you more details about our innovations in security, performance, and elasticity. So I will invite my technical director to introduce the detailed innovations.

Hi everybody, thank you for coming here. My name is Liu Xiaoxin, and in the next 10 minutes I will give some more details about our cloud engineering practice at ChinaC. As you can see, while talking with our cloud customers, we found that customers care very much about a few capabilities when adopting cloud services: elasticity, performance, and security. When we say elasticity, it means the ability to expand and contract in response to business needs. Performance is mostly about how to optimize the speed and throughput of our cloud services. And in terms of security, it means customers care most about how to protect their data and keep it available in the cloud services.

In terms of elasticity, we did a lot of work in this area. The first thing is that we support a broad matrix of options in our cloud services, across virtualization, storage, and network models, which can meet different customers' application requirements. Sometimes they need not only virtualization, but also applications that have to be deployed on bare metal or in containers. For storage, we support different kinds of combinations in our portfolio. For networks, we support both Open vSwitch and integration with third-party hardware SDN solutions. Different applications or workloads may have different QoS requirements, so from this perspective we developed a set of capabilities to guarantee storage IOPS or network bandwidth for workloads.

Stepping further, we would like to provide more automatic scaling based on defined policies. As you can see here, for auto scaling we support both horizontal scale out and in, and also vertical scale up and down. Horizontal scaling is mostly based on ChinaC's StackWatch monitoring and ChinaC's load balancer product, together with the OpenStack Heat engine. When a workload bursts or drops, StackWatch triggers adding or removing instances at the back end of the load balancer. The vertical part mainly focuses on adding or removing resources of an instance, such as CPU, memory, network bandwidth, and disk. Here ChinaC did some enhancements: you can now do a live upgrade of CPU and memory, which means you can reduce the downtime while doing the scaling.

In terms of data security, we also provide groups of features to help enhance security, avoid data loss, and do fast recovery from disasters or human operation errors. At the bottom layer, we leverage the capabilities of the underlying storage systems, such as RAID technology or the multi-copy mechanisms of our back-end distributed storage systems, for example Ceph or GlusterFS, something like that. Further, based on the standard snapshot and backup/restore capabilities, we build the ability to keep the data state from a specific point in time. And we also extend the remote backup and restore capabilities to store snapshots at a remote site, which can be an option for disaster recovery.
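As a concrete illustration of the point-in-time protection just described, here is a minimal sketch using the standard Cinder snapshot API through openstacksdk. The cloud and volume names are assumptions, and ChinaC's remote-site backup extension is not part of this stock API.

```python
# Minimal sketch: capture a point-in-time state of a Cinder volume via snapshot.
# Cloud and volume names are assumptions; this is the stock OpenStack API only,
# not ChinaC's remote-site backup extension.
import time
import openstack

conn = openstack.connect(cloud="mycloud")

volume = conn.block_storage.find_volume("app-data")   # hypothetical volume name
snapshot = conn.block_storage.create_snapshot(
    volume_id=volume.id,
    name=f"app-data-{time.strftime('%Y%m%d-%H%M%S')}",
    force=True,   # allow snapshotting an in-use volume
)

# Wait until the snapshot is 'available', i.e. the point-in-time state is kept.
conn.block_storage.wait_for_status(snapshot, status="available", wait=300)
print("snapshot ready:", snapshot.name)
```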
But all the previous options mainly focus on keeping the state of a specified point in time; they cannot protect against some human operation errors. So we bring in a new Cinder feature called CDP, Continuous Data Protection, which can help recover block storage data to any point in time.

Here is a detailed view of how the Continuous Data Protection operates. On the left side you can see the general I/O write flow: it goes through the block layer, then to the block drivers, and is written to the storage system. With Continuous Data Protection, we embed a CDP module in this flow. As I/O writes go through the virtualization layer, they are captured by the CDP module and stored in the CDP repository. And here is the detailed workflow of this mechanism: it first replicates the initial image to the CDP server, and then, as each I/O write is captured, the CDP module replicates the write data to the CDP site. So with the stored metadata and the I/O payloads, you can restore the image once the source site has a crash or disaster.

From the perspective of performance, we did a lot of work on storage, compute, and network. On the storage side, our cloud services now support multiple different backends, including distributed storage systems and also traditional SAN storage. We do special core bindings for the Ceph OSD processes to ensure performance, and for SSD scenarios we made some optimizations to memory allocation, which can improve reads and writes several times over. From a compute perspective, we bind vCPUs to physical cores, enable scheduling that leverages the NUMA topology, use huge page settings, and pass through PCI devices to enhance performance. From the network perspective, we enabled Open vSwitch with DPDK, and we also leverage SR-IOV support to direct network flows to virtual functions on the NIC, and do some tunnel offloading, which relieves the CPU of processing workload.

Here is the general packet flow for the DPDK optimization. On the left side is the traditional path: it goes through the TCP/IP kernel stack. On the right side is the packet flow with DPDK: packets are delivered directly to user space, which removes a lot of Linux kernel overhead. This slide shows some measurement results of these optimizations. Here are two VMs on the same host, and as you can see, the throughput of the switch with DPDK is much higher than the traditional way; especially on the latency side, the DPDK scenario is just half of the traditional environment. And here is the test scenario with two VMs on two different hosts, where traffic goes through the network and the external switch. We can also see obvious improvements in throughput and latency.

The time is very short, so that is all for our sharing. Here is our contact information, and we are at booth B12. If you have any interesting topics, you can visit our booth and have further discussions. All right, thank you. OK, thank you.
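Returning to the Continuous Data Protection mechanism described in the talk: here is a small, self-contained conceptual sketch of the journaling idea, in which every write is recorded with a timestamp so the volume can be rebuilt at any point in time. It is an illustration of the concept only, not the Cinder CDP implementation.

```python
# Conceptual sketch of the CDP idea described above: record every write with a
# timestamp, then rebuild the volume state at any chosen point in time.
# Illustration only; not the Cinder CDP implementation.
import time
from dataclasses import dataclass, field

@dataclass
class WriteRecord:
    timestamp: float   # when the write happened
    offset: int        # byte offset within the volume
    payload: bytes     # the data that was written

@dataclass
class CdpJournal:
    base_image: bytes                       # initial full replica of the volume
    records: list = field(default_factory=list)

    def capture(self, offset: int, payload: bytes) -> None:
        """Intercept one I/O write and append it to the journal."""
        self.records.append(WriteRecord(time.time(), offset, payload))

    def restore(self, point_in_time: float) -> bytes:
        """Replay journaled writes onto the base image up to the chosen time."""
        volume = bytearray(self.base_image)
        for rec in self.records:
            if rec.timestamp > point_in_time:
                break
            volume[rec.offset:rec.offset + len(rec.payload)] = rec.payload
        return bytes(volume)

# Example: two writes, then recover the state between them.
journal = CdpJournal(base_image=bytes(16))
journal.capture(0, b"AAAA")
t_between = time.time()
journal.capture(4, b"BBBB")
print(journal.restore(t_between))   # state before the second write
```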