Hello, everyone. My name is Wang Bo, and this is my colleague, Shikui. We are from EasyStack, a leading cloud computing solution provider in China. Our topic is the Unified Management Platform of OpenStack and Kubernetes. Here is the agenda. First, I will share our experience in product development around the combination of OpenStack and Kubernetes, including how to deploy OpenStack and Kubernetes together and the architecture of the combined platform. Then I will talk about the Kubernetes features in our product, and finally we will talk about some of our customer cases.

As we know, OpenStack provides infrastructure resources such as Nova VMs, networks, and volumes. Kubernetes is a system for automating the deployment, management, and scaling of containerized applications. So we have different scenarios based on different requirements: we can deploy Kubernetes independently, or we can run OpenStack and Kubernetes together. In some cases, users don't need OpenStack; they just want to run some containerized applications, or they already have another IaaS platform rather than OpenStack. In our product, we recommend deploying OpenStack and Kubernetes together, so that with one dashboard users can run applications in containers, in VMs, and on bare metal. I think that is a great help for our customers.

This is the deployment process for the combination of OpenStack and Kubernetes. Our deployment tool is named YesRuler. Following the numbered sequence: first it installs OpenStack, and when OpenStack is ready, OpenStack provides resources such as VMs and networks. Then the tool installs a deploy server, and the deploy server installs the Kubernetes masters. All four steps happen at initialization time; after that, YesRuler is destroyed. At runtime, users can update the Kubernetes cluster as they want: they can create new resources and call the deploy server to install new nodes, and the new nodes will register into the cluster.
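The two-phase flow just described can be sketched as follows. This is only an illustrative sketch of the sequencing; the step names and functions are assumptions, not the installer's actual commands.

```python
# Illustrative sketch of the two deployment phases described above.
# Step names are assumptions, not the real tool's API.

INIT_STEPS = [
    "install_openstack",           # 1. installer brings up OpenStack
    "provision_vms_and_networks",  # 2. OpenStack provides VMs and networks
    "install_deploy_server",       # 3. installer sets up the deploy server
    "install_kubernetes_masters",  # 4. deploy server installs the masters
]

def initialize(run):
    """One-time initialization; the installer itself is discarded afterwards."""
    for step in INIT_STEPS:
        run(step)

def add_nodes(run, nodes):
    """Runtime path: the deploy server installs new nodes, which then
    register themselves into the existing cluster."""
    for node in nodes:
        run(f"install_node:{node}")
        run(f"register_node:{node}")
```

The key design point is that only the second function is ever used after initialization, since the installer is destroyed once the cluster is up.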
Here are some details of the Kubernetes deployment. The deploy server uses Ansible scripts to do the deployment. First, we need to add the public key of the deploy server to the cluster nodes, both masters and slaves. We use Harbor as a private image registry on the deploy server, because all the Kubernetes components and the other features run in Kubernetes pods, so we need to push their Docker images into an image registry. The deploy server then configures the inventory file and runs the Ansible command to do the deployment.

This is the architecture of the combination of OpenStack and Kubernetes. On the left side are the OpenStack controller nodes; on the right side, the compute nodes. The Kubernetes clusters, both masters and nodes, run on top of the compute nodes. The arrows in different colors represent different networks. The controller nodes and the compute nodes communicate over the OpenStack management network. The masters and the nodes of a cluster communicate with each other over a shared private network; because the nodes come from different projects, the private network must be shared. Users can access the Kubernetes services from the public network, and they can also access the unified dashboard from the public network. In the picture, you can see that the different nodes come from different projects; the rectangles in different colors represent different projects.

Next, we will talk about resource isolation between different projects. Sorry? Yes, it's about multi-tenancy. Multi-tenancy here is just resource isolation between different OpenStack projects. There are two things to think about. The first is pod scheduling, for which we use node labels. Consider an example: users from project A create VMs and add them as slave nodes into the cluster.
We want to make sure that pods created by users from this project are scheduled onto the nodes those users created. So we add a label of the form project=<project ID> to the nodes, and the Kubernetes scheduler can then make sure that the pods are always scheduled to the slave nodes carrying this label.

The second thing is the cloud provider. As we know, in some cases Kubernetes reads the cloud provider info in order to consume OpenStack resources, for example when we create a service of type LoadBalancer or a pod with volumes. In these cases Kubernetes calls OpenStack, and it needs the cloud provider info. Normally that info is stored in a static file, so Kubernetes doesn't know which user is creating the pods or the volumes. We changed this so that the user info is passed dynamically from Kubernetes to OpenStack. Kubernetes then uses the user info to create the resources, and the resources carry the corresponding user and project info. So that is multi-tenancy.

Why not Magnum? We know there is a project called Magnum that can help deploy Kubernetes clusters, but it is really not enough to meet our customers' requirements. All the functions of Magnum are listed here: cluster template management (create, delete, update), clusters, cluster certificates, and quotas. That is everything Magnum has. A real production-ready container platform needs features such as a private image registry, CI/CD tools, internal and external DNS, monitoring and alarms, and log collection and search. We use the software on the right side to implement all of these features.

This is a typical CI/CD use case. Developers make a code change and push a commit to GitHub, which triggers a Jenkins job to build a new image and deploy it into the test zone. After the tests, everything is okay.
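The commit-to-test-zone half of that flow can be sketched roughly as below. The registry host, repository name, and tag scheme are illustrative assumptions, not our actual Jenkins configuration.

```python
# Rough sketch of the CI/CD flow described above: a push triggers a
# build, the new image is tagged with the commit that produced it, and
# the result is deployed into the test zone first. The registry host
# and tag scheme are hypothetical.

def build_image(repo, commit):
    """Jenkins builds a new image, tagged by the triggering commit."""
    return f"harbor.internal/{repo}:{commit[:8]}"

def run_pipeline(repo, commit, deploy):
    image = build_image(repo, commit)
    deploy("test", image)   # deploy into the test zone for verification
    return image            # operators promote this exact tag later

deployments = []
image = run_pipeline("shop/web", "9f2c41d7a0b3",
                     lambda zone, img: deployments.append((zone, img)))
```

Tagging by commit id means the image verified in the test zone is byte-for-byte the one promoted to production, rather than a rebuild.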
The operators then use that image to launch the application into the production zone. This is a very common use case, and we need these features to support a container platform. Now let's welcome my colleague, Kui.

Over the past three years we have accumulated many customers in different areas, such as large-scale enterprises, telecommunications, and also some financial enterprises, for example some banks. I have chosen six typical customers to share our experience and to show the specific conditions of China's cloud computing market.

The first is CIB, the Industrial Bank, a commercial bank in China, and also the Postal Savings Bank, the fifth-largest bank in China. These two banks face the same threat from the Internet companies. For example, in the new payment mode, people can pay with QR codes through Internet channels and mobile applications. It is very easy and friendly for users to pay with, quite unlike the traditional mode. Under these conditions, the banks face threats from the Internet companies, and also from the new business model of Internet finance. What is the driving force of cloud computing for the banks? They need to reduce their costs and deliver their applications safely, while the applications themselves become more complicated. The traditional financial companies have an advantage in the traditional mode, but in the Internet finance mode they lack that experience, and they need an IT architecture that can deliver their applications quickly. Under these conditions, they have to transform the traditional architecture into a new cloud computing architecture.

The second is UnionPay. UnionPay is similar to MasterCard or Visa in our country; on credit cards in China you can see the UnionPay mark. UnionPay adopted OpenStack very early: their first OpenStack version was Essex, about four years ago.
They have accumulated a lot of experience, such as high availability, operation procedures, and security processes. They also face threats from the Internet business model, and the functionality of the Essex version could no longer satisfy the growing requirements of business application delivery, so in recent years they started a new version of their whole cloud computing architecture. In the past few years they had used VMware and CloudStack platforms to support their business; last year they migrated from VMware and CloudStack to OpenStack. Also, in the area of NFV, they have started cross-testing different hardware providers and different NFVO and VNFM service providers to construct the architecture of their future network. During the switch from the old architecture to the new one, they made some customizations, such as the VMware resource pool and the cloud data pool, and they built a unified management platform at the headquarters, from which they can control the branches in different regions.

Lenovo is one of the largest enterprises in China. In the traditional IT structure inside Lenovo there were two roles: one the infrastructure provider and one the consumer. To reduce costs and support the delivery of various applications, they had to transform the traditional architecture into a new one. Over the past few years they have changed, step by step, to a cloud computing architecture, using OpenStack to set up their cloud platform.

State Grid, the power supply company in China, is the customer most focused on security, because its applications must be monitored by the State Grid headquarters, and any problem in an application carries a very high security requirement. During the construction of their cloud platform, we optimized many aspects of the platform.
We also worked on high availability and on the capacity to protect the whole cloud computing platform from outside attacks. These six customers are very typical of ours, and they stand for some specific categories in China's cloud computing market. That is all of our sharing. Also, for the first customer, CIB, there is a case study session on Wednesday where we will share some experience with our customers. Everyone is welcome to join that session for the detailed information. Thank you.