OK, hello. Good afternoon. Thanks for joining us. In this session, we'll talk about the bare metal cloud in China Mobile with 3,000 nodes. We will review the architecture and the deployment of this cloud and some of its implementation details. My name is Jin Yun Tong, and I'm from China. As a technical marketing engineer from the Intel Open Source Technology Center, I promote OpenStack and other open source cloud technologies to the market. This project is a joint effort of Intel and China Mobile, and it's part of Intel's Cloud for All initiative. The goal of this project, from Intel's perspective, is to help our customers implement cloud solutions, so that we can share the experience and feedback on what we have done and what customers need with the upstream community. We believe this will help us accelerate OpenStack adoption in large-scale deployments.

We also have Yaojun from China Mobile here. Yaojun, tell us a little bit about yourself. OK, let me introduce myself. My name is Yaojun. I'm from China Mobile, cloud computing division. Today, I'm glad to work with the Intel guys to introduce our cloud experience. Thank you. Thanks. We were also supposed to have Li Hao from China Mobile here, who is a key contributor in this project, but unfortunately he couldn't make it.

OK, here's the agenda. First, we will talk about the practice of OpenStack in China Mobile. Then we will talk about the architecture and the deployment of this bare metal cloud and some of the implementation details. At the end, we will have a call for action.

China Mobile is a leading telecom service provider in the Chinese market. They are transitioning their IT and CT infrastructure to the cloud, and they chose OpenStack to help them with that transition. They have deployed several very large-scale OpenStack clusters for their public cloud and for their private cloud as well. China Mobile is also a member of the OpenStack Foundation and won the Superuser Award at the Barcelona Summit. They have public cloud resources in Guangzhou and in Beijing. If you're interested in what they offer in their public cloud, you can visit the website here.

OK, let's talk about the bare metal cloud. Here are the three resource pools in China where they are planning to deploy bare metal resources. One is the Harbin resource pool, which will have 3,000 nodes this year, and they plan to expand it to over 20,000 nodes in the future. There are also resource pools in Beijing and in Hohhot. In Hohhot, there will be 1,200 nodes this year, with plans to expand to over 2,000 bare metal nodes in the future. You might notice that all three locations are in the very north of China, and you can probably figure out why: it can be very cold there.

So now let's look at the architecture and the deployment of this bare metal cluster, which is deployed in the Harbin resource pool. All the services and the bare metal resources are deployed in three zones, and those three zones are network isolated. On the left is the DMZ zone, which is open to the internet. End users access their services and resources from there, so services like the Nova VNC proxy will be deployed there to provide VNC access for end users. HAProxy with the Keepalived service will also be there to load balance the OpenStack APIs.
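To make that entry point concrete, here is a minimal sketch of how a client could reach the load-balanced APIs through the DMZ. This is purely illustrative: the talk doesn't specify client tooling, and the VIP address and credentials below are hypothetical placeholders; openstacksdk is just one common way to talk to the APIs.

```python
# Minimal sketch (hypothetical values): reaching the OpenStack APIs through
# the HAProxy/Keepalived VIP in the DMZ zone.
import openstack

conn = openstack.connect(
    auth_url="https://203.0.113.10:5000/v3",  # hypothetical HAProxy VIP
    project_name="demo",
    username="demo",
    password="secret",
    user_domain_name="Default",
    project_domain_name="Default",
)

# Every API call goes through the VIP; Keepalived fails the VIP over between
# HAProxy nodes, so clients keep one stable endpoint.
for server in conn.compute.servers():
    print(server.name, server.status)
```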
In the middle of the picture is the core zone, where the OpenStack core services are deployed: nova-api, nova-conductor, nova-scheduler, plus Neutron, Ceilometer, and Keystone. The zone on the right is called the production zone. All the bare metal resources and storage resources are placed there.

Now let's look at how exactly those services are deployed. In the control zone, as I said, the OpenStack core services are deployed, and we have three nodes to host all of them: HAProxy, Nova, Neutron, Keystone, and Ceilometer, and also the database, MySQL. We deployed a MySQL Galera cluster on three nodes. There are two sets of MySQL clusters: one for the Ceilometer service, and the other for the rest of the OpenStack services. RabbitMQ has a similar placement to the MySQL clusters: two sets of clusters, one for Ceilometer, and another for the other OpenStack services.

The production zone will host the nova-compute and Ironic services. Since the storage resources are in this zone, the Glance service is also placed here, and so is the Cinder volume service. There will be 10 nodes in this zone to host the Ironic and Glance services.

One of the challenges we had in this deployment is that China Mobile's existing OpenStack services are on the Kilo release, while for the Ironic service, to pick up all the key requirements that have been implemented in recent releases, we had to use the Ocata version of Ironic. We didn't have much trouble with API compatibility, thanks to API microversioning, but we did have a lot of issues on the Nova side, especially in the Nova Ironic driver, such as race conditions that could delete an instance while it was spawning. A lot of effort went into tracking these issues and backporting the fixes on the Nova side.

OK. Next, we will talk about some features that are supported in this cloud. We have a very active and amazing community in OpenStack and in Ironic. They have implemented all the key requirements for the bare metal cloud that I list here, and we are going to pick some of these features to show you how we integrated them in the bare metal cloud.

For the Ironic driver, we chose agent_ipmitool, which means the node PXE-boots the deploy agent and power management is done over IPMI. As we know, there are two ways in Ironic to do the deployment: one is the iSCSI-based way, and the other is the agent-based way. The reason we chose the agent way is that we have the Ironic Python Agent (IPA), which does all the deployment work for us, meaning we can implement whatever deployment steps we need inside the agent. The other reason is that the agent way supports RAID configuration.

And of course we support multi-tenancy, right? We are supposed to be running a cloud here. That means we have to support local boot. As we know, after the provisioning process the provisioning network is torn down and the node is switched to the tenant network, which means a bare metal node on the tenant network can no longer reach the provisioning network, where the PXE server lives. So the bare metal node has to boot itself from its local disk.
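As a concrete illustration of this driver choice, here is a minimal sketch of enrolling a node with agent_ipmitool and the local boot capability, using openstacksdk. The BMC address, UUIDs, and the clouds.yaml entry name are all hypothetical, and the talk does not prescribe this exact workflow.

```python
# Minimal sketch (assumed names and addresses): enrolling a bare metal node
# with the agent_ipmitool driver and local boot.
import openstack

conn = openstack.connect(cloud="cmcc-baremetal")  # hypothetical clouds.yaml entry

node = conn.baremetal.create_node(
    driver="agent_ipmitool",  # PXE-boot the IPA ramdisk, IPMI power control
    driver_info={
        "ipmi_address": "10.0.0.11",  # hypothetical BMC address
        "ipmi_username": "admin",
        "ipmi_password": "secret",
        "deploy_kernel": "<deploy-kernel-glance-uuid>",    # placeholders
        "deploy_ramdisk": "<deploy-ramdisk-glance-uuid>",
    },
    # boot_option:local makes the node boot from its own disk after deploy,
    # since the tenant network cannot reach the PXE server.
    properties={"capabilities": "boot_option:local"},
)
print(node.id)
```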
We use Neutron networking as the default network, and there is a third-party SDN solution integrated with Neutron. There will be another session from China Mobile to walk you through the details of how the SDN integration is done in the bare metal cloud; I think that session is on Tuesday. The underlay of the SDN uses VXLAN.

Another feature we support is Windows images. To support Windows images, you have to use the whole-disk image feature in Ironic. One of the problems we met in supporting Windows images is that when we built the Windows image, the system always crashed on boot. It turned out we were missing the RAID driver in the Windows image. So you probably want to make sure you include all the drivers needed to boot the system.

HA of nova-compute. HA is a key requirement in a production environment, and we always talk about HA of the OpenStack services. As I mentioned before, there are 10 nova-compute services deployed on 10 nodes to achieve high availability, and the 3,000 bare metal nodes behind them are controlled by those services. If one of the nova-compute services failed, around 300 nodes would be left unmanageable. With the nova-compute HA solution, the Ironic nodes behind the failed nova-compute are taken over by the other compute services.

That reminds me that we have a similar situation with the ironic-conductor. We have 10 ironic-conductor services deployed, and the ironic-conductor provides the TFTP service for the bare metal nodes behind it. If one of the ironic-conductors fails, there will likewise be around 300 nodes unable to access their TFTP server. When an ironic-conductor fails, there is a rebalance process in Ironic, but the rebalance involves deploying, removing, and setting up the TFTP environment for the nodes each conductor serves. So we will do some more tests to find out how long it takes for the ironic-conductors to complete the rebalance process.

Downloading images from Glance directly. The usual way in Ironic to get the deployment image from Glance is that the backend storage for Glance, such as the Swift service, exposes a temporary URL, and the node downloads the image from that URL. But the situation in our deployment is that there is no Swift service. So Glance exposes a direct download URL, and the Ironic IPA gets the image directly from Glance.

Here is some more detail about the network in this environment. Each bare metal node has five NICs. Two of them are 1 GbE cards, which are bonded as the management network. Another two are 10 GbE fiber cards, which are bonded as the business network. The business network also serves as the provisioning network, the cleaning network, and the tenant network, and there is a VLAN subinterface on it for the storage network, to isolate the storage traffic from the tenant traffic. The last one is a 1 GbE card used for IPMI power management.

The problem here is that when we do the provisioning, there are two fiber cards that could carry the provisioning network, and we have to use one specific card to attach the node to the provisioning network. Ironic won't do this for us; it just uses the very first Ironic port to do the attachment. So we added an extra property on the Ironic port, something like main_nic: if main_nic is yes, then we choose that card to attach to the provisioning network.
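Here is a minimal sketch of what tagging the main NIC could look like. The property name main_nic follows the talk, but the exact key and the selection logic (by MAC address here) are assumptions, not their production code.

```python
# Minimal sketch: marking which NIC should attach to the provisioning network
# with an extra property on the Ironic port. All identifiers are placeholders.
import openstack

conn = openstack.connect(cloud="cmcc-baremetal")  # hypothetical clouds.yaml entry

for port in conn.baremetal.ports(node="<node-uuid>", details=True):
    # Hypothetical rule: the chosen fiber card is identified by its MAC.
    is_main = port.address == "aa:bb:cc:dd:ee:01"
    conn.baremetal.update_port(port, extra={"main_nic": "yes" if is_main else "no"})
```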
As we said before, the two fiber cards are bonded together, forming a port group, so we have to support port groups. I will talk about the port group integration in three steps: inspection, provisioning, and finally the update to the tenant network.

When we do an Ironic deployment, the very first step is inspection. We first create the Ironic node, providing information like the deploy kernel, the deploy ramdisk, and the Ironic driver information, which in this case is IPMI, so we need to provide the IPMI address, username, and password. After the node is created, we set the provision state to inspect, and the inspection process starts. The node gets a DHCP address from Neutron, downloads the inspection image from the TFTP server, and boots. Once the node boots up, it loads the inspection image, which contains the IPA agent. The IPA agent does the inspection, collecting all the hardware information of the bare metal node, including the network cards and the LLDP information, and sends it all back to the inspector service. The inspector service then formats this information and stores it as Ironic port properties. After the inspection, the node is shut down.

Then the provisioning process starts. A nova boot triggers this process: nova-api schedules the request to one of the nova-compute services, and nova-compute then asks Neutron to allocate a port. The port allocated here is on the tenant network, and it is unbound at this stage. The other thing nova-compute does here is pass all the user information into the config drive. This information includes how to bond the network cards and how to create the storage subinterface. Then the provisioning state is set to active, and Ironic does the deployment. Once the deployment is complete, it shuts down the bare metal node.

On the Ironic side, when the Ironic API receives the request from Nova, it asks the ironic-conductor to do the node deployment. The conductor asks Neutron to create a provisioning port and update the DHCP information. Then the node gets the DHCP information, downloads the deployment image, does the deployment, and shuts down.

At this stage, when the deployment is complete, Ironic tears down the provisioning network and switches the node to the tenant network. We have all the LLDP information in the Ironic port, so it can configure the ToR switch. Then Nova reboots the bare metal node, and when the system starts, it uses the information in the config drive to do the port bonding and create the storage subinterface. A rough sketch of driving these steps from the client side follows.
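This sketch ties the inspection and provisioning steps above together with openstacksdk. All names and UUIDs are placeholders, and the "provide" step (moving the node from manageable to available) is standard Ironic workflow rather than something the talk spells out.

```python
# Minimal sketch, not the production tooling: trigger inspection, then boot
# through Nova with a config drive.
import openstack

conn = openstack.connect(cloud="cmcc-baremetal")  # hypothetical clouds.yaml entry

# Step 1: inspection. The IPA ramdisk collects hardware and LLDP facts and
# the inspector stores them back on the Ironic ports.
conn.baremetal.set_node_provision_state("<node-uuid>", "inspect")
conn.baremetal.wait_for_nodes_provision_state(["<node-uuid>"], "manageable")
conn.baremetal.set_node_provision_state("<node-uuid>", "provide")  # -> available

# Step 2: provisioning via Nova. The config drive carries the data the node
# uses on first boot to bond its NICs and create the storage VLAN subinterface.
server = conn.create_server(
    name="bm-instance",
    image="win2012-whole-disk",  # hypothetical whole-disk image
    flavor="baremetal",          # hypothetical bare metal flavor
    network="tenant-net",        # hypothetical tenant network
    config_drive=True,
    wait=True,
)
print(server.status)
```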
So this is pretty much all the important detail that we wanted to talk about today. China Mobile's strategy on OpenStack is always upstream first, so they have a lot of engineers actively participating in the upstream community. We have a number of requirements and proposals in the community that we would like your help with, like the nova-compute HA solution, which we would like you to review and contribute to. There is also the Ironic external console driver, which will provide VNC for the KVM; KVM here stands for keyboard, video, and mouse, and it will allow end users to reach their bare metal console through it. For other features, like the configuration drivers, the framework is there in the upstream community, but a lot of drivers from the server vendors are still lacking. Another thing we would like is support for bare metal cloud and virtual machine cloud in the same region.

OK, that is all the information we would like to share today. Finally, we want to thank the OpenStack community, and thank you all for being here. Thank you.