Hello, everyone. Good morning, good afternoon, good evening, and welcome to our session. We are from China Mobile Research Institute: Xiaoguang, Han, Xin, and me, Zhiqiang. Xiaoguang is the leader of our cloud integration technology team, and Han and Xin are experts on that team. I am the open source program manager, responsible for open source strategy and project management. Now Xin will give the first part.

Hi, everyone. Since 2019, China Mobile has been constructing its large-scale NFV/SDN cloud network for 4G and 5G core network services in several regions across the whole country, at a hardware scale of around 20,000 to 30,000 servers per project phase. Currently, the systems deployed in the phase one project have been put into production for 4G network services such as the EPC and IMS networks, and the phase two projects are busy with end-to-end tuning and acceptance tests for the rollout of 5G network services. China Mobile's NFV/SDN solution is based on OpenStack, and we have carried out in-depth practice with OpenStack during CMCC's NFV network construction.

Under the current cloud virtualization infrastructure, the system architecture changes a lot compared to the traditional mode. Generic servers and switches have replaced the traditional integrated hardware/software black boxes. The internal backplane connections of traditional hardware are now exposed, replaced by physical and virtual networks connecting bare metal servers and VMs. The hypervisor, MANO, and VNF components of the cloud infrastructure replace the former internal software architecture that ran on specialized hardware. All these changes brought huge difficulties for system deployment, integration, and debugging; it is not possible to handle them with the traditional integration mode. Against this background, our team was formed to study and verify integration technology for this cloud and virtualization infrastructure.

Our CMCC NFV/SDN network follows the ETSI NFV architecture. Besides MANO, which provides overall management and orchestration, there are mainly three layers, decoupled from the vendor product point of view. The bottom layer is the hardware resources, including servers, network and storage devices, and security devices, which provide resources such as CPU, memory, storage, and network to the upper layers. The middle layer is the hypervisor layer, which takes charge of hardware virtualization. On top of the hypervisor layer are the VNFs, the virtualized core network applications, in CMCC's network cloud. These three layers are decoupled, which means the whole stack can be composed of products from different vendors. As mentioned before, this multi-vendor stack brings many difficulties to system integration, test, and troubleshooting.

Generally speaking, there are four phases in CMCC's NFV/SDN cloud resource pool integration: design, hardware integration, software integration, and acceptance test, as shown on this slide. The hardware integration phase covers hardware installation, configuration, and acceptance test. In our NFV cloud network, only the server BMC, BIOS, and RAID configuration, the switch base configuration, and the out-of-band network configuration are in scope during this phase. For hardware configuration, the server BMC configuration and the switch base configuration can be automated by our automation platform, called AUTO. For the hardware integration acceptance test, the hardware component checks, the server BMC/BIOS/RAID configuration checks, the cable connectivity checks, and the out-of-band IP connectivity checks are also currently automated by the AUTO platform.
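To make the flavor of such an automated acceptance check concrete, here is a minimal sketch in Python, assuming Redfish-capable BMCs (the speakers describe a Redfish-based specification later in the talk); the inventory rows, credentials, and addresses are illustrative placeholders, not CMCC's actual AUTO implementation:

```python
# Minimal sketch of an automated BMC acceptance check over Redfish.
# Assumptions: each BMC exposes the standard Redfish root at
# https://<bmc-ip>/redfish/v1; the inventory list is illustrative.

import requests

requests.packages.urllib3.disable_warnings()  # BMCs often use self-signed certs

# Hypothetical inventory rows as they might come from an LLD sheet.
INVENTORY = [
    {"name": "server-01", "bmc_ip": "192.0.2.11", "user": "admin", "password": "secret"},
    {"name": "server-02", "bmc_ip": "192.0.2.12", "user": "admin", "password": "secret"},
]

def check_bmc(dev):
    """Verify the BMC answers Redfish requests and report system state."""
    base = f"https://{dev['bmc_ip']}/redfish/v1"
    auth = (dev["user"], dev["password"])
    # The Systems collection is a standard Redfish resource.
    systems = requests.get(f"{base}/Systems", auth=auth, verify=False, timeout=10).json()
    results = []
    for member in systems.get("Members", []):
        sys_info = requests.get(f"https://{dev['bmc_ip']}{member['@odata.id']}",
                                auth=auth, verify=False, timeout=10).json()
        results.append((sys_info.get("PowerState"),
                        sys_info.get("Status", {}).get("Health")))
    return dev["name"], results

for dev in INVENTORY:
    try:
        name, results = check_bmc(dev)
        print(name, "OK", results)
    except requests.RequestException as exc:
        print(dev["name"], "UNREACHABLE:", exc)
```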
In the current layered, decoupled NFV cloud architecture, the high complexity of multi-vendor systems and the low efficiency of large-scale construction bring many pain points for integration work. Pain point one is the lack of a unified procedure: the former procedures were mostly turnkey mode, based on each vendor's own experience, without a unified and clear process. There are other pain points as well, such as barriers across vendors and variation in integration procedure design data. Among the different traditional telecom vendors, the lack of interoperation verification brings more integration issues. Pain point four is that faults are harder to locate in a hardware/software decoupled system: faults in the lower layers cause more faults in the upper-layer systems of this layered cloud architecture and make root causes harder to find, so a mechanism for quick fault detection within each layer is required.

Facing these pain points, our solutions are standard operation procedures (SOP), data standardization, and automation. Regarding the standard SOP, we as the operator take the lead in building a standard, unified, and compact integration procedure, and all vendors must follow it. Early intervention from the purchase phase is also required, to minimize onsite manual hardware initialization work by pre-completing it in the factory. Regarding standard data, we have unified the data template for hardware integration based on a standard LLD template. Meanwhile, we have automated the generation of the LLD; this data asset can be used to trace the whole integration process and is handed over to the operation phase. Regarding automation, high-efficiency integration minimizes manual work with automation tools, which also lowers the skill requirements for onsite workers; for high-quality testing, the automation tools can run the acceptance tests with full check coverage. Okay, now my colleague Han will introduce the next part.

Hello, I'm Han from China Mobile. This page shows the hardware integration automation procedure. We defined a unified hardware integration procedure. As you can see, for the pre-configuration step, when we order the hardware we define unified product specifications for servers and switches. Meanwhile, we use the HLD and its design rules to automatically create the LLD. Then we perform automatic hardware configuration, which reduces manual configuration mistakes, and go into the automatic hardware check procedure. At the same time, onsite engineers correct mistakes according to the automatic test results. Finally, once all the onsite mistakes are cleared, we output the test report and statistics and go to the software integration phase. Now Xin will introduce the data standardization and automation in detail.

For data standardization and automation, a design data standard is a mandatory prerequisite for automation. Regarding the data standard, we have a unified hardware integration data template called the low-level design (LLD). It contains several sheets: a resource pool info sheet with the resource pool name and type; a rack info sheet with the data center, room ID, row, and rack number; a device info sheet with the device name (covering both servers and switches), vendor, model, rack position, BMC IP, and related information, as well as the BMC account info; and a cable connectivity sheet with the cable connectivity info, including local device, local port, remote device, and remote port. In addition, the whole hardware integration LLD can be generated automatically by our AUTO platform, taking the port design info and the unified detailed design rules as input. This provides accurate infrastructure data for the subsequent software integration, helps with deployment issue analysis and localization, and can also serve as asset data for network operation and maintenance after rollout. A sketch of this rule-driven generation follows below.
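As a rough illustration of rule-driven LLD generation, this sketch expands hypothetical HLD-level inputs into device-info rows; the naming convention, rack layout, and BMC IP plan are invented for illustration and stand in for CMCC's unified detailed design rules:

```python
# Sketch of rule-driven LLD generation from high-level design inputs.
# The naming convention, BMC IP plan, and rack layout below are
# hypothetical stand-ins for the operator's unified design rules.

import csv
import ipaddress

# Hypothetical HLD-level inputs: racks and servers per rack.
RACKS = ["A01", "A02"]
SERVERS_PER_RACK = 4
BMC_SUBNET = ipaddress.ip_network("192.0.2.0/24")  # out-of-band management plan

def generate_device_rows():
    """Expand the HLD into per-device LLD rows using fixed rules."""
    hosts = BMC_SUBNET.hosts()
    rows = []
    for rack in RACKS:
        for slot in range(1, SERVERS_PER_RACK + 1):
            rows.append({
                "device_name": f"srv-{rack.lower()}-{slot:02d}",  # naming rule
                "rack": rack,
                "rack_position_u": 2 * slot,                      # layout rule
                "bmc_ip": str(next(hosts)),                       # IP plan rule
            })
    return rows

# Write a device-info sheet in the same spirit as the LLD template.
with open("lld_device_info.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["device_name", "rack",
                                           "rack_position_u", "bmc_ip"])
    writer.writeheader()
    writer.writerows(generate_device_rows())
```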
Now we further introduce our practice on automatic configuration for switches: EOR and TOR switch auto-configuration at initial setup. In phase one we found many manual configuration mistakes and inconsistencies by the multi-vendor onsite staff, and that distinctly affected efficiency and quality, so we decided to develop automatic initial configuration for EOR and TOR switches. How do we do that? We assign a dynamic IP to the switch on its management port, and then the AUTO script logs into the switch to execute the configuration (this login-and-push pattern is sketched after this part). The benefit is very clear: for a scale of 300 EOR and TOR switches in one pod, the automatic configuration takes only 15 minutes, accuracy reaches 100%, and corrections and updates are rapid. As a prerequisite, we require all switch vendors to complete some pre-configuration in the factory to ensure that the automatic software configuration and test can run.

Here we introduce the third automation solution: the AUTO tool also supports server automatic configuration and test, including configuration of the server BMC IP address and host name, configuration of the server BMC and BIOS, and hardware component tests (also sketched below). The Redfish interface and the pre-configuration are the key preconditions: we define a specification based on the Redfish protocol, vendors need to comply with that specification, and they pre-configure the same BIOS parameters in the factory to ensure the servers can be configured automatically. We have two modes to realize this automatic procedure: a centralized mode and a distributed mode. The distributed mode has higher efficiency, while the centralized mode is cleaner and more lightweight. The benefit: for a scale of 1,500 servers in one pod, the AUTO tool takes only one hour to finish auto configuration and test.

We also developed an automatic cable connectivity check solution, which can check the cable connectivity of a whole pod. We use the LLDP protocol to collect the actual cable connection information and compare it against the cable connectivity information in the LLD; for a whole pod, the automatic check takes less than five minutes (the comparison logic is sketched below as well).
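A minimal sketch of the switch login-and-push pattern described above, using the netmiko library; the device type, credentials, and configuration lines are assumptions, and a real run would take them from the LLD and the vendor pre-configuration spec:

```python
# Sketch of the "assign dynamic IP, then log in and configure" pattern
# for TOR/EOR switches. Device type, credentials, and config lines are
# illustrative; a real run would take them from the LLD and vendor spec.

from netmiko import ConnectHandler

# Hypothetical list of management IPs handed out by the DHCP server.
DISCOVERED_SWITCHES = ["192.0.2.101", "192.0.2.102"]

BASE_CONFIG = [
    "hostname tor-pending",  # placeholder until the LLD name is applied
    "lldp run",              # needed later for the cable-connectivity check
]

for mgmt_ip in DISCOVERED_SWITCHES:
    conn = ConnectHandler(
        device_type="cisco_ios",    # syntax varies per switch vendor
        host=mgmt_ip,
        username="admin",
        password="factory-preset",  # relies on the factory pre-configuration
    )
    output = conn.send_config_set(BASE_CONFIG)
    conn.save_config()
    conn.disconnect()
    print(mgmt_ip, "configured")
```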
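For the server step, here is a sketch of parallel BMC/BIOS configuration over Redfish; fanning the requests out concurrently is the kind of distributed-mode behavior that makes 1,500 servers per hour plausible. The BIOS attribute names and the system resource path are vendor-dependent placeholders:

```python
# Sketch of parallel server configuration over Redfish. The BIOS
# attribute names and the Systems path below are vendor-dependent
# placeholders, not part of CMCC's actual specification.

from concurrent.futures import ThreadPoolExecutor
import requests

requests.packages.urllib3.disable_warnings()

SERVERS = [
    {"bmc_ip": "192.0.2.11", "user": "admin", "password": "secret"},
    {"bmc_ip": "192.0.2.12", "user": "admin", "password": "secret"},
]

# Desired settings per a unified spec (illustrative attribute names).
BIOS_SETTINGS = {"Attributes": {"BootMode": "Uefi", "LogicalProc": "Enabled"}}

def configure(server):
    # The system ID ("1" here) varies per vendor implementation.
    base = f"https://{server['bmc_ip']}/redfish/v1/Systems/1"
    auth = (server["user"], server["password"])
    # Standard Redfish pattern: PATCH pending BIOS settings, applied on reboot.
    resp = requests.patch(f"{base}/Bios/Settings", json=BIOS_SETTINGS,
                          auth=auth, verify=False, timeout=30)
    return server["bmc_ip"], resp.status_code

# Distributed-mode flavor: many servers configured concurrently.
with ThreadPoolExecutor(max_workers=50) as pool:
    for ip, status in pool.map(configure, SERVERS):
        print(ip, status)
```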
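And for the cable check: once the expected links (from the LLD cable sheet) and the actual links (from LLDP neighbor tables) are both in hand, the comparison is simple set logic. This self-contained sketch omits how the LLDP data is collected from the switches, and the link data is illustrative:

```python
# Sketch of the cable-connectivity comparison: expected links from the
# LLD cable sheet versus actual links observed via LLDP. Collection of
# the LLDP neighbor data itself (CLI/SNMP scraping) is omitted here.

def normalize(links):
    """Treat each cable (local_dev, local_port, remote_dev, remote_port)
    as undirected so that A->B and B->A describe the same cable."""
    return {tuple(sorted([(l[0], l[1]), (l[2], l[3])])) for l in links}

# Expected cabling, as it would be read from the LLD cable sheet.
expected = [("tor-a01-1", "Eth1/1", "srv-a01-01", "eth0"),
            ("tor-a01-1", "Eth1/2", "srv-a01-02", "eth0")]

# Actual cabling, as reported by LLDP neighbor tables (illustrative).
actual = [("srv-a01-01", "eth0", "tor-a01-1", "Eth1/1"),
          ("tor-a01-1", "Eth1/2", "srv-a01-03", "eth0")]  # mis-cabled server

exp, act = normalize(expected), normalize(actual)
for link in sorted(exp - act):
    print("MISSING/MISCABLED (in LLD, not seen via LLDP):", link)
for link in sorted(act - exp):
    print("UNEXPECTED (seen via LLDP, not in LLD):", link)
```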
Our manager Zhiqiang will now further introduce the effect of this automation.

Okay, let me introduce the results. We used this solution in the network cloud project from phase one to phase two, and the integration efficiency and quality increased significantly: for one pod of about 1,000 to 1,500 servers, we spent only 1.5 hours on auto configuration and test, so the system integration time cost decreased sharply. Below is a time cost comparison of the different integration modes. On the left is manual configuration plus spot check, the traditional integration mode. The second is manual configuration plus automatic full check; the time cost decreased by about 50%. The third is manual configuration plus automatic full check plus partial automatic correction, which also decreased the time cost by about 50%. The fourth is automatic configuration plus automatic full check. This is our complete solution: total automation, where the time cost decreases most significantly. Next slide.

As you know, automation is built on technical standardization, so before you can implement automation you must do some prerequisite work to make sure it runs smoothly. Standardization means working together across the industry, and in CMCC we have launched two plans. One is the CMCC Open AUTO plan, which 14 partners have now joined, and under it we carry out best practices of integration innovation in CMCC's NFV cloud construction. The other is our open lab: we created an open lab for integration, and several vendors have joined this joint lab, where we can verify integration as well as continuous integration, continuous test, and continuous deployment. Through this work we can push the NFV cloud forward greatly. That's all. Thank you.

Thank you for the introduction. We really welcome friends and community members to join us to talk and discuss the automation work and the hardware integration.