Good morning, ladies and gentlemen. We are very pleased to join this OpenStack case-sharing session. Together with CIB Fintech, we are going to share the topic of the first and largest Chinese financial-industry cloud built with OpenStack.

First of all, I would like to introduce the speakers. Mr. Zheng Zizhou, general manager of cloud computing at CIB Fintech, will introduce the company and tell you why they chose OpenStack. In addition, we have three more speakers, all from the Chinese OpenStack company EasyStack. My name is Liu Jian; I am the technical manager for East China, and I will present the solution based on OpenStack. Mr. Wu Shubuo is responsible for delivery; he will cover the key points of the CIB Fintech cloud architecture design. Finally, our principal architect, Shi Kui, will answer your questions if necessary.

These are the four parts I just mentioned: part one, an introduction to CIB Fintech; part two, why OpenStack; part three, the solution based on OpenStack; part four, key points of the architecture design. OK, now welcome Mr. Zheng Zizhou to present the first and second parts.

Good day, ladies and gentlemen. My name is Zheng Zizhou. It is a great honor to attend this OpenStack Summit in Boston. I am general manager of the cloud computing business division of CIB Fintech, and I come from Shanghai, China. First of all, let me briefly introduce our company, CIB Fintech, and our financial-industry cloud. CIB Fintech is a member of the China Industrial Bank (CIB) Group. It was founded in December 2015 in Shanghai, China. The company is young, but it was born out of CIB's bank-to-bank platform, which has been operating successfully for 10 years. CIB Fintech is a digital solution provider focused on the financial industry. Our service offerings include IaaS, PaaS, SaaS, and BaaS, so we can offer integrated and comprehensive cloud services for small and medium-sized financial enterprises. Our financial-industry cloud has three major plans.
First, complete the upgrade of the existing cloud computing platform, building an industry cloud that is technically leading and in line with banking regulatory requirements. As of now, with the application of OpenStack technology, we have finished this first step. Secondly, relying on the experience of CIB Group's non-bank financial institutions, provide cloud services for various internet finance enterprises. Thirdly, as data accumulates, provide more accurate, more timely, and more personalized financial services for small and medium-sized financial enterprises.

Then, let me introduce the key milestones of our platform. In 2007, we started our business with our first customer, Dongying Commercial Bank in Shandong, China. By 2013, we had expanded our service to 15 banks. Only one year later, we made a significant breakthrough: our client count rose to 1,000. In 2015, CIB Fintech was founded. Up to now, we have provided cloud services to more than 300 banks and gained 10 years of cloud operating experience. As the business developed, our technology kept upgrading. We started the business on IBM Power Systems in 2007, then moved to virtualization in 2012. In 2016, we applied OpenStack to upgrade our cloud platform. This year, we have made some breakthroughs in blockchain and AI technology.

People may be curious why we chose OpenStack as our cloud operating system. Let's talk about the benefits of OpenStack. This is a map of China; you can see our customers cover almost the entire country. By using OpenStack technology, we can rapidly supply computing and storage capacity to users and help them rapidly set up a dedicated private network to ensure the security of their applications and data. Users can focus on their business capabilities, with no need to build and operate their own data centers, relying on our technical support and services to meet their financial application requirements. In the end, let me use three numbers to summarize our cloud platform.
First, we are the number-one, largest banking cloud service platform in China. Secondly, we have more than 300 bank users. Thirdly, we offer more than 400 cloud services, spanning IaaS to PaaS. Let me end my speech with a report published by Gartner in June 2017: CIB Fintech is a leader in banking-industry cloud services and has, under CIB, for many years provided financial-industry cloud services, managed hosting infrastructure and application services, banking business process outsourcing, and consulting services in China. Next, our partner company, EasyStack, will introduce the technical details of this project. Thank you.

Now I will continue the speech. Let's go into the third part, the solution based on OpenStack. The CIB Fintech cloud functional design includes three layers: core, service, and portal. The core layer is the most basic resource layer and includes compute, network, and storage. For compute resources, we can provide not only virtual machines based on KVM, but also containers and bare-metal service. Containers can be deployed both in virtual machines and on physical machines. We use the Ironic module to provide the bare-metal service, and at the same time we solve the problem of multi-tenancy in bare-metal service. For network resources, in this project we not only use a network solution based on pure software, but also have SDN controller integration. For storage resources, we have both Ceph and FC storage in use.

The service layer is for the tenants of the CIB Fintech cloud. It mainly includes three parts: cloud basic services, advanced services, and monitoring services. The cloud basic services part should be clear: the cloud provides three types of storage, namely block storage (cloud volumes), a shared file system (NAS), and object storage. We focus on the host HA function.
When a hypervisor is down, the automatic recovery of its VMs is triggered immediately; a few minutes later, the VMs are rebuilt on other available hypervisors. The advanced services part mainly includes operation and management services, such as tenant onboarding, installation, authorization, and upgrade. In the monitoring services part, we provide virtual and physical monitoring, covering VMs, cloud volumes, and subnets, as well as physical control, compute, network, and storage nodes. Tenants can define their own monitored items and triggers, and we can also analyze logs for troubleshooting. The portal layer mainly provides access to the CIB Fintech cloud and includes the dashboard UI, a command-line interface, and APIs. These APIs cover infrastructure, monitoring, RPC, and installation. The security part, shown on the right, runs through the whole CIB Fintech cloud.

Well, let's go into the last part, the key points of the architecture design. The first is multi-level organization. An enterprise tenant of the CIB Fintech cloud needs a multi-level organization to manage its cloud instances. We add one more level between domain and project, implemented as a subdomain based on the domain concept of Keystone. Using the domain, subdomain, and project concepts, users can organize their departments flexibly, get fine-grained management of all resources on the cloud platform, and achieve more accurate auditing and monitoring of resources for different departments and teams. Next is the billing system design.
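The host-HA rebuild described above can be sketched roughly as follows. This is a minimal simulation only, not the actual OpenStack implementation (which would drive Nova's evacuate workflow); the `Hypervisor` class and `evacuate` function are illustrative assumptions.

```python
# Sketch of the host-HA flow: when a hypervisor is detected as down,
# its VMs are rebuilt on the remaining healthy hosts.
from dataclasses import dataclass, field

@dataclass
class Hypervisor:
    name: str
    capacity: int                      # how many more VMs this host can take
    vms: list = field(default_factory=list)
    up: bool = True

def evacuate(hypervisors):
    """Rebuild VMs from down hosts onto the healthy host with most spare capacity."""
    moved = {}
    for host in hypervisors:
        if host.up:
            continue
        for vm in list(host.vms):
            targets = [h for h in hypervisors if h.up and h.capacity > 0]
            if not targets:
                raise RuntimeError("no capacity left for %s" % vm)
            target = max(targets, key=lambda h: h.capacity)
            host.vms.remove(vm)
            target.vms.append(vm)
            target.capacity -= 1
            moved[vm] = target.name    # record where the VM was rebuilt
    return moved

if __name__ == "__main__":
    hosts = [Hypervisor("compute-1", capacity=0, vms=["vm-a", "vm-b"], up=False),
             Hypervisor("compute-2", capacity=3),
             Hypervisor("compute-3", capacity=1)]
    print(evacuate(hosts))   # both VMs land on compute-2, the host with most spare capacity
```

In the real deployment a monitoring component would first fence the dead host before triggering the rebuild, to avoid the same VM running twice.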
Based on Ceilometer resource metric data, the billing system defines billable items such as disks and VMs. Billing calculates the cost accurately, down to the second. All the data is updated into the billing database. Billing also aggregates the basic statistics data into different levels: cloud, domain, department, and project. At each level, users can export statistics of their usage by day, week, month, and year, which helps the IT department predict the requirements of the IT system and also gives aggregated data on cost and resource allocation across different departments. OK, I will invite my colleague, Wu Shubuo, to introduce the next key points.

OK, the third key point is large-scale deployment. The CIB Fintech cloud has 400 nodes now. Four nodes make up a controller cluster running the OpenStack service APIs, high-availability modules, and so on. For the efficiency of database access and OpenStack service communication, three nodes are deployed as a MySQL Galera cluster serving as the database for the CIB Fintech cloud, and another three nodes make up a RabbitMQ cluster for message transmission. More than 300 compute nodes are divided into an APP aggregate and a DMZ aggregate. For security reasons, the APP aggregate and the DMZ aggregate use their own storage pools. Considering the performance needs of database applications and big-data applications, CIB Fintech deploys the OpenStack Ironic service, which provides bare-metal management; database and big-data applications run on bare metal, using the same private network as the CIB Fintech cloud virtual machines. Ceph is integrated with OpenStack: there are 14 storage nodes and 400 OSDs. In the future, the CIB Fintech cloud will use commercial storage to provide more volume types. The CIB Fintech cloud is planned to grow to 1,000 nodes this year, and the architecture will be changed for the larger deployment.
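As a rough illustration of the billing roll-up just described, the sketch below prices usage samples of the kind Ceilometer meters, then aggregates project costs up to the domain level. The rate table, sample format, and function names are illustrative assumptions, not the production schema.

```python
# Sketch: price per-resource usage samples, then roll costs up
# from project level to domain (department) level.
from collections import defaultdict

RATES = {"vm": 0.05, "disk_gb": 0.001}   # assumed price per unit-hour

def bill(samples, project_domain):
    """samples: iterable of (project, meter, units, hours) tuples.
    project_domain: maps each project to its owning domain/department."""
    per_project = defaultdict(float)
    for project, meter, units, hours in samples:
        per_project[project] += RATES[meter] * units * hours
    per_domain = defaultdict(float)
    for project, cost in per_project.items():
        per_domain[project_domain[project]] += cost   # roll up to domain
    return dict(per_project), dict(per_domain)

if __name__ == "__main__":
    samples = [("proj-a", "vm", 2, 24),        # 2 VMs for 24 hours
               ("proj-a", "disk_gb", 100, 24), # 100 GB of disk for 24 hours
               ("proj-b", "vm", 1, 24)]
    projects, domains = bill(samples, {"proj-a": "dept-x", "proj-b": "dept-x"})
    print(projects, domains)
```

A real billing service would also keep per-sample timestamps so usage can be exported by day, week, month, or year, as described above.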
For example, the OpenStack Ceilometer service and the billing service will use their own RabbitMQ cluster, and MongoDB will be separated from the controller nodes. The CIB Fintech cloud integrates with H3C SDN for the network, router, and firewall functions. Four nodes make up the SDN controller cluster, and there are two OpenFlow switches in each rack acting as leaf nodes. The leaf nodes act as the VXLAN layer-2 gateway and are responsible for translating between VLAN and VXLAN. There are four core switches acting as spine nodes, which serve as the VXLAN IP gateway; the spine nodes are responsible for the communication between VXLAN and the classic network. Neutron FWaaS integrates with H3C M9000 firewall devices, realizing the protection of the tenant network.

In this SDN architecture, when a tenant creates a network or subnet, there is no change in the configuration of the switch fabric. The first packet from a VM in the network is sent to the SDN controller, the SDN controller establishes a VXLAN tunnel, and the following packets use this VXLAN tunnel to communicate. Packets on the compute node are in VLAN mode and are converted to VXLAN mode on the leaf device, as described. When a tenant creates a router, a VPN instance is created on the gateway switch, and associating a subnet to the router creates a VSI interface on the gateway switch. North-south data streams go out and in through this gateway.

Having introduced the north-south data stream in the SDN architecture, let us now consider east-west traffic. Suppose two tenants need to communicate: in this situation, the data stream must be transmitted through the two tenants' routers, and all the packets must be filtered by the tenant firewalls. In the other situation, two subnets within one tenant need to communicate, and the tenant's router transmits the data between the two subnets. But how is the security between the two subnets maintained?
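The controller-driven flow setup described above (first packet punted to the controller, later packets using the installed VXLAN tunnel) can be sketched as follows. All class and method names here are illustrative assumptions, not the H3C controller API.

```python
# Sketch of leaf-node behavior: compute-node traffic arrives VLAN-tagged;
# on a flow-table miss, the leaf asks the SDN controller, which allocates
# a VXLAN VNI for that tenant VLAN; later packets reuse the cached flow.
class SdnController:
    def __init__(self, first_vni=5000):
        self.next_vni = first_vni
        self.vni_by_vlan = {}

    def allocate_vni(self, vlan):
        """Allocate (or return the existing) VXLAN VNI for a tenant VLAN."""
        if vlan not in self.vni_by_vlan:
            self.vni_by_vlan[vlan] = self.next_vni
            self.next_vni += 1
        return self.vni_by_vlan[vlan]

class LeafSwitch:
    def __init__(self, controller):
        self.controller = controller
        self.flow_table = {}           # vlan -> vni, installed on first packet

    def forward(self, vlan):
        if vlan not in self.flow_table:                 # table miss: punt to controller
            self.flow_table[vlan] = self.controller.allocate_vni(vlan)
        return self.flow_table[vlan]                    # encapsulate with this VNI

if __name__ == "__main__":
    leaf = LeafSwitch(SdnController())
    # first packets of VLANs 101 and 102 trigger VNI allocation;
    # the third packet hits the cached flow entry
    print(leaf.forward(101), leaf.forward(102), leaf.forward(101))
```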
As the picture shows, the east-west data stream is steered to a security resource pool. In the security resource pool, the vFW is a virtual firewall that controls the data stream between two subnets within a tenant, and the vLB is a virtual load balancer. The vLB and the vFW make up the service chain. As shown in the picture, this is the real data stream in the network fabric. So that's all the content of our sharing. Do you have any questions?

The SDN solution was integrated in cooperation with our partner; the details of the implementation were done by the partner, and we integrated it with our OpenStack. Part of the NFV functions are also supplied by our partners. Any questions?

We have deployed our cloud system in two separate data centers, and we use a multi-region solution: a single Horizon dashboard and a single Keystone for user management to manage the multiple regions. Any other questions?

In our total solution for the CIB platform, we have made many improvements, such as the subdomain enhancement and some other customizations. For this kind of improvement, we had to work across different OpenStack modules, such as Keystone, Ceilometer, and the billing system. Based on the domain concept, we implemented something similar to the nested-project solution (parent project and sub-project): we implemented domain and subdomain, along with quota functions for both, to control the usage of different departments. That gives very fine-grained measurement, very accurate for the IT department; they can predict which departments will use more resources, which is very convenient for customers when making the budget for the new year. Yes, we reconstructed the domain and the subdomain, and we reused the data structure of the domain with extended parameters.
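The domain / subdomain / project quota idea discussed in the Q&A can be sketched as a small tree where usage rolls up and every level enforces its own quota. The structure and names below are illustrative assumptions, not the Keystone extension itself.

```python
# Sketch: each organizational level (domain, subdomain, project) has its
# own quota; an allocation in a project is checked against the project,
# its subdomain, and the domain before any usage is committed.
class Node:
    def __init__(self, name, quota, parent=None):
        self.name, self.quota, self.used, self.parent = name, quota, 0, parent

    def allocate(self, amount):
        # pass 1: every ancestor must have headroom (no partial commits)
        node = self
        while node:
            if node.used + amount > node.quota:
                raise ValueError("quota exceeded at %s" % node.name)
            node = node.parent
        # pass 2: commit usage at every level
        node = self
        while node:
            node.used += amount
            node = node.parent

if __name__ == "__main__":
    domain = Node("domain", quota=100)
    subdomain = Node("dept-x", quota=60, parent=domain)
    project = Node("proj-a", quota=50, parent=subdomain)
    project.allocate(40)
    print(domain.used, subdomain.used, project.used)   # 40 40 40
```

Because usage is visible at every level, the IT department can read off per-department consumption directly, which is what makes the budgeting described above practical.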
Because we use extra parameters, the API stays the same as the original; we supply different parameters to implement the new function. Actually, we did have to do some more work to reorganize the data. And yes, we use the same API, just with different parameters; that's the difference. Any more questions? OK. Thank you, everyone. Thank you.