OK, good afternoon. My name is Luo Gangyi, and I come from China Mobile. Today I will share some experience about how to use OpenStack to manage existing virtualization and storage. Here is the outline of my talk; let's begin with the current situation.

As you may know, China Mobile is the largest telecommunication company in the world. We have more than 800 million users, more than 100 data centers, and 100,000 racks in production. Most of our data centers are built on traditional IT architecture: VMware plus IP-SAN or FC-SAN, plus x86-based servers. Since we are turning fully to OpenStack, most of our customers are the administrators of these data centers, and they want to use OpenStack to manage everything. They want to use OpenStack to manage VMware, to manage the SAN, to manage existing VMs and networks, and to integrate with the existing monitoring system. They also want to keep their operations unchanged.

But this is not that easy, and we have encountered three major problems. The first is unreliable SAN drivers and multipath software. The second is how to import existing virtual machines and networks from vCenter. The third is the operation gap between VMware and OpenStack.

Let's first talk about the SAN. We have integrated almost all the popular SANs, from EMC, HP, and IBM to Fujitsu and Hitachi, and none of these SAN drivers is perfect. The major problems we encountered include incomplete functions, such as unsupported clone and live migration, inaccurate statistics, and unintelligent scheduling. For example, if we have more than one storage pool, the scheduler cannot smartly choose the best one. Besides these, the two more serious problems are wrong multipath devices and overload on the storage ports, which we will analyze next.

Let's look at this scenario. In the beginning, the user attaches volume 1 to VM1. Assume we have four storage ports, so we will have four paths; the path names are as the green squares show: IP0:LUN0, IP1:LUN0, IP2:LUN0, and IP3:LUN0. These four paths are mapped to four devices, sdb, sdc, sdd, and sde, and the multipath software maps these four devices into one multipath device, as the orange color shows.

Now let's do a simultaneous detach and attach. Assume one user detaches volume 1 from VM1 while another user attaches volume 2 to VM2. Notice that the detach has two phases: one is disconnecting the paths on the compute node, the other is removing the volume mapping on the SAN, and these two phases are asynchronous. So it is possible that we have cleaned up all the paths on the compute node, but the driver has not yet finished the disconnection on the SAN. If at this moment we do another attach, as we said, another user attaching volume 2 to VM2, the node will run a rescan, and the rescan will rediscover those four paths, like that. The volume driver will eventually finish the disconnection, so the green part will vanish, but this leaves behind a garbage device and garbage paths. Even worse, if we go on and attach volume 3 to VM3, since LUN 0 no longer exists, the operating system will reuse LUN 0 and map it onto the garbage device, as the orange color shows. And worst of all, if we attach volume 1 to VM1 again, it will use the same path again. So you can see from this that two virtual machines will read and write the same device, which will definitely result in data corruption. It is a serious problem.
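Before I talk about the solutions, here is a minimal sketch of how such a leftover device can be spotted on a compute node. This is only an illustration, not our production script: it simply shells out to the standard multipath -ll command from multipath-tools and flags any map whose paths are not active and ready, which is a hint of the garbage device I just described. The parsing and the heuristic are simplified assumptions of mine, not exactly what we run.

#!/usr/bin/env python
# Illustrative sketch only: list every multipath map on this node and
# flag the sd* paths that do not look healthy (possible garbage devices).
# Must run as root, because it calls `multipath -ll`.
import re
import subprocess


def multipath_maps():
    """Parse `multipath -ll` into {map_name: [(device, dm_state, checker_state)]}."""
    out = subprocess.check_output(['multipath', '-ll']).decode()
    maps, current = {}, None
    for line in out.splitlines():
        # Map header, e.g. "mpatha (36006...) dm-3 EMC,SYMMETRIX"
        # or "36006... dm-3 EMC,SYMMETRIX" without user_friendly_names.
        header = re.match(r'^(\S+)\s+(?:\(\S+\)\s+)?dm-\d+', line)
        if header:
            current = header.group(1)
            maps[current] = []
            continue
        # Path line, e.g. "  |- 3:0:0:0 sdb 8:16 active ready running"
        path = re.search(r'\d+:\d+:\d+:\d+\s+(sd\w+)\s+\S+\s+(\w+)\s+(\w+)', line)
        if path and current:
            maps[current].append((path.group(1), path.group(2), path.group(3)))
    return maps


if __name__ == '__main__':
    for name, paths in multipath_maps().items():
        suspicious = [dev for dev, dm_state, checker in paths
                      if dm_state != 'active' or checker not in ('ready', 'ghost')]
        print('%s: %d paths, suspicious: %s' % (name, len(paths), suspicious or 'none'))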
For this problem we have three solutions. I think the root solution is to add a lock between Nova and Cinder, but since we can have several cinder-volume services and hundreds of nova-compute nodes, such a lock is hard to design. The partial solutions are: we can add some wait time after every attach and detach, and we can run a script that periodically checks for these wrong devices, along the lines of the sketch I just showed, and raises an alarm if it finds any.

Another serious problem with the SAN is overload on the storage ports. Unlike VMware, OpenStack maps hosts to the storage ports dynamically, so there is no way for Cinder to know the information of all hosts before attaching a LUN. As a result, Cinder maps all hosts to the same storage ports configured in the Cinder configuration. In a small environment this is not a problem, but in our production environment we may have thousands of compute nodes, and if we map all of them to the same storage ports, the ports become overloaded. When that happens, the connection to the LUN may get stuck or die, the user will see I/O errors in the virtual machines, and their file systems will become read-only. In the current Cinder volume drivers I think there is no good solution. We cooperated with some SAN manufacturers on temporary solutions, such as planning the host-to-storage-port mapping in advance and passing that map to the Cinder volume driver. So it is a static map, not a dynamic one, and we do not think it is a good way: every time we add a new host we have to change the map by hand; it is not automatic.

Let's turn to the VMware integration. For VMware integration, our customers raise three major demands. They want to manage VMware through OpenStack, they want to keep the operations offered by OpenStack similar to what vCenter provides, and they want to import existing virtual machines, disks, and networks. But there are some problems. First, the community has no Neutron driver for the DVS or the standard vSwitch, and OpenStack has no live migration for VMware, no clone, and no incremental snapshot. Another problem is that OpenStack can only see the cluster in VMware; it cannot see the individual ESXi hosts and datastores. This has some influence: maybe a user has four virtual machines belonging to one business system and wants to spread them across different ESXi hosts and different datastores to achieve high availability, but since OpenStack does not have that information, the scheduling relies totally on vCenter, and this is not what the user wants.

For importing existing virtual machines there are also some problems. The first is that VMware has no volume model, so we cannot import the second disk of a VMware virtual machine into OpenStack; there is no way to describe such a disk as a volume in OpenStack. The snapshot models of VMware and OpenStack are also incompatible: basically, in OpenStack the snapshot of a local disk is a full snapshot, while in VMware it is an incremental snapshot. Another problem is that VMware allows duplicated VLANs but OpenStack does not, so if we encounter duplicated VLANs we can only import one of them, which is still a problem for us. And as we can see, in the community version Neutron can only support NSX, and nova-network, which does work, is deprecated, so we had to write our own ML2 mechanism driver to support the DVS and the standard vSwitch.
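Since I cannot go deep into the driver code here, the following is only a very rough skeleton of what an ML2 mechanism driver for the DVS looks like, to give you the shape of it; it is not our real driver. It assumes the Kilo-era base class in neutron.plugins.ml2.driver_api, the class name and the 'dvs' vif_type are illustrative, and everything that would actually talk to vCenter is left as comments.

# Very rough skeleton of an ML2 mechanism driver for the vSphere DVS.
# It shows the shape of such a driver only; the vCenter calls are comments.
import logging

from neutron.plugins.ml2 import driver_api as api

LOG = logging.getLogger(__name__)


class DvsMechanismDriver(api.MechanismDriver):
    """Maps Neutron networks and ports onto DVS port groups."""

    def initialize(self):
        # Read the vCenter address and credentials from the configuration
        # and open a session here (for example with oslo.vmware).
        LOG.info("DVS mechanism driver initialized")

    def create_network_postcommit(self, context):
        network = context.current
        # For a VLAN network, create a DVS port group whose VLAN id is
        # network['provider:segmentation_id'].
        LOG.info("would create a port group for network %s", network['id'])

    def delete_network_postcommit(self, context):
        # Remove the port group that backs this network.
        LOG.info("would delete the port group for network %s",
                 context.current['id'])

    def bind_port(self, context):
        # Report how ports on this network are plugged; a DVS-backed port
        # uses its own vif_type so that the Nova VMware driver can place
        # the VM's NIC into the right port group.
        for segment in context.segments_to_bind:
            context.set_binding(segment[api.ID],
                                'dvs',  # vif_type (illustrative)
                                {})     # vif_details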
The basic procedure of our driver is as the blue squares show. To make the operation of OpenStack more similar to VMware, we also added some dedicated APIs. These are the four APIs we added. os-server-clone is used to clone a virtual machine. os-live-migrate-of-vmware does live migration inside VMware. os-snapshots-of-vmware uses the VMware way to do snapshots, that is, incremental snapshots, and we can also use this API to revert. The last one, os-vmware-resources, is used to inspect the information inside the cluster, such as the CPU and memory of each ESXi host, the usage of the datastores, and how many datastores the cluster has. We also enhanced some existing APIs. For example, we added some metadata to nova boot, so when we boot we can designate the ESXi host and the datastore; in the migration API we also added some metadata to allow users to designate the ESXi host and the datastore.

The next problem is importing existing virtual machines and networks from VMware into OpenStack. To do this we wrote some scripts. What the scripts mainly do is, first, query the virtual machines through the vCenter APIs. We can differentiate them by the managedBy field: if a virtual machine was created by OpenStack, the value of managedBy will be OpenStack; otherwise it will be something else. So we can find the existing resources with this query. Then we query the detailed information about each VM, and with this information we create a Nova DB record and a Neutron DB record, so the virtual machine can be brought into OpenStack directly. But we still have some remaining problems. First, as we said before, we cannot import the second disk of a virtual machine, since there is no way to describe it in OpenStack. We cannot import snapshots, and we cannot deal with duplicated VLANs. For these problems we do not have a solution yet.

For integrating with the existing monitoring system, our experience is to extend Ceilometer. We implement a new publisher in the Ceilometer pipeline to fit the format of the existing monitoring system; if the existing monitoring system uses SNMP, we can add an SNMP publisher. And we think UDP is preferable for sending the data: the default publisher uses the message queue, and too much data may block the MQ.

OK, that is all of my presentation. Do you have any questions?

Q: Hi. I'd like to know which version of OpenStack you were running when you faced these problems.
A: Kilo. But we have tested Mitaka, and some of the problems are still there.
Q: It's still the same?
A: Yes, the same.
Q: Are you already running it in production?
A: Yes.
Q: OK, good.
A: OK, thank you.