OK, hello, everyone. This is my first time here delivering this presentation: OpenStack virtual machines, quickly available. This is a fast-boot technology we use in our public and private cloud. Before the presentation, I would love to give a brief introduction about me and my company. My name is Michael Chiu, and my main scope is high-performance computing, including GPU and CPU optimization, and networking, like OVS-DPDK and VPP, to accelerate cloud networking. Another thing my team does is storage-related: Ceph enhancement. We use RDMA and erasure-coding offloading to accelerate our storage performance. One example of our networking work is that we raised VXLAN throughput from two or three gigabits to nine gigabits. That is our main work. My company is called ChinaC; in Chinese, it is Huayun Shuju. It is a public, private, and hybrid cloud provider in China, and our products also include big data, CDN, and other things. If you want more information, you can go to this link to get what you want.

This is the agenda. Today I will first give you a brief introduction to this technology's background, how it works, and which techniques we use. Second, I will give you a brief introduction to the acceleration methods, including the technical details and the implementation. Then I will show you the performance data, comparing with a native OpenStack virtual machine boot. Think of Docker: when you start a container, it is very fast, right? If you apply our technology, you create virtual machines and find that boot-up needs maybe only one second; it is also very fast. After the performance data, I will give you a demo to see how fast it is.

OK. When we create a virtual machine, we can separate the time into two parts, T1 and T2. T1 means: when you create a machine from the website or the console, the request goes through OpenStack Nova, and Nova generates the XML file for libvirt. Using this XML file, libvirt first parses the XML and generates the QEMU command line, and then launches the QEMU process to start up this guest. That is what we call T1, and it includes the OpenStack scheduling time. Most optimization here is in scheduling. But we take another way: we accelerate T2. That means everything after the QEMU process starts: we go through the guest bootloader, then boot the kernel and the guest OS, and lots of services start up. This costs a lot of time. For example, CentOS 7.2 with X Window installed takes almost twenty more seconds to boot up. After T1 plus T2, the user can log in: when we access the guest using VNC or the console screen, we see the login screen.

Here, you see: what is the difference between one VM and another if the user creates many virtual machines from a single image? What does a single image mean? It means all these machines run, say, the same Ubuntu or CentOS 7.2. So what is the difference between them? I think they are almost the same, except what? Except the CPU count, the memory size, the NICs, and the disks. Otherwise they are the same, except what I just told you. And if the user selects the same flavor, all of them are the same. Everything before the user can log in: this is the key part we try to optimize.

OK: hot-pluggable devices for guest OSes. By here, I think some of you are aware of what we do, but I still need to say what is hot-pluggable for a guest. CPU, I think, is hot-pluggable.
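In code, hot-plugging from the libvirt side looks roughly like this minimal libvirt-python sketch. The domain name "base-vm" and the sizes here are illustrative assumptions, not from the talk:

```python
import libvirt

# Connect to the local QEMU/KVM hypervisor and look up a running guest.
conn = libvirt.open('qemu:///system')
dom = conn.lookupByName('base-vm')  # hypothetical domain name

# Hot-plug vCPUs: raise the online vCPU count of the live guest to 2
# (only possible up to the domain's configured maximum vCPUs).
dom.setVcpusFlags(2, libvirt.VIR_DOMAIN_AFFECT_LIVE)

# Hot-plug memory: attach a 1 GiB DIMM device to the live guest
# (the domain must have been started with <maxMemory> slots available).
dimm_xml = """
<memory model='dimm'>
  <target>
    <size unit='MiB'>1024</size>
    <node>0</node>
  </target>
</memory>
"""
dom.attachDeviceFlags(dimm_xml, libvirt.VIR_DOMAIN_AFFECT_LIVE)
```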
Here we see this just from libvirt or QEMU, not including OpenStack. Memory, storage, NICs, USB, and others like serial consoles can all be hot-plugged, except the video accelerator card. But in OpenStack, we now support hot-plugging storage, NICs, and USB; CPU and memory hot-plugging are not supported. Fortunately, in our company's OpenStack releases, we do support CPU and memory hot-plugging. This is the key to how we do the acceleration.

OK, here: a newly created VM equals the base VM plus a delta. What does the delta mean? As I just said, the CPUs, memory, storage, or NICs differ from the base VM. And what is the base VM? The base VM has the least CPUs, only one; the least memory, like only 512 megabytes or one gigabyte; and storage. Here we must have one system disk, not including data disks. All data disks can be hot-plugged; but the system disk, I don't think so, it cannot be hot-plugged. The VM delta is the difference between the customer's configuration and the base VM. If the base VM has one CPU, one gigabyte of memory, and one system disk, and the user creates a new machine, we can increase the CPUs, memory, disks, NICs, and other hardware of this kind.

OK, how do we accelerate? We first boot up the base VM and save it to use later: we save the base VM's state, the VM state, which is the memory data. After that, when we create a new machine, we use the corresponding image; the image is mapped to the VM state. So when OpenStack creates the disk image, we check whether it has a VM state. If we do not have it locally, we pull it from the image server. Then we reduce the instance configuration, to one CPU and one gigabyte of memory, and boot it up. After that, we do the live upgrade, and I think the hardware configuration is then almost the same as what the user wanted. But that is not enough. We need to modify the guest info: the password, the username, the hostname, the MAC and IP, and whatever else you want.

But what is the challenge with libvirt? Because we just saved the VM state, libvirt cannot recognize a VM state that does not belong to the instance itself. So here, let's see the process. Nova calls the libvirt API to create the domain. If libvirt does not find a VM state, OK, it goes to a normal VM boot. If it finds one, it checks the VM state's UUID and name to ensure the state belongs to the instance itself. If that is correct, it restores the VM, just like migrating from a file, so it is very fast. If that fails, it falls back to a normal VM boot. So what did we do to let libvirt recognize the VM state file? We rewrite the VM state header.
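The decision logic sketched against the public libvirt-python bindings could look roughly like this; it is an illustration, not the company's actual patched code, and `header_matches` is a crude hypothetical stand-in for their check on the rewritten header:

```python
import os
import libvirt

def header_matches(path, uuid, name):
    # Crude stand-in for the talk's header check: the saved state embeds
    # the domain XML, so look for this instance's UUID and name in it.
    with open(path, 'rb') as f:
        head = f.read(64 * 1024)
    return uuid.encode() in head and name.encode() in head

def boot_instance(conn, instance_xml, vmstate_path, uuid, name):
    # Fast path: a VM state mapped to the image exists and belongs to us.
    if os.path.exists(vmstate_path) and header_matches(vmstate_path, uuid, name):
        # Restoring is like resuming from a live-migration file, so the
        # bootloader, kernel, and service startup are skipped entirely.
        conn.restoreFlags(vmstate_path, instance_xml, 0)
        return conn.lookupByName(name)
    # Slow path: fall back to a normal cold boot from the disk image.
    return conn.createXML(instance_xml, 0)
```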
Another thing is data consistency. One image to one VM state: the mapping must strictly be a bijection. If you mix things up, what will happen? I think your VM could still boot, but it will probably have lots of data inconsistency, or it cannot reboot, or you can even never use it. Another thing: when you save the VM state, you should keep both the VM state and the image file unchanged together. Because if you use this image file to boot up another guest, the data in the image file will be changed, and the data must stay consistent. So before you push them to the image repository, do not touch them anymore. Just as I said on the earlier page, this is weak consistency: the consistency is controlled by the administrator, who does the deployment. But if he does something wrong, what happens? It is out of control. So another thing we considered is doing checksums on the VM state header.

Just look at this picture: the header, the XML, and the memory data. This is the libvirt VM state format. The header includes magic, version, XML length, was-running, compression, and unused fields. We use the last unused field to do the checksum. But it still has an issue. What issue? It costs a lot of time to calculate the checksum. So it is still a problem, and it would take a lot of time to solve; at this stage we just use weak consistency.

We have three acceleration stages: deployment, pre-creation, and post-creation. In deployment, we make sure of data consistency: the relation between the image and the VM state must be a bijection, just as I said. And we keep metadata in the VM state, because we need the VM state's flavor, like the CPU count and the memory size. The image filter is very important, because we push the VM states back to the image server, the image repository, and if you do not do the filtering, users may see these files. That is unacceptable. In pre-creation, we pull the VM state, modify its name and location, and put the VM state into the libvirt folder. Libvirt will automatically recognize this file, and we rewrite its header using the instance's XML file. Then we boot it up; it is very fast. In post-creation, after it boots up, we do the live upgrade and inject the username and password. Also, don't forget to change the MAC and IP: because we do a restore, the MAC and IP are the same as the base VM's and do not match the VM we created, so we need to change them.
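The talk does not say exactly how this post-creation fix-up is done; one plausible shape using stock libvirt calls is sketched below. The bridge name, interface XML, and function names are assumptions, and `setUserPassword` requires the QEMU guest agent running in the guest:

```python
import libvirt

# Hypothetical replacement NIC definition carrying the new instance's MAC.
IFACE_XML = """
<interface type='bridge'>
  <mac address='{mac}'/>
  <source bridge='br0'/>
  <model type='virtio'/>
</interface>
"""

def post_creation_fixup(dom, user, password, old_iface_xml, new_mac):
    # Reset credentials inside the guest via the QEMU guest agent,
    # since the restored guest still has the base VM's accounts.
    dom.setUserPassword(user, password)
    # The restored guest also carries the base VM's MAC address: detach
    # the old NIC and attach one with the correct MAC, so the guest can
    # re-run DHCP and pick up its own IP.
    dom.detachDeviceFlags(old_iface_xml, libvirt.VIR_DOMAIN_AFFECT_LIVE)
    dom.attachDeviceFlags(IFACE_XML.format(mac=new_mac),
                          libvirt.VIR_DOMAIN_AFFECT_LIVE)
```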
Here is another idea I had. This one is even faster, but much more complex. What you see here means that all we do is live migration: we just start up one VM, the base VM, and whenever a user creates a VM, we live-migrate the base VM to multiple hosts and then do the same things as in the former approach. The challenge is that QEMU does not support local, same-host migration, so we would need to modify QEMU and libvirt to support this.

This is my performance data, including the scheduling time. Normal boot is the blue, and fastboot is the light blue. I should change the colors, but I think we can see it. With fastboot, we use about 3 seconds; with normal boot, we use 30.8 seconds. This is measured from when you click create, or type the boot command, and wait for login; when you can log in, the timer stops.

OK, I will do a demo so I can show you how fast it is. First, because I am using a VPN back to China, it will probably be slow. To make sure this one has already booted up, we just check it. OK, this is a booted-up one, and we just remove it. Sorry, it is in Chinese, but I can show you the steps. OK, the VNC has closed automatically. We now create one named Michael. OK, type, general, source, OK, networking, security. We can connect to it. OK, it can log in. And we can try it. Yeah, you can see we can log in. So fast: almost as soon as you click create, you can log in. It is very fast, and you can do anything you want, like reboot it. But when you reboot, it goes down the slow path and boots up normally, which is very slow, because we installed X Window and we are using the console to log in. So it is slow. OK, and shut it down.

OK, any questions?

Did you publish any technical details on this on the internet, or is it your private solution that you want to share? Actually, right now it is our private solution. We will push our code to OpenStack. Yeah, it will be open.

Sorry, Windows VMs? OK, actually, the approach does not depend on whether it is a Windows or a Linux guest. But indeed, there is a limitation, because we use the live upgrade, and for Windows only two editions support it, the server editions; most Windows guests do not support it, because Microsoft has limited the hot-upgrade functionality to only the top Windows editions. I can check that for you, I think. But I want to say one thing: Windows guests will be supported later, using our other technologies. Sorry, I said before that there are five or more Windows editions, but only two support this. You can check it offline.

Any other questions? We changed the code in OpenStack and in libvirt. I think we will probably push it upstream next July or August, in my plan. Also, if you have any questions about this technology or others, including networking, you can contact me for more details. Our company is very open about this. Any more questions? OK, if not, I will finish my session. Thank you.