This is today's agenda. First, I will introduce myself and my group. Then I will talk about OpenStack in Intel IT and how many OpenStack clouds there are within Intel IT. Third, I will talk about the way we do cloud: I will introduce our overall cloud workflow and the steps within it, and then go through each practice in detail. Last but not least, I will talk about how we integrate the OpenStack cloud with the Intel validation cloud. My name is Shuquan Huan. I joined Intel in 2011 as a software engineer, and I am a Scrum master. Currently I lead the IT Engineering Computing OpenStack team, which provides cloud solutions for our validation lab customers and for Intel internal use. Personally, I focus on cloud solutions and agile methodology. My group, Intel IT Engineering Computing, provides cloud solutions within Intel. Currently, the major customer of our group is the Intel validation labs: we provide them a cloud solution that helps them establish validation environments as quickly as possible, gives them the ability to create compute, network, and storage resources, and helps them link those resources together. Our group started its OpenStack journey in 2011. There are actually three types of OpenStack cloud within Intel IT. The first is called the Silicon Design Cloud; this cloud provides cloud services to silicon design engineers. The second is called the validation cloud; as I just mentioned, it provides the cloud solution to the validation labs and helps the Intel validation teams shorten validation time. The third is the hosting cloud, which is for hosting purposes. Before OpenStack, Intel had other cloud solutions for providing cloud services.
But going forward, the tendency is to use OpenStack to manage all of the infrastructure, both existing and new. So now let me tell you the way we do cloud. There is no doubt that we have to do some customization on top of OpenStack to fit our business needs. When a developer makes a customization, they submit the code to our local repository, and that triggers the following stages: build, pre-release, and production. Let me show you an example. Say we have a snapshot improvement. The change goes to the local repo, and a server picks up the change and applies it to the build stage. In this stage, it runs unit tests and static checks to see if any errors happen; if something fails, the flow stops. If it passes, the change is automatically deployed to the pre-release environment. Once the code is deployed there, Tempest runs to verify the functionality of the cloud. If everything works and there is no problem with the code, it goes on to the production environment. In the production environment, we have a cloud data analysis service that gives our operators data-analysis capability to help them troubleshoot and to give them suggestions. We also developed our own one-stop-shop portal, into which we integrate many functions, such as monitoring, alarms, and the Tempest results. Everything operations-related can be found in this portal. So this is our workflow for developing and operating a cloud, and in the following I will introduce each part in detail. Some of you may be familiar with this workflow. Yes, it is actually a continuous delivery workflow. Why did we enable this CD approach? Because we want our developers to get feedback on the code they submit.
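The gated build, pre-release, and production flow just described can be sketched minimally in Python. This is an illustration only, assuming nothing about the actual Intel tooling: the stage names are from the talk, but the checks stand in for real unit tests, static checks, and Tempest runs.

```python
# Sketch of the gated pipeline: each change passes through build,
# pre-release, and production in order, and the flow stops at the
# first failing stage. Checks here are illustrative placeholders.

def run_pipeline(change, stages):
    """Run `change` through each (name, check) stage; stop on first failure."""
    completed = []
    for name, check in stages:
        if not check(change):
            return completed, name      # flow stops; report the failing stage
        completed.append(name)
    return completed, None              # all stages passed

# Illustrative checks standing in for real unit tests / Tempest runs.
stages = [
    ("build", lambda c: c.get("unit_tests_pass", False)),
    ("pre-release", lambda c: c.get("tempest_pass", False)),
    ("production", lambda c: True),     # deploy step, assumed to succeed
]

ok_change = {"unit_tests_pass": True, "tempest_pass": True}
bad_change = {"unit_tests_pass": True, "tempest_pass": False}

print(run_pipeline(ok_change, stages))   # (['build', 'pre-release', 'production'], None)
print(run_pipeline(bad_change, stages))  # (['build'], 'pre-release')
```

The key property is that a failure anywhere short-circuits the flow, so a bad change never reaches production.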
And developers can see the impact of their code: whether it is a developer or an operator, once they apply a change, they can see the result and get feedback quickly by triggering this workflow. So first, let me introduce this workflow in detail. How many of you have heard about CD, continuous delivery? Can you show your hands? And are you using this workflow in your development? OK. So why do we have to use CD? At the beginning, when we developed and operated the OpenStack cloud, our approach was to have each developer configure the cloud manually. That caused a problem: configuration was easy to lose. When we wanted to copy a configuration exactly from one server to another, it was hard for a developer or operator to remember everything. So we wanted all the configuration kept as code in a version control system. The second reason is that it took a long time to get feedback after a developer made customization changes: each time, they had to manually deploy a small cluster to verify its functionality, which took a long time and was bad for improving our velocity. Third, by taking this approach we can easily check out and try the latest updates from the community: we can manually or automatically merge the latest code or bug fixes, run the changed code through our workflow into our environment, and see the new features. And last, we can integrate Tempest into this workflow and have it run automatically. For this solution, I actually referred to the OpenStack official CI system and built our own system locally, with all the configuration as code. Here we use Puppet to do it, and our principle is to have everything under version control. We made a lot of changes here. The first was how to install OpenStack from source code.
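The configuration-as-code idea just described, where the desired state lives in version control and a tool like Puppet converges each server toward it instead of relying on an operator's memory, can be illustrated with a minimal sketch. The keys and values below are invented examples, not our actual manifests.

```python
# Minimal illustration of configuration as code: compare the desired
# state (checked into version control) against a server's actual state
# and compute the changes needed to converge. Real deployments use
# Puppet; the config keys here are hypothetical.

def converge(desired, actual):
    """Return the changes needed to move `actual` toward `desired`."""
    changes = {}
    for key, value in desired.items():
        if actual.get(key) != value:
            changes[key] = value
    return changes

desired = {"nova-compute": "running", "vncserver_listen": "0.0.0.0"}
actual = {"nova-compute": "stopped"}

print(converge(desired, actual))
# {'nova-compute': 'running', 'vncserver_listen': '0.0.0.0'}
```

Because the same desired state can be applied to any server, copying a configuration exactly from one server to another stops being a memory exercise.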
Previously, we only installed it from packages. The second was the environment cleanup problem: there are two choices here, upgrade or rebuild completely, and you have to consider which. Now, back to the snapshot improvement I mentioned before. Early in our OpenStack trial, we found a problem: if you snapshot a 20-gigabyte VM, it takes a long time. So we made some improvements. We analyzed each step of the original snapshot process and found two steps that take a long time. One is the upload back to the Glance server: if the image is very large, the upload takes a long time. The second is the rebase step; in some cases, there is actually no need to rebase. So we tried to reduce the time-consuming stages and optimize the workflow to fit our requirements. Let me show you our fast-snapshot workflow. Here is a general diagram of a snapshot: when you launch a VM, it gets an image from Glance and stores it in the VM instance folder. During the fast snapshot, since in some cases there is no need to rebase, we only upload the disk file to Glance, and when you need to deploy it, only the disk file has to be copied to Nova before the VM can be launched. After this process optimization, we save a lot of time when doing snapshots. For our auto-deployment solution, we previously used another deployment loader, and currently we are using Puppet. When we do a deployment, all we need is one new disk: we burn the whole image onto the disk, and when you boot from it, it launches a master server. This master server is actually the PXE server and the Puppet master server, and we keep the configuration on it. Using this master server, you can auto-install your cluster from bare metal within several minutes.
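The fast-snapshot decision just described, skipping the rebase when it is not needed and uploading only the disk file to Glance, could be sketched like this. The paths, image IDs, and command strings are illustrative; the real logic lives inside Nova's libvirt snapshot path.

```python
# Sketch of the fast-snapshot decision: only run the expensive
# qemu-img rebase when the instance disk's backing file no longer
# matches the expected base image; otherwise upload just the disk
# file to Glance. All paths and IDs below are hypothetical.

def snapshot_commands(disk_path, backing_file, expected_base, image_id):
    """Return the list of commands a snapshot of `disk_path` would need."""
    cmds = []
    if backing_file != expected_base:
        # Backing chain changed: re-point the disk at the expected
        # base before uploading (the slow step we try to avoid).
        cmds.append(["qemu-img", "rebase", "-b", expected_base, disk_path])
    # Fast path: only the (comparatively small) disk file goes to Glance.
    cmds.append(["glance", "image-upload", "--file", disk_path, image_id])
    return cmds

fast = snapshot_commands("/var/lib/nova/instances/i-1/disk",
                         "/var/lib/glance/base.img",
                         "/var/lib/glance/base.img", "img-123")
slow = snapshot_commands("/var/lib/nova/instances/i-2/disk",
                         "/tmp/other-base.img",
                         "/var/lib/glance/base.img", "img-456")

print(len(fast), len(slow))  # 1 2
```

In the common case the backing file is untouched, so the rebase is skipped entirely and only the upload remains.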
After you deploy an OpenStack cluster, you would like to know whether your cloud works well, whether all the functionality works; for that you need Tempest. But when we first used Tempest, we found a lot of problems. One problem is that sometimes Tempest hits an exception, and that leaves your environment dirty; so we made some improvements there. The second problem is that there are dependencies between tests: for example, when test A finishes, the originally available network is removed, and when test B runs, it fails. The third problem is that the earlier Tempest was not convenient for test-case customization. And the last problem is that test results and error information are separated, so they are not easy to visualize. We developed our own launcher script to address these problems. In the end, we can customize our own test plan, minimize the impact of Tempest runs, and get very nice diagrams from the test results. If your cloud passes Tempest, that means the cloud can go to the production level. We have a Cloud Data Analysis Service to help analyze the production-level OpenStack cloud data. What is cloud data? Here we focus on the OpenStack service logs: by analyzing the logs, we can see user behavior, machine behavior, and even errors that are not easy to find otherwise. However, this data is scattered across different nodes, and as the cloud runs, the data becomes too large to process. So we built this solution to help analyze the cloud data. Our cloud uses Ceph for storage, so we use CephFS to help aggregate the cloud data: we mount the folders of Nova and the other services into CephFS, so we get all the cloud data in CephFS, where we launch MapReduce jobs and get the results. The first challenge was enabling MapReduce on Ceph, and the second was working out how to do the log mining.
The approach we use is to construct the invoke flow of each OpenStack module as a vector space and feed it into algorithms to get clustering or other results that help our operators find issues. Analyzing the logs this way helps us find problems we could not easily find in the normal way. And by putting Ceph and OpenStack together, we can easily aggregate all the machine data in near real time. Another piece is our one-stop-shop admin portal. In this portal, of course, we can see the status of all VMs and hosts, and we can get all the monitoring metrics. We can also see the Tempest test results in a very user-friendly way, and we surface the data from CephFS in one place, so the administrator no longer needs to log in to different servers to check logs; they can check the logs from this portal alone. Lastly, I would like to mention that we also integrate the Intel power package; with this package, your OpenStack cloud can save power. For monitoring, we currently use Ganglia to monitor our OpenStack cloud and Nagios to monitor each service and send an alarm if something fails. The last thing I want to mention is how we integrate OpenStack into our Intel validation cloud. Currently, we do not directly expose Horizon to our internal users, because we do not want to change their user experience; we want to do it step by step. So first of all, we just leverage the existing systems and integrate by invoking the OpenStack APIs, so OpenStack is managed by the existing systems. It is actually very smooth to integrate just by using the API, and users get a consistent user experience with the existing systems: they do not even care which backend creates the VM. Why do it this way? Because our existing systems have been running for many years, so users have their habits and do not want to change, and these systems fit the Intel validation lab business needs.
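The log-mining idea mentioned above, turning each request's invoke flow (the sequence of OpenStack calls seen in the service logs) into a vector and grouping requests whose vectors are close, can be illustrated with a toy example. The call names and the simple distance measure are assumptions for illustration; a real deployment would run a proper clustering algorithm over far more data.

```python
# Toy illustration of log mining: represent each request's invoke
# flow as a count vector over a shared vocabulary of calls, then
# compare vectors so outlier flows (e.g. failed spawns) stand out.
# Call names below are hypothetical, not real log entries.
from collections import Counter
import math

def to_vector(flow, vocab):
    """Count how often each known call appears in one request's flow."""
    counts = Counter(flow)
    return [counts[call] for call in vocab]

def distance(a, b):
    """Euclidean distance between two flow vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

flows = {
    "req-1": ["api.create", "scheduler.pick", "compute.spawn"],
    "req-2": ["api.create", "scheduler.pick", "compute.spawn"],
    "req-3": ["api.create", "scheduler.pick", "compute.error", "compute.retry"],
}
vocab = sorted({call for flow in flows.values() for call in flow})
vecs = {req: to_vector(flow, vocab) for req, flow in flows.items()}

# Requests identical to a normal flow have distance 0; the failed
# spawn (req-3) sits far away, which is what an operator inspects.
for req, vec in vecs.items():
    print(req, round(distance(vec, vecs["req-1"]), 2))
```

Clustering in this vector space groups the many normal flows together, so the rare, anomalous flows are the ones left over for the operator to look at.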
Let me just point out two parts here. One is that our existing system has more powerful permission control: you can control access at a very fine granularity. The other is that the system can manage physical machines and devices: it can manage network ports and KVMs (not the KVM hypervisor, but keyboard-video KVM switches), and it can also manage PDUs. So for these specific lab business needs, we integrated OpenStack into our existing system, and we hope that in the future we can smoothly move our users to OpenStack Horizon and the other projects, OK? The summary I would like to give is: try to enable continuous delivery to cover all your development and operations activities; it is very helpful for your OpenStack development and operation. And there are a lot of solutions in the OpenStack ecosystem, so you should be very clear about which solution to select, based on your own business needs; only by understanding your business needs can you really let OpenStack help you. OK, that's all. Thank you. So, any questions? Yes? Yeah, we have had discussions with the Tempest community developers. They actually made a lot of changes in the Havana version. They said they have grouping there, but they did not accept our change; they said they will re-architect all of Tempest and make it better, so the earlier problems will not exist in the future. And our customization is based on the earlier architecture. I'm sorry? I cannot hear you. How we reduced the time? Yeah, we noticed that in some cases there is no need to do the rebase, and you can directly upload the disk file to Glance, so we reduced the time in those cases. How we install? Currently, in the source code there are actually scripts; you can just run the scripts and they will install the package into your system. When you install the package, you can select which services to launch, and you can run some scripts and Puppet according to your system.
And you can select which services to run. OK. So thank you, everyone. Thank you for your time.