Good afternoon, everyone. My name is Xinchao Yu. I'm the director of UnitedStack, and I'm also a core reviewer in the OpenStack community. I joined OpenStack in 2011, during the Diablo release. In this talk, we will discuss the way to enterprise OpenStack deployment as a service. Six years ago, OpenStack started in Austin, and this year we come back. We have more than ten tools and projects to deploy OpenStack, but deployment is still a hot topic. Why? Think about it. Because OpenStack deployment is still hard for users to master: they have to learn a lot of things before they can deploy OpenStack, and there is no perfect product that fits all users' needs. So where is the way forward?

In this talk, we will not go into the details of each deployment tool and project, and we will not tell funny stories or gossip behind the deployment projects. We will also not give you a prediction of the OpenStack deployment trend, because we are not good at that, though we have a real interest in it. What we will do is give you a brief introduction to the mainline OpenStack deployment tools, describe the biggest challenges in the current situation, then discuss delivering an enterprise OpenStack deployment service, and last, introduce our UOS installer, whose codename is CTASC.

So first, let's begin with the history of OpenStack deployment. As we know, there are so many tools that can deploy OpenStack — where do they come from? Some very popular configuration management tools started to support OpenStack deployment, such as Puppet, Chef, and Ansible. These projects are really, really good at deploying OpenStack in production, and they provide a flexible way for users with complex OpenStack architectures, but they are not friendly to the user: they are complicated and hard to master.
You need to learn Ansible, Chef, and Puppet, and you also need to learn Python and Ruby. So a project named Packstack appeared. It uses the popular OpenStack Puppet modules from upstream, but it provides an interactive way for users to deploy OpenStack even if they don't know Puppet. Still, it's not really friendly for users who don't know OpenStack, or who don't know Linux and command-line use. So deployment tools based on a web console appeared, such as Fuel and Juju, for example. Fuel also uses the popular OpenStack Puppet modules from upstream, but it has a web console, whose codename is Nailgun. These tools hide the complexity of the deployment process, and it's much easier for users to deploy OpenStack.

The OpenStack community also has a project that uses OpenStack itself to deploy OpenStack, named TripleO. It uses Nova, Ironic, and Heat — these OpenStack projects — to treat physical machines like virtual machines. The idea of this project is really good, but it is very hard to configure. And as we know, containers are a very hot topic right now. The community has some projects that use container technology to deploy OpenStack. They are very fast to deploy and have good resource isolation by nature.

This picture is from the latest user survey, taken this month, and we can see from it that Puppet and Ansible take the first and second place, and they are very close by now; Fuel takes the next place, and we can also see Chef, Packstack, and other deployment tools in this picture. I have just introduced these projects, which are widely used currently. After a glance at all these deployment tools, let's take a minute to rethink: What do we want? What do our users want? Why do we have so many deployment tools and projects? Are they good enough to fit all users' needs? What's the next step for us to take, or what should we focus on?
I will leave the discussion to my colleague, Wei Wang.

Hello everyone. I will talk about what you really care about. According to Xinchao's presentation, we know that there are so many deployment tools on the market; it seems very simple, but why? We think that no tool can match your own needs. There are always some pain points, so everyone wants to reinvent the wheel.

First, upgrading OpenStack is still a hard problem. We see talks with titles like "Upgrade OpenStack without breaking the world." What do people think? That upgrading OpenStack means "I will break the world. I will break my VMs. I will break my service." It seems unavoidable. Is it? We know that upgrading software is pretty simple. Users may use containers; they say, "I just need to pull a new image." We all know upgrading software is simple — you can even just run yum upgrade or something similar to replace the software package. It seems very easy. But the problem is the data plane: you need to make sure the data plane will not break, and the API downtime should be minimal. We want the service to always stay online and always serve our customers — not to keep saying "our service is under maintenance, you will get service back in 30 minutes or one day."

Another thing is that installing OpenStack is very easy in a lab, because you know everything about the lab: you know the hardware, you know the network, you know the architecture. But if you want to install OpenStack in the real world, in an enterprise environment, you will see that what you get is totally different. You may see any kind of hardware, many network NICs, any disks. For example, Open vSwitch may not work well with some NICs, like certain Broadcom models, and Ceph may not work well on some disks — some SSDs may get very bad performance. There are many problems just like these, and the OpenStack community doesn't even publish a compatibility list. Another thing is that OpenStack is very complex.
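The "API downtime should be minimal" point can be made concrete. Here is a minimal sketch — the function name and parameters are our own illustration, not part of any deployment tool — that polls a health check during an upgrade window and accumulates the time the API spent unavailable:

```python
import time

def measure_downtime(is_healthy, duration_s=1.0, interval_s=0.05):
    """Poll a health check for duration_s seconds and return the
    approximate time (in seconds) the service was unavailable."""
    down = 0.0
    deadline = time.monotonic() + duration_s
    while time.monotonic() < deadline:
        if not is_healthy():        # in practice: an HTTP GET against the API endpoint
            down += interval_s
        time.sleep(interval_s)
    return down
```

During a real rolling upgrade, `is_healthy` would wrap an actual request against the OpenStack API; the accumulated value is what the operator wants to keep near zero.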
It has a very complex configuration and architecture. I took a picture from the OpenStack website; it looks just like this. I don't want to walk through this picture, and I don't want to explain what every component costs or what it means. I just want to show you that OpenStack is very complex, but no customer wants to handle this complexity by themselves. They want to use it all without owning the complexity and the architecture. And they may have many, many needs. Some customers may want to use Sahara but don't like Heat on the dashboard: they just want Sahara, with no Heat in the dashboard, no writing templates or anything. And since hyperconverged is a new trend, you may add the storage service on your compute nodes, so you need to add some SSDs on the compute nodes, or even some HDDs, or even mix them. These are all challenges — our architects and development engineers can hardly handle this complexity.

Last but not least, the workflow may not work. We all want a tool where all we need is some clicks: just start, watch the log flash by, and then the OpenStack deployment is already done. But in the real world, you may not get a fully automated process, since you may face many, many problems, just as I listed: hardware compatibility, or the network has some problems. When you face a problem, what you want is to solve it by yourself, using your own tricks or experience. But once I have tracked the problem down, I want to continue the workflow, to make the deployment tool just continue my previous work. For now, no tool can do this.

So what are our answers? We have faced these problems for years, and we have some plans and principles and have done some work. Let me show you. First, about the upgrade: there are some topics about rolling upgrades in the community, like Nova and Neutron.
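The "continue my previous work" idea maps to a checkpointed workflow. As a rough sketch under our own naming (no deployment tool mentioned in this talk implements exactly this), each completed step is recorded, so after the operator fixes a problem by hand, a rerun picks up where the last run failed:

```python
import json
import os

def run_workflow(steps, state_file="deploy_state.json"):
    """Run (name, callable) steps in order, recording progress so a
    failed run can be resumed after the operator fixes the problem."""
    done = []
    if os.path.exists(state_file):
        with open(state_file) as f:
            done = json.load(f)     # names of steps finished in earlier runs
    for name, step in steps:
        if name in done:
            continue                # skip work a previous run already did
        step()                      # may raise; completed steps stay recorded
        done.append(name)
        with open(state_file, "w") as f:
            json.dump(done, f)
    os.unlink(state_file)           # clean up after a fully successful run
```

If step three fails on a bad NIC, the operator swaps the hardware and reruns the same command; steps one and two are skipped because the state file remembers them.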
I just saw a talk named "Neutron Liberty Upgrade" yesterday, and the community has done some pretty wonderful work on rolling upgrades, like the oslo.versionedobjects project, Neutron's OVO work, and the L2 agent restart — we have seen some patches about the L2 agent restart, which is the OVS agent. The community has to do some of this work, because the deployment tools cannot handle it alone. We need work from both the community and the deployment tools.

Another thing is the sanity test. Before the installation, we need to do some compatibility tests and benchmark performance tests, to sieve out disks with very poor performance. We need to make sure that the hardware is good enough to run our software on.

The other thing is architecture. We know that the architecture is so complex that you can't express it in just a web console or a GUI. But we have some powerful tools like YAML. We use YAML in Puppet, and maybe use YAML everywhere. We can provide some editable templates: you can use a template, we have some template scenarios, but the templates can be edited again, and the deployment is modular. You can deploy just the modules you want, or mix these deployments as you want.

The last is about inspection tools. We have developed some inspection tools and scripts. Here is a video to show what it's like. This video is about our open-source tool for network inspection. It's especially for OpenStack deployments, and we have already proposed it for the OpenStack big tent, so you can see it and make contributions under the OpenStack umbrella.

Integrating all of these works, tools, and principles — that's what we call CTASC, the new deployment tool UnitedStack presents. It's not just another deployment tool.
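To illustrate the sanity-test idea — this is a rough sketch of our own, not the actual inspection tool, and a real deployment would use a dedicated benchmark like fio — a pre-install check might time a sequential write and refuse disks below a throughput floor:

```python
import os
import time

def disk_write_mb_per_s(path, size_mb=32, block_kb=1024):
    """Write size_mb of zeros to `path` and return throughput in MB/s.
    Only a rough sequential-write probe, not a full benchmark."""
    block = b"\0" * (block_kb * 1024)
    n_blocks = (size_mb * 1024) // block_kb
    start = time.monotonic()
    with open(path, "wb") as f:
        for _ in range(n_blocks):
            f.write(block)
        f.flush()
        os.fsync(f.fileno())        # make sure the data actually hit the disk
    elapsed = time.monotonic() - start
    os.unlink(path)                 # remove the probe file
    return size_mb / elapsed

def passes_sanity(path, min_mb_per_s=50):
    """Gate for the deployment workflow: refuse disks slower than the floor.
    The 50 MB/s threshold here is an arbitrary illustrative value."""
    return disk_write_mb_per_s(path) >= min_mb_per_s
```

A deployment tool would run a probe like this on every candidate storage and journal disk before installation starts, so a bad SSD is caught before Ceph is on top of it.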
It really is a different deployment tool: you can use it and benefit from it. We have planned to make it open source, but that needs time. Thank you. That's all.