Good afternoon, everybody. My topic today is a way to build DC/OS on OpenStack. I'm a senior engineer from P2 Cloud, a Chinese startup focused on cloud computing. It's a little odd that Magnum was appended to the end of my topic title; I comfort myself that there are a lot of talks here about Kubernetes and Magnum, and my topic works in the same area. But I should tell the truth: this presentation is about DC/OS with OpenStack, not focused on Magnum, so my apologies to any Magnum fans I disappoint. By the way, my colleague Han Chen has a presentation tomorrow in Forum B about an 800-node practice; anything about cluster-scale troubleshooting is welcome at that talk. Okay, just sit back and relax; the presentation begins now.

My agenda contains three parts. Part one covers DC/OS itself, such as the architecture and who uses it and why. Part two covers DC/OS deployment. DC/OS can be deployed anywhere, so the main work we should do is give it the infrastructure it needs, no matter who the provider is; it might be AWS, OpenStack, or a virtualization platform such as VMware. In other words, DC/OS can run anywhere, and our concern is to build DC/OS on OpenStack infrastructure. Part three covers the work we have done and what we are going to do next.

Let's start with some history. Google is certainly the top star here: there have been three generations of resource schedulers at Google. The first and second generations are called Borg, and Mesos is a comparable open-source system. The third generation is called Omega, and Kubernetes is its corresponding open-source version. So Mesos is the heart of DC/OS, and it is a two-level scheduling system: the master maintains the resources, and the frameworks receive resource offers from the agents and run tasks on them. The disadvantage is clear.
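The two-level offer cycle just described can be sketched as a toy simulation (illustrative only, not real Mesos code; the agent sizes and task names are made up): the first level is the master offering each agent's free resources, and the second level is the framework's scheduler deciding which offers to accept.

```python
# Toy simulation of Mesos-style two-level scheduling (illustrative only).
agents = {"agent-1": {"cpus": 4, "mem": 8192},
          "agent-2": {"cpus": 2, "mem": 4096}}

tasks = [{"name": "web", "cpus": 2, "mem": 2048},
         {"name": "db",  "cpus": 2, "mem": 4096}]

def framework_scheduler(offer, pending):
    """Second level: the framework picks pending tasks that fit the offer."""
    accepted = []
    for task in list(pending):
        if task["cpus"] <= offer["cpus"] and task["mem"] <= offer["mem"]:
            offer["cpus"] -= task["cpus"]   # consume part of the offer
            offer["mem"] -= task["mem"]
            pending.remove(task)
            accepted.append(task["name"])
    return accepted

placements = {}
pending = list(tasks)
for agent, free in agents.items():          # first level: master sends offers
    offer = dict(free)
    for name in framework_scheduler(offer, pending):
        placements[name] = agent

print(placements)  # → {'web': 'agent-1', 'db': 'agent-1'}
```

Note that the framework sees only one offer at a time and has no global view, which is exactly where the weaknesses below come from.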
Frameworks and applications do not know about each other, and resource offers and resource revocation take a lot of time when the whole system is busy. Another problem is the global lock on the master side: when a task needs huge resources, it is likely to waste time waiting while the current resources are limited. The improvements here are not the key point we talk about today.

This slide shows what DC/OS is. DC/OS is more like a resource manager based on Mesos: light, reliable, and easy to scale out. The top level of DC/OS is the framework; you can call it the application. Anyway, DC/OS makes the whole cluster look like one personal computer, and you can run your stateful and stateless tasks easily if you want. It is said that Twitter has managed more than 10,000 nodes through Mesos, but I cannot verify that myself.

From this picture we can see big-data frameworks running and being managed on the DC/OS system, no matter what the real infrastructure is; the backing infrastructure might be provided by AWS, OpenStack, KVM, or VMware. To make it easy, we can compare DC/OS to the Linux kernel in an operating system. The kernel manages and maintains resources such as CPU cores, memory, storage, and even GPU cards, and provides processes and services with system APIs. The operating system makes the physical machine easy to use: we can ignore the machine details, no matter what the machine really is. The next slide shows more detailed information: we can see that the layers of the two systems do the same work, collecting and managing resources, assigning and reclaiming them. Mesos does the same work as the Linux kernel.

In the past, most organizations treated their applications and services as their pets; they did everything they could to keep them healthy. For a scalable system, it is better to assume that your service will fail and be prepared to replace it with a new instance. DC/OS and Mesos are really powerful, and we can see who uses Mesos: Twitter, for example, handles 10,000 nodes with a few administrators.
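The "cattle, not pets" idea can be sketched as a reconcile loop (illustrative names only; this is not DC/OS code): declare a desired instance count, and replace whatever has failed instead of nursing it back to health.

```python
def reconcile(desired_count, running):
    """Return replacement instances so the healthy count matches desired_count."""
    healthy = [inst for inst in running if inst["healthy"]]
    missing = desired_count - len(healthy)
    return [{"id": f"replacement-{n}", "healthy": True}
            for n in range(max(missing, 0))]

# Two of three instances have died; the supervisor simply launches two more.
running = [{"id": "web-0", "healthy": True},
           {"id": "web-1", "healthy": False},
           {"id": "web-2", "healthy": False}]
replacements = reconcile(3, running)
print(len(replacements))  # → 2
```

Marathon on top of Mesos plays exactly this supervisor role for long-running services.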
eBay uses Mesos for their CI system, Microsoft uses it for its own cloud, Samsung cut costs by 60%, and Apple also uses Mesos, for Siri. I drew one slide to show a comparison of Mesos and Kubernetes. I just want to say that DC/OS is designed to run anywhere, on premises or not, no matter which infrastructure provider you choose. DC/OS abstracts the infrastructure below and provides powerful tools and best practices to build self-healing distributed systems. I suggest that we use OpenStack to construct our infrastructure and use Mesos to run our applications.

I agree with Adrian Otto, the Magnum project team lead. He gives us a requirement list for choosing Mesos: you have a big data center and a lot of jobs; you might have an infrastructure team including a lot of experts; you want to schedule multiple kinds of workloads, such as Hadoop and Marathon. The last point is very important: if you want to manage a large-scale cluster, you should already have one. If you do not meet these requirements, please choose Kubernetes or another orchestration system.

This picture shows the typical system deployment from the official website, which takes Amazon Web Services as the example. We can replace the AWS services with OpenStack instances. For example, we can launch a Nova instance for the public load balancer, which provides the virtual IP address, then use other instances as masters and agents to manage and consume the resources. We can reach the masters through the public IP, while the private agent workers stay isolated from the outside.

DC/OS is backed by Mesosphere, whose product is based on open-source Mesos. DC/OS has an app store from which we can get many supported applications, like Jenkins and Marathon, and even some distributed database services. The latest DC/OS is coupled with Docker, and it definitely supports Linux containers.
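Services installed from the catalog ultimately run as Marathon apps. As an illustrative sketch (the app id, image tag, and resource sizes are placeholders, not values from the talk), a Docker-based Jenkins app definition for Marathon's REST API could be built like this:

```python
import json

# Illustrative Marathon app definition (placeholder id, image, and sizes).
jenkins_app = {
    "id": "/jenkins",
    "instances": 1,
    "cpus": 1.0,
    "mem": 2048,
    "container": {
        "type": "DOCKER",
        "docker": {"image": "jenkins:latest"},  # placeholder image tag
    },
}

payload = json.dumps(jenkins_app)
# In a real cluster this JSON would be POSTed to Marathon, e.g.:
#   POST http://<marathon-host>:8080/v2/apps  (Content-Type: application/json)
print(payload)
```

If the task dies, Marathon restarts it elsewhere in the cluster, which is the "cattle" behavior mentioned earlier.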
You can turn all of those machines into a single logical computer with a single GUI, increase utilization, and reduce cost.

Let's talk about OpenStack Magnum. I think Magnum is the hottest project at this summit. Let's look at the infrastructure with containers. In Magnum, you create an entity, a cloud resource called a bay. A bay is the place where your container orchestration engine runs: you can choose Kubernetes or Mesos to launch the bay. A bay is just a collection of compute instances. Some enterprises hesitate to adopt open-source components for a lot of reasons, such as limited support, the long and complex stack, and the complicated operation guides. I think that is fair, but I also think it is the reason vendors exist. It is hard to say whether Magnum is suitable for enterprise use yet, but I believe it will grow and become stable.

The Magnum logic is clear: it creates and configures Heat templates, calls the OpenStack API, and launches instances running the service. This slide shows the current Mesos driver in Magnum and the blueprint for a DC/OS driver in Magnum. The current Mesos bay only includes Mesos and Marathon; it would be better to improve the Mesos bay with more components and enhance it toward an open-source DC/OS. A large part of the driver logic is similar: install, IP detection, and generating the configuration YAML. But the image and the Heat template logic are different.

This slide shows how to create a new Mesos framework. As I said, Mesos has a two-level resource scheduling model, so we have to add logic on both levels: on the master side we register the scheduler and accept the resource offers, and on the agent side the executor reports task status, so that when a task fails our system can handle it. You can read more on the official website.

DC/OS needs to communicate with an OpenID provider to authenticate application requests. We can use Google or GitHub as the identity provider, but we can also use Keystone for the auth decision instead.
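Swapping the OpenID provider for Keystone means sending Keystone a v3 token request instead. A minimal sketch of building that request body follows; the endpoint, user name, and domain are placeholder values, not our production configuration.

```python
import json

# Placeholder Keystone endpoint for illustration.
KEYSTONE_URL = "http://keystone.example.com:5000/v3/auth/tokens"

def password_auth_body(username, password, domain_id="default"):
    """Build a Keystone v3 password-authentication request body."""
    return {
        "auth": {
            "identity": {
                "methods": ["password"],
                "password": {
                    "user": {
                        "name": username,
                        "domain": {"id": domain_id},
                        "password": password,
                    }
                },
            }
        }
    }

body = password_auth_body("demo", "secret")
# A real login would POST this body to KEYSTONE_URL and read the issued
# token from the X-Subject-Token response header.
print(json.dumps(body, indent=2))
```

The auth hook in DC/OS then only needs to decide pass/fail based on whether Keystone issues a token.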
That requires altering the DC/OS auth logic, and that is the work we have done. From this slide we can see the default logic is changed: the stock check of the user name was replaced by a check of the email, with Keystone as the backend authentication node.

We also customized some DB frameworks to fit our products, such as packages supporting MySQL group replication and MySQL load balancing. We use MyCat as our load-balancing proxy. MyCat is a powerful open-source proxy which supports the SQL-92 standard in front of our databases, and we also use it for a lot of sharding and read/write splitting work.

Okay, that's all. No demo and no rich media show; I'm sorry for that, due to the short notice. You can email us with any questions, and we will reply as soon as possible. Okay, thank you. That's all.