OK. Hello, everyone. Welcome to this session. In this session, we will talk about managing container clusters in an OpenStack-native way. OK, first, let's introduce ourselves. My name is Xu Haiwei. I'm from NEC Solution Innovators. Currently, I'm working on the Senlin project. And Qiming, will you introduce yourself?

My name is Qiming Teng. I'm with IBM. I'm currently the PTL of the Senlin project. I'm also a core reviewer of the Heat project and the OpenStack SDK.

OK. I am Motohiro Otsuka from NEC Solution Innovators. I'm a core reviewer of the Magnum project. Thank you.

OK, let's move on. First, the agenda. We will talk about why containers, if you already have OpenStack. Then we will talk about how to make containers a first-class citizen on OpenStack. And then we will talk about our experience and the outlook. OK, first, let's have Otsuka-san introduce why containers, if you already have OpenStack.

Hi. In this section, I'll talk about why containers, if we already have OpenStack. A container is a type of virtualization technology, and we can use containers as a computing resource. But for computing resources, OpenStack already has Nova, which is an abstraction layer for compute. Basically, Nova handles virtual machines and provides an abstraction layer for managing them. So if a container is just a kind of virtual machine, why containers, if you already have OpenStack?

This diagram shows the difference between the virtual machine model and the container model. The left side is the traditional virtual machine model, and the right side is the container model. A virtual machine requires a hypervisor, and the hypervisor translates and emulates the hardware; each virtual machine has its own OS. A container provides isolation for processes that share the host's compute resources. Containers are similar to virtual machines, but they share the host kernel and need no hardware emulation, so you can use host resources more effectively than with virtual machines. In this sense, you can use a container like a virtual machine, which means that Nova can manage containers.

In addition, Docker provides simple tools and a packaging system for containers, which has made container technology very popular. You can create a container image easily using a Dockerfile. A container image includes the application and its environment, and you can share container images through a Docker registry, so you can move an application from host to host easily. Furthermore, container scalability and elasticity are much better than a virtual machine's. And thanks to management tools such as Kubernetes or Docker Swarm, managing containers across different hosts has become much easier. So OpenStack needs this technology to make cloud management easier.

This slide shows the major use cases of container technology. The first is the application user, who only wants the application to be started quickly; they don't care how it is started. Application developers care about the application lifecycle, version management, and portability. And cloud operators care about how to manage infrastructure effectively, how to upgrade the system, and so on.

Let's see what container technology already exists in OpenStack. We have the Nova Docker and LXC drivers, Heat, Magnum: many projects support container technology. For example, Nova has an LXC driver and a Docker driver, which provide the same interface as for virtual machines, so users can start containers like virtual machines. This model doesn't support all the advantages of container technology.
But this can meet the needs of application users who just want to deploy an application. Next, Heat. Heat has two ways to manage containers. One is the Docker container resource, and the other is the software config and structured config resources; I'll show a rough sketch of the Docker resource approach at the end of this part. This can also meet the application user's needs, but it is limited in how it can manage containers after they are created.

Next is Magnum. Magnum is container orchestration engine as a service: it deploys and manages container orchestration engines. Users get all the advantages of container technology through the COE-specific tools, such as kubectl or the Docker CLI. This can meet the developer's and operator's use cases, but you cannot manage containers with it in an OpenStack-native way.

The next one is Kolla. Kolla uses container technology to make deploying and managing OpenStack itself easier. This is one instance of the operator's use case, but Kolla just uses containers; it is not a way to manage containers themselves.

So in order to manage containers well in OpenStack, we need to find a new solution, but there are some problems to solve. The community has discussed these issues a lot. Can we create a unified API which supports virtual machines, bare metal, and containers? The use cases for virtual machines and containers are different, so we can't provide such a unified API. The next issue is how to create a unified abstraction API for container orchestration engines. This has the same problem, which is the differences between Kubernetes, Docker Swarm, and the other container orchestration engines. The Magnum team tried to solve this problem but couldn't reach an agreement on it. The next section, on OpenStack Senlin, will answer this question. Please.

OK, thanks for the introduction. So I will talk about how we plan to do something that can make containers a first-class citizen on OpenStack. First, about the abstraction layer options we have today. There have actually been some discussions on the openstack-dev mailing list about a unified API, a unified abstraction that allows you to create and manage physical machines, virtual machines, and also containers in the same way. That's the unified compute abstraction. But we don't know whether the community can reach a consensus, and we don't know either whether that is the right thing to do. So that's one of the options.

The other option: today we have Magnum providing the deployment of different COEs, container orchestration engines: Kubernetes, Docker Swarm, Mesos, whatever. Magnum has some interest in providing a unified API above all these COEs. We are not so sure where that is leading us. So that is another option. The question is, even if that is possible, should we do it? How many times do you need to switch from Kubernetes to Docker Swarm and back, and then switch to Mesos? Some users are fans of Kubernetes; some are very familiar with Docker Swarm. They will stick to that tool chain, that command-line interface. So we are really not sure a common abstraction here makes a lot of sense.

So the struggle here for us, as container users, is: where is the right abstraction? Which abstraction level should we provide on OpenStack, supposing you have already deployed OpenStack? Our consensus today is that maybe we can just provide some clustering support for containers.
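To make the Heat path mentioned earlier concrete, here is a minimal sketch of the Docker container resource approach, assuming the contributed heat-docker plugin is installed. The property names and the endpoint and token values are assumptions to illustrate the shape of the call, not a reference.

import yaml
from heatclient.client import Client

# A tiny template using Heat's contributed Docker container resource.
# Property names (image, docker_endpoint) are assumed; check the plugin
# documentation for your release before relying on them.
template = yaml.safe_load("""
heat_template_version: 2015-04-30
resources:
  web:
    type: DockerInc::Docker::Container
    properties:
      image: nginx                          # container image to run
      docker_endpoint: tcp://10.0.0.5:2375  # Docker daemon on an existing VM
""")

# Placeholder endpoint and token; in practice these come from Keystone.
heat = Client('1', endpoint='http://heat-api:8004/v1/PROJECT_ID',
              token='AUTH_TOKEN')
heat.stacks.create(stack_name='docker-demo', template=template)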
So speaking of clustering, I'm going to introduce some work we have done during the past year. This is a clustering service we call Senlin. It originated from the Heat project. It started as an auto-scaling service alternative, or you can call it a next-generation auto-scaling service. But when we started this project, we soon realized that there was no generic clustering service that allows you to manage resource pools on OpenStack, be it physical machines or virtual machines or whatever. So we built this project as a clustering service. Once you have a clustering service, you create clusters; those are your resource pools. You can attach different kinds of policies to a resource pool to make it auto-scalable. For example, based on the redundancy of resources provided by the resource pool, you can provide a kind of highly available service to end users or to the control plane, whatever. And there are some other features. So once we have such a clustering service, auto-scaling, auto-healing, all those kinds of use cases are just user scenarios.

So, a brief introduction to the Senlin project. Here is the overall architecture. We have a Senlin client talking to the Senlin API in a RESTful way, and the Senlin API talks back to the Senlin engine, maybe more than one engine, through RPC. We are making the service very generic, so that it is capable of managing different kinds of resources. As an abstraction of the resource type managed by this service, we have an abstraction called a profile. With a profile, you can specify how to create, update, or delete an object by calling some backend service. Today, we support some built-in profiles already. To make the engine a little bit smarter, not so dumb, we also support policies, so that when you are managing your clusters, you can attach some policies to drive policy decisions.

On this slide, I'm showing you the profiles we are providing today: we support Nova server as a profile and Heat stack as a profile. There is some interest in managing bare-metal machine resources as resource pools, and there is some interest in managing containers in your clusters. Whatever the resource type you are managing, you can always attach policies, such as a placement policy that decides where a new node will be placed: cross-availability-zone placement, cross-region placement, anti-affinity placement. The deletion policy decides, when you want to scale in your cluster, which node to delete: the oldest one, the youngest one, or the node with the oldest profile, whatever. You can specify which node to delete. When you want to scale the cluster, you can specify all the parameters you can find in the Amazon Auto Scaling service and all the parameters you may be using from Heat. And we are working on a health management policy in this cycle, so that you can make your cluster really resilient to node failures. For load-balancing support, we are using LBaaS v2; we don't have LBaaS v1 support, from the very beginning.

This page is a little bit complicated. It shows basically how the Senlin server is architected. The green boxes are the core components of the Senlin server: the Senlin API and the Senlin engine, managing a lot of housekeeping things. When you want to create something as the nodes in your cluster, you use a profile plugin. Today we provide Nova server and Heat stack.
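To give a feel for what writing a profile plugin involves, here is a minimal sketch modeled on Senlin's plugin interface. The module path and method signatures are from memory and may differ between releases, so treat it as an illustration rather than a reference.

from senlin.profiles import base

class DummyProfile(base.Profile):
    """A toy profile that fakes object creation instead of calling a backend."""

    def do_create(self, obj):
        # A real profile (e.g. os.nova.server) would call the backend
        # service through a driver here and return the physical ID.
        return 'dummy-%s' % obj.name

    def do_delete(self, obj):
        # Release whatever do_create allocated; report success.
        return True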
With support for Heat stacks, basically we can manage almost anything supported by Heat resource types today. And if you want to further extend the Senlin service to manage something else, we allow you to do that. That's the lower-left corner; you can see the yellow box, drivers. Today, the only dependency we have on OpenStack is the OpenStack SDK library. We use that library to talk to all the OpenStack backend services, including Keystone, Nova, Heat, and whatever. That's the only dependency we have. If you replace that dependency with your own driver, you can manage whatever things you want.

The upper-right corner is the receiver abstraction. That provides you something that allows Senlin to react to external events or alarms. We don't do monitoring in Senlin. If you have, for example, Ceilometer or Monasca or whatever data-center monitoring software installed and configured, you can configure your monitoring software so that when something weird, something strange happens, it sends a signal to Senlin, and Senlin will do the operation you specified. That includes auto-scaling; that includes auto-healing. So that's the architecture design. The lower-right corner shows the policies we already support today.

Next, I'm going to talk about containers. If you want to use Senlin for managing containers, it may not be that difficult. For us, it's just a new profile that allows us to talk to the various container APIs. It could be docker-py; it could be runC. We don't think container equals Docker. That's our understanding. For the backend drivers, we only have Docker support in plan, but if there is interest in supporting other backends, we can do that. I'll show a small sketch of the docker-py path at the end of this part.

Here is one of the usage scenarios we are thinking about. Suppose you have several clusters of containers deployed on your VM cluster. That's a common practice today, because container security isolation is not yet that satisfactory. Suppose your container workload gets very high. You may want to scale not just the container group; you will need to scale your underlying virtual machines as well. With Senlin, that is very, very easy: Senlin can help manage both the container cluster and the VM cluster. So that's one use case.

Another use case is about auto-healing. Suppose you have a VM going down. That can be detected in various ways, and Senlin can help you migrate or recreate the containers that were running on that VM onto another VM. This kind of feature can be further enhanced. For example, we can create some warm virtual machines, standby virtual machines, so that you can add those VMs to your cluster and bring up your containers instantaneously. You don't need to wait for a VM to be created from scratch and boot up. So that's another use case.

The third use case we can think of: since Senlin is such a generic clustering service, you can actually use it in your control plane to provide OpenStack HA. We know that the common practice today is to use Pacemaker, for example; that's the Linux HA software. We got some complaints from our users: they don't like switching between different tool chains. You use pcs to operate that cluster, and then you switch back to Nova to operate your OpenStack services. One of the possibilities to change this: once you are deploying your OpenStack services in containers, using Kolla for example, we can monitor your containers using whatever cluster monitoring software, such as Sensu or Consul. If a node has failed, we can detect that and auto-recover it. So that gives you a single tool chain to operate the cluster.
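Here is the small docker-py sketch I promised: roughly what a container profile's create path could call, using the docker-py 1.x API current as of this talk. The daemon address and container parameters are placeholders.

import docker

# Talk to the Docker daemon running on one of the cluster's VMs.
client = docker.Client(base_url='tcp://10.0.0.5:2375')

# Create and start a container; this is what a container profile's
# do_create() could boil down to.
container = client.create_container(image='nginx', name='web-1')
client.start(container=container.get('Id'))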
So with that, I'm handing over to Haiwei, who will give us a quick demo of what we can do today and what's in our plan for tomorrow. I think we have a demo, right? Thank you.

Thank you. OK, I will talk about this part. As introduced by Qiming, Senlin is a project which provides a clustering service. Currently, Senlin only provides VM clusters, and when we support container clusters, Senlin wants to do it in a similar way to VM clusters. So at first, we need to invent a new type of profile, a container-type profile. We can define some properties in the profile, something like a name, image, command, and networks, which will be used to create a new container. We can compare it with the Nova-server-type profile spec; they are almost in the same format. After we have the container profile, we can use it to create a container node and a container cluster. The containers are created on VMs, and the VM cluster is also managed by Senlin.

Let's see how to create a container cluster. First, we create a VM cluster with three VM nodes. Then we use the container profile to create a container cluster, and we can create multiple containers on one VM server. Finally, we get a container cluster and a VM cluster. From the graph, we can see that physically, the containers are running on VMs, but logically, the container cluster and the VM cluster are separated; they are managed by Senlin separately. That means that end users who consume container services may just see the container cluster; they may not even know about the VM cluster.

Next, about the scalability of the VM cluster and the container cluster. Scaling control is an advantage of the Senlin project. When resources are not enough, the user may want to scale out the container cluster; they may want more resources. So how to do it? In Senlin, we invented policies. In a policy, we define how the cluster scales in and how it scales out. For example, you can see there are policies like a placement policy, a deletion policy, and a scaling policy attached to the clusters. When resources are not enough, Senlin will receive an alarm from Ceilometer, and then the policy will be triggered. We can see that the scaling policy attached to cluster 1 is taking effect, and it will tell cluster 1 to create a new VM. Then the scaling policy attached to cluster 2 will also be triggered, and a new container node will be created. This is a very simple case of scaling out. Of course, when resources are idle, scaling in can also be done, and the VM and container resources can be deleted.

OK, next I will show you a demo. At first, let's check the profile list. Currently, there is a Heat-stack-type profile; this profile is used to create the VM cluster. Then we create a container-type profile. OK, the profile is created. Then we use the Heat-stack-type profile to create a VM cluster with three nodes, three VMs. This will take some time; we can check it with nova list, and we can see there are three servers created. Then we will create a container cluster; we just create one node. We can pass a host cluster to this command to tell Senlin where to create the containers.
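The container profile used in this demo is along these lines; a minimal sketch only. The profile type name and the host_cluster property are assumptions for illustration; the talk only describes name, image, command, and networks as properties, and in the demo the host cluster is actually given on the command line.

# Hypothetical container profile spec, shown as the dict Senlin would parse.
spec = {
    'type': 'container.dockerinc.docker',  # assumed profile type name
    'version': '1.0',
    'properties': {
        'name': 'web',
        'image': 'nginx:latest',                # image for new containers
        'command': ['nginx', '-g', 'daemon off;'],
        'networks': [{'network': 'private'}],
        'host_cluster': 'vm-cluster-01',        # assumed: VM cluster hosting the containers
    },
}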
And from the cluster node list, we can see a new container is created. Also, from the node list, we can see that the container's profile is the container-type profile. Then we do the scale-out: we want to scale out by two new containers. Checking the node list again, two new containers are created. At last, we just go into the VM to see whether the containers are created there, using the Docker command. OK, the containers are there. A very simple demo.

Then I'll talk about the outlook. About the design for container clusters, we have many issues to think about. The first one is the container backends. There are many container technologies now, like LXC, LXD, Docker, Rocket. Each has its own advantages, so it's a little difficult to decide which one to use. But currently, Docker is the most popular one; maybe it is the best choice.

Next, container scheduling. When creating a new container, we need to decide where to start it: in which cluster, on which node. So we need a scheduler to do this job. For now, we still have the placement policy, and in the policy we can define where to start a new container. It is a kind of scheduler, but not a smart one. We may need to improve it to meet our needs, or, if it can't meet our needs, maybe we need a dynamic scheduler like the Nova scheduler.

About networking support and storage support: for networking, Kuryr is a project which provides container networking. We hope we can use Kuryr to create container networks just like creating a VM network; that would be very helpful. About storage support, there are also Kuryr, REX-Ray, and Flocker; all are good choices. We need to decide which one to use.

About container support in Senlin, we have had some discussions in the Senlin team, and we have reached agreement on some issues. But we still want to hear the voice of the OpenStack community. We want to hear your ideas and your suggestions, and we also need new hands. So please join us if you are interested in this work. You can find us on IRC or at the weekly meeting. All ideas and suggestions will be appreciated. That's all. Thank you. Are there any questions?

Hi. What do you use for spinning up the containers on the VMs? Do you use any startup script? Is there an agent running on the VM that spins up the containers?

I think in this demo, we are using CoreOS. There are many other possibilities. If you are using Fedora, you can do that using custom-made images with Kubernetes or Docker installed. So the image is customized to spin up the containers.

Any other questions? Thank you. Thank you.