Hello, everyone. My name is Wang Bo. I'm from EZSec, a leading cloud computing solution provider in China. Today I will share some of our experience in product development based on Magnum and Kubernetes. As we know, there are three container orchestration engines in Magnum: Mesos, Swarm, and Kubernetes. We chose Kubernetes, so all the content in these slides focuses on Kubernetes.

Here is the agenda. First, I will give a brief introduction to Magnum. Second, I will talk about the features of a production-ready container platform, including CI/CD tools, a private image registry, service discovery, monitoring and alerting, and log collection and search. Last, I will share some details on how to integrate these features into Magnum, such as the cluster initialization process and how to map Keystone users to Harbor.

This is Magnum in the Mitaka release. With Magnum, we can create a baymodel and create a bay. The Magnum conductor calls Heat, and Heat calls the different OpenStack components, such as Nova and Neutron, to provision the different resources. On top of the bay, we can create pods, services, and replication controllers. In the Newton release, Magnum removed the container operations, and Magnum now acts as a container infrastructure management service.

So what functions does Magnum provide now? It's really limited: just create, delete, and update operations for cluster templates, clusters, cluster certificates, and quotas. That's really not enough to meet customers' requirements.

Continuous integration and continuous deployment is a very common use case that customers want. A developer makes a code change and pushes the commit to GitHub. That triggers a Jenkins job to build a new image. Jenkins can deploy the image into the test zone and push the image into our image registry.
After testing, operators can use the new image to deploy into the production zone and complete the application launch. So we need CI/CD tools. CI/CD tools help reduce risk and deliver reliable software in short iterations. In short, CI/CD helps you build faster, test smaller, and fail less, and CI/CD has become one of the most common use cases among Docker adopters.

We chose Jenkins as our CI/CD tool because it has lots of plugins and provides rich features: it can generate a slave pod to run a job dynamically and destroy the pod when the job completes; you can create a pipeline, not just a single job; a new commit can trigger a new job dynamically; and you can define scheduled tasks.

Next, the private image registry. Our product helps customers build private clouds, so there are some things we need to care about. First, security: customers have confidential code and other data, so they want a private registry. Then there are network issues: generally, a private network will be faster than the public one, and in China there is the Great Firewall, so users cannot pull images from the public Docker Hub. In some internal private clouds there is not even any access to the internet.

So we chose Harbor as our private registry. It provides the following functions. You can create private and public projects with Harbor, and the images belonging to different projects are isolated from each other. You need to log in and authenticate before you can pull images.

Next we'll talk about service discovery. Service discovery is about DNS. It means you can access a service by its service name or service URL; you don't have to access the service by its cluster IP. Here is an example: service A needs to access service B. Without DNS, you must do the following in order.
First, create service B and get the cluster IP of service B, then create service A with that cluster IP as a parameter. If for some reason service B restarts or is destroyed, its cluster IP will change, and service A will fail to access B with the old cluster IP. So why do we need DNS? Because the cluster IP is complex and unstable, while the service name is comparatively permanent. The cluster IP is assigned dynamically when we create a service, and it changes when the service is recreated.

There are two scenarios: we need to access a service from inside the cluster or from outside, so we need internal DNS and external DNS. For internal DNS, we use kube-dns. With kube-dns, service A only needs to know the name of service B, and it can get the cluster IP of service B. Inside a node, access to the service's cluster IP is redirected to the endpoints of service B by the DNAT rules in iptables.

The DNS pod holds three containers: kube-dns, dnsmasq, and healthz. The kube-dns process watches the Kubernetes master for changes in services and endpoints and maintains in-memory lookup structures to serve DNS requests. The dnsmasq container adds DNS caching to improve performance. The healthz container provides health checks for dnsmasq and kube-dns. The DNS pod is exposed as a Kubernetes service with a static IP, so when a new container is created, the kubelet passes the DNS configuration as a flag to each container.

Now, external access. Here is an example of how to access a Kubernetes service of the LoadBalancer type. First, you need to know the VIP and port of the cloud load balancer. It load-balances to one of the nodes. Inside the node, the node IP and port are redirected to the cluster IP and port, and then redirected to the pod IP and port, so it does DNAT twice through iptables. We use Ingress and an Ingress controller as our external DNS.
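Going back to the internal DNS: the three-container DNS pod described above can be sketched roughly as follows. This is an illustrative manifest, not our exact deployment; image tags and flag values are assumptions based on the kube-dns setup of that era.

```yaml
# Sketch of the kube-dns pod: kube-dns watches the API server,
# dnsmasq caches in front of it, healthz probes both.
apiVersion: v1
kind: Pod
metadata:
  name: kube-dns
  namespace: kube-system
spec:
  containers:
  - name: kubedns
    image: gcr.io/google_containers/kubedns-amd64:1.9
    args:
    - --domain=cluster.local.
    - --dns-port=10053           # dnsmasq forwards here
  - name: dnsmasq
    image: gcr.io/google_containers/kube-dnsmasq-amd64:1.4
    args:
    - --cache-size=1000          # DNS caching to improve performance
    - --no-resolv
    - --server=127.0.0.1#10053   # forward lookups to the kubedns container
  - name: healthz
    image: gcr.io/google_containers/exechealthz-amd64:1.2
    args:
    - --cmd=nslookup kubernetes.default.svc.cluster.local 127.0.0.1 >/dev/null
    - --port=8080
```

The pod is then exposed as a Service with a fixed cluster IP, which the kubelet passes to every container as its DNS server.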
An Ingress is a Kubernetes resource which gives a service URL to a service. In the picture there is an example: we have a WordPress service. The first line is the cluster IP of the service, and the following two lines are the two pods' IPs. On the right side is the Ingress resource definition in YAML. The important part is the rules in the spec: here you can see the host name, the path, and the backend service. So this is the mapping from the service URL to the service.

We also need an Ingress controller. An Ingress controller is a reverse proxy; it forwards service URL requests to the endpoints. It works as follows: watch the Kubernetes API for any Ingress change, update the configuration of the controller, and reload the configuration. In short, the Ingress controller detects Ingress resource changes and fetches all the endpoints of the service.

This is the configuration of the Ingress controller. In this example we use nginx as the backend. On the left side there are two parts in the picture. The first part, at the top, is the service name: it maps the service name to the endpoints, that is, to the pods' IPs and ports. The second part, at the bottom, is the service URL that we defined in the Ingress resource. So with the Ingress controller and Ingress, the access process is: first go to the service URL, and the request reaches the Ingress controller, which load-balances to the endpoints.

So why do we need Ingress? Because Ingress does a better job than the load balancer. Ingress does not occupy ports on the nodes, while the LoadBalancer service type occupies a node port on every node. Another reason is that Ingress supports TLS access to the service. So this is accessing a service with Ingress: you need to know the service URL and the port, and the request comes to the Ingress controller; the Ingress controller reads its configuration and load-balances to the endpoints, to a specific pod.
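The Ingress mapping just described might look like this in YAML. This is a hypothetical reconstruction of the WordPress example from the slide; the host name, secret name, and port are made up, and the `tls` section illustrates the optional TLS support mentioned above.

```yaml
# Hypothetical Ingress for the WordPress example
# (extensions/v1beta1 was the Ingress API group at the time).
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: wordpress
spec:
  tls:
  - hosts:
    - wordpress.example.com
    secretName: wordpress-tls        # TLS terminated at the Ingress controller
  rules:
  - host: wordpress.example.com      # host part of the service URL
    http:
      paths:
      - path: /                      # path part of the service URL
        backend:
          serviceName: wordpress     # the backend service
          servicePort: 80
```

The controller watches resources like this one and rewrites its nginx configuration so requests for `wordpress.example.com/` are proxied straight to the WordPress pods' endpoints.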
Next, we talk about monitoring and alerting. We use Prometheus for that, and Alertmanager and Alerta are companions of Prometheus. On each node, we run cAdvisor and the node exporter. cAdvisor is integrated into the kubelet, and the node exporter runs as a DaemonSet. cAdvisor collects the containers' runtime info, and the node exporter collects the node's runtime info. Prometheus pulls the metrics from each node. You can define alerts in Prometheus, Prometheus pushes the alerts to Alertmanager, and Alertmanager pushes the alert data to Alerta.

Here are some metrics we use in our products, for node info and container info. They are very common: node CPU usage, memory usage, filesystem usage, and network I/O rates, and the same metrics for containers.

Next is log collection and search. We use Fluentd, Elasticsearch, and Kibana. Fluentd runs as a DaemonSet on each node and is responsible for collecting the container logs; the data is stored in Elasticsearch, users can search for any logs they want, and Kibana displays the log info.

This is the cluster architecture. The rectangles in different colors: the blue ones are VMs, and the orange ones are Kubernetes pods. The arrows in different colors are different networks. In our product, we share one Harbor, one private Docker image registry, among different clusters, because the Kubernetes components, such as the Kubernetes API server, the scheduler, and the controller manager, all run as Kubernetes pods. We need these images, and they are the same for different clusters, so we share one registry to store these common images. On the slave nodes of the cluster, there are the features I just mentioned: Kubernetes, the Ingress controller, Prometheus, EFK, and Jenkins. The Jenkins master generates a slave to run a job.
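As an aside on the alerting setup described earlier: an alert such as the node CPU usage one might be sketched like this. This uses the Prometheus 2.x YAML rule format with the old `node_cpu` metric name from the node exporter of that era; the threshold, group name, and labels are illustrative assumptions.

```yaml
# Sketch: fire when a node's CPU usage stays above 90% for 5 minutes.
groups:
- name: node-alerts
  rules:
  - alert: NodeCPUUsageHigh
    # 100 minus the idle percentage = CPU usage percentage per node
    expr: 100 - avg by (instance) (rate(node_cpu{mode="idle"}[5m])) * 100 > 90
    for: 5m
    labels:
      severity: warning
    annotations:
      summary: "High CPU usage on {{ $labels.instance }}"
```

Prometheus evaluates the rule, pushes firing alerts to Alertmanager, and Alertmanager can then forward them on, in our case to Alerta.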
The master communicates with the slave over the private network. The masters need to call Harbor to get the images, and the Jenkins slave also needs to push new images to Harbor. These calls use the public network.

This is the initialization process. We share one Harbor among different clusters, and we use Heat to create the nodes and do some configuration to run Harbor; that means Harbor runs on a VM. For each cluster, Jenkins runs as a service and the Jenkins slave runs as a pod dynamically, alongside Kubernetes, Prometheus, and EFK. The node exporter of Prometheus runs as a DaemonSet on each node, Fluentd also runs as a DaemonSet, and the others run as Kubernetes services.

Here are some tricks about how to map Keystone users to Harbor. We create one Harbor user and one Harbor project for each Keystone project, so users from the same Keystone project can see the same images and the same projects. A user calls Magnum from the dashboard or the command line, and we added the logic for handling Harbor images into Magnum. When a user calls Magnum to get or push images, Magnum checks whether we have already created the Harbor user and Harbor project for that Keystone user. We use the Keystone project ID from the user info to decide the mapping to the Harbor user and project. If there is no such user and project in Harbor, we create them. After that, the user can pull and push images with their Keystone user info.

OK, that's all. Thank you. Any questions?

Sure. On the stuff you did, adding Prometheus and Fluentd: we did the same for Prometheus and we pushed it upstream in Magnum for Kubernetes on Atomic. Is this something you pushed for Fluentd also, or are you planning to?

We have them in our private repo. We didn't push the YAML files upstream.

OK. Is this something you would be willing to do?

Maybe we will.
OK, cool.