Okay, let's start. Good afternoon, it's my great pleasure to see all of you here. I'm Zhang Rong. I'm active in SIG Cluster Lifecycle, a Kubespray maintainer, and a Kubernetes member. Now I work at Sonya.com, in the container platform R&D center, mainly responsible for the Kubernetes platform and scheduling. Today I'm going to talk about Kubespray's functions, the Kubespray community, and our future plans, and I'll also touch on some topics that may interest you.

First, let's take a look at the Kubespray overview. Kubespray is a SIG Cluster Lifecycle project that can create, configure, and manage Kubernetes clusters. It provides optional, additive functionality on top of core Kubernetes. The mission of Kubespray is to easily install and manage Kubernetes clusters.

Now let us look at Kubespray in greater detail. Kubespray is a cluster lifecycle manager. It is composable and very flexible, it is production-ready, and it is based on Ansible. You only need to choose how things are installed, for example Docker, CRI-O, RPM packages, or binaries. It supports cross-platform and cross-architecture deployment. The community was established in 2015, and we moved onto a kubeadm base in 2018. You just need to bring your own machines and install Ansible; then you can run Kubespray. We are a certified installer with the CNCF. That is Kubespray at a glance.

Now let us look at the deployment workflow. First of all, in the past you had to choose the OS handling yourself; now Kubespray checks the OS automatically, so you don't need to choose. That is the first step, bootstrapping the OS. The second step is pre-install checks, for example DNS and hostname checks, or a network plug-in check: if your network plug-in is not supported by Kubespray, it will raise an alert. After pre-install you install Docker and etcd, then the master and node components, and configure the network plug-in.
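All of these steps are driven from an Ansible inventory. Below is a minimal sketch of one, with made-up hostnames and IPs; the group names follow Kubespray's conventions (kube-master, etcd, kube-node):

```shell
# Write a minimal Kubespray-style inventory (hostnames and IPs are made up).
cat > kubespray-inventory-sketch.ini <<'EOF'
node1 ansible_host=10.0.0.11 ip=10.0.0.11
node2 ansible_host=10.0.0.12 ip=10.0.0.12
node3 ansible_host=10.0.0.13 ip=10.0.0.13

[kube-master]
node1
node2

[etcd]
node1
node2
node3

[kube-node]
node2
node3

[k8s-cluster:children]
kube-master
kube-node
EOF

# The whole workflow above is then a single playbook run, e.g.:
#   ansible-playbook -i kubespray-inventory-sketch.ini cluster.yml -b
echo "groups: $(grep -c '^\[' kubespray-inventory-sketch.ini)"
```

The same inventory drives every later lifecycle operation, which is why adding or removing a host line is all the "reconfiguration" a user does.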
Then you install add-ons, like the dashboard or storage plug-ins. That is the deployment workflow for Kubespray.

Generally speaking, Kubespray supports full cluster lifecycle management. Create a new cluster; upgrade a cluster; scale a cluster by scaling the masters or the nodes. If a node fails, you can remove that node, or more than one node. And if a cluster is no longer needed, you can remove the entire cluster. When upgrading a cluster, it backs up etcd first. Those are the lifecycle management functions.

Now let's look at certificate management in Kubespray. It uses kubeadm for certificate management, so certificates are generated automatically. For etcd, the certificate names are based on the inventory file. All certificates are generated on the first node; Kubespray then checks whether each certificate is present on the second node and, if not, copies it over. In other words, it copies the certificates from the first master to all the nodes. It supports member, client, and admin certificates.

Now let's look at the HA architecture. For etcd HA, quorum matters: if you have three etcd members and two of them fail, the whole cluster fails. For API server high availability, Kubespray supports an external load balancer, for example a cloud LB or an F5; of course it supports other LBs as well, I won't go into details. Kubespray also supports a local LB based on nginx or HAProxy, which runs as a static pod in the Kubernetes cluster.

So what is the benefit of a local LB? It's more practical and more economical. Whether you use F5 or a cloud LB, you need to pay; but a local LB is free of charge whether you run on physical machines or in the cloud, so it is better value for money. That is why Kubespray supports a local LB. Now let's look at the user options provided by Kubespray.
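Going back to certificate management for a moment: the copy-if-absent behaviour described above can be simulated in a few lines of shell. Local directories stand in for nodes here; everything in this sketch is illustrative, not Kubespray's actual code.

```shell
# Simulate "generate certs on the first node, copy to others only if absent".
set -e
first=$(mktemp -d)   # "first node": certificates are generated here
second=$(mktemp -d)  # "second node": may or may not already hold them

# Generate certificates on the first node (stand-in files, not real certs).
for cert in member-node1.pem admin-node1.pem node-node1.pem; do
  echo "dummy certificate material" > "$first/$cert"
done

# Copy each certificate to the second node only when it is not present yet,
# so an existing certificate is never overwritten on re-runs.
for cert in "$first"/*.pem; do
  name=$(basename "$cert")
  if [ ! -f "$second/$name" ]; then
    cp "$cert" "$second/$name"
  fi
done

ls "$second" | wc -l   # all three certificates now present on the second node
```

The check-before-copy step is what makes the operation idempotent, which matters because Ansible playbooks are expected to be safe to re-run.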
They cover the host provider, OS, network plug-in, certificate management, container engine, Kubernetes features, and deployment mode. Host providers I won't go into. For the OS, we support CentOS, SUSE, Debian, Container Linux, Red Hat, Fedora, Atomic, and Ubuntu. The network plug-in depends on your underlying architecture; for example, Flannel supports host-gw mode, and you can use the host gateway to separate the control plane traffic from the rest. For the container engine we support Docker and CRI-O, though CRI-O is only supported on Red Hat systems. The Kubernetes features include cloud provider, PodSecurityPolicy, basic auth, OIDC, QoS, GPU, audit log, and the proxy mode; of course there are other important features not listed here.

As to deployment modes: for etcd deployment, if the number of nodes is more than 2,000, your etcd cluster will have a latency problem. You can split it into two clusters, one to store metadata and one to store etcd event data; in this way you improve the performance of etcd. The HA mode I've already talked about.

Cross-platform deployment: Kubespray supported bare-metal deployment first, and it also covers OpenStack, GCP, and other cloud platforms. With Ansible or Terraform you can set up virtual machines and then hand them back to Kubespray, which sets up the cluster. You can also use the TK8 tool, which is written in Go. For multi-architecture deployment you can choose arm64 or amd64; not every plug-in supports both, but Flannel should support arm and arm64. You can try it yourself using this link.

Next, let me talk about Kubespray community development. Over the past year Kubespray has grown very fast. Now we have 6,400 stars, 2,600 forks, 4,400 commits, and 450 contributors, most of whom actually joined us over the past year; that is, over the past year the number of contributors grew by over 200. Now let me talk about the core concepts of the Kubespray community. We want to provide production-ready clusters.
So all components need to be tested, and all components need to be safe to upgrade. All components are HA-ready and scalable, with a minimal yet comprehensive set of applications. At the same time, Kubespray is inclusive: all components run on all supported operating systems, and the container runtime and network plug-in can be chosen by yourself. The architecture is based on Ansible, so you can customize it with your own development. All options are configurable, and the defaults are the K8s upstream defaults. Kubespray does have its own opinion on deployment strategy: for example, it only supports binary or container deployment, not RPM or deb packages, because those would make things too complicated and not transparent enough. And on-premise is the first-class choice for deployment.

Next, let me talk about continuous integration. In the past we deployed on GCE, but, you know, the number of PRs has been increasing, so we received donations from the CNCF, a French company, and OpenStack. Currently our CI runs on Packet, a bare-metal provider. We have created a ten-node Kubernetes cluster there and installed KubeVirt; through KubeVirt we build virtual machines, and through Kubespray we deploy K8s clusters and run the tests. Currently we support six operating systems and seven network plug-ins, and we support on-premise and cloud deployment. Our deployment strategies include all-in-one and separate hosts, that is, separate kube-master, etcd, and kube-node: kube-master and etcd can be deployed on one node, or they can be separated. Next is the HA deployment, and also upgrades, both graceful and non-graceful. That is the CI. With this setup the process has improved a lot. We can run a lot of checks, for example yamllint and also ansible-lint, and now we also check the shell, Python, and Terraform code. All of the parameters can be set, and they default to the Kubernetes upstream defaults.
As for long-term contributions: we have added new OSes, new network plug-ins, and new storage plug-ins. If it's a plug-in that only works on a single OS, we don't accept it. We also don't accept Helm chart deployments. With Helm chart deployment, for example, if you want to install EFK or other software on a production cluster, you cannot satisfy all of the requirements in one step; it's impossible. So Kubespray is now more about lifecycle management. And if you submit a feature, for example a network plug-in, you add your name to it and you maintain it; if nobody maintains it and it becomes faulty, we may remove it.

As for Kubespray releases, we have supported them like this since 2.0. In the past we only had the master branch, and we only maintained the master branch; the other branches, well, they were not that stable, for example 2.7 and 2.8. Later we will support 2.8.1 or 2.8.2: if we find bugs we can fix them on the release branch ourselves, and we can also backport fixes from the master branch. And if you want a release sooner, you can tell us; we can release pretty fast.

Now let's talk about the Kubespray community in 2019. First, we supported the kubeadm experimental control plane, which is provided by the kubeadm community and lets kubeadm set up additional masters. Then there is node-local DNS: for every node there is a local DNS. It's a new project, a local DNS cache serving service discovery. We also added the HAProxy load balancer, ARM support, Clear Linux OS support, and the local path provisioner. For CI, we use KubeVirt to create the VMs, return their IPs to Kubespray, and Kubespray then deploys. And on OVH, which is based on OpenStack, we test the offline environment and the OpenStack environment: we use Terraform to create VMs and let them drive Kubespray.
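As an aside, enabling that per-node DNS cache in Kubespray comes down to a couple of group_vars switches. The variable names and the link-local address below are from memory and should be checked against the repo; treat this as an illustrative sketch only.

```shell
# Sketch of enabling the node-local DNS cache via Kubespray group_vars
# (variable names as I recall them from k8s-cluster.yml; illustrative only).
mkdir -p group_vars/k8s-cluster
cat > group_vars/k8s-cluster/dns-sketch.yml <<'EOF'
# Run a DNS cache on every node; pods resolve against this link-local
# address instead of hitting the central DNS service for every lookup.
enable_nodelocaldns: true
nodelocaldns_ip: 169.254.25.10
EOF
grep 'nodelocaldns' group_vars/k8s-cluster/dns-sketch.yml
```

Because the cache listens on a link-local address, the same kubelet DNS setting works on every node without per-node configuration.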
We also run the K8s conformance tests. As for the roadmap, this is it. Right now, not all of the options are transparent, and documentation is still a weak link. We hope to make the configuration more transparent, or turn more of it into parameters. And we want to support the SIG and the upstream community, for example the kubeadm component configs and etcdadm. For kubeadm component configs, you know, a component config is the configuration of each component. This is not fully released in the community yet; somebody in the community has just raised this direction. With kubeadm, if you want to deploy, you need the ClusterConfiguration plus the kube-proxy and kubelet configuration, and those are recorded in ConfigMaps. If you manage a cluster, the cluster config can run to 200 or 300 lines of config, which is hard to revise. With component configs it becomes programmable and more transparent, and maintenance can be tighter and easier. Then there is decentralized orchestration, which is meant to speed up scaling, and it now also supports automatic scaling. And we support multi-arch and CI. That is the Kubespray community. You can use Slack, GitHub, or WeChat to contact us, and this is the WeChat group QR code for Kubespray; if you want, you can scan it. Now you can ask questions and I will try to answer them.

Audience: You don't support auto-scaling?
We do, but you need to add the nodes manually.
Audience: When you add the nodes, does someone need to install Kubernetes on them?
No, it's automatic. As long as you add the node and run Ansible, there is a new machine in the cluster. It's automatic.
Audience: Really?
You can try it. Thank you. I'm not done yet, actually; there is one more interesting topic. Do you think Kubernetes deployment is slow?
Audience: Yes, it's slow but acceptable. 30 minutes.
How many nodes?
Audience: 10.
30 minutes, because downloading takes a lot of time.
So on Amazon it's okay, but in China it's even unimaginable. I can give you some suggestions. For example, you can use machines with more memory, put the Ansible node near the target nodes, and choose low-latency machines. Also, when you deploy, you can deploy the master nodes first and then the other nodes, using --limit or tags. And now we support the experimental control plane, which simplifies the process. So that was my topic; if you want, we can discuss it. And there is another session about managing Kubernetes in air-gapped and offline environments. So if you want to ask questions, they are not limited to my presentation. Thank you.

Audience: It is an Ansible script?
Okay, it's an Ansible script, so it's a command-line tool.
Audience: There's no API, right?
No API. You could implement an API on top of it.
Audience: So all of the operations, upgrade or scale-up, are based on command-line parameters, right?
Yes.
Audience: Okay. I'm a user, and I have three questions. First, Kubespray downloads binary packages and other things, but gcr.io cannot be accessed from China. How do you deal with that?
You can listen to my next topic. Actually, there is an operation for this: we can download everything to one node first.
Audience: Yeah, we know it; there's the delegate option. But there's a problem: on the delegate node, the downloads go to /tmp, so one day later the temp folder is cleaned and we need to re-download. And the volume is quite big. There is no option to put it in a better folder. That is the problem.
Please speak into the microphone.
Audience: So for the binary files and the images, do we have a repository in China?
Yeah, maybe on the roadmap. There is a file with all of the download links; you can just change them to ones accessible in China. But we haven't found such a mirror yet. Maybe we can talk about that later.
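To make the suggestions above concrete, here is a sketch of two mitigations: caching the downloads once in a directory outside /tmp, and rolling out masters before workers with Ansible's --limit. The inventory path and values are illustrative, and the download variable names are from memory rather than verified against the repo.

```shell
# Sketch: persist the download cache and stage the rollout (illustrative values).
mkdir -p group_vars/all
cat > group_vars/all/download-sketch.yml <<'EOF'
# Fetch images and binaries once, on one delegate node, and push to the rest...
download_run_once: true
# ...and keep the cache out of /tmp so it survives temp-folder cleanup.
local_release_dir: /opt/kubespray-cache
EOF

# Rolling out the masters first, then the workers, via Ansible's own --limit:
step1='ansible-playbook -i inventory/hosts.ini cluster.yml -b --limit kube-master'
step2='ansible-playbook -i inventory/hosts.ini cluster.yml -b --limit kube-node'
printf '%s\n%s\n' "$step1" "$step2"
```

Staging with --limit keeps a slow or failing worker rollout from ever touching the control plane run.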
Audience: Also, if a node is lost, for example it's down or it was deleted by mistake, and we want to change the inventory, will the script fail when running? What do you do?
You can try remove-node first and then reinstall it.
Audience: Can we do that? So you uninstall first and then reinstall, because kubeadm leaves some files behind that can cause the installation to fail.
For example, if one node is missing, in the inventory we just remove that node's line and then run the playbook again, right? Okay.
Audience: The third question: within SIG Cluster Lifecycle, is anyone working on Alibaba Cloud and integrating it into the system?
No, nobody yet. We don't support Alibaba Cloud yet, but if you're interested, you can open a PR to add it. Any other questions?
Audience: I have a question. Did you use Kubespray to upgrade?
Of course.
Audience: When we upgrade with it, we have some pain points. We had four clusters upgraded by it, and the other key cluster is very important, so we don't trust it yet. For example, during the upgrade, while it is downloading the images or YAML, it can hang; and if the timeout is short, it may just skip ahead. I remember there was a cluster of 15 nodes plus 3 masters, and we upgraded for four to five hours. We had to wait there and couldn't do anything else.
Yeah, somebody asked about that: when you switch a node to maintenance mode, it can evict some pods, and that consumes time. If the applications on it don't matter, you can skip that and just remove the node, or manually switch it to maintenance.
Audience: Yes, we can skip the drain step, because draining a node evicts pods and that takes time.
Yeah, it can have some impact. I have this experience: when we upgrade, the images and YAML packages can be downloaded ahead of time with Ansible to each node, and then we use Kubespray to upgrade the cluster.
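The remove-then-rerun flow just discussed can be sketched like this. Hostnames and paths are made up, and the remove-node playbook and its node variable are as I recall them from the Kubespray repo, so treat the commented commands as illustrative.

```shell
# Sketch of recovering from a lost node: tear it down, drop it from the
# inventory, then re-run the cluster playbook (all names here are made up).
cat > hosts-sketch.ini <<'EOF'
[kube-node]
node2
node3
EOF

# 1) Tear down the failed node's state (playbook/variable as I recall them):
#    ansible-playbook -i hosts-sketch.ini remove-node.yml -b -e node=node3
# 2) Drop its line from the inventory:
sed -i.bak '/^node3$/d' hosts-sketch.ini
# 3) Re-run the cluster playbook against the cleaned inventory:
#    ansible-playbook -i hosts-sketch.ini cluster.yml -b

grep -c '^node' hosts-sketch.ini   # one worker left in the inventory
```

Running the teardown before editing the inventory matters: once the host line is gone, the playbooks no longer target that machine, so leftover kubeadm state on it would never get cleaned up.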
Then the downloading time is saved. But entering maintenance mode is really time-consuming. Thank you.
One minute left. Anything else?
Audience: I have two questions. When we use Kubespray, yeah, we're using it, and we have a big cluster, so they are performance-related. One is the HA mode with the node-local proxy: every node has the nginx proxy, right? So how big can it go?
Actually, in the OpenStack community somebody tested this: it can support 2,000 nodes. Tested.
Audience: Yeah. You've mentioned that etcd and the event etcd should be separated to improve performance. You can separate etcd and the event etcd, and I did that before. It seems that, ultimately, these two etcds could only be put on one node, because the K8s API server can only be given one certificate.
Yeah, the certificate is the same.
Audience: You use the same certificate for two etcd clusters on the same node?
Yes.
Audience: With two etcds on the same node, it seems it cannot improve the performance, because they run on the same node.
Yes, but if you have 2,000 nodes in the cluster, you can have one etcd for metadata and the other for events. It will improve the read and write speed and improve the performance.
Audience: Can we put them on two nodes? If we put them on two nodes, it could improve things. Even if we separate them, we still put them on one node; currently we can only put them on one node.
Thank you.
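For reference, the metadata/event split discussed in this last exchange maps onto one kube-apiserver flag, --etcd-servers-overrides, which routes a chosen resource (here, events) to a second etcd cluster. The endpoints below are made-up examples.

```shell
# Sketch of splitting event storage out to a dedicated etcd cluster via
# kube-apiserver flags (endpoints are made up; override format is
# group/resource#servers, so "/events#..." targets the core events resource).
ETCD_MAIN="https://etcd-a:2379,https://etcd-b:2379,https://etcd-c:2379"
ETCD_EVENTS="https://etcd-events-a:2379"

APISERVER_FLAGS="--etcd-servers=${ETCD_MAIN} --etcd-servers-overrides=/events#${ETCD_EVENTS}"
echo "$APISERVER_FLAGS"
```

Since the split is expressed per resource on the API server, both etcd clusters can live on the same hosts or on different ones; moving the event cluster to its own nodes is what actually relieves the disk and CPU contention discussed above.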