Hello, everyone. It's my honor to join the Open Source Summit North America and to present our session today. Today we will talk about an interesting topic: Kubernetes as a Service, the open source cloud on Arm64. We also have Xinliang from Linaro here to talk about this topic. Kubernetes is now running everywhere on top of the cloud, and it has become a very well-known service from the different cloud vendors. But if you want to run a fully open source cloud on the Arm64 platform, the options are less obvious, so we have built a solution for that, and that is what we will introduce today. First, a brief introduction. I'm Kevin Zhao, from the Linaro Data Center and Cloud Group, where I'm the tech lead for cloud infrastructure, working on Kubernetes and related open source projects on the Arm side. And this is Xinliang's information: he works at Linaro too, with me, and we all work on Arm servers for the data center and cloud. Our emails and IRC handles are on the slide, so you can talk to us if you have any questions. Okay, let's come to the next slide. Some of you may wonder what Linaro is, so I'll give a brief introduction. Linaro is an Arm-focused open source engineering organization; we are dedicated to advancing open source projects for the Arm ecosystem, not only for embedded systems but also for the data center and cloud. Here is an introduction to the Linaro Data Center and Cloud Group (LDCG). We focus on the server ecosystem, and we have several member companies, listed at the bottom of the slide. We work on several well-known open source software projects for Arm servers. On the architecture side, we work on UEFI, ACPI, and server firmware enablement. Besides that, we have a big data group working on Apache Bigtop, Hadoop, and Spark, and we have the cloud infrastructure group, which is where Xinliang and I work.
We focus on Kubernetes, OpenStack, and Ceph in the Arm ecosystem. Besides that, we are researching scalable AI frameworks and other novel approaches. We also run the Linaro Developer Cloud: enterprise-class Arm-powered servers hosted in the UK, available for development and CI. We offer VM-based instances and also Kubernetes as a service. So if you are interested in working with the Arm-based ecosystem, you are welcome to register; it is a totally free cloud, run for the benefit of upstream. Here are several good projects we are working on; I have covered them already, so I will go through these slides quickly. Next is general information about our Linaro Developer Cloud. The Developer Cloud is built entirely on top of OpenStack and Ceph; for building a private cloud platform, OpenStack is really the only complete open source solution today. On top of OpenStack, we offer an overlay service for Kubernetes, and now we are working to enable the Arm cloud ecosystem to do development and testing and to enable CI/CD for this architecture. We are OpenStack Powered, and all our resources are open to the open source community. Okay. Today's talk has three parts. Since we are talking about an open source Kubernetes service on Arm64, we cannot avoid the infrastructure-as-a-service layer. On the public cloud side you can choose a famous vendor to get the infrastructure service, but if we want to build a totally open source platform, we need to configure the network and stand up this infrastructure entirely ourselves. So in the first part, we will talk about the infrastructure service.
In the second part, Xinliang will introduce what we have done to make the Kubernetes service work on top of OpenStack on the Arm64 platform. And in the last part, we will talk about how the Kubernetes PaaS level and the OpenStack IaaS level integrate with each other: how Kubernetes leverages the OpenStack resources to offer capabilities to the end users. So now I will hand over to Xinliang to talk about the infrastructure level first. Okay, thanks. Please help me switch to slide seven. Let me come to the first part. In this part, we will share how we built our infrastructure service. Next slide. This is our infrastructure service solution structure. We use OpenStack to build the infrastructure service, and we build and operate it ourselves; it is all open source. On the left-hand side, at the bottom, you can see we run all our services on Arm64 servers from various vendors. For the storage, we use Ceph. For OpenStack, we use the latest stable version, and we keep upgrading once a new stable version comes out. On the right-hand side are our monitoring tools and our deployment tools. That is the infrastructure service structure. Okay, next slide. This is our network solution. As you can see, we mainly use three networks. On the top is the external network, used by the network nodes to provide Internet access. In the middle is the control network, used by each node for control traffic, meaning the API flows of the system. And the other one is the internal network, used by the compute and storage nodes for the data and application flows. That is the network solution we have built. Next, this is our deployment solution. We use the Kolla container-based deployment method, which is very cool and amazing.
Kolla packages each component into a container image, which makes deployment quicker and easier, and it also supports online upgrade. We have released the Arm64 Docker images at this Docker Hub link, all built from upstream Kolla. So far we have released images for the OpenStack Rocky, Stein, and Train versions, and for Ceph we have released the Luminous and Nautilus versions. We have also developed a tool to manage our bare-metal machines; it is something like Ubuntu's MAAS bare-metal management solution. That is the deployment solution. Next slide, please. This is the monitoring solution. You can see our stack is Grafana and Prometheus, plus some OpenStack exporters. We developed the OpenStack exporters to collect OpenStack metrics for Prometheus, so this dashboard shows the OpenStack service status. That is the monitoring solution. Next. As of now, we have obtained the OpenStack Powered certification; we passed the whole OpenStack interoperability test suite. We have also donated our cloud to the OpenStack CI, and upstream is now using our cloud to do Arm64 server validation. So that is our infrastructure service solution. Next slide. In the second part, we will talk about the Kubernetes service on top of our OpenStack. Next slide. This is the Kubernetes service solution structure. Look at the middle: as you can see, we mainly use the Magnum project to build our Kubernetes service, which utilizes Heat to create the infrastructure resources, including the load balancer service. And on the left-hand side is the OpenStack cloud provider, which lets Kubernetes use the infrastructure resources such as volumes, the load balancer service, and so on. That is the Kubernetes service solution structure. Next.
This slide shows the Magnum project and how we build our Kubernetes service with it. For Magnum, as you can see, there is a RESTful API service that serves the users' requests and passes them to the conductor, which handles the Kubernetes cluster creation and creates the required infrastructure resources. Magnum mainly uses Heat templates to create the lower-layer infrastructure resources, and the Kubernetes services all run on the VM instances shown on the right-hand side. That is the Magnum architecture. Next, I will introduce how we create a Kubernetes service. To create a Kubernetes cluster, first we need to create a cluster template, either via the command line or from the Horizon UI. We can specify many parameters; for example, the Kubernetes version, the SSH keypair, which network driver to use, and so on. Next slide, please. Once we have a template, we can create a Kubernetes service, that is, a Kubernetes cluster. We can also pass some parameters on top of the template, such as how many master nodes and how many worker nodes we want to create. That is a brief introduction of the Kubernetes service. Next, I would like to welcome my colleague Kevin to give more details on how Kubernetes and OpenStack integrate. Kevin, thank you. Okay, thank you, Xinliang. Next, I will talk about the detailed creation process. For a production-level use case, it is essential for the Kubernetes cluster to have multiple master nodes, right? So we need to create load balancers in front of the different masters. Traditionally, the public cloud vendors will offer the load balancer service for you; but we have built our open source cloud on top of OpenStack, so we need OpenStack itself to offer the load balancer service.
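To make the two-step flow Xinliang described concrete, here is a rough Python sketch of what a cluster template and a cluster-create request carry. The field names loosely mirror Magnum's API but are assumptions for illustration, not its exact schema:

```python
# Sketch only: a Magnum-style cluster template holds the reusable
# choices, and a cluster-create request references it plus node counts.
# Field names are illustrative assumptions, not Magnum's real schema.
template = {
    "name": "k8s-arm64-template",
    "coe": "kubernetes",            # container orchestration engine
    "image": "fedora-coreos-arm64", # guest OS image for the cluster VMs
    "keypair": "mykey",             # SSH keypair injected into the VMs
    "network_driver": "flannel",    # or calico, etc.
    "labels": {"kube_tag": "v1.18.0"},
}

def make_cluster_request(template, masters, workers):
    """Build a cluster-create request body from a template (sketch)."""
    if masters < 1 or workers < 1:
        raise ValueError("need at least one master and one worker")
    return {
        "cluster_template": template["name"],
        "master_count": masters,   # >1 implies a load balancer in front
        "node_count": workers,
    }
```

With a template in place, every new cluster request only needs the counts, which is exactly why the template step comes first.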
In the left part of the picture, you can see the general process for creating a load balancer, and on the right side is the data model for the load balancer service. To give a concrete example: suppose we need to expose two ports. Listener one listens on port 80 and listener two listens on port 443, and each listener serves its own traffic. Each listener has a pool. Pool one is served by an amphora: the amphora is a virtual machine running HAProxy, which proxies the HTTP requests to the different backends. At the bottom of the picture you can see two VMs, each running the application; both VM one and VM two have the application listening on ports 80 and 443 as pool members. So this is the overall data model of Octavia: a request arrives at listener one and is proxied to a real VM backend. That is the general data model for the load balancer service, and it is what we use to create the load balancers for the Kubernetes cluster. We know that the Kubernetes API server usually uses port 6443, and etcd uses 2379, and they need to be exposed both for outside API requests and for access from inside the Kubernetes cluster. So we create Octavia load balancers for the Kubernetes API and for etcd: create the listeners, create the pools, and add the members to the pools. In this picture you can see the whole load balancer creation; this is a fundamental step in creating the Kubernetes cluster. Okay, let's come to the next slide. Next are the overall steps for the Magnum cluster creation, with the load balancer creation inserted into the overall process.
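The listener/pool/member data model just described can be sketched in a few lines of Python. This is an illustrative toy, not the real Octavia API: the class names mirror Octavia's concepts, and the amphora's HAProxy round-robin behavior is reduced to a generator.

```python
# Toy model of Octavia's data model: a Listener owns a Pool of Members,
# and the amphora (HAProxy VM) round-robins requests across the pool.
from dataclasses import dataclass, field
from itertools import cycle

@dataclass
class Member:          # a backend VM address, e.g. a Kubernetes master
    address: str
    port: int

@dataclass
class Pool:            # the set of backends one listener proxies to
    members: list = field(default_factory=list)
    def __post_init__(self):
        self._rr = cycle(self.members)
    def pick(self):    # round-robin selection, as HAProxy would do
        return next(self._rr)

@dataclass
class Listener:        # a frontend port on the load balancer
    protocol_port: int
    pool: Pool

# The Kubernetes layout from the talk: one listener for the API server
# (6443) and one for etcd (2379), each backed by the three master VMs.
masters = ["10.0.0.11", "10.0.0.12", "10.0.0.13"]
api_listener  = Listener(6443, Pool([Member(ip, 6443) for ip in masters]))
etcd_listener = Listener(2379, Pool([Member(ip, 2379) for ip in masters]))
```

Picking from `api_listener.pool` repeatedly cycles through the three masters, which is exactly the HA property the cluster relies on.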
First, when we want to get a Kubernetes cluster, we must prepare the resources, because OpenStack, as the infrastructure-as-a-service layer, is multi-tenant. So Magnum needs to set up the virtual network for the tenant and configure the security group rules to make it secure. After that, Magnum signs the TLS certificates for the Kubernetes cluster, just as Kubernetes tooling normally does when it signs a cluster's TLS certs. Then it calls Octavia to create the load balancers and configures the routes. After the load balancers are up, it creates the master nodes, and that process is quite simple: it provisions the virtual machine, and once the VM is provisioned, cloud-init runs a script to do the specified jobs. The significant job is to use systemd to launch the Kubernetes components, so the kubelet, kube-apiserver, and controller manager are all launched and managed by systemd, together with Podman. After that, the Kubernetes master is up, and the other essential components, for example the network driver containers, are launched afterwards. Once the master creation finishes, it picks up the process of creating the workers. The workers follow the same process: systemd provisions these essential system containers, and then each worker joins the master to form the whole cluster. The difference is that we use systemd and Podman to manage all the containers, which is a little different from the traditional Kubernetes static pod method. That is because we use Fedora CoreOS as the host operating system, and that operating system has very good compatibility with systemd and Podman.
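As a rough sketch of the pattern just described, cloud-init drops a systemd unit that runs a control-plane component as a Podman container. The unit name, image path, and flags below are illustrative assumptions, not the actual output of Magnum's driver:

```yaml
#cloud-config
# Hypothetical sketch: cloud-init writes a systemd unit that runs the
# API server under Podman. Names and image tags are assumptions.
write_files:
  - path: /etc/systemd/system/kube-apiserver.service
    content: |
      [Unit]
      Description=Kubernetes API server (Podman container)
      After=network-online.target
      [Service]
      ExecStart=/usr/bin/podman run --rm --net=host \
        registry.example.org/kube-apiserver:v1.18.0 \
        --etcd-servers=https://127.0.0.1:2379
      Restart=always
      [Install]
      WantedBy=multi-user.target
runcmd:
  - systemctl daemon-reload
  - systemctl enable --now kube-apiserver.service
```

The same write-unit-then-enable pattern repeats for the kubelet and controller manager, which is why systemd, rather than static pods, ends up supervising the whole control plane.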
So it is very easy to use and manage. Okay, that is the general picture of the cluster creation, right? You may notice it is a little complicated, and much more complicated than creating a cluster by hand, right? That's true. But a standalone deployment cannot coordinate the different resources with the cloud provider side; it can only deploy a generic cluster. If you want to interact with the cloud infrastructure underneath, you need to solve all those problems yourself, so if your cloud is OpenStack, Magnum is the easy solution. Here we come to the overview slide. Generally, in this overview you can see three networks. The first is the external network, for external, public access. Then there is the private network: all our cluster hosts are connected to the private network. And the green lines are the container network, the usual overlay network between containers. You can also see there are three load balancers; these are the key points of this cluster. There is an API load balancer and an etcd load balancer: the API load balancer serves the Kubernetes API servers, and the etcd load balancer serves the masters' etcd. They provide the high availability: even though the workers are connected to masters one, two, and three on the same private network, a worker does not talk to a master directly; it connects to the load balancer first. So these load balancers first act as the internal HA endpoints for the workers inside the private network. And also, the users may want to expose the cluster and its applications to the outside.
For that, the user needs to attach a floating IP to the load balancer, and after that our Kubernetes cluster has outside access: you can use kubectl to connect to it from the remote side. And on the right you can see an ingress load balancer; that one serves the applications running inside the Kubernetes cluster that want to be accessed from outside. I will cover the details in the following slides. Okay. So what is actually different on Arm64? We have talked a lot about OpenStack and Kubernetes on both sides, so what have we done to make this whole Kubernetes service happen? First is the work we have done to give OpenStack and Ceph multi-architecture support. That included fixing a lot of problems, a lot of deployment issues, and a lot of packaging and enablement work. Besides that, we validated the whole workflow and enabled the whole Kubernetes service framework on top of OpenStack on the Arm64 platform. After that, we sorted out many problems around production-level open source cloud upgrade and maintenance. We contribute all of this back upstream, both for OpenStack and for Ceph. And now we are working toward the Kubernetes conformance tests on this Arm-based open source cloud. Okay. So here is perhaps the next step for our Kubernetes as a service. As we saw, to stand up a whole Kubernetes-as-a-service cluster, we need to create a lot of load balancers and create the master VMs for HA, right? But that introduces some problems. The first is the slow creation process. The other is that Magnum drives everything through scripts.
Because of that, we need to change and maintain a lot of code whenever the OS is upgraded, when Kubernetes is upgraded, or even when the whole deployment framework is upgraded; it has a lot of dependencies. Also, if you want an HA cluster, you need to deploy three VMs for the masters, which is a high cost for the end users and not very convenient. Because of these disadvantages, we have been thinking about a new approach to manage Kubernetes: a hosted control-plane model, running Kubernetes control planes inside Kubernetes. It is now a popular method in private cloud deployments, and as you may know, several companies are also using it; but in the OpenStack world, with OpenStack providing the infrastructure service, this would be the first time. I'll give a brief introduction. In this design we divide the clusters into the seed cluster and the customer clusters. The seed cluster is launched by the admin users in the management network, and the customer clusters are offered to the other tenants who want to deploy their own Kubernetes. There are two key parts to consider. First is the control-plane components: the seed cluster has several worker nodes, and those worker nodes serve as hosts running the customer clusters' control-plane components. That is, a customer cluster's API server, etcd, scheduler, and so on run as containers inside the worker nodes of the seed cluster, and there is a load balancer connecting the seed cluster network with the customer cluster network. The biggest benefit of this method is that it removes the VM cost of the masters for the customer clusters.
It also saves a lot of time when provisioning the customer clusters. The seed cluster itself can still be created with the traditional method, but for a customer cluster the control-plane containers run on the seed cluster's worker nodes, while the customer's own worker nodes still run on new VMs. This solves the high-availability problem, because the control-plane containers are maintained by the seed cluster from the admin side. But this solution may have one problem that we are working to eliminate: the cluster is heavily dependent on the HA load balancer, the Octavia service, so we need to verify that the cluster will not have performance issues there. Okay. Next, we come to the Kubernetes and OpenStack integration. This matters because our Kubernetes cluster is not free-standing: it runs on top of the OpenStack and Ceph infrastructure. So Kubernetes may need several resources from the infrastructure: it may need volumes, it needs the authentication capability from the identity service, and it may need network capabilities. Let's see how they integrate with each other. To realize this, we have several very important components; each of them is a Kubernetes controller running inside the Kubernetes cluster. I have listed the components here, and I will pick several important ones to introduce. The first is the Cloud Controller Manager. As we know, the cloud provider code has been refactored: because it was closely coupled with the controller manager, the API server, and the kubelet, it was split out from the first in-tree cloud provider into the new out-of-tree one, and it contains several controllers. Each of the controllers serves different resources.
You can see in the blue block there is a cloud node controller. It not only implements the traditional node controller functions but also adds several new ones; for example, it monitors the infrastructure node status and does the CIDR management. There is also a route controller: because we are not on a traditional flat layer-2 network, but have different virtual private networks, the route controller is essential at large scale on both the public cloud side and the private cloud side. After that there is a service controller, which serves the load balancers and does some interesting things I will talk about later. And the last one is for the volumes: if you use the cloud provider to create volumes, on the OpenStack side we use the Cinder CSI driver to request the resources from OpenStack. Okay. Next is a brief introduction and a general picture of Octavia. As we discussed before, Octavia is the load balancer service, and it plays a very significant role in offering external network access to the cluster. Pay attention to the amphora: the amphora is a VM running HAProxy, and it provides the load balancing capabilities to Kubernetes. Now let's see how to make an application running inside your pods accessible to outside, external users. First, look at the pods: pods one and two run a tea server, behind a tea Service, and pods three and four run a coffee server, behind a coffee Service. A Service is an abstract way to expose an application running on a set of pods; it is a very well-known concept in Kubernetes. And we also have an Ingress object.
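Concretely, the tea/coffee setup being described might be written as the following manifests. This is a sketch: the ingress class annotation is an assumption about how the octavia-ingress-controller is wired up, and `coffee-svc` is assumed to be defined like `tea-svc`.

```yaml
# Sketch only: a NodePort Service for the tea pods, plus an Ingress
# routing /tea and /coffee. The "openstack" ingress class is an
# assumption about the octavia-ingress-controller's configuration.
apiVersion: v1
kind: Service
metadata:
  name: tea-svc
spec:
  type: NodePort        # exposes the Service on a port of every node VM
  selector:
    app: tea
  ports:
    - port: 80
      targetPort: 8080
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: cafe-ingress
  annotations:
    kubernetes.io/ingress.class: "openstack"   # assumed controller class
spec:
  rules:
    - http:
        paths:
          - path: /tea
            backend:
              serviceName: tea-svc
              servicePort: 80
          - path: /coffee
            backend:
              serviceName: coffee-svc          # assumed, defined like tea-svc
              servicePort: 80
```

The NodePort type is what makes each Service reachable on every node VM, which is the hook the Octavia load balancer needs.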
Ingress allows external users and clients to access the HTTP services. You can see we have defined an Ingress object where the path /tea has the tea Service as its backend and the path /coffee goes to the coffee Service, each with its own service port. First we define the Services with type NodePort, so each Service is easily reachable on a node port of the VMs; after that, each of them has a node port exposed. Then we have a very important component called the octavia-ingress-controller. It runs inside Kubernetes and watches for Ingress object changes; when it sees a change, it calls Octavia to create an amphora and a listener, that is, to create a load balancer connected to this Ingress object, and then it updates the HAProxy rules. After the creation succeeds, it attaches a floating IP to this load balancer, and the load balancer serves the different node ports. So the whole chain is: traffic comes in via the floating IP to the load balancer created by Octavia, goes to the target node port, and then reaches the pod. With this method we can complete the network solution and expose the applications running in the pods. Okay, so that is our whole session today. Thank you very much for attending, and any questions are welcome. If you have any interest in the open source cloud on Arm64, we welcome you to register on the Linaro Developer Cloud. Thank you. And we have one question: will data centers move away from Intel to Arm?
Actually, we can see a good change happening: more and more cloud vendors are trying to use infrastructure beyond Intel, several good vendors now offer Arm server products, and the performance has been steadily improved by the open source community, both by our member companies and by other upstream contributors. So I think it's a good trend, and there will be more and more cloud vendors offering Arm-based servers and Arm-based cloud services. At the least, it gives users a second choice, I think.