Hi, hello, welcome to the session. This session is an introduction to SIG Multicluster in detail, with updates from the community members. I am Shashi from Huawei. I have been an active contributor to SIG Multicluster, particularly in Federation. I am joined here by Shuntan from IBM China, who has been contributing a lot recently. He will probably speak in Chinese, so you may understand the deep dive better; he will cover that part. Coming to the agenda, this is what we have for today. First, the mission — this has already been spoken about on various occasions, but we will still reiterate it for the benefit of the crowd. Then we will talk about the active projects under the SIG. This is not an exhaustive list of the projects that are inside the scope, but these are the two we have been actively involved in, so this is what we will talk about, along with what we have been doing in the recent past. Then we will give a deep dive into Kubernetes Cluster Federation and the concepts we have built around it in recent times. And we hope to have an interactive Q&A session; we will probably leave a lot of time for that. Coming to the SIG Multicluster mission: this is a special interest group focused on solving common challenges related to application management across multiple clusters. We are responsible for designing, implementing and maintaining the APIs, tools and documentation around or related to multi-cluster. This does not only include actively automated approaches like Kubernetes Cluster Federation; the scope also includes many other multi-cluster systems, like workflow-based continuous deployment systems such as Spinnaker. What we intend to do within the SIG is basically to provide basic building blocks to address multi-cluster scenarios. The problem domain is multifaceted and we do not expect everything to be solved within one project or so. There are many existing projects — Submariner, to name a recent one — which try to address things like multi-cluster networking. We are very much excited to hear about that; there is a session later today about Submariner, so I would appreciate it if everybody could attend that too. This has been outlined on our community webpage as well, so you can have a look at that. Now the active work efforts: these are what is actually being pursued within the SIG right now. Really, the SIG is kind of a dispersed team; there are multiple communities right now within this domain — Submariner, to name one — and there are many others. Within the SIG, what we basically do is build basic building blocks. We don't try to provide a complete, end-to-end solution for everything; we just build the basic building blocks common to all projects, which can then be reused by the systems that are implementing multi-cluster scenarios. Coming to cluster identity: this was identified quite a long time back, but it was kind of stalled. Recently we have been feeling that it needs to be in place. Many other projects and systems have gone ahead and implemented their own cluster identity, which becomes a hurdle when you want to mix or reuse components from other projects. So basically the motivation behind cluster identity is that right now a Kubernetes cluster doesn't offer a unique identity by which the cluster can be identified.
This is particularly important when you are dealing with multiple clusters: when you have multiple clusters, you might need to know which cluster a resource, or the logs and events related to it, is coming from. So it has been a really important thing, and there has been a lot of discussion going on in the SIG. It is almost finalized; probably we will see an API within Kubernetes to identify a cluster, so that could be added soon. The next project, which we are all pretty much involved in, is KubeFed. This has quite a long history, and recently — like a month or two ago — we renamed Federation v2 to KubeFed. It started from Federation v1, where we initially tried to stay compatible with the core Kubernetes API, but that caused quite a lot of problems, so we moved on; it is kind of an evolution right now. KubeFed is not a monolithic kind of system: we try to provide only the basic building blocks and to be modular enough that it can be used alongside other projects — maybe you can go ahead and use KubeFed along with Submariner or whatever project is required. So we try to be lightweight and to offer only what you need, so you can choose which pieces you would like to use. Okay, so KubeFed is basically a short form for Kubernetes Cluster Federation; we recently renamed the project to KubeFed, and the tool also moved to kubefedctl. And this is our vanity URL, which is currently available. So let's talk about KubeFed: what is it a fit for right now? Basically, if a user wants to coordinate configuration across multiple clusters — it could be a few clusters, it could be hundreds of clusters — KubeFed could be the right choice; you can seamlessly coordinate the configuration. The configuration here basically means Kubernetes resources. So you have hundreds of clusters and you want to manage them together from a single API server — this would be an ideal fit. Also, this approach uses active reconciliation: it actively keeps reconciling the desired state of the apps or resources. What we currently have is the basic building block, so you can go ahead and build higher-level features around it, like geographic redundancy, a multi-cluster scheduler, or cross-cluster service discovery. We are yet to see all of those built on top of the basic building block. Currently we care about the stability of the project, and we are targeting the beta release so that you can use it and give feedback. A lot of users have already started using it and we have received quite a lot of feedback; our RC2 has been extended, and just yesterday we released RC3. So you can probably expect a beta to be released soon, and beyond that we will be supporting backward compatibility for any features we are going to introduce. Okay, let's talk about what we have been doing in the recent past. To do that, I'll take you through a demo to explain what is available in the toolkit. We'll deploy a KubeFed control plane, then enable the needed federated types, and then federate a simple application just for the demo. We'll also demonstrate how to override things within a particular cluster, and how you can control the placement of the resources to specific clusters.
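Roughly, those demo steps look like the commands below. This is only a minimal sketch under some assumptions: the manifest and resource names (demo, index.html, nginx-deployment.yaml, nginx-service.yaml) are hypothetical stand-ins for what is shown in the demo, and the --contents flag of kubefedctl federate is quoted from memory, so the exact flags may differ between releases.

# Create the demo namespace and a simple nginx app on the host cluster
kubectl create namespace demo
kubectl -n demo create configmap web-content --from-file=index.html
kubectl -n demo apply -f nginx-deployment.yaml -f nginx-service.yaml

# Convert the namespace and everything in it into federated equivalents;
# the KubeFed controllers then propagate them to the member clusters.
kubefedctl federate namespace demo --contents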
Okay, let's jump to the demo. So I have a control plane already deployed here. I have two clusters, cluster one and cluster two, and I have a control plane deployed. Basically the control plane is the core federation APIs plus the controllers which are running. Just a minute — this network is a little slow, sorry for that. Okay, so we start by creating a namespace called demo, and then create a config map, a deployment and a service. This is a very simple nginx app: the deployment mounts the file from the config map and serves it as a web page. Now, what we have done is build tooling around this to take a Kubernetes resource, wrap it, and convert it to a federated type. So what I'm trying to demo here is that the namespace and the contents of the namespace will be federated, up-converted to federated types. Okay, so they have been converted: the CRDs have been installed, and the resources have been converted and created on the federation API server. Let's go ahead. This is the federated deployment type which we have defined. It has a directive called template, which basically wraps around your Kubernetes resource — for this FederatedDeployment, there is a Deployment inside the template. Then we have the placement directive; by default we place the resource on all the clusters within the federation. And there is another directive called overrides; by default it is not set, we'll show that later. You can also see that there is a status section with the propagation status, showing to which clusters the resource is being federated. Okay, right now it is federated to clusters one and two. This is a NodePort kind of service, so I'm just querying it with curl — there's a lag, sorry for that. Okay, I'm querying cluster one, and it responds. The same thing with cluster two — the network is unstable actually, sorry for that. Okay, so the same thing is replicated across the clusters. Next up, I'll show how we override something. To do that, we are going to patch the federated config map: we override just one particular field, the content, to say "KubeCon Shanghai", and we do this only for cluster two. It takes a little time to go through the reconciliation loop, so just give it a moment; the cluster should pick it up and restart the pod. Okay, let's move on. Next, let's look at placement. To do this, we are going to apply a label to the KubeFedCluster. Right now you can see in our federated deployment that it has been deployed to cluster one and cluster two by default — we have two clusters in the federation. Okay, so we are going to label our cluster one with region equal to China East, and then update the placement to say: deploy only to clusters which have this label, region equal to China East. Yeah, now you can see it is no longer deployed to cluster two; it only deploys to the matching cluster. Okay, let's move back to the slides. This is the architecture of KubeFed. Basically we have a type configuration: any Kubernetes resource we kind of up-convert into a federated type, and that type basically has template, placement and overrides as its directives.
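To make those directives concrete, here is a minimal sketch of such a federated type. It assumes the types.kubefed.io/v1beta1 API group used by KubeFed around this release, plus hypothetical names and a hypothetical label value standing in for the demo's "region: China East"; the exact group, version and values may differ between releases.

kubectl apply -f - <<EOF
apiVersion: types.kubefed.io/v1beta1
kind: FederatedDeployment
metadata:
  name: nginx
  namespace: demo
spec:
  template:                  # the plain Kubernetes Deployment, wrapped as-is
    metadata:
      labels:
        app: nginx
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: nginx
      template:
        metadata:
          labels:
            app: nginx
        spec:
          containers:
          - name: nginx
            image: nginx
  placement:
    clusterSelector:         # place only on member clusters carrying this label
      matchLabels:
        region: china-east
  overrides:                 # per-cluster deviations from the template
  - clusterName: cluster2
    clusterOverrides:
    - path: "/spec/replicas"
      value: 3
EOF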
This is a new API type, a CRD, which we create during enablement — we have provided tooling for that, to convert any Kubernetes resource to the federated version of it. Then there is the cluster configuration: which clusters are in your federation. You need to do a join for a cluster to join your federation. Using this information, there is a controller which propagates these resources to those clusters; this is an active reconciliation model. Above this you can build your own custom controllers. We have sample working prototypes, like cross-cluster service discovery and scheduling, which are built on top of these basic building blocks. These have not yet moved to beta, but depending on the features we will move them later. Then there is a propagation status which is collected, so you don't have to visit each and every cluster to look at what the status of the resource is. On tooling, as I said, we have built tooling to enable any given type without changing code: you have a Kubernetes resource, we can change it to a federated resource and deploy it to the federation control plane. This particular "federate" is a single command — you can use it on the command line to do the conversion; it's really just a templating kind of tool. As for what we have done in the recent past: initially, when we started off with Federation v2, we separated these as separate CRDs — the federated type, placement and override — for reasons like letting different actors control them. But we realized it is a harder problem, for both developers and users, to group all of these for a single resource. So later — and this is the change we did before this beta release — we put all of the directives within a single unified federated type. Now we also have a status which gives its propagation status. Also, as part of moving to beta, we conformed to the Kubernetes API spec for the few federation CRDs which are required for the functioning of the features we have developed. Coming to — sorry about this PPT, actually, I couldn't reach the Google network — coming to future work: we plan to improve the usability and documentation so that the learning curve is smooth. We want to make it as easy as possible to move from deploying to a single cluster towards deploying an app to multiple clusters. We plan to do a plugin to auto-convert the manifests while deploying, so that the default behavior could be to deploy onto multiple clusters. We also plan to work on some higher-level, user-facing API, but that is a harder problem; there is other work within the community, like an app CRD, which tries to address these things. And we also plan to implement a full reconciliation kind of model, similar to GitOps, for multiple clusters. I would now invite Shuntan to give a little more detail on the internal features. In Kubernetes, after we create resources, the controllers will help us do things according to the resources we have described. Federation actually follows the same method: we have a tool called kubefedctl, and we also have a control plane. But with kubefedctl we need to do three things, because it is different from kubectl. First, we need to let KubeFed know which clusters are joined to my federation. Second, we need to let federation manage certain types of resources. And the third step is to take the Kubernetes resources we have already created and hand them over to federation, so that it distributes and manages them across the clusters.
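These three steps map onto kubefedctl commands roughly as sketched below. This is only a sketch, assuming cluster context names like cluster1 and host plus the demo's resource names; the exact flags may vary between kubefedctl releases.

# 1. Join a member cluster into the federation (the control plane runs on the host cluster)
kubefedctl join cluster1 --cluster-context cluster1 --host-cluster-context host

# 2. Let KubeFed manage a resource type; this generates the FederatedDeployment CRD
#    and a FederatedTypeConfig telling the controllers to watch it
kubefedctl enable deployments.apps

# 3. Hand an existing resource over to the federation by converting it into its
#    federated form, which the sync controller then propagates to member clusters
kubefedctl federate deployment nginx --namespace demo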
You can think of this as two parts, and there are four aspects in total. Let's first look at the command line here. We need to let a normal cluster connect with our KubeFed control plane, and the joining is done through kubefedctl join. Imagine we need to connect a member cluster to our federation — what do we need to do? We need to let KubeFed know which clusters it can use, and at the same time we need to let it know how it can create resources in those clusters. Okay, let's take a look. First of all, kubefedctl join will create a service account in the member cluster. Then it will bind some RBAC permissions to that service account so that it can create resources in the member cluster. Then it will create a KubeFedCluster, our custom CRD. In this CRD we put the member cluster's API endpoint, the CA bundle, and the service account token we just created. With these three pieces of information, our KubeFed controller knows which address to use to reach the corresponding member cluster, knows to use this CA bundle for secure communication, and knows which identity — the service account token — to use when operating on that cluster. So we can say that when this KubeFedCluster resource is created, the cluster enters our KubeFed control plane's management; it forms a federation together with the KubeFedClusters we already have, and you can keep adding more. Okay, after we have added KubeFedClusters, we have formed a federation of clusters. What we need to do now is send certain kinds of resources to these clusters, so we have a function to say which resource types we want to send to the different clusters: kubefedctl enable. For example, if you want to send Deployments to these member clusters, you can enable the deployment type. It will then help us create a FederatedDeployment CRD, a generated definition that describes how a common Kubernetes resource — a Deployment — can be distributed to these different clusters with one and the same definition. At the same time, it will create a FederatedTypeConfig resource, which connects the FederatedDeployment CRD we just created and the native Deployment type; it simply records the mapping between the two. Okay, once we have this, we can take a Deployment that you created in a Kubernetes cluster and federate it. We just talked about the FederatedDeployment: with it, we can put a Deployment onto different clusters. There is another resource that is not quite the same — the namespace. A namespace is a container; it holds the other resources. Imagine I created a test namespace and distributed it only to my cluster 1 and cluster 2. Now you create a FederatedDeployment, still in the test namespace, and at the bottom you say you want to put your Deployment onto cluster 1, cluster 2 and cluster 3. There will be a conflict at this point. So the FederatedNamespace places a limit: it scopes this group of resources for you. When you actually use the federated clusters, you cannot exceed the limit of the FederatedNamespace. This is convenient for management: an administrator can create a FederatedNamespace for a certain set of clusters, so that the namespace only lands on those clusters, and when you use it, the member-cluster range will never exceed the limit of the FederatedNamespace.
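A minimal sketch of such a FederatedNamespace follows, assuming the same types.kubefed.io/v1beta1 group and the example names used here (details may differ between releases); note that the FederatedNamespace is created inside the very namespace it federates.

kubectl apply -f - <<EOF
apiVersion: types.kubefed.io/v1beta1
kind: FederatedNamespace
metadata:
  name: test
  namespace: test        # lives inside the namespace it federates
spec:
  placement:
    clusters:            # the namespace, and everything federated within it,
    - name: cluster1     # stays within these member clusters
    - name: cluster2
EOF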
We just talked about the distribution of resources. When we create a Kubernetes resource and want to hand it over to the federation's management, we need a way to do that, and this command is one of them: kubefedctl federate. This command will help you create, for example, a FederatedDeployment: if you point it at a Deployment object, the Deployment will then be distributed to the member clusters. You might say: I have a whole namespace, I created all my objects in that namespace, and I want to distribute them all to the member clusters — you can use federate on the namespace together with its contents to achieve this goal. In fact, what we just talked about is all on the client side; our controllers also need to do some things, otherwise nothing would actually happen. When we used kubefedctl enable, we created a FederatedTypeConfig — for example, a FederatedTypeConfig for Deployment. The controller manager watches it and creates a sync controller for that type. With the sync controller in place, once the FederatedDeployment has been created it will distribute the Deployment to the member clusters, and it will also keep reconciling to ensure that everything we have defined in the FederatedDeployment is present in our member clusters. In fact, the most interesting part is the implementation. Besides Deployments and Services there are also CRD resources, and we cannot write dedicated code for every single resource type. So we handle the resources as unstructured objects — you can think of it as a generic, transparent structure — and on top of this structure we have one common piece of logic that can handle Deployments, Services and the CRDs of the federated types, all with the same code. You may ask how we determine whether an object has been updated: when the FederatedDeployment changes, is the change really reflected in all the Deployments in our member clusters? To solve this problem, we compute a simple checksum over the template and the overrides; if the checksum has changed, we re-propagate the Deployment resources as soon as possible. If you are interested, you are welcome to interact with us more, and if you have any questions — okay. Thank you.