So, let's get started. Today my topic is the topology-aware service routing of Kubernetes. The name is quite abstract. Those of you who are familiar with Istio may know its locality-aware load balancing; this new feature is more like the native Kubernetes counterpart, supporting node-local, zone-local, region-local, or any other kind of locality you define.

First, a brief self-introduction. My name is Du Jun. I work upstream in the community as a maintainer for Kubernetes, Istio, and KubeEdge, and I am in charge of CNI plugins and other work in Kubernetes SIG Network. I have already published two books, and the third is about to come out next month.

We will cover four aspects today: the design of affinity across Kubernetes as a whole, affinity in the network, why we are doing this and the specific requirements, and the actual design.

In Kubernetes, affinity shows up in three main places. First, scheduling affinity, which I worked on with the community in 2016 while at Huawei; I think you all know about it. There is the concept that a pod is affine to a node, and likewise that pod A and pod B can be affine to each other. Second, there are local volumes. What does that mean? I have a pod that needs a local storage device. The PVC is bound together with my node, but after scheduling we may find that the chosen node has no local volume of the kind I need, so volume binding has to be delayed until scheduling has happened. The third case: at Huawei, for example, we have two availability zones, AZ1 and AZ2. For a client in AZ1, I hope its requests are served entirely within AZ1, because cross-zone traffic costs a lot. So we are thinking we can add this new locality feature between services and pods; to be specific, topology-aware service routing. One year ago we already finished the design, but we didn't find the proper partner to begin the development together. By now we have already updated the design to 1.16.

So how does Kubernetes express affinity? An object can be affine to a certain host, a certain zone, or a certain rack. All those subjects and targets are expressed via labels, that is, key/value pairs. Using labels as the topology vocabulary, we can express that two objects have a relationship within a certain zone or a certain rack, and vice versa.

We also know the community designed scheduling affinity long ago, so when designing a new mechanism we can learn from it and take some inspiration back from scheduling. The question scheduling answers is: which node should my pod run on? On the left of the slide is how node affinity is written in the scheduling API. Node affinity supports several match operators, and you will notice there are two kinds of affinity: a hard one and a soft one. The hard one is the required clause, the soft one is the preferred clause, and each carries the requirements you need to match. In our example, node1 carries the zone label az1 or az2, so it meets all the required terms; host1 is preferred with a weight of 100.
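To make that slide concrete, here is a minimal sketch of such an affinity section, written with the Go types from k8s.io/api/core/v1. The label keys and the values az1, az2, and host1 are just the ones from the example above, not anything mandated by the API:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	affinity := &corev1.Affinity{
		NodeAffinity: &corev1.NodeAffinity{
			// Hard (required) affinity: the pod may only land on nodes
			// whose zone label is az1 or az2; otherwise it stays Pending.
			RequiredDuringSchedulingIgnoredDuringExecution: &corev1.NodeSelector{
				NodeSelectorTerms: []corev1.NodeSelectorTerm{{
					MatchExpressions: []corev1.NodeSelectorRequirement{{
						Key:      "failure-domain.beta.kubernetes.io/zone",
						Operator: corev1.NodeSelectorOpIn,
						Values:   []string{"az1", "az2"},
					}},
				}},
			},
			// Soft (preferred) affinity: among eligible nodes,
			// prefer host1 with a weight of 100.
			PreferredDuringSchedulingIgnoredDuringExecution: []corev1.PreferredSchedulingTerm{{
				Weight: 100,
				Preference: corev1.NodeSelectorTerm{
					MatchExpressions: []corev1.NodeSelectorRequirement{{
						Key:      "kubernetes.io/hostname",
						Operator: corev1.NodeSelectorOpIn,
						Values:   []string{"host1"},
					}},
				},
			}},
		},
	}
	fmt.Printf("%+v\n", affinity)
}
```

The same structure is what goes under spec.affinity.nodeAffinity in a pod manifest: required terms gate scheduling outright, while preferred terms only bias it by their weight.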
In other words, scheduling still uses labels to describe topology, and required versus preferred to describe the hard and soft constraints. In the network we have learned from this same method. In terms of network topology, when we access a service, we want to reach the nearest pod. On the left is how a Service works: as you all know, it selects the backend pods via a label selector. So Services and pods are connected through labels, just as scheduling connects pods and nodes. For StatefulSets we use headless Services: the service domain name resolves directly to the list of backend pod IPs. So to support topology we need to develop in two parts: one is kube-proxy and the other is DNS.

This picture illustrates the internal routing of a Kubernetes Service. Altogether three API objects are involved, Service, Pod, and Endpoints, all stored in etcd. The endpoints controller reads the Services and Pods to produce the Endpoints objects, and kube-proxy runs on every node to program the data path. From the client's point of view, the Service is a virtual IP, a virtual server, and the Endpoints are the real servers, the actual backends. kube-proxy supports iptables and IPVS modes to write the routing rules; in IPVS mode it directly creates the virtual server and its real servers. That is the common Kubernetes path, so to add topology we do not need to change the endpoints controller, only kube-proxy.

Those of you familiar with this part will know externalTrafficPolicy. It can be set to Local, which means traffic is only forwarded to pods on the same node as kube-proxy. With that function already there, why do we still need to invent this new concept? Because it only supports node-local; there is no zone-local or region-local. We need a common mechanism that can be shared among them: if we can constrain traffic to any specific topology domain, kube-proxy can do the matching in software.

There are several motivating needs behind this, including security. Some cross-node data is sensitive; for example, a per-node proxy deployed as a DaemonSet, one per node, may not be allowed to send traffic across nodes for security reasons. Another is cross-zone traffic, which costs a lot and also adds latency.

So we decided to use labels to describe the topology domain, reusing the existing mechanism of Kubernetes. And as we discussed before, there is hard affinity and soft affinity; you need to support both. With the hard one, the request either goes to the required domain directly or we receive a connection refused. With the soft one, other backends can still be used: a soft affinity is a preference, and under the user's definition you could even attach weights to the labels, say a weight of 100 here and 10 there. kube-proxy, whether in IPVS or iptables mode, supports weighted round robin (WRR), so technically those weights can be honored.

Then there is the question of who configures the policy. In Istio the policy is administrative: an operator configures it centrally with a list of names, and whichever pods are matched get the constraint applied. Here is our API design, and it is different: it is defined directly on the Service spec. It is not an administrative control; it is written on the Service itself to constrain its own routing. The API is quite simple, adding only one field, a string list called topologyKeys. The earlier an entry appears in the list, the higher its priority, so the entries form a preference order. For example, suppose there are two elements in the list, the hostname key first and the zone key second: it means first try to match the local host, and if nothing matches, fall back to the local zone. That is a kind of soft priority between the keys. But what if none of the keys can be matched, so none of the requirements is fulfilled? Then the constraint is hard: the traffic is rejected. If instead the last entry is the wildcard "*", there is no restraint at the end: as long as some backend fulfills an earlier key it is preferred, and otherwise any backend that exists will do.
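As a sketch of what that could look like in the Go API types: the field did later ship as the alpha spec.topologyKeys in Kubernetes 1.17, and assuming it is serialized as a TopologyKeys string slice on ServiceSpec, a Service using it would read roughly like this (the service name and selector are made up for illustration):

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	svc := corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "my-app"},
		Spec: corev1.ServiceSpec{
			Selector: map[string]string{"app": "my-app"},
			Ports:    []corev1.ServicePort{{Port: 80}},
			// Try backends on the same node first, then the same zone;
			// the trailing "*" is the wildcard that makes the whole
			// constraint soft (any backend as a last resort). Dropping
			// the "*" would make it hard: no match, no traffic.
			TopologyKeys: []string{
				"kubernetes.io/hostname",
				"failure-domain.beta.kubernetes.io/zone",
				"*",
			},
		},
	}
	fmt.Printf("%+v\n", svc)
}
```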
Note that this is not quite like scheduling or dispatching: there is no explicit preferred or required clause, and no weight field. Of course, that is a deliberately simplified design. Later, if we need to put weights on it based on user experience, we can add that; it would be simple, say a weight attached to each key, with the first matching element, the weighted host, winning.

And it's not even alpha, it's experimental. Why is that? Let's take a look at the data flow first; this is how it looks. First of all, kube-proxy, besides watching Services and Endpoints, now also has to watch all the Nodes, because there can be many nodes in a cluster. Many people worry that this will produce large traffic toward the apiserver. But based on our own experimental data, as well as the views of the professionals working on this area in the community, the watch cost does not really depend on the number of objects; rather, it depends on the frequency of changes to them. If the node set stays unchanged for a long period, the watch leads to almost no cost for the server end. Node labels change rarely, so when a label does change, the data pushed to the watchers is small and infrequent, and the traffic is not much. If, however, the heartbeats keep mutating the Node objects, the watch will lead to a large cost for the server.

As for a node's topology domain, it is expressed by the node labels. Then how do we express a pod's topology? It is simply identical to the topology of the node the pod runs on: if two nodes are in the same AZ, the pods on them are in the same AZ as well.

So how does the workflow look? Each endpoint carries the name of the node its pod runs on, and kube-proxy keeps a local cache of all the nodes, keyed by node name, so the node name can be used to look up the Node object. Going one layer further into its labels, we know the topology domain. And by comparing it with the labels of the local node, we know whether this pod and the other pod are in the same domain.
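That per-endpoint check is easy to sketch in Go. The following is a simplified stand-in for what kube-proxy would do, not the real implementation; the types, the helper name, and the label keys are illustrative only:

```go
package main

import "fmt"

// Endpoint is a simplified stand-in for what kube-proxy tracks:
// a backend pod IP plus the name of the node the pod runs on.
type Endpoint struct {
	IP       string
	NodeName string
}

// filterByTopology returns the endpoints that share a topology domain
// with the local node, trying each topology key in order.
func filterByTopology(eps []Endpoint, topologyKeys []string,
	localLabels map[string]string, nodeLabels map[string]map[string]string) []Endpoint {

	for _, key := range topologyKeys {
		if key == "*" {
			return eps // wildcard: any backend will do (soft fallback)
		}
		localValue, ok := localLabels[key]
		if !ok {
			continue // local node doesn't carry this label; try next key
		}
		var matched []Endpoint
		for _, ep := range eps {
			// Look the backend's node up in the local cache by node
			// name, then compare the label value for this topology key.
			if labels, ok := nodeLabels[ep.NodeName]; ok && labels[key] == localValue {
				matched = append(matched, ep)
			}
		}
		if len(matched) > 0 {
			return matched // first key with any match wins
		}
	}
	return nil // no key matched and no wildcard: reject the traffic
}

func main() {
	nodeLabels := map[string]map[string]string{
		"node1": {"kubernetes.io/hostname": "node1", "zone": "az1"},
		"node2": {"kubernetes.io/hostname": "node2", "zone": "az1"},
		"node3": {"kubernetes.io/hostname": "node3", "zone": "az2"},
	}
	eps := []Endpoint{{"10.0.0.1", "node2"}, {"10.0.0.2", "node3"}}
	// Client pod runs on node1: same host first, then same zone, then anywhere.
	got := filterByTopology(eps, []string{"kubernetes.io/hostname", "zone", "*"},
		nodeLabels["node1"], nodeLabels)
	fmt.Println(got) // only 10.0.0.1 survives (node2 shares zone az1)
}
```

The first key with any match wins, which models the soft priority between keys; an empty result models the hard rejection, and the "*" wildcard is the soft fallback.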
If they are in the same domain, the traffic can be directed to that backend; if not, we cannot divert the traffic to it. That's how it is. To summarize a little bit: for an ordinary ClusterIP service, we return the backends in the same topology domain. If there is one, just return it; if there are several, pick among them randomly. For kube-proxy, that is all there is to it.

For a headless service, though, this has to be implemented in DNS. Normally, resolving a Kubernetes service's domain name returns the full list of backend IPs: a request comes in and all the backend IPs are matched into the response. But with topology, how do we do that? Only the IPs in the same topology domain as the client are returned. For example, originally there are five, but after we check, only three of them are in the same topology domain, so only three will be returned. It's complicated, right? But we don't need to realize all of this for alpha; that can wait until before we enter beta. For alpha we just need to achieve the preliminary goals.

By the way, you may feel the design is not that solid, because it is complicated, and why does every kube-proxy have to watch all the nodes? As a matter of fact, in community work we need to pay attention not only to the feature at hand, but also to related capabilities the community will need in the future. For example, when I watch an object today, all the data about that object is returned to me. What if I only care about the node labels, and want only label changes pushed to me, without all the other data I don't need? The community has been wanting this kind of filtered watch for a long period of time, but it covers a lot of area and we haven't really seen it settle. So why does this feature watch all the nodes anyway? Because if watching nodes is genuinely useful, then we do need to resolve this problem: once you introduce a problem into the system, you need to resolve it eventually.

During this entire process the discussion was brought before the community's SIG Scalability, and its lead initially rejected the proposal, taking the position that my series would affect Kubernetes scalability. That was his take on the matter. But topology awareness is an important characteristic, so the scalability problem really does need to be resolved. Then how? The watch issue has to be raised and resolved: today, when you watch nodes, the full objects are returned, and there is no way for you to designate which attributes or characteristics get returned to you. If you do this kind of work, you might find that Kubernetes is enriched to another level: components that watch nodes could suddenly do a lot of things cheaply, and we could bring Kubernetes to the next level.

Once we finish the alpha feature and move toward beta, the implementation will of course be changed: instead of watching nodes, the proposal introduces a dedicated CRD for this, the PodLocator. It can be consumed by kube-proxy, so kube-proxy no longer watches nodes in that way, and it can also be consumed by CoreDNS. Then what's inside it, what is stored there? The node name, the pod IPs running on that node, and the matching relations between the two, along with the node's topology labels.
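Roughly, each PodLocator entry would carry something like the following; the exact schema is up to the proposal, so treat these field names as assumptions for illustration, not the real API:

```go
package main

import "fmt"

// PodLocator is an illustrative sketch of the proposed custom resource:
// a mapping from a pod's IP to the node it runs on, with the node's
// topology labels copied alongside so consumers never need to watch Nodes.
type PodLocator struct {
	PodName    string            // which pod this entry describes
	PodIP      string            // the pod IP that kube-proxy / CoreDNS routes to
	NodeName   string            // the node the pod runs on
	NodeLabels map[string]string // the topology labels copied off that node
}

func main() {
	pl := PodLocator{
		PodName:  "my-app-7d4b9c-xkq2z",
		PodIP:    "10.0.0.1",
		NodeName: "node2",
		NodeLabels: map[string]string{
			"kubernetes.io/hostname":                 "node2",
			"failure-domain.beta.kubernetes.io/zone": "az1",
		},
	}
	// With this mapping cached, kube-proxy and CoreDNS can answer
	// "which topology domain is this backend in?" without watching Nodes.
	fmt.Printf("%+v\n", pl)
}
```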
Why store this kind of information? If you don't watch nodes, you still have to know which node each pod IP belongs to, along with that node's labels. Storing the mapping in a dedicated object up front delivers a real benefit, because the node heartbeat mutates the Node object every ten seconds or so, while this mapping changes only when pods come and go. CoreDNS gains from it as well, since it has to track the pods' IP addresses anyway.

Basically, that's my speech. Do you have any questions?

[Audience question, off-microphone, about pods that don't have labels.]

Sorry, I haven't considered that case. You may be creating such pods by yourself; if the pods are created under a controller, things like you've just mentioned will be produced automatically. Questions like those I haven't really taken into consideration so far.

[Another off-microphone question.]

For CoreDNS, it's simple: if there are three IP addresses, you just write the three of them into the response. Let's forget about WRR and focus only on plain round robin; we can do that, of course. And you've just mentioned WRR: the alpha version does not support that function. If you take a look at the API, you'll just find the list of keys; weights and per-key priority have not been described in detail.

By the way, even if we go no further, there is something else here worth noticing. We say the CRD is a first-class citizen, but if you look around, you'll find there is no major, dominant component that watches a CRD today, and there's no way for you to prove me wrong on that. If kube-proxy starts watching this CRD, the CRD's position in the Kubernetes world will be improved.

[Another off-microphone question.]

That was a couple of months ago, and that characteristic was actually developed by one of our team members. We always talk about the CRD being a first-class citizen, yet when a core component wants to watch one, people are not very willing to accept that.

If there are no more questions, shall we call it the end here? Thank you again. Thank you so much.