Hello everyone. Thanks for joining this session. Today's topic is distributed authorization for microservices, powered by Kubernetes, Istio and Open Policy Agent. Yes, I know it sounds like a very big topic, and it indeed involves a lot of tools and techniques, as we mentioned in the title, so I will try my best to tell a good story and I hope you enjoy it. Before we start, a little bit about myself. My name is Gong Mengnan. I worked at Ninja Van before, which is a logistics company, and now I'm a senior engineer at Shopee. Shopee is one of the largest e-commerce platforms in Southeast Asia. And here's the agenda. We will start with background and goals, and then we will talk about the tools that we mentioned: Kubernetes, Istio and OPA. Then we will talk about our solution, and at last there will be a summary. So let's start. Background and goals. Since this session is marked as entry level, for all the concepts we mention in this session I will try to give a brief introduction or some examples to help you understand. We will start with RBAC basics. RBAC stands for role-based access control, and it is a very classic and popular approach to user access control; I'm pretty sure you've seen it in many, many other products, websites and companies. Even if you've never heard of it, there are a few concepts here to help you understand. First is the user. It could be a person, an automated agent, or really anything or anyone that contacts your service. Then a role, which is a job function or title that defines an authority level. And then a permission is just an approval, like whether you have access to a resource. So it's pretty close to the real-world definitions. And here is a one-line summary: a user is authorized for all those permissions assigned to any of the roles it is assigned to.
Yeah, this sentence seems a bit complex, but if you didn't get it the first time, like me, you can try reading it again, because it describes exactly what RBAC is. To help you understand, we have a small example here. Let's assume we have users Alice and Bob; Alice is a shipper and Bob is a driver. These two terms are from the logistics realm, since this was the thing I did at the logistics company. A shipper is someone who has parcels to ship with us, basically a customer, and a driver is the one who drives for us and delivers the parcels. Then let's assume the shipper and driver have the following permissions. The shipper has the permissions create order and get order, and the driver has the permission get route. These actions and resources map exactly onto a RESTful API definition, and most of our APIs are exposed in the RESTful style. The good thing about a RESTful API is that the endpoint represents your resource and the HTTP method represents the action. For example, create order will be a POST to the /1.0/orders endpoint. So this is a perfect combo to represent our permissions, and this is what we do in our company as well. Then for get order, it's a GET on that same endpoint, and get route is a GET on the /1.0/routes endpoint. Combining these two, you can make the decision based on the user and the action they want to perform. For example, if Alice wants to POST to this endpoint, it will be allowed, because Alice has the role shipper, and the role shipper has the permission create order, which allows access to this endpoint with the POST HTTP method. It's the same for Bob getting routes. But if Bob wants to GET /1.0/orders, it will be denied, because Bob doesn't have the role shipper, which means he doesn't have the permission create order.
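To make the Alice and Bob example concrete, here is a minimal sketch of that role-based decision in Go (one of the backend languages mentioned in this talk); the types, user table and permission table are hypothetical, purely for illustration.

```go
package main

import "fmt"

// Permission is an HTTP action plus a resource path, mirroring a RESTful endpoint.
type Permission struct {
	Method string
	Path   string
}

// rolePermissions maps a role name to the permissions it grants.
var rolePermissions = map[string][]Permission{
	"shipper": {
		{Method: "POST", Path: "/1.0/orders"}, // create order
		{Method: "GET", Path: "/1.0/orders"},  // get order
	},
	"driver": {
		{Method: "GET", Path: "/1.0/routes"}, // get route
	},
}

// userRoles maps a user to the roles assigned to them.
var userRoles = map[string][]string{
	"alice": {"shipper"},
	"bob":   {"driver"},
}

// isAllowed returns true if any role of the user grants a permission
// matching the requested method and path.
func isAllowed(user, method, path string) bool {
	for _, role := range userRoles[user] {
		for _, p := range rolePermissions[role] {
			if p.Method == method && p.Path == path {
				return true
			}
		}
	}
	return false
}

func main() {
	fmt.Println(isAllowed("alice", "POST", "/1.0/orders")) // allowed: shipper can create orders
	fmt.Println(isAllowed("bob", "GET", "/1.0/orders"))    // denied: driver has no such permission
}
```

This is exactly the "user is authorized for all permissions assigned to any of its roles" rule from the one-liner, just spelled out as two nested loops.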
So he won't be allowed to access this resource. It's a simple example, but I think it should be sufficient to understand what we are trying to do next. Okay, now we will talk about how the authorization worked before, before we introduced our new approach. Here are some typical components, and maybe you've seen roughly the same approach somewhere else. We have a client, and then there is a load balancer before the request arrives at our API gateway. The access token is validated at the API gateway, which is the first step. One note: we will mainly talk about authorization in this session, so authentication won't be covered; we just assume you already have a token. Then, in the second step, the API gateway forwards the request to the targeted service, so basically the routing happens at the API gateway. And that's where the authorization happens: on the service, which is Service A here. Service A itself doesn't really have sufficient information, because it gets the token but not the user information needed to determine whether this user has access to the specified endpoint or not. So the service has to make a request to another service, the auth service, which has all the user information to help with the authorization. This was our previous approach, and probably some other companies do the same. But there are some challenges, which we will talk about later. Since all the authorization happens on Service A, as we mentioned, here are some code snippets for the authorization part. There are two approaches, the auth annotation and the auth middleware, which is what we did for Java and Go, because those were our primary languages.
Let's say for Java, we can have an auth annotation where you specify the permission required to access this resource, for example create order, and you annotate the controller function with it. For Golang, we don't really have anything like annotations, but you can use a middleware: for POST /1.0/orders, which is also create order, you can wrap the endpoint with the middleware and specify the permission there. So this is how we did it in the code, but there are some challenges. Firstly, the permissions required for accessing endpoints are only available in the code itself. They don't exist anywhere else unless you document them somewhere, and every time you make a change, you need to change the documentation too. You need to keep these two in sync, which is not really realistic in practice. Second, it's very difficult for engineers to get the whole picture of the permissions required for a service or its endpoints. This is easy to understand, because they are scattered everywhere in your code; whether annotations or middlewares, you might use them in many different places. When you want to see, for this microservice, what permissions it uses, and for each permission, which endpoints it grants access to, you can't really get that information in one place. You may need to go through several files to understand how it looks. The last point: it's even more difficult for non-technical users to understand our system, because this information isn't available anywhere besides the code itself. For non-technical users, for example the account managers, apparently they can't really understand the code, and even if they could, they might not have access to it.
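The Go middleware approach described above can be sketched roughly like this; requireAuth and the header-based permission lookup are simplified placeholders for the real token validation and auth-service call.

```go
package main

import (
	"fmt"
	"net/http"
	"strings"
)

// checkPermission reports whether the required permission is among the
// granted ones; in the real system the granted list comes from the auth service.
func checkPermission(granted []string, required string) bool {
	for _, g := range granted {
		if g == required {
			return true
		}
	}
	return false
}

// requireAuth wraps a handler and rejects requests lacking the permission,
// playing the same role as the auth annotation in Java. Reading permissions
// from a header here is a stand-in for resolving them from the token.
func requireAuth(permission string, next http.HandlerFunc) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		granted := strings.Split(r.Header.Get("X-Permissions"), ",")
		if !checkPermission(granted, permission) {
			http.Error(w, "forbidden", http.StatusForbidden)
			return
		}
		next(w, r)
	}
}

func main() {
	// The create-order endpoint is guarded by the "create_order" permission.
	http.HandleFunc("/1.0/orders", requireAuth("create_order", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "order created")
	}))
	// http.ListenAndServe(":8080", nil) would start the server.
}
```

Notice how the permission string lives only inside this file, which is precisely the visibility problem described in the paragraph above.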
What's worse, the account managers have to guess the permissions behind the names, and often end up granting undesirable permissions, which compromises the overall system security. Here is an example. We have users, roles and permissions. Bob has the role shipper, and it has the permission manage orders, but that's all we have. All of this information, the users, roles and permissions, can be stored in the database, but beyond that it only exists in our code. For example, the actions like GET, POST or DELETE, which microservice a permission applies to, and which resources it covers: you can't really get any of that, because it's scattered throughout your microservices, in your code. This is definitely one of the difficulties for our account managers, because sometimes the roles and permissions are not even as straightforward as shipper or manage orders. They can be somewhat opaque, and then it's hard to guess the real meaning behind them. Based on those challenges we just described, here are our goals. Firstly, we want it to be language and framework agnostic. Then we want distributed authorization, so there won't be a single point of failure, instead of relying solely on the auth service as we do now. The third point is related to the second: we want it to be scalable. And then we want it to be future-proof. And yes, ABAC, I'm looking at you. ABAC just replaces the R in RBAC with an A; it stands for attribute-based access control. So what is ABAC? For example, I want a shipper whose name is Alice, with red hair and a Mastercard, to ship with us. That's definitely a very ridiculous example, but the username, the red hair and the Mastercard are all user attributes, and instead of role-based access control, you can make a decision based on the user's attributes.
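As a toy illustration of the difference, an ABAC decision looks at attributes instead of roles; this sketch just encodes the deliberately silly rule from the example above, with made-up attribute names.

```go
package main

import "fmt"

// Attributes is a bag of user attributes; under ABAC the decision is made
// on attributes rather than on roles.
type Attributes map[string]string

// abacAllow encodes the example rule: allow shipping only for a user named
// Alice with red hair and a Mastercard.
func abacAllow(attrs Attributes) bool {
	return attrs["name"] == "Alice" &&
		attrs["hair"] == "red" &&
		attrs["card"] == "mastercard"
}

func main() {
	alice := Attributes{"name": "Alice", "hair": "red", "card": "mastercard"}
	bob := Attributes{"name": "Bob", "hair": "black", "card": "visa"}
	fmt.Println(abacAllow(alice), abacAllow(bob)) // true false
}
```

The point is that the policy is an arbitrary predicate over attributes, which is why it is strictly more flexible than a role lookup.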
This is much, much more flexible than RBAC, and we've already foreseen that there might be cases where we want to adopt this ABAC fashion. So if we are going to refactor the whole thing, we hope it can support ABAC in the future. Last but not least, we want it to be user friendly, because as we mentioned, the current system is really not user friendly, especially for non-technical users, and we hope to make it more usable for them. I hope you all now understand our background, like how we did the authorization before. Now we will talk about the tools that we will utilize for the new approach: Kubernetes, Istio and Open Policy Agent. Let's start with Kubernetes. Kubernetes is an open source platform for managing containerized workloads and services. And, yeah, period. That's all I have for Kubernetes; I'm sorry, but this is not really a Kubernetes 101, so I hope you have a basic understanding of Kubernetes and how it works. I'll jump to the next one, which is Istio. Istio is a service mesh solution, and I'll spend a little time explaining what a service mesh is. As the diagram shows, normally you will have clients, and you might have many, many clients. That's where the proxy layer kicks in. It can handle some logic, for example caching or HTTPS offloading, and so on and so forth. Then the request arrives at your service. This direction, as shown here, is usually called north-south traffic. When we talk about north-south traffic, we mean traffic from the outside to the inside of your system, and this part is often handled by the API gateway, which is also the proxy layer we show here. But there is also east-west traffic, as shown here, which means the communications within your system, between your services. This is the part we marked out.
The service mesh mainly targets this part, the east-west traffic, the traffic within your system. Between the API gateway and the service mesh there are definitely some overlapping aspects, but when we talk about the service mesh, we will focus on the east-west traffic. Here is an illustration of the data plane. One of the main components of a service mesh is the data plane. There is also the control plane, which is in charge of configuration management and keeping all the data planes in sync, but we won't put too much weight on that. For the data plane, here are the main reasons we introduced it in our system. At the beginning, before we had a data plane, the same functionalities were implemented repeatedly for different languages and frameworks: security, retries, logging, tracing, routing, and maybe many other network-related functionalities, because they resided in your code base, in your service. You might need to implement them in Java, then in Go, and maybe in the future in Node.js; you have to keep reimplementing them in different languages and frameworks. Here is the new approach after we introduce the service mesh. You can see we extracted that code into a standalone proxy. If you run a pod, a Kubernetes pod, you can consider these as two containers: your service mostly cares about your business logic, and you offload the networking-related functionalities to the proxy. In a word, we are outsourcing the networking functionalities to a language- and framework-agnostic proxy, also known as the sidecar or data plane; you will see these terms used interchangeably in some articles or videos. This definitely has a lot of benefits. First of all, it's language and framework agnostic, and you can rely on the proxy to handle your incoming and outgoing network traffic.
This has many benefits, and one of them is the one we are using in our approach. Here is a very nice drawing I want to share with you: the external authorization in Envoy. Envoy is the implementation of the data plane in Istio; Istio is actually built on top of it, so you can consider Envoy a proxy itself. This external authorization is one of the amazing functionalities that we rely on to build our new authorization approach. You can see the service traffic will arrive at Envoy first. As we mentioned, it is actually hijacking all your incoming and outgoing traffic, so everything goes through this proxy first. Before it hands the request to your service, it will use something called the external authorization filter to make a request to OPA. Actually, it doesn't have to be OPA; it could be anything, as long as it fulfills the interface of the external authorization filter required by Envoy. Your third-party service should give an answer, a yes or a no, to Envoy. Then Envoy will forward the request to the service if it's an authorized request, or reject it right away if it's not. So this is what we do in our new approach: we rely on the external authorization in Envoy. And the last piece, finally: the Open Policy Agent, also known as OPA. It's actually a very simple structure. When you make a query to OPA, OPA will give you a decision based on the things you feed it, which are policy and data. Combining these, OPA is able to make a decision, and the decision can be any JSON value. I think that's mostly it for OPA, and now we will show you what a policy looks like. There's something called Rego; it's the policy language that comes with OPA itself, and it's definitely another amazing thing that they built.
Here are the two parts, the policy and the input, which map to the two parts we just mentioned, the Rego policy and the data. For the policy, the Rego language is a bit different from what we are familiar with. Here, "default hello = false" defines a variable called hello whose default value is false, so it's a Boolean. Then for its value, this block means you will evaluate the statements within the curly braces, which are the statements that determine the value of hello. Within a block, within the curly braces, the relationship between the statements is AND, so both of them need to be true for hello to be true. The first statement just declares a new variable m and assigns the value input.message to it; this naturally evaluates to true. Then it goes to the second one, which evaluates whether m equals "world". And how do you make this work? There will be an input with a message inside, and if the value is exactly "world", the second statement will be true as well. Since the whole block is true, hello itself becomes true. This is a very simplified example, the Rego version of Hello World. Later on we'll show a slightly more complex policy, but for now, that's it. And finally, we will talk about our solution. I hope you are still with me, because this is where the good stuff begins. Let's start with a high-level design. Actually, this structure is not very different from what you saw before, like what we did before. There's still a load balancer, there's still an API gateway, and at the API gateway we route the traffic to the corresponding service and then pod, meaning the Kubernetes service and then the Kubernetes pod. I hope most of you already understand how a Kubernetes service works.
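For reference, the Hello World policy walked through above looks like this in Rego; this matches the standard introductory example in the OPA documentation.

```rego
package play

# hello is false unless the rule below makes it true.
default hello = false

hello {
    m := input.message   # bind the input message to m (always succeeds)
    m == "world"         # and require it to equal "world"
}
```

With the input {"message": "world"}, hello evaluates to true; with any other message, it stays at its default of false.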
Basically, your request arrives at the Kubernetes service, which forwards it to a corresponding pod, doing load balancing internally. Let's assume it arrives at one of the pods. It will arrive at Envoy first, because Envoy has already taken over the traffic. Then, in the second step, the Envoy proxy forwards the request to OPA via the external authorization filter. We've talked about filters a lot: a filter is just one of a list of functions that Envoy will execute when a request comes in or a response goes out, and the external authorization is one such filter. When you apply this filter, it will be in the chain of functions that Envoy needs to call, and it will forward the request to the OPA instance that has been configured. Then OPA tries to evaluate the request and returns the result. The result is just yes or no; it's a simple binary question. Then, in step four, Envoy will forward the request if it's authorized, or reject it on the spot if it's not. As simple as that: okay or not. Now let's rewind to step three. As I said, step three is about OPA evaluating the request and returning the result. But hold up. You might say I'm missing a very important part here: how does OPA evaluate the request? That's all that matters. And yes, if you have that question, that's definitely the most important question, and we will describe it later on; it's the red rectangle marked here. And there is one new thing, one new component, which is the auth operator here. It's actually a Kubernetes operator. If you are wondering what a Kubernetes operator is, we have a small example here. The Kubernetes operator pattern is based on Kubernetes' own built-in design: there is something called the reconciliation loop, or you can call it a control loop.
That's actually how Kubernetes works internally. We all know that Kubernetes works in a declarative fashion: you describe your needs, your requirements, in a YAML file, and you submit it to the Kubernetes API server. In this YAML file, you are just describing what you want. For example, you want a Kubernetes Deployment, and you want three or five or ten pods within it. You don't tell Kubernetes the exact steps it needs to take; instead, you're just telling it, okay, I want ten pods, that's my desire, and that's all. Kubernetes will then record your desire, and that becomes your desired state, for example the ten pods. In the reconciliation loop, it keeps observing the desired state and comparing it with the current state, making adjustments to bring the current state as close as possible to the desired state. That's basically how Kubernetes works for most of the resources it manages, and this is also the Kubernetes operator pattern. You can use this same pattern in the Kubernetes system: you implement your own operator, you define your own resource, which is the custom resource, and it works the same way. You run a reconciliation loop in your operator and keep watching the resource you just defined, and whenever there's a change, or you create a new custom resource, you adjust the current state to meet the desired state. Okay, I hope you all get the Kubernetes operator pattern and how it works. Now we will continue with the part we left over, the part we didn't introduce in the last few slides. Here we have a YAML file, and the first step is to submit the CR, which is a custom resource, also called the service rules YAML here, to Kubernetes. The arrow here is just to show the relationship between the file and the Kubernetes operator.
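The reconciliation loop described above can be boiled down to a tiny sketch. Real operators are built on a framework such as controller-runtime, but the core idea, repeatedly nudging current state toward desired state, fits in a few lines of Go with made-up names.

```go
package main

import "fmt"

// reconcile nudges the current state one step toward the desired state,
// the way a controller scales a Deployment toward its replica count.
func reconcile(current, desired int) int {
	switch {
	case current < desired:
		return current + 1 // create one pod
	case current > desired:
		return current - 1 // delete one pod
	default:
		return current // already converged, nothing to do
	}
}

// runLoop drives reconcile until the current state matches the desired state;
// a real control loop would instead be triggered by watch events and re-queues.
func runLoop(current, desired int) int {
	for current != desired {
		current = reconcile(current, desired)
	}
	return current
}

func main() {
	fmt.Println(runLoop(3, 10)) // converges to 10 "pods"
}
```

The key property is that reconcile only ever looks at the observed and desired states, never at a history of commands, which is what makes the pattern robust to crashes and restarts.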
In real life, you actually submit it to the Kubernetes API server; you talk to the Kubernetes API server at all times, and you never talk to the Kubernetes operator directly. Here is an example of what the CR looks like: the apiVersion, kind, metadata and spec. These are the standard fields you should include in your custom resource YAML file, and actually in any Kubernetes resource YAML file. Here the kind is ServiceRule, which we define with a custom resource definition. Inside the spec, there are service and rules, which is obvious. The rules field is a list; it contains a list of items, and each item has a few attributes: the method and the path, and these two together map to a RESTful-style operation on your resource. For example, here it is get orders. Then permissions is also a list; here we have get order and something else, sorry, I ran out of ideas. But anyway, you should be able to get a taste of it. This is what the service rules look like, and of course you can have as many rules as you want; you can just keep expanding this list. After you have the bundle, the next step is to upload the compiled policy to cloud storage. This can be done by OPA itself: you can import OPA as one of your dependencies, and then in your own Kubernetes operator you can do the bundling in the code itself and upload it to cloud storage. The cloud storage could be AWS S3, an S3-compatible service, GCS, or anything else. Another thing is that OPA bundling is actually quite powerful; there are some knobs you can tune, and you can even specify an optimization level, which changes the behavior and how the bundle comes out. We won't really cover that here, but do read the documentation if you are interested.
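A hypothetical reconstruction of the ServiceRule custom resource just described might look like the following; the apiVersion group and exact field names are assumptions, since the slide itself isn't reproduced here.

```yaml
apiVersion: auth.example.com/v1   # made-up group/version for illustration
kind: ServiceRule
metadata:
  name: order-service-rules
spec:
  service: order-service
  rules:
    - method: GET
      path: /1.0/orders
      permissions:
        - get_order
    - method: POST
      path: /1.0/orders
      permissions:
        - create_order
```

Each rule ties one RESTful operation (method plus path) to the permissions that grant it, and the list can be extended freely as the service grows.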
After you upload the bundle to cloud storage, the next question is how OPA is going to get it. OPA will download the bundle from the cloud storage during startup, and it will also periodically check the freshness of the bundle. This is another very amazing functionality implemented in OPA itself. You configure the cloud storage it can download the bundle from, and it will download it and then check it periodically, for example every three or five minutes. It requests the cloud storage and determines whether there is a new version based on the ETag and some other HTTP cache control headers. If there is a new version, it will download the new version of the bundle and just reload it. This brings another advantage: if you want to make some changes to your service rules, you can just submit them to Kubernetes, and OPA will pick up the changes and reload by itself. During this procedure, you don't even need to touch your deployments; you don't need to redeploy anything for your new rules to take effect, which is very powerful. The next step is that OPA gets the user data and combines it with the policy to make the decision. Here, OPA itself again doesn't have all the information it needs; the information it has mainly consists of two parts. The first is what it gets from Envoy: when a request comes in, Envoy forwards it to OPA, and it contains a lot of extra request attributes. The second part is the user information it gets from the auth service. And again, this can be cached, so you don't have to make a request to another service every single time; that should speed up the procedure and also ease the load on other services. If you still remember the image we showed before, for OPA to make a decision, you need two things.
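The download-and-poll behavior lives in OPA's own configuration file; a minimal sketch might look like this, with the bucket URL, bundle name and object path as placeholders.

```yaml
services:
  bundle_store:
    url: https://storage.googleapis.com/my-policy-bucket   # placeholder bucket
bundles:
  authz:
    service: bundle_store
    resource: order-service/bundle.tar.gz   # placeholder object path
    polling:
      min_delay_seconds: 180   # re-check roughly every three minutes
      max_delay_seconds: 300
```

With this in place, OPA fetches the bundle at startup and keeps polling; conditional requests (ETag-based) mean unchanged bundles are not re-downloaded.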
The first is the Rego policy, which is actually converted from the YAML file, the service rules YAML file that we uploaded. The other half is the data, which comes from Envoy and from the auth service. Then step five, the last step: the auth operator will sync the service rules, which include the service name, methods, paths and permissions, if you still remember the YAML you just saw, to the auth service, and the auth service will persist them, maybe in a database, so it can later show them to the account managers. This is mainly for display purposes; it's a kind of replica of the original service rules you submitted in the YAML file. By doing this, the auth service can also serve as an HTTP server, and you can have a nice web UI to show the users, your account managers, and any engineers who need to view the roles and permissions, and also what each permission is capable of. Now we will have a sneak peek at the Rego policy that we defined for our RBAC needs. You can go to the same link as me; this is the Rego Playground, another cool thing the team built. Yeah, the team really made a lot of nice tools. This playground is just like other playgrounds: on the left is the main part of your policy, and on the right there are the input, the data and the output. When we run something, the output will be shown here. As for the input and data, the difference is that the input is mainly the dynamic things you receive, while the data is more like the information you need to support your decision, and it may not change that frequently. The input, for example, is actually what we receive from the Envoy proxy: when the Envoy proxy makes the request via the external authorization interface, this is what it provides to the OPA sidecar. And here, there are a lot of attributes.
I would say it's more than enough for us to make a decision. What we care about here is that under these attributes, there are request and http, and here are almost all of the HTTP attributes we need to support our decision. First, the headers: it includes all the headers, even the internal headers managed by the Envoy proxy. You can see there is an authorization header, and you can see the Bearer token here; you can also see the user agent. Then there are also the method and path, which are used in our decision later on. Now you can see the advantage of having a RESTful-style API: you get the method and path in these attributes directly, so you can just take them and use them, and the combo of method and path maps to a permission directly, which is very convenient. Then the data actually contains the permission information, and this differs from service to service. Let's say for this service, we have the permissions here, and these permissions contain two parts: first the protected endpoints, and then the public endpoints. I'm not sure if it's common in your use case, but for us, there could be some endpoints we just want to expose to the public, which don't require any authorization, and that kind of public endpoint will just be here. For the protected endpoints, there is another map whose key is the permission required; here it is create order. Inside it there is the action, which is POST, and then the endpoints. As you can see, this can be an array, and this can be an array; you can also have multiple permissions here, and you can just expand this list when needed. After we've seen the input and data, we can take a look at the left side. Here is a slightly more complex Rego policy than the Hello World example we saw before, and here we specify a package name.
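Abbreviated, the two documents just described might look like this; the exact values are illustrative, though the input layout follows the attributes structure that Envoy's external authorization check passes to OPA.

```json
{
  "input": {
    "attributes": {
      "request": {
        "http": {
          "method": "POST",
          "path": "/1.0/orders",
          "headers": {
            "authorization": "Bearer ...",
            "user-agent": "example-client/1.0"
          }
        }
      }
    }
  },
  "data": {
    "permissions": {
      "public": {
        "GET": ["/1.0/public/orders"]
      },
      "protected": {
        "create_order": {
          "POST": ["/1.0/orders"]
        }
      }
    }
  }
}
```

The input side changes on every request; the data side is the per-service permission map shipped in the policy bundle.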
Here we just import the input and the data, which you all just saw. We have a default allow equal to false, and this allow is the one thing, the only thing, that we declare as the outcome. In our policy, the output is this allow field: if it's true, it means the request is authorized; if it's false, it means otherwise. Firstly, we have the token: we try to get the access token from the authorization header. Rego also has some nice utility functions, like trim_prefix, which can trim the Bearer prefix from the actual token string. Here again, the style and logic of the Rego language might not be very familiar to you, because it's quite different from the programming languages we're used to, but if you've used some other similar policy languages, you might get used to it. For this one, for example the token, the block contained in these curly braces just means: this thing is true, and this thing is true when the authorization header exists. So when there is an authorization header, it will be true, and if it's true, we execute this function, which gets the token string and assigns it to the token value. That means after this, if there is an authorization header, the token will be assigned to this token value. It's a similar logic for the rest of it too, so let's try to get used to this style of thinking. Here we have two allow blocks. As we mentioned before, within a block, within one pair of curly braces, the relationship between all the statements is AND, meaning all of the statements in one block need to be true to make allow true. But between the blocks, say we have two allow blocks here, the relationship is OR, which means either the first block or the second block being true makes allow true.
This is actually very convenient for us to break down our decision-making process, instead of doing it all together. Here you can see we split the public paths and the protected paths into two blocks, and if either is true, it means we should allow this operation. For the public paths, we read from permissions.public, which I will show you is this part, and then we fetch based on the method; we're just trying to keep narrowing down to the minimum set that we need to evaluate. Here we get the HTTP request method. Let's assume it's a GET; then we get only this list here. It could also be POST, DELETE, or some other method. And again, another new thing: this underscore means it's a loop, so it iterates through the array, assigning each value to the public path variable. Then it executes a glob match to see if the public path actually matches the request path. If it's a match, the request is allowed, because it's a public endpoint; it doesn't require any authorization. For the protected ones, it's slightly more complicated. Here we define a variable s, which initially gets its value from a function, and here is an example of requesting the user's permissions from an auth service. We actually make an HTTP call here, but currently I'm not running any auth service on my laptop, so I just commented it out. In real life you can do that: you can refer to this code, build the URL query params, send it out as an HTTP request, and take the body as the result. One important thing here is that you can see we enable the cache, and I think so should you if you go with this approach, because you don't want to request another third-party service every single time a request comes in. It definitely makes sense to cache it locally on the OPA container itself.
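Putting the pieces together, a sketch of the two allow blocks might look like this in Rego; the data layout follows the permissions example above, and the user's permissions are mocked as a static list in place of the cached http.send call to the auth service.

```rego
package envoy.authz

import input.attributes.request.http as http_request

default allow = false

# Strip the "Bearer " prefix from the Authorization header, if present.
token := trim_prefix(http_request.headers.authorization, "Bearer ")

# Block 1: public endpoints need no authorization at all.
allow {
    some i
    public_path := data.permissions.public[http_request.method][i]
    glob.match(public_path, ["/"], http_request.path)
}

# Block 2: protected endpoints require a matching user permission.
allow {
    # Mocked here; in production this would be an http.send call to the
    # auth service with caching enabled, roughly:
    #   resp := http.send({"method": "GET", "url": perms_url, "cache": true})
    user_permissions := ["create_order"]
    some i, j
    perm := user_permissions[i]
    protected_path := data.permissions.protected[perm][http_request.method][j]
    glob.match(protected_path, ["/"], http_request.path)
}
```

The two allow rules are OR-ed together, and each one narrows the data down by method, permission and path before the final glob match, exactly as described above.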
But because we don't really have an HTTP server running right now, I'm mocking it: I just make it a static variable containing create-order. That means s is the array of the user's permissions, and the array has only one item, which is create-order. And again, you see our old friend the underscore, which means we're iterating again. Even though there's only one item here, if you had more than one, it would iterate through all the values. Because we need to evaluate the protected endpoints, here we read permissions.protected, which, by no coincidence, also contains the permission this user has, create-order. So here we get the perm, which is create-order, and after that it's another map, so we index by the HTTP request method, which could be GET, POST, or anything else. Again, you can see that throughout this whole procedure we just keep narrowing down to the minimum set we need to evaluate, and this makes the process more efficient. For example, if your permissions data doesn't contain this permission, or doesn't contain this HTTP request method, then nothing matches, and you can just skip the whole thing, which speeds up the whole process. Once you get the paths, you iterate through them again and try to match the request path. So that's really how it works. Now we can run some examples. In this input, our method is POST and the path is /1.0/routes, which is not something we have here; as you see, we only have POST /1.0/orders and GET /1.0/public/orders, and we don't have /1.0/routes. If we click evaluate, you can see it actually prints out all the variables we defined, but we only care about allow, which is false.
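The protected-path block with the mocked permission list can be sketched like this; the permission name and the shape of the data document are taken from the example above, but the exact keys are assumptions:

```rego
package envoy.authz

import input.attributes.request.http as http_request

# Mocked permission list, standing in for the auth-service call.
user_permissions := ["create-order"]

# Assumed shape of the protected-path data:
# data.permissions.protected =
#   {"create-order": {"POST": ["/1.0/orders"]}}

allow {
    # Iterate over the user's permissions.
    perm := user_permissions[_]
    # Narrow by permission, then by HTTP method; if either key
    # is missing, the body simply fails and nothing else runs.
    paths := data.permissions.protected[perm][http_request.method]
    # Finally, iterate over the path globs and match the request path.
    glob.match(paths[_], ["/"], http_request.path)
}
```

Each successive lookup prunes the search space, so a permission or method that doesn't exist in the data short-circuits the whole rule body immediately.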
For example, if we change it to GET and request the public one, /1.0/public/orders, and evaluate it again, you will see it's true. And if we make it a POST to /1.0/orders, you can see the permission here is create-order, and we also made a dummy user permission matching create-order, so when we evaluate it, it shows true as well. And if we change that permission to something else, this user no longer has the permission to access this endpoint, so when we evaluate it, it will be false. If you click through the link, you can actually play around, change something, and see whether it behaves as you expected. Okay, then let's go back to the slides after the small demo. Now that we've seen all the policies and roughly know how OPA evaluates the request and how it works with our RBAC model, the next thing is how we enable the external authorization Envoy filter, since we've talked so much about this external authorization. Again, this is configured by a YAML file: if you have Istio running, once you submit this YAML file, the Envoy filter will be created. There are two parts in bold here. First is the workload selector, which is definitely very useful: it lets you specify the workloads for which you want to enable this external authorization filter. For example, if you are running some experiments, you definitely don't want to enable it for all of your workloads, right? It doesn't really make sense. Most of the time you want to take it slow, step by step, and try to roll it out gradually, so you can utilize this workload selector and create the external authorization filter for only one application. And then the next part is where the configuration is.
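An EnvoyFilter of the kind described, scoped to a single workload via the workload selector, might be sketched like this; the names, labels, namespace, and the OPA gRPC port are illustrative assumptions:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: ext-authz-opa
  namespace: demo
spec:
  workloadSelector:
    labels:
      app: order-service          # only this workload gets the filter
  configPatches:
  - applyTo: HTTP_FILTER
    match:
      context: SIDECAR_INBOUND
      listener:
        filterChain:
          filter:
            name: envoy.filters.network.http_connection_manager
            subFilter:
              name: envoy.filters.http.router
    patch:
      operation: INSERT_BEFORE
      value:
        name: envoy.filters.http.ext_authz
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.http.ext_authz.v3.ExtAuthz
          grpc_service:
            google_grpc:
              target_uri: "127.0.0.1:9191"   # OPA sidecar gRPC port
              stat_prefix: ext_authz
```

The `workloadSelector` is what lets you roll the filter out one application at a time, while the `patch` inserts the ext_authz filter ahead of the router so every inbound request is checked against OPA first.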
I think you should definitely refer to the documentation, so we won't go too deep here, and you can see a lot of examples in the Envoy docs. Next we have a benchmark that we ran on our Kubernetes test cluster on GCP, on n1-highmem-16 or -8 machines, and we tested with 100 service rules, like the rules you saw before. We think that's a reasonable number of service rules to have, because if a microservice has more than a few hundred endpoints, then I think we should question ourselves: did we really design it right? Is it still a microservice with more than 100 endpoints? So a hundred, or a few hundred, rules should be a reasonable number for real-world applications. We have a few test cases here, and you can see the operations per second are actually quite impressive: most evaluations complete at the microsecond level, not even a millisecond. You can definitely try to optimize your rules and make them more efficient, and trust me, our policies and permissions data are actually more complicated than what you saw. So I would say the performance of OPA is very, very impressive, and it definitely meets our needs. And here is a summary. By doing this, we have a distributed authorization approach, because the authorization now runs on each pod, side by side with your service container, so it doesn't rely on a single auth service. We also minimize the single point of failure, although I won't say we actually eliminate it, because we still need to get the information from somewhere, which is the auth service. But since we cache it, most of the time we don't have to keep requesting the auth service.
We also extracted the access control out of the code and turned it into a CR, a custom resource, if you still remember, which is just a YAML file. Furthermore, we integrated it with our CI/CD pipeline, which means it can sit together with your code as a YAML file, just like your normal CI/CD setup where you put your deployment details or your GitLab CI files in the repository. So you can also put that service-rules YAML file in your Git repository, and it can be processed automatically by the CI/CD pipeline. And again, if we revisit the question from before: previously, given only the permissions, we didn't know the action, we didn't know the service, and we didn't know the resource. Now I would say we've answered all of those questions. Besides the users, roles and permissions stored in the database, after we submit our YAML file it is sent to the auth service, so all of this information is also available in our database, and we can show it in a nice web UI, for example as a tree. We can see, for a specific service, what permissions are needed to request that service and every endpoint inside it; and for a user, we can see which permissions belong to that user and what those permissions can do. So we've filled all the gaps in between. At last, thank you very much. I think that's all I want to share today. I hope you're still with me, and I hope you enjoyed the content. Thank you very much.