Hi, folks. My name is Tim Hinrichs. I'm the CTO and co-founder at Styra and one of the co-creators of the Open Policy Agent project. Today, I'm going to be talking about some design patterns that we've seen through the use of Open Policy Agent that help you solve a variety of cloud-native authorization problems. So what I thought I'd do is start out with a little bit of an introduction to what I mean by authorization, cloud-native, and OPA. I'll give you a quick primer there, and then we'll go ahead and dig into what these design patterns are, these common solutions to common problems. OK, so authorization, remember, is this problem where you're trying to control the actions that either people or machines are taking. I always like to give an example: imagine you're logging into your banking application. Every time you do, maybe you want to look at your account balances for all of your accounts, and maybe you want to withdraw or deposit money. So every time you're reading information, every time you're trying to take an action, like withdrawing money, there's an authorization problem that that application has to enforce. Similarly, if you're a developer and you're actually responsible for building and running that application, there are all kinds of authorization decisions that are also being made to make sure that you're not doing anything by mistake. So every time you're spinning up a new resource on a Kubernetes cluster, or every time you're reconfiguring your application, those are actions you're taking, and authorization controls are in place to make sure that you're not doing something that you shouldn't be. Don't confuse authorization here with authentication. Remember, authentication is the problem of sign-in, of identity, of proving to the machine you are who you say you are. We're not going to be talking about authentication at all, or identity at all. We'll just assume that's a solved problem.
And today, we're going to be focused on authorization. So how do you build software that gives the administrator control over which actions that software's users can take? Now, the key observation here is that authorization happens everywhere in cloud native. Every time you're trying to spin up a new resource in a CI/CD pipeline, there's an authorization problem that needs to be solved. Every time your microservice receives an API call, is that API call authorized or not? Every time somebody's running a query on a database, is that authorized or not? Any time somebody's trying to spin up a resource on a Kubernetes cluster, is that authorized or not? Same with Terraform, same with your servers. So over and over, what we see is that every piece of software on the planet actually has to solve this authorization problem. And so at Styra, when we created the Open Policy Agent, we designed it to provide a unified way of solving this authorization problem, and more generally the policy problem, throughout this cloud-native ecosystem. So the idea behind OPA is that it provides a single policy language for expressing all of those rules and regulations about which people or machines can perform which actions on which software. It also includes an engine that, when you load it with one of those policy files, knows how to make decisions. And then there's a bunch of tooling that we and the community have helped build up around OPA. And then there's a whole bunch of integrations that OPA and the community have built for OPA with a bunch of software systems; a lot of those you see on the screen. Now, what you can see here is a list of the different areas where OPA has been applied by someone in production at scale. So this isn't just a screen where we're hoping someday to apply OPA; in all these cases, somebody has applied OPA in production at scale to each of these categories, though not necessarily at each of the logos.
So that's OPA, and that's what it was designed to do: provide that unified solution to policy. Now, like I mentioned, we started this at Styra, and we eventually donated it to the Cloud Native Computing Foundation a couple of years later. It's gone through the full maturity cycle from sandbox to incubating to graduated. Graduated within the CNCF is the same level as Kubernetes, Prometheus, and Envoy; some of the most popular cloud-native projects on the planet are at that level as well. It's got a growing community. There's certainly a great Slack channel; if you have questions, just feel free to hop on there. Lots of folks are there to help you out. It is, within the CNCF ecosystem, really the only project that's focused on policy in general. It is also the only graduated policy project. In terms of active end users, that's actually one way I like to engage with a new open source project: listen to the end users about how they're using it, what they're doing, where its rough edges are, and so on and so forth. So there are a couple of great venues I'll point you to where there happened to be a good collection of end users talking about how they're using OPA. It happened to be, before the pandemic, the last physical KubeCon back in 2019. There were a bunch of folks talking about different use cases for OPA. So Yelp was talking about microservice authorization. Goldman here is talking about Kubernetes authorization. I think SNCC was doing the same, and Reddit was doing the same with Kubernetes. And then there was also an OPA Summit that we held at that same conference, where you can listen to Capital One talk about Kubernetes, or Chef talk about how they embedded OPA into their application, or Pinterest using it for actually several different use cases; I believe Kafka at the data layer was one that they focused on as well, and Atlassian as well at the application level.
So go ahead and check those out, and we'll touch on some of those use cases in a bit more detail later, but it's definitely a good way to get to know OPA. So what is OPA? I gave you a quick overview, but what is it in a little more detail? This is the picture we always like to show the first time we're introducing OPA. The idea behind it is pretty simple, really. The idea is that any service, and that service could be a microservice, or it could be a service mesh, or a Terraform process, or the Kubernetes API server, or Kafka, or really anything, that service decides at some point in time that it needs an authorization decision. And so what it does is send a policy query over to OPA and say, hey, give me a decision. OPA returns that decision. Now it's the service's responsibility to enforce the decision; it is OPA's responsibility to make the decision. So sometimes people like to call OPA a decision point; that's really how we think about the responsibilities of both the service and OPA. Now, there are a couple of things on the slide I'll call your attention to, one of which is that the policy query could really be any arbitrary JSON value. Let me give you a couple of examples. If the service is a microservice, maybe it sends across to OPA a JSON object that has three keys: a method, a user, and a path. If that service, on the other hand, is the Kubernetes API server, then maybe what it hands over in the policy query to OPA is several hundred lines of JSON that represent that new pod or that new ingress or whatever the developer was trying to deploy onto the Kubernetes cluster. OPA doesn't understand the real-world semantics of that policy query in any way, shape, or form. OPA just sees that policy query as JSON. The way this works, then, is that when I as a person am writing a policy and loading that policy into OPA, I know where OPA has been deployed.
I as a person know what a Kubernetes ingress is and what kinds of rules and regulations I want to put in place for that Kubernetes resource. I as a person know which API calls for my microservice I want to allow or deny. And so I as a person can go ahead and write the policy that is appropriate for making decisions on that query. All right, what that means is that the policy language itself needs to be flexible enough that I can write whatever policy I like over really any arbitrary JSON data that comes in on the query. And so Rego was designed for exactly this purpose. It is purpose-built to be a policy language. It was purpose-built to be able to do things like deal with deeply nested JSON data, and to iterate over the contents of the arrays or even the key-value pairs that you'll see commonly in that JSON. And it was designed with, I think, over 150 different built-in functions that can do things like string manipulation, CIDR and IP address arithmetic, JWT manipulation, and the like. The next thing up on the slide is that data in JSON. So we've already talked about how OPA and Rego were designed so that you can write arbitrary policy over arbitrary JSON values. Well, one of the things that we sometimes see people needing to do is write policy over information that's not natively contained within that policy query. These policies we like to call context-aware. So we like to say that these policies make decisions based on information about what's going on in the world. Well, how does OPA know about what's going on in the world? How does that work? Well, that's what this data-in-JSON part of the picture shows. It shows that you can inject that arbitrary data into OPA, as long as it's arbitrary JSON.
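As a quick sketch of what that flexibility looks like, here's a hedged Rego fragment. The input shape and all of the names are hypothetical; it just shows iteration over nested JSON plus a couple of the built-ins mentioned above:

```rego
package example

# Hypothetical input:
# {"servers": [{"name": "app", "protocols": ["https", "http"]}],
#  "path": "/internal/metrics", "source_ip": "10.1.2.3"}

# Iterate over an array of objects and collect offending names.
insecure_servers[name] {
    server := input.servers[_]        # iterate the servers array
    server.protocols[_] == "http"     # iterate each server's protocol list
    name := server.name
}

# Built-ins: string manipulation and CIDR arithmetic.
internal_caller {
    startswith(input.path, "/internal/")
    net.cidr_contains("10.0.0.0/8", input.source_ip)
}
```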
A good example here: let's suppose you want to write a policy that says only people who are on call can make changes to my production Kubernetes cluster, can run API calls against my production applications, and can run queries on a production database. Well, who's on call? How do you know? Well, OPA doesn't know, your Kubernetes service doesn't know, your application doesn't know, your database doesn't know; even your IDP, your identity provider, whether it's AD or something else, doesn't know either. On-call information is typically stored in some third-party service like PagerDuty. So what you can do is set up a little script that will pull the data out of PagerDuty and inject it into OPA. And now, suddenly, OPA just knows who's on call. So anyway, that's the idea behind that data in JSON on the lower right-hand side. The last point on this slide is that policy decision. That policy decision, as noted there, can be arbitrary JSON again; I hope you notice the theme. The idea here is that while it is certainly the case that perhaps the primary use case for OPA is authorization, and typically people think of authorization as making allow/deny, ones-and-zeros, true/false decisions, what you end up doing with OPA a lot of the time is returning policy decisions that are far more complicated JSON objects. Let me give you a couple of examples. Suppose you wanted to write a policy that made a rate-limiting decision. Maybe the policy decision in that case is a number. Maybe instead you want to write a policy that makes a decision about where you're authorized to deploy an application. Maybe there the answer is actually an array of the names of clusters. Or, and this happens all the time, maybe when you make an authorization decision about, say, an HTTP API, you want to return not just the Boolean yes/no, but also maybe an error message, or a status code.
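That on-call policy could be sketched roughly like this. It's a minimal, hedged example: the `data.oncall` document and all the names are assumptions about what a sync script might push into OPA, not a fixed schema:

```rego
package production.guard

# data.oncall is assumed to be pushed in periodically by a script that pulls
# from PagerDuty, e.g. {"engineers": ["alice", "bob"]}. Names are illustrative.
default allow = false

allow {
    input.action == "deploy"
    input.user == data.oncall.engineers[_]  # the caller is currently on call
}
```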
And in that case, you might want to return multiple values as a JSON object. So in any case, the idea behind this is that the policy decision itself can be arbitrary JSON. All right, so that's OPA in a nutshell. Hopefully I've given you a good sense as to what OPA can do. What I haven't really talked about are best practices around how you deploy it, but that's the point of the rest of the talk. One of the ideas here, and one of the reasons that OPA is so powerful, is that when you get to the deployment piece, like how do you actually run OPA, there are several different ways to do that. You can run it as a CLI, you can embed it as a library, as a Go library, or, if you're interested in WASM, WebAssembly, you can compile policies down to that. You can run OPA as a sidecar or a daemon as well. You can even use OPA as a building block to create a centralized authorization service. So architecturally speaking, OPA is quite flexible, and you can deploy it to achieve whatever goals you want. Another degree of flexibility that people gravitate to OPA for is the policy language itself. It is expressive, meaning that you can encode all the normal kinds of policy frameworks that you're used to, whether it's role-based access control, or attribute-based access control, or access control lists, or even IAM policies that you might see in a public cloud; you could do all of those with OPA. OPA is, however, a purpose-built policy language, meaning it's not as expressive as a programming language. And so what it tries to do is balance the need for expressiveness while also giving you the benefits of a language that has a bunch of safeguards built into it, so you don't have to worry about things like infinite loops. Another reason that people pick up OPA is its flexibility around how you compose policies.
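A structured decision like the one just described could look like this hedged sketch. The field names (`allowed`, `status`, `error`) are illustrative choices, not a fixed OPA schema:

```rego
package httpapi.authz

# The decision is a JSON object rather than a bare boolean.
default decision = {"allowed": false, "status": 403, "error": "access denied"}

decision = {"allowed": true, "status": 200, "error": ""} {
    input.method == "GET"
    startswith(input.path, "/public/")
}
```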
So OPA borrows a lot from traditional programming languages in the sense that you don't just load OPA with one policy; you can load OPA with a whole collection of policies that are all namespaced, just like you would in a normal programming language, and then you can have one policy invoke another policy. One of the ways that materializes is that you can write common libraries and then have two different teams write their own policies that both reference and utilize that common library. So there are a bunch of different dimensions in which OPA is flexible. There are probably another couple that I don't have on here around injecting data into OPA; you've got quite a bit of flexibility there as well. All this flexibility is wonderful because it allows you to solve a broad range of policy and authorization problems. That was the purpose for which we designed OPA to begin with, of course. But what it also means is that you need to understand how to apply OPA to solve your problems, and navigate through all of that flexibility to make the decisions you need to meet your requirements. Whenever we're talking to folks about how to do this, we typically boil the discussion down to three key questions. The first is: what policies are you trying to enforce? Are you talking about enforcing policies about, say, Kubernetes or APIs, people or machines? Do you care about really complex resources? Do you care at all about the actions, whether it's create or delete? So that's one of the key dimensions. The next is: what software are you trying to enforce this in? If you're thinking about an application, are you able to enforce policy at the gateway level, at the backend level, at the database level? Maybe it's even lower, in the infrastructure cases, where you're thinking about enforcing at the Kubernetes layer or in your CI pipeline. And then that will have implications around how, architecturally, you actually deploy it.
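The namespacing and shared-library idea could be sketched like this; the package names, the `data.user_roles` document, and the helper are all hypothetical:

```rego
# file: lib/users.rego -- a shared library both teams can import
package lib.users

# Assumes external data like data.user_roles = {"alice": ["admin"]}.
is_admin(user) {
    data.user_roles[user][_] == "admin"
}

# file: teams/payments/authz.rego -- one team's policy reusing the library
package teams.payments.authz

import data.lib.users

default allow = false

allow {
    users.is_admin(input.user)
}
```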
Do you even have the opportunity to run a sidecar? The third idea here is really the data. It's crucial to understand the data dependencies of the policies that you want to enforce. Some policies, for example, require lots of data. If you were to write a policy that says only the owner of a resource can delete it, it's fine to write that policy, but then anytime you're making a decision, say, can Alice delete resource foo123, you need some external data about whether Alice is the owner of that resource foo123 or not. So the data dependencies matter quite a bit when you're trying to think about policy. And these three things are all related to each other, right? If you're trying to enforce a policy at a gateway, then you have to realize that you may not be able to shuttle all the data you need to enforce the policies you want at the gateway. Whereas if you were to move this further down in the architecture and enforce it at the backend, well, maybe you've already got that data available about who is the owner of which resource, and therefore the data dependencies are much easier to satisfy. So these are the three dimensions that we typically talk with folks about in order to understand how to properly apply OPA. The purpose of this talk is really to go through three, maybe three and a half, design patterns that we have seen people successfully use and deploy to solve authorization problems. What is a design pattern for OPA? Well, it's very analogous to a design pattern in software engineering. Remember, in software engineering, we're really talking about this idea that you have a common programming problem, and a design pattern gives you an algorithmic solution, a way of solving that programming problem. With OPA, a design pattern is very analogous. The idea is that we're going to describe a common policy problem.
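The owner-can-delete example could be sketched like this. The `data.resource_owners` document is the external data dependency being described; its structure and all names are illustrative:

```rego
package app.authz

# "Only the owner of a resource can delete it."
# Data dependency: data.resource_owners must be injected into OPA,
# e.g. {"foo123": "alice"}.
default allow = false

allow {
    input.action == "delete"
    data.resource_owners[input.resource] == input.user
}
```

For the query "can alice delete foo123", the input would be something like `{"user": "alice", "action": "delete", "resource": "foo123"}`, and the decision hinges entirely on whether that ownership data is available wherever OPA is deployed.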
And then the design pattern describes an architectural solution that helps you solve it. It gives you a starting point for solving your authorization problems. Now, just like any design pattern in software engineering, there are, I think, seven different properties or fields that we'll discuss. We won't go into all of them in a great deal of detail, but not surprisingly, three of those are the ones we just described: the software architecture that you're deploying, the policy, and any data requirements needed to evaluate that policy. So those are the key three decisions that you're going to make. And then for each design pattern, we're going to talk about some other consequences of those decisions. We'll talk about the performance characteristics and the security characteristics of those decisions, and obviously there are some more decisions that you need to make for each of those. But then also: what are the key user stories or problems that this particular design pattern solves? And why would you use it to begin with? So anyway, we'll go through these three to four different design patterns, and for each one, we'll quickly touch on all of these to give you a sense of how to get started with OPA. All right, like I said, we're going to have three to four; I keep saying four because one of these is emerging. But when I think about the design patterns, I really break them down along these two dimensions. One of the dimensions is really: what domain are we talking about? And there are two domains that we'll cover today, one of which is configuration slash infrastructure.
So the real idea here is that you're talking about Kubernetes or Terraform or something where, typically, some developer is trying to create some new resource, and that resource is pretty complicated. You know, you're talking about several hundred lines of JSON data. And you're trying to put guardrails in place to make sure that those resources, those configurations, are done correctly and that they don't have any security or compliance or operational problems. So that's one domain: configuration authorization. The other domain that we'll talk about is APIs, or maybe applications. The idea here is that the things you're handing over to OPA to make decisions on are relatively small. They're not several hundred lines of JSON; they're three, four, five, ten lines, maybe. A good example there would be an HTTP API. Maybe you give OPA a user, a method, and a path, and that's really all you need from an input point of view. And then you're just trying to decide whether that user is authorized to perform that action on that resource. So that's one dimension: domain. The other dimension that we'll talk about is compute power. How much compute power are you willing, or do you need, to provide to address your authorization challenges? It's either small or large, and we'll see what I mean by that. In some sense, this corresponds to online versus offline and centralized versus distributed. Okay, so let's jump into our first design pattern. The first one here is really offline configuration authorization. Imagine that you have a CI/CD pipeline. You've got developers who are continually checking configuration files into, let's say, your Git repo. And what you want to do is put some authorization policies, some rules and regulations, in place to check that those configuration files are safe to be checked into Git, or, if you're employing GitOps, safe to be deployed to real live systems.
This is a common thing that we see people do all the time. Now, how do you address this problem with OPA? Actually, you can apply OPA in a couple of different locations and ways to address this problem, and sometimes you might want to apply them all. That's what you see here. One way you can apply OPA is on a developer laptop: have OPA run on a developer's laptop to evaluate all those rules against the source code or configuration file changes that the developer is making, long before they get uploaded to the CI pipeline. Then, later, you can use OPA effectively as part of a unit testing framework, to run OPA against those configuration files as part of a CI check and then fail the PR if those checks don't pass. You can even do this later still, after the files get committed: maybe you've changed your policies, you've increased the set of rules that you're trying to apply, and now what you want to do is scan that repository and all the files in it to tell you whether there are any files that have already been committed that fail your new policy checks. That's another fine application of OPA. In all these cases, this is the architectural view of things: how do you apply OPA, and where do you apply OPA, to enforce these policies that you might want to write? And the nice thing is that the three instances of OPA shown in this picture, which could in reality be one instance, can all use exactly the same policies. What kinds of policies do people end up writing? Well, here are a couple of very different examples. One is drawn from Kubernetes; the other is just a random configuration file for your application. And here what we see is: maybe you want to write a policy about the labels that you require on load balancers.
Maybe you want to say every single one of these load balancers in Kubernetes has to have an owner label, right? That's a sensible thing to say. On the application configuration side, you can imagine a similar kind of policy. I think for this one, what we said was: just ensure that, let's say when we're deploying to production, this storage URL has to end in huli.com. All right, I won't go through the details here if you haven't seen OPA's policy language; this is kind of what it looks like. For those of you who've seen it before, what you'll notice is that we're writing some deny statements, which means that these policies for configuration are often block lists, right? You're enumerating the conditions under which you want to reject a file. And you're also providing human-readable error messages. So at the end of the day, you can kind of make it out here: it's saying, hey, if this check succeeds, we found a load balancer that doesn't have an owner label, and that's exactly what we'll tell the user, so that when they get this failure on the CI check or on their laptop, they actually see what they need to go and fix. Remember, we've talked about two of the three prescriptive properties of this design pattern: the policy and the architecture. The third is data. In terms of data, typically what we see in this case is not a lot of external data being used; remember the PagerDuty example, we don't see a lot of that here, with two exceptions to that rule. One is that sometimes people end up wanting to write policies that span multiple files. If you're doing that, then you have to have some additional data: you need to be scanning the set of all files together, as opposed to simply scanning them one at a time. So arguably that falls into the external data category.
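A hedged reconstruction of that kind of rule, along with the style of unit test you could run with `opa test` as a CI check. The field names follow the standard Kubernetes Service schema, but the message text and file layout are illustrative, not the exact slide:

```rego
# file: policy.rego
package kubernetes.validating

deny[msg] {
    input.kind == "Service"
    input.spec.type == "LoadBalancer"
    not input.metadata.labels.owner
    msg := sprintf("LoadBalancer %v is missing an 'owner' label", [input.metadata.name])
}

# file: policy_test.rego -- run with `opa test .` in the CI pipeline
package kubernetes.validating

test_loadbalancer_without_owner_is_denied {
    count(deny) > 0 with input as {
        "kind": "Service",
        "metadata": {"name": "web", "labels": {}},
        "spec": {"type": "LoadBalancer"}
    }
}
```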
The other thing we sometimes see people do is want a mechanism that allows for an exclusion framework, an exception framework, that says: apply this rule generally to all of my files, except maybe there's this one team, they've got this file, they can't change it, and there are good reasons they can't change it. And so you just need to be able to turn off that violation, to turn off that rule, for that particular file. That would be the exception, and often that's done through external data. As for the performance characteristics you get out of this: latency here is super relaxed, right? If it takes a second, that's fine. If it's a minute, that's probably fine too, since you're often running this in a CI pipeline. Not so great on a developer laptop; obviously you'd like that to be quicker. Throughput and availability, and you'll see later why we talk about those, aren't really an issue in this particular use case, because everything is offline. Security, again, is not a real challenge. Dev laptops are fine; what you're really concerned about there is usability anyway. That's the only real reason to run the checks on a dev laptop. PR checks and routine scans are pretty secure. Concrete problems: I usually think of these as user stories. So from a DevOps point of view, maybe you want to put guardrails on your CI pipeline. That's a great use case here. From a security point of view, maybe you actually want to automate a bunch of the security checks that you would otherwise have to do manually. That's another great concrete problem. And then from a developer point of view, maybe you've got a whole bunch of new folks who you're trying to onboard, and instead of just teaching them how to configure Kubernetes or your applications correctly, you want to take that wiki and turn it into code so that you don't have to teach them quite as much.
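The exception framework mentioned above could be sketched like this. The shape of `data.exceptions`, the rule name, and the input fields are all assumptions for illustration:

```rego
package config.validate

# data.exceptions is external data mapping a file to the rules it's exempt
# from, e.g. {"legacy/app.yaml": ["require-owner-label"]}.
exempt(file, rule) {
    data.exceptions[file][_] == rule
}

deny[msg] {
    not exempt(input.filename, "require-owner-label")
    not input.metadata.labels.owner
    msg := sprintf("%v: missing 'owner' label", [input.filename])
}
```

The nice property of this design is that turning a violation off for one file is a data change, not a policy change, so the team that owns the exception list doesn't have to touch the rules themselves.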
They can learn through interacting with software. Okay, so that's the first design pattern. Hopefully that one was fairly clear. We'll go through this next one a little quicker, because it's really the online version of configuration authorization. The setup is very similar. It's the same kind of problem, except in this case we're actually trying to enforce rules within the API server that's protecting and running, let's say, Kubernetes or your public cloud or wherever those configuration files are going. So here you're really trying to inject OPA and enforcement into your real live platform to protect it, instead of doing some offline kind of thing. Here's the architectural picture you've got in this case. You've got to be able to hook OPA into that API server that protects Kube or your public cloud. And then what you want is to ensure that every single request going into that API server is routed through OPA, to make sure that OPA can actually make the authorization decisions. And then, obviously, if OPA says no, that resource isn't good, the API server needs to be able to return a reasonable response back to the caller and say, hey, we rejected this, and here's why. And obviously, if OPA allows the request, then the API server needs to go ahead and continue processing that request as if it never even asked OPA. For Kubernetes, this is a very popular use case for OPA. I will just call out here that with Kube, you don't just have one API server, you have many, right? Every Kubernetes cluster has its own API server. And so you might not end up with just one instance of OPA; you could end up with many instances of OPA, not just on one cluster, but across multiple clusters.
So wherever those API calls are, the idea for this online configuration authorization pattern is that you always have an OPA hooked in, and that OPA can run locally or remotely; it's really up to you. But typically what we see is that OPA is run locally, next to and on top of each of the API servers that you're integrating with. One interesting thing that happens in this case, at the policy level, is that the API server often has the ability to take the inputs that the user provided, here's my Kubernetes resource or here's my public cloud resource, and transform them before it hands that description of the resource off to OPA. And so that's what you're seeing here. This happens with Kube all the time. When you're submitting a resource to Kubernetes, you as a person, as a developer, will submit the file that you see here, and then Kubernetes will send it through this pipeline of stuff. By the time OPA sees it, it'll see this much different YAML file. All that's happened is that some structure has been added and some defaults have been filled in, and, as you can see, it's quite a bit longer. So that's one difference when you go from offline to online: you have to deal with policies that are written over an input handed to OPA that's not necessarily the same as the input that people are accustomed to. Again, though, the policies are conceptually the same. They're block lists. You're still generating human-readable messages. And then here, I guess, is just what I said already: you're operating on these intermediate representations that the outside world might not otherwise know about. External data: there's a little bit more external data that we see in the online configuration authorization use case. Why?
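To make that transformed input concrete, here's a hedged sketch of a Kubernetes admission rule. In this online case OPA typically sees an AdmissionReview-style wrapper, so the resource sits under `input.request.object` rather than at the top level; the `data.kubernetes.ingresses` replica is an assumption (it would have to be synced into OPA separately), and the field paths follow the standard Ingress schema:

```rego
package kubernetes.admission

# Reject a new Ingress whose host collides with an existing Ingress.
deny[msg] {
    input.request.kind.kind == "Ingress"
    newhost := input.request.object.spec.rules[_].host
    other := data.kubernetes.ingresses[_]
    other.spec.rules[_].host == newhost
    msg := sprintf("host %v is already claimed by another ingress", [newhost])
}
```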
Because maybe you're uploading a binary, and so you want to apply a rule that takes into account the result of a security scan. Or maybe you've got exceptions. Or maybe you've got multi-resource policies; we see those often as well. Like: don't allow a new ingress to be created if there's another ingress that's using the same DNS name. Performance here is quite a bit different. Latency numbers are between 10 and 100 milliseconds, much, much different than the offline case. Throughput: yeah, you've got multiple requests happening all the time. You're on the critical path; OPA is at this point on the critical path for making authorization decisions, so throughput does matter. OPA does support multi-threading, and of course you can run multiple instances of OPA as well. Availability here is crucial. Once you're hooked into the API server, it is super important to decide whether you're going to fail open or fail closed, with all the consequences of each one. If you decide to fail open, then what you probably want is a compensating control that's doing the equivalent of the platform scan we talked about in the offline case, where the idea is that you're able to produce a report that says: here are all the resources that are violating my policy. Security: it's super important to make sure this is secure. If you're actually responsible for the security of your platform, you need to make sure you're doing things like setting up authn and authz and encrypting traffic. Next up is design pattern number three. Here we're going to shift away from that configuration, or infrastructure, authorization and focus more on application, or API, authorization. So here the idea is you've got an application. It's trying to service its end users' needs, but that end user is maybe somebody like me.
I'm logged into my bank and I'm trying to withdraw money from my account. Is that authorized or is it not? There are all kinds of different ways that you can integrate OPA. One popular one is shown here: you integrate it into the service mesh or into a network proxy, and then you just have OPA make the decisions that it normally would. Is this API call authorized or is it not? Architecturally, here we've generalized this a little bit. The key to the architecture is that we're going to run OPA as a sidecar. We see this happening over and over. Notice that each of these big boxes in the picture is a server, and within that server you've got the service, like an instance of your microservice, and an instance of OPA. That service is sending requests to OPA to get decisions. Importantly, if you've got 500 instances of a microservice application and you're running OPA as a sidecar, then you have 500 instances of OPA. So the idea here is that you're running OPA very close to the service to get high availability and high performance. In terms of policies, again, you could write network-level policies that look at source and destination. Or you could go up a level a bit and talk about the L7 method, user, and path. Or you could also apply this use case to make application- or business-level policy decisions. Here you just see, maybe I'm handing over a user with some groups, a resource, and an action, and then you're making a decision. Can this user execute this action on this resource? Can this user execute this HTTP method on this path?
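A sketch of that kind of sidecar policy, at both of those levels, might look like this in classic Rego. The group name, path layout, and input shape are illustrative assumptions, not details from the talk:

```rego
package httpapi.authz

# Deny by default; any matching rule below flips the decision to allow.
default allow := false

# L7-level rule: anyone in the "admins" group may call anything.
allow {
    input.user.groups[_] == "admins"
}

# Business-level rule: a user may GET their own account resource,
# e.g. GET /accounts/alice when input.user.name == "alice".
allow {
    input.method == "GET"
    input.path == ["accounts", input.user.name]
}
```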
Typically here we're optimizing for speed, and the policies that you write are typically allow. Sometimes they'll return multiple items, but the core of that policy is really focused on allow, maybe with some status codes or a little bit of an error message. Secondary properties here: external data. Remember, this is OPA running as a sidecar, so this external data needs to be very small and static. You can't be replicating 10 gigabytes of data to every instance of your microservice all the time. A good example might be API metadata like you find in OpenAPI, which is relatively small and stable, or public keys. In addition, the other big source of information that people use is typically the information contained in a JWT. So in an application, you often have a user sign in and authenticate, and then they're sending this JWT around, and that JWT can be handed to OPA to have OPA make good decisions by inspecting the internals of that token. Performance here is quite a bit different. We're hitting one-millisecond targets here, not even the 10 or 100 that we saw in the last one. Throughput: you're talking about trying to get 1000 QPS. Availability is absolutely crucial. This is actually the main reason that you would run OPA as a sidecar: so that you know you're no longer relying on the network to get an authorization decision. You can just jump over localhost. Security: typically we'll see people rely on host-level security. Once an attacker has compromised the host, there are so many things they can do that we typically don't see people spend a whole lot of time worrying about securing the connection between OPA and the microservice, because it's all on the same host. For somebody to compromise that means they've gotten access to the host.
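The JWT flow mentioned above might be sketched like this: the service hands the token to OPA, OPA verifies and decodes it with the built-in `io.jwt.decode_verify`, and the decision is based on the claims. The secret, the claim names, and the rule itself are illustrative assumptions:

```rego
package httpapi.authz

default allow := false

allow {
    # Verify the signature and decode the token the service handed us.
    # decode_verify returns [valid, header, payload].
    [valid, _, claims] := io.jwt.decode_verify(input.token,
                                               {"secret": "dev-only-secret"})
    valid
    # Decide from the token's claims instead of replicated external data.
    claims.role == "teller"
    input.method == "POST"
}
```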
Do, though, make sure that your OPA APIs are restricted so that they can only be accessed through localhost. Concrete problems here: zero trust is certainly one, making sure that every API, whether it's a microservice API or one of your public-facing APIs, has good authorization in place. We've heard people talk about using this as a mechanism to get different teams within an organization, within a microservice development environment, to coordinate: if team A wants to start using team B's service, team A's got to come and tell team B, "hey, I'd like to use your service," so that team A doesn't suddenly flood team B's services in production with load they can't handle. End-user authorization is another one that we'll see people deploy this way. Last one: this is an emerging pattern that we're starting to see more and more, so this one's a much quicker one. The idea is that you've got an application that needs authorization decisions, and you want to decouple those authorization decisions from the application, but at the same time, because of the data load, you don't want to run OPA as a sidecar. So in this case, what you end up with is using OPA as a building block for a centralized service, basically because of data gravity: you've got too much data there. And so we're seeing people deploy OPA more often like this. The kinds of policies people write are similar to the ones we just saw: business-level policies, policies about whether employees can perform actions or their end customers can perform actions. Almost always here I see a large amount of external data. LDAP and AD would be an example; often permissions data is what we see people storing or using OPA for. Performance here: the latency can't be as extraordinary as for the sidecar. You're looking at 10-millisecond targets, or on the order of 10-millisecond targets.
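In that centralized shape, policies typically join the input against a large dataset loaded into the central OPA, for example permissions synced from LDAP/AD. The data layout and field names below are illustrative assumptions, not from the talk:

```rego
package app.authz

default allow := false

# `data.permissions` is a large document pushed into the centralized OPA,
# e.g. synced from LDAP/AD -- too big to replicate to every sidecar.
allow {
    grant := data.permissions[input.user][_]
    grant.action == input.action
    grant.resource == input.resource
}
```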
Throughput's got to be very high, because now you've got a centralized service; you can't rely on distributing the load to the edge. And availability is of course crucial, and what we see people do there is replication. Security: here you've got to go ahead and secure all of the OPA APIs in the usual ways, with authn, authz, and encryption in transit. A couple of concrete problems here: how do you migrate applications to the cloud if they're dependent on an on-prem LDAP, or just how do you build a centralized permissions application? Okay, the thing to take away: there are several different design patterns that you can look at to give you a good starting point for applying OPA to solve your authorization problems. And by all means, check us out online; there are a couple of different places here. I'm in the midst of putting all this material together as a blog, so hopefully you'll have another form to look at this in too. Hope that was helpful, and definitely check us out online. Thanks all.