Okay, so my name's Torin, I work for a startup called Styra, and I'm also the tech lead on a project called the Open Policy Agent. And today I'm going to be talking about how you can enforce bespoke policies in Kubernetes. And so the focus of this talk is going to be on how you can take admission controllers in Kubernetes and use them to enforce all kinds of custom policies that are organization specific and that matter to your companies. So this is an overview of what I'm going to talk about. First I'm going to give a little bit of background on how we think of policy. Then I'm going to introduce an example scenario where you'd want to enforce some custom policy. And then I'm going to show how you can do that with admission control. And then I'm going to introduce the Open Policy Agent project at the end. So when we're talking about policy, we're talking about sets of rules that govern how systems should behave. So these could be authorization rules or admission control rules or network policies and so on. And the interesting thing about policies is that they apply to pretty much every single organization on the planet, and they're also vital to their long-term success, because they codify important requirements around things like cost and performance and security and internal convention and so on. The challenge with enforcing policy, obviously, is that policies apply across the stack. They're typically very specific to organizations, and they often develop organically and then evolve over time. And so they're hard to predict and they're hard to plan for a lot of the time. Now when it comes to actually enforcing policy, the approaches that are out there are kind of all over the place. So you find a lot of companies that just rely on tribal knowledge and wikis and docs and spreadsheets to actually enforce policy. So they write it down on a wiki and they hope it gets enforced.
Or when you have a policy question, you have to go and ask somebody on another team who knows how to set the configuration just right to secure the system. At the other end of the spectrum, a lot of companies start by hard-coding policy decisions into software. And while that works initially, the cost of that over time goes up quite a bit, because not only are the policies harder to understand, since they're so tightly coupled with the rest of the business logic in the system, but they're also very expensive to change, because you need to do an entire software release in order to get changes out. And so this isn't necessarily a new space. There are lots of policy solutions out there. But when you dig into them, a lot of the time what you find is that they don't have the core expressiveness that you need to say what you want in your policies. So you can't write the logic that you want. You don't have access to the data or the context that you need to make policy decisions. You can't even necessarily generate the kinds of policy decisions that matter, right? So a lot of the time you need to express more than just, is this thing allowed or denied. You want to say what fields are allowed to be displayed, or what annotations have to be enforced or applied to objects when they're created. And then moreover, a lot of the time the languages that they give you are not particularly sophisticated, and so you can't take policies and split them up and create reusable functions and rules and so on that you can share throughout your policy code base. So I want to sort of motivate this discussion with an example scenario. And so we've got this company, Acme Corp, and they've got engineers. And so Alice is a platform engineer. She's responsible for setting up the Kubernetes clusters and making sure they're secure. And then Bob is an application developer, and he has to ship features as quickly as possible.
And sometimes those features don't work the way that he thought they would, and so he needs to be able to SSH or exec into those containers and run some commands to figure out what's going on. The problem is that Bob cannot be trusted. He has introduced too many vulnerabilities into the system. He's brought down the system too many times. And so Alice really needs a way to say that Bob cannot get shell access to containers that are in a privileged security context in a certain namespace that happens to be running all their production workloads. So the question is, what can Alice actually do to enforce this policy today? And so over time, as Kubernetes has matured, it's developed a bunch of features around extensibility. And so they've added lots of webhooks for things like authorization and authentication and image policies and scheduling and all these different things. But then in 1.7, they added a great feature called Dynamic Admission Control, and that's what we're going to focus on here. And so what that does is it allows you to decouple policy decisions and policy enforcement from Kubernetes itself. Going forward, there's a bunch of other great features that are coming, like mutating webhooks, and these features are moving to beta. And so that's really exciting. So for people that don't know about admission control already, it's a stage that runs whenever requests are received by the API server. So whenever, for example, Bob runs kubectl exec, that sends a request to the API server, and that request has to be authenticated, and it has to be authorized using, for example, role-based access control. And then it has to pass through a series of admission controllers that decide what to do with it. And so before any request affects etcd or any clients get notified about a state change in the system, these admission controllers run.
And the goal of these admission controllers is to enforce various kinds of policies around resource management and security, and defaulting, and semantic validation, and so on. And so each admission controller gets a chance to either just allow the request or the change, modify the change and pass it on, or deny it. And so one important difference between authorization and admission control is that in the authorizer framework, if a single authorizer allows the request, then it'll be allowed. However, if a single admission controller denies the request, the request gets denied. So it's the last place where you can enforce policy in Kubernetes before resources are allowed into the cluster or created in the cluster. So it's very important for policy enforcement. Now before 1.7, this was sort of the state of things: you had to basically statically compile and configure your admission controllers inside of the API server. And so over time, more and more admission controllers got added to Kubernetes, and so you'd see a handful of them added in each release. And some of them are very simple. Some of them are more complicated, and some of them are very specific to certain use cases. And so this is a list of all the different ones that you see in the code bases in Kube and OpenShift today. Moreover, because these things are not always just static logic, you sometimes need to provide configuration to them, and you had to do that by reconfiguring Kubernetes. You had to go and change the command line arguments that you used to start the API server with, and you had to provide static configuration files that you set up when bootstrapping the cluster, and so that made it really hard to change the config on the fly.
And so for example, if Alice wants to implement her Bob protection policy, she would first basically have to take Kubernetes and fork it into her private GitHub repo for Acme Corp. She'd go and implement the policy inside of the admission control framework. And then she would have to go and build, publish, and upgrade Kubernetes itself just to enforce this relatively simple policy. And so that's not what we want. The good news is that in 1.7 we now have these external admission webhooks that you can use to run admission controllers on top of Kubernetes itself. And so the webhooks act just like regular admission controllers: they get a chance to allow or deny incoming write requests before etcd is updated and before any clients are notified. And so you're able to externalize or decouple policy enforcement from the core of Kubernetes. Now one really nice thing about these webhooks is that you can actually configure them dynamically through the Kubernetes APIs. So you can deploy your webhook and then have the configuration change as the requirements for the webhook change. Now the way that you actually configure them, at a high level, is that you basically specify a bunch of match operations to say, OK, for example, I want my webhook to receive create requests on pods and delete requests on services. And then whenever a request comes in that creates a pod or deletes a service, it triggers a query or a call out to the admission controller to choose whether or not to allow or deny the request. And then you can also put wildcards into the configuration. So if you want to say, just call me on any create, delete, or update for any kind of resource, you can also do that. So it's kind of flexible, it's nice. Now the important thing here is that the admission controller webhooks give you all the context, the input, the data that you need in order to make policy decisions over the resources that are created in Kubernetes.
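As a rough sketch, a registration with match rules like the ones just described might look something like this. This follows the 1.7-era v1alpha1 `ExternalAdmissionHookConfiguration` shape (it was renamed to `ValidatingWebhookConfiguration` on the way to beta), and the hook and service names here are purely illustrative:

```yaml
apiVersion: admissionregistration.k8s.io/v1alpha1
kind: ExternalAdmissionHookConfiguration
metadata:
  name: deny-privileged-exec
externalAdmissionHooks:
- name: webhook.acme.example        # illustrative hook name
  clientConfig:
    service:
      name: admission-controller    # the Service fronting the webhook
      namespace: default
    caBundle: <base64-encoded CA certificate>
  rules:
  # Call the webhook when pods are created...
  - operations: ["CREATE"]
    apiGroups: [""]
    apiVersions: ["v1"]
    resources: ["pods"]
  # ...and when services are deleted.
  - operations: ["DELETE"]
    apiGroups: [""]
    apiVersions: ["v1"]
    resources: ["services"]
  failurePolicy: Ignore             # 1.7 fails open; this becomes meaningful in 1.9
```

The `operations`, `apiGroups`, and `resources` fields all accept `"*"` as a wildcard, which is how you'd express "call me on any create, delete, or update for any resource."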
So they provide the name of the operation, the user, the group, and so on, performing the operation, as well as the entire representation of the object itself. If you're working with authorizers, you don't get this full representation of the object. You only get the name of the object, the type of the object, and so on. So in this case, though, Alice has all the information she needs in order to actually implement this policy to protect the system against Bob. And so she could write a webhook now that makes a decision about whether or not to allow these connect requests. And then that decision would result in a response being sent back from the webhook to the API server that says that this request should be denied. And then she can include a message that tells Bob why he's being denied when he runs kubectl exec. And so it's important to note that if any of the webhooks return a denial, then the request gets rejected, right? So it doesn't behave like an authorizer. OK, so I figured we would actually look at how you would go about implementing one of these webhooks. So I was doing a dry run of this demo right before the talk, and then my laptop crashed. And so hopefully, this goes well. OK, so the first thing you need to do is create a service that the API server can use to look up your webhook that's running on top of Kubernetes. So we're going to go ahead and create that. And then we're going to basically create a deployment that runs our webhook. And so this is pretty standard. It just identifies the container image that's got the webhook. And then it also has to specify the TLS certificates that the API server will use to authenticate the remote endpoint. So we're going to go ahead and deploy the admission controller now on top of Kubernetes. And then we can see that it's brought up immediately. So that's great.
And then we can see that the webhook has actually registered itself as an external admission controller in the API server. So this is that configuration that controls whether or not the webhook gets called. So you can ignore this big certificate bundle at the top. The interesting bit down here is that it specifies rules that say, OK, for any operation on any sub-resource of a pod, invoke this webhook and ask whether or not the operation should be allowed or denied. OK, so let's take a look at how this is actually implemented. So this is the Go code that actually implements the webhook. So you have just a main function where you do the standard boilerplate of loading the Kubernetes config to talk to the API server. And then you just start a web server that serves the webhook endpoint. During startup, you need to register the webhook somehow. So you can either do this in code or you can do this with configuration. I've chosen to do it with code just because it makes it so that as soon as the webhook is deployed, it registers itself. It says, please call me whenever something happens. And so I'm just specifying that configuration here using the Go client library. OK, and so then we actually get down to the business logic of the controller. And so here we're basically checking whether the incoming request is performing a connect operation on the production namespace. And we're basically looking up whether the container in the pod that you're trying to connect to is running in a privileged security context. And if it is, then we log it, and then we return a denial back to the API server. And then the rest of this code is basically just setting up the web server that you need in order to accept these requests from Kubernetes. OK, so that's about 150 lines of Go or so. OK, so now we're going to test this policy out. So we're going to create a privileged container that's just going to run an alpine image.
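The demo code itself isn't captured in this transcript, but the business logic just described can be sketched roughly as follows. The struct shapes here are simplified stand-ins (the real controller decodes the AdmissionReview types from the Kubernetes libraries and looks the pod up via the API server, rather than taking the pod spec directly off the request):

```go
package main

import "fmt"

// Simplified, illustrative shapes for the admission review payload.
type SecurityContext struct {
	Privileged bool `json:"privileged"`
}

type Container struct {
	Name            string          `json:"name"`
	SecurityContext SecurityContext `json:"securityContext"`
}

type PodSpec struct {
	Containers []Container `json:"containers"`
}

type AdmissionRequest struct {
	Operation string  // e.g. "CONNECT" for kubectl exec
	Namespace string
	Username  string
	Pod       PodSpec // in the real controller, looked up via the API server
}

type AdmissionResponse struct {
	Allowed bool
	Reason  string
}

// admit is the business logic: deny exec (CONNECT) requests into
// privileged containers in the production namespace.
func admit(req AdmissionRequest) AdmissionResponse {
	if req.Operation == "CONNECT" && req.Namespace == "production" {
		for _, c := range req.Pod.Containers {
			if c.SecurityContext.Privileged {
				return AdmissionResponse{
					Allowed: false,
					Reason:  "cannot exec into privileged containers in the production namespace",
				}
			}
		}
	}
	return AdmissionResponse{Allowed: true}
}

func main() {
	// Simulate Bob trying to exec into a privileged pod in production.
	req := AdmissionRequest{
		Operation: "CONNECT",
		Namespace: "production",
		Username:  "bob",
		Pod: PodSpec{Containers: []Container{
			{Name: "alpine", SecurityContext: SecurityContext{Privileged: true}},
		}},
	}
	resp := admit(req)
	fmt.Println(resp.Allowed, resp.Reason)
	// prints: false cannot exec into privileged containers in the production namespace
}
```

The rest of the 150 lines or so is the TLS-serving web server and the self-registration boilerplate described above.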
And what we'll do is we'll first create it in the default namespace. And then we'll create another one in the production namespace. And so then if we try to exec into the default namespace, it works as we'd expect. But hopefully this works. This is what broke right before the talk. If we try to exec into the production namespace, we get this rejection back from the API server. And we get the message that was set by the admission controller. OK, so the demo gods were kind to me this time. And yeah, everybody should clap for that, I think. OK. All right. So if you start implementing these webhooks, there's a few things you probably want to keep in mind. So the number one thing is that you really need to be careful with the dependencies that you introduce in the webhooks. Because every single call coming into the API server is going to be subject to the latency and availability of the webhook. And so you really want to be careful about that. The other thing you want to pay attention to is that you want to avoid performing side effects inside of these admission controllers if at all possible. Because if your admission controller decides to allow the request, it doesn't necessarily mean that an admission controller later in the chain is going to allow it. And so you don't necessarily know whether or not the requests are being allowed or denied. So you want to be careful about that. The second big thing that you want to be aware of is that right now the API server sends the internal representation of Kubernetes objects over the wire. And so this means that it's a little bit trickier to get the objects to deserialize if you're using the client-go framework. And they just won't look exactly like the objects that you see when you do API requests against Kubernetes. So all the same data is there. It's just that the format of the data is slightly different.
Something that's also getting better is that right now the API server fails open if the webhook call fails, right? So if there's a network partition or the webhook crashes or something like that, then that error is going to go back to the API server, and the API server is just going to allow the request. Now this has been designed to be configurable from the start, but you couldn't actually set that configuration until 1.9, where it's coming. So that'll make this much more applicable. Another thing to watch out for is that today you need to serve the webhook endpoint from the root path. So you can't embed this in another web server. You need to have your own. You need to own the IP and port. But that's also becoming configurable in 1.9. So that's getting better. These things are improving. The last thing I'll point out is that, actually, I think there's been a couple of talks on client-go so far, but the vendoring experience of getting client-go to work inside of your project has improved a lot over the last six months or so. And so the first time I did it... is there a question there? [audience question] Yeah, so the question is whether or not you need to use the Kubernetes code in order to make these admission control decisions. And you don't. You can just receive these webhook calls as JSON and then do whatever you need to do with them. But yeah, you're not getting the versioned API objects over the wire. So I just want to point out that the client-go experience has gotten a lot better. I had a lot of trouble getting it to work a few months ago. Last time I did this, last week, it was super smooth. So that's excellent. OK, so these webhooks and other features that are part of dynamic admission control, like initializers, have sort of laid the groundwork for extensible policy enforcement, which is great.
Because you don't want to have to go and modify Kubernetes itself whenever you want to enforce some new policy inside of your organization. The reason for that is that these policy decisions have been decoupled from the enforcement point. So you take the policy decision about whether or not Bob should be able to connect to a container, and you offload that to this webhook that's running somewhere. You don't bake it into your API server. [audience question] The question is whether or not you need to implement these in Go. And the answer is, no, you don't. It's just an API that you have to implement inside of your process. But that's a great question, because I think that we can probably actually do a little bit better than this. In the same way that when you configure a firewall or something like that, you don't write a bunch of C code to do that, maybe we don't want to write a bunch of Go code every single time there's a new policy that we want. So what I think we really want, and what everybody should want, is a declarative way of specifying these kinds of access controls in their clusters. But obviously, this hasn't really been done so far. And there's a few reasons for that. But one of the reasons is because this is the type of thing you have to write policy over. It's these deeply nested data structures that have all this embedding and these collections. And they have these domain models that are incredibly sophisticated and complex. So for example, a pod has 35 attributes or something like that that are deeply nested. And then the metadata adds another 10 attributes. So if you try to imagine how you would apply a firewall rule to this, it doesn't really make sense. When you work with firewall rules, you're thinking about five or seven tuples. This is like a 500 tuple or something like that. So it's really tricky to think about how you would do this declaratively. But this is what I'm interested in, and what a lot of people are interested in.
And so if you had a solution for this that was declarative, this is what it could look like, for example. So first of all, you're going to need some way to reference these deeply nested data structures that are inside of the JSON. So you want some way to dot into that JSON and refer to deeply nested values. The next thing you're going to probably want is some way to have intermediate variables so that you don't have to repeat these deeply nested paths all over the place. And then you're also going to need some way to iterate. You're going to need some way to walk over, for example, the containers in a pod to check whether or not the image is what you want, or whether it's got a privileged security context. And then once you have that, you need some way to actually express assertions or write policies or have logic that decides whether or not something should happen. And then once you have that logic, you're going to want some way to share it. You don't want to have to repeat that all over the place. And so you just need some way to wrap that up and share it and reuse it. And then finally, you need some way to take all of that together and build policies out of it. So for example, we could say that given the incoming review, if the user is Bob and the operation is connect and the namespace is production and there's a container in that pod that's privileged, then deny it. We don't want Bob breaking the system. And so this is exactly what we built the Open Policy Agent for. So OPA is an open source, general purpose policy engine. And what that means is you can take it and apply it in any system at any layer of the stack. So the way this works is that you basically offload policy decisions from your service. So your service executes a policy query and provides a bunch of input, like a pod or a deployment or a service. And then it gets a policy decision back, which may be, do not allow this pod to run.
Or, here are the things that need to be changed in the pod, and so on. So at the core of OPA is this high-level declarative language that we call Rego. And what Rego gives you is the ability to codify policy, to write policies that answer questions like: is this user allowed to perform this operation on this resource? Or, what clusters should this workload be deployed to? Or, what annotations need to be set on the objects when they're created? And so the neat thing about Rego is that you're not limited to just boolean policy decisions, like yes or no, allowed or denied. You can actually express decisions that are collections of values. They're actually just arbitrary JSON data. OPA itself is written in Go, and you can take it and use it as a library or a daemon. And so the idea is that it's meant to be as lightweight as possible, so it stores all the policies and data that it uses to do policy evaluation in memory, and it doesn't introduce any kind of runtime dependencies. So there's no external database that it has to connect to, or any external service that it has to reach out to, when you've got it deployed. So it's very easy to deploy. When you're using it on top of Kubernetes, you can actually just load all your policies in as config maps, and I'll show that in a minute. In addition to the core evaluation engine that gives you the policy parser, compiler, and interpreter, we also provide a bunch of tooling around it to help you build, test, and debug your policies. So for example, there's an interactive shell that you can use to develop and experiment with policies and write queries. There's a test framework and a test runner that you can use, so you can actually unit test your policies and verify that they're doing what you think they should be doing.
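As a hedged illustration of that test framework: a unit test is itself just Rego, conventionally in a `test_`-prefixed rule, and the `with` keyword substitutes a mock input. The package and rule names here are made up, and the input shape mirrors the exec-control example from this talk:

```rego
package admission_test

import data.admission

# The deny rule should fire for an exec (connect) into a privileged
# container in the production namespace.
test_deny_privileged_exec {
    admission.deny with input as {
        "spec": {
            "operation": "CONNECT",
            "namespace": "production",
            "object": {
                "spec": {
                    "containers": [{"securityContext": {"privileged": true}}]
                }
            }
        }
    }
}
```

Running `opa test` over the policy directory evaluates every `test_` rule and reports passes and failures.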
And then there's a bunch of debugging functionality so you can trace the evaluation of a policy and understand why it's returning a denial for this particular user, or this particular pod, or this particular deployment. And then lastly, we have a growing community of people that are using it. So the project itself is sponsored by Styra, the company that I work for, as well as Google Firebase. We have a bunch of users. We talked on Wednesday with Netflix about how they're using it to do authorization across their stack, in HTTP APIs and gRPC and SSH and Kafka. We've also got other users that are using it for Kubernetes and Terraform and more. And then we've got a bunch of integrations that you can use across projects like Istio and Kubernetes and Terraform and PAM and AWS and so on. So you don't have to actually start from scratch if you want to use OPA to enforce policies. You can use some of these integrations out of the box to enforce all kinds of custom policies inside of your organizations. OK, so let's have a look at how this actually works. OK, this is the part I really want to go well. OK, so first I'm going to create a namespace to deploy OPA into, and then I'm going to create a service, similar to what we did before for the other admission controller, and then I'm going to create a deployment for OPA itself. And I'm going to wait for OPA to start up. And so it's there, it's already running, that's great. And so the next thing I can do is check to see if OPA is registered. And so you can see here that it's registered, and it's actually saying, give me all requests across all API groups, versions, operations, and resources. And you can see that the failure policy here is actually ignore. So this is what's changing in 1.9. This will be configurable, so you can say, don't ignore failures; if the endpoint crashes or something like that, treat that as a denial, which is what you want. You want to fall back to deny in this case.
OK, so what I'm going to do is go ahead and create that pod again, that privileged pod, in both namespaces. And then I'm going to load in a policy now as a config map to OPA. So the policies can all be stored in Kubernetes as config maps. You don't have to add another storage layer or something for OPA; it can all just integrate with Kubernetes very nicely. And then when you load the config maps in, you can see that OPA has actually annotated them with the status of that policy. And so in this case, the status is OK, which means the policy has been installed into OPA, and it's being enforced. So you can actually tell whether or not there's been an error in the policy or something like that, so that you know that it's actually working. And so I'm going to try to exec into the Alpine container that's in the default namespace, and that's working fine. And now, fingers crossed. If we try to exec into the Alpine container in the production namespace, that gets rejected. OK, so that's the same behavior we saw before, but the implementation is a little bit different this time. So this is an example of a policy that we've written that implements this sort of access control in Kubernetes. And so at the bottom there's a bit of boilerplate, but at the top is sort of the business logic of the policy. So what we've done is we've defined a rule that generates an error message that says you cannot exec into privileged containers in the production namespace. And that error is going to be generated if the body of the rule is true. And so you can read the body basically as: input.spec.operation matches connect, and input.spec.namespace matches production, and the pod refers to a privileged container. And then we have a little function that defines whether or not a pod is privileged. And so we look up the pod spec from Kubernetes, and we search over the containers in that pod spec, and then we check if any of them have that privileged bit set.
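A rough reconstruction of the policy being described (the field paths are illustrative, and this version checks the pod spec on the input directly, whereas the demo policy looks it up from the Kubernetes data replicated into OPA):

```rego
package admission

# Generate an error when an exec (connect) request targets a privileged
# container in the production namespace.
errors[msg] {
    input.spec.operation == "CONNECT"
    input.spec.namespace == "production"
    privileged(input.spec.object)
    msg := "cannot exec into privileged containers in the production namespace"
}

# Helper: true if any container in the pod spec has the privileged bit set.
privileged(pod) {
    container := pod.spec.containers[_]
    container.securityContext.privileged == true
}
```

The `containers[_]` form iterates over every container in the collection, and `privileged` is an example of the reusable helper functions mentioned earlier.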
And then this policy, the logic down here, is just basically generating the response that has to go back to the API server. OK, so the question was, what happens if Bob changes the spec? And so you can write a policy that prevents Bob from actually modifying the spec if you want to. But in this case, maybe he should be able to do that. Maybe he should be able to exec into those containers. But OK, so that policy worked. That's great. But what I want to highlight is how OPA lets you write policies that are very dynamic. And so everything's on fire now. Something happened in the infrastructure. And so we want to actually give Bob access now to that container. And so what I've done is I've defined this simple little policy here that just says break glass is true. And so I'm going to load that policy into OPA as a config map again. And now if I try to exec into the container, I can. So that's cool. So this just shows how you can actually change the behavior at runtime very easily. OK, so now that the calm has been restored, the issue has been fixed. And if I try to connect into the container again, I get the original error message back. So these policies can be very dynamic. I actually specified break glass as a policy, but it could also just be data that you load into the policy engine, so you could hook it up in different ways. OK, so the demo gods are very good to me today. And I'm very happy about that. So something we're announcing today that we're really excited about is a standard library of policies for the Open Policy Agent project. So we have a bunch of integrations for OPA that let you hook it up to projects like Kubernetes and Istio. But we didn't until recently have a standard library of policies that you could take off the shelf and use to enforce all kinds of different things, like admission control and authorization and placement and auditing and so on, across a wide range of different projects.
So for example, we have risk management policies for Terraform. We have security group auditing policies for Amazon. We have admission control policies for Kubernetes. We have auditing policies for Kubernetes. We've got similar things for Docker. And we've also got some auditing policies for Istio to make sure that your Istio cluster is set up correctly. And so if you're interested in this kind of stuff, it's a great way to get involved, because these policies are very high level. They're pretty easy to write. It's kind of fun because you get to work with data. It's very predictable and enjoyable. So yeah, I recommend that everybody check out the extensibility features in Kubernetes 1.9. I think that they're going to really enable a bunch of new interesting use cases. Projects like the Metacontroller are quite exciting. OPA is a really cool project that we're really excited about, because it provides this reusable building block that you can use to enforce all kinds of different policies across all these different projects. And you can get started without having to write any code. You can take an off-the-shelf integration and a bunch of off-the-shelf policies and apply them to your infrastructure to do all kinds of things, like apply security controls and resource management controls and so on. So the last thing I'll point out is that what enables all of this is the extensibility that's been added into Kubernetes. So when you're thinking about designing these kinds of systems, think about extensibility, and think about decoupling policy decisions from policy enforcement. OK, so thank you very much. If there are questions, I'm happy to take them. Please go to the OPA repo and star it. And if you're interested in this demo, the code is all online. You can check it out. And yeah, thank you. [audience question] That's a great question. So the question is about the policy that I showed before.
So this policy is referring to data.kubernetes.pods. And the question is whether or not that's being queried synchronously or whether it's being cached in OPA. And the answer is that it's actually being cached. So when we deploy OPA on top of Kubernetes, you can configure it to actually replicate all the pods and deployments and config maps and services and so on into OPA. So then you can write policy over them. So you can take the same idea and you can use it to write auditing policies, for example. So if you want to find whether or not all pods are specifying resource limits and resource requests and so on, you can do that. Or if you want to find pods that are not dropping privileges or security capabilities and so on, you can do that. So you can get all this data into OPA and then interact with it very easily. [audience question] That's a good question. So the question is whether the logging, the audit logging you mean, is being handled in etcd? We haven't hooked up the audit logging to anything yet. So right now you can just query OPA and get back these events, basically, or these warnings, errors, and info messages. But you could certainly take that and pump them as events into Kubernetes and then use the Kubernetes event system to get access to them. [audience question] Yes, so I haven't shown this here, but you can write built-in functions for the policy engine. And those built-in functions can call out to external services; you can do whatever you want in a built-in function. So you can execute a request against an external service to get some data or whatever you need. Obviously, if you do that, then you kind of couple your policies to that external system. And so it makes them a little bit harder to test in your development environments locally with unit tests. But it's definitely something worth doing if that's what you need. [audience question] So the question is, what happens if you delete a config map that contains the policies?
And the answer is that OPA is watching the API server for config maps that carry a label saying they contain policy. If one of those config maps gets deleted, OPA gets the deletion notification and uninstalls the policy from the engine. So just as policies are upserted when OPA sees them created or modified, whenever they get deleted, they get retracted — they get uninstalled. Today, you would try to push an error back up, but that's not well supported yet. One of the improvements we'd like to make is to use CRDs — custom resource definitions — instead of config maps. Then you can do synchronous validation of the changes made against those CRDs and prevent that from happening. That's probably the best way to do it.

Third row, and then we'll go back. So the question is, what would happen if you define the same function in multiple files? In this case, you can see that the policies are namespaced with a package directive. If they're in the same package, then when the policies get compiled, we would return an error back because there's a conflict. If they're in different packages, that's certainly fine.

Next, behind. So the question is, how does RBAC compare to this model? Sure. Role-based access control is great: it provides a nice, high-level, configurable way of controlling who can do what. But it's very limited in terms of the expressiveness it gives you. It lets you say that Toran can create pods in a namespace, but it doesn't let you say that Toran cannot create pods that refer to an image outside of our internal registry, or any combination like that. So it's limited in terms of expressiveness.
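That registry restriction, for instance, can be sketched as a Rego admission rule. This is a hedged sketch, not the exact policy from the demo: the package name, the AdmissionReview input layout, and the registry host `registry.acme.internal` are all assumptions for illustration.

```rego
package kubernetes.admission

# Deny pods whose containers pull images from outside the internal registry.
# "registry.acme.internal" is a placeholder registry host.
deny[msg] {
    input.request.kind.kind == "Pod"
    container := input.request.object.spec.containers[_]
    not startswith(container.image, "registry.acme.internal/")
    msg := sprintf("image %q is not from the internal registry", [container.image])
}
```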
You can build role-based access control policies in this language, though, if you want. So it's very expressive. And yes, it's good for segregation of duties, absolutely. You could take a bunch of these auditing policies, for example, run them over all of your manifests in your CI/CD pipeline, and get back warnings and errors that you might want to pay attention to — or just ignore; that's up to you.

So the question is, how do you use this in other environments? You can take OPA and run it as a library: if you're building your own services, you can embed it into them. Or you can run it next to them, either as a sidecar or on the host, and your services can query OPA and ask for policy decisions. If you want to use it for something like Terraform, you can generate a Terraform plan and then run that plan through OPA to get an authorization decision, for example, in your CI/CD pipeline. If you want to use it in a cloud provider, then we need those providers to have extensibility — hooks to call out to an external endpoint. Kubernetes does a very good job of that; hopefully we see more of it from other platforms.

Yeah, so if you check out the standard library, that's where we're starting to build up this repository of common policies that you can take and apply to different providers and different projects and so on.

OK, one more question? So the question is whether you can apply policies based on what the container is actually doing. The policies I showed are just operating over Kubernetes configuration, right? But if you can get data that describes what's happening inside those containers into OPA, then you can write policy over it. OPA is not coupled to any of these projects or any of these providers.
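The auditing idea mentioned earlier — checking that every pod sets resource limits — can be sketched in Rego along these lines. This is a rough sketch: the `data.kubernetes.pods` path reflects resources replicated into OPA, and the namespace/name indexing used here is an assumption about that layout.

```rego
package kubernetes.audit

# Warn about pods that do not declare resource limits.
# data.kubernetes.pods holds the pods replicated into OPA;
# pods are assumed to be indexed by namespace and name.
warn[msg] {
    pod := data.kubernetes.pods[namespace][name]
    container := pod.spec.containers[_]
    not container.resources.limits
    msg := sprintf("%v/%v: container %v sets no resource limits", [namespace, name, container.name])
}
```

Querying the `warn` set then returns one message per offending container, which is the kind of warning/error feed mentioned for CI/CD pipelines.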
It's all just data, though — we're not coupled to Kubernetes or anything like that. All right, well, if there are no other questions, thank you very much.