Thank you. Good morning. This is Mahendra Kuncham from AT&T; I'm an associate director on the cloud team. And I'm Mike Denney, an architect in Mahendra's cloud realization and engineering group. Hola. You've seen many presentations from AT&T yesterday and today, and over the next few days you're going to see more. And you've seen that AT&T is committed to running all of its enterprise applications on OpenStack cloud instances. Not only enterprise applications, but also many virtual network functions moving into the OpenStack space. We are fully committed to running our enterprise applications there. If you look at the trajectory from last year to this year to next year: we started with a few OpenStack cloud instances, grew to hundreds this year, and will add a few hundred more next year across the globe. With that growth, a user access management system comes into the picture. As more and more environments come online, we need to make sure we control who gets what level of access to which OpenStack instances, and which tenants they can access. What is OpenStack user access management all about? Primarily, it grants permissions or privileges to certain users to access tenants, potentially across multiple OpenStack instances. At the same time, it prevents unauthorized access to the OpenStack environments. How exactly is it done at AT&T today? We're going to talk about two projects: one is the user access management system, and the other is role-based access control. When users want access, they are assigned to certain roles, and that is how we grant them the privileges to access tenants and OpenStack instances.
We're also going to talk a little about authorization and policy enforcement, and about role and policy creation, which Mike will cover. How does it work at AT&T today? When a user wants to access a tenant on any OpenStack instance, they submit a request to the system. The request is routed to their immediate supervisor for approval, confirming that this is an authorized user making an authorized request. If that supervisor is not the tenant owner, another approval step is triggered: the tenant owner gets a request asking, this request came in for us, do we really need to grant these privileges to this user or not? Overall, we need an approval mechanism to make sure each request is an authorized request with the proper approvals. Once the request is approved, automated provisioning pushes the user's access into the identity management system and from there into the OpenStack environment. If for any reason the automated provisioning does not work, error handling kicks in: the same steps are performed manually, so the user's access is still provisioned into the identity management system and the OpenStack space. This slide shows two pieces. On the left-hand side is the traditional model: a user who requires access to an application is assigned to certain roles, and those roles grant them access to the application.
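The approval flow described above (submit a request, supervisor approval, a second tenant-owner approval when the supervisor is not the owner, then automated provisioning with a manual fallback) can be sketched roughly as follows. This is an illustrative sketch only; the function and field names are hypothetical, not AT&T's actual system:

```python
# Illustrative sketch of the access-request approval flow described above.
# All names and data shapes here are assumptions for illustration.

def process_access_request(request, supervisor_approves, owner_approves, provision):
    """Route an access request through the approval chain, then provision.

    request: dict with 'user', 'tenant', 'role', 'supervisor', 'tenant_owner'
    supervisor_approves / owner_approves: callables returning True or False
    provision: callable that raises on failure, triggering the manual path
    """
    if not supervisor_approves(request):
        return "denied by supervisor"
    # If the supervisor is not the tenant owner, a second approval is triggered.
    if request["supervisor"] != request["tenant_owner"]:
        if not owner_approves(request):
            return "denied by tenant owner"
    try:
        provision(request)  # automated provisioning into IdM and OpenStack
        return "provisioned"
    except Exception:
        # Error handling: the same steps are performed manually instead.
        return "manual provisioning required"
```

For example, a request whose supervisor is also the tenant owner skips the second approval entirely, while a provisioning failure falls through to the manual path rather than being lost.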
On the right-hand side, the same concept is applied, and we know it works for the OpenStack model as well: users are assigned to certain roles and granted permission to access a tenant within an OpenStack instance. (Sorry, the slides are out of control; I don't know what's going on.) Once provisioning is done, the next piece is the auditing process. We've now granted permissions to the users and they have access. At regular, frequent intervals, we retrieve the role assignments from the Keystone instances and compare them with our user access management system, to make sure that only authorized users have actually been granted permission to access the tenants and the OpenStack instances. If we notice any violations, we notify their management and remediation action is taken. Remediation typically means requesting their managers either to approve the grant of those privileges or to revoke them so the access is removed. That's what we call the auditing process. As part of auditing, we also do verification and validation. At frequent intervals, we send a request to the managers as part of the verification process: here are the users you approved; do they still need the access or not? We do that on a frequent basis because once you grant privileges, it's very common for people to move from one position to another, their job roles change, and they don't need that access forever. That's where the verification and validation process comes into the picture.
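The audit comparison just described boils down to a set difference between the role assignments actually present in Keystone and the approvals recorded in the access-management system. A minimal sketch, with hypothetical data shapes rather than the actual AT&T tooling:

```python
# Sketch of the audit step: assignments actually present in Keystone
# versus assignments approved in the user access management system.
# Each assignment is modeled as a (user, tenant, role) tuple.

def audit(keystone_assignments, approved_assignments):
    """Return the discrepancies between actual and approved access.

    'unauthorized' entries exist in Keystone but were never approved
    (candidates for revocation); 'missing' entries were approved but
    never actually provisioned (provisioning gaps to remediate).
    """
    actual = set(keystone_assignments)
    approved = set(approved_assignments)
    return {
        "unauthorized": sorted(actual - approved),
        "missing": sorted(approved - actual),
    }
```

In practice the "actual" side would be pulled from each Keystone instance on a schedule, and any non-empty result would trigger the notification and remediation workflow described above.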
This slide is about monitoring: the monitoring system receives a notification when an employee or contractor leaves the company, and their access is automatically terminated. We first remove their access in the OpenStack space, and then in the identity management system. So overall it's a three-step process: provisioning the users; auditing to make sure only authorized users have access to the environment; and, on a regular basis, verification and validation together with user lifecycle management, so that when a user's access is terminated, they are removed from the systems. I want to touch on a few points here. Any time we add new OpenStack instances, or add more tenants to the environment, we want that information to be automatically populated into our user access management system. Then, when users come in and request access, they can say: here is the tenant I want, and here is the role I want to be part of. It could be a tenant member role, or it could be an admin role. Once the approval process is done, we provision into the OpenStack space, and then the same reporting mechanism happens as on the previous slide. We started with two roles in our space by default. The first is the admin role, which grants a tenant all the privileges they need; it means full privileges, something like root user privileges. The other is the tenant member role, which grants users access to their own tenants. But AT&T is very particular about what level of privilege we grant.
AT&T is very strict about being in compliance with security policy requirements, and last year a request came in saying: we do not want every tenant member to be able to create snapshots. Trim that privilege down so only certain users can perform snapshot operations, rather than giving that capability to everyone who has the tenant member role. The viewer role was also created last year, specifically for users who don't need to perform any actual tasks but want to obtain tenant information or look at the tenant's configuration. It's essentially a read-only role; that's what we call the viewer role. Those are the four roles we have in place today, and we are still thinking about adding a few more. As you've all seen, more and more virtual network functions are coming into our OpenStack cloud space, and those VNFs may be owned by many different organizations or teams. We do not want to hand out admin privileges such that one team steps into somebody else's domain; whatever admin privileges a team needs should be confined to their own network functions. That's what the VNF admin role is about. We've also started talking about a global viewer role, similar to the viewer role. In the previous case, the viewer role is scoped within a tenant; the global viewer is scoped to the whole OpenStack site, so that a project manager or program manager can have access to view that environment, read-only, without performing any other tasks.
Domain-level roles are something we're considering for future deployments: assigning roles at the domain level. We all know that applications are deployed not just in one OpenStack instance but across many, so rather than assigning roles for individual tenants, we can grant the privilege at the domain level, and users can access a collection of tenants across different regions as long as they're part of that domain. The admin support role is for users who provide day-to-day operations support for clients and their applications; only those privileges are granted to them. The security role is mainly for security teams to manage firewall rules and similar tasks; they're granted only those privileges. And quota administration and provisioning comes into the picture for, say, the capacity planning team, who need to provision certain quotas; only those individuals get those privileges. Overall, the intention is to grant privileges and permissions to users based on their needs, rather than giving admin rights or tenant member rights to everyone. Trim down the access so that, based on a user's role and function, you can say: this is the role they belong to, these are the tasks they can perform, and beyond that they cannot perform anything else. With that, I'm going to hand it over to Mike Denney, who will talk about how exactly this has been implemented in our space. Good morning again. As I said, I'm an architect in Mahendra's group. We're basically assigned to a lot of different projects.
Before I dive in (let me make sure I'm hitting the right button here), I want to recognize one individual in the audience who is really key to this space. Adam, would you stand up for a second? Adam Young is a Keystone core member. His blogs are what you need to look up; he writes about a ton of this stuff. I was actually going to have his link in the deck, but he's easy to find, so I don't have it here. As I said, I'm in Mahendra's group, and he assigns us to a lot of projects. I had the good fortune of being assigned to these two parallel projects. One was implementing that front end to OpenStack: the user access and registration system, the framework he was showing. After a user requests access and it gets approved, the system provisions into OpenStack; it front-ends OpenStack. It also implements an auditing process, to make sure an individual has only the privileges they're approved to have, and that if they've left the corporation, those privileges have been removed from the system. I'm not going to go into any more on that, because Mahendra covered it well; he showed you, on one of the slides, the touch points where you really need to interface between that system and OpenStack. What I'm going to dive into is the other team, which we call the RBAC team: role-based access control. Here, security has been waiting for five years. We've had OpenStack since 2011, and security has been asking: introduce some granularity into this admin role. We have our member role, and we've had 80 users on one OpenStack site with admin roles. So there's great concern that that's unsafe, that these folks are going to step on one another somehow.
So can we do something to limit the privileges that some subsets of users have on OpenStack? Sorry, I paused mainly because I'm missing part of my diagram; I don't know what happened to it. But basically, if you're going to introduce custom roles at your enterprise, and security says you have that need, you have to understand this framework. What this diagram represents is: after the OpenStack provisioning component provisions in Keystone the association of a role with a user and a tenant, the user can go in and ask for a token. They authenticate with the identity service and get a token scoped to the tenant. Later, when they want to make an API call, they present that token as part of the request. The service, which in this case (it might be hard to see) is the compute service, internally uses a Python library, Oslo code, which looks at the token, sees what role you have, looks at the API call you're making, pulls in the rules from the policy.json file, and does a comparison. If there's a match, it gives you authorization to execute that API call. That might be a long-winded description, but that's the framework you have to realize exists out there. Note that not all of the services are required to use Oslo; that's voluntary on the service's part. The other aspect is: how do our roles get created, and how do you create the policies associated with a role? Role creation is very simple; it's a one-line command. It's hard to see, so I'll read it: keystone role-create --name=new_role. The black area is the result of running that command. That's all it takes to create a role.
Does that tell you what the role can do? No. If you want specific privileges, specific rules associated with it, you have to go to each service and figure out how, and with what, it sets up those privileges. This example shows policy.json, which is actually a simple framework. I'm not going to go into too much detail, but you basically have two sections. In one, you set up macros or definitions; those are then used in the actual declarations of the rules associated with the API calls. Each rule is a two-parter: a target, and the rule that applies to that target, which says who can execute it. The target is just a mapping to an API call. Now, how do you figure out what that mapping is? Well, it's up to the service to document that for you, and it's not consistent. The only project that really does a great job is Keystone: the Keystone documentation does a great job of mapping the target to the actual API call being made. (I'm getting a frown for that.) For the other services, you have to dive into the documentation or actually read the code to figure out the explicit mapping. That's one of the problems you're faced with. The other thing, just to repeat: it's service by service. If you want to create a new role that spans multiple OpenStack services, you're going to have to look at each service, see what it has, and work out the rules you need to set up in each individual policy.json file. Yes, files, plural: you need to look at more than a single policy.json file. So before you dive in and build your custom role, heed the warnings.
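To make the two-section structure concrete, here is a small fragment in the style of a Nova-era policy.json. It is abridged and illustrative; the exact targets and defaults vary by service and release, so check your own deployment's files:

```json
{
    "context_is_admin": "role:admin",
    "admin_or_owner": "is_admin:True or project_id:%(project_id)s",

    "compute:start": "rule:admin_or_owner",
    "compute:stop": "rule:admin_or_owner",
    "compute:get_all_tenants": "is_admin:True"
}
```

The first two entries are the macro/definition section; the remaining entries are the declarations, where each target (such as "compute:stop") maps to an API action and its rule says who may execute it.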
What you're looking at here is a snapshot of the Configuration Reference manual. It's the statement at the bottom, which has been there release after release; I'll read it: "Modifying the policy can have unexpected side effects and is not encouraged." In other words, you have to be very careful. In a corporation like AT&T, we've had OpenStack for five years and haven't introduced any granularity into our roles. Security is getting nervous because we're growing: we have hundreds of sites, thousands of users, and I don't know how many admins, but quite a number, and they want to see some delineation of capability between those teams of administrators. But assume you now want to move forward, and you really do have a desire to establish some roles. (Did that go two slides or one? OK, I deleted a slide.) You have to look at what it means to define a new role. You go in and use that single-line command to create it, then you look at policy.json, and you want to be very careful about how you update it. You might notice that in some of these default policy.json files you don't actually see "member" defined. What you see is something like this: a rule for whether a role is associated with the tenant. The second line up there defines admin_or_owner: the rule says the context is admin, or the requester is associated with this tenant. A member is associated with the tenant, so both admin and member satisfy that rule. Wherever you see admin_or_owner in the policy declarations down here, those users have the privilege to run that API call. That's the framework you've got to work with. You may make a decision: let's keep it intact as much as possible.
Don't rock the boat, mainly because of that warning to be careful about updating the file. Or you might take the opposite approach: remove that blanket authorization and be more explicit, doing your definitions role by role. But let's say you left it intact. Of those two roles Mahendra mentioned, the snapshot member actually came from our security team. And he didn't mention the real reason behind it. (You don't want me to mention the real reason? Yes.) What Adam said is: negative rules are bad. This is one approach, and believe it or not, this sample is actually in the Oslo documentation. That's the punchline I was going to get to: it's better to be explicit. I don't have the explicit example here; I just want to show you that you've got to be careful when you go in and edit it, which is what this slide is all about. What this is doing is attempting to define the two roles Mahendra mentioned, which were our initial rollout. The advantage of the initial rollout is that it gave our teams good experience working with role definitions, and these are somewhat lightweight. For viewer, we don't want them to be able to change anything; we want them to be able to see whatever's there. These are project managers, capacity planners, management, leadership who wanted access to the accounts to see the activity going on. So you want to take away from the viewer the right to make the API calls that can change things. One way to do that, which Adam says is a bad way, and that's fine, is a "not viewer" rule. It's a lightweight touch on the policy.json file to ensure that a viewer couldn't call the change APIs. But there's an interesting side effect: say somebody has the admin role, and then, for whatever reason, you also gave them viewer rights in a tenant.
Well, that "not viewer" rule then takes away their admin rights directly. That's one thing you've got to be careful with. One thing that goes along with creating new roles is testing, lots and lots of testing. That's where we discovered this; it was one of our initial iterations. I think our final iteration actually went to explicit role definitions. The initial iteration found some side effects, which is good to know. Another thing to notice in this setup: there's no inheritance. You don't have role A implies role B implies roles C and D, and therefore A implies C and D. You don't have that structure; you have to explicitly define each role. SnapshotMember was the other one. What that was about: we had a lot of users uploading images that had not gone through our security process for images, and security really didn't want to propagate all of those incorrect or invalid images. They wanted to limit the number of individuals who could perform that function, so they created the SnapshotMember role. You might think that's a lesser role than member, but the way it's defined here, it's the old member privileges with the snapshot capability taken away from members. So member is the lesser role; SnapshotMember is more of a super-member role. All right, that's all I'm going to go into in terms of actual implementation. After initial review I trimmed out about 15 more slides; the guidance was that we really only had about 25 minutes for the session and 15 minutes of Q&A, so we eliminated a good number of slides from the presentation. But what I do want to touch on, perhaps as part of your decision process looking forward: do you actually want to create custom roles? Just realize what you're signing up for.
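The side effect described above, where a blanket "not viewer" rule also locks out an admin who happens to hold the viewer role in a tenant, can be demonstrated with a toy rule evaluator. This is a deliberate simplification of the policy semantics for illustration, not the real oslo.policy engine:

```python
# Toy illustration of why negative role checks are risky.
# This mimics the *shape* of a policy rule, not the real oslo.policy library.

def has_role(creds, role):
    """Check whether the caller's credentials carry a given role."""
    return role in creds["roles"]

def not_viewer(creds):
    # The "lightweight" negative rule: allow anyone who is NOT a viewer.
    return not has_role(creds, "viewer")

def can_call_change_api(creds):
    # Suppose the change APIs are guarded by "not viewer" instead of an
    # explicit list of allowed roles.
    return not_viewer(creds)

pure_admin = {"roles": ["admin"]}
viewer_only = {"roles": ["viewer"]}
admin_plus_viewer = {"roles": ["admin", "viewer"]}  # the problem case
```

A pure admin passes and a pure viewer is blocked, as intended, but the admin who was also granted viewer in this tenant is blocked too, because the negative rule sees the viewer role and stops there. An explicit allow-list of roles avoids this trap.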
Every custom role you have has to be refactored into every OpenStack release, so there's additional design work to consider each time you move forward with a release. All the policies you set up around your custom roles need to be reconciled with what the projects set up in the next release. Additional testing is required in each release: test your roles, make sure your roles don't impact the default roles, and so on. Test internal to a tenant; test cross-tenant. You're going to have to maintain a local version of the OpenStack documentation populated with information about your roles. If you've delineated functionality within a role, you may have to migrate users to the new role, and then communicate what's going on; you have to make sure your users are on board and understand what the new role involves. You'll probably have to update the instructions on how they go about requesting that role. You have to make sure it works well with Horizon; you may do a little configuration work there. And then, obviously, there's a handshake with the user access management team. That's really true: every time you create a new role, you have to make sure it's integrated with that user access administration system, so users can go in, request the new role, get it approved, and get provisioned. Bottom line: everything on that list is going to slow down your rollout of OpenStack releases a little, and you have to weigh that cost against the benefit of the new role. Security is going to chime in on that, and they'll be on the side of introducing the new role so you can implement the security principle of least privilege.
The framework I just presented isn't really as extensible as you would like. You have to deal with each service and how it has implemented role policy enforcement. You don't have any automated testing of the existing roles; it's doable, but you have to be very careful going forward. So with that, I think we can stop here and take questions. Mahendra, come on up.

Thanks for the presentation. You alluded to needing to read the source code to find the names of the rules in cases where the documentation is poor; that's normal in OpenStack, right? So what am I looking for when doing that source diving? Is it the name of the RESTful API call, or some other high-level abstraction?

Adam, why don't you come join us here? You may have to use that mic.

They gave me my own, not that I really needed it. There's actually a rule on the Keystone team: never give Adam a mic. I fully support this. I've got a couple of notes. So first of all, the question: how do you find the name of the undocumented rule, or of the API itself? What's the technical word? It sucks. It's not something we have a solution to. When we implemented the policy API, which was not in the first rev of OpenStack but came in a couple of iterations later, it was done at the Python level as opposed to the web API level. One of the things I'm trying to work through with the Keystone team this summit is: how do we do discovery? It really needs to be at the web API level, not the Python level. But we don't have a good way to do that, and we don't really want Keystone owning all of it; there are a lot of issues with distributing policy files from Keystone.
I had a presentation on dynamic policy a couple of years ago, which I've backed off of, because a lot of the feedback was: these files are really sensitive, and we don't want somebody to be able to go into Keystone, change them, and thereby open up the access control policy for the entire deployment. The operators' feedback was to treat them as a content-management-system problem. That's far afield from your question; I'm sorry. The best I've come up with, and this is the straw man I'm going to throw out there, and I'd love feedback, especially from people who are REST-savvy, is this: if you call an API and it fails, for whatever reason, one of the pieces of information we give back is a role that you would need in order to execute that API. It may not be the only one; in the admin-or-snapshot case, you'd probably return the snapshot role, the lowest role that would actually let you through. You might also have private roles that give administrative access, and you should be able to deduce those. That's really the best answer, because it's the only thing that works in a discovery sort of way, and that's what I'd like to work towards. Since you called me out, I'm going to call out Dolph, who is in the room and is a former PTL of Keystone, and who is really good at taking my crazy ideas and making them into something reasonable. That's one of the things I expect to get good feedback from him on, so we can come up with a better answer, because we don't have one right now. Right now, policy files are managed by the individual projects.

Actually, it's a problem we're wrestling with internally at AT&T: the management of hundreds of sites, and the APIs associated with those sites. We really don't have a discovery mechanism; it's raw manual right now, manually configured.
We do have an initiative underway to figure out how to do discovery. That's just one aspect of the initiative, but it's definitely a current problem.

If you look at the various services in OpenStack, just the ones under the Big Tent, not even the things people build to work with it, they use different technologies to map from a URI template to the actual code that's executed. Look, for instance, at the API for adding a role to a user on a project: there are three things that need to be filled in, the user ID, the role ID, and the project ID. And yes, it's "project", not "tenant", damn it. Sorry, we changed the name on you in the middle, and everybody wants to say tenant, but we've settled on project. For those three things there's a template, and the rule that would be enforced for a role-assignment call has to match on it. The way that's done in Keystone is different from Nova's, different from Glance's, and so on. So it's hard to get to a standard there.

Go for it. Maybe one question regarding the automation part. As you said, editing the policy.json is currently manual, and testing is also currently manual, I guess. But what just came through my mind is: we have that policy.json file, and it is a JSON file. Would there be some way to change it automatically? If I add a new role and I have an intention of what it should do, could I write a tool that edits this policy.json automatically? For the next release I'd probably still need to check that everything works, but at least the changes I'm making to policy.json could be applied automatically, even if some new calls were introduced.

Let me take that, because I've started on that approach. First of all, it's not just straight Oslo; Oslo is a blanket term for all the common code.
It's oslo-policy or oslo.policy, depending on whether you're looking at the repo or the code itself. In oslo.policy, there's a command-line tool, and you can run it against the policy.json file with the auth data from a given token, and it will tell you yes or no on whether that call would be accepted. That's the testing part. It will also let you show the difference: you could test running against the old policy and the new. A call should fail on the old and pass on the new, or vice versa; those are really the business rules you'd want to test. So we're starting there, so we at least have that tool for checking what's in there. And you can have a bunch of different tokens cached, too: this is what an admin user's token looks like, this is what a service user's looks like, and you make sure the appropriate roles are in those tokens. Then when you check, you should see yes or no. Obviously that's not a perfect answer, but it's a start. So there is the command-line tool in there.

There are a couple of other things people should be aware of. I mentioned a bit about policy config management outside of Keystone, and Ron should probably talk about that; actually, he and I are battling about it. But before we get to that, there's a bug I have been battling forever, bug 968696. It's not dead yet, but it's on its last legs. The bug is that if you're admin anywhere, you're admin everywhere. Here's how we're going to solve it. Most of these changes are in Keystone and the layers in between; we still need to get them into the services. Basically, there is a magic project. Unfortunately, it's only one right now; I was told there should be multiple, and damn it, they were right. If that project is in the config file in Keystone, then tokens for that project will have a flag on them. So you can have admin on the admin project, the term is "admin project", and have policy rules that check for that flag.
And you would use this for the global things you want to be able to do. You could give viewer as a role on the admin project and have that in your policy. Actually, our framework's already set up for that. That's how we assign admin: it can't go into any individual tenant account; it basically goes into that admin account. The problem is that the flag isn't passed through in policy enforcement yet. That change has been submitted. Well, we're ready for it whenever it happens. Yeah, that's the short-term stuff. So being able to do global versus local, that's underway. Excellent. And I just want to add: you started to allude to what I call the difference between the scope check and the role check. Yes. And you need to have both. In fact, don't mess with the scope checks; leave those tenant checks in place. Ideally the role check would be in a separate policy file. There are all the problems with that and so on, but when you're thinking about these and structuring your policy files, keep the role check separate from the scope check. Do you want to talk about your external policy management? Ron? I work on security management in OPNFV, and I have two detailed questions about the presentation. Can you go back to slide eight? Which one? Eight, the previous one. Oh, I don't see the slide numbers there. Yeah, this one. In this slide you talk about some special roles at the tenant level; you talk about the tenant approvers or something like this. I'm trying to understand: are these specific roles just for one or two tenants, or are these roles that, once we define them, work for all the tenants? It actually varies tenant by tenant. We've set it up so that when you get a project or tenant account on our platform, we basically assume the resources are owned by that project team, that business team. So they need to tell us who's allowed to approve access to that tenant.
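A hypothetical policy.json fragment can show the separation being described: the admin-project flag gates the global rule, while the `project_id` match is the scope check the speaker says to leave in place. The rule names and the sample API target here are illustrative, not taken from the talk:

```json
{
    "cloud_admin": "role:admin and is_admin_project:True",
    "admin_or_owner": "rule:cloud_admin or project_id:%(project_id)s",
    "compute:delete": "rule:admin_or_owner"
}
```

In this sketch, `role:admin` is the role check, `is_admin_project:True` tests the flag that tokens scoped to the magic admin project carry, and `project_id:%(project_id)s` is the scope check that ties an ordinary caller to their own tenant.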
So we actually configure it so that only those individuals can approve the assignment of a role within that tenant. It changes a little bit when you talk about the admin role: since it's implemented as a global role, we only have one team that's allowed to do the approval there. They're basically asking who's requesting it, why they're requesting it, and what they need to do. Imagine we define, we create, this role in the policy.json file; then all the tenants that we create will have this role. Am I right? It would be available to all of them. Okay, so nothing pins the role to a specific tenant? No. Two different dimensions. Okay, my second question is about the resources you talked about within one tenant. Do you think about finer-grained controls within each tenant? For example, in one tenant we have several virtual machines, and some roles can work on some of them and other roles can work on others. Yeah, actually we don't have that granularity in our system right now. It really is roles at the tenant level; the tenant is the bucket, and there are lots of virtual machines within it. If you're talking about application-level privileges, we aren't doing it in that space. We're really talking about OpenStack provisioning privileges: who's allowed to use the OpenStack API. Yeah, okay, yes. Okay, I have another proposal to discuss, maybe in these meetings: for some users we'd like to have an extension of OpenStack, in the sense that we have a real policy enforcement system as an extension. This means we define the policy for each tenant, and we can activate these extensions based on Keystone. This is what we have done in OPNFV, and I'd just like to understand from your side whether you think this is a good approach, to have extensions we can activate for finer granularity per tenant, or not. That's interesting. Yeah, thank you.
Yep, thank you. I think we're out of time; we're over time. He wants to add something. Adam, do you want to say something? I'd say essentially he's describing an external policy decision point. True, yeah, okay. Well, thanks everybody, thanks for attending. Thank you.