All right, we're going to get this started. This is a panel of Keystone experts who are going to talk about their experiences with Keystone, and, well, except for you, Jesse, their experiences with public and private cloud. So the first thing I'm going to do is quickly give them just a very brief amount of time to introduce themselves, their role, and how they deploy Keystone. The first person I'm going to look to is Matt Fisher from Time Warner. Matt, why don't you talk to us about your role and how you deal with Keystone?

I'm an engineer at Time Warner Cable. I started working on Keystone because we needed someone to work on Keystone. It's not a lifelong passion for identity or anything, like some of the guys. But basically, we run a private cloud for internal use at Time Warner Cable for software projects. We currently deploy Keystone in Docker containers and run it on top of uWSGI, but I've gone through several iterations to get to that point.

OK. Please, Morgan. Monty, if you could take the mic and tell us your role with Keystone.

Hello. So hi. My name is Monty Taylor. I deploy Keystone between 10,000 and 20,000 times a day because I'm involved in running the OpenStack infrastructure gating system. I also work for IBM, where they have strangely titled me Distinguished Engineer. I have no idea what that's all about. And I'm there involved with the public cloud deployment effort. I'm also a consumer of Keystone in that I work on client library things to talk to OpenStack, including the Shade library and the Ansible modules that use it. And also, weirdly, I'm a core on keystoneauth, which clearly is a mistake.

Our next panelist is Jesse Keating from Bluebox, an IBM company.

That's right. So yeah, I'm Jesse. We're still Bluebox for the next couple of weeks, I think. Congrats. "Bluebox, comma, an IBM company" seems to just be IBM.
I am the OpenStack lead for our private cloud as a service product that Bluebox brought to the table. In that product, we utilize Keystone. I don't install it as many times as Monty does, but we do install it quite a few times. We run many, many, many, many small clouds that have some very special needs that Keystone helps and hinders us with. So yeah.

Excellent. Next, we have Dolph Mathews from Rackspace.

Hi. I guess I've been working on Keystone for about five years, and I spent most of that time working in the private cloud group at Rackspace. So I get poked and prodded and stabbed with questions and problems from our support team all the time. And I've spent just a little bit of time in the public cloud side of the house, which is now all under one big umbrella.

OK. Next, we have Steve Martinelli. He's the Keystone PTL.

Yeah, as Brad said, I've been working on upstream development in Keystone for about three years now, maybe a little bit more. And I was the Keystone project team lead last release and for this release. So I'm here more to defend Keystone from all the shanking that's going to go on between these guys. Yeah.

OK. Excellent. So I do have a large number of questions, and I could easily keep us afloat here for an extended session. But before I do that, it had been advised to us that we really wanted to open it up first to questions from the audience, because we want to hear those questions first. I do have a list that we can go to if we need it. So we do have microphones. And as motivation, we've got three free books. The first three people who ask a question each get a book. And for the rest of you, you can get them at the IBM booth. And they're signed. We will sign them after the session for you, sure. Some of these guys who are authors can sign them as well. So please do not be shy. In all seriousness, we have a lot of Keystone experts here, and this is the time.
Oh, right. Go ahead. OK, you want to take this one? I might be on now. I think it's all right now. Try it. Yeah.

So my question is less specific to Keystone and more about management of Keystone, I guess. So Keystone lets you define users somehow, right? And I don't really care how, whether it's a directory or you have them in the local database. And you define roles. But how are folks managing who gets into those roles? Are you using an external identity management system? Is there some poor admin running on a wheel just adding users as requests come in? Is there a need for an audit trail? Stuff like that.

So I'll take this one. I think this answer is going to vary across the panel, because we are all doing vastly different things with our clouds. In our case at Bluebox, what we provide to our customer is the entire private cloud. Our customer gets the whole thing, and they get a certain level of administrative rights on it. They don't get full admin, because we don't want them to do silly things like delete a compute service and page us in the middle of the night, or create networks that loop each other and take down parts of the data center. So they get some level of rights, and that does include the right to create tenant projects and users and assign them roles within Keystone. We don't allow them to create the admin role or assign the admin role to their users. But beyond that, we sort of wash our hands of it and say, it's your cloud, do what you want. Some of our customers do want it backed by an LDAP and have us assign that to a domain for them, in which case that's all managed by them. The vast majority of our customers, though, are doing whatever it is they need to do to stuff the users they want into the local Keystone database.

So you're saying that it's more ad hoc.
You're not running into customers trying to use, like, Oracle Identity Manager or something to that effect to try and manage who has access. It's more admins pushing users into roles rather than some kind of automated process.

Right. We guide them. A lot of ours are first-time cloud users, and so we guide them on what to do. And our product hasn't necessarily had the features that would allow for some of the deeper integrations. But that has been a request from many of our customers as we grow into the enterprise space: to be able to tie into their existing identity. And the direction we're going with that is, you slap an OpenID exposure on your identity service, we'll tie into that through federation, and off you go. But the point being that they manage that side, and we just manage the trust and beyond that. That's it.

Anyone else want to answer?

Yeah. The model that I like, so one of my roles is that I go and I beat up on all the public clouds that are out there. And there's a bunch of different models that people provide me as a user and consumer. And the model that I like the most for this is similar to what Jesse was just talking about, which is that as a customer of a cloud, what I want to get is a domain admin account. And then I want to be able to create as many users and projects, and map those users to roles, as possible. Then, personally, the way that I do that is to manage those users and roles and projects using Ansible. That might have something to do with the fact that Shrews here and I are the maintainers of all of the Ansible modules to manage OpenStack clouds, so I'm probably going to use those. But being able to have a playbook for my cloud that says, I want these users and these projects, and I want these users to have these roles in these projects, is really useful.
In my particular case, if I have that domain admin ability, rather than it being about me managing all of the users in, say, my organization at work, each of these is more likely project specific. Here's the project I'm creating in Keystone that I'm going to stick my control plane services in. Here's the project that I'm going to stick this dynamic pool of nodes in. Here's the project that I'm going to stick my mirrors in. Things where I want to make sure that there's some service-level privilege separation. But it's actually me, the human, operating all of those users and roles. So it's not exactly the same thing, but that's sort of my take on it.

Anyone else want that? I'll go ahead. Ours is similar to Jesse's, except I think you're doing yours for separate companies, right? Ours is separate project teams. So one project team might be running a website, and another project team might be doing some kind of support function. Originally when we started, and no one was really using it, we just told people, file a ticket with us and we'll make you an account. That didn't scale very well, because the first thing we found out is they got an account and then they would say, well, now what am I supposed to do? So then we wrote a bunch of tooling to make them an account, a project, a router, a network. You know, just basic bootstrap. And then even that didn't scale well, because as soon as someone found out they liked it, the whole rest of their team wanted accounts too, and we were doing a lot of Jira tickets. So we introduced a concept, I think it's called a project owner. The tech lead on your team might be granted project owner, which is a special role, and we have special code in Horizon so that if a new team member joins that team, they don't have to talk to us. They talk to someone on their own team and say, hey, add me in here, give me an account.
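The declarative, playbook-driven approach Monty describes boils down to stating the users, projects, and role assignments you want and letting tooling reconcile that against what the cloud currently has. Here is a minimal sketch of that reconciliation in Python; all names are hypothetical examples, and a real deployment would drive the Ansible OpenStack modules or the shade/openstacksdk library rather than this toy diff.

```python
# Desired state, as a playbook would declare it:
# (user, project, role) assignments we want to exist.
desired = {
    ("ci-user", "control-plane", "member"),
    ("ci-user", "node-pool", "member"),
    ("mirror-user", "mirrors", "member"),
}

# Current state, as listed from the cloud's identity API.
current = {
    ("ci-user", "control-plane", "member"),
    ("old-user", "mirrors", "member"),
}

# Reconcile: set arithmetic tells us exactly which grants
# to add and which stale grants to revoke.
to_grant = desired - current
to_revoke = current - desired
```

The appeal of this model is idempotency: running the same playbook twice grants nothing the second time, and removing a line from `desired` revokes the corresponding assignment.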
And then that person gets a limited ability to control what that person can do. For example, we control who can use our LBaaS service, or we want to know who can use it. So by default you can't, but your tech lead can say, yes, you can use that, or you can use Heat. They deal with it themselves, and they don't call us and they don't file tickets. It's way more scalable, but it's basically custom Horizon code that handles that. And we still have all the bootstrapping stuff in place for times when we need it, but we've tried to move that away from us. It doesn't make sense to have a ticketing system involved in getting started with OpenStack.

You're good? Okay. Cool, thank you. Come up and get your book. I actually already grabbed one from the IBM booth. We'll just recycle it. Next question. I'll just check over here. Go ahead.

Hi, my name is Krishna. I work for Cisco, and thank you so much for your contributions. It's been useful every day of my life with Keystone. I have questions specifically on federation. I have been trying out federation with ephemeral-user-based authentication, and I have been successful with that. I have three or four questions on it. The first one is, when is the CLI-based support for ephemeral users going to come?

Thanks. So the CLI support for federation is tricky, because the protocols are primarily browser based. It does exist in some fashion. The problem lies specifically in that OpenStack client needs to migrate off of the keystoneclient code and start using the keystoneauth library. Once that happens, which is targeted in just a few weeks, we have prototypes of it, there are plugins for both OpenID Connect and SAML and ADFS, and you just have to configure your identity provider to expose certain features that'll allow the redirect to happen.

Sometime in Newton? What was that? Newton, you want to know if it's the Newton time frame? Oh yeah, yeah, for sure. Okay, cool.
The second question is, why groups in the mapping instead of direct projects? I mean, if you needed a group to map the identity provider, you could have a group, and then I can directly map the projects.

Thank you, because I was so happy that someone would ask a question that Dolph would have to answer, so thank you. That's a book, that's a book.

Yeah, that was basically a technical design limitation that we dealt with. Shadow users, which we implemented over the past release and will continue to work on in Newton, will totally alleviate that. So, kind of to level-set everyone: if you set up an identity provider today and are federating users into Keystone, the only way you can assign them authorization is to map them into groups, and you do all your authorization management on the groups, not on the users themselves. The way we're changing that, in the past release and the next release, is that we're going to be shadowing all of your federated users into the local identity database, so we can treat all users exactly the same. They're all effectively local users; it's just a matter of where you authenticated to get into the cloud in the first place. So you'll be able to assign roles directly to federated users, put users into arbitrary groups after they've already started using the cloud, and so on.

Would the shadow identities help with auditing in any way? That's actually something I want to talk about at the summit. Excellent, we'll all wait with bated breath then.

Okay, regarding the groups, I might have bumped into a bug, basically. If you have multiple groups, it works through keystone-manage when actually checking the mapping, but when I do the actual federation, the list of groups comes back as a unicode string, so it becomes a single item of the list. So when it tries to iterate through the groups, it gets both groups, or multiple groups, together. Maybe I can show you that after this thing.
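The group-based mapping Dolph describes works roughly like this: a mapping document turns attributes from the identity provider's assertion into a local user plus local groups, and all authorization hangs off those groups. The rule shape below mirrors Keystone's mapping JSON, but the evaluator is a deliberately simplified sketch, not Keystone's actual mapping engine.

```python
# A mapping document in the shape Keystone uses: "remote" names
# assertion attributes, "local" says what user and groups they become.
mapping = {
    "rules": [
        {
            "remote": [{"type": "REMOTE_USER"}],
            "local": [
                {"user": {"name": "{0}"}},
                {"group": {"name": "federated_users",
                           "domain": {"name": "Default"}}},
            ],
        }
    ]
}

def apply_mapping(mapping, assertion):
    """Toy evaluator: return the (user, groups) a login maps to."""
    user, groups = None, []
    for rule in mapping["rules"]:
        # Gather the remote attribute values the rule references, in order.
        values = [assertion[r["type"]]
                  for r in rule["remote"] if r["type"] in assertion]
        if len(values) != len(rule["remote"]):
            continue  # this rule doesn't match the assertion
        for item in rule["local"]:
            if "user" in item:
                user = item["user"]["name"].format(*values)
            if "group" in item:
                groups.append(item["group"]["name"])
    return user, groups

user, groups = apply_mapping(mapping, {"REMOTE_USER": "krishna"})
```

Since roles are assigned to `federated_users` rather than to the ephemeral user, everyone who matches the rule inherits the same authorization, which is exactly the limitation shadow users are meant to lift.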
Yeah, file a bug. I did actually file a bug on that. It still hasn't been sorted out yet. My last question is, why the SSO redirect HTML at the last stage to give the unscoped token back? Isn't there a better way to do that? Can you repeat the question one more time? During the federation flow, at the last stage, you have an HTML file which actually posts the unscoped token back to Horizon. Why is that? That was really the only way we could get it to work between Keystone and Horizon, and that's what the CERN guys were using as a workaround in their deployment. Is that secure? That's probably my question. Is it secure? Yeah. They're all bearer tokens. It is just the ID. It is a bearer token. The fact that it's HTML doesn't make it less secure. Yeah, no. All right, that's all I had.

Okay, now, have you already gotten a book? Yeah, I did. Okay. See, I knew I would need three. Oh, well. Okay, it'd be our pleasure afterwards. Okay. Do we have another?

Yeah, sure, so it's on federation again. Sure. I'd maybe like to get the view of the operators on how they see deploying a federated Keystone at scale, perhaps, if they have any experience of that or plans to do that.

So we're not at scale. What is at scale is my bar tab from trying to implement federation. It's a bit rough to get all the pieces together. In particular, our method of operation is to automate everything. If we can't put it in an Ansible playbook, we don't put it in the product. And trying to automate the dance that has to happen with federation is difficult. We started, and managed to be part of the demo at Vancouver, with a Keystone-to-Keystone federation where we just acted as a consumer of somebody else's identity. And that was pretty handcrafted to make the demo work. And we scrapped adding it to the product, partly because a lot of the user-land tooling wasn't ready yet. It was raw HTTP calls that we were throwing at curl to get things set up.
The landscape is better now, and that's why we're looking at it again, but it is still quite a lot of moving parts to try and automate a thing that we can just stamp out over and over again, provided the right input from the particular client. So I don't have much of a better answer for you than that, except that in six months we should have a much more detailed answer.

The IBM public OpenStack cloud that we're in the process of rolling out and getting up is going to be both at scale and using federated identity. There's a team working on that literally right now. So I don't have a great, succinct answer for how that's working out, but in six months, if you join us in Barcelona, or in 12 months if you don't want to go to Barcelona in six months, which is a mistake, they have lovely food in Barcelona, but in 12 months in Boston, there should be some really good feedback from that at scale and some lessons learned.

Okay, hold on. I can totally add to that. One of the problems that you're going to run into trying to operate, say, a lot of identity providers at scale, and I'm assuming that's what you mean by scale, is actually managing those. Currently, we assume that you are going to use Shibboleth, because that's kind of our recommended, fastest path to federation. If you're doing that in a public cloud or something, that means you're bouncing all your Keystone nodes every time you need to manage IdPs and certs and things, or change Shibboleth configuration, and that's obviously a no-go. So one of the things we're prototyping right now, from a spec that was originally put out around the Tokyo summit, is to kind of reimplement that Shibboleth functionality in pure Python, so that we can do everything through the API. So that dance that you're doing right now, hopefully we can ease that and make it all pure HTTP. Yeah, no more bouncing. Nice. That's great. Thank you. Oh. We have a taker. Yay.
You have made my day. That, and if only I had a chair. Who is next? Anyone else? There's somebody at the mic. Yep, me. Go ahead.

So, as public cloud operators, do you guys see a need for having, I guess, headless users, like API users, or kind of what Amazon has with their instance users? Because it just doesn't really make sense for some automation scripts to have users that are associated with a human being doing the automation. Maybe you'd want to restrict the roles that those users have, right, to just fetch stuff from Swift or something. Those kinds of restricted, headless users, purely for automation.

Yeah, so I 100% agree with you. I think that all of the public clouds that are out there, regardless of their intent, should absolutely give you, as the customer, the ability to have one or more such users, right? And then you can manage what you want to do with those. In fact, for the applications that I run in the public clouds that I'm consuming right now, in a couple of them I have the ability to create the headless users that I need. And in some of them, I have to go and create multiple separate accounts, each with their own credit card and everything like that, which is ridiculous, right? That's absurd and shouldn't be the case. But many of those public clouds have been around since before there was really good support for this in Keystone, so I'm not really picking on them. It's just the state of a growing and fast-moving project. But that is definitely the shape of it. There's also a session tomorrow, I think it is, the instance user session. After this. It's after this, great.

Yeah, so I've in fact experienced that exact problem in OpenStack's build infrastructure. At the end of jobs, currently, well, some of our build logs go to just a file server, but we've been migrating to uploading them to Swift.
But that means that those build jobs running on ephemeral nodes need to be able to authenticate to Swift to upload the logs. Obviously, I do not want to give my credentials to ephemeral things that are running code that somebody submitted over the internet into a code review system. The credentials to delete all of my servers, that would be really terrible, but that is one of the only choices I've got, other than the Swift TempURL middleware stuff, and that gets really strange. So 100%, yes, that is what everybody should be doing, and we just have to go beat them all over the head until they do.

Oh, I'll go first. Private cloud has the exact same use case, because our users primarily authenticate with their LDAP username and password, which also controls things like where your paycheck goes. So we have some users that say, hey, I want to use Jenkins to do something really cool, but I don't like putting my employee ID and password in there. So we'll create what we call service accounts, and it actually works great for the users. We don't do any role restriction, really, but the downside is that when the security guys say, okay, we have these audit logs, but who is this user? What person is behind this? It's kind of a complicated question. So we've tried to tie it back to the project owner, which I mentioned before. And then we also have the same problems with enforcing things like password rotation policies, which LDAP does for us but MySQL doesn't, and you want your MySQL service user password to be one, two, three, four, five. There's nothing currently to stop you from doing that. So it's caused some security challenges that we haven't quite figured out, but it's definitely something we use all the time.

Well, it seems like instance users are the solution to that part anyway, right?
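The "Swift TempURL middleware stuff" Monty mentions avoids handing credentials to the ephemeral node at all: you pre-sign a URL that is valid only for one method, one object path, and a limited time. The sketch below follows the documented TempURL signature scheme (HMAC-SHA1 over method, expiry, and path) using only the standard library; the key and object path are made-up example values.

```python
import hmac
from hashlib import sha1

def temp_url(method, path, key, expires):
    """Build a Swift-style temporary URL query for one object."""
    # TempURL signs exactly these three fields, newline-separated.
    body = "%s\n%d\n%s" % (method, expires, path)
    sig = hmac.new(key.encode(), body.encode(), sha1).hexdigest()
    return "%s?temp_url_sig=%s&temp_url_expires=%d" % (path, sig, expires)

# An ephemeral build node gets only this URL, never the account key,
# so the worst it can do is PUT this one log object before expiry.
url = temp_url("PUT", "/v1/AUTH_ci/logs/build-1234.txt",
               "secret-key", 1700000000)
```

The trade-off is that someone trusted still has to mint the URL per object up front, which is why it "gets really strange" for a general CI fleet compared with proper headless users.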
Partially. It depends on what you mean by instance users and whether or not they're just short-lived for the moment and then blown away. The AWS way is that when you create your instance, you basically say, I want to create a user associated with this instance, which has these rights, and it lives on the instance and that's it, right? That is a thing, but you've got to be able to create that instance in the first place, so you've got a chicken-and-egg problem. If it's only instance users, then in my case, where I'm external to the cloud, running automation that is creating and deleting instances, I've got to have something that is the user doing that. And I don't want that to be me, the human. I want that to be the automation role, and I probably don't want to put my billing account credit card password stuff into the Jenkins. Don't put any passwords into Jenkins. And I also might leave the company, and that's running all the CI for your team, so what do you do? If it's tied to your LDAP ID, no one knows your password, and that ID is not going to work anymore anyway. So I don't think instance users solve the complete problem for me.

Not necessarily, but you can't have it tied to human users, clearly, so there's got to be something there. The flip side to that also is that the services themselves need to support some concept of, okay, you are a user that exists, but you don't have rights. In one of the previous design sessions, it was apparent that a lot of the services just check, do you exist in this project, and that automatically gives you some level of write access. So the concept of read only doesn't really exist, or of read only in some projects and write in a specific one. So in Monty's case, he's going to want read only for everything except for Swift.
And for Swift, you want to be able to do that thing, and that doesn't exist in the policies that the services have right now, but that's a thing we're trying to fix.

But that brings us back to, I was just at the previous session where Adam Young was saying that he once had a proposal for dynamic policy creation, right? That would be managed by Keystone or whatever, but the point is you could dynamically create policy that would be stored in the DB, and that was rejected by the operators, apparently. But what I'm hearing this summit is a lot of operators saying, well, we need a way to create dynamic policy, right? I mean, you guys are saying it, other people were saying it.

Yes and no. It's not so much dynamic policy that lives in the database. It's more discoverable policy, with reasonable defaults that you can override in small amounts and see what changes over time, et cetera. And the flexibility to make policy changes, outside of changing the code of the software, in how a consumer consumes the cloud. And to make it easier to maintain, and also discoverable, so that I don't have one copy of my Nova policy with Nova and another copy of my Nova policy with Horizon, and drift happening, causing very odd things.

Can you make it so? Yeah, I also want, well, I think that the people that wrote the original policy stuff should be shot in the face; it was just a terrible idea in the first place. But it wasn't you, yeah. I didn't know who it was. No, it was. But in any case, the thing that I think we're missing that is related to that is, we have policy and we have roles, and those are the hammers that we're using to hit everything, and the thing that we don't have is ACLs, right?
We don't have a way, so if you go to GCE, right, and you create an instance, and create an instance user for that, like you do with other things in any of those web services, you say, this user can create this, not create this, do this other thing. And there are like 12 or 14 different things that you can set different ACLs for, and we just don't have that concept at all, right? We have roles that are defined in a policy file; you can get a list of roles; you have no idea what those roles can do; and you can assign those roles to users.

Hence the discoverability part. Exactly, yeah. So at least if we can get discoverability of what a predefined role does, that'd be neat. But if we could have ad hoc ACLs, rather than just roles, I'd be a really happy person. But then these guys might want to kill me for making their lives harder, so I don't know, you know. We just want policy around creating the ACLs. Yeah, but you want policy for creating the policy? Yeah. That'd be great, yeah. Can we have roles for who can create the policy to create the roles, so that your roles can create policy? Who gets admin access from that? You.

And another problem that I see, and maybe I'm wrong here, maybe it's something I'm not grasping correctly, but you know you have the per-domain backend in Keystone, right? So I create domain whatever, and let's say I have ADFS federation set up with that, right? So I'm authenticating my human users against my IdP, which I should be doing, right? It's the right thing to do. But then let's say I did want to create service users for the non-human tasks. Do I have to create them in AD? Which is a no-go for us, at least in my company, right? So is there a way to have a kind of a split? I know we could have different backends, but only for different domains. What if I want to have a different backend within the same domain? And maybe I'm hallucinating.
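The role-plus-policy-file model the panel keeps circling around boils down to a static file that maps each API action to a rule over the caller's roles, with nothing discoverable through the API. The sketch below is a minimal evaluator for oslo.policy-style `role:x` rules, heavily simplified (real rules also support `and`/`or`, nesting, defaults, and attribute checks); the action names are illustrative examples.

```python
# A policy file, flattened to a dict: action -> rule over roles.
# In a real deployment this lives in each service's policy.json.
policy = {
    "compute:create": "role:member",
    "compute:delete": "role:member",
    "os_compute_api:os-services:delete": "role:admin",
}

def enforce(action, roles):
    """Return True if a caller holding `roles` may perform `action`."""
    # Unknown actions fall back to requiring admin (default-closed).
    rule = policy.get(action, "role:admin")
    wanted = rule.split("role:", 1)[1]
    return wanted in roles

member_can_create = enforce("compute:create", ["member"])
member_can_kill_service = enforce("os_compute_api:os-services:delete",
                                  ["member"])
```

Note what's missing, which is exactly the panel's complaint: there is no per-resource ACL, no way for a caller to ask "what can the member role do?", and every service carries its own copy of this file.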
Maybe it's not a problem, but. Hopefully Adam's not here, because he hates this. We did ours before, I don't think domains were a thing back then, but we basically have a driver that talks to both LDAP and MySQL. Yeah, we forked it from SUSE's driver a long time ago. And I think eventually we should probably switch to using domains, but I haven't really seen a clear path, and what we have kind of works. So service accounts are in MySQL and employee accounts are in LDAP, in AD. I don't think so. Adam wants to, yes. The Keystone guys don't really like it, but it does solve our problem. Congratulations, you have brought me the thing that brings me despair and unhappiness in this session. There's always got to be at least one thing that makes me less happy about the world in every session. And that's worth a book, if you don't have one. I have the book, I have the book, all right.

Do you have any other questions? I have a question. Please. I have two questions. The first one is about role assignment. So right now, for a role assignment, you need to specify a target. Say a group, a role, and a target. The target has to be a project or a domain. And it makes sense for a project-level admin, but if I want something like a global admin who can manage domains and control, you know what I mean, you cannot have an assignment of that sort. And I know there is this way you can configure your policy files so that you can pin your admin to a particular domain or project and give them all the powers in the world. But that seems more like a workaround than the right way to do it. So has this concept already been discussed and thought about?

Yeah, there was a session just in the other building where there's an open spec for trying to define the idea of a global admin type role. And actually, you guys can probably talk about this more than I can, right? The is_admin role, yeah. It's a very long-running conversation.
Yeah, and there's an additional problem of a lot of hard-coded checks for is_admin all over the different OpenStack projects. Those are not actually controlled by the policy JSON file; the is_admin check comes from the context_is_admin rule that is set in the policy JSON file. So that means your RBAC does not completely come from the policy JSON file, and you would have to actually go and fix all those projects to get the actual multi-tenancy concept working.

Yes, and the second question is a consequence of the first. We all, I think, agree that we need a higher-level admin, and a really good definition of it, for global root access across your cloud, and then domain-level admins below that. We've tackled that a couple of different ways. We don't have a crazy elegant solution, so it's an ongoing conversation. There are other sessions about it, and it feels like we have at least one session about it at every summit.

Yeah, just to further that, it's a huge thorn in our side, and a huge part of that is just because we have to be backwards compatible. And I agree the is_admin check is probably the worst thing, in my opinion, and we do have to go in and fix the individual projects. I don't know if it's we who have to do that, but someone has to fix them. Yeah.

There's a cross-project effort around this, and what we sort of went forward with is identifying all the things that use a direct in-code check for admin and filing bugs against them: step one, stop doing that, use policy files instead. And step two is to implement the idea of defining a project within your cloud that is the admin project. If you exist in the admin project, you gain a global view of things, whether it's global admin or global observer or global member; that's your trigger for being global.
And then that's a feature flag: either yes, you support this, or no, you don't. Your policy either works as it is today if you don't support it, or it breaks if you do support it and don't change it, but there's a way to carry that forward between two releases so that you can do the right deprecation cycle. So the path forward is becoming clearer, at least for me from an operator point of view. Whether or not the code can be written to match the path is a deeper question.

Okay, and the second question, I think it's been discussed many times, but I would still want to raise it. This is about being able to customize the policy files. So right now, using the REST API, you can just go ahead and create a new role, but then what do you make of that role? Let's say on a particular OpenStack installation you're running five or six OpenStack projects. You would have to open each of the policy files and manually edit each of the rules to actually give any value to the role, and then restart all the services. It doesn't seem like a real way to do it. It kind of doesn't make sense at all, and that's one thing customers always ask: okay, how do we customize it? And there's no real way to customize it other than getting more granular control by creating a large number of roles, which ultimately gets really complex.

Yes. Everyone's kind of mumbling that you don't actually have to restart services when you change their policy files; they're read on every request, I believe. So you can just change them. But yes, you totally have to go customize all your policy. In the session we had earlier today, one of the driving goals was to resolve that operator complaint, which really came up at the Tokyo summit. Basically every operator is customizing policy to some extent and shares your pain, and so we want to get what the community perceives as the conventional roles that we're all using anyway.
Get those into all of the services' policy files so that you're not having to customize so much. And the second half of your pain, which you didn't really describe, is: okay, then you go to upgrade OpenStack, and you've got to resolve the conflict of migrating all of the changes you customized in the last release, and then applying those very security-sensitive changes on top of the new policy files from the new release. That's a really delicate process. So hopefully we can resolve both of those things over the next release. And one more question: there are still a lot of projects which do not use the policy engine at all. For example, Swift does not have support for the policy engine; all it does is let you specify the roles in the conf file. So it's just roles; you can only control it based on roles. That sounds kind of weird, that everything is not consistent and you don't really have RBAC control on a popular OpenStack project like Swift. Yeah, so it would be really great if you'd give the Swift team the feedback that it would be important for them to do the same things as everybody else in OpenStack, because they take the point of view that nobody cares and that it's not important for them to conform to the rest of things like that; and to their credit, they usually claim that none of their users request it. So on the one hand I'm being snarky and negative, and that's poor form on my side, but on the other hand I'm being quite honest. I believe that the Swift team cares very deeply about their users, the Swift users, not necessarily the OpenStack users, and so the more feedback Swift users can give the Swift team that this is a pain point for them is, I believe, actually the feedback that they need. They're waiting for people to tell them that it is important, because they're also trying to balance their long-time users.
They have users that were there before OpenStack was even a thing, users outside of OpenStack, and very large installations that predate some of these shared services. So there is an opportunity cost there for them; more communication to the Swift folks, I think, is the way to help them. Yeah, I think it would always be something that's optional. If Swift did support Keystone's policy, or the Oslo policy engine, it would always be optional, because I can't see them not supporting their long-time users. And it would probably be off by default, I imagine, because, again, backward compatibility and all that. Yeah, and if I could just add a little bit to that: I believe Mike Perez, who works for the Foundation, is looking at cross-project consistency type items, and I think we ought to try and feed that back to Mike; it might be another one for him to track. And just to add a theme on that point, speaking as both someone who works on an auth project and someone who, in other lines of work, consumes auth: if you have a business case that you're trying to solve and it doesn't have anything to do with auth, then you don't care about auth. You go solve the problem you're trying to solve, and auth is a solved problem: check the box, is_admin equals true, you don't care. That actually explains a lot of the historical precedent and laziness around how auth is consumed, not just across OpenStack but in software, period; that's just a reality of auth. Just one more point: it should be only Swift that doesn't use the policy engine. If there's another project, one that's not Swift, that is not using the policy checker or the engine, then let us know. Does Swift have some special privileges? Would you like your book? I got one from the last release.
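For contrast with the policy-engine projects, the role-based control Swift does offer lives in the keystoneauth middleware section of its proxy configuration rather than in a policy file. A minimal sketch (the role names shown are deployment-specific choices, not required values):

```ini
# /etc/swift/proxy-server.conf (excerpt)
[filter:keystoneauth]
use = egg:swift#keystoneauth
# Keystone roles granted full operator access to accounts in this cluster
operator_roles = admin, swiftoperator
# Role treated as cluster-wide reseller admin (use sparingly)
reseller_admin_role = ResellerAdmin
```

Anything finer-grained than these coarse roles is handled by Swift's per-container ACLs, not by an oslo.policy rule engine, which is the inconsistency the questioner is describing.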
So six years ago, OpenStack started by taking some folks at NASA who had written Nova and some folks at Rackspace who had written Swift and smushing them together into a thing that we then called OpenStack. So most of the rest of OpenStack basically has Nova as its parent lineage, and the Swift team had a running production service that did not do things the way the Nova stuff did. Again, it's really easy to get into sort of a snarky place, but I really believe they have a very good point. They had a production service running while we were still doing the very early iterations on all of the rest of the services. So as we were building out the thing that we all think of as OpenStack right now, they were running a very large cluster in production. And as we were getting this stuff going, they were like, no, we're in production. And that's admirable. That's them actually really, really caring about their users and doing a good job of it. It is an unfortunate artifact of the way that worked that we continually have to say things like, oh yeah, and also Swift, right? And that frustrates them too; they don't like it when we have to say that, but there are some good historical reasons. It's always frustrating when the answer to "why is something different?" is "well, six years ago we were all in a room, and twenty of us, blah, blah, blah." And you're like, oh my gosh, that doesn't really help my problem, right? But there are some real, non-trivial differences where the cost of solving them hasn't really been outweighed by the pain of the divergence. And so it's one of those things that persists to this day. And it's not gonna get any easier, because there are now a bazillion really big Swift installations out there. It's a really good product, right? And so we talk, and we have very honest, frank, and deep conversations about these things.
Like, okay, well, that's interesting, this thing that you've got here, but how do we roll that out to all of these massive Swift clusters? We need a good answer for that, right? So until we do have a good answer for how they deal with that, that historically-based divergence is gonna continue, until we have a good story that doesn't involve pain for their operators or users. So on the one hand, I can be snarky; let's go have a beer and I'll rant. But it all very much stems from a very strong caring about their operators and their users, and I think that is very important. Excellent. And I think we are out of time. We're a little bit over. Oh, audience impression, that's awesome. Wasn't that awesome? That was awesome. So thank you all for coming and for the great participation; we really enjoyed the questions. All right. And if you needed a book, we've got two.